Project organisation and transferring data between apps

I’m quite new to Elixir, and am at a stage where I hope to handle more programming as opposed to merely using (PHP) CMSes and doing HTML and CSS. I have the start of a Phoenix project underway and was hoping to get some advice on structure, etc.

One aspect of the project is an API for the client’s Windows software to use to store some simple types of data and to submit support requests. It’s my client’s customers that will be using the Windows program, and I’m thinking that this particular service should be its own app and hosted on its own server, in order to minimise the chances of disruption to it due to work/maintenance/updates on the other, more complicated, parts/apps of the project or their server(s). I’ll refer to this app as the Workstation app.

I used Phoenix 1.3 to set up an umbrella app, and so far I have the support request- and attachment-storing of the Workstation app (with its lib/web directory) working and deployed. The Windows software uses HTTP over TLS with HTTP Basic Auth to POST support request details and attachments; the textual details are stored in a PostgreSQL database on the app server and the attachments are stored as files on the app server’s filesystem.

My plan is to provide for support request management (reading, viewing/downloading attachments, replying, marking as solved, etc.) by my client, and administration of other aspects of the project (website content, customer system data, etc.), as part of another app run on a different server. This would be called the Admin app.

I’m thinking that the Workstation app’s data would effectively be read-only from the Admin app’s point of view, and that the Admin app will keep its own copy of all the API-submitted data in its own database/filesystem, with the addition of support request properties such as status (closed, open) and replies, and that the Admin app would handle email notifications.

Is this approach to separation sound in general?

If so, regarding how the data from the Workstation app will get to the Admin app, I’d be grateful for guidance.

The two apps’ servers could be in the same data centre and thus be able to be on the same private/private-ish network, but perhaps this should not be a requirement. If I allow for them to be in different data centres for greater flexibility/less coupling(?), could/should I:

  • extend the HTTP API of the Workstation app so that the Admin app can periodically (e.g., every fifteen minutes) check for new issues;
  • (additionally or instead?) use Phoenix channels to have the Workstation app “push” new support requests to the Admin app (assuming channels can go from server to server)?

(For the attachments, I was thinking I could simply rsync over SSH.)

Or are there better options? Should I have both servers in the same data centre so that they can be nodes that talk to each other? Would using GenServer be a good way of transferring data from the Workstation app to the Admin app whenever the Admin app is available?

I’ve purchased and am part way through some Elixir/Phoenix learning resources but I’m planning to start Elixir for Programmers soon. I see that it includes sections on processes, agents, supervisors, OTP and nodes. I’ve been reading posts here and looking at documentation on such topics but I haven’t yet had any experience of using them so I’m still vague.


Why not push the entries as they come in from the workstation app to the admin app? The admin app could provide an API for this (HTTP or otherwise), and access can be easily controlled by an API token you store on the workstation app, or by source IP, or whatever else makes sense for your setup.

That way you aren’t polling, and the workstation app can keep track of what has and has not been sent on to the admin app more easily and reliably.
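As a sketch of the access-control idea, assuming the admin app exposes a plug pipeline and the shared token lives in application config (the module name, header, and config key here are all placeholders, not anything from your project):

```elixir
# Hypothetical plug for the Admin app's API pipeline: rejects requests that
# don't carry the shared token the Workstation app is configured with.
defmodule AdminWeb.Plugs.VerifyAPIToken do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # The config key is illustrative; set it from the environment in config/.
    expected = Application.fetch_env!(:admin, :workstation_api_token)

    with ["Bearer " <> token] <- get_req_header(conn, "authorization"),
         true <- Plug.Crypto.secure_compare(token, expected) do
      conn
    else
      _ ->
        conn
        |> send_resp(401, "unauthorized")
        |> halt()
    end
  end
end
```

`Plug.Crypto.secure_compare/2` does a constant-time comparison, which avoids leaking token contents through timing differences.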


Oh. Yes, that does seem like a better direction if using HTTP. Thanks!

Are there alternatives to HTTP a beginner like me should look at for communication between two Elixir (Phoenix) apps? Would something else be better suited to this situation, including the transferring of files? (Though I expect the attachments to be quite small anyway.)

Then again, as I’ll be making much of the Admin app usable via the web it will need to have an HTTP API anyway, so maybe I should stick to HTTP for the Workstation-Admin communication and save less familiar-to-me methods for later.

Another way to communicate between contexts is to use domain events. If you had multiple domains listening to WorkstationApp, it would allow you to send asynchronous messages between boundaries.

A little later than expected due to a pause in work on this, I’ve got an initial transfer of user-submitted data from the user-facing Elixir Workstation app to the separate Elixir Admin app via HTTP working in development. I’m posting a short summary here and if anyone has any thoughts on whether or not this is a half-decent approach I’d be glad to hear them.

I also have a couple of questions on processes and whether or not I’m doing enough to guard against the Workstation app crashing due to an unexpected effect of the HTTP PUT. It’s a priority to keep the Workstation app running as it is what remote users will be sending their submissions to.

Quick summary:

Users' Windows software
  → POST
Elixir 'Workstation' app (Server 1. Stores submissions in its local PostgreSQL database.)
  → PUT (using UUID)
Elixir 'Admin' app (Server 2. Stores submissions in its local PostgreSQL database.)

When the Workstation app gets a success response from the Admin app, it updates its own record of the entity’s sent column from false to true.
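That update could be as simple as a one-field changeset, something along these lines (the function name is illustrative; the `sent` field matches the description above):

```elixir
# Illustrative: flip the entity's `sent` column to true once the Admin app
# has acknowledged the PUT. Assumes an Ecto schema with a :sent boolean field.
def mark_as_sent(entity) do
  entity
  |> Ecto.Changeset.change(sent: true)
  |> Workstation.Repo.update()
end
```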

I’ve added an API module to the Workstation app which uses HTTPoison which I can use to PUT submissions under their UUIDs to the Admin app:

defmodule Workstation.AdminAPI do
  use HTTPoison.Base
  # alias ...

  # Workstation app: 4000
  # Admin app: 4001
  @endpoint "http://localhost:4001/api_ws/"

  def process_url(url) do
    @endpoint <> url
  end

  def send_unsent_entities do
    fetch_unsent_entities() # placeholder name for the elided query
    |> Enum.each(&send_unsent_entity/1)
  end

  def send_unsent_entity(entity) do
    entity
    |> prepare_entity_data()
    |> put_entity()
  end

  # Some supporting private functions...

  # The private function that PUTs
  defp put_entity(entity_data) do
    # bind values to labels (path, body, headers) ...

    case put(path, body, headers) do
      {:ok, %HTTPoison.Response{status_code: 201} = response} ->
        # handle the success response ...
      # match other responses including errors ...
    end
  end
end

I use Workstation.AdminAPI.send_unsent_entity/1 from the Workstation.WorkData.create_thing/1 context function, so that a non-blocking PUT happens after a user submission:

defmodule Workstation.WorkData do
  import Ecto.Query, warn: false
  alias Workstation.Repo
  # alias ...

  def create_thing(attrs \\ %{}) do
    result =
      %Thing{}
      |> Thing.changeset(attrs)
      |> Repo.insert()

    case result do
      {:ok, thing} ->
        Task.start(Workstation.AdminAPI, :send_unsent_entity, [thing])
        {:ok, thing}

      _ ->
        result
    end
  end
end
I’m using Task.start/3 so that the response to the Windows software is not delayed by the HTTP PUT from the Workstation app to the Admin app.
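For comparison, a supervised variant of the same fire-and-forget call might look like this (the supervisor name `Workstation.TaskSupervisor` is an assumption; it would need to be started in the application's supervision tree):

```elixir
# Sketch: run the PUT under a Task.Supervisor instead of a bare Task.start/3,
# so the spawned task is owned by a supervisor and visible in the tree.
# Assumes {Task.Supervisor, name: Workstation.TaskSupervisor} is a child
# in the application's supervision tree.
Task.Supervisor.start_child(Workstation.TaskSupervisor, fn ->
  Workstation.AdminAPI.send_unsent_entity(thing)
end)
```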

Question: is this enough to make sure that the Workstation app and the API it provides to users (not shown here) won’t crash if the HTTPoison PUT between my two Elixir apps results in a crash?

Related question just for my education: if I were happy to delay the response to the Windows client until the results of the PUT were returned or a timeout occurred, would I still need to spawn another process to ensure that a PUT-triggered crash didn’t crash the current process? I.e., does HTTPoison already run in a separate process that won’t bring my app down?

In case the initial PUT triggered by create_thing/1 fails for whatever reason, I’m planning to run Workstation.AdminAPI.send_unsent_entities/0 every fifteen minutes or so. I’ve seen a means of doing this in Elixir rather than via cron, and I just need to find it again. :slight_smile:
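One common in-app pattern for that kind of recurring job is a GenServer that reschedules itself with `Process.send_after/3`; a minimal sketch, assuming the module name and supervision setup are up to you:

```elixir
# Hypothetical periodic retry worker: every fifteen minutes, send anything
# that hasn't yet reached the Admin app. Add it to the application's
# supervision tree, e.g. children = [Workstation.AdminSync, ...].
defmodule Workstation.AdminSync do
  use GenServer

  @interval :timer.minutes(15)

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  @impl true
  def init(:ok) do
    schedule_sync()
    {:ok, %{}}
  end

  @impl true
  def handle_info(:sync, state) do
    Workstation.AdminAPI.send_unsent_entities()
    schedule_sync()
    {:noreply, state}
  end

  defp schedule_sync do
    Process.send_after(self(), :sync, @interval)
  end
end
```

Because the timer is rescheduled only after `send_unsent_entities/0` returns, runs can't overlap even if one takes a while.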