Application communication best practices

Hi all!

This is my very first question here!

I was playing a bit with Phoenix and Elixir in a solution where I would like to use Phoenix as a client application that solely provides a web interface for backend services, allowing other interfaces in the future, such as a CLI or integration with other technologies through messaging or HTTP.

For the communication between Phoenix and the Elixir backend I would ideally like to take advantage of the Erlang/Elixir ecosystem and communicate through processes, so I decided to use GenServer for that.

What I did was define the handle_call callback in the backend service, and on the client I wrapped the GenServer calls in a MyApp.BackendApp.Client module which is used purely to invoke the backend functions.

I received some criticism from my team members about that approach. What would be a best practice in this case, when I want to keep client and server decoupled while at the same time taking advantage of the ecosystem? I mean without HTTP/REST or messaging communication.

Welcome to the forum,

What about defining the GenServer call inside the public API of the GenServer? (Remember, there are two parts to a GenServer.)

This way, the client will call the client API of the GenServer instead of calling it itself. It would be easy to decouple and use a CLI. See pragdave’s approach if You can…

In practice, it would be like moving your code from Client to the GenServer API, and replacing it with API calls instead.
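A minimal sketch of that split (the Counter module and its functions are invented for illustration): the client API functions are the only thing other modules touch, while the callbacks stay an implementation detail.

```elixir
defmodule Counter do
  use GenServer

  # -- Client API: the only functions other modules should call --

  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def increment, do: GenServer.call(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # -- Server callbacks: hidden from callers --

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, count), do: {:reply, count + 1, count + 1}
  def handle_call(:value, _from, count), do: {:reply, count, count}
end
```

A caller (a web controller or a CLI) only ever uses Counter.increment/0 and Counter.value/0, so changing the message shapes or the state representation never touches the caller.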

Hey @kokolegorille, thx a lot for your point. I saw the YouTube video from PragDave in the past and I really like the approach. Now I realize my first mistake was relying on the backend service implementation directly, but I still have some noise in my head.

What makes me doubt that this would decouple the web and service applications is that the GenServer module is available everywhere, and having a backend module call like MyBackendApp.func present in my client wrapper sounds weird; it looks like my web client is now heavily dependent on my backend service. Is that the way to go? I mean, an HTTP client or a message queue seems to be the standard way to communicate between applications/components/services, rather than an explicit API whose details the client should not know, or even be aware of.

Without code sample it’s difficult to tell.

I am sorry, I have the same critical point of view as your team members; I really don’t like your approach.

Mainly because

  • I would not use GenServer.call outside the client part of a GenServer, but send the message from there
  • a GenServer already is a client-server, and I would put the API there.

And I am not sure I follow You on this idea of client/server.

Decoupling should allow You to switch between CLI, or Web Interface. There is exactly this in pragdave’s course.

I don’t want to be discouraging, but I invite You to read your post later and see if You still find it’s a good idea.

If we split the server side into a public API and handle_call callbacks, this allows us to change both in tandem easily. In addition, clients do not need to know the internal structure of the server.


defmodule Server do
  use GenServer

  defstruct []  # fields elided

  def init_context(), do: %Server{}
  def send_update(pid, context), do: GenServer.call(pid, {:send_update, context})
end

At this point the client doesn’t need to know anything about context, it doesn’t need to know how send_update works.

send_update might perform some pre-checks before calling the Server; it might chain several other GenServer casts/calls before calling the service. It might change the call to GenServer.call(pid, {:update, %{context | callee_ts: DateTime.utc_now()}}).
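One way that pre-check idea could look (the Updater module and its validation rule are invented for illustration): the client-side function validates and enriches the argument before any message is sent.

```elixir
defmodule Updater do
  use GenServer

  # Client API: validates and enriches the context before ever
  # messaging the server process.
  def send_update(pid, context) when is_map(context) do
    GenServer.call(pid, {:send_update, Map.put(context, :callee_ts, DateTime.utc_now())})
  end

  def send_update(_pid, _context), do: {:error, :invalid_context}

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:send_update, context}, _from, _state) do
    # Store the enriched context as the new state.
    {:reply, :ok, context}
  end
end
```

Callers never learn that a timestamp is added, or even that a tuple is sent at all.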

This style of programming is very old and widely used by C/Ada programs to hide implementation details.

Based on your description I would recommend against GenServer. The thing is that the way Phoenix works is that it spawns a process for each request that comes in. This allows it to scale nicely over multiple cores. Assuming you start a single process to handle all the work you are effectively turning your app single-threaded. You’d get around that if you create a pool of processes etc, but that’s a lot of effort for what gain?

I would not use GenServer for this. Just create a client module and call those functions. This means your business code runs in every process spawned for requests, the way it is intended, and you still get the separation of concern and decoupling that you’re interested in. I don’t understand why you’d have two apps speaking over “HTTP/Rest” if all you’re concerned with is decoupling.
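A sketch of what that looks like (module names and the stock logic are placeholders): no process sits in the middle, just a module boundary, so the backend code runs inside whatever Phoenix request process calls it.

```elixir
defmodule Stock do
  @moduledoc "Pure business logic; runs in whatever process calls it."

  # Placeholder logic: return the last `day_before` entries of the list.
  def stock_prices(prices, period: day_before) do
    Enum.take(prices, -day_before)
  end
end

defmodule StockClient do
  # The "client" is just a module boundary - no process in between,
  # so this runs concurrently in every request process.
  def stock_prices(prices, day_before) do
    Stock.stock_prices(prices, period: day_before)
  end
end
```

Swapping the backend later only means changing what StockClient delegates to; the web code keeps calling StockClient.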


I will try to be clearer about my doubts regarding communication best practices. All of you gave me good points, but I still have an unanswered question, especially because I believe I unintentionally brought two questions in one.

The first thing @kokolegorille has already clarified, and I agree; the criticism from my co-workers was correct and I was wrong. So I will keep the GenServer client/server together, or at least in the same application.

Below I draw the applications that I have and a snippet of how I did it, but it still seems wrong to me and I’d really appreciate any help.

    MyPhoenixApp                              MyBackendApp
  +------------------------+              +-------------------------+
  |                        |              |                         |
  |                        |              |                         |
  |                        |              |                         |
  |                        |--------------|                         |
  |                        |              |                         |
  |                        |              |                         |
  |                        |              |                         |
  +------------------------+              +-------------------------+
     node0 phoenix app                      node1 backend elixir app

Backend service relevant code

defmodule MyBackendApp.Stock do
  use GenServer

  def stock_prices(server, {period: dayBefore}) do
    # ... suppressed code
    GenServer.call(server, {period: dayBefore})
  end

  # ... suppressed code

  def handle_call({period: dayBefore}, _from, state) do
    # ... suppressed code
  end
end

Web client relevant code

defmodule MyPhoenixApp.MyBackendApp.Client do
  @moduledoc """
  This client will be used in controllers or other modules interested in stock prices.
  """

  def stock_prices(server, {period: dayBefore}) do
    # ... suppressed code
    MyBackendApp.Stock.stock_prices(server, {period: dayBefore})
  end

  # ... suppressed code
end

When I mentioned coupled code I mean this line, MyBackendApp.Stock.stock_prices(server, {period: dayBefore}), where the client app is heavily dependent on my backend service.
Initially I had written this line as GenServer.call(server, {period: dayBefore}), because I thought that way I would rely on the standard GenServer module instead of on my backend implementation. Now I regret calling the backend module directly, but it still seems to couple my Phoenix app with my pure Elixir backend application.

How shall I proceed in that case?

In this post josevalim mentioned communication, and as far as I could understand he was using agents for that.

I recommend you have a good look at this topic, the associated video and in particular this post to get a better sense of the relevant use cases for distributed BEAM.

In short, the general recommendation would be to not use distributed BEAM to communicate between MyPhoenixApp and MyBackendApp (given that they are running on separate nodes, and consequently on physically separate machines connected via an unspecified network).

Now, that doesn’t mean it can’t be used, but one needs to be aware of a number of issues with this approach. It imposes constraints on the technical infrastructure (e.g. the nodes must run in a secured environment on a common backplane or similar localized arrangement) that may or may not be a showstopper.

“Coupling” takes a lot of forms - calling a function in one module from another is one kind, but even using a generic function like GenServer.call/3 requires the caller and recipient to agree on the shape of the tuple ({period: dayBefore} here), so that’s also coupling.

MyBackendApp.Stock.stock_prices above reminds me of the example given in “When (not) to use a GenServer” in the documentation. Using GenServers for code organization can cause undesirable side-effects: for instance, using code like GenServer.call(MyBackendApp.Stock, {period: dayBefore}) with a fixed name means there’s a single process handling messages one at a time.

Thanks for sharing more about your architecture! To better understand your use case I was wondering if you could share your need for two separate nodes. Is there a specific reason that you want two separate nodes instead of running all of the code on one node? (There are definitely good reasons for both architectures, it’s just when you add multiple nodes you add complexity so I’m trying to understand if it seems worth it in this situation)

@al2o3cr This is exactly what I want to learn with this question: how can I make efficient inter-component communication following the practices of the Elixir world? My experience with other technologies generally follows two common approaches, REST APIs or message brokers; since Erlang/Elixir has distributed programming in its soul, I want to use it the right way. So if GenServer is not the way to go, could it be Task or a simple Process?

@axelson I just used node0 and node1 for didactic reasons, to explicitly show that I want both components running independently. I can run multiple nodes inside the same server without many networking issues. It’s just to create isolation between MyBackendApp and MyPhoenixApp.

@peerreynders brought a lot of resources that I take as homework in order to better understand how to proceed.

On a single node that is usually accomplished with “parallel dependencies” (poncho projects within Nerves).

For example, from an Elixir course: game.ex is “the hangman” backend to “the gallows” Phoenix application. And game.ex isn’t even a process - it simply defines a struct which captures the state that is manipulated only through the module’s functions.
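A tiny sketch of that struct-plus-functions style (the Game module and its rules are made up here; pragdave’s actual game.ex is more involved): the whole state lives in a plain struct, and every move is a pure function from old struct to new struct.

```elixir
defmodule Game do
  # All game state lives in a plain struct, not in a process.
  defstruct turns_left: 7, letters_used: MapSet.new()

  def new, do: %Game{}

  # A "move" is a pure function: old struct in, new struct out.
  def guess(%Game{} = game, letter) do
    %Game{game |
      letters_used: MapSet.put(game.letters_used, letter),
      turns_left: game.turns_left - 1}
  end
end
```

Any interface - a Phoenix controller, a text client, a test - just holds the struct and calls the functions, each in its own process.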

The hangman backend does have its own server, but that is only run for use with the text client.


Have you considered using… nothing? Let’s walk through a typical Phoenix + Ecto application to explain what I mean.

(h/t to @jola, this is a long version of that earlier comment)

There are two big clumps of processes running: database connections and acceptors.

The application’s Repo is a single process, started at boot, and manages a pool of DBConnection processes, keeping track of which connections are checked out and handling failures.

A Phoenix endpoint uses Cowboy (and ultimately Ranch) to start a bunch of “acceptor” processes which (as the name might suggest) accept incoming TCP connections and start a new “handler” process that does the actual work (here’s a diagram and additional discussion).

That process handles running the per-request Phoenix code as well as user-defined plugs and eventually calls the specified function in the controller. When that code calls functions like Ecto.Repo.insert, Ecto checks out a connection - borrowing it from the DBConnection process that holds it when idle - and interacts with the database.

Notably absent here is a process boundary: some of the code that gets executed is defined in modules within the Phoenix “app” (routing, plugs, controllers), and some comes from the Ecto “app” (schemas, changesets) but all of it executes within that single “handler” process created for the specific HTTP request.

This is what the GenServer docs mean by “use processes only to model runtime properties, not code organization”. The runtime property desired here is per-HTTP-request concurrency.

On the other hand, having an EctoApp GenServer that every HTTP request interacts with would result in a bottleneck at that GenServer’s message queue.
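That bottleneck is easy to see in miniature (the Bottleneck module and its 100 ms of fake work are invented for illustration): because one named process handles the calls, concurrent callers are served strictly one at a time.

```elixir
defmodule Bottleneck do
  use GenServer

  def start_link, do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)
  def slow_work, do: GenServer.call(__MODULE__, :slow_work)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:slow_work, _from, state) do
    Process.sleep(100)  # stand-in for real per-request work
    {:reply, :done, state}
  end
end
```

Two "requests" fired concurrently with Task.async still take at least 200 ms in total, because the single Bottleneck process serializes them, while plain function calls would have run in parallel in the two caller processes.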

So “nothing” at the beginning of this post is a little hyperbolic, but “a function call” is pretty close. The BEAM has a lot of powerful architectural features, but it’s vitally important to understand the philosophy underpinning them to use them correctly. GenServers != microservices


Is there any reason why the web code has to stay on another node (a separate machine)?

If the code can stay on the same machine, why not call the function directly? For me this is the best practice, and the simplest.

One option to decouple while staying on the same machine is to use RabbitMQ or something similar, but for a simple API I consider it bad practice.

If you would like to build microservices, here are some approaches:

It’s worth seeing:

You got my point. Since I come mostly from the Java world, where lately there is a big movement towards microservices and event-driven architectures, I tend to use more or less the same approach with Elixir.

Since I didn’t get constructive, example-driven feedback from my aforementioned colleagues, I ended up here trying to find the Elixir way to write code that is decoupled, distributed and/or isolated among contained services holding mainly business logic.

I really appreciate all the points that people brought up here; they helped me a lot. Thanks @kokolegorille @jola @peerreynders @axelson @al2o3cr and @joaothallis for spending your time helping me on my journey to become a good Elixir dev.