Are you using a clustered Elixir deployment?

We’re considering our architecture with a view to scaling our traffic heavily over the next 6 months. Our current deployment runs in Fargate with two task types, one for web and one for background jobs, which are scaled independently. Obviously with this configuration we’re not taking advantage of the BEAM’s clustering potential, so we’re considering what we might gain by moving to a clustered deployment.

I feel like this is a slightly neglected topic in the Elixir community, particularly as many folks are coming from languages where Docker is now the de facto unit of scaling.

If you’re using a clustered deployment I’d love to hear about what your deployment looks like and how you leverage the clustering. Also what platform do you deploy on? Are you using kubernetes/docker? Are you happy with it?


It depends a bit on what you’re looking for with clustering. We are on AWS / EKS (AWS-managed Kubernetes) and do use clustering. Our clusters are homogeneous: all nodes run the same code and perform the same roles. The main value we get out of clustering is easy use of Phoenix PubSub and other inter-node message passing.


We are in AWS Fargate. There are web nodes and one node for background work. All nodes run the same code; the background node just has an extra ENV variable, based on which we add more things to the supervision tree.

We are on our way to getting rid of the background server though, because we moved almost all periodic tasks to AWS cron jobs, where an instance is started just for the task and then killed.

About the cluster: we used Horde at first and had problems with it. We switched to Swarm last week and now it seems to run smoothly. Our application is stateful, so we need the cluster for nodes to talk to each other.


I run an agency and we have multiple projects. In most of them we actually don’t have to use any clustering at all: it often happens that we are running pretty standard single-database-multiple-web-nodes or even single-database-single-web-node setups (yeah, Elixir is fast!). But there are projects where we do use clustering too.

Let’s start with use cases I see:

Actually the most common use case is that we want to have PubSub (as in Phoenix PubSub) functionality shared across the cluster, without the need for an additional dependency (like Redis). This is used by default by Phoenix Channels, but not only there. We also use the Absinthe GraphQL server, and when you do real-time push updates that way (with GraphQL subscriptions), you often have to trigger them with a PubSub message as well, and you want all clients, connected to all nodes, to receive that update. The third, also PubSub-related use case is LiveView. It’s similar to GraphQL subscriptions in that whenever some event happens, say a certain record gets updated, all currently connected LiveViews should receive a message and do something like re-render the updated record. We do that with PubSub too, and when our nodes are clustered this is a no-brainer in usage and configuration.
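To make the mechanism concrete: Phoenix PubSub’s default adapter builds on the BEAM’s distribution layer, and the same idea can be sketched with OTP’s built-in `:pg` process groups (OTP 23+). The `TinyPubSub` module and topic names below are made up for illustration:

```elixir
# Minimal cluster-wide pub/sub sketch using OTP's built-in :pg
# process groups (the primitive Phoenix.PubSub's default adapter
# builds on). Group membership is replicated to every connected
# node automatically, with no external dependency.

defmodule TinyPubSub do
  @scope :tiny_pubsub

  def start do
    # One :pg scope per pubsub instance; started once per node.
    :pg.start_link(@scope)
  end

  def subscribe(topic) do
    # Join the calling process to the topic's process group.
    :pg.join(@scope, topic, self())
  end

  def broadcast(topic, message) do
    # get_members/2 returns subscriber pids on all connected nodes.
    for pid <- :pg.get_members(@scope, topic) do
      send(pid, message)
    end

    :ok
  end
end
```

On a single node this behaves like a local registry; once nodes are connected, `broadcast/2` reaches subscribers on every node, which is exactly what Channels, Absinthe subscriptions, and LiveView need.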

After PubSub, the second use case for clustering, I think, is the need to perform a cluster-wide lock, i.e. a critical section. Or more generally: limiting the concurrency of something cluster-wide, to one or N concurrent actions of a given kind. For example, if you track usage of your system and have pay-as-you-go or tiered billing plans, you want to warn users when they are approaching their limit, and then maybe suspend them, switch plans, or apply extra charges once they exceed it. You may want these events to happen precisely once. Or you want to throttle users when they are exceeding some limit of API calls. You usually can do these things without clustering and rely on something like locking records in the database, but that has its own disadvantages, for example that your database connections are held for a long time.
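For the cluster-wide lock itself, Erlang ships a primitive out of the box: `:global.trans/2` acquires a lock that is coordinated across all connected nodes. A minimal sketch (the `UsageGuard` module and the billing scenario are hypothetical):

```elixir
# Sketch of a cluster-wide critical section using the built-in
# :global module. :global.trans/2 blocks until it holds a lock known
# to every connected node, runs the fun, then releases the lock.

defmodule UsageGuard do
  # Run `fun` while holding a cluster-wide lock keyed by user id.
  # Returns the fun's result, or :aborted if the lock can't be taken.
  def with_user_lock(user_id, fun) do
    # A lock id is {ResourceId, RequesterId}; using self() as the
    # requester lets each process contend for the lock separately.
    lock_id = {{:usage_lock, user_id}, self()}

    :global.trans(lock_id, fn ->
      # At most one process in the whole cluster runs this block for
      # a given user at a time, so "warn exactly once" / "charge
      # exactly once" checks are safe here.
      fun.()
    end)
  end
end
```

Because the lock lives in the runtime, no database connection is held while waiting, which avoids the long-lived-lock disadvantage mentioned above.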

The third use case is handling things that can go wrong across multiple nodes. For example, a circuit breaker that shuts off the part of the system contacting an API that started timing out, or rate-limiting usage of an external API across the cluster. Again, it’s probably possible to do with something like Redis or database locking, but it’s just so much easier and more natural to do with Elixir processes in a cluster.
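A cluster-wide circuit breaker of that kind can be as simple as one GenServer registered under a `:global` name, so every node consults the same instance. The names and the failure threshold below are illustrative, not from any library:

```elixir
# Sketch: a cluster-wide circuit breaker as a globally registered
# GenServer. {:global, name} registration means only one instance
# exists per cluster, and any node can call it by name.

defmodule Breaker do
  use GenServer

  @threshold 3

  def start_link(name) do
    GenServer.start_link(__MODULE__, :ok, name: {:global, name})
  end

  # Ask before calling the flaky external API.
  def allow?(name), do: GenServer.call({:global, name}, :allow?)
  def record_failure(name), do: GenServer.cast({:global, name}, :failure)
  def record_success(name), do: GenServer.cast({:global, name}, :success)

  @impl true
  def init(:ok), do: {:ok, %{failures: 0}}

  @impl true
  def handle_call(:allow?, _from, state) do
    # Open the breaker once failures reach the threshold.
    {:reply, state.failures < @threshold, state}
  end

  @impl true
  def handle_cast(:failure, state), do: {:noreply, %{state | failures: state.failures + 1}}
  def handle_cast(:success, _state), do: {:noreply, %{failures: 0}}
end
```

A real breaker would also add a timed half-open state to retry the API; this sketch only shows the cluster-wide-singleton part.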

And finally, caching and keeping the system “hot”, as in warmed up after deployments. If you need to keep something in memory (versus in a database or external service), you can duplicate it on all of the nodes. You can do that without clustering. But you can also form a cluster and have a cluster-wide cache, provided your node-to-node connections are fast (which they should be). Or you can do a mixture of both, keeping some things on hand in memory on all nodes in the cluster, and other things local to a particular node. What then becomes interesting is the ability to pass that cache on to newly started instances. If you release to prod often, you may find your system needing some “warmup” time, when it doesn’t yet know what’s going on, has no caches, and is building them up as clients make requests. Starting from a blank slate like this may not be desirable, and it can even lead to serious performance issues if you deploy during high-traffic hours. So, by briefly forming a cluster between the shutting-down and starting-up nodes, you can pass the relevant state, such as caches/counters/processes, from the old version of the application to the new one.

When it comes to what platform we use:

We have been doing that on dedicated hardware, EC2 instances (with ECS), and also recently on Gigalixir. The last option is definitely the easiest to set up, as they have figured most of these things out, including making the state passing from shutting-down to starting-up nodes possible (which I learned only recently, silly me!). Unfortunately I have no experience with your stack :/.

Hope that’s helpful!


How is that done? Really interesting.


Which part are you asking about: how it is done by the infrastructure, or how state passing is done on the application side?

Definitely the latter – I didn’t know Erlang/Elixir could do that.

You can do this using the code_change callback in GenServers, which is triggered during hot code upgrades. You can find the docs here, and more info here in the section labeled “transforming state”.
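To make the hook concrete, here is a minimal sketch of `code_change/3` with a hypothetical counter server. During a hot upgrade the release handler calls the callback with the old state, and the GenServer returns the reshaped one:

```elixir
# Sketch of the code_change/3 GenServer callback used during hot
# code upgrades to transform old state into the new shape.

defmodule Counter do
  use GenServer

  @impl true
  def init(arg), do: {:ok, arg}

  # v1 of this server kept a bare integer; v2 keeps a map so more
  # fields can be added. On upgrade, OTP hands us the old state and
  # we return the converted version.
  @impl true
  def code_change(_old_vsn, count, _extra) when is_integer(count) do
    {:ok, %{count: count, updated_at: nil}}
  end

  # Already in the new shape: pass through unchanged.
  def code_change(_old_vsn, state, _extra), do: {:ok, state}
end
```

In production this is driven by the release handler, not called by hand; invoking it directly just demonstrates the state transformation.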


Well, I think we are mixing two concepts. There are at least two ways to do what we are talking about here.

  1. Use Erlang / Elixir releases with hot code upgrades. To be fair, I haven’t used this method in production. But it works when you have a fixed set of servers that you deploy your application to, and the updated application will run on the same servers. By servers I mean OS instances, as in EC2 instances or real hardware servers, that stay the same across deployments. This method doesn’t really care about clustering, which may or may not happen in parallel to hot upgrades. It’s just deploying a new version of the code to the same servers that were running the previous version, and there are hooks in GenServers and friends to handle state passing between the ‘old version’ and ‘new version’ of the code that was just deployed.

Again, these days most of the things I work on are not deployed to such static/dedicated servers, but to VMs created as needed by some piece of infrastructure and discarded after the application shuts down. This is the way anything Docker-based or Kubernetes-based works.

  2. Use clustering: no hot code upgrades, but a cluster formed between starting and shutting-down instances.

This method is suitable for passing in-memory state on deployments when you use something like Kubernetes. When you deploy a new version to the cloud, the old instance(s) of the application are still running in their own containers. During deployment, the piece of infrastructure you use creates new containers for the new release. These start their own little OS instances and run the application. Here’s the moment where your infrastructure may establish a link between the legacy version of the application and the new version, so you can pass state. Again, Gigalixir does that by default, I believe.

Which library we use to handle cluster formation is not really important here.

What is important is that you can listen to events when new nodes join or leave the cluster.

So you monitor the nodes in your cluster in some process, get events when a new node joins or leaves, and you can decide to pass state to a newly started node by sending a message with the state to some process running on that node.
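A sketch of such a monitoring process, assuming nothing beyond the standard library (the `NodeWatcher` name is made up, and the handoff send is shown as a comment because it depends on your cache process):

```elixir
# Sketch: subscribe to cluster membership events. After calling
# :net_kernel.monitor_nodes(true), the runtime delivers
# {:nodeup, node} / {:nodedown, node} messages to this process.

defmodule NodeWatcher do
  use GenServer

  def start_link(_opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  @impl true
  def init(:ok) do
    # true => subscribe this process to node up/down events.
    :net_kernel.monitor_nodes(true)
    {:ok, %{}}
  end

  @impl true
  def handle_info({:nodeup, node}, state) do
    # A new instance joined: this is where you could hand state over,
    # e.g. send({:cache, node}, {:warm_up, cache_snapshot()})
    # (cache_snapshot/0 is hypothetical).
    IO.puts("node joined: #{node}")
    {:noreply, state}
  end

  def handle_info({:nodedown, node}, state) do
    IO.puts("node left: #{node}")
    {:noreply, state}
  end
end
```

The interesting work all happens in the `{:nodeup, node}` clause: the new node is already fully connected when the message arrives, so a plain `send/2` to a named process on it is enough to transfer state.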

There are at least two projects that abstract away most of the details here and do a lot of the legwork for you: one is Swarm, the other is Horde. With both you can start processes on the nodes in the cluster, and they will react to cluster formation, providing hooks to pass state. With Swarm it’s a bit easier, but we observed some nondeterministic behavior there, i.e. bugs. In theory it’s super sweet, however, and the API is really nice.

Then, you can do the same with Horde, with a bit more legwork.


Many thanks for posting this, it’s really helpful to hear about some concrete scenarios where you’ve decided to use/not use clustering.


Yeah you’re right, I wasn’t thinking about Kubernetes because I’ve never used it personally, and probably wouldn’t want the additional complexity involved in adopting Kubernetes just for Elixir. If you have a polyglot stack then Kubernetes makes more sense, IMO. I’ve also not used Horde or Swarm before, but I’ll check them out, so thanks for the tip.

We’re using clustering but probably not in a way that you’re interested in. My current project is an embedded system of between 3 and 30 nodes which are all telecomms related equipment distributed around a site and networked together. The software on each node is responsible for configuring the attached hardware, but the nodes communicate with each other to co-ordinate this.

We do very basic cluster formation (manually, although I’m looking at libcluster) and then use plain distributed Erlang message passing for the comms. This required very little faff to set up, and although we’re only in the early stages of development it seems to work well for us.


@benwilson512 Are your apps containerized? Are you running on bare metal? What are you using for clustering?

@egze Were you experiencing issues using Elixir libraries to handle periodic tasks? If yes, would you please share?

They are containerized, we use libcluster’s K8s service plugin, it works great!

No, not at all. But we decided to use existing infrastructure tools that our DevOps support.

It also allows us to start periodic tasks with more memory than is needed to run the web application.

We have a few services in Elixir

  • All run on docker
  • All run on k8s - except one (running on vm + docker + systemd - but we will move this to k8s too :wink: )
  • We don’t use hot code reloading at all – we just do it the k8s way.
  • Some services are using erlang cluster, while some are just independent containers.

For services using erlang cluster:

  • We use libcluster to form a cluster from k8s dns, and it works very well
    • A while ago, I tried to set up erlang clusters on VMs across AWS EC2 instances with ansible, and it wasn’t simple… I don’t remember the details, but it was probably due to epmd.
  • We need erlang clusters for global knowledge (e.g. websockets) and message passing between processes across nodes with minimal latency… and distributed erlang is a great fit.
  • Some applications use Horde for a global registry, and we have an issue on network splits (link) – but other than that it works well as designed.
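For reference, DNS-based cluster formation with libcluster on k8s is typically just one topology entry like the sketch below. The service and application names are hypothetical; they must match your headless k8s Service and your release’s node basename:

```elixir
# config/runtime.exs (sketch) — libcluster topology using the
# Kubernetes DNS strategy: peers are discovered by resolving the
# headless Service's DNS record, then connected over distribution.
import Config

config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-headless",   # hypothetical headless Service name
        application_name: "myapp"    # node basename, e.g. myapp@<pod-ip>
      ]
    ]
  ]
```

The matching `Cluster.Supervisor` child then goes into the application’s supervision tree, and nodes join the cluster as pods come and go.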

For services not using erlang cluster:

  • To distribute async jobs we use oban. This is our preferred choice if a service already requires PostgreSQL. Having persistent, auditable records is often “required”.
    • oban also has “unique” jobs and cron job features, so we don’t need to build a global lock or a global scheduler, which is very nice.
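A sketch of those two Oban features (the worker, queue, and schedule below are hypothetical, and this requires the oban dependency plus a configured Ecto repo):

```elixir
# Sketch: Oban's uniqueness option stands in for a global lock, and
# the Cron plugin for a global scheduler — both enforced through
# Postgres rather than through an erlang cluster.

defmodule MyApp.DigestWorker do
  # At most one digest job with the same args may be enqueued
  # within a 60-second window, cluster-wide, with no erlang cluster.
  use Oban.Worker, queue: :mailers, unique: [period: 60]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => _user_id}}) do
    # send_digest(user_id)  # hypothetical business logic
    :ok
  end
end

# config/config.exs
config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [mailers: 10],
  plugins: [
    # Run the digest worker daily at 08:00, scheduled via Postgres.
    {Oban.Plugins.Cron, crontab: [{"0 8 * * *", MyApp.DigestWorker}]}
  ]
```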

On docker:

  • We have our own base image (compiling erlang, reusing precompiled elixir). Some code is available here
  • We haven’t hit any issues from erlang/elixir on docker.

On k8s:

  • k8s is not easy – there is a lot to learn, and the CI/CD and local dev ecosystem is still evolving… but it does make sense in certain circumstances :slight_smile:

Please keep in mind that you don’t need an erlang cluster for scaling. Running independent instances behind a load balancer is totally fine.

You WILL NEED an erlang cluster only when you choose to use features that are available exclusively within one. For example, I’ve seen a presentation about adding a global cache on top of the database to reduce latency.