GenServer use-cases

My question was triggered by this topic Can a GenServer state be too "big" and general application architecture but I’ve had it forming for a while now.

I’m struggling to understand whether GenServers should mostly be used for serializing requests, or whether there are other major use-cases.

At first, I thought they’re suitable for any place where you’d need request/(optional) response semantics, but then I realized that, due to their sequential nature, a pool of workers often sounds more suitable.

Next, I thought they’d be suitable for maintaining state in memory, but then again ETS seems strictly superior in that regard.

So I now wonder what are the real, not book example, use-cases for GenServers beyond request serialization (and I guess being pooled)? They seem rather ubiquitous in Elixir/Erlang world so I feel like I’m missing something huge here…


All of the options you mentioned are valid; to me it is a matter of simplicity and priority. GenServers are easier to understand because, as you said, all requests are serialized. Therefore, if you want to keep some state or perform some sort of concurrency control, they are a simple and understandable solution.

Surely, ETS will scale better in terms of concurrent reads/writes and you may get better parallelism with a pool of processes, but they do not offer the same guarantees as a GenServer. For example, can your state or operations be partitioned in the case of a pool? Are you certain you can allow both concurrent reads and writes to ETS without running into data races? Since you must answer those questions first, ETS and pools should be solutions you reach for instead of being the default choice.

My advice is generally to always start with a GenServer and then upgrade to ETS or a pool once you are certain it is needed. Always use the simplest abstraction to solve the problem at hand.
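To make the "start with a GenServer" default concrete, here is a minimal sketch (the `Counter` module name and its API are just for illustration): a GenServer that holds a single piece of state and serializes every access to it.

```elixir
defmodule Counter do
  # Minimal GenServer holding one integer as state. All calls and casts
  # are processed one at a time, in arrival order.
  use GenServer

  # Client API

  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def increment, do: GenServer.cast(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # Server callbacks

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, count), do: {:noreply, count + 1}

  @impl true
  def handle_call(:value, _from, count), do: {:reply, count, count}
end
```

Because the public functions hide the message passing, callers never know (or care) whether the state later moves into ETS or behind a pool.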


Thanks a bunch, Jose, for a lightning-fast and clear answer! Much appreciated!


Using a GenServer, you can hold state and build a layer of abstraction around it for accessing that state.

Just consider some software that manages bank accounts.

When you just read and write an ETS table, you can’t guarantee that reads and writes happen in order. You can’t even guarantee atomicity of some requests, which would blow up the consistency of your database.

Using a GenServer, on the other hand, does guarantee that everything happens in the right order, as well as atomicity of transferring money from one account to another. At least it appears that way to external processes.
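A sketch of that bank-account idea (the `Bank` module, its map-of-balances state, and its API are all made up for illustration): because the process handles one message at a time, the debit and credit of a transfer happen together or not at all, as seen from outside.

```elixir
defmodule Bank do
  # Sketch: balances kept in a map keyed by account id. Each transfer is
  # handled in a single handle_call, so callers observe it atomically.
  use GenServer

  def start_link(balances \\ %{}) do
    GenServer.start_link(__MODULE__, balances, name: __MODULE__)
  end

  def transfer(from, to, amount) do
    GenServer.call(__MODULE__, {:transfer, from, to, amount})
  end

  def balance(account), do: GenServer.call(__MODULE__, {:balance, account})

  @impl true
  def init(balances), do: {:ok, balances}

  @impl true
  def handle_call({:transfer, from, to, amount}, _from, balances) do
    case Map.fetch(balances, from) do
      {:ok, src} when src >= amount ->
        # Debit and credit in one step; no other request can interleave.
        new_balances =
          balances
          |> Map.update!(from, &(&1 - amount))
          |> Map.update(to, amount, &(&1 + amount))

        {:reply, :ok, new_balances}

      _ ->
        {:reply, {:error, :insufficient_funds}, balances}
    end
  end

  def handle_call({:balance, account}, _from, balances) do
    {:reply, Map.get(balances, account), balances}
  end
end
```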

You could write a wrapper around an ETS table, but as soon as someone else finds out the name of the table, they can at least read it at will, while there is no way to read a GenServer’s state without going through its official API and waiting to be the next request processed.
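To illustrate the difference (the `:accounts` table name is invented for this example): a public named ETS table is readable by any process that knows its name, wrapper module or not, while a GenServer’s state is only reachable by message.

```elixir
# Any process that knows a public, named table's name can read (and here
# even write) it directly, bypassing whatever wrapper module exists:
:ets.new(:accounts, [:set, :public, :named_table])
:ets.insert(:accounts, {"alice", 100})
[{"alice", 100}] = :ets.lookup(:accounts, "alice")

# A GenServer's state, by contrast, lives inside the process itself: the
# only way to see it is to send a message through its API (e.g. a
# hypothetical Bank.balance/1 built on GenServer.call) and wait your turn.
```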

Another thing to consider when deciding between them is that the state of a GenServer is garbage collected automatically, while with ETS you have to remember to delete entries yourself when they are not needed any more.

The next thing to consider is that an ETS table is only accessible from the node it lives on (IIRC), while you can send messages to a GenServer across nodes.

Last but not least, there might be kinds of state that do not fit into the table-like structure ETS provides (or would generate huge overhead in flattening them down until they fit); such data might fit perfectly into a single GenServer. (Or the state itself is so small that a table would be overkill; consider a simple integer value that counts something.)

From an Erlang point of view, most of the time I create a GenServer first and just use it. When I realize it has become a bottleneck, I keep its API and create an ETS table which I access through the GenServer’s API. This way, most of the time I only need to change code in the GenServer when I need to restructure the data in the state (be it passed around as process state or stored in an ETS table). When working with ETS directly from the beginning, you might have to change code across multiple modules to make it fit your new data structure. If your test coverage isn’t good, you might even miss some parts until you’ve gone into production and your clients start to lose money and blame you :wink: