Elixir async vs JavaScript async

I’m new to concurrency models and multi-threading since the only backend work I’ve done is with JavaScript.

The closest I’ve come to working with concurrency is JavaScript’s async behavior, which seems to only be strong for IO-type operations. If something that isn’t IO happens, it blocks everything else.

Can Elixir do things that aren’t IO concurrently?

I’d love to see a comparison of Elixir vs Nodejs’s “async”/concurrent abilities and what the best use cases for each are.


Yes! The BEAM concurrency model works regardless of what kind of work is being done [1].

The BEAM VM uses preemptive scheduling, meaning that it tries to let all processes execute their work evenly, swapping each one out after it has done a number of units of work. These units are called reductions, and by default a process gets swapped out after it has done 2000 reductions.

This is why latency is very even in BEAM languages: no request can starve other requests or block all the scheduler threads.

In general the BEAM concurrency model is best if you want to be “fair” to all your processes. It gives very stable and robust results.
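
You can see this fairness for yourself. The sketch below (module and function names are mine, purely for illustration) saturates every scheduler with pure CPU spinning and then spawns one more short-lived process; on the BEAM the short process still gets scheduled promptly because the busy loops are preempted after their reduction budget:

```elixir
# A CPU-bound busy loop cannot starve other processes on the BEAM:
# the scheduler preempts it after its reduction budget is spent.
defmodule Preemption do
  # Pure CPU spin: no IO, never voluntarily yields.
  def busy_loop, do: busy_loop()
end

# Spin up one busy process per scheduler thread to saturate the CPU.
for _ <- 1..System.schedulers_online() do
  spawn(&Preemption.busy_loop/0)
end

# A freshly spawned process still gets CPU time almost immediately.
parent = self()
spawn(fn -> send(parent, :pong) end)

receive do
  :pong -> IO.puts("short task ran despite the busy loops")
after
  1_000 -> IO.puts("starved (this should not happen on the BEAM)")
end
```

In a runtime with cooperative scheduling (like Node’s event loop), the equivalent busy loop would block everything else instead.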

I am obviously a bit biased, but I don’t think there is a comparison here. The BEAM’s concurrency is concurrency done right. It performs better (as in, schedules the work better) and is easier to use than Node’s model. The BEAM also uses all cores instead of being single-threaded.

That said, other concurrency models are optimised for throughput rather than latency (I’m not sure where Node falls here). If you need maximum throughput and can live with bad latency on a percentage of your requests, you may want a concurrency model that doesn’t have the scheduling overhead.

I would not pick Node’s concurrency model over the BEAM’s any time. That doesn’t mean there aren’t other considerations besides the concurrency model that might make you pick Node (all your software is already written in it, it’s the only thing your developers know, etc.).

[1] If software is written in external languages and called via NIFs this may no longer be true, unless the NIF function accounts for its own reductions, or runs on dirty schedulers to make it behave better (from a scheduling point of view).


Hi @elderbas,

I think you will find this video by Saša Jurić to be excellent on BEAM concurrency, how it works, and interesting runtime process introspection/debugging…


Given the topic this one seems obligatory:

What every Node.js developer needs to know about Elixir - Bryan Hunter (NDC Oslo, June 2016)


With the exception of the speed of his speech, Saša Jurić’s is an excellent presentation!

Haha, yeah, he always apologizes in advance for his fast speech. I actually like it, as he compresses a lot of great content into < 40 minutes, saving the viewer’s time.


I’m gonna make some statements about how I understand the comparison between Nodejs and Elixir; please correct any mistakes I make.

Let’s assume for each of these examples, it’s running on a server with equal amounts of RAM, and 4 CPU cores.

Synchronous Nodejs
LongCalculations - Takes about 10 seconds to calculate and return an answer

  • 4 requests come in at same time.
  • Takes about 40 seconds before all requests are fulfilled, since Nodejs isn’t able to hand the CPU-bound work off nicely to other services.

Synchronous Elixir
LongCalculations - Takes about 10 seconds to calculate and return an answer

  • 4 requests come in at same time.
  • Takes about 40 seconds before all requests are fulfilled, since the developer didn’t write the Elixir code to spawn new processes to run LongCalculations in

Async Nodejs
LongCalculations - A Nodejs async function that makes an API request to a highly performant, scaled MathAPI service, and therefore doesn’t do any of the work itself (takes about 2 seconds)

  • 4 requests come in at same time.
  • Takes ~3 seconds total to get responses to all 4 requests since Nodejs was able to hand off the work to the MathAPI, and mostly only had to juggle the 4 IO operations

Concurrent Elixir
LongCalculations - Takes about 10 seconds to calculate and return an answer

  • 4 requests come in at same time.
  • Takes about 10 seconds before all requests are fulfilled, since the developer spawns a new process each time LongCalculations needs to be executed, so the work is scheduled and split between the 4 cores. 40 seconds’ worth of work split across 4 cores at the same time = 10 seconds.
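
This “one process per request” scenario can be sketched with Task. The names here are mine; `long_calculation/1` stands in for the hypothetical 10-second job, shortened so it runs quickly:

```elixir
# Sketch of one-process-per-request concurrency using Task.
defmodule MathWork do
  # Stand-in for the hypothetical 10-second CPU-bound calculation.
  def long_calculation(n) do
    Enum.reduce(1..1_000_000, 0, fn i, acc -> acc + i * n end)
  end
end

# Four "requests" arrive at once; spawn one process per request.
tasks = for n <- 1..4, do: Task.async(fn -> MathWork.long_calculation(n) end)

# The four processes run in parallel across the available cores,
# so total wall time is roughly that of a single calculation.
results = Task.await_many(tasks, :infinity)
IO.inspect(results)
```

In a real Phoenix/Plug application you rarely write even this much, because the web server already spawns one process per incoming request.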

Async Elixir?
IO operations in Elixir: spawning new processes vs. NOT spawning new processes? How would you do that one? I’m confused about whether it’s necessary to spawn processes when you’re doing IO / web-service API operations, since you don’t know how long they’ll take and therefore need to “schedule” them away, or whether Elixir somehow knows to do that behind the scenes and doesn’t block other requests coming in.

I’m going to try to give some comments and an explanation. I was thinking of drawing some diagrams (which I started), but they turned out not so good, so you’ll get a wall of text instead :smiley:

Synchronous Elixir

All the work in Elixir is done inside a process, and everything in a particular process is serialized. So if a process is sent 4 requests to do a long-running calculation concurrently, they will be placed in a queue and processed one after another.

The total time would be 40 seconds and it would likely be done on one of the CPUs.

I should comment that in the case of a web server you usually don’t “forget” to spawn a process, as the web server generally spawns a separate process per request, meaning the concurrency is already taken care of. It is more common that a single process is introduced that becomes a serialized bottleneck: for example, a GenServer that does your expensive calculation and that every request calls.
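
A minimal sketch of such an accidental bottleneck (module and function names are hypothetical). Every caller funnels through one GenServer, so the calculations run one at a time even though each request has its own process:

```elixir
# A single GenServer serializes all callers: even if many request
# processes call it concurrently, handle_call runs one message at a time.
defmodule Bottleneck do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  def init(:ok), do: {:ok, nil}

  # Public API used by every request process.
  def calculate(n), do: GenServer.call(__MODULE__, {:calculate, n}, :infinity)

  # All work happens inside this one process, one call at a time.
  def handle_call({:calculate, n}, _from, state) do
    Process.sleep(100)            # stand-in for the expensive calculation
    {:reply, n * n, state}
  end
end

{:ok, _pid} = Bottleneck.start_link([])

# Four concurrent "requests" all queue up behind the single server:
# total time is ~4 x 100 ms, not ~100 ms.
{micros, results} =
  :timer.tc(fn ->
    1..4
    |> Enum.map(fn n -> Task.async(fn -> Bottleneck.calculate(n) end) end)
    |> Task.await_many()
  end)

IO.inspect(results)
IO.puts("took ~#{div(micros, 1000)} ms")
```

The fix is usually to move the expensive work out of the shared server and into the per-request processes (or a pool).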

Concurrent Elixir

In BEAM you spawn a process to get concurrency, and again here you are correct: you spawn one process per request, each doing the long calculation.

Async Elixir

Async in Elixir is done using processes, just like in the concurrent example. Whenever you want to do something that shouldn’t block the sequential flow of execution, you spawn a process to handle that part for you.

You can easily use the MathAPI in Elixir as well, by doing the same thing as in your “concurrent” example: you spawn a process which calls the MathAPI, waits for the response, and then sends the result to the calling process.

The big difference is that in Node you use callbacks (or promises, or async/await) for this kind of work: “when you are done with this, please call this function.”

In Elixir you use message passing to send messages between the different processes. So it’s more like: “once you have called the MathAPI and have a result, please send me the result.”
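
Here is what that looks like as code. `math_api_call` is a hypothetical stand-in for a real HTTP request to the MathAPI:

```elixir
# Message-passing version of "call the MathAPI and send me the result".
# math_api_call is a hypothetical stand-in for a real HTTP request.
math_api_call = fn n ->
  Process.sleep(50)   # pretend network latency
  n * 2
end

caller = self()

# Spawn a process that does the slow call and messages the result back.
spawn(fn ->
  result = math_api_call.(21)
  send(caller, {:math_api_result, result})
end)

# The caller is free to do other work here, then collects the reply.
receive do
  {:math_api_result, value} -> IO.puts("MathAPI returned #{value}")
after
  5_000 -> IO.puts("MathAPI timed out")
end
```

The `receive` block plays the role that the callback plays in Node: it runs whenever the result message shows up in the mailbox.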

This depends a bit on your use case. As mentioned above, unless you want to synchronously wait for something in a single process, you need to delegate it to a separate process.

This doesn’t mean that you always need to do this, though. In the web API sense, the web server will most likely spawn a separate process per request. Because each request already runs in its own process, it is perfectly fine to just do everything sequentially from there, even calling out to a separate service.

The general advice is: model your system to have 1 process per concurrent activity. It takes some experience to find the right sweet spot between spawning too few processes (creating serialized bottlenecks) and too many processes (generating too much overhead).

Another thing to be aware of is that while the BEAM platform is great at concurrency, the world it communicates with may not be.

This means that if you communicate with the file system, databases, or web APIs, you sometimes have to serialize access again so as not to overwhelm the external system with concurrent requests.

For example, if you have a mechanical disk you have a sequential bottleneck. Here it might make sense to interface with it through a single process, since the disk can only do one operation at a time anyway.
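
A bare-bones sketch of that pattern (names hypothetical): one process “owns” the resource, and its mailbox acts as the serialization queue, no locks needed:

```elixir
# One process "owns" the disk; everyone else sends it messages,
# so access to the sequential resource is automatically serialized.
defmodule DiskOwner do
  def start do
    spawn(fn -> loop([]) end)
  end

  # The mailbox is the queue: writes are handled one at a time,
  # in arrival order.
  defp loop(log) do
    receive do
      {:write, data, from} ->
        # A real implementation would call File.write!/2 here.
        send(from, :written)
        loop([data | log])

      {:dump, from} ->
        send(from, {:log, Enum.reverse(log)})
        loop(log)
    end
  end
end

owner = DiskOwner.start()
me = self()

for data <- ["a", "b", "c"], do: send(owner, {:write, data, me})

for _ <- 1..3 do
  receive do
    :written -> :ok
  end
end

send(owner, {:dump, me})

receive do
  {:log, entries} -> IO.inspect(entries)
end
```

In production code you’d reach for a GenServer (or a pool like poolboy) instead of a hand-rolled receive loop, but the serialization principle is the same.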


To utilize the scheduling in elixir you must spawn processes.

Generally Erlang has one scheduler thread (OS-level thread) per core, so your 4-core CPU will have 4 schedulers. Spawned processes are divided roughly equally between the scheduler threads.

Each process is given the same amount of CPU time (or reductions, with a few exceptions), meaning that latency is going to be really stable across the work to be done, as no process is starved.
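
You can inspect the scheduler setup on your own machine:

```elixir
# Each scheduler is an OS thread, normally one per logical core.
IO.puts("logical cores:     #{System.schedulers()}")
IO.puts("schedulers online: #{System.schedulers_online()}")
```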

Feel free to ask for clarification if there is something that doesn’t make sense.


To summarize in a basic sense, based on your explanation, correct me if it’s wrong =)

In Nodejs, if you have a lot of async-type work, you’ll be fine using callbacks etc.

In Nodejs, if you have a lot of sync (CPU-bound) work, you’ll probably have to spin up more servers to handle the load, since that kind of work blocks the event loop.

In Elixir, if you have a lot of async-type work, you can spawn new processes to handle those requests if the library you’re using isn’t already doing that for you.

In Elixir, if you have a lot of sync type work, you can spawn new processes to make that type of work non blocking.


Lots of problems in Node.js are solved with callbacks. In BEAM languages I see asynchronous messaging (e.g. GenServer.cast/2) as the “equivalent” approach.

  • Rather than providing a function for later execution, a process sends a message to another pre-existing process “to get it done” (that process could very well have a pool of other processes working under it).
  • The process doesn’t have to wait for a response message (there may be none); it can go about its business doing other things until it has time. The next time it checks its mailbox the response may or may not be there. If the response message is there, the process can continue by processing the payload. In essence, the response message triggers the “callback” as soon as the process inspects its mailbox.
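
A small sketch of that pattern (module and function names are mine): the caller fires off a cast, keeps going, and later picks the reply out of its mailbox like any other message:

```elixir
# Fire-and-forget with GenServer.cast/2: the caller is not blocked,
# and the (optional) reply arrives later as an ordinary message.
defmodule Worker do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  def init(:ok), do: {:ok, nil}

  # Asynchronous request: returns :ok immediately.
  def work(n, reply_to), do: GenServer.cast(__MODULE__, {:work, n, reply_to})

  def handle_cast({:work, n, reply_to}, state) do
    send(reply_to, {:done, n * 10})   # the "callback" is just a message
    {:noreply, state}
  end
end

{:ok, _pid} = Worker.start_link([])

:ok = Worker.work(4, self())   # returns immediately; caller keeps going

# ...later, the caller checks its mailbox for the result:
receive do
  {:done, result} -> IO.puts("worker finished with #{result}")
end
```

Compared with a Node callback, the inversion of control is gone: the caller decides when (and whether) to look at the reply.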