Elixir on AWS Lambda

I did a few experiments with running Elixir on Lambda via exrm releases, because I wanted to avoid writing .js as much as possible. Here is what I found.

  1. Invocation time is slow with 128MB of RAM; make sure you have enough RAM, since RAM and CPU are allocated proportionally. At 128MB, execution time is ~2-4 seconds.
  2. Permissions are tricky due to .erlang.cookie and the deployment model. I was only able to get one piece working by setting the cookie to nocookie. Be aware that one user puts your files into the container, and your invocation user may be random.
  3. exrm nicely bundles up libs and dependencies for a basic app. No mucking around with LD_LIBRARY_PATH was needed.
  4. Deploying less to /var/task (where your zipped files go by default) is the best way to manage permissions. You can't change these files after they are deployed to this root, but you can deploy a dir with 777 permissions and then chmod it later.
  5. You have to set a HOME env var for each possible invoker or Erlang will complain. This is where .erlang.cookie ends up living, and you never know who will invoke. Since .erlang.cookie needs permissions like 700 or 500, your processes will otherwise crash with file access problems.

Wondering if others are interested in this.


Interesting stuff! AWS Lambda is news to me. I am interested now. Will look into whether it could work for us :slight_smile:

1 Like

Hmm, interesting. So you're executing an external Elixir process through Node's child process module?

I am wondering how the speed, performance, monitoring, etc. compare to a pure Node.js application.
If you don't like JavaScript you can write it in PureScript: http://kofno.github.io/2015/10/11/aws-lambda-purescript.html :slight_smile:

1 Like

I have a Phoenix server running and I'm quite pleased with the results. Base invocation time for starting Elixir via escript is around 180ms, but since you can now keep something around (although frozen) for 5 minutes, invoking a server on Lambda makes much more sense. This saves you a ton of time and expense on Lambda.

The client/server below takes between 11 and 20 ms to run with a 1024MB RAM Lambda.

  • 512MB RAM seems to be a sweet spot for an app that does nothing; it performed the same as 1024MB.
  • 256MB varied wildly between 160 and 200 ms.
  • 128MB RAM results in ~230 ms, which frankly isn't bad compared with native Node.


def test(conn, _params) do
  json(conn, %{worked: true, conn: inspect(conn)})
end


curl localhost:9080/test

I'll test with a native Node.js HTTP request in a few. Should be faster…


To add some context here, this means you can run ~100,000 invocations, i.e. Phoenix requests, for around $0.04 per month in a highly available environment where you can forget about load balancers, patches, ssh keys, etc.!


Yes, I totally agree, the future is the Lambda architecture :slight_smile:
There are also:

But I don't know if you can run Elixir there.

1 Like

Maybe someone can explain to me because I don’t get it.

Why use AWS Lambda when Elixir can just start a supervised task and perform your work for you? What kind of work would AWS Lambda be better suited for, where a lightweight process doesn't fit better?


I suppose the biggest benefit for me is that the server goes away and I can continue to use Elixir.

Let's say I want to process a file. Old way: I set up a server and let it run forever waiting for some request. New way: someone dumps a file in an S3 bucket, which generates an event that starts my Lambda function and processes the file. I've just removed all the muck associated with maintaining a server (or 2 or 3 if you need HA) for a trivial task. I really don't want to think about certs, load balancers, patches, and security 0-days, not to mention the expense of having 2 of everything. Lambda is essentially devops in a box: scaling and HA are all taken care of for literally pennies.

1 Like

What AWS Lambda does for you is not an easy task :). You can run the same code on one machine or on 1000 machines, depending on load, and as a developer you don't care where it runs or how many machines you will need.
The best part: you only pay for the work AWS Lambda does for you, not for the machines it uses.
The drawback: you need to keep state somewhere else, and there is also a limit on how long a computation can run (I believe it is 5 minutes).


Just to extend the options you named, there is also IronWorker, which works quite similarly to the cloud functions mentioned:


With IronWorker you have more control over the environment your code runs in, via Docker.

There are some options :slight_smile:

Example app in Java


Any news about this?


I have a prototype up here: https://github.com/jschoch/exlam

I’m working on another project to test it with and hope to make some improvements after I get something else working.


I don't see Erlang mentioned as a supported language on the AWS Lambda docs page. Please explain!
What is the workaround, or is Erlang now supported?

1 Like

First, I am not convinced that Elixir/Erlang is a good fit for the short-running processes AWS Lambda is built around.
There are several workarounds for running unsupported languages:

You can pack Elixir in a Docker container

You can run any executable

I think the Go community uses a Node.js wrapper hack


Can you elaborate on the downsides? What makes Elixir not well suited for Lambda? If you boot the Erlang VM in the Node.js module initialization, it is plenty fast and you can avoid the VM boot tax. I can get < 10ms response times with my prototype.

The upsides are that you can use Elixir's great syntax, and you don't have to pay for a server waiting around for stuff to do for spiky workloads or very low transaction rates. The HA of Lambda is a big win, not to mention avoiding all the hassle of updating and patching another set of servers' operating systems.


Yes, I mean you need to spin up the Erlang VM. OK, but if booting the Erlang VM in the Node.js module initialization works fast…


Host AWS Lambda yourself :slight_smile:


It would be cool to have Elixir working on Lambda because of the cost savings, but Erlang's focus on long-running, highly available systems does not make it the best fit for this architecture.