Serverless Elixir with Cloud Run min instances

Elixir is considered a bad match for serverless because the BEAM is relatively slow to start up. I think this just became less relevant with Cloud Run min instances.

One of the great things about serverless is its pay-for-what-you-use operating model that lets you scale a service down to 0. But for a certain class of applications, the not-so-great thing about serverless is that it scales down to 0, resulting in latency to process the first request when your application wakes back up again. This so-called “startup tax” is novel to serverless, since, as the name implies, there are no servers running if an application isn’t receiving traffic.

Today, we’re excited to announce minimum (“min”) instances for Cloud Run, our managed serverless compute platform. This important new feature can dramatically improve performance for your applications. And as a result, it makes it possible to run latency-sensitive applications on Cloud Run, so they too can benefit from a serverless compute platform.
With this feature, you can configure a minimum number of Cloud Run instances that are on standby and ready to serve traffic, so your service can start serving requests with minimal cold starts.

The way I’m reading it, just put your Elixir/Phoenix app inside a container, deploy to Cloud Run, and there you have it, serverless Elixir at low fixed cost + usage. Thoughts?
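As a sketch of that workflow, assuming you already have a Dockerfile for a Phoenix release and a configured gcloud project, it could look something like this (the service name, project ID, and region are placeholders):

```shell
# Build and push the container image for the Phoenix app
gcloud builds submit --tag gcr.io/PROJECT_ID/my-phoenix-app

# Deploy with one warm instance kept on standby,
# so the BEAM doesn't have to cold-start on the first request
gcloud run deploy my-phoenix-app \
  --image gcr.io/PROJECT_ID/my-phoenix-app \
  --region us-central1 \
  --min-instances 1 \
  --allow-unauthenticated
```

With `--min-instances 1` you pay a small fixed cost for the idle instance plus usage, instead of scaling to zero and eating the startup tax.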

Cowboy is enough “serverless” for me. I do not need an additional CGI-like layer on top of that. And in most cases where I would use Elixir, it is cheaper to just have a small VM running somewhere for $5-$10 instead of dealing with FaaS infrastructure and its costs.


This is a good improvement and I like the idea. Between Cloud Run min instances and Digital Ocean’s latest offerings, I am very tempted to start several hobby projects and host them all for literally a few bucks per month.

However, Cloud Run min instances don’t solve the general ineffectiveness of the BEAM VM under serverless conditions. I’d still much prefer writing my service in Rust, with the added pain that entails, over anything BEAM-related.

That being said, I am looking forward to OTP 24 and its first JIT performance gains, because I hear from José and others that they are going to be huge (100x in places). Maybe that would make serverless BEAM more competitive.

The quote about 100x is not about the JIT but about something else José is working on.
The improvements from the JIT are much more modest, but still impressive: between 100-300% for some real-world applications. I don’t think that will improve the start-up time though.

About Cloud Run: in the past it didn’t support websockets, so it wasn’t an option for me. I haven’t kept up with it, though, so if anyone knows whether that’s been solved I might take a look at it again.


I was misinformed. Apologies.


In the meantime, Cloud Run added support for websockets. See “Using WebSockets” in the Cloud Run documentation on Google Cloud.


On Cloud Run, session affinity isn’t available, so WebSockets requests can potentially end up at different container instances, due to built-in load balancing. You need to synchronize data between container instances to solve this problem.

Does Elixir/Phoenix have tools in the box to handle this restriction?

My understanding is that they will not break existing websocket connections and re-balance them; however, new connections will be load-balanced across instances. So as long as all your containers belong to the same OTP cluster with auto-joining, it should behave the same as a manually managed OTP cluster.
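As a sketch of what auto-joining plus cross-instance message delivery could look like, assuming the third-party libcluster library and that your deployment allows node-to-node Erlang distribution (which may need extra networking setup on Cloud Run) — all module names here are placeholders:

```elixir
# mix.exs — add the third-party libcluster dependency:
#   {:libcluster, "~> 3.3"}

defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Gossip is one of several libcluster strategies; whether it works
    # depends on your network (it relies on UDP gossip between nodes).
    topologies = [
      gossip: [strategy: Cluster.Strategy.Gossip]
    ]

    children = [
      # Automatically connects the BEAM nodes into one OTP cluster
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]},
      # Phoenix.PubSub broadcasts across all clustered nodes, so channel
      # messages reach WebSocket connections held by other instances
      {Phoenix.PubSub, name: MyApp.PubSub},
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

If direct node-to-node distribution isn’t feasible in your environment, the `phoenix_pubsub_redis` adapter is an alternative way to synchronize PubSub messages between instances through an external Redis.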