BEAM theory question...Lambda-like deployment?

With PHP, one of the perks was being able to deploy a single file without fully redeploying everything and have it work. This is mainly due to each PHP request being isolated…much like processes on the BEAM are isolated with their own heaps, just in much smaller pieces.

AWS Lambda has the perk of letting people deploy tiny snippets as well, without dealing with the server infrastructure to go along with each one. Personally, I think both serve the same use case…most people just don’t want to admit anything good about PHP.

What I’m wondering though: between process isolation and hot deployments, would it be feasible to create a framework for deploying individual BEAM processes without the use of a full umbrella project, etc.? Say we wanted an Elixir process server that we could deploy individual processes to, with some means of detecting the new/updated process file, hot deploying it to the VM, and setting up some basic supervisor mechanism to start whatever it does?

If that were feasible, I could definitely see a situation where people are deploying little one-off, ten-line code snippets to a utility server and somewhat replacing the use cases for Lambda or raw PHP. Feasible? Useful? Thoughts?
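A minimal sketch of what such a “process server” might look like, assuming each snippet is a single `.ex` file defining one module with a `start_link/1`, supervised by a `DynamicSupervisor` (all names here are hypothetical, not an existing framework):

```elixir
defmodule SnippetServer do
  # Hypothetical sketch: compile one .ex file into the running VM and
  # start its module under a DynamicSupervisor. Assumes the file defines
  # a single module with start_link/1 that registers under its own name.

  def deploy(path, sup) do
    # Code.compile_file/1 loads the compiled module(s) into this node
    [{module, _bytecode} | _] = Code.compile_file(path)

    # If a previous version is running, stop it before restarting
    case Process.whereis(module) do
      nil -> :ok
      pid -> DynamicSupervisor.terminate_child(sup, pid)
    end

    DynamicSupervisor.start_child(sup, {module, []})
  end
end
```

Pair this with a file watcher and you have roughly the “detect, hot deploy, supervise” loop described above.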


I see no compelling reason for something like Lambda on the BEAM. I hear elastic scalability as a use case for Lambda, but the BEAM scales well enough on moderate hardware that I don’t see elastic scalability as a need for the vast majority of people. On the deployment side, we have ways to specify which applications are started, as well as which processes should be started in our supervision trees, so for the case of deploying only small “pieces” of one app to a separate server, I would simply configure the app to start only that part.

[quote=“brightball, post:1, topic:2810”]
without the use of a full umbrella project
[/quote]

I think in general folks won’t need to go finer-grained than this + config to add or remove workers from the supervision tree as needed.
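The config-driven approach could look something like this (module and app names are hypothetical): each node reads its worker list from config, so deploying a different “piece” is just a config change.

```elixir
# config/config.exs — choose which workers this particular node runs
# (:my_app, MyApp.Cache, and MyApp.Mailer are hypothetical names)
config :my_app, workers: [MyApp.Cache, MyApp.Mailer]

# Then, in the application's start/2 callback, start only that subset:
#
#   children = Application.get_env(:my_app, :workers, [])
#   Supervisor.start_link(children, strategy: :one_for_one)
```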


What @chrismccord said… But if you really do want to dig deeper, you can of course do something like Phoenix’s code reloading in dev mode, where you watch a set of directories for changes and recompile anything that gets updated.
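A rough sketch of that watch-and-recompile idea, assuming the `file_system` hex package (the same kind of watcher Phoenix’s live reload builds on); this is illustrative, not Phoenix’s actual reloader:

```elixir
defmodule HotWatcher do
  use GenServer

  # Sketch: watch a directory and recompile changed .ex files into the
  # running VM. Recompiling replaces the module's code; running processes
  # pick up the new version on their next fully-qualified call.
  # Assumes the file_system package is a dependency.

  def start_link(dir), do: GenServer.start_link(__MODULE__, dir)

  def init(dir) do
    {:ok, watcher} = FileSystem.start_link(dirs: [dir])
    FileSystem.subscribe(watcher)
    {:ok, watcher}
  end

  def handle_info({:file_event, _watcher, {path, _events}}, state) do
    if Path.extname(path) == ".ex", do: Code.compile_file(path)
    {:noreply, state}
  end
end
```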


I love the thinking behind this with trying to simplify small, granular deployments.

Currently, ibGib can model state declarations and evolutions, keeping track, in an event-sourcing kind of way, of the amalgamation of state structure and the instantiations of those structures. Each step of creating these resources produces a unique ib^gib URL, where the ib is like a “name” and the gib is a hash of the ib, data, and rel8ns, very similar to how things like IPFS work. So that’s what it can already do - with state.

Behavior of the kind you are describing, with dynamic, granular runtime compilation, is on the roadmap for the future. These units of behavior would also have unique ib^gib URLs - think of FP, but instead of local memory-addressed functions, you have URLs accessible in the cloud. I’ve known about the overall goal, but I’ve only been considering how to tackle it in the background while I’ve been developing the state aspect. I like your specific approach of thinking in terms of individual process/module deployments, and it seems a natural fit for what I am planning. The real difficulties come into play with authentication/authorization. It is far from a trivial problem to properly allocate resources for this, but it’s definitely a goal.

So FWIW, I personally think that what you’re talking about is not only feasible, it’s totally the future :rocket:


Related: Joe Armstrong’s favorite program, “the universal server” :slight_smile:
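For reference, Joe’s “universal server” is tiny: a process that waits to be told what to become. The original is Erlang; here is a faithful Elixir translation:

```elixir
defmodule Universal do
  # Joe Armstrong's universal server, translated from Erlang to Elixir:
  # the process does nothing until someone sends it a function to become.
  def server do
    receive do
      {:become, fun} -> fun.()
    end
  end
end

# Usage: spawn a universal server, then turn it into a specific server.
# pid = spawn(&Universal.server/0)
# send(pid, {:become, fn -> ...some server loop... end})
```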


I don’t know if Elixir/BEAM is a good match for AWS Lambda. The purpose of AWS Lambda is to automatically spin up container(s) when a function runs, and spin them down when not needed. So the best match would be something really simple with fast startup and shutdown.

I don’t know about replacing Lambda, but I do share the concern about making deployments easier. Platforms such as Heroku and containerization like Docker make deployment of other technologies so darn easy, but with Erlang and Elixir they strip away most of the niceties of OTP (no hot upgrades, no easy node clustering, no :observer, etc., not to mention the doable but not so straightforward runtime configuration via env vars). When I first dove into Elixir deployments there seemed to be so many concepts I had to learn (and unlearn).

I’m wondering: what would an Erlang VM/OTP-oriented PaaS look like (and would it be worth it)?

Concerning Docker and GUI applications like :observer, I found a solution; see I can use the observer with elixir running in a docker container.


Ah yes, I believe I have encountered your repo several times, Stefan, but there were so many things involved that I was kind of afraid to jump in :smile: If I understand correctly though, it only solves the dev environment, right? Not deployment?

I have never deployed anything, but maybe you can put a production version of your Elixir code in the volume of the web container and try? Maybe leave some things out of the Dockerfile if you don’t need them in production? Maybe experiment with a smaller Linux distro? I would of course be interested in what turns out to be possible if you try some things out.