Using an old slow machine highlights impressive BEAM/Elixir/Phoenix fundamentals

I landed this morning with a few hours to kill but without my laptop handy. The only machine I could borrow to work on was a tiny little Lenovo 100S with an aged Atom processor and 2GB RAM, running Windows 10. Not expecting much, I downloaded the Erlang/Elixir install package and command-line Vim, and had everything installed in 10 minutes or so.

It all works! Sure enough, some things (full compilation and BEAM startup) are pretty slow, but I’m working on some Phoenix stuff and it’s … just fine. It’s a real testament to the thought put into the Elixir developer tools that this is all manageable without a fast machine or a fancy IDE or editor. The file watching works fine, and little enough has to be recompiled for most of my changes that I can get stuff done.

A couple of practical consequences: (1) I really wouldn’t dread working in Elixir if I couldn’t afford good tools. (2) Phoenix should be usable in nations and environments where the resources just don’t run to up-to-date hardware.

This is great, and truly impressive. One more occasion to thank everyone responsible.


Why do you think people run their non-super-loaded apps just fine on free hosting with 1/2 CPU cores and 128MB RAM (like Gigalixir)? :003:

The BEAM VM is super lightweight in this regard. Although I’d think longer-running servers that work a lot with databases should have at least 512MB, due to potentially big caches – prepared statements, memory-mapped files, et al.



The BEAM was designed for 1980s embedded hardware and has retained that efficiency. And with the JIT, it has gotten even better. I think the Erlang and Elixir ecosystem works for two impressive reasons:

  1. The fundamental model (actors, message passing) is great for programming and great for parallelism
  2. The engineering done on top of it is consistently excellent (design of abstractions in OTP and standard libraries, implementation of VM, implementation of standard libraries, tooling, documentation)

We’re benefiting here from the continually excellent implementation in point (2).
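Point (1) can be sketched in a few lines. This is just an illustrative example of the actor model – isolated processes that share nothing and communicate only by message passing – and the `Counter` module and its message shapes are made up for the sketch, not taken from any library:

```elixir
defmodule Counter do
  # Spawn a process that holds its own private state (no shared memory).
  def start(initial), do: spawn(fn -> loop(initial) end)

  defp loop(count) do
    # Each process waits on its mailbox and handles one message at a time,
    # so state updates need no locks.
    receive do
      {:increment, by} ->
        loop(count + by)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

pid = Counter.start(0)
send(pid, {:increment, 5})
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")
end
# prints "count is 5"
```

Because each counter is its own lightweight process, spawning thousands of them is cheap, and the scheduler runs them across all cores for free – the same property that keeps a Phoenix app responsive per-request.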


My thoughts exactly: a pile of accumulated good decisions. I also can’t help thinking that all of this is greatly helped by the Elixir community tending to coalesce around projects that become more-or-less standard for their role (Cowboy, Plug, Phoenix, etc.). This concentrates energies and, relatedly, makes it easy for newcomers to know where to direct their own efforts. I haven’t been around Elixir long enough to know exactly why this is so, but I have experienced other software ‘ecosystems’ where distinctly different cultures prevail, so it’s clearly not inevitable.


I think what standardisation there is is driven by community size and the gravitational pull of Phoenix.

Phoenix is by far the best web framework in the Erlang/Elixir ecosystem, so everyone uses it, and the parts it uses automatically get a lot of usage as a result. Cowboy isn’t the only Erlang web server, and my impression was that it was only marginally the most popular until Phoenix came along and adopted it (via Plug).

The other part of it is that it isn’t an enormous community. The JS world is massive enough that it can support several ‘competing’ development efforts - multiple client-side frameworks, multiple build tools, multiple libraries for any given job, etc. That’s not always possible in Elixir with its smaller group of people.


Andy giveth, Bill taketh away, and Joe and José claw it back! :slight_smile:


For people like me who don’t know what the reference for “Andy giveth, Bill taketh away” is: Andy and Bill's law - Wikipedia