Do people really run Phoenix servers without a load balancer in front?

I would love to read more about this.


The BEAM. He means the BEAM is too complex to be trusted to run as root, and I agree.


Makes sense. Just wanted to double-check, since he wrote VM and later BEAM.

Whoops, yes, that’s confusing. In this case I mean the Erlang VM. A very naive line count of the OTP code base for all .(c|h|erl|hrl) files, including comments, comes out at 2.6 million lines, vs HAProxy, which is ~0.3 million lines of C. Approximately 10x more code to audit. VMs are complicated!


There is a URL with more info there ;-). Also, How To Sandbox Processes With Systemd On Ubuntu 20.04 | DigitalOcean is a nice writeup. The actual capabilities(7) man page covers the details.
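To make that concrete, here is a rough sketch of the kind of hardening those references describe, as a systemd unit fragment. The service name and paths are made up for illustration; the directives themselves are the standard ones documented in systemd.exec(5) and capabilities(7):

```ini
# Hypothetical /etc/systemd/system/my_app.service fragment -- illustration only
[Service]
User=my_app
ExecStart=/opt/my_app/bin/my_app start

# Let the unprivileged user bind ports below 1024, and nothing else
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes

# General filesystem/device sandboxing
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
```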

Well, adding .erl and .hrl files to the “VMs are complicated” count makes little sense, as these files do not define the VM itself. The whole VM is contained in the erts directory of the OTP repo.

Actually, I prefer to use socket activation (so I do not need to care about capabilities for restricted ports at all) and to disable as many capabilities as I can. I even used ephemeral users for Erlang projects to reduce access to the system to a minimum. It ends up almost like a Docker container then.

Here is an example systemd.service file.
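A minimal sketch of what such a setup could look like, pairing a socket unit that owns the port with a service that runs under an ephemeral user. Unit names, paths, and the exact directive set here are illustrative assumptions, not the original file:

```ini
# my_app.socket -- systemd owns the listen socket, so the service
# needs no capability to bind a privileged port
[Socket]
ListenStream=443
NoDelay=true

[Install]
WantedBy=sockets.target
```

```ini
# my_app.service -- same unit name as the socket, so the open
# listen socket is passed to the service starting at fd 3
[Service]
ExecStart=/opt/my_app/bin/my_app start

# Ephemeral user/group, created only for the lifetime of the service
DynamicUser=yes

# Drop every capability and lock down the filesystem
CapabilityBoundingSet=
NoNewPrivileges=yes
ProtectSystem=strict
PrivateTmp=yes
```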


Do you have a full example of socket activation with Phoenix and everything? Thanks in advance.

I have a side project where I test a lot of stuff: a reimplementation of https://lobste.rs in Elixir with some additional features. It isn’t deployed anywhere nor tested with systemd directly, but I have tested it against Dolores, my implementation of the systemd socket activation protocol (Dolores is meant as a development reverse proxy for handling subdomains and TLS).

It is not really different from the plug example in the systemd repo:
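Very roughly, the receiving side looks something like the sketch below. This is not the actual example from that repo, just the general shape: it assumes the systemd Hex package’s :systemd.listen_fds/0 and that your HTTP server’s listen options accept an inherited file descriptor via the inet-style fd: option (the exact option shape depends on your Plug.Cowboy/Ranch version):

```elixir
defmodule MyApp.SocketActivation do
  @moduledoc """
  Hypothetical helper: use the listen socket systemd handed over
  (fd 3 and up), or fall back to binding a normal port in dev.
  """

  # `fallback_port` is used only when we were not socket-activated.
  def listen_opts(fallback_port) do
    case :systemd.listen_fds() do
      # Entries are either a raw fd or {fd, name} when FileDescriptorName= is set
      [first | _] ->
        fd = if is_tuple(first), do: elem(first, 0), else: first
        # Reuse the inherited socket instead of binding ourselves,
        # so the unit needs no CAP_NET_BIND_SERVICE at all.
        [port: 0, transport_options: [socket_opts: [fd: fd]]]

      [] ->
        [port: fallback_port]
    end
  end
end

# Usage sketch with Plug.Cowboy (illustrative):
#
#     Plug.Cowboy.http(MyAppWeb.Router, [], MyApp.SocketActivation.listen_opts(4000))
```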
