I’ve seen various folks claim that Phoenix can handle tens of thousands of simultaneous HTTP sessions and a million or so TCP connections.
What would be the typical hardware setup needed to support that level of network activity?
I just happen to have a dual 6-core CPU server with 64 GB of memory. Would that make a decent server for a moderately busy website (e.g. a thousand simultaneous users)?
You can run a thousand simultaneous users on a Raspberry Pi. You’ll be fine!
Joe Armstrong famously observed that a modern Raspberry Pi has roughly the computing power of a Cray supercomputer from the era when Erlang was invented.
My anecdote: ~5000 connections on 1 CPU and 1 GB of RAM (the default pod size in our k8s cluster at work), while also serving requests to and from those connections.
I haven’t tried to stress it out because I can add more pods really easily.
Your spare machine will not face any issues here.
It’s worth noting that the OS you run on may have an artificial cap on the number of connections it’ll accept. On Linux systems you can run ulimit -n to see what that number is. Historically it’s been 1024 for both the soft and hard limit, but newer Linux distributions typically use 1024 for the soft limit and 4096 for the hard limit.
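As a quick sanity check, this is roughly what inspecting and raising those limits looks like from a shell (a minimal sketch; the exact numbers depend on your distro and session):

```shell
# Show the current shell's open-file limits (each TCP socket uses one fd).
ulimit -Sn   # soft limit: the enforced cap, often 1024
ulimit -Hn   # hard limit: the ceiling the soft limit can be raised to, often 4096

# Raise the soft limit up to the hard limit, for this session only:
ulimit -n "$(ulimit -Hn)"
ulimit -Sn   # now matches the hard limit
```

Persistent changes are normally made elsewhere, e.g. /etc/security/limits.conf or LimitNOFILE= in a systemd unit, rather than in a shell.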
I may be wrong here, but I think the ulimit applies to clients opening connections, not to the server accepting them. I did a recreation of the “Road to Two Million…” blog post and didn’t have to tweak the server’s ulimit. I have also never done this on a production machine. The instructions for the load test I did can be found here; the ulimit setting is in the client setup.
Edit: I was wrong here. Ran an experiment to confirm as well.
I’m probably mistaken then. I assumed it was for any connection sent or received.
I’m going to do more research into this; it might only matter in certain situations. I’m hoping someone comes in with the definitive answer, but I’ll try to find time to dig into it further.
I don’t think it affects the overall performance of Elixir applications. You can get very good utilization per server; small snags will come up occasionally, but you’ll iron them out quickly.
ulimit applies to both clients and servers. It limits the number of file descriptors a shell and its processes can hold open, and every TCP socket consumes one, so it must be correctly configured on both the server and the client.
In addition, there is another problem when benchmarking at this scale: only about 64k connections (minus some reserved ports) can be established from a single client IP to a single server IP and port, because the source port is a 16-bit number. Even if the server can hold 2 million connections, each client machine can only open ~64k of them, which means you need a number of client machines (32 or so) to reach 2 million.
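The arithmetic behind that client count can be sketched in shell (the ~64k figure is the rough per-client ceiling used above; real ephemeral port ranges are often narrower):

```shell
# Each (client IP -> server IP:port) pair is bounded by the 16-bit source
# port space: at most ~64k concurrent connections per client machine.
ports_per_client=64000        # rough per-client ceiling
target=2000000                # 2 million total connections

# Ceiling division: how many client machines are needed.
clients=$(( (target + ports_per_client - 1) / ports_per_client ))
echo "$clients"               # prints 32
```

With a narrower default ephemeral range (check sysctl net.ipv4.ip_local_port_range on Linux) the per-client number drops and the machine count rises accordingly.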
Thanks @cmkarlsson and @blatyo, I was wrong here. I had inadvertently set the ulimit in my load-test image and didn’t add it to the steps (so I forgot I’d done so). I checked my production instances and the ulimit is simply set very high by default, which is why I’ve never had to tweak it.
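One way to check what a running process actually got (rather than your shell’s own limit) is /proc on Linux. A minimal sketch, using the current shell’s PID as a stand-in for your server’s BEAM PID:

```shell
# Print the effective open-files limit of a live process (Linux only).
pid=$$                         # substitute your server process's PID here
grep 'Max open files' "/proc/$pid/limits"
```

This is how you can confirm whether your hosting image already raises the limit for the service, independent of any interactive shell settings.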
When I run with a ulimit of 1024, the client is capped at the ulimit; with a ulimit of 400000 (or unlimited, I just had it set to 400k), it tops out at roughly 64k. This demonstrates the ~64k (it varies a bit) outgoing connection limit that @cmkarlsson brings up as well. That limit is only on the clients, so it only affects a load test.
You’d be able to handle thousands of requests a second, assuming you’re not talking to an external database too much.