I'd say this benchmark is borderline useless. There are three test cases which are not doing anything at all except a minimal amount of routing.
They test how fast the frameworks can read/write from/to a socket. It also doesn't specify which headers need to be returned, which makes the comparison even more unfair. For example, are Date headers included? They can be quite expensive to calculate. The number of headers returned also changes the size of the response, which in a test like this may have quite an impact on the result, as some frameworks might have to return double the amount of data depending on which headers they include.
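To make the header-size point concrete, here is a minimal sketch (the body and the "full" header set are hypothetical, not taken from any framework in the benchmark) showing how much extra data a framework emitting Date/Server/Content-Type headers returns per response compared to a bare-minimum one:

```python
from email.utils import formatdate

# The same 13-byte body with two hypothetical header sets. Frameworks that
# emit more headers return more bytes per response, which skews a raw
# read/write-from-a-socket comparison like this benchmark.
body = b"Hello, world!"

minimal = b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\n" + body

full = ("HTTP/1.1 200 OK\r\n"
        "Content-Length: 13\r\n"
        f"Date: {formatdate(usegmt=True)}\r\n"  # Date must be recomputed per response
        "Server: example\r\n"
        "Content-Type: text/plain; charset=utf-8\r\n"
        "\r\n").encode() + body

print(len(minimal), len(full))
```

On a tiny body like this, the headers can easily double the bytes on the wire, so two frameworks "serving the same response" are not actually sending the same amount of data.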
The client is dubious as well and extremely simplistic. Something like wrk2 should be used instead, which calculates percentiles and takes coordinated omission into account.
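A quick sketch of why a percentile-reporting client matters (the latency samples are made up for illustration): an average hides the tail completely, which is exactly what a simplistic client ends up reporting.

```python
# Hypothetical latency samples in milliseconds: mostly fast, two slow outliers.
latencies = [5] * 98 + [500, 1000]

mean = sum(latencies) / len(latencies)   # skewed upward, yet still looks "fine"
ranked = sorted(latencies)
p50 = ranked[len(ranked) // 2]           # median
p99 = ranked[int(len(ranked) * 0.99)]    # 99th percentile exposes the tail

print(mean, p50, p99)
```

The mean (19.9 ms) suggests nothing is wrong, the median says everything is fast, and only the 99th percentile reveals the 1-second stalls that real users would actually notice.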
It also doesn't specify hardware specifications for the client or server; the client is likely run on the same machine as the server, concurrency is low, and the number of requests is way too low. The test should run for a much longer period of time and at various concurrency levels.
The TechEmpower web benchmarks are perhaps better, but even they are artificial, and the tests don't run long enough for things like memory pressure and garbage collection to come into play. Many of the entries in those tests have also been optimized for exactly what is being measured, which is usually a bad idea if you want results that can be compared in the general case.
Exactly what @cmkarlsson said. I've yet to see a web benchmark test anything other than specific situations, in which mostly bare-metal, no-fault-tolerance setups will utterly shine. Phoenix will, well, still blow away most things regardless of the test, short of the bare-metal socket pushers (which I've written my share of in C++), but its fault tolerance is unmatched by anything else out there. So if you want a high success rate, high speed, and high reliability, I've not seen any benchmark test that yet.
This is the same problem as with all benchmarking: what are you measuring, and what is really, really important to you? The fastest web server would be one that just returns some static lines no matter what you send; it would be really fast and beat anything, but in most cases be totally useless. So the question is: what are you interested in? Is it raw throughput? Or maybe massive concurrency? Or maybe low latency? Or maybe a very versatile system? Or maybe high reliability? Or maybe something else?
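That "fastest possible, totally useless" server can be sketched in a few lines: it never parses the request at all and always replies with the same static bytes. This is a toy for illustration, not something any of the benchmarked frameworks actually do:

```python
import socket
import threading

# A static HTTP response: whatever the client sends, this is what comes back.
RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"

def serve_once(server):
    conn, _addr = server.accept()
    conn.recv(1024)        # read (and completely ignore) the request
    conn.sendall(RESPONSE) # no routing, no headers to compute, no parsing
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /anything HTTP/1.1\r\n\r\n")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())
```

It would top a benchmark like the one discussed here while being unusable as a real web server, which is exactly the point: a benchmark that this thing wins is not measuring anything you care about.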
Unfortunately, building a system usually entails making trade-offs. TANSTAAFL.
Another important issue is how you actually measure these things. Is your way of measuring interesting and relevant for what you want/need? Do you know what you need?
I will stop here but benchmarking is a fascinating field.
Probably the best one I saw recently put Elixir, Python, and Go through the wringer of different tests, both with and without a database. The really staggering thing about it was watching the request times in the other languages be all over the place versus the extremely tight distribution in Elixir.
Well, while I agree with everything said, I took some time to make it work for Phoenix. I am really interested in discovering how well Phoenix would do on this benchmark.
just remember to stay on the good side of the @chrismccord meme
the plug benchmark can be improved ~8x on my machine (see post below for the explanation: some macOS IPv6 localhost lookup mess), from 40 sec to 5 sec, whereas express takes 17 sec:
(the fans were going off on the express benchmark and not on the plug one, which made me suspicious!)
lol thx, figured out why I was seeing 8x improvements…
my mac looks up "localhost" on two IPv6 interfaces first (both of which fail) before going to 127.0.0.1, so that's why it was extra sensitive to increasing max_keepalive and seeing that massive boost.
I've rerun the benchmark directly against 127.0.0.1, and now the improvement from increasing max_keepalive is only ~10-15% on my machine.
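You can check what your own machine does with "localhost" with a quick sketch (port 3000 here just matches what the benchmark uses): if IPv6 addresses come back first, every fresh connection pays for failed attempts before reaching 127.0.0.1, which is why keep-alive masked so much of the cost.

```python
import socket

# List the addresses "localhost" resolves to, in the order connections
# would try them. On some macOS setups IPv6 entries come first.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "localhost", 3000, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])

# Benchmarking against the explicit address skips name resolution entirely.
explicit = socket.getaddrinfo("127.0.0.1", 3000, proto=socket.IPPROTO_TCP)[0][4][0]
print("explicit:", explicit)
```

If the first lines printed are IPv6, that machine will show the same "localhost is slow, 127.0.0.1 is fast" skew described above.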
Distillery seems a bit overkill, as you could run in local prod mode, but it is definitely the "correct" way to do it. Overall it looks good to me on an initial look.
Heh, noticed that was there when I tried building it just now. ^.^
I'm compiling the Elixir (your branch) and Rust ones (Rust had some of the fastest entries, and I happen to have it installed). I'm tempted to PR an OCaml server; it will not outperform Rust by a long shot, but it should still be impressive. Maybe a Cowboy server too if they want really low-level (considering he just wants the fastest response, which is a crappy test anyway). I ran them, and got, well, errors, because Phoenix is not on port 3000.
09:27:11.747 [error] Could not find static manifest at "/home/overminddl1/tmp/which_is_the_fastest/elixir/phoenix/_build/prod/rel/my_phoenix/lib/my_phoenix-0.0.1/priv/static/manifest.json". Run "mix phoenix.digest" after building your static files or remove the configuration from "config/prod.exs".
@kelvinst Make sure that both phoenix and plug are on port 3000 since that is what the benchmark requires.
EDIT: Looks like Plug was already fine at port 3000; Phoenix is set to 4000 or 4001 or something, which the benchmarker doesn't seem to set?