Hi, what do you think about these benchmarks?
I don't trust it much. He ran wrk on the same system as the servers, so wrk's own load may have influenced the servers under test.
Also, I don't think a Mac is that comparable; he should try to get his hands on real server hardware.
Especially this. I've done testing with something similar that showed significant performance differences between a Mac laptop and a Linux server. Beyond the rankings being entirely reshuffled (except Ruby, which was always the worst; even PHP flew past Ruby like it was standing still, whyTF do people use ruby?!), it had crystal as the fastest on the Mac but way down at around 5th place on a very high-end Linux server.
- He ran with very low concurrency (100), and used wrk instead of wrk2. wrk2 is generally considered better because it takes coordinated omission into account.
- Someone else mentioned running the client on the same machine as the server. This is of course a big no-no.
- Also, by using wrk you are “overloading” the servers, i.e. sending them as much as you can, which is generally not what you want. This specifically measures only what happens when a server gets too much load (which might be interesting and important in itself, but it says nothing about how the server performs under normal load).
- Firstly, they do send back different amounts of headers, as indicated by the avg. response size. This will have some effect on a benchmark where you are basically only reading from/writing to a socket.
I don’t think that is the problem here though.
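To make the wrk vs. wrk2 point concrete, here is a minimal simulation of coordinated omission (all numbers are made up for illustration, this is not a real benchmark): the server normally answers in 1 ms but stalls once for a full second. A closed-loop client like wrk waits for each response before sending the next request, so the stall shows up as a single slow sample. An open-loop client like wrk2 schedules requests at a fixed rate and measures latency from the *intended* send time, so every request queued behind the stall counts as slow too.

```python
def service_times(n, stall_at, stall=1.0, normal=0.001):
    """Per-request server-side service times: one big stall, rest fast."""
    return [stall if i == stall_at else normal for i in range(n)]

def closed_loop(times):
    # One request in flight at a time: observed latency == service time,
    # so the stall contributes exactly one slow sample.
    return sorted(times)

def open_loop(times, interval=0.002):
    # Requests are scheduled every `interval` seconds; latency is measured
    # from the scheduled send time, so queueing delay behind the stall
    # is included in every affected sample.
    lat, free_at = [], 0.0
    for i, st in enumerate(times):
        scheduled = i * interval
        start = max(scheduled, free_at)  # wait until the server is free
        free_at = start + st
        lat.append(free_at - scheduled)
    return sorted(lat)

def p99(sorted_lat):
    return sorted_lat[int(len(sorted_lat) * 0.99)]

n = 2000
times = service_times(n, stall_at=500)
print(f"closed-loop p99: {p99(closed_loop(times)) * 1000:.1f} ms")
print(f"open-loop   p99: {p99(open_loop(times)) * 1000:.1f} ms")
```

With these toy numbers the closed-loop p99 stays around 1 ms while the open-loop p99 ends up hundreds of times higher, even though the server behaved identically in both cases. That gap is exactly what wrk hides and wrk2 tries to expose.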
There is a well-known bottleneck in how erts does its I/O polling, which means a single server won't ever be able to exceed a certain number of requests per second.
There is ongoing work to correct this. It didn't make it into OTP 20 as planned because of regressions in other areas. Here is the pull request if someone wants to try it out (https://github.com/erlang/otp/pull/1552). In fact, the author has asked on the erlang mailing list for anyone to try it on their specific use case to see whether it makes things better or worse (http://erlang.org/pipermail/erlang-questions/2017-August/093154.html).
Last thing. The author says he was going to use the built-in HTTP server for benchmarking. This was not done for elixir/erlang, because the built-in one is httpd (the result would have been even worse though ;)).
As usual, I hope no one uses this to decide which server to use. In practice I have implemented web servers in a few different languages and used them in anger. erlang does really well; ruby/python/php are never even close, even if they “beat” erlang on these benchmarks. java has problems with latency. golang is usually faster, but not so much that it would influence my decision on what to use. I haven't used node in production, and I wouldn't call pony/nim/crystal production-ready.
There's a bit of an apples vs. oranges comparison happening here too. Plug is relatively lightweight, but cowboy is hardly the fastest HTTP server on the BEAM. For something as simple as shoving out as many "hello world"s as we can, I would expect elli to get about twice the throughput.
There is something a lot of these tests don't measure either: failure conditions, concurrency, slow requests (like building a report inline), multiple responses, multi-server session access, etc. The BEAM is wonderful for all of these; most others are not.
He answered, in case anybody wants to reply.
I’ve often wondered what it would look like if you triggered all of the techempower benchmarks at the same time and compared them.