Which is the fastest web framework? (Link/Repo and results in this Topic)

phoenix
benchmark

#1

Following https://github.com/tbrand/which_is_the_fastest

https://raw.githubusercontent.com/tbrand/which_is_the_fastest/master/imgs/result.png

Updated with Elixir^

Now there are frameworks in these languages:

Ruby (bundler)
Go
Crystal
Rust
node

I would love to see Phoenix framework in this benchmark


#2

I’d say this benchmark is borderline useless. There are three test cases which do nothing at all beyond a minimal amount of routing.

They test how fast the frameworks can read/write from/to a socket. It also doesn’t specify which headers need to be returned, which makes the comparison even more unfair. For example, are Date headers included? They can be quite expensive to compute. The number of headers returned also changes the size of the response, which in a test like this may have quite an impact on the result, as some frameworks might have to return double the amount of data depending on which headers they include.
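To make the header-size point concrete, here is a minimal sketch. The two responses are invented examples, not output from any of the benchmarked frameworks; they just compare the byte size of a tiny response with and without the Date/Server headers many frameworks emit by default:

```shell
# Two hypothetical minimal HTTP responses with identical 2-byte bodies;
# the second adds Date and Server headers.
minimal='HTTP/1.1 200 OK
Content-Length: 2

OK'
typical='HTTP/1.1 200 OK
Date: Mon, 27 Feb 2017 10:00:00 GMT
Server: Cowboy
Content-Length: 2

OK'

# Count the bytes of each response (tr strips the leading spaces
# that BSD wc prints):
m=$(printf '%s' "$minimal" | wc -c | tr -d ' ')
t=$(printf '%s' "$typical" | wc -c | tr -d ' ')
echo "minimal: $m bytes, typical: $t bytes"
```

For a 2-byte body the extra headers more than double the bytes on the wire, which is exactly the kind of skew that matters when the test payloads are this small.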

The client is dubious as well, and extremely simplistic. Something like wrk2 should be used instead, which reports percentiles and takes coordinated omission into account.
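For reference, a wrk2 run might look like the sketch below. The invocation is shown as a comment since wrk2 has to be installed separately; the rate, duration, and URL are placeholders, and the percentile table is made-up sample output, not a real measurement:

```shell
# Hypothetical wrk2 invocation; -R fixes the request rate, which is what
# lets wrk2 account for coordinated omission, and --latency prints a
# percentile breakdown (wrk2's binary is also named wrk):
#   wrk -t4 -c100 -d60s -R2000 --latency http://127.0.0.1:3000/
#
# A --latency percentile table looks roughly like this (made-up numbers):
sample_output='50.000%  1.2ms
90.000%  2.1ms
99.000%  4.7ms
99.999%  12.3ms'

# Pull out the 99th percentile, the figure most worth comparing:
p99=$(printf '%s\n' "$sample_output" | awk '$1 == "99.000%" { print $2 }')
echo "p99 latency: $p99"
```

Comparing p99 (or p99.9) between frameworks says far more than a single average, since a tight average can hide a very long tail.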

It also doesn’t specify hardware for the client and server; the client is likely run on the same machine as the server, concurrency is low, and the number of requests is way too low. The test should run for a much longer period of time and at various concurrency levels.

The TechEmpower web benchmarks are perhaps better, but even they are artificial, and the tests don’t run long enough for things like memory pressure and garbage collection to be taken into account. Many entries in those tests have also been optimized for exactly what is being measured, which is usually a bad idea if you want a result that can be compared in the general case.


#3

Exactly what @cmkarlsson said. I’ve yet to see a web benchmark test anything other than very specific situations, in which bare-metal, no-fault-tolerance setups will utterly shine. Phoenix will still blow away most things regardless of the tests, short of the bare-metal socket pushers (of which I’ve written my share in C++), but its fault tolerance is unmatched by anything else out there. If you want a high success rate, high speed, and high reliability together, I’ve not seen any benchmark test that yet.


#4

This is the same problem as with all benchmarking: what are you measuring, and what is really, really important to you? The fastest web server would be one that just returns some static lines whatever you send; it would be really fast and beat anything, but in most cases be totally useless. So the question is: what are you interested in? Raw throughput? Massive concurrency? Low latency? A very versatile system? High reliability? Or maybe …?

Unfortunately, building a system usually entails making trade-offs. TANSTAAFL.

Another important issue is how you actually measure these things. Is your way of measuring interesting and relevant for what you want/need? Do you know what you need?

I will stop here but benchmarking is a fascinating field. :wink:


#5

Probably the best one that I saw recently compared Elixir, Python, and Go through a wringer of different tests, both with and without a database. The really staggering thing about it was watching the variability of request times in the other languages be so all over the place versus the extremely tight distribution in Elixir.


#6

Well, while I agree with everything said, I took some time to make it work for Phoenix. I am really interested in discovering how well Phoenix would do in this benchmark. :wink:


#7

just remember to stay on the good side of the @chrismccord meme :slight_smile:

the plug benchmark can be improved ~8x on my machine (see post below for an explanation: some macOS IPv6 localhost-lookup mess), from 40 sec to 5 sec; express is 17 sec:
(the fans were going off on the express benchmark and not on the plug one, which made me suspicious!)

In application.ex, add protocol_options: [max_keepalive: 5_000_000] to the Cowboy options:

    Plug.Adapters.Cowboy.child_spec(:http, MyPlug.Router, [],
      [port: 3000, protocol_options: [max_keepalive: 5_000_000]])

so please add that to the phoenix one as well! read more here: http://theerlangelist.com/article/phoenix_latency

then the meme :heart_eyes: (honestly it gains little here… but just to make sure):

add {:distillery, "~> 1.0"} to mix.exs, run mix deps.get, and do a mix release.init.
Then change server_elixir_plug to:

    cd elixir/plug; _build/prod/rel/my_plug/bin/my_plug foreground

and the makefile to:

    cd elixir/plug; mix deps.get --force; MIX_ENV=prod mix release --no-tar

and in benchmarker.cr:

    elsif @target.name == "plug"
      path = File.expand_path("../../../elixir/plug/_build/prod/rel/my_plug/bin/my_plug", __FILE__)
      Process.run("bash #{path} stop", shell: true)

look forward to seeing the results! :100:


#8

Your post was too awesome not to code-up, so I added some fence tags. ^.^


#9

lol thx, figured out why I was seeing 8x improvements…

my Mac looks up ‘localhost’ on two IPv6 interfaces first (both of which fail) before going to 127.0.0.1, which is why the benchmark was extra sensitive to increasing max_keepalive and saw that massive boost.

I’ve rerun the benchmark directly against 127.0.0.1, and now the improvement from increasing max_keepalive is only ~10-15% on my machine.
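For anyone who wants to check their own machine, this sketch inspects the hosts-file entries behind that behaviour. On macOS there are usually IPv6 entries for localhost (::1 and, on older versions, fe80::1%lo0) alongside 127.0.0.1, and the resolver may try those first even when the server only listens on IPv4:

```shell
# List every /etc/hosts entry for 'localhost'; if IPv6 entries are present
# and get tried first, each fresh connection to a server bound only to
# 127.0.0.1 pays for failed IPv6 connect attempts.
grep -w 'localhost' /etc/hosts

# Benchmarking against the literal address sidesteps name resolution
# entirely (URL and port are placeholders):
#   wrk -t4 -c100 -d30s http://127.0.0.1:3000/
```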

oh well, synthetic benchmarks[sic]


#10

Thank you for the suggestions! I will PR them soon!


#11

PR done: https://github.com/tbrand/which_is_the_fastest/pull/32

If you could take some time to review it, that would be awesome! :wink:


#12

Distillery seems a bit overkill as you could run in local prod mode, but it is definitely the ‘correct’ way to do it. Overall it looks good on an initial look to me. :slight_smile:


#13

you’ll want to fix https://github.com/kelvinst/which_is_the_fastest/blob/elixir-phoenix/elixir/phoenix/config/prod.exs

Add port 3000, max_keepalive, and server: true, and comment out the cache_static_manifest:

    config :my_phoenix, MyPhoenix.Endpoint,
      http: [port: 3000, protocol_options: [max_keepalive: 5_000_000]],
      url: [host: "example.com", port: 80],
      server: true
      # cache_static_manifest: "priv/static/manifest.json"

also comment out import_config "prod.secret.exs" at the bottom…


#14

Yep, I noticed that running it here. :wink: Thanks!


#15

Fixed. https://github.com/tbrand/which_is_the_fastest/pull/32/commits/5c5132c8b6570a94c76b5d87076e1aeafd6ef702


#16

Heh, noticed that was there when I tried building it just now. ^.^

I’m compiling the Elixir (your branch) and Rust ones; Rust had some of the fastest entries, and it’s what I happen to have installed. I’m tempted to PR an OCaml server: it won’t outperform Rust by a long shot, but it should still be impressive. Maybe a bare cowboy server too if they want really low-level, considering he just wants the fastest response (which is a crappy test anyway). When I ran them, I got errors, because phoenix is not on port 3000.

@kelvinst Error reported from phoenix though:

09:27:11.747 [error] Could not find static manifest at "/home/overminddl1/tmp/which_is_the_fastest/elixir/phoenix/_build/prod/rel/my_phoenix/lib/my_phoenix-0.0.1/priv/static/manifest.json". Run "mix phoenix.digest" after building your static files or remove the configuration from "config/prod.exs".

@kelvinst Make sure that both phoenix and plug are on port 3000 since that is what the benchmark requires.

EDIT: Looks like Plug was already fine on port 3000; phoenix is set to 4000 or 4001 or something, which it seems is not set by the benchmarker?


#17

Done! Now I’m just anxiously waiting for the results (I hope Phoenix crushes express!!!) :lol:


#18

For some reason the phoenix server is not binding to port 3000; I’m not sure what it is binding to.

Have you run the tests? Because the phoenix one is not running…


#19

I’ve tested it manually with

    make elixir
    bin/server_elixir_phoenix

With this, curling to localhost:3000 works perfectly.
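Given the port confusion in the last few posts, a quick way to check whether anything is actually listening is a plain TCP connect. This is a bash sketch (the /dev/tcp redirection is a bash feature; port 3000 is just the port the benchmark expects):

```shell
# check_port: succeeds if something accepts a TCP connection
# on 127.0.0.1:$1, fails otherwise.
check_port() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if check_port 3000; then
  echo "port 3000: listening"
else
  echo "port 3000: nothing listening"
fi
```

Running it with the Phoenix server up (and again with it down) makes it obvious whether the endpoint config actually took effect.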


#20

as much as I dislike these benchmarks, this little framework is really nice and well abstracted; I also like how it uses crystal-lang.

Example cowboy and maru benchmarks can be found here: https://github.com/elixir-maru/benchmark if somebody wants to adapt and add them… :wink: