Which is the fastest web framework? (Link/Repo and results in this Topic)



I made a PHP file like this (to follow his requirements in the README.md precisely):

<?php
$uri = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH); // strip any query string

if ($uri === "/") die("");
elseif ($uri === "/user") die("");
elseif (substr($uri, 0, 6) === "/user/") die(substr($uri, 6));

I tested with that using the HHVM PHP server, since that is the only one I have installed at the moment, and added the results to the rest here:

Language (Runtime)        Framework (Middleware)          Max [sec]       Min [sec]       Ave [sec]
------------------------- ------------------------- --------------- --------------- ---------------
rust                      iron                             4.559537        4.498832        4.534531
rust                      nickel                           5.253805        5.165856        5.207764
elixir                    plug                             5.444713        5.185388        5.326322
elixir                    phoenix                          5.792088        5.673842        5.752746
rust                      rocket                           5.977488        5.849199        5.914374
crystal                   router_cr                        9.139290        8.827565        9.007032
crystal                   kemal                           10.316980        9.544119       10.050472
php                       hhvm                            18.855467       18.748603       18.801522
node                      express                         32.212435       31.668852       31.824345
ruby                      roda                            66.817984       63.877961       65.617340
ruby                      sinatra                        147.016259      144.732534      146.124146
ruby                      rails                          477.844824      475.863197      476.804092

I’m impressed; PHP is pretty dang fast, especially considering it has to hit the filesystem on every request to check whether the file has changed. PHP utterly blows Ruby away in speed, but still does not beat crystal/elixir/rust…


The graphic on the GitHub page makes Elixir look bad?

When are they going to fix it?


That is because they tested in dev mode, and yeah they need to…


Maybe PR a cowboy server too if they want something really low-level.

If anyone actually wants to PR a cowboy version, I think it would probably look something like this:


defmodule MyCowboy.Mixfile do
  use Mix.Project

  def project do
    [app: :my_cowboy,
     version: "0.1.0",
     elixir: "~> 1.4",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: deps()]
  end

  def application do
    [mod: {MyCowboy.Application, []}]
  end

  defp deps do
    [{:cowboy, github: "ninenines/cowboy", tag: "2.0.0-pre.9"}]
  end
end


defmodule MyCowboy.Application do
  @moduledoc false
  use Application

  def start(_type, _args) do
    dispatch = :cowboy_router.compile([{:_, [{:_, MyCowboy.Handler, []}]}])
    {:ok, _} = :cowboy.start_clear(:http, 100, [port: 3000], %{env: %{dispatch: dispatch}})
  end
end


defmodule MyCowboy.Handler do
  def init(%{method: method, path: path} = req, opts) do
    handle(method, split_path(path), req, opts)
  end

  defp handle("GET", [], req, opts) do
    {:ok, :cowboy_req.reply(200, %{}, "", req), opts}
  end

  defp handle("GET", ["user", id], req, opts) do
    {:ok, :cowboy_req.reply(200, %{}, id, req), opts}
  end

  defp handle("POST", ["user"], req, opts) do
    {:ok, :cowboy_req.reply(200, %{}, "", req), opts}
  end

  defp split_path(path) do
    segments = :binary.split(path, "/", [:global])
    for segment <- segments, segment != "", do: segment
  end
end

Couldn’t benchmark it with his client.cr though.


And here is elli:


defmodule MyElli.Mixfile do
  use Mix.Project

  def project do
    [app: :my_elli,
     version: "0.1.0",
     elixir: "~> 1.4",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: deps()]
  end

  def application do
    [mod: {MyElli.Application, []}]
  end

  defp deps do
    [{:elli, github: "elli-lib/elli", tag: "2.0.1"}]
  end
end


defmodule MyElli.Application do
  @moduledoc false
  use Application

  def start(_type, _args) do
    {:ok, _} = :elli.start_link(callback: MyElli.Callback, port: 3000)
  end
end


defmodule MyElli.Callback do
  @behaviour :elli_handler

  def handle(req, _args) do
    do_handle(:elli_request.method(req), :elli_request.path(req))
  end

  defp do_handle(:GET, []), do: {:ok, ""}
  defp do_handle(:GET, ["user", id]), do: {:ok, id}
  defp do_handle(:POST, ["user"]), do: {:ok, ""}

  def handle_event(_event, _data, _args), do: :ok
end


Testing with wrk, elli handles about 40% more GET "/", GET "/user/:id", and POST "/user" requests than cowboy.


You can set PHP to skip the filesystem check and use a deploy approach, FWIW.
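
For reference, that is an OPcache setting; a minimal php.ini sketch (assuming OPcache is enabled in the build):

```ini
; php.ini: never re-check source files for changes after the first compile.
; With validate_timestamps off, code changes only take effect after a
; restart/reload - the "deploy approach" mentioned above.
opcache.enable=1
opcache.validate_timestamps=0
```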


For all BEAM based solutions, I would recommend playing with VM arguments. In particular, setting +K true to enable kernel poll and increasing the number of async threads (e.g. +A 100) might produce a significant difference.

Also, the multi-pollset work that will hopefully be merged in OTP 20 (not available in RC1 yet) should speed things up a bit as well.
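
To confirm a node actually picked those flags up, you can ask the VM directly; a quick sketch using :erlang.system_info/1:

```elixir
# :kernel_poll reports whether kernel poll (+K) is enabled (a boolean);
# :thread_pool_size reports the async thread pool size (+A).
IO.puts "kernel poll (+K):   #{:erlang.system_info(:kernel_poll)}"
IO.puts "async threads (+A): #{:erlang.system_info(:thread_pool_size)}"
```

Run it in iex on the release node (or any node started with the same flags) to see the effective values.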


yeah, in this context one would do that tuning here:


	cd elixir/plug; mix deps.get --force; MIX_ENV=prod mix release --erl="+K false +A 10" --no-tar
	ln -s -f ../elixir/plug/bin/server_elixir_plug bin/.

and then ‘make plug’ ("+K false +A 10" are the defaults)

I tested it yesterday, but couldn’t get any consistent, significant improvement over the defaults; YMMV.

Multi-pollset does look exciting; we should revisit these benchmarks when OTP 20 RC2 is out.


I actually did that in a test I did after my post, but it gained less than 1% speed so I figured it was not worth it to post it. ^.^


Haven’t checked it in years, but I’d bet they set the checks to a TTL at some point to avoid the overhead.
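
If they did, the knob would look something like this (a sketch; opcache.revalidate_freq is the TTL in seconds between stat checks):

```ini
; php.ini: keep timestamp validation, but only stat files every 60 seconds
opcache.validate_timestamps=1
opcache.revalidate_freq=60
```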


Your solution of passing arguments to mix release only affects the VM spawned to build the release, not the actual production release. You need to use the vm.args file when working with distillery, as described here.

For example:

# in rel/config.exs
environment :prod do
  # ...
  set vm_args: "rel/vm.args"
end

# in rel/vm.args (we can use eex)
-name <%= release_name %>@
+A 100
+K true


Thanks, but I’m not sure that is correct - it does affect the release VM.

doing a MIX_ENV=prod mix release --erl="+K false +A 1000" --no-tar

makes the erl_opts show up in the "$RELEASES_DIR/$REL_VSN/$REL_NAME.sh" script which is used to start the release vm.

# Options passed to erl
ERL_OPTS="${ERL_OPTS:-+K false +A 1000}"
# Environment variables for run_erl
RUN_ERL_ENV="${RUN_ERL_ENV:-+K false +A 1000}"

Also, this +A 1000 release consistently yields much worse benchmark results - so the erl_opts very much seem to be in effect.


# Pass args to erlexec when running the release
mix release --erl="-env TZ UTC"


Oh, I had no idea distillery does that - you’re right, I’m sorry. It doesn’t seem to be documented anywhere, though.


I love this community. Not only did y’all improve the reporting of benchmarks for Phoenix and Plug, but y’all also improved and added benchmarks for other frameworks that aren’t even Elixir related.


We can’t know how well Elixir performs if the other contenders aren’t performing up to par as well. ^.^


Looking at the project repos, why is the plug version using elixir 1.4 and the phoenix version using elixir 1.2? Or is that going to be updated when Phoenix 1.3 comes out?


They use the same Elixir version.

elixir: "~> 1.2" and elixir: "~> 1.4" are “approximately greater than” (pessimistic) requirements, and in this case running Elixir 1.4, 1.5, 1.6, 1.7, 1.8, or 1.9 would satisfy both of them.
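
You can check any given version against those requirements yourself with Version.match?/2 from Elixir’s standard library:

```elixir
# "~> 1.2" means >= 1.2.0 and < 2.0.0 (the pessimistic operator),
# so any Elixir 1.x from 1.2 onward satisfies it.
IO.inspect Version.match?("1.4.0", "~> 1.2")  # true
IO.inspect Version.match?("1.9.3", "~> 1.4")  # true
IO.inspect Version.match?("2.0.0", "~> 1.4")  # false
```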


I just didn’t know what environment the suite was running in, or see where it was defined.


At https://github.com/tbrand/which_is_the_fastest there’s a comparison of web frameworks.

It can give you a high-level idea, and that idea is that Phoenix and Plug are slow compared to, for instance, Ruby and its framework Roda: Ruby/Roda is faster, or at least roughly the same as Phoenix and Plug.

I thought Phoenix and Plug would beat Ruby and Python by several times. But that’s not the case.

Why are Phoenix and Plug so slow?