We had a thread here recently that mentioned webservers in PHP, and it got me curious about the options in the BEAM world and what everyone is using. Which webservers do you use or plan to use in your apps?
You can select as many options as you like:
Which webserver(s) do you use or plan to use? (Select as many as you like)
Ace
Bandit
Chatterbox
Cowboy
Elli
Erlang's built-in inets/httpd
Mist
Mochiweb
Yaws
Other - please say in thread!
Ace (Elixir)
HTTP web server and client, supports HTTP/1 and HTTP/2.
Bandit (Elixir)
Bandit is an HTTP server for Plug and WebSock apps.
Chatterbox (Erlang)
HTTP/2 Server for Erlang.
Cowboy (Erlang)
Small, fast, modern HTTP server for Erlang/OTP.
Elli (Erlang)
Simple, robust and performant Erlang web server.
Erlang's built-in inets/httpd
The HTTP server, also referred to as httpd, handles HTTP requests as described in RFC 2616 with a few exceptions, such as gateway and proxy functionality. The server supports IPv6 as long as the underlying mechanisms also do so.
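For anyone curious what using the built-in httpd looks like, here is a minimal sketch of starting it from Elixir. The port, server name, and directory paths are arbitrary example values; `port`, `server_name`, `server_root`, and `document_root` are the properties httpd requires.

```elixir
# Sketch: start Erlang's built-in inets/httpd serving static files.
# All values below are illustrative, not a recommended production config.
:inets.start()

{:ok, _pid} =
  :inets.start(:httpd,
    port: 8080,
    server_name: ~c"demo",
    server_root: ~c"/tmp",
    document_root: ~c"/tmp/www",
    bind_address: ~c"localhost"
  )
```

Note that httpd expects charlists (Erlang strings) rather than Elixir binaries for its string-valued properties, hence the `~c` sigils.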
Glad you selected Elli @Tristan! Not sure where I heard it now, but I'm sure I heard someone say it is one of the most (if not the most) performant Erlang web servers around (I think it might have been @lpil on EFS).
Do you know if there's a Plug adapter for it? If not, do you know if anyone from the Elli team is planning one?
Wonder if anyone has benchmarked all the above? Wonder how that'd pan out?
Great idea, I'd love to see some benchmarks of all of them.
We have some slightly out-of-date ones for some of the above here.
Mist came out fastest of the BEAM servers, and could sometimes beat Go depending on request body size.
One thing that really surprised me was how badly Cowboy handled request bodies. JavaScript was beating it as soon as there was a reasonable amount of data to read!
There was a Plug adapter, but I don't think it was ever updated for the last major release of Plug.
My understanding is it won't work as a Plug anymore because it works differently from Cowboy. I vaguely recall looking once, not too deeply, and I think it was technically possible but would be a hack.
It also is "fast" because it isn't completely general purpose: it only supports bodies up to a statically defined size, requires a plugin to include the datetime in the response, things like that.
The other reason it is fast is that it uses the built-in Erlang http parser (C code). But Cowboy replaced this with cowlib for a reason: scaling. cowlib scales up across cores and keeps latency lower as concurrent requests go up.
I'd expect to see similar numbers in a benchmark for Bandit and Elli, since they both use Erlang's http parser, which also means Bandit will have the same scaling issue Cowboy hit years ago, the one that resulted in cowlib being created.
But for many, many use cases Elli (or Bandit, I suppose) works great.
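For context, the built-in parser being discussed here is `erlang:decode_packet/3` in its `http` modes. A quick sketch of what it returns, called from Elixir (the request string is just an example):

```elixir
# :http_bin parses the request line; :httph_bin parses subsequent headers.
# This is the C-implemented parser built into the BEAM.
req = "GET /hello HTTP/1.1\r\nHost: example.com\r\n\r\n"

{:ok, {:http_request, :GET, {:abs_path, "/hello"}, {1, 1}}, rest} =
  :erlang.decode_packet(:http_bin, req, [])

{:ok, {:http_header, _, :Host, _, "example.com"}, _remaining} =
  :erlang.decode_packet(:httph_bin, rest, [])
```

A server built on this calls `decode_packet` repeatedly, feeding the leftover bytes back in until it sees `:http_eoh` (end of headers).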
I was under the impression that Bandit does not use the Erlang http parser, and I couldn't find it being used in the source code. Are you sure it does? I might have been looking in the wrong place.
Bandit author here. Late to the party, but a couple of things:
I've looked for historical evidence of decode_packet's supposed scalability issues but I can't seem to find anything. Can you point me at some references?
Performance numbers in (synthetic) benchmarks suggest quite the opposite regarding the scalability of cowlib vs. decode_packet. When I was building out GitHub - mtrudel/network_benchmark I had to give up on higher concurrency tests as I was unable to get Cowboy to complete them without massive error counts, whereas Bandit hummed along just fine (and indeed grew the performance gap even more on higher concurrency tests).
I recall it was described by Loic on the Erlang mailing list; a user was hitting this issue, which resulted in the creation of cowlib. It's probably a decade ago at this point, but it should be online in the archives.
This was before Cowboy supported HTTP/2, so maybe there are some issues in that. You could try a benchmark against Cowboy 1 to see what happens.
Considering the vintage of those assertions (R16-era), I don't think they're terribly relevant any longer, at least not without being corroborated by more contemporary comparisons. This proverbial "best before" date is even more apparent given that benchmarks done both by me and Rawhat (GitHub - rawhat/http-benchmarks: benchmarks for mist, and other webservers) indicate that Cowboy's performance hasn't kept up.
The 0.6 release train of Bandit is going to be focused entirely on observability, performance and reproducible benchmarking. I expect to get an additional 20% or so perf gains out of Bandit based on this work, which should make it the fastest game in OTP town by a pretty significant measure. Expect work to start late 2022.
Yea, it is possible either improvements to OTP or the changes made in Cowboy 2 make this moot. And if it does still exist it is also only a concern for a very very small number of users.
It just intrinsically feels like it should be better for the decode code to be in Erlang, to keep latency consistent and low as concurrency increases, but I know that doesn't make it true.
As an Elli and Chatterbox user and maintainer (maintaining Chatterbox being the painful one), I just wish Bandit was Erlang so I could consider moving to it.
The level of polish is incredible: docs, syntax, config format, the APIs of the config directives…!
IMO CertMagic is the crown jewel in Caddy's crown. After months of frustration with the ACME/Let's Encrypt Python client, integrating it with HTTP server/proxy software using patchwork scripts in way too many steps, I find how it works in Caddy amazing.