Which webserver do you use?

We had a thread here recently that mentioned webservers in PHP, and it got me curious about the options in the BEAM world and what everyone is using. Which webservers do you use or plan to use in your apps? :003:

You can select as many options as you like:

Which webserver/s do you use or plan to use? (Select as many as you like)
  • Ace
  • Bandit
  • Chatterbox
  • Cowboy
  • Elli
  • Erlang’s built-in inets/httpd
  • Mist
  • Mochiweb
  • Yaws
  • Other - please say in thread!

0 voters

Ace (Elixir)

HTTP web server and client, supports http1 and http2.

Bandit (Elixir)

Bandit is an HTTP server for Plug and Sock apps.

Chatterbox (Erlang)

HTTP/2 Server for Erlang.

Cowboy (Erlang)

Small, fast, modern HTTP server for Erlang/OTP.

Elli (Erlang)

Simple, robust and performant Erlang web server.

Erlang’s built-in inets/httpd

The HTTP server, also referred to as httpd, handles HTTP requests as described in RFC 2616 with a few exceptions, such as gateway and proxy functionality. The server supports IPv6 as long as the underlying mechanisms also do so.
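
For anyone curious, httpd can be started straight from an Elixir shell. A minimal sketch (the /tmp directories are placeholder assumptions; httpd expects charlist option values, hence the ~c sigils):

```elixir
# Start the inets application, then an httpd instance serving static files.
# Port 0 asks the OS for any free port; server_root and document_root must
# be existing directories.
:inets.start()

{:ok, pid} =
  :inets.start(:httpd,
    port: 0,
    server_name: ~c"demo",
    server_root: ~c"/tmp",
    document_root: ~c"/tmp"
  )

# Ask the running server which port it actually bound.
[port: port] = :httpd.info(pid, [:port])
IO.puts("httpd listening on port #{port}")
```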


Mist (Gleam)

A (hopefully) nice, pure Gleam web server.

MochiWeb (Erlang)

MochiWeb is an Erlang library for building lightweight HTTP servers.

Yaws (Erlang)

Yaws webserver


Glad you selected Elli @Tristan! Not sure where I heard it now, but I’m sure I heard someone say it’s one of the most (if not the most) performant Erlang web servers around (I think it might have been @lpil on EFS).

Do you know if there’s a Plug adapter for it? If not, do you know if anyone from the Elli team is planning one?

Wonder if anyone has benchmarked all the above? Wonder how that’d pan out?


Great idea, I’d love to see some benchmarks of all of them.

We have some slightly out-of-date ones for some of the above here.


Mist came out fastest of the BEAM servers, and could sometimes beat Go depending on request body size.

One thing that really surprised me was how badly Cowboy handled request bodies. JavaScript was beating it as soon as there was a reasonable amount of data to read!


There was a Plug adapter, but I don’t think it was ever updated for the last major release of Plug.

My understanding is it won’t work as a Plug anymore because it works differently from Cowboy. I vaguely recall looking into it once, not too deeply, and I think it was technically possible but would be a hack.

It’s also “fast” because it isn’t completely general purpose – it only supports bodies up to a statically defined size, requires a plugin to include the datetime in the response, things like that.

Another reason it’s fast is that it uses the built-in Erlang HTTP parser (C code). But Cowboy replaced this with cowlib for a reason: scaling. cowlib will scale up across cores and keep better latency as concurrent requests go up.


I’d expect to see similar numbers in a benchmark for Bandit and Elli, since they both use Erlang’s HTTP parser – which also means Bandit will have the same scaling issue Cowboy hit years ago, the one that resulted in cowlib being created.

But for many, many use cases Elli (or Bandit, I suppose) works great :slight_smile:


I was under the impression that Bandit does not use the Erlang http parser, and I couldn’t find it being used in the source code. Are you sure it does? I might have been looking in the wrong place.

1 Like

It uses decode_packet with type http, as far as I can tell: bandit/adapter.ex at be794eb707acc91fd864a5d479e4258af10fba58 · mtrudel/bandit · GitHub
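
For reference, that parser is exposed as :erlang.decode_packet/3, which consumes one request line or header per call. A quick sketch of its behaviour from Elixir (the request bytes below are just an illustrative example; the :http_bin/:httph_bin packet types return binaries, while :http/:httph return charlists):

```elixir
raw = "GET /hello HTTP/1.1\r\nHost: example.com\r\n\r\n"

# The first call (type :http_bin) parses the request line...
{:ok, req, rest} = :erlang.decode_packet(:http_bin, raw, [])
# req => {:http_request, :GET, {:abs_path, "/hello"}, {1, 1}}

# ...subsequent calls (type :httph_bin) parse one header each...
{:ok, header, rest} = :erlang.decode_packet(:httph_bin, rest, [])
# header => {:http_header, _, :Host, _, "example.com"}

# ...until :http_eoh marks the end of the header block.
{:ok, :http_eoh, ""} = :erlang.decode_packet(:httph_bin, rest, [])
```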


Ah yes! Thanks!

1 Like

Bandit author here. Late to the party, but a couple of things:

  1. I’ve looked for historical evidence of decode_packet’s supposed scalability issues but I can’t seem to find anything. Can you point me at some references?

  2. Performance numbers in (synthetic) benchmarks suggest quite the opposite regarding the scalability of cowlib vs. decode_packet. When I was building out GitHub - mtrudel/network_benchmark I had to give up on the higher-concurrency tests because I was unable to get Cowboy to complete them without massive error counts, whereas Bandit hummed along just fine (and indeed widened the performance gap even further as concurrency rose).


  1. I recall it was described by Loic on the Erlang mailing list. A user was hitting this issue, which resulted in the creation of cowlib. It’s probably about a decade ago at this point, but it should be online in the archives.

  2. This was before Cowboy supported HTTP/2, and maybe there are some issues there. You could try a benchmark against Cowboy 1 to see what happens.

1 Like

The only references I can find specifically to decode_packet being replaced by cowlib date back to 2013: https://groups.google.com/g/erlang-programming/c/C9OYrllYkpI/m/DeKOVx3rqEAJ

Some perf assertions also from that era: https://groups.google.com/g/erlang-programming/c/tJnDTcgb9L8/m/zd2fpAnCIyYJ

Considering the vintage of those assertions (R16-era), I don’t think they’re terribly relevant any longer, at least not without being corroborated by more contemporary comparisons. This proverbial ‘best before’ date is even more apparent given that benchmarks done both by me and Rawhat (GitHub - rawhat/http-benchmarks: benchmarks for mist, and other webservers) indicate that Cowboy’s performance hasn’t kept up.

The 0.6 release train of Bandit is going to be focused entirely on observability, performance, and reproducible benchmarking. I expect to get an additional 20% or so of perf gains out of Bandit based on this work, which should make it the fastest game in OTP town by a pretty significant margin. Expect work to start late 2022.



Yeah, it’s possible that either improvements to OTP or the changes made in Cowboy 2 make this moot. And if the issue does still exist, it’s only a concern for a very, very small number of users.

It just intrinsically feels like it should be better for the decoding code to be in Erlang, to keep latency consistent and low as concurrency increases – but I know, that doesn’t make it true :slight_smile:

As an Elli and Chatterbox user and maintainer (maintaining Chatterbox being the painful one), I just wish Bandit were Erlang so I could consider moving on :slight_smile:

1 Like

So far I’ve stuck with Caddy.


Huge fan of Caddy. It’s a great tool!

Someone was asking about dynamic cert support in Bandit the other day, with an eye towards doing an ACME style server on top of Bandit, see Multiple/dynamic SSL certificates? · Issue #35 · mtrudel/bandit · GitHub for more!


The level of polish is incredible - the docs, syntax, config format, APIs of the config directives…!

IMO CertMagic is the jewel in Caddy’s crown. After months of frustration with the ACME/Let’s Encrypt Python client - integrating it with HTTP server/proxy software using patchwork scripts, in way too many steps - I find how it works in Caddy amazing :slight_smile:


In the interest of fairness I have added a vote to each project (I will probably be checking them all out anyway! :003:)

+1 on Caddy + Docker Compose if I wanted to test a staging app on a VPS