Why is Phoenix + Cowboy fast at handling HTTP?

Guys, your help is wanted!

Can you give me a small proof of concept showing why Phoenix + Cowboy
can handle millions of concurrent HTTP connections on a single machine?

Is it true that, compared to other languages, using Elixir + Phoenix lets you scale your Docker/Kubernetes instances down, possibly to a single Erlang server?

Can you give me an example of some kind of “real” production benchmark, or share your experience?

I saw some synthetic tests comparing Go’s HTTP server and Elixir’s Cowboy; Go won, but only by a slight margin.

I have experience using RoadRunner, which can boost PHP app performance a lot!

So any help explaining why Elixir + Cowboy is so unique will be appreciated.

Thanks!

1 Like

I don’t think you’ll get the answers you’re looking for. Elixir/Phoenix is not the fastest language you can choose and likely never will be. The benefits of Phoenix and the stack beneath it aren’t in raw speed, but in other places like error isolation, sane latency degradation under load, and so on.
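To make the error-isolation point concrete, here’s a minimal sketch using plain OTP (no Phoenix; the name `MySketch.TaskSup` is just made up for the example) where one “request handler” crashes and its siblings keep running:

```elixir
# A tiny error-isolation sketch: five supervised tasks standing in for
# request handlers; the third raises on purpose, the rest finish normally.
{:ok, _sup} = Task.Supervisor.start_link(name: MySketch.TaskSup)

for i <- 1..5 do
  Task.Supervisor.start_child(MySketch.TaskSup, fn ->
    if i == 3, do: raise("boom")
    Process.sleep(100)
    IO.puts("handler #{i} finished fine")
  end)
end

# Give the tasks time to run; you'll see one crash report and four
# "finished fine" lines -- the crash stays contained in its own process.
Process.sleep(500)
```

Save it as a `.exs` file and run it with `elixir isolation_sketch.exs`; the one crash is logged, nothing else is touched.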

Personally I’d sum it up as being able to get very far on a project with less complexity.

As for examples of scaling down, you can look at the stories from Bleacher Report. They were able to scale down quite a bit by adopting Elixir.

4 Likes

There are a lot of threads about this sort of thing already. I would suggest watching something like Saša Jurić’s talk here The Soul of Erlang and Elixir • Saša Jurić • GOTO 2019 - YouTube to get a sense of what makes Elixir’s runtime unique.

3 Likes

As @benwilson512 says, you should definitely watch that video. It’s the zen of Elixir.
But as a quick answer to why it’s fast: it’s a combination of the BEAM, which can start very lightweight processes, and how Cowboy works, starting a new process for each request so each request is handled concurrently (better explained in the video).
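If you want a feel for how cheap those processes are, here’s a rough sketch you can paste into `iex`. The count of 100,000 is arbitrary; going into the millions needs the VM’s `+P` process limit raised, and the exact numbers depend on your machine and OTP version:

```elixir
# Spawn a pile of idle processes and see how much memory they cost.
# Not a benchmark -- just a feel for why one-process-per-request is viable.
n = 100_000

mem_before = :erlang.memory(:processes)

pids =
  for _ <- 1..n do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

mem_after = :erlang.memory(:processes)

IO.puts("#{n} idle processes use roughly #{div(mem_after - mem_before, 1_048_576)} MB")

# Tidy up.
Enum.each(pids, &send(&1, :stop))
```

Roughly speaking, each Cowboy connection handler is a process like these (plus whatever state the request needs), which is why the per-connection overhead stays in the kilobyte range.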

2 Likes

I think you’ll like what @joeerl said about this :003:

Imagine an Erlang or Elixir HTTP server managing a couple of million user sessions. Time and again I’ve heard this said:

We have a (Erlang or Elixir) web server managing 2 million user sessions.

But this statement is incorrect and stems from a fundamental misconception.

We do not have ONE web-server handling 2 million sessions. We have 2 million webservers handling one session each.

The reason people think we have one webserver handling a couple of million users is because this is the way it works in a sequential web server. A server like Apache is actually a single webserver that handles a few million connections.

In Erlang we create very lightweight processes, one per connection and within that process spin up a web server. So we might end up with a few million web-servers with one user each.

If we can accept say 20K requests/second - this is equivalent to saying we can create 20K webservers/second.

On the surface things look very similar. But there is a fundamental difference between having one webserver handling two million connections, and two million web servers handling one connection each.

If there is a software error and the server software crashes we lose either two million connections or one depending upon the model.

In Erlang if the web server software itself is incorrect we’ll lose a single connection, which is OK. Since the software is incorrect and crashes we don’t know what to do, so crashing is a good alternative. What is important is that one session crashing does not affect all the other sessions.

This requirement goes way back to when we designed Erlang in the mid-1980s. In Telecoms systems, losing one connection due to a software error was acceptable; losing them all due to a bug was big time bad news.

You can read it in full here:

https://joearms.github.io/published/2016-03-13-Managing-two-million-webservers.html
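Joe’s “one webserver per session” pattern is easy to sketch with plain `:gen_tcp`. This toy echo server is obviously not Cowboy, and the port 4040 is just an arbitrary choice, but it shows the shape: the acceptor loop stays tiny, every connection lives in its own process, and a crash in a handler loses exactly one session:

```elixir
defmodule OnePerConnection do
  @moduledoc """
  Toy line-echo server: one lightweight process per accepted connection,
  the "2 million webservers handling one session each" idea in miniature.
  """

  def start(port \\ 4040) do
    {:ok, listen} =
      :gen_tcp.listen(port, [:binary, packet: :line, active: false, reuseaddr: true])

    accept_loop(listen)
  end

  defp accept_loop(listen) do
    {:ok, socket} = :gen_tcp.accept(listen)

    # Hand each connection to its own process; if serve/1 crashes,
    # only this one socket is affected.
    pid = spawn(fn -> serve(socket) end)
    :ok = :gen_tcp.controlling_process(socket, pid)

    accept_loop(listen)
  end

  defp serve(socket) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, line} ->
        :gen_tcp.send(socket, "echo: " <> line)
        serve(socket)

      {:error, :closed} ->
        :ok
    end
  end
end
```

Call `OnePerConnection.start()` from `iex` (it blocks the shell; wrap it in `spawn(&OnePerConnection.start/0)` if you want your prompt back) and connect with `telnet localhost 4040`. Cowboy, and therefore Phoenix, applies the same idea via Ranch’s supervised acceptor pools with a lot more care, which is why the 20K requests/second in Joe’s example really does mean 20K short-lived “webservers” per second.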

15 Likes