Shouldn’t something additional be configurable for a reverse proxy that handles 2M connections? Especially since there are complexities around properly supporting Content-Security-Policy headers with Safari and Nginx. Not that Nginx is bad; it’s fine. Just wondering.
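For context, setting a CSP header in Nginx is usually done with the `add_header` directive; the policy value below is only an illustrative placeholder, not a recommendation:

```nginx
# Illustrative only: send a Content-Security-Policy header from Nginx.
# The "always" flag makes Nginx attach the header to error responses too.
add_header Content-Security-Policy "default-src 'self'" always;
```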
Who said nginx can only handle 10k connections? I’m pretty sure that is configurable…
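It is configurable. A minimal sketch, assuming a Linux box where the file-descriptor limits have also been raised (the numbers are placeholders, not tuning advice):

```nginx
# Illustrative sketch: raise Nginx's connection ceiling.
worker_processes auto;            # one worker per CPU core
worker_rlimit_nofile 1048576;     # per-worker open-file limit

events {
    worker_connections 1048576;   # max connections per worker (default is 512)
}
```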
The Nginx website says it was designed to address the 10K-connections problem and can successfully handle 10K connections. It’d be interesting to see if it can actually handle a lot more, and to see it go head to head with Cowboy on this. Personally I’ve found Nginx more configurable, but I did notice a page-load-time increase after I installed my Nginx reverse proxy. That’s OK for me at the moment, as my site is still pretty lightweight, but perhaps Cowboy could be developed to do what Nginx does for me now, with less impact on page-load speed.
I thought the 10k connection problem was to do with concurrent connections, not requests per second.
So when it says “it was designed to handle the 10K connections problem” I thought that was from a good few years ago when the kernel couldn’t schedule it properly.
I guess cowboy handles it better because it does the heavy lifting in application space rather than relying on the kernel’s scheduling.
Depending on how long your requests are open for, you would have a lot of requests per second before you hit 10k concurrent connections.
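The relationship the last two posts are circling is Little’s Law: average concurrent connections equal arrival rate times average connection lifetime. A quick sketch with made-up numbers (not measurements from this thread):

```python
# Little's Law: avg concurrent connections = arrival rate * avg time each
# connection stays open. All numbers here are illustrative assumptions.
requests_per_second = 500        # assumed sustained arrival rate
avg_connection_seconds = 20.0    # assumed lifetime (e.g. keep-alive / long poll)

concurrent_connections = requests_per_second * avg_connection_seconds
print(concurrent_connections)    # 10000.0 -- right at the classic C10K mark
```

So with long-lived connections you reach 10k concurrent at a fairly modest request rate, while short-lived requests can push far higher rates without ever approaching it.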
My understanding is that Cowboy can handle 2 million concurrent connections on a single server – using very powerful but still commercially available hardware.
Of course, connections aren’t everything; there are matters such as database storage and access speed that in the real world would likely take precedence at that sort of scale.
NGINX can handle everything Cowboy can handle, and more.
I’m not disagreeing that this may be true, but what are you basing this statement on, exactly? Has Nginx been tested with 2 million connections, as Cowboy has?
NGINX is heavily optimized C code, Cowboy is written in Erlang (and it’s not the most efficient web server in Erlang, more like “most complete”).
“Has Nginx been tested for 2 million connections”
There is no real need to test it, since it would use the same operating-system calls but handle memory more efficiently. Just look at how uWS works to get an idea.
Yes, that’s an accurate statement (NGINX is C code, Cowboy is written in Erlang), but is it useful? I understand that C is usually faster in its compiled form for a large number of tasks, and I’ve experienced a little of this myself, but Erlang was designed to scale and handle concurrency, and Cowboy is also more lightweight. I consistently get better page-load performance for my site from Cowboy than from Nginx. The theory is of course very important, but you can’t write off the performance of something just because of the language it’s written in.
Then you probably have something broken in your nginx config …
Or perhaps the additional security settings I have in place slow performance: that is certainly likely.
With wrk, nginx handles about twice as many requests as cowboy 1.1 on my laptop …
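For anyone wanting to reproduce this kind of comparison, a typical wrk invocation looks like the following; the URL, thread count, connection count, and duration are all placeholders to adjust for your setup:

```shell
# Illustrative: 4 threads, 100 open connections, 30-second run.
wrk -t4 -c100 -d30s http://127.0.0.1:8080/
```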
What security settings are you using? What headers etc?
I’m using the default config (from brew install nginx).
How many cores are you running on?
4 cores. I’ve been planning to test elli against cowboy for some time… I’ll probably add nginx as well, because you made me curious.
Hmmm, not concurrency then. Let me think a second. Where are you seeing network-traffic bottlenecks for each server technology?
Run wrk yourself. I’m not seeing any bottlenecks. It’s just that one technology is inherently more efficient than the other.
wrk is known to have problems with cowboy and applications based on it.
Running the measuring tool on the same host as the thing you measure is a bad idea as well.
It’s not a real test. It was more to show that they are just on different levels.