I can only speak to my own not-so-recent experience which is from roughly 2015-2016.
At the time, we had to build a proxy server to handle auth and multi-tenancy and to move business logic to the edge. The requirements: it had to scale to about 20-30k qps with consistent tail latency, handle websockets in a rudimentary way, and be able to run a benchmark on my laptop (an X200 from 2008).
The team evaluated:
- Node.js
- Kong / Nginx / OpenResty
- Erlang/Elixir (Cowboy based)
- Expensive enterprise solutions
So the Node.js prototype came first and it seemed to work, until I ran it on my laptop, where it started segfaulting due to who knows what.
Kong / Nginx / OpenResty - almost did what we needed out of the box, except for the multi-tenancy. Very performant for the simple things we were doing. Nginx hands work off to worker processes that each run an event loop, so I'm not sure whether Lua was going to block that loop. I'm sure I knew the answer back then, but I can't recall now. I do recall it could do around 8k qps on the "lapbench" we designed, which was more than enough. Tail latency had some occasional deviations.
Erlang/Elixir/Cowboy - very consistent tail latency and about 3k qps on the lapbench. I do recall the HTTP client situation was rather abhorrent at the time, with various leaks, etc. I doubt this is a problem nowadays, but you're not writing a proxy anyway.
We went with Erlang/Elixir/Cowboy in the end because of code maintainability. Both were fast enough for our use case regardless; we only needed 3-4 servers per region and could do various things to keep them HA as our primary load balancers.
We did like the nginx + Lua combination, but we couldn't work out how to maintain that codebase. Today is the first day I've heard of Lapis, so I can't comment on whether that would be an issue. In our situation we had enough traffic, but not at a scale where qps performance would actually make a meaningful difference in cost.
Looking at your requirements in light of that evaluation (which may be completely out of date due to various changes on the OpenResty end):
- increased development speed for programmers - OpenResty back then was easy to write and hard to maintain. Erlang/Elixir/Cowboy was very straightforward and easy (the engineers had to learn how the BEAM works, but that took only a few days to get up to speed).
- reduced cognitive load - from the evaluation, well, the team was scared to make any changes to the Lua code, so Elixir won that one. My personal opinion is that Elixir wins hands down on this against almost any language. Just take a look at the thread where npm/webpack is being replaced by esbuild by default for Phoenix 1.6. The core contributors really think things through.
- efficient resource usage - I left this for last since it’s probably the most complicated of them.
a) What's fast and efficient "enough"? You could write everything in assembly. I do believe that nginx/OpenResty would be more efficient (other than the possible event-loop blocking, which I'm not sure about, but for the sake of discussion let's assume it's fine). From our testing, nginx/Lua was 2.5x faster, but that was not meaningful from a cost perspective.
b) From a memory perspective, again, it gets more complicated once you compare ETS vs Redis. In-memory lookups end up being much faster than going over a TCP socket, potentially to another server. Each "process" on the BEAM is light, but they do have a stack. You'll have to look in :observer to find what the actual stack size is for your use-case. The memory model on OpenResty is isolated across more OS processes (more like mod_php from the LAMP stack, I'm thinking).
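For what it's worth, the ETS side of that is easy to try yourself. A minimal sketch (table and key names are made up, nothing from our proxy): the table lives in the same OS process as your code, so a lookup is a local call plus a copy of the data, with no TCP round-trip. The same snippet shows how to ask a live process about its stack and heap, instead of guessing.

```elixir
# ETS: in-memory lookup, no network hop.
table = :ets.new(:tenant_config, [:set, :public, read_concurrency: true])
:ets.insert(table, {"tenant_a", %{rate_limit: 100}})

[{_key, config}] = :ets.lookup(table, "tenant_a")
IO.inspect(config.rate_limit)  # 100

# Per-process memory: every BEAM process has its own stack and heap,
# both inspectable at runtime (sizes are in machine words).
pid = spawn(fn -> receive do :stop -> :ok end end)
IO.inspect(Process.info(pid, [:total_heap_size, :stack_size]))
send(pid, :stop)
```

That read_concurrency flag is the usual choice for a read-mostly config table; :observer gives you the same per-process numbers in a GUI.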
c) No…that’s definitely too much writing for one day.
So I would say Elixir wins on 2 of 3, and I'm not sure the 3rd one really matters. In 2005, language performance was really important, but for most sites' traffic in 2020, considering server costs and general performance, it's just unlikely to make a meaningful difference. The BEAM is "fast enough" for the web.
The one caveat I'd add is that if you go the Elixir route, you do have to learn some stuff to get the most use out of it: functional programming (though I really don't consider Erlang to be "functional" in the purest sense), maybe OTP, maybe some other stuff. On the other hand, I'm sure you'd have to learn a ton about nginx/Lua/OpenResty/Lapis too, and have different problems.
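If "maybe OTP" sounds abstract, here's a hedged taste of what that learning looks like - a made-up counter, not anything from our proxy. A GenServer packages the receive-loop-with-state pattern you'd otherwise hand-roll:

```elixir
defmodule Counter do
  use GenServer

  # Client API: these run in the caller's process.
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Callbacks: these run inside the server process.
  # State is threaded through return values, never mutated in place.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, state), do: {:noreply, state + 1}

  @impl true
  def handle_call(:value, _from, state), do: {:reply, state, state}
end

{:ok, pid} = Counter.start_link(0)
Counter.increment(pid)
IO.inspect(Counter.value(pid))  # 1
```

The payoff is everything around it: supervisors restarting that process on crash, :observer showing it live, and so on. That's the OTP part that takes the few days.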
The above tradeoffs are based on an evaluation from 2015, so even though it’s relatively factual, my knowledge of alternatives could be out of date.
But I do have a biased opinion also.
Anyone who picks something other than Elixir, well, they are completely nuts. I cannot even describe to you the complete blessings and abundance and goodness that rains down from the sky when you start using this language. Even the demi-gods above who are frolicking in their little heavens are jealous when they look at the mortals coding in Elixir and desire to join the mortal realm. Once you take it upon yourself to learn a BEAM language, your understanding of system design, nay, the universe itself, will increase.
But seriously, once you get the hang of it, it will be really hard to go back to another language. The BEAM makes everything easier to do, I can’t really describe it all. And Elixir is really good. Welcome to the community and I hope you stick around.