I was just introduced to Elixir and Phoenix. I was told about the 2 million websocket test that was done 2 years ago. From my research, that test was just opening connections, but it was not actually sending data (the clients were connecting and then sleeping for a long time):
Compared to Asp.Net Core, I have found performance tests showing over 1 million requests/sec or 12 Gbps (which saturated the network card):
I also found this comparison which shows similar results for Asp.Net Core with pipelining (scroll to last table). It also showed highest performance for serving a plaintext file from “Netty” which was serving almost 3 million requests per second:
Anyway, I couldn’t find any benchmark tests that put Phoenix and Asp.Net Core head to head on the same machine characteristics.
From these stats, it’s hard to compare serving 1.2 million simple requests (on a 32GB machine) vs sleeping 2 million connections (on a 128 GB machine). In fact those numbers alone make Asp.Net Core seem like the winner.
Well, if productivity is the measure, I prefer node.js even though it is 10x slower than Asp.net.
And with serverless architecture (Azure Functions) that scales infinitely and only costs me per use, then I have no performance limitations and decent performance is good enough. This is especially true if I can throw a CDN in front of any static content, which I can easily do.
So I am currently writing everything in TypeScript, from the client to the server to the JSON database. Testability and reliability are very high, so I don’t have a problem there.
However, this does not give me a solution for real-time applications like twitch games, really fast chat, broadcasting, etc. For those I need websockets, which require a different solution than a serverless architecture like Azure Functions can provide.
My options are using something like Azure Service Fabric as the above article does, or something outside of Azure (which is where my interest in Phoenix came from).
So, if real-time is the goal, then performance is the king.
I would like to know if Phoenix is actually faster than Asp.Net Core, or if it is just a preference because of the functional language (which I could get with F# if I wanted it).
There are so many other factors to consider than just speed (though even that is pretty amazing in Phoenix). Have a look at this post by Saša that touches on the topic in another recent thread about performance:
Yes, I agree there are other issues, but coming from Azure and .Net and Node, I have great solutions to everything except real-time performance. I know there are solutions for that as well in Azure, but if Phoenix is significantly better, I would want to consider that.
Phoenix is a collection of plugs and other helpers built on top of the Plug library; it adds essentially no overhead over Plug itself.
Plug is an Elixir library for simple and fast web serving; it adds a microscopic amount of overhead over the web server (Cowboy) but buys a lot in safety.
Cowboy is an Erlang web server library designed to handle any and all kinds of network communication as fast as possible. Its interface is a bit ugly because of that, but it is fantastic in speed, and Plug on top of it makes it blissful to use for most web protocols.
Elixir is a language that compiles to bytecode for the BEAM/EVM (the Erlang Virtual Machine).
Asp.Net (the modern version; the older ones were utterly horrible) is a lot better now, but it breaks often (I know, we use it at work on some things that we cannot replace because of Federal Laws…).
.NET Core is basically Mono and .NET merged, dating from around when MS bought Xamarin; it is an ordinary GC’d VM in the Java mold, but decently fast.
Now, comparing parts to parts:
.NET Core vs BEAM/EVM:
.NET Core will be faster on raw JIT performance, like crunching numbers, but how often are you doing multi-dimensional matrix transforms in web hosting?
The BEAM/EVM barely has any JIT happening at all, but it is still very performant; the language design, combined with very low-level internal async I/O, allows the BEAM/EVM to outperform ‘almost’ anything on I/O (like networking, say, for web hosting) while being safer than just about any language at all.
However, .NET Core gives you very little ability to debug production; you mostly have to rely on your logs or hook up a debugger that often stops the world.
Compare that to the BEAM/EVM, which has introspection that would make any network or server admin drool, though it is done differently than you would on .NET Core (mostly because stepping through individual instructions as you would on .NET is hell for a concurrent, real-time system).
If you need some really heavy processing, .NET is good, not great; I’d pick, say, Rust or C++ or OCaml or a host of other languages over ever touching .NET, but it is not bad. However, a bad crash in it brings everything down, unlike on the BEAM/EVM, which is much safer in that it expects things to crash and, more importantly, handles it (not just talking about exceptions, also talking about, say, hardware failing). And if you need speed for something, it is trivial to plug Rust or C++ or OCaml or even .NET into the BEAM/EVM via a Port (or, for raw speed, you can hook any language that implements a C interface, like C++ or Rust or OCaml, into the BEAM/EVM as a NIF, but that incurs a hit on safety, so be careful; Ports are almost always better).
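To make the Port mechanism concrete, here is a minimal runnable sketch. `Port.open/2` and the message formats are the real stdlib API; the external program is just `cat`, standing in as a hypothetical placeholder for a real Rust/C++/OCaml worker:

```elixir
# Talk to an external OS process through a Port. `cat` simply echoes
# bytes back; a real worker would do heavy computation instead. If the
# external process crashes, it cannot take the BEAM down with it.
port = Port.open({:spawn, "cat"}, [:binary])

# Send it some bytes; `cat` echoes them straight back.
send(port, {port, {:command, "hello"}})

reply =
  receive do
    {^port, {:data, data}} -> data
  after
    2_000 -> :timeout
  end

IO.inspect(reply)  # "hello"
Port.close(port)
```

Because the worker is a separate OS process, the only coupling is the byte stream over stdin/stdout, which is exactly why Ports are the safer interop choice compared to NIFs.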
Using Phoenix/Plug/Cowboy together (as they are always used together if you are using Phoenix) versus ASP.net, they have significantly different styles of ‘work’. ASP.net’s is significantly more mutating; it can be hard to reason about why something is changing where (as we’ve experienced innumerable times here at work). In Phoenix, the whole straight pipeline of plugs is wonderfully immutable and you know where everything comes from: no magic database access in views, no wondering why something is suddenly accessing the DB 20 times, no magic variables littered all over the place just to ‘tag’ crap. It just-makes-sense.
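That immutable pipeline style can be sketched in plain Elixir. This is a toy `conn` map, not the real `Plug.Conn` struct, but the shape of the idea is the same:

```elixir
# A toy "plug pipeline": each step is a pure function that takes a conn
# map and returns a new one. Nothing is mutated in place, so you can
# always see where every change came from. (Toy conn, not real Plug.Conn.)
conn = %{path: "/users", assigns: %{}, resp_body: nil}

assign = fn conn, key, val ->
  %{conn | assigns: Map.put(conn.assigns, key, val)}
end

render = fn conn, body -> %{conn | resp_body: body} end

final =
  conn
  |> assign.(:current_user, "alice")
  |> render.("hello alice")

IO.inspect(final.resp_body)  # "hello alice"
IO.inspect(conn.assigns)     # %{} -- the original conn is untouched
```

The real Plug library works the same way: a request flows through a straight `|>` pipeline of `call/2` functions, each returning a new connection.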
Basically, for web hosting and network usage, with its horizontal and vertical scalability, it is hard to beat Phoenix overall. Something may beat it in micro-tests like raw number performance or spamming “hello world” out a port, but it is already close on all of those, and given how close it gets while keeping all its safety and scaling, it is, in my opinion, unmatched overall. It is also trivial to interface it with other languages to do their specialty heavy lifting; the BEAM/EVM makes a fantastic and safe ‘glue’ interface to other languages, and even if the other languages do all the heavy work, having them talk through the BEAM/EVM still gains you a lot for server tasks.
Yes, I agree that the debugging story for Asp.Net in production is bad: mostly digging through log files trying to figure out what happened, which means you have to program in the log instructions beforehand. And Asp.Net Core stability was a problem for me when I was using RC1, and upgrading to the release was not easy. Those are some of the reasons I decided to work with node.js on a serverless architecture.
Anyway, down to some practical questions about global scalability with Phoenix:
Are there any hosts that provide an auto-scaling, pay only for what you use model (i.e. no monthly costs, only costs per request)?
Or how would you set up a web api (or socket endpoint) that could have endpoints in multiple continents (N. America, Europe, Asia, etc.)?
How much would you expect such a setup to cost?
More specifically, if I wanted to create a twitch MMO game that had everyone in the same game universe, what kind of setup would be possible with Phoenix, and what hosts could do it?
I honestly would have no clue; auto-scaling servers kind of scare me, to be honest. Every time I’ve calculated my usage on them (which tends to be high), I *always* find it *substantially* cheaper to just rent a server outright (I particularly like OVH; I think I can give a reference number or something if you want it?)
Phoenix/Elixir/BEAM distributes horizontally, so you can have servers all over the world chattering over a connection between them as necessary; this is actually really, really easy to do if you have the hardware there.
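The distribution story builds on ordinary message passing: `send`/`receive` work the same whether the pid lives on this node or on a node on another continent (connected via `Node.connect/1`). Here is a runnable sketch with both processes local, since a self-contained example cannot assume a second machine:

```elixir
# Ordinary BEAM message passing. The exact same send/receive code works
# unchanged if `pid` lives on a connected remote node; here both
# processes are local so the example is self-contained.
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, {:pong, node()})
    end
  end)

send(pid, {:ping, self()})

receive do
  {:pong, from_node} -> IO.inspect(from_node)  # e.g. :nonode@nohost
after
  1_000 -> IO.puts("timeout")
end
```

This location transparency is why spreading servers across regions is mostly an operational exercise rather than a code change.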
Well, I don’t use auto-scaling, but even renting a VM instead of a dedicated server at OVH starts at around $3/month for decent specs (and not overloaded like DreamHost and such get), though they tend to only be in the USA and Canada, I think (honestly I’ve not looked at whether they have locations elsewhere; they might), and there are plenty of places to choose from. I actually have one BIG dedicated server with OVH in Canada and a few mini-VMs in a few locations in the USA that communicate back to the big one; it works really well for me. ^.^
A better example to use might be Eve Online as a massive singular game world. ^.^
Twitch-style streaming is a slice of hell because of video encoding; I’d give that task to Youtube, to be honest, and not do it myself unless absolutely necessary. >.>
However, yes, Phoenix could do it: just segment your game world properly (lots of libraries can help with that), have parts that are near each other in-game be physically nearby and communicating too, and hand off any computationally expensive math to a NIF (I’d choose Rust for this purpose) or large data processing to a Port (I’d probably use C++, Rust, or OCaml, depending on exactly what is being done). You would then have a system that scales as well as your hardware allows while absolutely saturating the heck out of the individual pieces of hardware too. It would scale well; that is what the BEAM does.
I was thinking that a similar architecture would be ideal: mini servers around the world passing data to the main server.
From your experience about how many sockets do you think the $3 server could manage safely? (Or how many requests/second? Or whatever metric you might have…)
Yeah, Eve Online is a good example of what I am interested in architecting. Sorry for not clarifying what I meant by “twitch”: I didn’t mean the Twitch game video streaming site, but rather twitch game mechanics (“if you twitch you die”), where latency can determine the winner in a direct conflict. (Ideally, round-trip latency from client to server and back would be less than 250ms for critical messages. So the distributed servers would basically be in charge of verifying that messages are valid and letting the client know that their action was accepted per the game rules.)
Hmm, mine are not particularly heavy, so I’ve never needed to scale them up (a few hundred requests a second at the heaviest; the big calls tend to hit the big server directly because of API access)… You can test it if you want, and it can scale up pretty easily from the $3 one too.
In Fortunes (benchmark), .net core’s best implementation gives 20,021 responses per second vs Phoenix’s 32,559.
In multiple queries (full ORMs of both), .net core runs 2,177 requests per second while Phoenix runs 1,857 (each request consists of 20 queries, so roughly 43,500 vs 37,100 queries per second).
In single queries (again, full-ORM versions only), .net core runs 25,827 requests per second vs Phoenix’s 30,182.
In JSON serialization, Revenj (C#'s best) gives 143,516 responses per second vs Phoenix’s 164,921.
As you see, the values overlap. Phoenix beats .net core in three benchmarks while .net core beats Phoenix in one, which shows that Phoenix (Elixir) is not only good at holding persistent websocket connections, but is also good at performing database queries, I/O, JSON serialization, etc.
As I said earlier, benchmarks do not reflect real-world performance, and we should consider other factors too, like how easy a technology is to scale. And when it comes to scaling, Elixir (and Erlang) beats all other technologies.
As another note, we started running Asp.Net Core at work on Linux (slowly migrating off Windows) and it is substantially slower than the Phoenix setups here, though the quality (or lack thereof) of that codebase probably contributes… >.>
At this moment, 12.8.2019, the numbers of Fortunes are like this:
Phoenix: rank 175 (phoenix), 53,383 responses per second
ASP.NET Core: rank 14 (aspcore-ado-pg), 300,613 responses per second, with one error
It does not say which version of ASP.NET Core exactly, but I assume it’s one of the 2.2.5-6 versions.
The question is why Phoenix is about 6x slower. It is faster than it used to be, but still.
I understand the safety part of Phoenix. There is no production debugging in ASP.NET Core; you could produce a dump and WinDbg it, but good luck with that rabbit hole, especially when it runs on a Linux machine. But when you read here about Erlang and Cowboy and how fast they are with the hardware, how can it be way slower than ASP.NET Core? I don’t get it. It has to be a configuration issue.
The first one is Actix (Rust): actix-core at 702,165, which is circa 14x faster.
In data updates the situation is even worse: ASP.NET Core is almost ten times faster than Phoenix.
In multiple queries, ASP.NET Core is just 6x faster.
I’m not bragging about ASP.NET Core; I would love to switch to something better and be done with Microsoft for good, but the results are unconvincing. And it is not a cheap machine they are running the tests on; it’s actually quite a beast.