I’m trying to evaluate a back-end technology for a simple real-time game I’m thinking of creating (a bit like slither.io). A Node.js socket server springs to mind, as does ASP.NET Core SignalR, but Phoenix LiveView seems like a potential candidate too. I have seen some LiveView demos integrating with an HTML canvas running at around 60fps, which is great, but I am curious what sort of number of connections (players) LiveView may be able to cope with. I am also thinking about the cost of deployment: I understand Elixir in general has a good reputation for handling a lot of connections with fewer servers, and ideally I would like to start small with a single server. My only reservation about using Elixir is the performance of Elixir itself, since all of the game code would be running on the server. However, the game will be very simple, so maybe Elixir is OK.
Any tips, suggestions or guidance as to whether using LiveView is a good idea for such a project, and how many players I might be able to cope with on a single server, would be very helpful - thanks
This is probably fine. Phoenix LiveView connections are, in the end, plain Phoenix WebSocket connections (they just ‘talk in LiveView patches’ rather than arbitrary data). To give you an idea of how well these scale, the ‘road to 2M websocket connections’ blog post might be an interesting read.
About deployment: there are other threads on this forum discussing it in more detail, but starting out with a free server on e.g. Gigalixir is possible, and when you move to a paid solution you usually need a less resource-intensive server than an equivalent Node.js or Ruby system.
Elixir would definitely be fast enough for a game like slither.io. It is mainly intensive number-crunching (on large datasets) that Elixir is not optimized for, so essentially you’ll be fine until you want Elixir to run e.g. a physics engine for you.
I’m gonna have to disagree with @Qqwy: I do not think LiveView is a good fit for games, at least as long as what is meant by that is that the game frames would be rendered server-side. It could be completely fine for game “UI”, menus and so on, but for rendering the actual game at 30-60fps it has an unavoidable weakness: latency.
Even if we assume a perfect server that can always render at exactly 60fps for every client, latency between client and server will always lead to a choppy experience. If you happen to be in the same region as the server, it might look OK; if you’re not, it will get increasingly bad.
This could be clearly seen in the Phoenix Frenzy contest, where several people made games. They were cool, in a tech demo sense, but honestly I didn’t think any of them were all that playable as games.
What these games, and the original 60fps demo DO prove is that Phoenix can render stuff really, really fast, and this is important to prove. If rendering is slow, the total scalability of the system will be low, because each client would be imposing a heavy burden on the system. Fortunately rendering is fast, so we can expect the server to handle a lot of clients. This means that Phoenix LiveView will be great at doing form validation for 10k clients in Australia, even with 150ms of network lag. From a game rendering perspective though, 150ms lag is generally unplayable.
Keep in mind that the lag here is not player to player lag, but player to viewport lag. 150ms player to player lag isn’t great, but at least the player feels in control of their character. 150ms player to viewport lag will feel like 6 fps, no matter how many actual frames per second you’re rendering.
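To put rough numbers on that “feels like 6 fps” claim (a back-of-the-envelope sketch, not a benchmark): with server-rendered frames, a player’s input cannot show up on their own screen sooner than one round trip, so felt responsiveness is bounded by the round-trip time regardless of render rate.

```elixir
# Back-of-the-envelope: with server-side rendering, the soonest a player's
# input can be reflected on screen is one round trip later, so the *felt*
# responsiveness is roughly 1000 ms divided by the round-trip time,
# no matter how many frames per second are actually rendered.
perceived_fps = fn rtt_ms -> Float.round(1000 / rtt_ms, 1) end

perceived_fps.(150) # => 6.7  (the "feels like 6 fps" case above)
perceived_fps.(16)  # => 62.5 (only realistic very close to the server)
```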
I agree with you up to a point. Regardless of what tech stack is used, for a fair real-time multiplayer game it is required to keep and update the game state on the server. Clients then receive the new state as input and decide how to render it. In many cases clients do not obtain the ‘full’ server state, to e.g. hide other players that are not currently in view, for two reasons:
to reduce data sent between server and client.
to ensure that players only ‘see’ what they are supposed to see (to make cheating more difficult).
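A minimal sketch of that filtering step, in case it helps make the idea concrete. All names here (`GameState.View`, `for_player/2`, the `:pos` keys, the 500-unit radius) are made up for illustration, not from any real codebase:

```elixir
defmodule GameState.View do
  @moduledoc """
  Hypothetical per-player view of the authoritative server state:
  strip out everything outside the player's viewport before sending it.
  """
  @view_radius 500

  # Keep only entities within @view_radius of the player's position.
  def for_player(entities, %{pos: {px, py}}) do
    Enum.filter(entities, fn %{pos: {x, y}} ->
      :math.sqrt(:math.pow(x - px, 2) + :math.pow(y - py, 2)) <= @view_radius
    end)
  end
end
```

In a LiveView you would assign only this filtered list, so the diff sent over the WebSocket covers just what changed inside the player’s view.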
As long as animations and other graphical embellishments are handled client-side, a LiveView-based system does not send significantly more data than one of those other systems.
These systems have the same round-trip lag problem that LiveView has. In some cases it is ‘hidden’ by faking the player character’s movement locally as soon as they press a button. That might make a game feel more responsive when latency is low, but will make the game seem buggy when latency is high, because depending on the actions of other players, the local player’s actions might get altered or invalidated and thus have to be reversed.
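The ‘fake it locally, correct later’ idea can be sketched in a few lines. Purely illustrative — in practice the prediction step runs in client-side JavaScript, not on the server, and these function names are invented for the example:

```elixir
# Client-side prediction in miniature: apply the input immediately for a
# responsive feel, then replace the local guess once the authoritative
# state arrives. When the server disagrees, the guess is discarded, which
# is the visible correction ("snap back") described above.
predict = fn {x, y}, {dx, dy} -> {x + dx, y + dy} end
reconcile = fn _local_guess, authoritative -> authoritative end

guess = predict.({100, 100}, {5, 0})  # => {105, 100}, shown instantly
reconcile.(guess, {103, 100})         # => {103, 100}, the server wins
```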
Then why would one use LiveView in the first place?
At least client-rendered games are responsive, and if one isn’t, you know you have a bad internet connection. How would a user know whether the game doesn’t work or they just have a poor connection to the server? They don’t know or care whether the game is rendered server-side or client-side; it would just feel like playing on Google Stadia.
Because it reduces the ceremony required to create a multiplayer game.
LiveView of course is not perfect. As with most things in computing, the real answer is ‘it depends’. I believe LiveView to be a fine contender for simple real-time games like agar.io or slither.io. And for obvious reasons it is also very useful for turn-based games and virtual tabletop games (card games, chess, etc.).
If you have a game with lots of computation (e.g. physics), many entities (like thousands of bullets flying at a time), or really precise timing requirements, then LiveView is likely not the tool for the task at hand.