I regularly use gen_servers and gen_statems, but for most “regular” stuff you would do in a normal web app, as others have pointed out, you might not need to: most of the internal architecture of what you’ll be using (say cowboy/phoenix, ecto/db connections, oban, etc.) already uses processes to model its behaviour/functionality, so you basically don’t need to worry about them in those cases.
But I find plenty of uses for them. Some of the places where I’ve used them:
- Modelling complex, data-intensive aggregations spanning millions of rows and producing literally millions of records holding aggregations over arbitrary timeframes and conditions (overlapping timeframes, etc.). Pure SQL would have been a nightmare, not only to model and use, but to maintain. It meant organising the syncing from external datastores (concurrently, while controlling the concurrency), converting the data, storing it, then querying and organising it, taking into account that things need to happen in a defined set of steps: B’s only start after all A’s have finished syncing, C’s can start as soon as their B counterparts are finished, while others have their own lifecycle, but it all needs to tie into a single “flow”. I won’t say “trivial”, because the problem itself was far from trivial, but in terms of logic it was very simple, explainable in a diagram, and testable: it was basically genservers starting and monitoring others, batching things from the db, calculating, storing, and exiting, and when those finished, moving to the next ones. All this while guaranteeing that db timeouts/crashes etc. wouldn’t throw out hours of aggregations and restart the whole thing from the start (see the first sketch after this list).
- Using gen_statem/gen_server to model fetching data from external APIs (say fb’s API, or whatever you have), again allowing controlled concurrency and/or rate limiting: start X of them, start another as each finishes, until you’ve gone through them all, then schedule another cycle. Each individual one is a sequence of steps: check you have a valid token; if not, request a new token and substitute it; now request the data, update what you need, and move to the next one; if you can’t get a token, warn the user somehow, etc. (the gen_statem sketch below the list shows this flow).
- In a game I’m (still) re-writing, each game, each draft, etc. are all single processes, either gen_servers or statems. The game is turn-based, so its process receives commands, processes them while guaranteeing they’re allowed (correct player, correct moves, enough resources, etc.), then broadcasts them back to the players and dumps them into a db. The draft is 8 players at the same time: each player starts with a pool of choices, each pick has a timer, after each pick that “pool” moves on to the next player, and it keeps going until all pools are empty. It’s all concurrent, but you can only pick when you have a pool, and after you pick one, the player “behind” you picks their own and moves their pool to you. Again everything is separated: each game, draft, etc. can do its own thing (dumping to db as a safety measure, broadcasting, etc.) without interfering with the others (the game server sketch after the list shows the command-handling part).
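To make the first bullet concrete, here’s a minimal sketch of that staged orchestration: one GenServer per run, monitoring a batch of workers per stage, and only moving to the next stage once the current one has drained. `Sync.Worker.run/2` and the stage list are hypothetical stand-ins, not the actual code:

```elixir
defmodule Sync.Orchestrator do
  use GenServer

  # Stages run in order; within a stage, jobs run concurrently.
  # B's only start once all A's are done, and so on.
  @stages [a: [1, 2, 3], b: [1, 2], c: [1]]

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts), do: {:ok, %{stages: @stages, running: %{}}, {:continue, :next_stage}}

  @impl true
  def handle_continue(:next_stage, %{stages: []} = state), do: {:stop, :normal, state}

  def handle_continue(:next_stage, %{stages: [{stage, jobs} | rest]} = state) do
    # Start one monitored worker per job in this stage.
    running =
      for job <- jobs, into: %{} do
        # Sync.Worker is hypothetical: batch from the db, aggregate, store, exit.
        {_pid, ref} = spawn_monitor(fn -> Sync.Worker.run(stage, job) end)
        {ref, {stage, job}}
      end

    {:noreply, %{state | stages: rest, running: running}}
  end

  @impl true
  def handle_info({:DOWN, ref, :process, _pid, :normal}, state) do
    running = Map.delete(state.running, ref)

    if map_size(running) == 0 do
      # Whole stage finished: only now move on to the next one.
      {:noreply, %{state | running: running}, {:continue, :next_stage}}
    else
      {:noreply, %{state | running: running}}
    end
  end

  def handle_info({:DOWN, ref, :process, _pid, _crash}, state) do
    # A db timeout/crash only costs this one batch: restart just that
    # job instead of losing hours of finished aggregations.
    {stage, job} = Map.fetch!(state.running, ref)
    {_pid, new_ref} = spawn_monitor(fn -> Sync.Worker.run(stage, job) end)

    running =
      state.running
      |> Map.delete(ref)
      |> Map.put(new_ref, {stage, job})

    {:noreply, %{state | running: running}}
  end
end
```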
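The API-fetching bullet maps naturally onto gen_statem states. This is only a sketch under assumptions: `TokenStore`, `Api`, `Store` and `Notifier` are made-up modules, and a parent process would spawn_monitor X of these at a time (starting another as each one exits) to get the controlled concurrency:

```elixir
defmodule ApiFetcher do
  @behaviour :gen_statem

  def start_link(account), do: :gen_statem.start_link(__MODULE__, account, [])

  @impl true
  def callback_mode, do: :state_functions

  @impl true
  def init(account) do
    {:ok, :check_token, %{account: account, token: nil},
     [{:next_event, :internal, :run}]}
  end

  # Step 1: check we have a valid token; if not, go fetch one.
  def check_token(:internal, :run, data) do
    case TokenStore.get(data.account) do
      {:ok, token} ->
        {:next_state, :request_data, %{data | token: token},
         [{:next_event, :internal, :run}]}

      :error ->
        {:next_state, :fetch_token, data, [{:next_event, :internal, :run}]}
    end
  end

  # Step 2: request a new token and substitute it, or warn the user.
  def fetch_token(:internal, :run, data) do
    case Api.request_token(data.account) do
      {:ok, token} ->
        TokenStore.put(data.account, token)
        {:next_state, :request_data, %{data | token: token},
         [{:next_event, :internal, :run}]}

      {:error, reason} ->
        Notifier.warn(data.account, {:no_token, reason})
        {:stop, :normal}
    end
  end

  # Step 3: request the data, update what we need, and exit;
  # the parent then moves on to the next account.
  def request_data(:internal, :run, data) do
    {:ok, payload} = Api.fetch(data.account, data.token)
    Store.save(data.account, payload)
    {:stop, :normal}
  end
end
```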
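And a stripped-down version of the game process: one GenServer per game acting as the single writer, validating each command before applying it, then dumping and broadcasting. `Game.Registry`, `Game.PubSub`, `Game.Store` and the two-player rule helpers are placeholders; the broadcast uses Phoenix.PubSub:

```elixir
defmodule Game.Server do
  use GenServer

  # One process per game; every command for that game is serialized here.
  def start_link(game_id),
    do: GenServer.start_link(__MODULE__, game_id, name: via(game_id))

  def play(game_id, player, move),
    do: GenServer.call(via(game_id), {:play, player, move})

  defp via(game_id), do: {:via, Registry, {Game.Registry, game_id}}

  @impl true
  def init(game_id), do: {:ok, %{id: game_id, turn: :p1, board: %{}}}

  @impl true
  def handle_call({:play, player, move}, _from, state) do
    # Guarantee the command is allowed before applying it.
    with :ok <- check_turn(state, player),
         :ok <- check_move(state, move) do
      state = apply_move(state, player, move)
      Game.Store.dump(state)                  # dump to db as a safety measure
      Phoenix.PubSub.broadcast(Game.PubSub, "game:#{state.id}", {:move, player, move})
      {:reply, :ok, state}
    else
      {:error, _} = error -> {:reply, error, state}
    end
  end

  defp check_turn(%{turn: player}, player), do: :ok
  defp check_turn(_state, _player), do: {:error, :not_your_turn}

  defp check_move(_state, _move), do: :ok     # real rules: legal move, resources, ...

  defp apply_move(state, player, move) do
    %{state | board: Map.put(state.board, move, player), turn: other(player)}
  end

  defp other(:p1), do: :p2
  defp other(:p2), do: :p1
end
```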
There are other cases where I’ve used them, but it basically comes down to concurrency control and the need for access serialization. Say you’re linking an account to something external (stripe or paypal, whatever): if you make it pass through a process (one per user) that acts as a serialization point, then when 2 (or more) requests from the same user come in at the same time (even someone trying to poke holes in your system), you can easily model it so that each request is dealt with only one after the other, and when the second one gets processed you can bail out immediately, because you already have the result from the first one, without having to rely on ad-hoc locks or whatever, as long as all interactions with that particular resource are modelled through it. This process can then shut itself down after X minutes of inactivity and will start & load whenever a new request comes in, guaranteeing it’s always consistent.
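A minimal sketch of that serialization-point pattern, assuming a unique-keyed Registry named Billing.Registry is running, with made-up `Billing.Store`/`Billing.Stripe` modules (the process mailbox does the serialization; the idle timeout does the self-shutdown):

```elixir
defmodule Billing.AccountLink do
  use GenServer

  @idle_timeout :timer.minutes(5)

  # One process per user, started on demand. Concurrent calls for the
  # same user queue up in this process's mailbox, so they are handled
  # strictly one after the other.
  def link(user_id, params) do
    pid =
      case GenServer.start(__MODULE__, user_id, name: via(user_id)) do
        {:ok, pid} -> pid
        {:error, {:already_started, pid}} -> pid
      end

    GenServer.call(pid, {:link, params})
  end

  defp via(user_id), do: {:via, Registry, {Billing.Registry, user_id}}

  @impl true
  def init(user_id) do
    # Load whatever we already know from the db, so state is consistent
    # even if we were shut down and restarted between requests.
    {:ok, %{user_id: user_id, link: Billing.Store.fetch_link(user_id)}, @idle_timeout}
  end

  @impl true
  def handle_call({:link, _params}, _from, %{link: link} = state) when not is_nil(link) do
    # Second (or replayed) request: the first one already did the work,
    # so bail out immediately with its result.
    {:reply, {:ok, link}, state, @idle_timeout}
  end

  def handle_call({:link, params}, _from, state) do
    {:ok, link} = Billing.Stripe.create_link(state.user_id, params)
    Billing.Store.save_link(state.user_id, link)
    {:reply, {:ok, link}, %{state | link: link}, @idle_timeout}
  end

  @impl true
  def handle_info(:timeout, state) do
    # No requests for @idle_timeout: shut down; the next request will
    # start a fresh process and reload from the db.
    {:stop, :normal, state}
  end
end
```

In a real app you’d start these under a DynamicSupervisor and handle the small race where the process dies between lookup and call, but the mailbox-as-queue part is the whole trick.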