I think these “edge” solutions are great for content and assets. They handle geographical distribution, caching (it’s just a fall-through: if it’s not in the cache, go fetch it from the origin), resilience (no way to DoS them at a reasonable cost), they take load off the origin servers and improve response times, and probably save on bandwidth (and devops) as well. Especially if you’re building an MVP or something; but once you have a team working full-time it might become cheaper to build your own “edge” system, tuned to your needs?
But there are other things for which I think a “local” cache that can distribute itself is really useful (and many things that can’t be cached in any useful manner by an edge server operating as a fall-through). For instance, if you need to keep a cache of expired tokens, or rate-limiting info, and you need it shared across a group of nodes, then mnesia makes it very easy to do that and share it across a cluster automatically (without requiring a centralised store/instance to be queried). And you get to store it all deserialised, in any term format (or even as iolists); you can access it at ETS speed if needed, or with transaction guarantees, on any node.
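As a minimal sketch of what that could look like for the token case (the table name, fields and module are mine, not from any real codebase):

```elixir
defmodule TokenCache do
  # Hypothetical revoked-token cache replicated across the cluster.
  # Assumes :mnesia is already started on all connected nodes.

  def init do
    # ram_copies on every node: each node keeps a full in-memory replica,
    # so reads never leave the local VM.
    :mnesia.create_table(:revoked_tokens,
      attributes: [:token, :revoked_at],
      ram_copies: [node() | Node.list()]
    )
  end

  # Dirty write: no transaction, no locks. Fine for revocation, where a
  # tiny replication window is usually acceptable.
  def revoke(token) do
    :mnesia.dirty_write({:revoked_tokens, token, System.system_time(:second)})
  end

  # Dirty read runs at roughly ETS speed against the local replica.
  def revoked?(token) do
    :mnesia.dirty_read(:revoked_tokens, token) != []
  end
end
```

The point being: every node answers `revoked?/1` from its own RAM, while writes get propagated to the other replicas for you.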
I started playing with it because of some issues I was solving with keeping consistency across nodes, to store some “session” state about the users. Basically, if they’re in a queue, they can’t join or open a game; if they’re doing something else (have an open game), they can’t try to do something else. It’s a key plus a struct describing what they’re doing. I shoved that info into an ETS table initially, but once you go multi-node it’s no longer as simple, since requests can hit different nodes. So I started replicating the operations with rpc casts/calls across the nodes, and at that point I just thought: perhaps instead of writing all this (definitely bug-ridden) crap myself it’s better to just use mnesia.
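That check-and-set maps pretty directly onto an mnesia transaction; something like this (the table shape and names are my guesses, not the actual code):

```elixir
defmodule Sessions do
  # Atomically record what a user is doing, but only if they're idle.
  # Assumes a :sessions table created with attributes: [:user_id, :activity]
  # and ram_copies on the relevant nodes.
  def start_activity(user_id, activity) do
    :mnesia.transaction(fn ->
      case :mnesia.read(:sessions, user_id) do
        [] ->
          # Nothing in flight: record the new activity.
          :mnesia.write({:sessions, user_id, activity})

        [{:sessions, ^user_id, current}] ->
          # Already queued / in a game: abort the whole transaction.
          :mnesia.abort({:busy, current})
      end
    end)
  end
end
```

The transaction takes a lock on that key across replicas, so two requests hitting different nodes can’t both slip past the read before writing.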
And the same applies to “open” games that people can join (that flow has some complexities and validations: a player creating one, then a player joining, then a window for the creator to start, reject or cancel, locking resources, rejecting the second player if the accept times out, etc.). So these became two tables in mnesia; when a node joins the network it loads its own local copy from the other nodes, and no matter where the request hits, you can access that locally from RAM, which is super fast.
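The node-join part is mostly just telling mnesia about the cluster and asking for local replicas; roughly like this (assuming the tables already exist somewhere in the cluster, and using my hypothetical table names):

```elixir
defmodule ClusterJoin do
  # Run on a freshly started node after it has connected to the cluster.
  def join(existing_nodes) do
    # Tell mnesia which nodes already hold the schema and tables.
    {:ok, _} = :mnesia.change_config(:extra_db_nodes, existing_nodes)

    # Ask for a local in-RAM replica of each table; mnesia copies the
    # current contents over from the other nodes.
    for table <- [:sessions, :open_games] do
      :mnesia.add_table_copy(table, node(), :ram_copies)
    end

    # Block until the local copies are loaded before serving requests.
    :mnesia.wait_for_tables([:sessions, :open_games], 15_000)
  end
end
```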
All of this could be stored in postgres, but where’s the fun in that? I’m kidding; it’s just that if you imagine a lot of requests to open, join and start games, and these are requests a user expects to be fast, hitting the db every time is overhead. The same goes for rate limiting or token validation: if you store those in a db, every request hits the db n times through its lifecycle.
Plus it’s also quite versatile, in that you can do some operations as ACID transactions while others can be done without those guarantees (say token invalidation or rate limiting, if they’re not 100% crucial), so it lets you fine-tune quite a bit. (And you can mix, doing transactions in mnesia alongside transactions in your “db” and aborting both if something goes wrong in either.)
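One way to sketch that mixing, assuming an Ecto repo (`repo` here is a hypothetical `Ecto.Repo` module, not anything from the original post): nest the mnesia transaction inside the SQL one, and roll back the SQL side if mnesia aborts. Note this is not real two-phase commit; there is still a small window where mnesia has committed and the SQL commit then fails.

```elixir
defmodule Mixed do
  # Run SQL work and mnesia work together, aborting both if either fails.
  # db_fun runs inside the SQL transaction; mnesia_fun inside mnesia's.
  def transact_both(repo, db_fun, mnesia_fun) do
    repo.transaction(fn ->
      db_result = db_fun.()

      case :mnesia.transaction(mnesia_fun) do
        {:atomic, mnesia_result} ->
          {db_result, mnesia_result}

        {:aborted, reason} ->
          # Rolls back the enclosing SQL transaction too.
          repo.rollback({:mnesia_aborted, reason})
      end
    end)
  end
end
```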
I don’t feel brave enough to move the whole DB into mnesia (I’m still using postgres for plain users, etc.), as you kinda need to know what you’re doing to ensure you don’t f*** up the copies, replication, etc., and right now it works fine loading from pg into ETS or processes for certain things. (Disclaimer: I’m no expert in mnesia or anything else; this is just my reading of the docs and the small experiments I’ve done. I’m sure it’s no silver bullet, but it seems like a very nice tool. PS: I mean no disrespect by referring to mnesia as a “cache”.)