Thank you. This is excellent!
Currently I'm following the "to spawn or not to spawn" approach: functions and modules deal with the thought concerns (the domain logic), and processes deal with the runtime concerns.
And I try to use a registry with a name or id instead of direct linking or storing pids. It's pretty similar to preferring service discovery over configuring IP addresses.
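For anyone curious, a minimal sketch of that pattern in Elixir, with invented names (`MyApp.Registry`, `Player.Session`): callers address the process by a stable id, and the Registry resolves the pid at call time, so a restarted process is found again under the same name.

```elixir
defmodule Player.Session do
  use GenServer

  # Register under a stable id instead of handing out pids.
  def start_link(player_id) do
    GenServer.start_link(__MODULE__, player_id, name: via(player_id))
  end

  # Callers only ever know the id; the pid is resolved at call time.
  def get_state(player_id), do: GenServer.call(via(player_id), :get_state)

  defp via(player_id), do: {:via, Registry, {MyApp.Registry, {:player, player_id}}}

  @impl true
  def init(player_id), do: {:ok, %{id: player_id}}

  @impl true
  def handle_call(:get_state, _from, state), do: {:reply, state, state}
end

# The registry itself would sit in the supervision tree, e.g.:
#   {Registry, keys: :unique, name: MyApp.Registry}
```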
I believe it's currently the safest and cleanest way of doing things. If you're running your business on the BEAM, this is absolutely the recommended approach.
But from time to time I also wonder about wrapping processes into something like an object-oriented language. Though it's not popular across the community, it might be suitable for making something interesting, for example an entity system with runtime concerns. Maybe when you define an object using the wrapper, you'd get a supervision tree, default recovery behaviour, type, struct, methods, and possibly a GraphQL type resolver for free. Or you could write a custom view or read model on top of this entity. And it might be quite suitable for things like fast prototyping.
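A toy sketch of what such a wrapper could look like, via a `__using__` macro. `Entity`, the `:fields` option, and `Room` are all invented for illustration, and only the struct-plus-process part is shown (supervision defaults, resolvers, etc. left out):

```elixir
defmodule Entity do
  defmacro __using__(opts) do
    fields = Keyword.get(opts, :fields, [])

    quote do
      use GenServer
      # The "type" part: a struct with the declared fields.
      defstruct unquote(fields)

      # The "runtime" part: one process per entity instance.
      def start_link(args), do: GenServer.start_link(__MODULE__, args)

      @impl true
      def init(args), do: {:ok, struct(__MODULE__, args)}

      @impl true
      def handle_call(:get, _from, state), do: {:reply, state, state}
    end
  end
end

# Defining an entity now costs one line:
defmodule Room do
  use Entity, fields: [:name, :capacity]
end
```

Handy for prototyping, though the usual caveat applies: the macro hides a process behind every "object", which is exactly the kind of implicit spawning the "to spawn or not to spawn" rule warns about.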
Processes can definitely serve a few different purposes when designing an application, and sometimes it's fun to just play around a bit, trying out a totally different way of doing things. Like implementing an OO approach with processes…
However, for me it tends to come down to a few core points:
Concurrency
This is of course the first thing one tends to learn about the BEAM; concurrency is implemented by processes exchanging messages, after all. There's no point in launching 2 million processes just because you can, however; try to stay close to the natural concurrency of the problem you're solving.
If devices, end users, etc. are connecting to your application, then that's usually a good start: each of those probably needs at least one process, for as long as they're interacting. Do you have recurring tasks, cleanup tasks, etc.? They probably also need their own processes. But… each semi-involved task of some kind that's initiated by a user of the system? It might not warrant a process of its own; perhaps every user has a single "background worker", effectively serializing tasks per user in order to democratize resources somewhat, making sure a single user can't bog down the entire system…
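That per-user "background worker" idea can be sketched roughly like this (`UserWorker` is invented; the dynamic atom naming is for brevity only, a real system would register workers through a Registry, since atoms are never garbage collected):

```elixir
defmodule UserWorker do
  use GenServer

  # One worker process per user id.
  def start_link(user_id),
    do: GenServer.start_link(__MODULE__, user_id, name: :"user_worker_#{user_id}")

  # Fire-and-forget: tasks queue up in the mailbox and run one at a time,
  # so a single user can't monopolize the system.
  def run(user_id, fun), do: GenServer.cast(:"user_worker_#{user_id}", {:run, fun})

  @impl true
  def init(user_id), do: {:ok, user_id}

  @impl true
  def handle_cast({:run, fun}, user_id) do
    fun.()  # executed sequentially, in mailbox order
    {:noreply, user_id}
  end
end
```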
Fault Isolation
This one didn't really "click" for me until I'd started learning Erlang a long time ago and, after reading a couple of books and trying some examples, decided that I needed to build something "for real".
As I was toying around with a small online game prototype, juggling things like TCP connections, command processing, long-running environmental effects, room-based communication and navigation, etc., I finally realized that processes are a very powerful tool for fault isolation, especially in combination with proper supervision trees.
I was mostly used to C/C++ at the time, where errors are usually a bit of an all-or-nothing affair; either you anticipate and handle the error where it occurs, or your whole application comes crashing down. Naturally you've got ways to carefully navigate around that, but it's not easy either way.
With the BEAM, instead, we can start to think about which parts of our systems can safely fail, and how we can best handle that. User input processing and TCP communication? That all happens in user-specific processes, and if something fails, we just need to make sure that we've structured our processes so that everything that needs to be stopped or restarted is properly linked.
As a consequence, you can start thinking about separating very simple, reliable core parts of your application into their own supervision tree(s), so that even if everything else comes crashing down, those parts will survive, either to make sure everything else is properly restarted again, or if nothing else, to safely shut down, perhaps persisting critical data etc.
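As a rough sketch of that idea (all module names invented): start the reliable core before the riskier subtree, and use `:rest_for_one`, so a crash in the volatile part never touches the core, while a core crash restarts everything started after it.

```elixir
defmodule MyApp.Core do
  # Stands in for the "simple, reliable" part: critical state, persistence.
  use Agent
  def start_link(_), do: Agent.start_link(fn -> %{} end, name: __MODULE__)
end

defmodule MyApp.GameSupervisor do
  # Stands in for the volatile part: connections, sessions, effects.
  use Supervisor
  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: Supervisor.init([], strategy: :one_for_one)
end

# Order matters with :rest_for_one: children listed after a crashed child
# get restarted with it, children listed before it are left alone.
children = [MyApp.Core, MyApp.GameSupervisor]
{:ok, _sup} = Supervisor.start_link(children, strategy: :rest_for_one)
```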
Serialization
Another important concern is when some part of your application can't handle concurrency at all; this can then be wrapped in a process that manages whatever it needs to do, allowing other processes to send messages at will, always confident that you're processing them one at a time.
Naturally this can turn into a performance bottleneck, so it requires some careful thinking. If you're writing to a transaction log, for example, perhaps it's tolerable to keep the last few transactions in memory, only flushing to disk every now and then, in order to achieve better throughput. Yes, you risk losing some transactions, but depending on what you're doing that may be perfectly acceptable.
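A sketch of such a buffered log process (module name and flush policy invented): appends from any process are serialized through the mailbox, and the buffer only hits the disk every 100 entries, or on an explicit sync, trading durability for throughput.

```elixir
defmodule TxLog do
  use GenServer

  @flush_every 100

  def start_link(path), do: GenServer.start_link(__MODULE__, path, name: __MODULE__)

  # Any process may append concurrently; the mailbox serializes the writes.
  def append(entry), do: GenServer.cast(__MODULE__, {:append, entry})

  # Force the buffer to disk, e.g. before shutdown.
  def sync, do: GenServer.call(__MODULE__, :sync)

  @impl true
  def init(path) do
    {:ok, io} = File.open(path, [:append])
    {:ok, %{io: io, buffer: []}}
  end

  @impl true
  def handle_cast({:append, entry}, state) do
    state = %{state | buffer: [entry | state.buffer]}

    # Flush only every @flush_every entries: recent entries can be lost on
    # a crash, in exchange for far fewer disk writes.
    if length(state.buffer) >= @flush_every do
      {:noreply, flush(state)}
    else
      {:noreply, state}
    end
  end

  @impl true
  def handle_call(:sync, _from, state), do: {:reply, :ok, flush(state)}

  defp flush(%{buffer: []} = state), do: state

  defp flush(state) do
    state.buffer |> Enum.reverse() |> Enum.each(&IO.write(state.io, [&1, ?\n]))
    %{state | buffer: []}
  end
end
```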