The line I usually give in my talks is "run different things separately", meaning power different activities (jobs) by separate processes. As an example, I gave a high-level overview of one component from my first Erlang production system two years ago at ElixirConfEU. The relevant part starts here.
What makes things/jobs different? A simple factor I use is to consider whether they can fail/succeed separately. If I need to do X and Y, and a failure of X doesn't imply the failure of Y (or of the entire task I'm doing), then they should likely be powered by separate processes.
An out-of-the-box example is supervisors. A supervisor runs in a separate process, so it can do its work (applying the restart strategy) even if the workers themselves fail.
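A minimal sketch of that separation (`MyApp.Worker` is a hypothetical child module, not something from the post):

```elixir
# The supervisor runs in its own process. If MyApp.Worker crashes,
# the supervisor survives and restarts it per the :one_for_one strategy.
children = [{MyApp.Worker, []}]
Supervisor.start_link(children, strategy: :one_for_one)
```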
Another variation on the supervisor theme: say you want to start some job, and report to the user when it finishes (regardless of the reason). Then the reporter should run separately from the worker. The reporter monitors the worker, so it always knows when the worker is done, even if the worker crashes, or is brutally killed from the outside.
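A sketch of that reporter, using `spawn_monitor` (module and function names are mine, purely illustrative):

```elixir
defmodule Reporter do
  # Run the worker in a separate process and report its outcome,
  # no matter how it terminated.
  def run(work_fun, report_fun) do
    {pid, ref} = spawn_monitor(work_fun)

    receive do
      {:DOWN, ^ref, :process, ^pid, reason} ->
        # reason is :normal on success, or the crash/kill reason otherwise
        report_fun.(reason)
    end
  end
end
```

Because the `:DOWN` message is delivered for any termination, `Reporter.run(fn -> raise "boom" end, &IO.inspect/1)` still gets to report, even though the worker crashed.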
Yet another variant: a periodic job (cron). A separate process (let's call it the manager) ticks, starts the worker, and monitors it. Therefore, the manager can always do its job, regardless of the worker's success/failure.
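Here's one way such a manager could look as a GenServer (a hypothetical sketch; names and the interval mechanism are my choices, not from the post):

```elixir
defmodule PeriodicJob do
  use GenServer

  # The manager ticks on an interval, spawns and monitors a worker,
  # and keeps ticking no matter how the worker ends.

  def start_link(job_fun, interval_ms) do
    GenServer.start_link(__MODULE__, {job_fun, interval_ms})
  end

  @impl GenServer
  def init(state) do
    schedule_tick(state)
    {:ok, state}
  end

  @impl GenServer
  def handle_info(:tick, {job_fun, _interval_ms} = state) do
    spawn_monitor(job_fun)
    schedule_tick(state)
    {:noreply, state}
  end

  def handle_info({:DOWN, _ref, :process, _pid, _reason}, state) do
    # The worker is done (success, crash, or kill); the manager is unaffected.
    {:noreply, state}
  end

  defp schedule_tick({_job_fun, interval_ms}),
    do: Process.send_after(self(), :tick, interval_ms)
end
```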
Another great example in practice is Phoenix channels. Each channel represents a separate conversation between the client and the server, and is powered by a separate process. If one conversation crashes, other conversations keep working properly, and the socket is not closed. It's also not just about crashes. As I've explained in my recent ElixirDaze talk, separate processes guard you from the total paralysis of the system. If one of your conversations (channels) is stuck, say due to a logical bug or suboptimal code, all other conversations (and the socket) keep working properly.
Also, as @Qqwy hinted, separate processes sometimes make sense as an optimization technique. Splitting a large computation into parallelizable chunks might improve the running time, even if the total work is all-or-nothing (all subtasks need to succeed).
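A quick sketch of that with `Task.async_stream` (the chunked sum is just a stand-in for real work): each chunk runs in its own process, and since the result is all-or-nothing, a crashed chunk takes the caller down with it by default.

```elixir
# Sum 1..100 by summing four chunks of 25 in parallel processes.
1..100
|> Enum.chunk_every(25)
|> Task.async_stream(&Enum.sum/1)
|> Enum.map(fn {:ok, partial_sum} -> partial_sum end)
|> Enum.sum()
# => 5050
```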
Another example is work which needs to allocate a larger amount of temporary memory. You could start a separate process, specifying that it begins with a larger heap, do the work there, then send the result to the caller and stop the process. Terminating the process releases its memory immediately, without putting pressure on the GC.
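A sketch of that pattern with `spawn_opt` (the workload and heap size are illustrative; note `:min_heap_size` is specified in words, not bytes):

```elixir
# Run temp-heavy work in a throwaway process that starts with a large heap.
# When the process exits, its whole heap is freed at once.
parent = self()

spawn_opt(
  fn ->
    result = Enum.sum(1..1_000_000)  # stand-in for the real memory-hungry work
    send(parent, {:result, result})
  end,
  min_heap_size: 1_000_000
)

receive do
  {:result, result} -> result
end
```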
So the usual factor for splitting is IMO error semantics (should a failure of X lead to the failure of Y?), or, in some special cases, technical optimization.