I haven’t been able to find a clear answer on this from Google, so I wanted to pose the question here.
Can a supervised Elixir process tolerate an out-of-memory error? It seems like it should, given isolated per-process heaps, supervision, and recovery. I’ve read that the VM itself can’t tolerate running out of memory from too many large concurrent processes or from atom flooding, but I wasn’t sure how this applies to supervised processes.
As soon as a single process fills up the memory (i.e. the machine’s current free memory is less than the required size of the new heap), it will bring the whole BEAM down. At least that’s how it looked when it happened to me, but that was with Erlang/OTP 16, some time ago.
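One way to spot a runaway process before it gets that far is the VM’s system monitor. A minimal sketch (the 10M-word threshold is arbitrary, my own choice, not from this thread): ask the runtime to send this process a message whenever any process garbage-collects with a heap above the threshold.

```elixir
# Request a {:monitor, pid, :large_heap, info} message whenever a
# process GCs with a heap larger than ~10M words (threshold is arbitrary).
:erlang.system_monitor(self(), [{:large_heap, 10_000_000}])

# Drain any notification that has already arrived (non-blocking).
result =
  receive do
    {:monitor, pid, :large_heap, info} -> {:large_heap, pid, info}
  after
    0 -> :no_offenders_yet
  end

IO.inspect(result)
```

You could run this from a dedicated monitoring process and log or kill the offender when the message arrives.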
You can limit the max heap size per process with Process.flag, for example (the spawned function body here is illustrative):
iex> spawn(fn ->
...>   Process.flag(:max_heap_size, %{size: 233, kill: true, error_logger: true})
...>   Enum.to_list(1..1000)
...> end)
#PID<0.85.0>
01:06:55.841 [error] Process: #PID<0.85.0>
Context: maximum heap size reached
Max Heap Size: 233
Total Heap Size: 448
Error Logger: true
GC Info: [old_heap_block_size: 0, heap_block_size: 466, mbuf_size: 0, recent_size: 0,
stack_size: 14, old_heap_size: 0, heap_size: 215, bin_vheap_size: 0,
bin_vheap_block_size: 46422, bin_old_vheap_size: 0, ...]
See the Erlang docs for more info (look for max_heap_size under process_flag).
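To tie this back to the original question: with a heap cap in place, exceeding it becomes an ordinary process exit, which a supervisor can recover from like any other crash. A minimal sketch (the module name, supervisor setup, and 1M-word cap are my own, not from this thread):

```elixir
defmodule Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

  def init(_arg) do
    # Cap this process's heap (in words); if it grows past the cap,
    # the VM kills only this process rather than the whole node.
    Process.flag(:max_heap_size, %{size: 1_000_000, kill: true, error_logger: true})
    {:ok, %{}}
  end
end

# A one_for_one supervisor restarts the worker after it is killed.
{:ok, _sup} = Supervisor.start_link([Worker], strategy: :one_for_one)
```

So a supervised process can “tolerate” blowing its own heap budget, but only if that budget is small enough that hitting it doesn’t already exhaust the machine’s memory.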