Can Elixir processes tolerate out of memory errors?

I haven’t been able to find a clear answer on this from Google so I wanted to pose the question here.

Can a supervised Elixir process tolerate an out-of-memory error? It seems like it should, given isolated per-process heaps, supervision, and recovery. I know I've read that the VM itself can't tolerate running out of memory from too many large concurrent processes or from atom flooding, but I wasn't sure how this applies to supervised processes.


As soon as a single process fills up the memory (the machine's current free memory is less than the required size of the new heap), it will bring the complete BEAM down. At least that's how it looked when it happened to me, but that was with Erlang/OTP 16, some time ago :wink:


You can limit the maximum heap size of each process with Process.flag/2:

iex> spawn(fn ->
...>   Process.flag(:max_heap_size, 233)
...>   IO.inspect(Enum.to_list(1..1000))
...> end)

01:06:55.841 [error] Process:          #PID<0.85.0>
     Context:          maximum heap size reached
     Max Heap Size:    233
     Total Heap Size:  448
     Kill:             true
     Error Logger:     true
     GC Info:          [old_heap_block_size: 0, heap_block_size: 466,
                        mbuf_size: 0, recent_size: 0, stack_size: 14,
                        old_heap_size: 0, heap_size: 215, bin_vheap_size: 0,
                        bin_vheap_block_size: 46422, bin_old_vheap_size: 0,
                        bin_old_vheap_block_size: 46422]

See the Erlang docs for process_flag/2 for more info (look for the max_heap_size option).
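To connect this back to the original question: combining a per-process heap cap with supervision is one way a system can survive a single runaway process, since the VM kills only the capped process and the supervisor restarts it. Here is a minimal sketch; the module name, cap size, and options are illustrative assumptions, not from this thread (the map form of :max_heap_size is the documented Erlang/Elixir API):

```elixir
# Sketch: a worker that caps its own heap. If it exceeds the cap,
# the VM kills only this process; its supervisor then restarts it,
# and the rest of the system keeps running.
# Module name and cap size are illustrative, not from the thread.
defmodule CappedWorker do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    # Cap this process at ~1M words. kill: true means the process
    # (not the VM) is killed on overflow; error_logger: true logs
    # a report like the one shown above.
    Process.flag(:max_heap_size, %{
      size: 1_000_000,
      kill: true,
      error_logger: true
    })

    {:ok, %{}}
  end
end

# Supervised as usual; a heap-cap kill is just another crash to recover from:
# Supervisor.start_link([CappedWorker], strategy: :one_for_one)
```

Note that this only guards against a process whose own heap grows too large; it does not protect the VM from total memory exhaustion across many processes, which is the scenario described in the earlier reply.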
