What happens when Erlang VM / Beam runs out of memory?

If “connections” vary in weight this much, counting them won’t really capture the situation; you’d need to construct a more specific measurement to control server load in that case.

For instance, in GraphQL a “request” is a poor measurement of load because it can traverse the data graph to arbitrary depth & size, so tools like Absinthe do complexity analysis to guard against buggy/malicious requests devouring all the server’s resources.
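For example, Absinthe lets you attach a complexity function to fields and reject documents whose total complexity exceeds a budget. A minimal sketch, where the schema, field names, and the limit of 50 are illustrative rather than from this thread:

```elixir
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  query do
    field :posts, list_of(:post) do
      arg :limit, :integer, default_value: 10

      # The cost of this field scales with how many items the client asks
      # for, multiplied by the cost of each child selection.
      complexity fn %{limit: limit}, child_complexity ->
        limit * child_complexity
      end

      resolve fn _args, _resolution -> {:ok, []} end
    end
  end

  object :post do
    field :title, :string
    field :body, :string
  end
end

# Reject documents whose computed complexity exceeds the budget:
# Absinthe.run(document, MyAppWeb.Schema,
#   analyze_complexity: true, max_complexity: 50)
```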

  1. VM out of memory, asks for more memory.

  2. kernel says f-u, kill -9

I think when the VM needs to allocate more memory, it’ll call some variant of malloc in the C library (which then makes the appropriate system calls). If no memory is available, the allocation fails (malloc returns NULL with errno set to ENOMEM).
The kernel will not kill the process in response to that call. Well, it might at some point, but not in-band with this call.
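In the BEAM’s case, a failed allocation makes the emulator abort and write an erl_crash.dump; the slogan you see in the logs and at the top of the dump is typically something along these lines (allocator, size, and type vary):

```
eheap_alloc: Cannot allocate 1041160320 bytes of memory (of type "heap").
```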

Furthermore, a useful thing to keep in mind (on Linux) is whether overcommit is enabled. Overcommit allows the kernel to hand out virtual memory without having the physical memory available to back it up. The kernel will try to find the required physical pages at the moment that the program actually starts writing to them, instead of at the moment that the program requested them.
With overcommit it’s like a bank: you deposit money, the bank lends it out, and if all the customers suddenly want to withdraw their deposits, it turns out the money isn’t actually there. A bank run.

Overcommit usually works very well and improves performance, until memory is really exhausted. When that happens - when there are no free physical pages left for the kernel to back a virtual page with - the kernel needs to hunt for pages. It can free some up by unmapping non-dirty mmap-ed files and the like, but if it’s really out of options, the OOM killer will start killing processes.

So, with overcommit, your system will become very, very slow before the BEAM gets killed, and it might not get killed at all if the kernel’s OOM killer chooses another process as the victim. You can adjust the heuristics a bit (see /proc/$pid/oom_*), and there have been recent improvements to the OOM killer’s heuristics. But with overcommit on, there are fundamentally no guarantees about which process dies or when.
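If you want to see where a node stands, here is a minimal sketch (Linux only; OomInfo is just an illustrative module name) that reads the relevant /proc entries from inside the BEAM:

```elixir
defmodule OomInfo do
  @doc "Inspect the overcommit policy and this BEAM's standing with the OOM killer."
  def report do
    os_pid = List.to_string(:os.getpid())

    %{
      # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
      overcommit_memory: read("/proc/sys/vm/overcommit_memory"),
      # Higher score => more likely to be chosen as the OOM killer's victim
      oom_score: read("/proc/#{os_pid}/oom_score"),
      oom_score_adj: read("/proc/#{os_pid}/oom_score_adj")
    }
  end

  defp read(path) do
    case File.read(path) do
      {:ok, contents} -> String.trim(contents)
      {:error, reason} -> {:error, reason}
    end
  end
end
```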


In our experience, it just happens and there’s not much you can do about it.

What happens to us is that things are humming along, each node using 150M of the 2G it’s allowed, and then bam, something unexpected happens and we get crashes. Kubernetes automatically restarts them, customers never notice, we sometimes get a crash dump, and we can usually find the cause. It probably amounts to a hundred crashes a year or so.

Typical causes are things like a huge spike in requests to a certain endpoint that queries a lot of data, with all of them in flight at the same time, and so we crash. Only a couple of our endpoints are particularly inefficient, so it’s hard to limit just those endpoints and not all the others.

Another thing that happens is library bugs that get into infinite loops. We had a fun issue where at 2-3am every other night we would get random crashes. It turned out a Chinese user’s filename string was triggering a bug in one of the Elixir slug libraries that made it loop forever and instantly crash the node.

We used to have a problem where parsing a very large amount of XML (say, a huge RSS feed) would immediately balloon our memory usage by 1G. It only took a couple of those on the same node at the same time to crash it. We’ve since switched to a Rust/Elixir library and that no longer happens.

Once in a while I’ll put in code to try to mitigate this, like refusing new requests when memory usage is too high, but it doesn’t really work. Memory usage usually climbs so fast that by the time you detect it, it’s too late to react. And if it were climbing gradually enough to detect, it would probably be better to crash than to run on in a hobbled state, randomly rejecting requests.
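For reference, this is roughly what that kind of check looks like as a plug (LoadShedPlug and the 1.5 GB threshold are made up for illustration); as noted, memory can spike past the threshold faster than the check can react:

```elixir
defmodule LoadShedPlug do
  import Plug.Conn

  @max_total_bytes 1_500_000_000

  def init(opts), do: opts

  def call(conn, _opts) do
    # :erlang.memory(:total) is the VM's own view of allocated memory, in bytes.
    if :erlang.memory(:total) > @max_total_bytes do
      conn
      |> send_resp(503, "overloaded")
      |> halt()
    else
      conn
    end
  end
end
```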

I do wish there were something in the BEAM to mark a particular pid with a memory limit; that way we could mark specific routes as memory-limited and largely contain the problem to a single crashed request instead of taking out the entire VM. I don’t think there is anything quite like that, though, and I’m not sure how it could really work with processes sharing immutable values.
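For what it’s worth, the BEAM does have something in this direction: the :max_heap_size process flag (OTP 19+) can kill a single process whose heap grows past a limit. As far as I know it doesn’t count the off-heap data of large (refc) binaries, which are shared, so it only partly addresses the concern above. A minimal sketch, with HeavyWork and the limit being illustrative:

```elixir
defmodule HeavyWork do
  # Size is in machine words; 50_000_000 words is roughly 400 MB on a 64-bit system.
  @max_heap_words 50_000_000

  @doc "Run `fun` in a separate process with a bounded heap, so a runaway request kills only that process."
  def run(fun) when is_function(fun, 0) do
    parent = self()
    ref = make_ref()

    {pid, mon} =
      spawn_monitor(fn ->
        # Kill this process (only) if its heap grows past the limit.
        Process.flag(:max_heap_size, %{size: @max_heap_words, kill: true, error_logger: true})
        send(parent, {ref, fun.()})
      end)

    receive do
      {^ref, result} ->
        Process.demonitor(mon, [:flush])
        {:ok, result}

      {:DOWN, ^mon, :process, ^pid, reason} ->
        # reason is :killed when the heap limit was exceeded
        {:error, reason}
    end
  end
end
```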


It’s easy to balloon memory with anything involving binaries and loops, if those loops create binaries in any significant way (either big ones or a lot of them) and the process doesn’t have “breaks” where it can be garbage collected. Whenever I see an OOM error, the first thing I try is adding a call to :erlang.garbage_collect - after each iteration if it’s not a tight loop, or after every X iterations if it is - and so far that has always solved it (not saying there aren’t other cases).
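A minimal sketch of that pattern (BinaryLoop, handle_item, and the every-100-iterations interval are illustrative, not from the post):

```elixir
defmodule BinaryLoop do
  @gc_every 100

  @doc "Process items that each produce sizeable binaries, forcing a GC every @gc_every iterations."
  def process_all(items, handle_item) when is_function(handle_item, 1) do
    items
    |> Enum.with_index(1)
    |> Enum.each(fn {item, index} ->
      handle_item.(item)

      # Without these "breaks", a busy process can hold on to references to
      # large binaries far longer than expected and balloon the node's memory.
      if rem(index, @gc_every) == 0, do: :erlang.garbage_collect()
    end)
  end
end
```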


Or pre-allocating memory to the BEAM on production servers, similar to what MSSQL does.
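There is something along these lines in erts_alloc: a “super carrier” can be reserved up front when the VM starts. A hedged sketch of the vm.args flags (the size is illustrative; check the erts_alloc docs for your OTP version before relying on this):

```
## Reserve a 2 GB super carrier at VM start (+MMscs takes a size in MB)
+MMscs 2048
## Only create carriers inside the super carrier
+MMsco true
```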