I accidentally wrote the module below. I assumed the first `perform(url)` function, when called, would call the second one. Instead, calling `QueeJob.Url.perform("test")` eats all system memory and kills the VM. I know the code is wrong, and the crash dump points to the garbage collector going crazy over it. This raised a question for me: how can one guard against potential memory leaks in an app? We have supervisors that help us when something crashes, but is there a way to protect against memory leaks like this that kill the entire VM?
defmodule QueeJob.Url do
  def perform(url) do
    perform({url, 0})
  end

  def perform({url, sleep}) do
    :timer.sleep(sleep)
    IO.puts("URL processed: " <> url)
  end
end
No, there is no way to prevent this kind of bug. If you need a highly reliable system you need at least two machines anyway. The only good advice here is: avoid these sorts of bugs and test thoroughly.
This code doesn’t look wrong to me; I’m not getting the point. What’s wrong with it?
It froze my computer and I had to press the power button to shut it down.
The first function clause accepts any call with a single argument, including a tuple. The second clause also takes a single argument, but only matches a two-element tuple. It is never called, because the first clause always matches first: `perform({url, 0})` re-enters the first clause, which wraps the argument in yet another tuple, so the recursion never terminates and the argument grows without bound. That is what ate your memory.
A good rule of thumb is to always place more specific pattern matches above less specific clauses.
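Following that rule, here is a minimal corrected sketch of the module (same names as above, clauses simply reordered so the more specific tuple clause is tried first):

```elixir
defmodule QueeJob.Url do
  # More specific clause first: matches only a {url, sleep} tuple.
  def perform({url, sleep}) do
    :timer.sleep(sleep)
    IO.puts("URL processed: " <> url)
  end

  # Catch-all clause: wraps a bare URL and delegates to the clause above.
  def perform(url) do
    perform({url, 0})
  end
end
```

With this ordering, `QueeJob.Url.perform("test")` delegates once to the tuple clause, prints "URL processed: test", and returns `:ok` instead of recursing forever.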
Thanks, something to explore.
But how would you enforce these memory constraints on the code above ?
I’ve also found an interesting thread, not sure if it still applies though:
There are no mechanisms in the Erlang VM to curb the growth of memory. The VM will happily allocate so much memory that the system goes into swap, or that virtual memory is exhausted. This can make the machine unresponsive even to KVM console access. In the past we have had to power-cycle machines to regain access to them.
The queue-based programming model that makes Erlang so much fun to write code for is also its Achilles heel in production. Every queue in Erlang is unbounded: the VM will not throw exceptions or limit the number of messages in a queue. Sometimes a process stops processing due to a bug, or fails to keep up with the flow of messages being sent to it. In that case, Erlang will simply allow that process’s queue to grow until either the VM is killed or the machine locks up, whichever comes first.
This means that when you run large Erlang VMs in a production environment you need OS-level checks that will kill the process if memory use skyrockets. Remote hands for the machine, or remote access cards, are a must-have for machines that run large Erlang VMs.
For people using the BEAM in production, does this still apply?
If so, is there a way to circumvent that kind of behaviour ?
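One mechanism that has been added since that thread was written is the `max_heap_size` process flag (Erlang/OTP 19+): you can cap an individual process’s heap and have the VM kill just that process rather than letting it take down the whole node. A minimal sketch, where the `HeapGuard` name and the default limit are my own illustration, not an established API:

```elixir
defmodule HeapGuard do
  @moduledoc """
  Spawns a function in a monitored process whose heap is capped at
  `max_words` (heap size is measured in machine words, not bytes).
  If the cap is exceeded, the VM kills that one process instead of
  exhausting system memory. Requires Erlang/OTP 19 or later.
  """
  def spawn_capped(fun, max_words \\ 10_000_000) do
    Process.spawn(fun, [
      :monitor,
      {:max_heap_size, %{size: max_words, kill: true, error_logger: true}}
    ])
  end
end
```

For example, `HeapGuard.spawn_capped(fn -> Enum.to_list(1..1_000_000_000) end, 1_000)` returns `{pid, ref}`, and the monitor soon delivers `{:DOWN, ref, :process, pid, :killed}` while the rest of the VM keeps running. Note this caps a single process’s heap, not its message queue; for unbounded mailboxes you still need back-pressure (e.g. GenStage) or periodic checks of `Process.info(pid, :message_queue_len)`, and the OS-level safety nets from the quote above remain good practice.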