Hi, I am seeing random memory spikes in my Phoenix application and I am trying to pinpoint the root of the problem. I am using a third-party tool called AppSignal that logs the CPU/memory usage of the application, but it doesn't show which processes are hogging the memory.
I was able to find the information I need by firing up Observer and connecting it to my remote node, specifically the Processes tab as well as the Load Charts and memory usage views, but that information is lost once the spike is gone. Is there a way to "persist" those metrics somewhere so I can review them later instead of staring at Observer all the time?
One tool that I have used extensively in the past is Recon by Fred Hebert: https://ferd.github.io/recon/recon.html. It provides a bunch of utilities for introspecting a running application and, in your case, for debugging memory issues. Specifically, you may want to look at the :recon.bin_leak/1 function (https://ferd.github.io/recon/recon.html#bin_leak-1), since you may be leaking binaries. In my experience this is one of the more common memory-related issues in production. Just a hunch though.
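If recon is already in your deps, a quick check from an IEx session attached to the node could look like the sketch below (the argument is just the number of results to return):

# Garbage-collect every process and report the ones that released the
# most refc binaries afterwards - good candidates for a binary leak.
:recon.bin_leak(5)

# For comparison, the top 5 processes by current memory usage.
:recon.proc_count(:memory, 5)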
If that doesn't help, you can always do some introspection yourself to see what is going on. For example, in an attached IEx session you could do something like this:
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid)} end)
# Process.info/1 returns nil for processes that exited in the meantime
|> Enum.reject(fn {_pid, info} -> is_nil(info) end)
|> Enum.sort_by(fn {_pid, info} -> info[:total_heap_size] end, :desc)
|> Enum.take(5)
That gets you the top 5 processes with the highest :total_heap_size. This could serve as the basis for your logging solution, though once the issue is sorted out you may not want to keep it around. Curious to see how you solve it; keep us posted!
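To actually persist the data rather than eyeball it in Observer, you could wrap that snippet in a small GenServer that samples on an interval and writes the result to your logs, which AppSignal or any log aggregator can then pick up. A minimal sketch, with a hypothetical module name and a made-up 60-second interval:

defmodule TopProcLogger do
  @moduledoc "Hypothetical example: periodically logs the heaviest processes."
  use GenServer
  require Logger

  # Made-up sampling interval; tune it to the resolution you need.
  @interval :timer.seconds(60)

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    schedule()
    {:ok, %{}}
  end

  @impl true
  def handle_info(:sample, state) do
    top =
      Process.list()
      |> Enum.map(fn pid -> {pid, Process.info(pid, :total_heap_size)} end)
      # Drop processes that exited between list/0 and info/2
      |> Enum.reject(fn {_pid, info} -> is_nil(info) end)
      |> Enum.sort_by(fn {_pid, {:total_heap_size, size}} -> size end, :desc)
      |> Enum.take(5)

    Logger.info("Top processes by :total_heap_size: #{inspect(top)}")
    schedule()
    {:noreply, state}
  end

  defp schedule do
    Process.send_after(self(), :sample, @interval)
  end
end

Drop it into your supervision tree while you are investigating and remove it once you have found the culprit.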