High CPU for beam.smp

Hello

I am using cpulimit to limit the CPU usage for an app I run via System.cmd:

{output, _} = System.cmd("cpulimit", ["-l", "20", "--", "ngspice", "-b", file_path], stderr_to_stdout: true)

It works well; however, after a few hours, beam.smp starts eating CPU, e.g. 98.0%.

Is there a relation between using System.cmd and the CPU usage of beam.smp? Or how would I investigate this high CPU usage?

  1. Is this in production or development?
  2. If development, are you sure it is your application that is gobbling CPU? Maybe you have another process running, like ElixirLS?

Thanks for your reply. It's for production, and I only have this app running on this server. I am currently setting up monit to restart Phoenix when the BEAM's CPU usage exceeds 90%, but I need to know the reason behind this…

Another question, does this application produce any output?

The application writes a temp file, which is then read by the command invoked via System.cmd above.
Here is the implementation:

    # Write the netlist to a temp file for ngspice to read
    {:ok, fd, file_path} = Temp.open("netlist.cir")
    IO.write(fd, netlist)
    File.close(fd)

    # /usr/local/bin/ngspice_proxy
    {output, _} =
      System.cmd("cpulimit", ["-l", "20", "--", "ngspice", "-b", file_path],
        stderr_to_stdout: true
      )

    File.rm(file_path)
    # consumed by the application and sent to the client via a channel
    output

I think the problem may be that output is growing. Does memory usage grow as well? If so, you should look for a runner other than System.cmd/3, with better support for things like streaming command output.
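A minimal sketch of such a runner, built directly on Port.open/2 (the module name and the callback-per-line design are my own choices, not from this thread): instead of accumulating the whole output like System.cmd/3, it hands each line to a callback as it arrives, so memory stays flat no matter how much the command prints.

```elixir
defmodule StreamingRunner do
  @moduledoc """
  Sketch: run an external command and stream its output line by line
  instead of buffering it all, as System.cmd/3 does.
  """

  # Runs `cmd` with `args`, calling `fun` for each output line.
  # Returns the command's exit status.
  def run(cmd, args, fun) do
    port =
      Port.open({:spawn_executable, System.find_executable(cmd)}, [
        :binary,
        :exit_status,
        :stderr_to_stdout,
        {:args, args},
        # deliver output split into lines of at most 4096 bytes
        {:line, 4096}
      ])

    loop(port, fun)
  end

  defp loop(port, fun) do
    receive do
      {^port, {:data, {:eol, line}}} ->
        fun.(line)
        loop(port, fun)

      {^port, {:data, {:noeol, chunk}}} ->
        # a partial line longer than the line buffer
        fun.(chunk)
        loop(port, fun)

      {^port, {:exit_status, status}} ->
        status
    end
  end
end
```

Usage: `StreamingRunner.run("echo", ["hello"], &IO.puts/1)` prints each line as it arrives and returns the exit status.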


For general high-CPU investigation you could try etop (http://erlang.org/doc/man/etop.html) or even remote_observe (https://github.com/dominicletz/remote_observe).


I’m not familiar with how cpulimit works with systemd, but I assume it uses cgroups and CPU quotas?

If so, you’ll want to follow the same instructions as for Docker: disable the schedulers’ busy wait with +sbwt none, and set the number of schedulers lower depending on what you are limiting the CPU to. Before OTP 23, if you set a CPU limit, the VM still booted one scheduler for every physical CPU on the system rather than basing the count on the cgroup configuration. So if you are limiting the CPUs to 1, you should add +S 1 to vm.args. I’ve written about both of these in more depth here: https://adoptingerlang.org/docs/production/kubernetes/#container-resources
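Assuming an OTP release with a rel/vm.args file, the flags above would look something like this (the +sbwtdcpu/+sbwtdio lines are the dirty-scheduler counterparts of +sbwt, included here as an assumption based on the linked article, not stated in this thread):

```
## Disable scheduler busy waiting so idle schedulers do not burn CPU
+sbwt none
+sbwtdcpu none
+sbwtdio none

## Match the scheduler count to the CPU quota (here: 1 CPU)
+S 1
```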

This might not be the whole issue you are having, but it will at least get rid of any noise and let you investigate what your app is using CPU for, instead of VM overhead.


This is a great blog post for digging into your app’s internals, specifically adding recon and eep to your deps for some in-depth analysis.

I found some logging code using inspect that was using more CPU than it should have.
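A hypothetical illustration of that trap (not the poster's actual code): when you interpolate `inspect/1` into a log message eagerly, the inspection runs on every call even when the log level discards the message. Elixir's Logger accepts a zero-arity function instead, which is only evaluated when the message will actually be emitted.

```elixir
require Logger

# Some large term that is expensive to inspect
big = Enum.to_list(1..100_000)

# Eager: inspect(big) runs on every call, even if :debug is filtered out
Logger.debug("state: #{inspect(big)}")

# Lazy: the function body only runs when the message passes the level check
Logger.debug(fn -> "state: #{inspect(big)}" end)
```

With the log level raised to :info in production, the lazy form costs almost nothing, while the eager form still pays for inspect on every call.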
