Horde memory usage


We are using Horde to have “about one” process per key in a cluster. We poll an external service, and on each run we ask Horde to start a process for the key we saw. If we have already seen the key, Horde reports already_started, as expected. However, we observe that the internal merkle_map grows linearly with the number of startup attempts.
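For context, the start pattern looks roughly like this (a sketch only; QueueWorker, MyHordeRegistry, and Poller are assumed names, and per-key uniqueness comes from the :via registration):

```elixir
defmodule QueueWorker do
  use GenServer

  # Each key registers under a :via tuple, so a second start attempt for
  # the same key fails with {:error, {:already_started, pid}}.
  def start_link(key) do
    GenServer.start_link(__MODULE__, key,
      name: {:via, Horde.Registry, {MyHordeRegistry, key}}
    )
  end

  @impl true
  def init(key), do: {:ok, key}
end

defmodule Poller do
  # Idempotent "start if not running" helper called on every poll.
  def ensure_started(key) do
    spec = %{id: key, start: {QueueWorker, :start_link, [key]}, restart: :transient}

    case Horde.DynamicSupervisor.start_child(MyHordeSupervisor, spec) do
      {:ok, pid} -> {:ok, pid}
      {:error, {:already_started, pid}} -> {:ok, pid}
    end
  end
end
```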

crdt_size = fn ->
  pid = Process.whereis(Elixir.MyHordeSupervisor.Crdt)
  state = :sys.get_state(pid)
  map_size(Map.get(state, :merkle_map).map)
end

iex(9)> crdt_size.()
iex(11)> crdt_size.()
iex(12)> Enum.each(1..10000, fn _ -> start_constant_queue.() end)
iex(13)> crdt_size.()                                            
iex(15)> Enum.each(1..10000, fn _ -> start_constant_queue.() end)
iex(16)> crdt_size.()                                            

Depending on the number of keys and the refresh rate, the memory limit is hit pretty quickly: the Elixir.MyHordeSupervisor.Crdt process grows to hundreds of megabytes.

While writing the code we followed the docs on hexdocs.

Any help would be appreciated. Thanks!


Not an expert, but perhaps this might help:


We must be careful not to accidentally load invalid state into our processes. For example, if the state of a process changes between deploys, then you might load invalid state and cause your process to crash. Here are some other things to look out for:

  • loading stale data
  • loading data fails, causing process to crash
  • there is no data to load, causing process to crash

Anytime we are loading state into our processes from an external source, we should be very careful. Erlang’s “let it crash” philosophy is predicated on the idea that a process will be in a known good state after a restart, and if we subvert this by loading invalid state, it could have a negative impact on the stability of our system.
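As a sketch of that advice (ExternalStore.fetch/1 is a hypothetical persistence source, not part of Horde), a worker can validate what it loads and fall back to a known-good default instead of crashing:

```elixir
defmodule SafeWorker do
  use GenServer

  def start_link(key), do: GenServer.start_link(__MODULE__, key)

  @impl true
  def init(key) do
    state =
      case ExternalStore.fetch(key) do
        # Only accept state in a shape this version of the code understands.
        {:ok, %{version: 1} = saved} -> saved
        # Missing, stale, or invalid data: fall back rather than crash-loop.
        _missing_stale_or_invalid -> default_state(key)
      end

    {:ok, state}
  end

  defp default_state(key), do: %{version: 1, key: key, items: []}
end
```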

We followed this problem up here: https://github.com/derekkraan/horde/issues/200
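For anyone hitting the same growth in the meantime, one possible stop-gap (an assumption on our part, not a confirmed fix) is to consult Horde.Registry before attempting a start, so repeated polls for an already-running key never reach the supervisor at all:

```elixir
defmodule Poller do
  # Hypothetical guard: only issue a start for keys that are not already
  # registered. `MyHordeRegistry` is an assumed registry name and
  # `start_constant_queue/1` is a hypothetical start helper.
  def maybe_start(key) do
    case Horde.Registry.lookup(MyHordeRegistry, key) do
      [{pid, _value}] -> {:ok, pid}      # already running somewhere in the cluster
      [] -> start_constant_queue(key)    # first sighting: actually start it
    end
  end
end
```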
