Offloading cache to new containers during docker rolling deploy

We have a cluster of 3 nodes in Docker containers (AWS, no Kubernetes).

Each node in a warm state runs 200+ dynamically supervised stateful processes. During a rolling deploy, we want to hand these processes off to the freshly started instances.

Is there some kind of “common recommended approach,” or should we simply do it at a low level, from Application.prep_stop/1 and/or each process’s GenServer.terminate/2 callback?
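For context, the low-level approach I have in mind would look roughly like this sketch. The module names and the `StateStore` API are hypothetical stand-ins for whatever external store is used:

```elixir
defmodule MyApp.Worker do
  use GenServer

  def init(arg) do
    # Trap exits so terminate/2 runs during a supervisor-driven shutdown.
    Process.flag(:trap_exit, true)

    # Try to recover state left behind by the previous container;
    # MyApp.StateStore is a hypothetical wrapper around e.g. Redis or S3.
    state = MyApp.StateStore.fetch(arg.id) || fresh_state(arg)
    {:ok, state}
  end

  def terminate(_reason, state) do
    # Persist state so the replacement container can pick it up on init.
    MyApp.StateStore.put(state.id, state)
    :ok
  end

  defp fresh_state(arg), do: %{id: arg.id, data: %{}}
end
```

One caveat with relying on terminate/2 alone: it only runs if the process traps exits and the supervisor shutdown timeout is long enough for 200+ processes to flush their state.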


I don’t know if this is standard, but Horde exists to distribute processes across nodes, and it also supports restoring state on other nodes via its own version of a Supervisor.
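A minimal setup sketch, assuming the nodes are already clustered (module names are hypothetical; `members: :auto` tells Horde to track cluster membership itself):

```elixir
# In the application's supervision tree:
children = [
  {Horde.Registry, name: MyApp.HordeRegistry, keys: :unique, members: :auto},
  {Horde.DynamicSupervisor,
   name: MyApp.HordeSupervisor, strategy: :one_for_one, members: :auto}
]

# Starting a worker under the distributed supervisor, so Horde can
# restart it on another node if this one leaves the cluster:
Horde.DynamicSupervisor.start_child(
  MyApp.HordeSupervisor,
  {MyApp.Worker, id: "worker-42"}
)
```

Note that Horde restarts the process elsewhere; carrying the in-memory state across still needs a handoff step (e.g. persisting in terminate/2 and reloading in init/1).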


Thank you! With any kind of external storage it’d be a no-brainer, even without Horde. I was wondering whether it’s possible without having to serialize processes to external storage and deserialize them back.

Oh I see, using external storage without serialization. Outside of using Mnesia to write state during termination and read it back during startup, nothing comes to mind. If it absolutely needs to be external, I’d personally lean on erlang:term_to_binary/1 to store into a cache and erlang:binary_to_term/1 to load. There is a good amount of flexibility there, as you can create a ‘migration’ function to transform the old state into the new state format if needed.
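The migration idea above could be sketched like this: tag the serialized payload with a version number, and on load, transform any older format into the current one (the version numbers and the v1 shape are made up for illustration):

```elixir
defmodule MyApp.Serializer do
  # Current on-disk/in-cache format version.
  @version 2

  def dump(state) do
    :erlang.term_to_binary({@version, state})
  end

  def load(binary) do
    case :erlang.binary_to_term(binary) do
      {@version, state} -> state
      # Hypothetical older format: v1 stored a bare list of items.
      {1, items} -> %{items: items}
    end
  end
end
```

That way a new release can transparently read state written by the previous one, which is exactly the situation a rolling deploy creates.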
