Phoenix LiveView - Best approach to maintain a global state in-memory

Hi there,

I am currently developing an application using Phoenix LiveView. This application is meant to generate a visualization of something, a visualization that is global/shared, which in turn requires a server-side global/shared state. I would prefer not to use a database at this moment in time, as I do not need the persistence and I fear it might be too slow at a large scale. More concretely, my requirements are as follows:

  • When I start the (web)application, a default (shared) state should be generated and kept in memory. I would prefer to not use databases at this moment.
  • When a user connects to the (web)application and thus sets up a liveview session, the shared state should be fetched and used.
  • The shared state can be updated via API calls made to the server.
  • Whenever the shared state changes, all connected users should receive a notification of this fact.
  • No persisted database is used.

I have tried Phoenix LiveSession to achieve this goal, but as the name suggests it only seems to keep a server-side session state. It is not a global state with which all users can interact, as described in the requirements above. Another solution I read and thought about was using a GenServer, but I couldn’t seem to find a good explanation of how to concretely do this in LiveView. Combined with the fact that I am pretty new to all of this, that leaves me pretty lost in regards to that option. As for other solutions, I mainly seem to come across ones that make use of a persisted database.

Hopefully someone here can put me on the right track!

Small note: I am pretty new to Elixir :slight_smile:

What you probably need is a regular ol’ GenServer, maybe with a dash of Phoenix PubSub to notify connected sockets of changes. I’d recommend looking at the Elixir Guides on GenServers to get started.


All of the straightforward solutions “might” be too slow; for instance, the simplest possible in-memory storage (a single GenServer) forces every access to the shared state through a single process’s mailbox!

+1 to what @zachallaun said:

  • start a GenServer from your application’s supervision tree with a static name (we’ll call it MyApp.Databucket for concreteness)
  • the Databucket GenServer receives messages with update instructions, and pushes update notifications to PubSub
  • LiveViews talk to the Databucket process and subscribe to PubSub for updates

Hi! Thanks for both of your (super) quick replies @al2o3cr and @zachallaun , it is greatly appreciated!

I am familiar with GenServers; in fact, GenServers are exactly what I will be trying to visualize the behavior of. I also think this will probably be the simplest solution, especially in combination with a PubSub system.

However, I am a bit lost on how I would exactly go about setting it up in Phoenix LiveView. For example:

  • Where would one put the code to start the GenServer and set up the PubSub when the application starts?
  • How do users know which GenServer to talk to in order to retrieve the state? More concretely, when setting up the LiveView session, I suppose I need the PID to be able to talk to it?

My apologies if these questions seem very trivial, I am pretty new to all of this :slight_smile:

Your main application file (application.ex) contains the startup code, so your GenServer would be started there, added to the children. If you have created a default Phoenix application, the PubSub setup should already be in there. Your application.ex will look something like:

defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Start the PubSub system
      {Phoenix.PubSub, name: MyApp.PubSub},
      # Start the Endpoint (http/https)
      MyAppWeb.Endpoint,
      # Start my state handling GenServer
      MyApp.StateServer
    ]

    # See https://hexdocs.pm/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: MyApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

You can encapsulate the PID problem for each LiveView by starting the GenServer with a name, and wrapping access to it via public functions in a module. Something like:

defmodule MyApp.StateServer do
  @moduledoc """
  Does the stuff
  """
  use GenServer

  def start_link(_opts) do
    # Start the GenServer with a named process, so callers never need its PID
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  @impl true
  def init(_) do
    state = build_initial_state()

    {:ok, state}
  end

  # Public API

  def update_state(change) do
    # The PID is automagically looked up based on the name of the process
    GenServer.call(__MODULE__, {:update, change})
  end

  def get_state do
    GenServer.call(__MODULE__, :get)
  end

  # Internal functional core to modify state

  defp build_initial_state do
    # Build the default shared state here
    %{}
  end

  defp apply_change(state, change) do
    # Apply the change to the state (a plain merge, as a placeholder)
    new_state = Map.merge(state, change)
    # Broadcast the new state with PubSub so subscribers are notified
    Phoenix.PubSub.broadcast(MyApp.PubSub, "state", {:state_updated, new_state})
    new_state
  end

  # GenServer implementation of the API

  @impl true
  def handle_call({:update, change}, _from, state) do
    state = apply_change(state, change)

    {:reply, :ok, state}
  end

  def handle_call(:get, _from, state) do
    {:reply, state, state}
  end
end

You would update the state by calling:

MyApp.StateServer.update_state(change)
And retrieve it (this would be blocking, so use it only where you can’t get the state via PubSub) like this:

current_state = MyApp.StateServer.get_state()
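On the LiveView side, tying this together might look like the sketch below. The module name, the "state" topic, and the {:state_updated, new_state} message shape are all assumptions here; match whatever your StateServer actually broadcasts:

```elixir
defmodule MyAppWeb.VisualizationLive do
  use MyAppWeb, :live_view

  @topic "state"

  @impl true
  def mount(_params, _session, socket) do
    # mount/3 runs twice; only subscribe on the connected pass
    if connected?(socket) do
      Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
    end

    # One blocking read for the initial render; afterwards updates arrive via PubSub
    {:ok, assign(socket, :state, MyApp.StateServer.get_state())}
  end

  @impl true
  def handle_info({:state_updated, new_state}, socket) do
    {:noreply, assign(socket, :state, new_state)}
  end
end
```

Because every LiveView holds the latest state in its own assigns, re-renders never need to call back into the GenServer.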

Finally, it is worth noting that ETS tables can be set up for concurrent reading. A pattern I use quite often when I want a state store but PubSub isn’t suitable is to have a GenServer “own” a named ETS table (with :protected access) that only the GenServer process can write to, but that any process can read from. It can be useful for the initial read when a LiveView process starts, before it gets updates from PubSub.
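A minimal sketch of that pattern (table and module names are just examples): the GenServer owns a :protected named table, so writes are serialized through its mailbox while reads hit ETS directly and run concurrently.

```elixir
defmodule MyApp.EtsStore do
  use GenServer

  @table :my_app_state

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, nil, name: __MODULE__)
  end

  # Reads go straight to ETS: concurrent, no mailbox involved
  def get(key) do
    case :ets.lookup(@table, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :error
    end
  end

  # Writes are serialized through the owning GenServer
  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  @impl true
  def init(nil) do
    # :protected means only this (owner) process may write,
    # but any process may read
    table = :ets.new(@table, [:named_table, :protected, read_concurrency: true])
    {:ok, table}
  end

  @impl true
  def handle_call({:put, key, value}, _from, table) do
    :ets.insert(@table, {key, value})
    {:reply, :ok, table}
  end
end
```

Since the table is named, readers never need to know anything about the owning process either.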


In a library I maintain, I use a very basic Agent-based store in LiveView to store (semi-)persistent settings for the user without needing database accounts.


@tfwright’s example is a perfect demonstration of the power of OTP abstractions as well: he’s absolutely right that Agent would be a better, simpler abstraction if all you need is shared state. By wrapping the calls in a public API, however, you can transparently switch to a GenServer down the line if you need to add functionality. The calling code never needs to care whether it’s an Agent or a GenServer under the hood.
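For illustration, a shared-state Agent behind a public API might look like this (module and function names are made up). Callers never see the process, so the internals could later become a GenServer without any call sites changing:

```elixir
defmodule MyApp.SharedState do
  @moduledoc "Minimal shared state behind a public API."
  use Agent

  def start_link(_opts) do
    # A named process, so callers don't need the PID
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # Read the whole state map
  def get, do: Agent.get(__MODULE__, & &1)

  # Update a single key in the shared state
  def put(key, value) do
    Agent.update(__MODULE__, &Map.put(&1, key, value))
  end
end
```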


Erlang’s :persistent_term is very simple and the best performing option for global reads. You incur a hit when you update the data, so if you are updating every couple of seconds it’s not a good choice. If you are updating the data every minute or less often, then it’s a very efficient solution.
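The :persistent_term interface really is essentially put/get; the tuple key {MyApp, :shared_state} below is an arbitrary choice:

```elixir
# Writes are expensive (updating a term forces a scan of all processes),
# so reserve :persistent_term for rarely-changing data.
:persistent_term.put({MyApp, :shared_state}, %{mode: :live, scale: 2})

# Reads are effectively free: no copying into the reading process's heap.
state = :persistent_term.get({MyApp, :shared_state})

# Passing a default avoids raising when the key has not been set yet.
empty = :persistent_term.get({MyApp, :other_key}, %{})
```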


Wouldn’t using a library like Cachex be a simpler option?

Simpler? No. More flexible? Maybe.

Persistent term is built in to OTP, so you have access to it with zero additional dependencies. The interface is essentially get/set — doesn’t get much simpler than that. :slight_smile:


Just to add another warning about the single point of access if you go with just one GenServer: all access to the in-memory state provided by the GenServer is serialized, because it’s a single process handling possibly multiple concurrent calls. By your description this could be fine, and could even be what you want if you need to guarantee ordered access to the state.

If you want to add parallelism though, either go with a solution like Cachex or Nebulex, or even, why not, ETS wrapped in a simple public interface.
Personally, I think I would use ETS for this.


Agreed. I think it is somewhat mitigated in this case, as updates are distributed by PubSub. And by wrapping the GenServer access in a public API, it would be straightforward to swap out the implementation to read from a readable ETS table if scaling does become an issue.