Channels - Where to store state?

I’m building a sort of highly customized chat application with Phoenix, with some functionality similar to XMPP MUC.

Now, I have both persistent and transient state for both users and channels. While the persistent data sits comfortably in Postgres via Ecto, my problem is the transient state, particularly for the channels, since user-specific transient state can be stored in socket.assigns.

Where to store channel-specific transient state in Phoenix?

For example, each channel has a quiet mode. With quiet mode on, users below certain rank cannot send messages to the channel (=topic in Phoenix slang).

Currently, my approach is to have a table in mnesia which holds all transient data for each channel. However, I don’t feel it’s quite the right approach to query mnesia every single time a user sends a message, just to check if quiet mode is on. I understand mnesia queries are cheap, but I was wondering if there’s a more idiomatic Phoenix way to handle this. I guess the ideal would be to carry around a channel-specific attribute, much like socket.assigns.
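For illustration, that per-message check might look roughly like this using dirty operations (the :topic_state table name and record shape are my assumptions, not from the post):

```elixir
# Sketch of the per-message quiet-mode check against an in-RAM mnesia table.
:ok = :mnesia.start()
{:atomic, :ok} = :mnesia.create_table(:topic_state, attributes: [:topic, :quiet])

# Toggling quiet mode writes one record; dirty ops skip transactions entirely.
:ok = :mnesia.dirty_write({:topic_state, "lobby", true})

# On every incoming message, a dirty read answers "is quiet mode on?".
quiet? =
  case :mnesia.dirty_read(:topic_state, "lobby") do
    [{:topic_state, _topic, flag}] -> flag
    [] -> false
  end
```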

Moreover, some parts of the channel-specific transient state have an expiry time, meaning that after a period of time they need to be removed from the state. What’s the best approach to handle this - should I spin up another process just to monitor them and when the expiry time hits, remove them from the state in mnesia?


Have you tried stockings?

On a more serious note: persistent storage is a trade-off between speed and scale. As soon as you make that trade-off, you end up moving data from the in-memory state of, say, a GenServer into a persistence layer.

You then also run into your specific domain’s problems.

If you do not want to hit your persistence layer on every request, you want caching, or to not make the trade-off in the first place. Bear in mind cache invalidation is hard and involves some very tricky trade-off decisions that will not lead you to a single best answer.

Sorry, I don’t entirely understand your point here. I have persistent data, such as user accounts and some channel-specific data, in Postgres. However, this data is queried quite rarely, and inserted/updated even more rarely.

What I’m having trouble with is deciding on an architectural pattern for maintaining transient channel-specific state. For example, the quiet mode state currently resides in mnesia and is purely transient.

Honestly, the way you’re doing it sounds totally reasonable. Queries to ram or disc copy mnesia tables go through ETS, which is fast. In fact, I think the pg2 Phoenix PubSub adapter hits ETS a couple of times on every broadcast anyway.

I have a process like you suggest to expire keys, although it’s for PostgreSQL rather than mnesia, and it works great. It doesn’t get much load anyway, so I never bothered optimizing or sharding its work.


Thanks for giving me some peer validation for my approach! I’m using mnesia purely in RAM.

How do you actually handle the whole expiration -> delete flow in your application? Did you set up a process to track and do it, or is there perhaps some better way?

In my case the process is a gen_server that wakes up periodically (using send_after), deletes old rows from PG with a query like expiry_timestamp <= now, then broadcasts to whatever topics were affected. The process is locally registered, i.e. it’s unique within a node but not within the cluster, since it doesn’t really hurt if two nodes try to expire old stuff at the same time. But it wouldn’t be hard to make it a singleton.
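A minimal sketch of that periodic expirer, with the actual delete-and-broadcast step left as an injected function (the Repo query and topic broadcasts described above would go there; module and option names here are illustrative):

```elixir
defmodule Expirer do
  @moduledoc false
  # Wakes up every `interval` ms and runs `on_expire`, which in a real app
  # would delete expired rows and broadcast to the affected topics.
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    state = %{
      interval: Keyword.fetch!(opts, :interval),
      on_expire: Keyword.fetch!(opts, :on_expire)
    }

    schedule(state.interval)
    {:ok, state}
  end

  @impl true
  def handle_info(:expire, state) do
    state.on_expire.()
    schedule(state.interval)
    {:noreply, state}
  end

  defp schedule(ms), do: Process.send_after(self(), :expire, ms)
end
```

Rescheduling from handle_info (rather than :timer.send_interval) means ticks can’t pile up if one expiry run is slow.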

If you need more accuracy, you could have the process use a send_after or a :timer per item rather than expiring stuff in bulk. :timer timers go through a single timer server, so they’re less efficient than Process.send_after, though both kinds can be cancelled.

Another approach is to check the expiry timestamp when you read the table (i.e. when sending a message). Then you don’t need a dedicated process.

Cachex has an example of the send_after approach.


Cool, I’m intending to build for a single node only for now, so I have fewer worries.

Funnily enough, my first thought was also to have some cleanup function invoked whenever somebody sends a message and the expiry timestamp is past the current time.

Since I’m new to Elixir and Phoenix, I didn’t even know about send_after, but it looks like a really good fit for me. I’ll probably try to imitate your approach with the gen_server and see if I can build it successfully.

It’s a bit hard to recommend the concrete approach, because the description is quite vague, but I’ll give it a try :slight_smile:

To avoid ambiguity with the term channel, I’ll use the term topic instead.

My first take on this would be to use a separate GenServer for each topic. You could hold all the transient data of the topic in that server. The server would also need to keep track of whether it’s in quiet mode. So now, whenever someone wants to send a message to the topic, they need to do it through the corresponding server. Since the server knows its mode, it can easily decide whether to send the message or not.
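A sketch of such a per-topic server, keeping only the quiet-mode flag for brevity (the module name, rank threshold, and :global registration are my assumptions; a real app would likely use a Registry):

```elixir
defmodule TopicServer do
  @moduledoc false
  # One server per topic, holding that topic's transient state.
  use GenServer

  @min_rank 5  # illustrative rank threshold, not from the thread

  def start_link(topic), do: GenServer.start_link(__MODULE__, topic, name: via(topic))

  def set_quiet(topic, quiet?), do: GenServer.cast(via(topic), {:set_quiet, quiet?})

  # Every send goes through here, so the server can enforce quiet mode.
  def allowed_to_post?(topic, rank), do: GenServer.call(via(topic), {:allowed?, rank})

  defp via(topic), do: {:global, {:topic, topic}}

  @impl true
  def init(topic), do: {:ok, %{topic: topic, quiet?: false}}

  @impl true
  def handle_cast({:set_quiet, quiet?}, state), do: {:noreply, %{state | quiet?: quiet?}}

  @impl true
  def handle_call({:allowed?, rank}, _from, state) do
    {:reply, not state.quiet? or rank >= @min_rank, state}
  end
end
```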

Adding expiry logic can now be fairly simple with the help of :timer.send_interval or Process.send_after which would be invoked in the topic server. Either approach will result in some message sent to the server, which you’d need to handle in the corresponding handle_info and remove expired items.

This approach is very consistent, and when done properly, you’ll have no race conditions or strange behaviour. However, the topic server might turn out to be a bottleneck if you have frequent activity on a topic (frequent messages, mode changes, or other state changes). In that case, you could consider using an ETS table. ETS tables can boost your performance significantly, but they are appropriate only for some situations, so you need to think it through. In some cases a hybrid approach is needed, where all the writes are serialized through the server process, while reads are performed directly from the table.

If you go for ETS tables and want to expire items, you could look into some of the caching libraries, such as Cachex or my own con_cache. Cachex has more features and higher activity, so as the author of con_cache, I’d myself recommend looking into Cachex first.

If the transient state is really simple, then I’d consider ETS from the start. For example, if we only need to deal with the quiet mode, then I’d just have one ETS table where I’d store that info for each topic.
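That simple case could be as small as this (table and topic names are illustrative):

```elixir
# One public ETS table mapping topic name -> quiet-mode flag.
table = :ets.new(:quiet_mode, [:set, :public, read_concurrency: true])

# Toggling the mode is a single insert; absence of a row means "not quiet".
:ets.insert(table, {"lobby", true})

quiet? = fn topic ->
  case :ets.lookup(table, topic) do
    [{^topic, flag}] -> flag
    [] -> false
  end
end
```

With read_concurrency: true, many channel processes can check the flag concurrently without touching any single server process.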

In more complex cases, I’d start with GenServer and do some testing to see if it can handle the desired load.

Again, the problem is vaguely defined, so it’s hard to precisely say which of the two would be a more suitable choice in this particular case.


Yes I understand I didn’t specify the domain very accurately here, thanks for giving your input nevertheless! It’s actually very interesting building a sort of modern administrated Multi-User Chat on Elixir/Phoenix, without the headaches of XMPP.

The nature of my application is:

  • topics are set up (=transient state established) whenever a user subscribes to it as the first user
  • a topic dies when the last user leaves it (=transient state is wiped clean)
  • lots of topics are born and die constantly at a high frequency
  • some topics will be massive = having tons of users subscribed to them, actively sending messages

The transient state each topic has:

  • title - changes rarely
  • member privileges (VIP etc.) - currently this is stored in socket.assigns, which feels like the best approach
  • keyword list - a list [] of strings which changes constantly based on user messages. These will each have an expiry time, and I believe the birth & expiry logic will get very complex as the app evolves
  • quiet mode (boolean) - will change rarely but needs to be checked on just about every message sent
  • message history - a list of the past (max) 100 messages or so. I want to avoid saving this persistently for now.

So my trouble has been trying to decide whether to make one sort of global state tree (like :mnesia) or whether to prefer more local approaches for each topic.

Your GenServer approach was also my first thought, but I read somewhere that in big topics (lots of subscribers) the GenServer could become a bottleneck, as it is just one process. Now I’m thinking of using a GenServer for the keyword list and maybe for the message history, and ETS / Cachex or your con_cache for the rest.

By the way, is there any reason not to prefer :mnesia over ETS? From what I understand, if I use transactions, :mnesia could be a safer choice, although it carries some performance penalty. However, that way I wouldn’t need to serialize the writes through a channel-specific GenServer, but could still perform fast dirty_reads (if that makes sense).


Just to add a bit; one problem I could seriously use advice with is spam prevention.

Has anyone built their own spam prevention solution for Phoenix’s channels? My approach now is to have something in socket.assigns track the messages sent, then use that to determine whether the user is allowed to send a new message yet.
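A small, pure sliding-window check along those lines (the module name and limits are illustrative); the returned timestamp list is what you would keep in socket.assigns and thread through handle_in:

```elixir
defmodule Throttle do
  @moduledoc false
  # Allow at most `limit` messages per `window_ms` milliseconds.
  # `sent_at_list` is the list of recent send timestamps kept in assigns.
  def allow?(sent_at_list, limit, window_ms, now \\ System.monotonic_time(:millisecond)) do
    # Keep only the timestamps that still fall inside the window.
    recent = Enum.filter(sent_at_list, &(now - &1 < window_ms))

    if length(recent) < limit do
      {:ok, [now | recent]}   # allowed: record this send
    else
      {:deny, recent}         # over the limit: reject the message
    end
  end
end
```

In handle_in you would pattern match on the result, assign the new list back into the socket on :ok, and push an error to the client on :deny.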

Somehow my approach just feels too clunky. I hope there’s a better way to handle this.

Note that I suggested a GenServer per topic, so it’s not just one process. Although admittedly it is one per topic, so it can still be a bottleneck for highly active topics with many users. But before doing any complex optimizations, this needs to be proved by measuring :slight_smile:

Given how your state looks, I’d advise starting with the GenServer approach. It looks like it might not scale past some point, but it will give you a consistent starting point.

Then, I’d implement a load tester, and analyze the behaviour of the single topic server. This should help you discover the capacity of the topic server. Perhaps it’s going to be good enough for the projected load.

If not, then you need to find the bottlenecks, and tackle them. You can use a variety of approaches here:

  • choosing proper data structures (for example, it seems that :queue with a manual counter would be perfect for the message history)
  • splitting the server into a couple of servers (e.g. one for keyword list, another for history)
  • managing some things in an ETS table (e.g. quiet mode, title)
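For the first bullet, a :queue with a manual counter might look like this (the 100-message cap comes from earlier in the thread; :queue.len/1 is O(n), hence the counter):

```elixir
defmodule History do
  @moduledoc false
  # Bounded message history: O(1) amortized push, oldest messages dropped.
  defstruct queue: :queue.new(), count: 0, max: 100

  # Under the cap: enqueue and bump the counter.
  def push(%__MODULE__{queue: q, count: n, max: max} = h, msg) when n < max do
    %{h | queue: :queue.in(msg, q), count: n + 1}
  end

  # At the cap: drop the oldest entry, then enqueue; count stays at max.
  def push(%__MODULE__{queue: q} = h, msg) do
    {_, q} = :queue.out(q)
    %{h | queue: :queue.in(msg, q)}
  end

  def to_list(%__MODULE__{queue: q}), do: :queue.to_list(q)
end
```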

Mnesia requires a bit more ceremony, and you need to be careful with transactions, because on conflict they will be restarted. If you want a fast in-memory key-value store, I think ETS is your simplest option, and that’s where I’d start first.

You could do the same with GenServer that serializes writes, and an ETS table which acts as the snapshot of last writes. The writer server keeps the last state in its own state. When the write request arrives, it updates the state, and then stores it to the ETS table. All other clients read from that table, so they don’t have to wait for the writer to finish.
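A sketch of that write-through pattern (module and function names are mine): the GenServer owns the table and serializes all writes, while readers hit ETS directly and never wait on the server:

```elixir
defmodule TopicState do
  @moduledoc false
  # Single writer process; ETS table acts as a snapshot of the last writes.
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # All writes are serialized through the server process.
  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  # Readers go straight to the table, bypassing the server.
  def get(key) do
    case :ets.lookup(__MODULE__, key) do
      [{^key, value}] -> value
      [] -> nil
    end
  end

  @impl true
  def init(nil) do
    :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:put, key, value}, _from, state) do
    state = Map.put(state, key, value)     # server keeps the authoritative state...
    :ets.insert(__MODULE__, {key, value})  # ...and snapshots it into ETS
    {:reply, :ok, state}
  end
end
```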

The benefit here is that you don’t need locks and transactions, so it’s easier to reason about the behaviour, especially in a highly concurrent system. This approach also paves the way for actively managing the load and applying backpressure, for example with GenStage.


Yes, I meant one (or split into two) GenServer per topic too. You’re right, load testing is the only way to get any solid idea about the bottlenecks, and it will not be hard to test.

I will start building an ETS-GenServer solution for topic state management. I had no idea transactions were such a hassle and that there were so many things to take into consideration. Having a GenServer serialize writes, even at a performance penalty, makes sense to me now, as the reads can still hit ETS directly and efficiently.

I’ve been looking into GenStage but don’t understand it well yet.

Thanks for all the insightful advice!


@sasajuric I’ve been looking into your con_cache; it looks very solid, and I’ll likely end up using it over Cachex or mnesia.

I’d like to try storing the keyword list in con_cache, making use of the janitor process that checks for expired timestamps. However, my data will be in the format {keyword, timestamp}, so the value of each key in con_cache would be something like:

 [{"Politics", #DateTime<2017-12-30 12:08:18.598684Z>},
  {"Football", #DateTime<2017-12-30 12:08:18.598708Z>}]

Since con_cache checks expiry only per key, not within dynamic lists, I was wondering whether it’s possible to somehow extend the janitor process with a custom Enum function that iterates through the lists, checks the timestamps there, and invokes another custom function to remove an entry once it’s past its timestamp?


I’m a bit confused. Do you want to remove just one part of the key, for example “politics”, or do you want to determine the expiry time of the entire element based on the entries in the key list?


Yes, I want to iterate through those tuples and only remove the tuples whose timestamps are past the expiry time. I think Enum creates a new list, which I could then insert as the new value for the key. I would only need to iterate through them once every 30 seconds or so.

So say, [{"politics", timestamp}, ...] tuple’s timestamp is past expiry upon check -> remove that tuple from the list.
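That filtering step could be as simple as this (assuming {keyword, expiry} tuples as described; the sample data is made up):

```elixir
now = DateTime.utc_now()

keywords = [
  {"politics", DateTime.add(now, -5)},  # expired 5 seconds ago
  {"football", DateTime.add(now, 60)}   # still fresh for a minute
]

# Drop every tuple whose expiry is not strictly in the future;
# the result is the new list to write back as the key's value.
fresh =
  Enum.reject(keywords, fn {_kw, expires_at} ->
    DateTime.compare(expires_at, now) != :gt
  end)
```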

Does it make sense to write a separate janitor process that periodically goes through the con_cache, checking each value (a list of tuples) for expiry?


In case you haven’t read it, this blog post by Discord may help you with scaling out, as it looks like you’ll encounter some of the same issues they did.


What you’re describing here is a composite operation. When you change the key of a row (for example, by removing one element from its list), you’re effectively removing an item and inserting a new one under a different key. As far as I know, ETS doesn’t support this operation atomically. Functions such as :ets.update_element will fail if you try to modify a key.

Currently, I don’t think key modifications should be supported by con_cache. However, I think you have all the pieces to make this happen.

First, you can set a TTL per row (see here). Therefore, when storing a new value, you could find the nearest future expiry, compute the TTL from it, and set the expiry for that item. When starting the cache, you would provide the :callback option to ConCache.start_link (see here). Then, before the item is deleted, your callback will be invoked. In this callback you can fetch the current value, modify the key, compute the next TTL, and insert the new key-value with this TTL.

However, I’m a bit puzzled by your cache organization. Why do you have such complex keys? How are you looking things up in such a table?


Yes, actually, I was thinking about simply fetching the current key (the whole list) and then updating it by replacing it with a totally new list that holds the new state (keywords).

Thanks for the instructions! I was also thinking about a callback approach but didn’t quite understand how to work it out. Just so I’ve got this right:

  • The callback itself would insert the new computed key
  • The delete function would execute before the callback function inserts the new key-value, thus not deleting the newly computed & inserted key-value itself

Am I understanding the callback execution / new key-value insertion wrong here?

I was of course not suggesting you extend con_cache to suit my domain-specific needs, just trying to find a solution to my own problem, which you have been helping with greatly.

The keyword list itself is a list of tuples {keyword, timestamp}; I don’t know of a better way of organizing it. The size will be at most 15 tuples in the list, and I assume typically about 3-5 at any given time. For now, I was planning on always fetching the whole keyword list and sending it out to clients.

The rest of the values in the cache are very simple, usually a string or an atom as the value.

As for the message history, I built it as a topic-specific, dynamically spawned GenServer around Erlang’s :queue, which works spot on for that. Next I need to work out how to supervise it.

@mgwidmann thanks for that blog post link, it’s really helpful as they explain everything in a very pragmatic way there! The way they optimize message passing between nodes is particularly interesting.


No, that’s the gist of my proposal.

The reason I asked is that I wonder how you are accessing this data. Are you actually doing a key-based lookup, or are you iterating through all the elements?

If the latter, then I have some reservations about how well ETS will perform here. In any case, I advise you to measure properly and consider different options, including a plain data structure.
