How to handle complex ETS table updates

I’m writing an application that needs to maintain many buffers, each of which flushes to a different location-partition combination in a rotating fashion. There are multiple locations, each with a varying number of partitions.

In order to manage this, I’m planning on using an ETS table to maintain a set of records (one per location) that basically look like this:

{"location_key", max_partition, last_partition}

In the above example, max_partition is the largest possible partition index of the given location and last_partition is the last partition index of the given location that was written to.
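A minimal sketch of that layout (the table name `:locations` and the key `"us-east"` are illustrative, not from the original post):

```elixir
# A set table keyed on the location string, holding one record per location:
# {location_key, max_partition, last_partition}
:ets.new(:locations, [:set, :public, :named_table])

# "us-east" has partitions 0..8; nothing written yet beyond partition 0.
:ets.insert(:locations, {"us-east", 8, 0})
```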

Whenever one of the buffers flushes, I want to increment last_partition, wrapping around using max_partition as the threshold. I’m aware that :ets.update_counter/4 does almost exactly this with its {Pos, Incr, Threshold, SetValue} update operation but, as far as I can tell, I would have to provide the threshold when I make the call. Obviously, I could first fetch max_partition and then call :ets.update_counter/4, but the trouble is that max_partition itself can be updated (albeit rarely), so splitting the fetch and update operations opens the door to race conditions.
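The race-prone two-step version looks something like this (table and key names are illustrative):

```elixir
# Illustrative table: {location_key, max_partition, last_partition}
:ets.new(:locations, [:set, :public, :named_table])
:ets.insert(:locations, {"us-east", 8, 0})

# Step 1: fetch the current threshold.
# Another process could change max_partition right after this read.
[{_key, max, _last}] = :ets.lookup(:locations, "us-east")

# Step 2: {Pos, Incr, Threshold, SetValue} — bump element 3 by 1;
# if the result exceeds max, the counter is reset to 0.
next = :ets.update_counter(:locations, "us-east", {3, 1, max, 0})
```

Each call in isolation is atomic; the gap between them is where a concurrent update to max_partition can make the threshold stale.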

So, my question is: Is there a way to do all of this in one operation?

(As an aside, I’m also aware that this problem would be technically easier if I used a GenServer, but this will be a very high-traffic table, so the performance of a GenServer likely won’t cut it in production.)

I would attempt :ets.select_replace/2, but note that it only guarantees atomicity and isolation for each individual object’s read + replace, not across everything that matches.
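Since each location is a single record, per-object atomicity may be enough here. A sketch of a one-shot wrap-increment with :ets.select_replace/2 (names illustrative): the first match-spec clause increments last_partition while it is below max_partition, and the second wraps it back to 0.

```elixir
# Illustrative table: {location_key, max_partition, last_partition}
:ets.new(:locations, [:set, :public, :named_table])
:ets.insert(:locations, {"us-east", 8, 0})

wrap_increment = fn key ->
  ms = [
    # last_partition < max_partition: increment in place.
    {{key, :"$1", :"$2"}, [{:<, :"$2", :"$1"}],
     [{{key, :"$1", {:+, :"$2", 1}}}]},
    # Otherwise: wrap back to partition 0.
    {{key, :"$1", :"$2"}, [],
     [{{key, :"$1", 0}}]}
  ]

  # Returns the number of objects replaced (1 if the key exists).
  :ets.select_replace(:locations, ms)
end

wrap_increment.("us-east")
```

Because both the read of max_partition and the rewrite of last_partition happen inside one atomic match-and-replace on the same object, the stale-threshold race disappears. (select_replace must not change the key, and bag tables are not supported.)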

Failing that, you can just use Cachex, because it has what you need, i.e. transactions.

As @dimitarvp pointed out, you need a transaction. If you don’t want to add an additional library, you can use either :global.trans/2, or :global.set_lock/1 paired with :global.del_lock/1.
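A sketch of the :global.trans/2 route (table and key names are illustrative): the lock id scopes contention to one location, and the fetch-then-update pair runs while the lock is held.

```elixir
# Illustrative table: {location_key, max_partition, last_partition}
:ets.new(:locations, [:set, :public, :named_table])
:ets.insert(:locations, {"us-east", 8, 0})

bump = fn key ->
  # {ResourceId, LockRequesterId} — one lock per location key.
  :global.trans({{:locations, key}, self()}, fn ->
    [{^key, max, last}] = :ets.lookup(:locations, key)
    next = if last >= max, do: 0, else: last + 1
    :ets.insert(:locations, {key, max, next})
    next
  end)
end

bump.("us-east")
```

Note the lock is advisory: it only excludes writers that also take it, so every process that updates the record (including whatever updates max_partition) must go through the same lock.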


A GenServer is fine if you do many reads and not so many writes.

A GenServer per-buffer could also work well, if you do not have too many buffers.
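A minimal sketch of that hybrid, assuming illustrative module and table names: reads stay on ETS (cheap and concurrent), while writes funnel through one process so the fetch-then-update pair cannot interleave with another writer.

```elixir
defmodule LocationCounter do
  use GenServer

  def start_link(opts \\ []),
    do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Register a location with its largest partition index.
  def put(key, max), do: GenServer.call(__MODULE__, {:put, key, max})

  # Serialized wrap-increment of last_partition.
  def next_partition(key), do: GenServer.call(__MODULE__, {:next, key})

  @impl true
  def init(_opts) do
    # :protected — any process can read, only this one writes.
    :ets.new(:locations, [:set, :protected, :named_table, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:put, key, max}, _from, state) do
    :ets.insert(:locations, {key, max, 0})
    {:reply, :ok, state}
  end

  def handle_call({:next, key}, _from, state) do
    [{^key, max, last}] = :ets.lookup(:locations, key)
    next = if last >= max, do: 0, else: last + 1
    :ets.insert(:locations, {key, max, next})
    {:reply, next, state}
  end
end
```

Whether this keeps up depends on the write rate: only the writes serialize through the server, so if flushes are frequent but reads dominate, it can hold up better than the original poster fears.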
