Best way to generate sequential ids

Hello, we have this situation right now:

Every time we make a sale we need to emit a document with a unique and sequential document number.

These numbers belong to a sequence based on the product sold (each document is for one product), we have a reserved range of numbers for each product, the ranges are big enough that we do not need to worry about running out of numbers right now.

We are looking at some options to generate these numbers (postgres sequences, Kafka events, Zookeeper sequence nodes), and I thought that maybe this is a good fit for Elixir.

I’m thinking of spawning a GenServer for each product configured with the sequence range and current value, so I can use it to generate the subsequent numbers.

I would persist the updated values to a DB to restore the processes in case of failure, and in case of needing some redundancy I could use something like libcluster + swarm to spread my GenServers across a cluster
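For what it's worth, a minimal sketch of that idea might look like the following (module and storage names are invented for illustration; persistence is only stubbed out as a comment):

```elixir
# Hypothetical sketch of a per-product sequence process. MyApp.Storage is an
# assumed persistence module, not a real library; Registry stands in for
# whatever cluster-wide registry (e.g. swarm) would be used in production.
defmodule MyApp.ProductSequence do
  use GenServer

  def start_link(opts) do
    product = Keyword.fetch!(opts, :product)
    GenServer.start_link(__MODULE__, opts, name: via(product))
  end

  def next(product), do: GenServer.call(via(product), :next)

  defp via(product), do: {:via, Registry, {MyApp.SequenceRegistry, product}}

  @impl true
  def init(opts) do
    range = Keyword.fetch!(opts, :range)
    # Restore the last persisted value here; default to just below the range.
    current = Keyword.get(opts, :current, range.first - 1)
    {:ok, %{product: Keyword.fetch!(opts, :product), range: range, current: current}}
  end

  @impl true
  def handle_call(:next, _from, %{range: range, current: current} = state) do
    next = current + 1

    if next in range do
      # Persist *before* replying, so a crash cannot hand out the same id twice:
      # MyApp.Storage.save!(state.product, next)
      {:reply, {:ok, next}, %{state | current: next}}
    else
      {:reply, {:error, :range_exhausted}, state}
    end
  end
end
```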

Does this look right? Maybe I’m missing an obvious alternative, any pitfalls I might not be taking into account?


ksuid generates unique, sortable ids.

You could prepend or append your product id if you want.

Now, your solution with a GenServer per product could work… it depends on how much you sell; if you sell too much you might have a bottleneck. It will not work that well in a distributed mode.

It also depends on whether you just want an increasing counter, or a uuid-style id.


I saw something like ksuid before; the problem is that we need the ids to be inside a particular range, so yes, an increasing counter. For example:

For product A we have the range 202043200000000 to 202043210000000, so the generated numbers would be:

  • 202043200000000
  • 202043200000001
  • 202043200000002

For product B we could have a range of 13200500 to 20500000, so:

  • 13200500
  • 13200501
  • 13200502

I would probably have a look at erlang counters if I had to do the same task.
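As a hedged sketch of what that could look like from Elixir: `:counters` live in memory on a single node and are not persistent, so for a reserved range like the ones above they would still need an offset plus durable storage:

```elixir
# :counters (Erlang/OTP 21.2+): fast, mutable, in-memory counters.
# Note they are per-node and volatile, so a durable store is still needed
# to survive restarts.
ref = :counters.new(1, [:atomics])

# Start product A's sequence at its reserved range offset.
offset = 202_043_200_000_000

:counters.add(ref, 1, 1)
id = offset + :counters.get(ref, 1)
# id is now 202_043_200_000_001
```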


Thanks for the link, I'll take a look.

Do the counters / sequences need to be durable across BEAM restarts, or internally consistent across multiple BEAM nodes? Both of those are challenging with any of the ideas above. Some other questions that come to mind:

  • Do they need to be monotonic across all machines?
  • Are gaps permissible?
  • Are collisions disastrous?
  • Can you overflow a given product’s range?

Most of these point me towards solutions on the persistent storage rather than application code, i.e. a specialized use of Postgres sequences or similar.
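As one hedged example of such a specialized use, a Postgres sequence can be pinned to a reserved range with MINVALUE/MAXVALUE (the repo and sequence names below are assumptions for illustration):

```elixir
# Assumes MyApp.Repo is an Ecto repo backed by Postgres. One sequence per
# product, constrained to its reserved range; NO CYCLE makes Postgres raise
# an error instead of wrapping around if the range is ever exhausted.
MyApp.Repo.query!("""
CREATE SEQUENCE IF NOT EXISTS product_a_seq
  MINVALUE 202043200000000
  MAXVALUE 202043210000000
  NO CYCLE
""")

%{rows: [[next_id]]} = MyApp.Repo.query!("SELECT nextval('product_a_seq')")
```

One caveat: `nextval` is non-transactional, so rolled-back transactions leave gaps in the stored records.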

I’d also be curious what real-world or business constraints are at play here since the described technical limitations are unusual.


In my company we had the exact same problem you have, and we solved it using Postgres advisory locks (see section 13.3.5 of the Postgres docs).

So you don’t deal with the concurrency in your Elixir application, but at the database level.

So let’s say you want a sequential number for product “A”, and two concurrent requests come in:

    1. Request 1 comes in
    2. Request 2 comes in
    3. Request 1 sets an advisory lock with id :erlang.crc32("A")
    4. Request 2 tries to set the same advisory lock
    5. Request 2 sees the lock is being used and waits until it is free
    6. Request 1 queries the count for “A” => returns 2
    7. Request 1 inserts a new record with 3
    8. Request 1 finishes the transaction and releases the lock
    9. Request 2 acquires the lock and proceeds
    10. Request 2 queries the count for “A” => returns 3
    11. Request 2 inserts a new record with 4

I hope I was explicit enough. Using Postgres takes a lot of headaches away from you: you don’t need to worry about whether your new GenServer is a bottleneck, or about how to manage your processes if you deploy on a cluster.
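A rough sketch of that flow with Ecto, using a transaction-scoped advisory lock (`MyApp.Repo` and the `documents` table are assumptions for illustration):

```elixir
# pg_advisory_xact_lock/1 blocks until the lock is free and releases it
# automatically when the surrounding transaction commits or rolls back.
def next_number(product) do
  key = :erlang.crc32(product)

  MyApp.Repo.transaction(fn ->
    MyApp.Repo.query!("SELECT pg_advisory_xact_lock($1)", [key])

    %{rows: [[count]]} =
      MyApp.Repo.query!("SELECT count(*) FROM documents WHERE product = $1", [product])

    number = count + 1

    MyApp.Repo.query!(
      "INSERT INTO documents (product, number) VALUES ($1, $2)",
      [product, number]
    )

    number
  end)
end
```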


The counters should be durable and consistent.

Having gaps is permissible, but not ideal.

Collisions are a big problem.

Overflowing a range should not be an issue (range >>> sale volume).

My first thought was to use something like Postgres sequences, but then I thought that keeping it in the application could be a good fit, and it could also help bring some devs on the team into the Elixir world.

Business constraints: in short, we sell insurance policies. We have ranges of reserved policy numbers for different products from different companies, so we are limited to those IDs for public-facing documents.


@emoragaf I just had similar requirements pop up in my project and I’m wondering if you chose an application-oriented solution?

I currently cluster my Elixir project using libcluster+swarm so I think I can pull this off in pure Elixir/Erlang using globally-named processes. But I’m not looking to reinvent the wheel :grinning_face_with_smiling_eyes:

edit: fixed a typo

If you use a database like PostgreSQL, you can use an auto-incremented sequence.
The problem with using an OTP process is that you will still need to persist the counter to storage, or it will start from 1 again once the cluster reboots. And persistence brings all the ACID problems with it, so you would be looking precisely at reinventing the wheel.


This will not provide the properties that the OP wants: auto-incrementing sequences do not guarantee sequential ids in records. Sequences are non-transactional, so if you open a transaction, insert a record (which increments the sequence), and then roll back the transaction, the sequence is still incremented.
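To illustrate (assuming an Ecto repo `MyApp.Repo` and a sequence named `product_a_seq`, both invented names):

```elixir
MyApp.Repo.transaction(fn ->
  # Consumes an id from the sequence...
  MyApp.Repo.query!("SELECT nextval('product_a_seq')")
  # ...but rolling back does not give it back.
  MyApp.Repo.rollback(:oops)
end)

# The next call to nextval returns a value one higher than before the
# transaction, leaving a gap in the stored records.
```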

I’m also considering an asynchronous approach, which might break some of the OP’s constraints.

My application already generates UUIDs for persisted documents, but we now want to additionally generate sequential IDs in the form of monotonically incrementing integers. Occasional gaps are permissible but collisions are not. I’m not using postgres as a storage backend and would prefer not to introduce it for just this purpose.

So I’m considering 2 different approaches:

  1. Use Swarm to manage Counter processes, dumping state to e.g. S3 on shutdown
  2. Enqueue an event to asynchronously generate the integer ID after document creation

The tradeoff to option 1 is Counters scale linearly with document types, and S3 failures (though unlikely) would be borderline unrecoverable. Option 2’s big downfall, IMO, is that we’d have to accept eventually consistent IDs.

:ets interests me as a possibility, but I don’t have much experience with it, and I’m wary of any gotchas in a distributed setting.

If the OP cannot tolerate occasional gaps then it is a hard problem indeed.

We actually went that exact same route: we have a process per product on a libcluster+swarm cluster, and we can ask those processes for the next id in the sequence.


I did the same thing, it’s good-enough and actually quite simple. I’ll elaborate in case anyone’s curious :slight_smile:

I have a function SequentialID.get_and_increment(tag) that roughly:

  1. Ensures a SequentialID.Counter GenServer process is started under a DynamicSupervisor
  2. The Counter process fetches the integer ID value in init/1 from a pluggable storage backend
  3. The Counter process is call'd to increment the counter, put it in the storage backend, and reply with it

The Counter process is named via Swarm, e.g. {:via, :swarm, {SequentialID.Counter, "my-tag"}} to ensure only one instance of the process is started in the cluster, per tag. Forming a dynamic cluster is easy to do with libcluster.
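A minimal sketch of that flow, with the caveat that the supervisor and message names here are assumptions rather than the author’s actual code:

```elixir
# Ensure the per-tag counter exists (starting it is a no-op if Swarm already
# registered one somewhere in the cluster), then ask it for the next id.
def get_and_increment(tag) do
  name = {:via, :swarm, {SequentialID.Counter, tag}}

  case DynamicSupervisor.start_child(
         SequentialID.CounterSupervisor,
         {SequentialID.Counter, tag: tag, name: name}
       ) do
    {:ok, _pid} -> :ok
    {:error, {:already_started, _pid}} -> :ok
  end

  GenServer.call(name, :get_and_increment)
end
```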
