I’m happy to introduce TigerBeetlex, an Elixir client for TigerBeetle, the financial transactions database. I’ve been working on this for some time and I finally feel it’s ready for general usage.
The goal of TigerBeetle is to make financial transactions safe and fast, and TigerBeetlex lets you interact with it from the Elixir world. If you’re building applications that require high-throughput, fault-tolerant double-entry accounting, TigerBeetle, and now TigerBeetlex, could be a great fit.
It exposes all the available operations through two differently flavored APIs: a message-based one, useful for plugging into an existing process architecture, and a blocking one, providing a more familiar RPC-like interface. Moreover, it translates TigerBeetle’s binary data to and from Elixir structs.
The client is built upon the official TigerBeetle Zig client as a NIF, which is the approach used by all other TigerBeetle high-level clients. The NIF is built using build_dot_zig, so it automatically downloads the Zig toolchain for you and doesn’t require any system dependency to be built.
The docs contain a walkthrough that can be executed in Livebook and should cover all the main features, but I also suggest checking out the official TigerBeetle docs, especially the recipes section, which shows some really cool accounting-fu for implementing concrete use cases.
I also gave an introductory talk on TigerBeetlex at ElixirConf EU 2025; I’ll post the recording of the talk here when it’s released.
Let me know if you have any questions or suggestions. Someone on the forum was already asking about integration with Ash, which is something I already had in mind.
I’ve released version 0.16.47 today (like I do almost every Tuesday, following TigerBeetle’s release schedule). This time, though, the release doesn’t just bump the TigerBeetle client version: it also adds support for TigerBeetle’s Change Data Capture (CDC) functionality.
To provide a little more context: since version 0.16.43, TigerBeetle can stream its transactions to RabbitMQ, so in this version of TigerBeetlex I’ve added structs to decode the JSON payload of the events, plus a guide showing how to use Broadway to build a pipeline that processes TigerBeetle CDC data.
Thanks @rbino for your efforts in introducing TigerBeetle to us.
Since I work in the payments domain this is very interesting, but I had a question: do I need another database to store configuration and related processing requests/responses, etc., besides using TigerBeetle for the account ledger and postings?
TigerBeetle is meant to be used only for (financial) transaction processing, so you’ll probably need another general purpose database in your system for a full application.
Basically, you use TigerBeetle as your data plane (to process a high volume of transactions) and your general purpose database of choice as your control plane, storing all the other kinds of data that change less frequently (e.g. user metadata, application data, and so on).
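To make the split concrete, here’s a hedged sketch of which data lives on which side. The transfer fields mirror the ones mentioned later in this thread (`user_data_128`, `code`); the exact struct shape is in the TigerBeetlex docs, so treat this as illustrative data only:

```elixir
# Control plane (Postgres or similar): rich, slow-changing business data.
invoice = %{
  id: 42,
  customer: "ACME Corp",
  currency: "EUR",
  amount_cents: 150_00,
  status: :billed,
  issued_at: ~D[2025-06-01]
}

# Data plane (TigerBeetle): a fixed-size transfer that only references
# the invoice row through user_data_128. Account ids, ledger, and code
# values here are hypothetical examples.
transfer = %{
  id: 1,                        # a unique 128-bit id in the real client
  debit_account_id: 1001,       # the customer account
  credit_account_id: 2001,      # the revenue account
  amount: invoice.amount_cents,
  user_data_128: invoice.id,    # back-reference to the Postgres row
  ledger: 978,                  # e.g. one ledger per currency (978 = EUR)
  code: 10                      # business meaning, e.g. "invoice billed"
}
```

The point is that TigerBeetle only ever sees small, fixed-size accounting facts, while everything descriptive stays in the general purpose database and is joined back via the `user_data` fields.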
It had some nice code and was overall a good example of a Zig NIF in Elixir, but it contained some very strange bugs and approaches to ID generation (like having a global agent singleton generating IDs for a solution that’s supposed to handle hundreds of thousands of requests per second). Please take a look when you have time, thanks!
How’s your experience with TigerBeetle been so far? I’m evaluating it for an upcoming project, and I still have a bunch of questions.
I’m curious what the best approach is for something you expect to be atomic across your general purpose DB (Postgres) and your TigerBeetle DB.
For example, let’s say you have an Invoice in your general purpose DB and you’re using its id as user_data_128 on the TB Transfer. So when you bill an account, you would create a transfer that debits it for this amount, crediting the account doing the invoicing. And then the opposite when the billed account pays up, crediting them and debiting the account doing the invoicing. At that point, I would probably want to mark the Invoice as paid in some way. But then you run into questions like: what happens if the server goes down when the transfer has been written in TB but not in the general purpose DB? Or if you go to update the invoice and the write fails because the invoice never existed, etc.?
I’m wondering if it even makes sense to have an explicit enum like that on the Invoice in the first place, or if TigerBeetle should just be the only source of truth: when you need to look that up, you just hit TB to see if there is a transfer with that Invoice’s id as user_data_128 and a code that represents an invoice being paid. E.g. code=10 is an invoice being billed, code=20 is an invoice being paid, code=30 is an invoice being cancelled or otherwise comped.
To be transparent, I haven’t actually used the library in a project yet; I’m mainly developing it as an interesting project where I can work with both Elixir and Zig. You can have a look at TigerFans for an example of a project built with TigerBeetle (not in Elixir, but data modeling is language agnostic anyway).
Regarding transactions across TigerBeetle and Postgres, there’s a recent post on the TigerBeetle blog detailing why you’d want to use a specific ordering for writes and reads (i.e. write to TigerBeetle last, read from TigerBeetle first). For temporary failures, TigerBeetle pushes towards end-to-end idempotency so operations can be safely retried; more details here.
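As a rough sketch of that ordering, here the Postgres and TigerBeetle writes are injected as plain functions rather than real Ecto/TigerBeetlex calls, and the `{:error, :exists}` return stands in for TigerBeetle reporting a transfer id it has already applied, which is what makes blind retries safe:

```elixir
# Sketch of "write to TigerBeetle last" with idempotent retries.
# `write_invoice` and `create_transfer` are hypothetical stand-ins for
# the real Ecto and TigerBeetlex calls.
defmodule BillingFlow do
  def pay_invoice(invoice_id, transfer_id, write_invoice, create_transfer) do
    # 1. Record intent in the general purpose DB first, with a *stable*
    #    transfer id, so a crash between the two writes leaves enough
    #    state behind to retry.
    :ok =
      write_invoice.(invoice_id, %{status: :payment_pending, transfer_id: transfer_id})

    # 2. Write to TigerBeetle last. Because the transfer id is stable,
    #    a retry either creates the transfer or finds it already
    #    applied -- both mean the invoice is paid.
    case create_transfer.(transfer_id) do
      result when result in [:ok, {:error, :exists}] ->
        write_invoice.(invoice_id, %{status: :paid})

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```

On restart you can scan for invoices stuck in `:payment_pending` and call the same function again with the stored `transfer_id`; the worst case is a harmless duplicate attempt that TigerBeetle deduplicates by id.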
Ah, good to know. I haven’t dug into your code yet, but based on my understanding of how TB handles batches of up to 8189 events (but only one batch at a time), my intuition is that clustered Elixir/Erlang could be great for this. My approach would be a single GenServer on the cluster acting as the TB client: it stacks up the events to be sent off, then actually sends the request when it reaches the limit OR when some set amount of time has passed since the first event was queued up.
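A minimal sketch of that batcher, assuming the flush callback is injected (in a real app it would be the TigerBeetlex call, and you’d likely register the process globally so the cluster shares a single client):

```elixir
# Batching GenServer sketch: queues events and flushes either when the
# batch reaches @max_batch or @flush_after ms after the first queued
# event. @flush_after is an arbitrary illustrative value.
defmodule TBBatcher do
  use GenServer

  @max_batch 8189   # TigerBeetle's max events per batch
  @flush_after 10   # ms to wait before flushing a partial batch

  def start_link(flush_fun), do: GenServer.start_link(__MODULE__, flush_fun)
  def submit(pid, event), do: GenServer.cast(pid, {:submit, event})

  @impl true
  def init(flush_fun), do: {:ok, %{flush_fun: flush_fun, batch: [], timer: nil}}

  @impl true
  def handle_cast({:submit, event}, state) do
    batch = [event | state.batch]

    cond do
      length(batch) >= @max_batch ->
        {:noreply, flush(%{state | batch: batch})}

      state.timer == nil ->
        # First event of a new batch: start the flush timer.
        timer = Process.send_after(self(), :flush, @flush_after)
        {:noreply, %{state | batch: batch, timer: timer}}

      true ->
        {:noreply, %{state | batch: batch}}
    end
  end

  @impl true
  def handle_info(:flush, state), do: {:noreply, flush(state)}

  defp flush(%{batch: []} = state), do: %{state | timer: nil}

  defp flush(state) do
    if state.timer, do: Process.cancel_timer(state.timer)
    state.flush_fun.(Enum.reverse(state.batch))
    %{state | batch: [], timer: nil}
  end
end
```

Events are prepended for O(1) inserts and reversed on flush so they reach TigerBeetle in submission order; a size-triggered flush cancels the pending timer so the same batch isn’t sent twice.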
The TigerFans blog post shows how the author ran into that as soon as he added more workers. It would be cool to do the TigerBeetle Ticket Challenge in Elixir (maybe I’ll be the one who gets nerd-sniped into it). One of the really impressive things about the way TB handles these requests is that it makes the clients do a little more work on their end, so that what gets sent to TB is exactly what it needs, which lets you take work off its hot path and also lets you do that work in parallel.
This is exactly the experiment I did some time ago for the TigerBeetle Hackathon.
Note that back then the underlying TigerBeetle client didn’t have automatic batching, but now it does; I’m not sure why TigerFans wasn’t able to leverage this, though.