Best way to start Mnesia in Elixir/Phoenix application

We’ve just started using mnesia (only mnesia, no Memento or Amnesia) in our Phoenix 1.5 application, and we’re starting it like this on application startup:


    def start(_type, _args) do
      # Mnesia must be stopped before the schema can be (re)created
      :mnesia.stop()
      :mnesia.create_schema([node()])
      :mnesia.start()
      :mnesia.create_table(:rps, attributes: [:uuid, :gamestate], disc_copies: [node()])

      children = [
        # code omitted

I added :mnesia.stop as the first call because I ran into a situation where I had deleted the Mnesia.node@host directory and :mnesia.create_schema would not recreate it without the prior stop.

As per the docs you need to stop Mnesia in order to create the schema:

mnesia:create_schema(NodeList) initializes a new, empty schema. This is a mandatory requirement before Mnesia can be started. Mnesia is a truly distributed DBMS and the schema is a system table that is replicated on all nodes in a Mnesia system. This function fails if a schema is already present on any of the nodes in NodeList. The function requires Mnesia to be stopped on all db_nodes contained in parameter NodeList. Applications call this function only once, as it is usually a one-time activity to initialize a new database.

When starting Mnesia you must wait for all the tables to be ready, as the docs say:

Table initialization is asynchronous. The function call mnesia:start() returns the atom ok and then starts to initialize the different tables. Depending on the size of the database, this can take some time, and the application programmer must wait for the tables that the application needs before they can be used. This is achieved by using the function mnesia:wait_for_tables(TabList, Timeout), which suspends the caller until all tables specified in TabList are properly initiated.
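Putting the two quoted requirements together, a startup sequence for an application like the one above might look like the sketch below. This is an assumption-laden sketch, not the poster's actual code: the table and attribute names (`:rps`, `:uuid`, `:gamestate`) are taken from the snippet earlier in the thread, and the module name and timeout are made up.

```elixir
defmodule MyApp.Mnesia do
  # Sketch of a Mnesia startup sequence: stop, create the schema,
  # start, create tables, then block until the tables are loaded.
  def init! do
    # create_schema/1 requires Mnesia to be stopped; on an existing
    # schema it returns an :already_exists error, which is harmless here.
    :mnesia.stop()
    :mnesia.create_schema([node()])
    :ok = :mnesia.start()

    :mnesia.create_table(:rps,
      attributes: [:uuid, :gamestate],
      disc_copies: [node()]
    )

    # start/0 returns :ok before tables are loaded, so wait explicitly.
    :ok = :mnesia.wait_for_tables([:rps], 5_000)
  end
end
```

Calling `MyApp.Mnesia.init!/0` from `Application.start/2` (before building the supervision tree) would reproduce the pattern discussed in this thread.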

A word of caution:


Thanks for the tips, especially the bit about waiting for tables

Regarding the thread you posted: actually reading it in depth really enriched my mnesia research :stuck_out_tongue:

I’m using mnesia right now for saving/loading GenServer state for multiplayer gamestate. At the moment we don’t need the persistence to be very reliable, but I have an eye towards a RocksDB-like or similar solution for persistence reliability should the need arise


@njwest we used mnesia for gaming too. It’s great for storing persistent state, as the lookup speed is that of ETS (5M+ reads per second, from multiple schedulers) and writes are fairly fast (~800K/s) for a small server.

The way you do it is pretty much the way we did it (we don’t use mnesia anymore). Even if mnesia is not started via extra_applications, that :mnesia.stop ensures nothing weird happens.


Do you mind saying why you moved away from Mnesia?

Our use case was a durable, fast KV store (think Cloudflare KV) that can be grepped, where ideally the objects map 1:1 with Erlang terms, so we don’t need to write SQL or another query language, or deal with type inconsistencies.

What we thought we wanted was Mnesia. What it gave us was a very complex multi-use database that did not quite do everything the way we wanted, lost data, and was randomly slow under load.

What we really wanted was :ets that persisted to disk.

So we made :ets that persists to disk on every write (not via tab2file, which does not work for this), albeit we're not 100% happy with the write performance of the older RocksDB version it's using (newer RocksDB has concurrent unordered writes, and the workload is 100% writes, since reads hit the ETS tables). Also, eventually it needs to be distributed with some kind of simple routing algorithm (think etcd without strong consistency).
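To make the "ETS that persists to disk on every write" idea concrete, here is a stripped-down toy sketch. The real project described here uses RocksDB underneath; this version just length-prefixes each write into an append-only log file and replays the log on open. Module and function names are invented for illustration.

```elixir
defmodule KvSketch do
  # Toy "ETS persisted to disk on every write": reads hit ETS,
  # each write is also appended to an on-disk log for recovery.
  def open(table, path) do
    tab = :ets.new(table, [:set, :public])

    # Replay previous writes so ETS reflects the on-disk state.
    if File.exists?(path) do
      path
      |> File.read!()
      |> decode_all()
      |> Enum.each(fn {k, v} -> :ets.insert(tab, {k, v}) end)
    end

    {:ok, log} = File.open(path, [:append, :binary])
    {tab, log}
  end

  def put({tab, log}, key, value) do
    bin = :erlang.term_to_binary({key, value})
    # Length-prefix each record so the log can be replayed later.
    IO.binwrite(log, <<byte_size(bin)::32, bin::binary>>)
    :ets.insert(tab, {key, value})
    :ok
  end

  def get({tab, _log}, key) do
    case :ets.lookup(tab, key) do
      [{^key, value}] -> value
      [] -> nil
    end
  end

  defp decode_all(<<size::32, bin::binary-size(size), rest::binary>>),
    do: [:erlang.binary_to_term(bin) | decode_all(rest)]

  defp decode_all(<<>>), do: []
end
```

Note this toy version never compacts the log, which is exactly the unbounded-growth problem discussed later in the thread.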

The project is called mnesia_kv. Maybe the ideal version of it won't use RocksDB but just a simple WAL that gets aggregated somehow at thresholds (and is optimized for a 100% write workload), but RocksDB was the quick and reliable choice at the time.

NOTE: we looked at patching Mnesia, but the rabbit hole went too deep; it would take too many hours. That's for a larger company like Facebook that has the resources to go down that route.


Why not just write to a log file on disk?

Since the first time I read the Kafka design docs I have been fascinated with the power of writing directly to disk, and now I have found another interesting read:

I have always wanted to have a log of everything that occurs in my application, so that I can recover from a disaster by replaying it, more or less like Event Sourcing but without all its overhead and complexity.

Now, with these Mnesia issues, I have also thought of just wrapping :ets with direct disk persistence, or writing another backend for Mnesia. But, like you, I think Mnesia is a rabbit hole, so I will just try to write a simple disk log that can cope with high throughput, and then use it to wrap :ets, or as a backup for Mnesia to be used when a netsplit occurs or I lose all the data, as has already happened to me.
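For the "simple disk log" idea, it is worth noting that OTP already ships a building block: the :disk_log module in kernel, which journals Erlang terms to a log file and can replay them. A minimal sketch (log name and file path are made up for illustration):

```elixir
# Journal Erlang terms durably with OTP's built-in :disk_log,
# then replay them from the start, e.g. to rebuild an :ets cache.
{:ok, log} =
  :disk_log.open(
    name: :my_journal,
    file: ~c"/tmp/my_journal.LOG",
    type: :halt
  )

# Log each state change as a plain Erlang term (no serialization code needed).
:ok = :disk_log.log(log, {:put, :user_1, %{name: "alice"}})

# sync/1 forces buffered writes to disk for durability.
:ok = :disk_log.sync(log)

# Replay from the start; chunk/2 returns terms in batches.
{_continuation, terms} = :disk_log.chunk(log, :start)
:ok = :disk_log.close(log)
```

:disk_log also supports `type: :wrap` with size limits, which gives you a bounded log at the cost of losing old entries, so it does not by itself solve the compaction problem discussed below.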

This repo may be a good starting point:

But we already have another one in Erlang that is distributed:

Eventlog seems attractive to me because it is in Elixir, so I can easily fork it and make it work the way I intend, but I may try vonnegut and adopt it if it doesn't suffer from the same issues I am seeing now with losing data due to the way it uses the cache to write to disk.

While reading this blog post I found that Mnesia uses selective receive:

As the blog post mentions, selective receive can be a cause of slowdowns under load:


Selective receive is an interesting functionality that comes built into Erlang/Elixir. As with every other tool it has both strengths and weaknesses. Selective receive provides some advantages when working with relatively small message boxes (namely prioritised message processing), however, using selective receive without being aware of the potential costs can put your overall application stability at risk.

So, in your case, maybe Mnesia was slowing down due to one of these cases.

It's not that simple, because I want the final data to be stored with periodic snapshots/log flushes; otherwise, the larger the DB gets, the significantly longer it takes to rebuild it from the first log entry. Say you have a 128MB WAL: once 128MB is reached, flush it into an SST or another similar storage medium. If the app crashes anywhere along the line, it only needs to parse at most ~128MB of journal and read the rest from the main on-disk table. If everything is a log, it is indeed very simple, but the dataset will always grow. Say I do 1b inserts then 1b-1 deletes. That is a log of size 2b-1, or an SST of size 1.
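The size argument above can be illustrated with a toy compaction in plain Elixir (the operation shapes `{:put, k, v}` / `{:delete, k}` are invented for the sketch):

```elixir
# N inserts followed by N-1 deletes: the raw log holds 2N-1 entries,
# but compacts down to a single live key.
n = 1_000

log =
  Enum.map(1..n, &{:put, &1, :value}) ++
    Enum.map(1..(n - 1), &{:delete, &1})

# Compaction: fold the log into a map keeping only live keys.
# This is conceptually what flushing a WAL into an SST does.
table =
  Enum.reduce(log, %{}, fn
    {:put, k, v}, acc -> Map.put(acc, k, v)
    {:delete, k}, acc -> Map.delete(acc, k)
  end)

length(log)      # 1999 entries in the raw log (2n - 1)
map_size(table)  # 1 live key after compaction
```

Replaying the compacted table plus a bounded journal is what keeps restart time proportional to the live data, not to the full history.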

A netsplit is not that dangerous if you're just using a KV store: take the newest record and that's it, or deep-merge the newer one on top of the older one.

It could be; it could also be how the log persistence works internally.

I never really understood Kafka (and similar) and couldn't find a good use case for it.


This really depends on your use case and the shape of your data.

To me it sounds like a very dangerous shoot-yourself-in-the-foot gun…

It’s excellent for when you want to do Event Sourcing or want to have a decoupled microservices architecture. It requires another mindset and programming approach in your application.

Using Kafka as a data lake that anyone in the organization can consume from or produce into is very powerful, and it opens endless opportunities for teams to make progress without depending on or waiting for each other.

Normally you will use Kafka to add data as facts after you have processed it, so that others can do whatever they want with it, like analytics/reports/audits, or just to trigger further actions. Another use is to serve as a backup that your application can recover from, but you need to code your application to support replay without performing the side effects, like sending emails or doing a wire transfer.

Eventual consistency. If you need CP (of CAP) then of course it is dangerous, but IMO you cannot have a low-latency distributed system (planetary scale, cringe) with CP; you must be CA.

What does this even mean?

As a datalake Kafka makes sense.


Maybe this article can help you understand it better and see how it fits with Event Sourcing:

Kafka is not exactly a database, but it can almost be used like one.

See these articles to better understand what a data lake is:

Event sourcing involves modeling the state changes made by applications as an immutable sequence or “log” of events. Instead of modifying the state of the application in-place, event sourcing involves storing the event that triggers the state change in an immutable log and modeling the state changes as responses to the events in the log.

Ah, so we get this naturally as a property of using an immutable actor-model language. For example, updating the state of a process (gen_server) is basically driven by an immutable log of events via the mailbox.

Consider a Facebook-like social networking app (albeit a completely hypothetical one) that updates the profiles database when a user updates their Facebook profile. There are several applications that need to be notified when a user updates their profile — the search application so the user’s profile can be reindexed to be searchable on the changed attribute; the newsfeed application so the user’s connections can find out about the profile update; the data warehouse ETL application to load the latest profile data into the central data warehouse that powers various analytical queries and so on.

This is a disaster waiting to happen, riddled with inconsistencies: if, say, one of the nodes goes down, a bunch of events are now missed.

Okay, I get the idea now. Event Sourcing is basically an event subscription to all incoming data. But why not just do an event subscription to a distributed database instead, for example Cloudflare KV or Firestore? A user’s profile got updated? Instead of listening for the “UPDATE” event, the database itself becomes eventually consistent across all nodes; once each node gets the update that the profile changed, the database generates an event and fires off notifications to each subscriber (connected user). The difference is that if something goes down, when it comes back up it becomes eventually consistent; there is no “event to miss”, and things don’t go haywire if events arrive out of order.

When using Event Sourcing you need to persist the events, and you can do it with whatever database technology you prefer. Kafka is an excellent way to do it in an intermediate stage, and now that it uses the Raft consensus algorithm, you will not have a disaster waiting to happen. I used the term intermediate stage because normally events are consumed from Kafka and persisted in other databases, like Postgres.

You can take a look at the Elixir library Commanded by @slashdotdash, which does Event Sourcing/CQRS in Elixir and by default uses Postgres:

Event sourcing is persisting application state changes as domain specific, intention revealing events (e.g. UserRegistered). Current state is built by reducing the list or stream of events, similar to Enum.reduce/3 in Elixir. For event sourcing you ideally want to use a proper event store, such as EventStoreDB or the Elixir EventStore library I wrote which uses Postgres for persistence.
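The Enum.reduce/3 analogy can be made concrete. A hypothetical sketch, with invented event and state shapes, of rebuilding current state from a stream of domain events:

```elixir
# Current state is a left fold over the event log: each domain event
# is applied to the accumulated state, newest event last.
events = [
  %{type: :user_registered, name: "alice", email: "a@example.com"},
  %{type: :email_changed, email: "alice@example.com"}
]

state =
  Enum.reduce(events, %{}, fn
    %{type: :user_registered, name: n, email: e}, _acc ->
      %{name: n, email: e}

    %{type: :email_changed, email: e}, acc ->
      %{acc | email: e}
  end)

# state => %{name: "alice", email: "alice@example.com"}
```

A real event store would stream events from persistence rather than hold them in a list, but the fold is the same.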

Event streaming is where events are published to interested subscribers for loosely-coupled processing, service integration, etc. Kafka is designed exactly for this. It would be more similar to Elixir’s Registry module when used for pub-sub or using Stream.each/2 or the Broadway library. Often consumers use ack/nack to guarantee at-least-once message delivery so that they cannot miss any events. They may store their current processing position to a durable store so they can stop and resume processing safely. It’s possible to use change data capture to publish updates from a data store, such as table UPDATEs but unlike event sourcing these changes are not domain specific, but instead are general insert/update/delete operations.

There are two varieties of events used with event streaming: Event Notification and Event-Carried State Transfer (ECST). Event notification is used to indicate when something in an application has occurred, such as a user registration, but the event contains minimal information (“thin” events). ECST is where more information about the current state is included in the event (“fat” events). You can also combine both event sourcing and event streaming by publishing domain events used for event sourcing to consumers for event streaming.

See What do you mean by “Event-Driven”? by Martin Fowler.
