MnesiaKV - new KV store on top of RocksDB + ETS

Most of the info is in the description, but it's basically ETS with persistence, for use cases where Mnesia is much too heavy or where you actually don't want to lose any data.

There are only two operations, merge and delete. All merges are deep.
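For anyone wondering what "deep" means here: nested maps get merged recursively rather than replaced wholesale. A rough sketch of those semantics (illustrative only, not MnesiaKV's actual implementation):

```elixir
defmodule DeepMergeSketch do
  # Recursively merge two maps: when both sides hold a map under the
  # same key, merge those maps too; otherwise the right side wins.
  def deep_merge(left, right) when is_map(left) and is_map(right) do
    Map.merge(left, right, fn _key, l, r -> deep_merge(l, r) end)
  end

  def deep_merge(_left, right), do: right
end

# deep_merge(%{a: %{b: 1, c: 2}}, %{a: %{c: 3}})
# => %{a: %{b: 1, c: 3}}   (:b survives, :c is overwritten)
```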

There are also simple subscriptions to changes.

Lemmy know what you think, love it / hate it / suggestions? hehe…


@vans163 cool project, looking forward to digging into it more!

As a minor note, maybe this isn't the clearest name? There is, IIRC, another effort to make actual :mnesia use RocksDB for persistence, and this project doesn't actually work like, or otherwise use, Mnesia.


Will have to check this out… it's an awesome concept! RocksDB does a good job as a KV store, but it's a bummer to miss out on ETS abilities like match. Unfortunately this looks to be using the Rust-based RocksDB bindings, which makes cross-compiling kind of suck.

Yea, the name is not the best, should probably rename it… any ideas? The reason I called it Mnesia is that we often use Mnesia in place of Redis, but don't use Mnesia as an actual database; Mnesia is so much more, indeed.

Yea, the main idea is to just mirror all writes into RocksDB; all reads still go only to ETS. I started brainstorming ways to do journaling and SSTs, then thought: why not just use RocksDB.
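That write-through shape, sketched out (the ETS calls are real; `disk_put` is a hypothetical placeholder for whatever the RocksDB binding exposes, and the merge here is shallow for brevity):

```elixir
defmodule WriteThroughSketch do
  # Reads hit ETS only; every write is mirrored to the disk layer.
  def start do
    :ets.new(:kv, [:named_table, :public, :set])
  end

  # `disk_put` stands in for the real RocksDB binding call.
  def merge(key, value, disk_put \\ fn _k, _v -> :ok end) do
    merged =
      case :ets.lookup(:kv, key) do
        [{^key, old}] when is_map(old) and is_map(value) -> Map.merge(old, value)
        _ -> value
      end

    :ets.insert(:kv, {key, merged})
    # Mirror the write to persistent storage (RocksDB in MnesiaKV).
    disk_put.(key, :erlang.term_to_binary(merged))
    merged
  end

  def get(key) do
    case :ets.lookup(:kv, key) do
      [{^key, value}] -> value
      [] -> nil
    end
  end
end
```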

Not sure about the cross-compile, maybe I could look into it. Is it rust-rocksdb that does not compile on OSX or Windows? Or :rocker, or?

how about RockETS?


I ran some preliminary benchmarks, and I'm not too happy with what I am seeing in terms of how much concurrent writers affect performance. Maybe there is some tuning knob?

4 core i5-7500 CPU @ 3.40GHz
ext4, consumer SSD

1.6m write tps

266k write tps

120k write tps

8/16 core i9-9900K CPU @ 3.60GHz

5m write tps
5m write tps
3.8m write tps

640k write tps
1.02m write tps
1m write tps

160k write tps
189k write tps
228k write tps
260k write tps
330k write tps

I think it’d be worth trying to integrate Benchee for benchmark running. It makes pretty graphs and takes care of warming caches, etc. In your benchmark you iterate from 1..100000; I’d try varying that number, removing the timer, and maybe replacing ets with an Enum reduction over an accumulator, since the timer and ets calls add overhead of their own, which makes the benchmark impure. With Benchee it is much easier to parameterize tests like that, to find the sweet spot and what could use improvement.
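A minimal Benchee setup along those lines might look like this (assumes the `benchee` dep; the ETS-insert job here is a stand-in for the actual MnesiaKV write call):

```elixir
# In mix.exs: {:benchee, "~> 1.0", only: :dev}
Benchee.run(
  %{
    "ets insert" => fn {table, n} ->
      Enum.each(1..n, fn i -> :ets.insert(table, {i, i}) end)
    end
  },
  # Parameterizing the iteration count as inputs lets Benchee report
  # each size separately instead of hand-rolling a timer loop.
  inputs: %{
    "small (1k)" => {:ets.new(:bench_small, [:public]), 1_000},
    "large (100k)" => {:ets.new(:bench_large, [:public]), 100_000}
  },
  warmup: 2,
  time: 5
)
```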


Looking harder, it looks like rocker is a Rust NIF. It is probably worth timing the Rust bindings separately and gauging whether the NIF is properly configured. In some cases a yielding NIF might help things, or, if the benchmarks look really bad, a dirty NIF. If the NIF's overhead is larger than expected, it can be disastrous, as NIFs block schedulers. Knowing that the underlying NIF code is performant is key for this library. The perf test included writes and reads with very small binaries, while your benchmark uses a map that continually grows. Running your benchmark with inputs of smaller size might highlight weaknesses as well.
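One quick way to time the binding in isolation, before any ETS mirroring gets involved, is `:timer.tc/1` (sketch; the fn body is a placeholder you'd swap for the raw rocker / rust-rocksdb call under test):

```elixir
# :timer.tc/1 returns {microseconds, result} for a single invocation.
# Substitute the fn body with the raw NIF call to see how much of the
# per-write cost the NIF itself accounts for.
{micros, :ok} =
  :timer.tc(fn ->
    # placeholder for the raw NIF call under test
    :ok
  end)

IO.puts("single call took #{micros} µs")
```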

erlang:term_to_binary is called in your benchmark. It has overhead of its own; pre-constructing these binaries should be considered as well.
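Concretely, hoisting the serialization out of the timed section looks something like this (illustrative, with a plain ETS table standing in for the store under test):

```elixir
# Serialize once, up front, so the timed section measures only the
# store's write path, not :erlang.term_to_binary/1 overhead.
payloads =
  for i <- 1..100_000 do
    {i, :erlang.term_to_binary(%{id: i, value: i * 2})}
  end

table = :ets.new(:bench, [:public])

{micros, :ok} =
  :timer.tc(fn ->
    Enum.each(payloads, fn {k, bin} -> :ets.insert(table, {k, bin}) end)
  end)

IO.puts("#{round(100_000 / (micros / 1_000_000))} writes/sec")
```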


It's really more specific to Nerves. The last time I tried it, rustc/cargo was alright at cross-compiling stuff with its own toolchain. However, this means rustc/cargo doesn't/didn't properly look up the $LD environment variable set for the C cross-compiler linker & libc. You can hack it and pass cargo the correct flags, but you have to massage the Rust arch type and it's a hassle. So it really only affects actual cross-compiling (e.g. compiling on linux/x86 for linux/arm).
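For reference, the "pass cargo the correct flags" hack usually amounts to something like this (the armv7 target triple and the `$CC` variable are illustrative; substitute whatever your Nerves toolchain provides):

```shell
# Cargo ignores $LD, but it does honor per-target linker env vars.
# The variable name embeds the target triple, uppercased, with
# dashes replaced by underscores.
export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER="$CC"

# Build the NIF crate for the cross target.
cargo build --release --target armv7-unknown-linux-gnueabihf
```

The same mapping can live in `.cargo/config.toml` under a `[target.armv7-unknown-linux-gnueabihf]` section instead, which avoids having to export it per shell.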