I’ve just published my tiny library:
It’s an indexed log for events, inspired by some Kafka concepts. It models a “stream” as a log plus its offset index, and it can append events and read them sequentially from a given offset. It seems to work just fine, but I’d like to improve write throughput: a simple benchmark shows ~8K events/s at 4 MB/s, and without indexing it’s a few times faster.
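To make the model concrete, here’s a minimal sketch of the idea in Python (this is not the library’s actual API or on-disk format, just a hypothetical illustration): events are length-prefixed and appended to a byte log, while the index maps each offset to the event’s byte position so reads can start anywhere.

```python
class IndexedLog:
    """Toy in-memory version of a log plus offset index (illustrative only)."""

    def __init__(self):
        self.log = bytearray()  # append-only byte log
        self.index = []         # index[offset] = byte position of that event

    def append(self, event: bytes) -> int:
        """Append one event; return its offset."""
        offset = len(self.index)
        self.index.append(len(self.log))
        # Length-prefix each event so a reader knows where it ends.
        self.log += len(event).to_bytes(4, "big") + event
        return offset

    def read_from(self, offset: int):
        """Yield events sequentially starting at the given offset."""
        for pos in self.index[offset:]:
            size = int.from_bytes(self.log[pos:pos + 4], "big")
            yield bytes(self.log[pos + 4:pos + 4 + size])


log = IndexedLog()
log.append(b"created")
log.append(b"updated")
log.append(b"deleted")
print(list(log.read_from(1)))  # → [b'updated', b'deleted']
```

The index buys random access to any offset at the cost of extra work on every append, which is where the throughput difference shows up.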
Profiling suggests the hotspot is send/call, and I’m aware that message passing is costly.
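If the per-call overhead really is the bottleneck, batching is the usual fix: send N events in one message so the fixed cost is paid once per batch instead of once per event. A rough cost model (numbers entirely made up, just to show the shape of the saving):

```python
# Hypothetical cost model: each call pays a fixed overhead plus a
# per-byte cost. Batching amortizes the fixed part across N events.

PER_CALL_COST = 10  # fixed overhead per message/call (arbitrary units)
PER_BYTE_COST = 1   # cost proportional to payload size


def cost_unbatched(events):
    """One call per event: fixed overhead paid len(events) times."""
    return sum(PER_CALL_COST + PER_BYTE_COST * len(e) for e in events)


def cost_batched(events, batch_size):
    """One call per batch: fixed overhead paid once per batch."""
    total = 0
    for i in range(0, len(events), batch_size):
        batch = events[i:i + batch_size]
        total += PER_CALL_COST + sum(PER_BYTE_COST * len(e) for e in batch)
    return total


events = [b"x" * 8] * 1000
print(cost_unbatched(events))     # → 18000
print(cost_batched(events, 100))  # → 8100
```

Whether that translates into real throughput depends on how the library hands events to the writer, so treat it as a direction to benchmark rather than a guaranteed win.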
If any of you would like to take a look and play with it, maybe we could improve it together.
I’ve just spent a bit of time looking through your library and getting to grips with how it works internally. It looks like a very similar solution to what I’ve been building myself, albeit nicely packaged up as a standalone lib. Mine’s currently all tangled up with the work-in-progress memory image app that I’ve been working on (thread about it here).
Where mine differs is that I keep the index in memory, with the ability to rebuild it directly from the log. It’s based on the KV store built in this series of articles, although I’ve made some changes to suit streaming events from a given offset rather than key-value lookups.
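The rebuild step is cheap conceptually: since the log itself is the source of truth, the in-memory index is just the byte position of each event, recovered by one sequential scan. A sketch, assuming a hypothetical length-prefixed record format (not the actual layout of either library):

```python
def rebuild_index(log: bytes) -> list:
    """Return index[offset] -> byte position, by scanning a
    length-prefixed append-only log from the start."""
    index, pos = [], 0
    while pos < len(log):
        index.append(pos)
        size = int.from_bytes(log[pos:pos + 4], "big")
        pos += 4 + size  # skip the length prefix plus the payload
    return index


# Build a tiny log of three length-prefixed events, then rebuild.
log = b"".join(len(e).to_bytes(4, "big") + e
               for e in (b"a", b"bb", b"ccc"))
print(rebuild_index(log))  # → [0, 5, 11]
```

The nice property is crash safety: if the index is lost or stale, a restart can always reconstruct it from the log alone.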
Not sure how that impacts performance as I haven’t properly benchmarked it, but when I next get a chance I’ll swap my log out for yours, and perhaps package mine up nicely and get some tests/benchmarks around it.
Thanks for publishing this. It’s great to take a look at another way of solving the problem. I’ll be sure to return the favour once mine’s a little more presentable!