Can we beat Kafka if we build it in Elixir?

I am going through the Kafka architecture. All the features Kafka provides are already in Erlang. I would like to hear your opinion on a Kafka-type implementation in Erlang.

Kafka® is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.


I can’t answer your question, but bear in mind that it’s possible that kafka is actually more optimized for what it does than the Erlang VM. The Erlang VM is optimized for server architectures and easy concurrency, but it’s possible to do better if you have a very specific task.


It would be a duplication of effort. Why not build something new?


It would be super useful, even if it doesn’t perform as well as Kafka, just for the ops simplicity.


I know they are not absolutely equivalent, but if you want to see something similar to Kafka that is written in Erlang, look at RabbitMQ.


Pretty different use cases between RabbitMQ and Kafka


I’m interested to know what you mean when you say, “all the features what the kafka is providing are already in Erlang”. Certainly all of the building blocks are there. But Kafka is a fairly narrow tool, albeit a robust one. I’m not sure we have something that is that focused in elixir.

Along these same lines I’ve been toying with the idea of using our raft implementation to build out a distributed log. I use Kafka pretty heavily at work, but I think that could be a really useful thing to have available in Elixir. But the amount of work and engineering that has gone into making Kafka as robust as it is shouldn’t be understated.


I think it is about scale, and about how many people are currently contributing to and using Kafka.
For small scale and simplicity, you probably don’t need Kafka and Erlang/Elixir is enough.

For big scale, multi-region deployments, Apache Kafka could be the better choice.
But to maintain Kafka you will need a separate team :slight_smile:


Forget about building something … for the moment. Are there any ideas inside Kafka that are applicable to systems built in the BEAM/OTP ecosystem? I suspect there might be. Kafka in a Nutshell

On that note: ThoughtWorks: Recreating ESB antipatterns with Kafka

Kafka is becoming very popular as a messaging solution, and along with it, Kafka Streams is at the forefront of the wave of interest in streaming architectures. Unfortunately, as they start to embed Kafka at the heart of their data and application platforms, we’re seeing some organizations recreating ESB antipatterns with Kafka by centralizing the Kafka ecosystem components — such as connectors and stream processors — instead of allowing these components to live with product or service teams. This reminds us of seriously problematic ESB antipatterns, where more and more logic, orchestration and transformation were thrust into a centrally managed ESB, creating a significant dependency on a centralized team. We’re calling this out to dissuade further implementations of this flawed pattern.

Also culturally speaking successful JVM-based projects have a tendency towards bloat as they mature, while sometimes not minimizing their (inter-)dependencies. I have no idea if that is the case for Kafka - but if it is, it might be getting to the point of moving from “just use Kafka” to “you don’t need Kafka for that”.

The Handling Failure section of the above article reminded me of the Consensus and Leader Election section of your talk (good one, BTW).

But the amount of work and engineering that has gone into making Kafka as robust as it is shouldn’t be understated.

To some degree that engineering could be more valuable than the product itself. But you would have a better idea of whether Kafka may have gotten a bit “bulky” for some business use cases and whether there are instances where a lighter weight alternative may be a better fit, provided the organization isn’t already using Kafka for some other, legitimate reason.

I’m not sure whether there is a clear point at which the benefits of adopting Kafka outweigh the costs. Clarifying that in itself could be valuable. Naturally, there are already opinions that you should adopt Kafka before you need it, so that you’re familiar with it once you do need it.

In the short term it probably makes more sense to focus on one single, excellent Kafka client library for Elixir so that organizations already using Kafka don’t reject Elixir out of hand because of suboptimal integration with Kafka (seems some complaints have more to do with the available Kafka client libraries than Kafka itself).

Which Kafka lib are you using? How stable is it?


To me the most useful use of Kafka is as a buffer that can absorb events in a data pipeline so you can process them asynchronously. It allows you to handle spikes well (up to a point). I am not a big fan of ESBs, but if I had to build a project around one I’d rather use NATS.
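That “buffer that absorbs spikes” idea maps pretty directly onto GenStage’s demand model: producers queue events when consumers are busy and drain the queue as demand comes back. Here’s a minimal sketch (module and function names are mine, and it assumes the `gen_stage` hex package) of a producer that buffers pushed events and emits them only when downstream asks:

```elixir
# Sketch: a spike-absorbing buffer as a GenStage producer.
# Assumes the gen_stage package; SpikeBuffer/push are illustrative names.
Mix.install([{:gen_stage, "~> 1.2"}])

defmodule SpikeBuffer do
  use GenStage

  def start_link(_opts \\ []), do: GenStage.start_link(__MODULE__, :ok, name: __MODULE__)

  # Fire-and-forget ingestion; events queue up during spikes.
  def push(event), do: GenStage.cast(__MODULE__, {:push, event})

  @impl true
  def init(:ok), do: {:producer, {:queue.new(), 0}}

  @impl true
  def handle_cast({:push, event}, {queue, pending_demand}) do
    dispatch({:queue.in(event, queue), pending_demand})
  end

  @impl true
  def handle_demand(demand, {queue, pending_demand}) do
    dispatch({queue, pending_demand + demand})
  end

  # Emit as many queued events as there is outstanding demand for.
  defp dispatch({queue, demand}) do
    {events, queue, demand} = take(queue, demand, [])
    {:noreply, events, {queue, demand}}
  end

  defp take(queue, 0, acc), do: {Enum.reverse(acc), queue, 0}

  defp take(queue, demand, acc) do
    case :queue.out(queue) do
      {{:value, event}, rest} -> take(rest, demand - 1, [event | acc])
      {:empty, queue} -> {Enum.reverse(acc), queue, demand}
    end
  end
end
```

Consumers subscribe with `max_demand`/`min_demand` to cap how much they pull at once, which is exactly the “up to a point” back-pressure the post describes.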


In some circles an Enterprise Service Bus (ESB) is itself considered an anti-pattern.
Why you don’t need an Enterprise Service Bus (ESB)


I don’t think you can compare Kafka to an ESB. An ESB has complex logic inside itself, whereas Kafka is just a simple, append-only transaction log. The business logic lives outside Kafka, in the clients/producers of events.

Kafka is mostly used for stream processing / big-data architectures.


The way we use Kafka is as an immutable log of facts. Things happen, those things generate facts, and the facts are added to the log. Our usage treats Kafka much more like a database than anything else. Consumers can read from it in a demand-driven fashion. None of these ideas are novel to our company; lots of people use Kafka for this. But this architecture does let us create decoupled applications using facts from different parts of our company.
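The core of that model — append-only facts with monotonic offsets, consumers tracking their own read position — is small enough to sketch in plain Elixir. This toy (names `FactLog`, `append/1`, `read/2` are mine, and it keeps everything in memory, ignoring durability, partitions, and replication) just shows the offset semantics:

```elixir
# Sketch of the "immutable log of facts" model: appends get monotonically
# increasing offsets; consumers poll from whatever offset they last saw.
# Toy only — in-memory, single node, no durability.
defmodule FactLog do
  use Agent

  def start_link(_opts \\ []) do
    # State: {next_offset, %{offset => fact}}
    Agent.start_link(fn -> {0, %{}} end, name: __MODULE__)
  end

  @doc "Append a fact; returns the offset it was written at."
  def append(fact) do
    Agent.get_and_update(__MODULE__, fn {next, facts} ->
      {next, {next + 1, Map.put(facts, next, fact)}}
    end)
  end

  @doc "Read up to `max` facts starting at `offset` — a demand-driven poll."
  def read(offset, max \\ 100) do
    Agent.get(__MODULE__, fn {next, facts} ->
      upper = min(offset + max, next) - 1
      for o <- offset..upper//1, do: {o, Map.fetch!(facts, o)}
    end)
  end
end
```

Because consumers own their offsets, two independent applications can read the same facts at different paces without coordinating — which is the decoupling the post is describing.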

The benefit of having a solution in Elixir, for me personally, would be the operational overhead (I know exactly enough about tuning JVMs and operating Kafka and ZooKeeper to get myself into trouble) and ease of adoption. Trying to rival Kafka’s scalability and robustness is going to be hard to do and, frankly, most people are probably better off just using Kafka.

Raft is already a distributed, consistent log, so it’s a good choice as a primitive for a higher-level log abstraction. There’s a blog series that explains more about it. If people are interested in working on something like that, I’d be happy to chat with them.

We’re using brod and are really happy with it.


About operations, I agree: you need a team, or some external provider if you don’t want to host it yourself :slight_smile:
I think somebody else also felt the operational pain, and there is some attempt to implement it in Go.

Can we beat Kafka if we build it in Elixir

I think you can’t, but there is the possibility of implementing a compatible version in Elixir, Go, or Rust that would be simpler to maintain.

For example, there is the Cassandra database on the JVM, and there is a compatible database written in C++.

Absolutely this. Better client libs would bring far more value to the Elixir ecosystem than building a Kafka clone in Elixir. This can be said for a good number of things.

Although I think the OP was really asking if a Kafka clone built in Elixir would be “better” than the current Kafka written in Java/Scala. Would it have better: speed, resource utilization, maintainability, support a wider range of clients, etc?

I think this boils down to an arms race between the JVM and BEAM. Assuming you’re still using 1st class client libs to access both implementations, what hypothetical differences would a JVM vs. BEAM Kafka have?


Yes… but the use cases are different.

If we have an option of Elixir “clone” of Kafka that does not require ZK and integrates well with gen_stage it can be a killer app for people building data pipelines.


Any Kafka clone would still be an external process accessed via some client. That client ought to be able to integrate well with gen_stage. A better Kafka native client could integrate well with gen_stage as well. Kafka today supports Kafka streams for data pipelines. But this is just client magic on top of their existing product. A better Elixir client could expose a similar interface that leverages gen_stage.
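To make the “client exposes a gen_stage interface” idea concrete, here’s the rough shape such a client could take: a GenStage producer whose `handle_demand` translates downstream demand into broker fetches. The `fetch/2` function below is a stub standing in for a real client call (brod, for instance, has fetch APIs, but this sketch does not use its real signatures); only the demand-driven plumbing is the point:

```elixir
# Sketch: how a Kafka client could surface a partition as a GenStage producer.
# Assumes the gen_stage package; KafkaSource and fetch/2 are illustrative —
# fetch/2 fabricates records instead of calling a real broker.
Mix.install([{:gen_stage, "~> 1.2"}])

defmodule KafkaSource do
  use GenStage

  def start_link(opts \\ []), do: GenStage.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: {:producer, _offset = 0}

  @impl true
  def handle_demand(demand, offset) when demand > 0 do
    # A real client would issue a broker fetch from `offset` here.
    events = fetch(offset, demand)
    {:noreply, events, offset + length(events)}
  end

  # Stub: pretend the partition holds records "record-0", "record-1", ...
  defp fetch(offset, max) do
    for o <- offset..(offset + max - 1), do: {o, "record-#{o}"}
  end
end
```

Downstream, any GenStage consumer (or a `Flow` pipeline) subscribes to it like any other producer, which is the “client magic on top of the existing product” being described.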

Not having a ZK requirement might help operations run and maintain the new clone, though. That could be a selling point. I like the memory characteristics of the BEAM over the JVM. Being able to take data off a NIC and write it directly to a file without an intermediate memory copy would also benefit a BEAM implementation. On the other hand, BEAM file IO is quite slow (at least for reads), so that’s a downside. BEAM IO is supposed to get some major love in OTP 21; I’m looking forward to seeing what they do there.


Hmm there is 0 reason it has to be like that :slight_smile:

It wouldn’t be clone if it wasn’t like that. :slight_smile:
