Can an actor framework for Python be implemented using Elixir and erlport?

I have this weird idea: I want to use Elixir as a messaging service between Python functions across a network.

Luckily, I found this post, which explains how to use erlport to call Python functions from Elixir.

One use case for this sort of thing is a worker-scheduler type system. But more generally it seems like a cool way to implement an actor system for Python: just use an actor language.

My question to you guys is: what’s wrong with this idea? Is it doable? Is it doable but difficult?

Keep in mind I’ve never really used Erlang or Elixir, but I love the ideas behind these languages. So what might I be missing?

Just so we’re on the same page, let me give you a concrete example of how a framework like this could be used:

Say you have a network with 5 computers: 4 powerful machines we’ll call the ‘cluster’ and 1 laptop we’ll call the ‘scheduler.’

You could start up a GenServer (or something; I barely know what that is) so that workers on the cluster machines wait to pull tasks from some Elixir process on the scheduler. The idea is that the users (who only know Python, because let’s say they are data scientists) would send a list of Python functions to the scheduler using erlport, then the workers would pull those functions and execute each Python call via erlport until all the tasks were depleted.

Of course, this is not the only use case, but it’s a simple, functional one.
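To make the pattern concrete (setting Elixir and erlport aside for a moment), here is roughly the pull model I mean, sketched with nothing but Python’s standard library; all the names here are made up for illustration:

```python
import queue
import threading

def worker(tasks: queue.Queue, results: list, lock: threading.Lock):
    """Pull callables off the shared queue and run them until it is drained."""
    while True:
        try:
            fn, args = tasks.get_nowait()
        except queue.Empty:
            return  # no tasks left, this worker is done
        out = fn(*args)
        with lock:
            results.append(out)

# The "scheduler" enqueues plain Python functions...
tasks = queue.Queue()
for i in range(10):
    tasks.put((pow, (i, 2)))

# ...and the "cluster" is simulated by a few worker threads pulling from it.
results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # the squares of 0..9
```

In the real thing, the queue would live in an Elixir process on the scheduler and the workers would sit on the cluster machines, calling back into Python through erlport.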

What do you think?

Your example sounds a lot like what Celery already does well in Python land. It uses RabbitMQ by default to deal with the task distribution.


Usually, if you can get away with just Python, without Erlang/Elixir, that would be the recommended way, as it keeps your stack smaller and easier to deal with.

With that said, I believe your idea is doable. But that isn’t really saying much: Elixir/Erlang are quite powerful, and most things are doable, so that is not the question. The question is: would it be efficient and easy enough for your use case?

And here I believe that, with the right architecture, it would be. But you would have to pay a heavy price in complexity.

For starters, you are talking about clusters of machines, i.e. Nodes. Erlang and Elixir support inter-Node communication out of the box, but don’t be fooled: that is a really hard thing to get right. Maintaining coherent state and knowing what to do when a split brain happens are just two of the challenges you will likely have to deal with, and they are not exactly a walk in the park.

Then there are other questions, like: what happens if the master Node (the one that holds all the tasks) dies? From which Node will workers pull tasks then? (Election algorithm incoming!)

Overall, if you want several machines to communicate, you’d have to deal with a lot of distributed systems challenges.

To this effect, you could go down the road of implementing your own solutions (Raft comes to mind as a possible consensus algorithm), and you can investigate how the folks at Phoenix managed to solve similar issues.

By all means, if you can get away with doing things on a single Node, go ahead; libraries like Flow already exist so you can get the most out of any one machine.

If not, then you will have another level of challenges to fight, and I at least won’t be able to suggest much more.

Now, if you already have solutions for these complex issues (or simply decide you don’t want to solve them for now), then Elixir really has nice support for distributed computing with its primitives, and there is a lot of information out there to help you get started! Here are some examples and talks!

The important thing is to not be afraid. Hope it helps!


Thanks for the overview and the references to other resources; it has already helped me better understand what would be required!

Your example sounds a lot like what Celery already does well in Python land. It uses RabbitMQ by default to deal with the task distribution.

Indeed, and there are other solutions too, like Dask, which is fully contained in Python. Elixir just seems better suited to handling distributed systems than Python, which is where the impetus for this idea came from.

I think the fundamental question is: what real world problem is being solved by your idea?

Python evolved to solve certain problems in a certain manner in a given environment.

The BEAM, with its process-based nature, approaches real-world problems in a very different way, a way which is fundamentally at odds with how libraries implemented in Python (and the lower-level C code beneath them) operate.

Another Python integration technology which hasn’t been mentioned yet: Pyrlang


Hi there, to me this sounds like a pub/sub problem that could be solved using a Python lib built on top of pika (the Python RabbitMQ client lib) to model your domain, with the rest being messaging and queues handled by RabbitMQ. Your consumer services could then also be written in Python.

I wouldn’t go down the distributed-nodes-in-Elixir path; it seems overly complex, unless you’re doing it for learning purposes.
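To show the shape of that pub/sub setup, here is a toy sketch using only the standard library instead of pika; in the real thing the in-process queues would be RabbitMQ queues bound to an exchange, and the names here are purely illustrative:

```python
from collections import defaultdict
import queue

class Broker:
    """Toy in-process stand-in for a RabbitMQ exchange: one queue per subscriber."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str) -> queue.Queue:
        """Register a new consumer on a topic and return its private inbox."""
        q = queue.Queue()
        self._subscribers[topic].append(q)
        return q

    def publish(self, topic: str, message):
        """Fan the message out to every subscriber of the topic."""
        for q in self._subscribers[topic]:
            q.put(message)

broker = Broker()
inbox_a = broker.subscribe("tasks")
inbox_b = broker.subscribe("tasks")

broker.publish("tasks", {"fn": "train_model", "args": [42]})

# Both consumers see the message, each on its own queue.
msg_a = inbox_a.get_nowait()
msg_b = inbox_b.get_nowait()
print(msg_a, msg_b)
```

With pika you would model the same thing with an exchange, per-consumer queues, and bindings, and your Python consumers would just block on their queue.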

I actually made an actor framework for Stackless Python over 10 years ago. It worked very well, and it even had features like being able to serialize an entire actor and move it around, store it, reload it, etc… ^.^

Normal Python is missing some features that would make this easy, though.
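The core of such a framework is actually quite small in ordinary CPython; here is a minimal sketch (nothing Stackless-specific, and without the serialization/migration features), with all class names made up:

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox plus a thread that handles one message at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous, non-blocking message send."""
        self._mailbox.put(message)

    def stop(self):
        """Put a sentinel in the mailbox and wait for the actor to drain it."""
        self._mailbox.put(None)
        self._thread.join()

    def _run(self):
        # Messages are processed strictly one at a time, so `receive`
        # never needs locks around the actor's own state.
        while (msg := self._mailbox.get()) is not None:
            self.receive(msg)

    def receive(self, message):
        raise NotImplementedError

class Accumulator(Actor):
    """Example actor that sums the numbers it is sent."""
    def __init__(self):
        self.total = 0
        super().__init__()

    def receive(self, message):
        self.total += message

acc = Accumulator()
for n in (1, 2, 3):
    acc.send(n)
acc.stop()
print(acc.total)  # 6
```

A thread per actor obviously doesn’t scale the way BEAM processes or Stackless tasklets do; that’s exactly one of the missing features I mean.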

Not sure what you want to do, but this powers Celery; you might want to have a look into it to build your own “custom” workers.
Since you are curious about Erlang/Elixir, you might also want to try this one.

NOTE that Ask Solem, the author of Celery, drew inspiration from Erlang to create Celery/Kombu.
That’s one of the reasons that pushed me to try and learn Erlang/Elixir.