How to communicate between a Phoenix/Elixir application and Node.js?

I am in love with Elixir/Phoenix (channels, easy scalability, functional, etc.). But I have some JavaScript code which needs to run, and it implements the realtime part of my editor. So there is no escaping Node.js.

What I am doing right now is sending POST HTTP requests from the Phoenix backend to a multi-node Node.js backend for every realtime editor request coming in. I need to check that the HTTP request has the right permissions before sending it to Node.js, and then reply back to the frontend with the result obtained. From the frontend, requests come in through Phoenix channels.

I think this is a very crude way of doing it.

Some of my questions are:

  1. What I need is a way to communicate with Node.js from Elixir/Phoenix without losing scalability on the Node.js side. Elixir handles horizontal scalability easily. How can I scale the Node.js side horizontally as well?
  2. I will also need sticky connections between Elixir and Node.js. Because it's a realtime editor, I keep editor data in a cache on the Node.js side, so I can reply back immediately without wasting time looking things up in the database. I have implemented this in a very naive way as well. What would be a better way?

Would something like node_erlastic or porcelain work?

BTW, I am using Kubernetes and Google Cloud for deployment. Feel free to suggest any improvements you see fit.

Thanks!


I will also need sticky connections between Elixir and Node.js. Because it's a realtime editor, I keep editor data in a cache on the Node.js side, so I can reply back immediately without wasting time looking things up in the database.

Since you mentioned “sticky connections”, I’m guessing the Node.js cache is an in-memory cache? It might be better to centralise that cache (e.g. in Redis), so you can add/remove Node.js nodes on demand (i.e. autoscaling) to meet surges in traffic.
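To illustrate the idea, here is a minimal sketch of a centralised document cache that any Node.js instance can consult, removing the need for sticky routing. The `store` below is an in-memory `Map` standing in for Redis (in a real deployment you would swap in a Redis client's GET/SET); all names here are illustrative, not from the original post.

```javascript
// Sketch: a shared doc cache so any Node.js node can serve any request.
class DocCache {
  constructor(store = new Map()) {
    this.store = store; // shared store: Redis in production, Map here
  }

  // Return the cached doc, or load it from the database once and cache it.
  async getDoc(docId, loadFromDb) {
    let doc = this.store.get(docId);
    if (doc === undefined) {
      doc = await loadFromDb(docId); // cache miss: one database hit
      this.store.set(docId, doc);
    }
    return doc;
  }

  async putDoc(docId, doc) {
    this.store.set(docId, doc);
  }
}

// Usage: two "server nodes" sharing one store behave identically,
// so it no longer matters which node a request lands on.
const shared = new Map();
const nodeA = new DocCache(shared);
const nodeB = new DocCache(shared);

const loadFromDb = async (id) => `contents of ${id}`; // fake DB loader

(async () => {
  await nodeA.getDoc("doc-1", loadFromDb);             // nodeA warms the cache
  const doc = await nodeB.getDoc("doc-1", loadFromDb); // nodeB reads it back
  console.log(doc); // "contents of doc-1"
})();
```

With the cache centralised, autoscaling becomes a matter of adding or removing stateless Node.js pods behind the load balancer.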

I am not that experienced in scalable production apps.

Every letter typed in the frontend editor makes a new request. I understand that using Redis will scale better, but wouldn't always looking in the central cache for every letter typed slow things down?

Could you maybe clarify what you are caching and how that cache is used?

I am trying to build Google Docs-style realtime functionality in an editor. So basically, the doc is cached, and in some cases it can be big.
It is used every time someone edits the doc. The doc is retrieved from the database and kept in the cache.

:wave:

Do you use operational transformations, some kind of CRDT, or something else entirely? How do you handle conflicts? Can you afford to send not each typed letter but rather a collection of logical changes?

Might be an excuse to look into ØMQ. You should be able to implement something between the Load Balancing Message Broker (Node.js) and the Asynchronous Majordomo Pattern to arrive at something you need. While exzmq is marked “not ready for production use yet”, there are Erlang bindings.


Hi!

It is a kind of operational transformation. So you are saying that I should pass a collection of changes every 2 or 3 seconds. Is this the usual way realtime apps are made? I just could not picture (in my head) that, each time, the doc is retrieved from Redis, a few characters are appended to it by one of the server nodes, and it is sent back again. Then a few milliseconds later another request comes in.

And what if multiple collections come in from different users (in the case of collaborative editing) at the same time and they all go to different server nodes?

You can also look at how Apache Wave does such things.


Thanks! Looking into it.

So you are saying that I should pass a collection of changes every 2 or 3 seconds. Is this the usual way realtime apps are made?

Since some of these algorithms allow editing documents offline and then auto-resolve all conflicts, I would think it's safe to say that it's possible to send changes in batches.
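A minimal client-side batching sketch, assuming the frontend talks to Phoenix over a channel: instead of one request per keystroke, accumulate logical operations and flush them as one batch every few seconds or when the buffer grows. `EditBatcher` and the op shapes are made up for illustration; in practice `send` would wrap something like the Phoenix JS client's `channel.push`.

```javascript
// Sketch: accumulate edits and send them in batches instead of per-keystroke.
class EditBatcher {
  constructor(send, { intervalMs = 2000, maxOps = 50 } = {}) {
    this.send = send;           // e.g. (batch) => channel.push("edits", { ops: batch })
    this.intervalMs = intervalMs;
    this.maxOps = maxOps;
    this.buffer = [];
    this.timer = null;
  }

  push(op) {
    this.buffer.push(op);
    if (this.buffer.length >= this.maxOps) {
      this.flush();             // flush early on a burst of typing
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.intervalMs);
    }
  }

  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch);           // one request carrying many keystrokes
  }
}

// Usage: three keystrokes become a single batched send.
const sent = [];
const batcher = new EditBatcher((batch) => sent.push(batch), { intervalMs: 2000 });
batcher.push({ insert: "h", at: 0 });
batcher.push({ insert: "i", at: 1 });
batcher.push({ insert: "!", at: 2 });
batcher.flush();
console.log(sent.length); // 1 -- a single request with three ops
```

The interval is a latency/throughput knob: a shorter interval feels more "live" to collaborators, a longer one cuts request volume on the Node.js side.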

I remember watching a talk by Martin Kleppmann where he showed a rather simple (but a bit naive) implementation of such an algorithm, where each user was assigned a sortable ID, and all their edits (like insert " world" after index 5) were placed in accordance with the (id, index) order:

  • (1, 5) < (2, 5), so user 2's edits would be to the left of user 1's

That allowed edits to be placed deterministically even when the indices conflicted.
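A toy sketch of that tiebreak, following the example above where the higher user ID lands to the left: every node sorts concurrent inserts at the same index by ascending user ID and applies them in that order, so all nodes converge on the same document. This is just an illustration of the ordering idea, not a full OT/CRDT implementation, and the names are made up.

```javascript
// Sketch: deterministic ordering for concurrent inserts at the same index.
function applyConcurrentInserts(doc, index, ops) {
  // Sort by user ID so every server node applies the ops in the same order;
  // the op applied later ends up further to the left at the shared index.
  const ordered = [...ops].sort((a, b) => a.userId - b.userId);
  for (const op of ordered) {
    doc = doc.slice(0, index) + op.text + doc.slice(index);
  }
  return doc;
}

// Two users insert at index 5 of "Hello" at the same time. Whichever
// order a node receives the ops in, it converges to the same result.
const merged = applyConcurrentInserts("Hello", 5, [
  { userId: 2, text: " there" },
  { userId: 1, text: " world" },
]);
console.log(merged); // "Hello there world" -- user 2's edit to the left
```

Because the order depends only on (id, index) and not on arrival order, this also answers the multi-node question: different server nodes receiving the same ops in different orders still produce identical documents.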

Or were you asking about something else?


@idi527 is right, it should work. I am also looking into ZeroMQ. Interesting stuff. Thanks!

The CRDT stuff from Phoenix Channels/PubSub would be really nice here.

I would look into a server-to-server WebSocket connection.