Fast Ruby/Rails interop?

The answer is probably no but I thought I’d try:

Is there a way to write Elixir functions that can be invoked by a Rails app as efficiently as a Ruby library? Most of my new Rails development is in the form of Ruby libs for parsing and string manipulation, used during a web app's request cycle. In my app, I keep requests under 100 ms.

I’d much rather write all new code in Elixir, but the only way I know to call it from Rails is as a web microservice.

(As I write this, I’m thinking about how ridiculously fast a stateless, logic-only Elixir web microservice can be: sub-millisecond. So I ought to have time to make a call to one during a Rails server’s response. And this might be my answer.)


No. Elixir compiles to BEAM byte code, and that byte code can only be run on the BEAM.

Right, your Ruby service could just make an HTTP call to the Elixir service. Ideally in time your Elixir service could handle the root request directly.

Can you elaborate a bit more on the use case?

Just a crazy thought. What if you ran the Elixir application on the same machine as the Rails app? Then you could make keep-alive requests from Ruby to Elixir. It’s not going to be as fast as native Ruby, but it would be fine in a lot of cases.
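A minimal sketch of that keep-alive idea from the Ruby side, using stdlib `Net::HTTP`. The host, port, and `/resolve` endpoint are assumptions about a hypothetical local Elixir service:

```ruby
require "net/http"
require "json"

# Sketch of a persistent HTTP client for a hypothetical local Elixir
# service. The host, port, and /resolve path are illustrative assumptions.
class ElixirClient
  def initialize(host = "127.0.0.1", port = 4001)
    @http = Net::HTTP.new(host, port)
    @http.keep_alive_timeout = 30 # reuse one TCP connection across requests
  end

  def resolve(text)
    @http.start unless @http.started? # open (or reopen) the connection lazily
    res = @http.post("/resolve", JSON.generate(text: text),
                     "Content-Type" => "application/json")
    JSON.parse(res.body)
  end
end
```

Because the connection stays open between requests, each call avoids TCP (and TLS, if any) setup, which is what keeps the per-call overhead small on localhost.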

That’s the dream.

Sure — here’s a page from my Rails app:

I’m working on hyperlinking textual references like Chapter 63 and Section 13.142. And I’m doing this with many bodies of law.

The key logic is:

  1. Scan a law’s body for references like these.
  2. Convert each “relative reference” to a full citation (“absolute reference”).

The web app, then, can use the full citations to generate HTML links.
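The two steps above can be sketched in plain Ruby; the reference formats and the `law:` context are illustrative assumptions, not the author's actual implementation:

```ruby
# Matches references like "Chapter 63" or "Section 13.142".
REFERENCE = /\b(Chapter|Section)\s+(\d+(?:\.\d+)*)\b/

# Step 1: scan a law's body for relative references.
def scan_references(text)
  text.scan(REFERENCE).map { |kind, number| { kind: kind, number: number } }
end

# Step 2: convert a relative reference to a full ("absolute") citation,
# given the body of law it appears in.
def absolutize(ref, law:)
  "#{law} #{ref[:kind]} #{ref[:number]}"
end

refs = scan_references("See Chapter 63 and Section 13.142 for details.")
# refs now holds both references, each with its kind and number
```

This is exactly the kind of stateless parse-and-transform logic that is cheap to ship across a process boundary: text in, citations out.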

That’s interesting, essentially a long-lasting client/server session.

Is there any way to expose BEAM processes as dynamically linked native libraries? If so, then Ruby FFI could work.

In theory yes; in practice it would be easier to do it the other way around - implement the Ruby VM as a NIF in the BEAM. However, if such cooperation is needed, I would look at ports or C-Nodes. The latter, when run on the same machine, should give pretty good performance and a nice API.

You could certainly skip HTTP, have Elixir just reading from Unix domain socket(s), commands in whatever stripped-down format you want to come up with, and the same for writing results back. (Communication over domain sockets is ridiculously fast.)
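A sketch of the Ruby side of that domain-socket approach, using stdlib `UNIXSocket`. The socket path and the newline-delimited "protocol" are assumptions; a real setup would pick whatever stripped-down framing it wants:

```ruby
require "socket"

# Hypothetical path where the Elixir side would be listening.
SOCKET_PATH = "/tmp/elixir_refs.sock"

# Send one line of text over a Unix domain socket and read one line back.
# The one-line-in, one-line-out framing is an illustrative assumption.
def resolve_over_socket(text, path: SOCKET_PATH)
  UNIXSocket.open(path) do |sock|
    sock.puts(text)  # send the command
    sock.gets&.chomp # read the result line (nil if the peer closed early)
  end
end
```

A persistent connection (opened once, reused per request) would shave off even the cheap per-call connect; the block form above trades that for simplicity.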


Could always build a BEAM node in Ruby too; then they’d communicate by normal message passing.

I don’t get the use case, but something “doable”, if Elixir/Erlang are available on the same machine, could be to make system calls (e.g. with the backticks syntax) to Elixir… But this is very much tinkering…
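For completeness, a sketch of that shell-out pattern. `refs.exs` is a hypothetical Elixir script; note that each invocation boots a fresh BEAM, which by itself typically costs hundreds of milliseconds, so this cannot fit a sub-100 ms request budget:

```ruby
require "shellwords"

# Build a safely escaped shell command that hands `text` to a
# hypothetical Elixir script as its argument.
def elixir_command(text)
  ["elixir", "refs.exs", text].shelljoin
end

# Running it would look like:
#   result = `#{elixir_command("See Chapter 63.")}`  # captures stdout
```

`Shellwords#shelljoin` escapes each argument, so arbitrary law text can't break out of the command line.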

Anyway good luck and interested to read more about your journey…

Known as C-Nodes (despite the name, these do not need to be written in C).


Just a reminder, maybe unneeded, but… If the Ruby side is doing anything significant, the communications overhead will be trivial compared to time spent in the Ruby interpreter. Personally, I’d go for the simplest thing, whatever that is for your case.

In this context it’s also good to be aware of something like Terraform. The idea would be to put your Elixir app before your Rails app and decide per route whether to handle it with Elixir or with Ruby. So your Elixir app acts as a reverse proxy in front of your Rails app. This way you can gradually migrate to Elixir.

Unsure if there is any authentication or other context you need to handle for the incoming requests, but I’ve done stuff like this by temporarily connecting to the same database as Rails (read-only), or by adding internal API routes so Elixir can fetch context from Rails.

Another option is a persistent Phoenix channel connection from Ruby to the Elixir app, but this would introduce way more complexity (and needs clear interfaces on all sides). So I would prefer the simpler Terraform approach: handle the route in Elixir first (via plain HTTP), or else fall back to Rails (via plain HTTP).
