Would it be possible to embed the BEAM in a C/C++ app?

Hey all! I'm new to the BEAM and BEAM-related languages, but overall it seems really cool. I'm still working my way through the tutorials, but a thought popped into my head that I can't seem to shake off: would it be possible to embed the BEAM/OTP into a C/C++ program?

I'm really into game development, so it would be cool to have a thin C++ layer covering graphics and the like, but have most of the logic done in Elixir itself. I know that a bunch of games already use Elixir on the backend, but it would be nice to only have to use one language to write everything, and not have a separate language for the client.

I did some research into this topic and found out about Elixir Desktop and LiveView Native, which managed to get the BEAM running alongside Kotlin for Android and Swift for iOS, so the idea does seem possible; I just have no idea how it would work.

Would it be possible, for example, to compile the BEAM as a library that a C program using SDL2 could link against, or is there a better/easier way to do this?

1 Like

I think you've got it backwards: the way to do this is the other way around, where you run C/C++ code from the Erlang VM.

This kind of integration is not much different from how any other language integrates with C. In Erlang you have NIFs and Ports as the main tools for interacting with other programming languages.
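To give a rough idea of the NIF route, the Elixir side is just a module whose function stubs get replaced when the compiled C library is loaded. A minimal, untested sketch (the :my_game app name, the sdl_nif library, and the function names are made up for illustration):

```elixir
defmodule MyGame.Native do
  # Ask the VM to load the compiled NIF library when this module is loaded.
  @on_load :load_nif

  def load_nif do
    # Path to the compiled shared library in priv/, given without the
    # .so/.dll extension. Both names here are placeholders.
    path = :filename.join(:code.priv_dir(:my_game), ~c"sdl_nif")
    :erlang.load_nif(path, 0)
  end

  # These stubs only run if the NIF failed to load; the C implementations
  # replace them at load time.
  def create_window(_title, _width, _height), do: :erlang.nif_error(:nif_not_loaded)
  def poll_events(), do: :erlang.nif_error(:nif_not_loaded)
end
```

Keep in mind that a NIF call runs on a scheduler thread, so long-running native work (like a blocking render loop) needs dirty schedulers or its own thread, otherwise it can stall the VM.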

I don't know how the iOS integration works, but on Android you don't embed Kotlin into the Erlang VM; you actually have the Android runtime and the Erlang VM running at the same time, using a channel to communicate with each other.

You can't: the runtime was never meant to be run as a library from other application code, and it lacks a lot of the prerequisites to do so. The way to do this is to use the Erlang VM as your runtime and run C/C++ code from there with the tools mentioned above, or to separate them entirely and set up a communication channel between them.
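As a rough, untested sketch of what the Elixir side of that can look like: spawn the native renderer as an external program through a Port and stream framed messages to it. The priv/renderer executable and the :my_game app are placeholders, and the C side would need to read 4-byte-length-prefixed packets from stdin:

```elixir
defmodule MyGame.Renderer do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Send one frame's worth of draw commands to the native renderer.
  def draw(commands), do: GenServer.cast(__MODULE__, {:draw, commands})

  @impl true
  def init(_opts) do
    # "renderer" is a hypothetical compiled C/SDL2 program shipped in priv/.
    exe = Path.join(:code.priv_dir(:my_game), "renderer")

    # {:packet, 4} prefixes every message with a 4-byte length header,
    # so the C side can read whole messages off stdin.
    port = Port.open({:spawn_executable, exe}, [:binary, {:packet, 4}, :exit_status])
    {:ok, port}
  end

  @impl true
  def handle_cast({:draw, commands}, port) do
    Port.command(port, :erlang.term_to_binary(commands))
    {:noreply, port}
  end

  @impl true
  def handle_info({port, {:exit_status, status}}, port) do
    {:stop, {:renderer_exited, status}, port}
  end
end
```

The same structure works if you swap the Port for a socket and run the native program completely separately.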

1 Like

Welcome! Your question made me think of another thread that had some activity lately: OpenGL-rendered Breakout clone in Elixir

These people tried to approach game development on the BEAM from another angle.

1 Like

There's also Rayex, which allows you to use Raylib in Elixir. It isn't complete, but it's a good example of how this kind of binding would be done.

I still like my approach, but I think SDL bindings would be nice to have.

That's exactly what ElixirKit does (at the moment it's part of the Livebook repo: livebook/elixirkit at v0.14.0 · livebook-dev/livebook · GitHub). It embeds your Elixir app inside your macOS or Windows app using Swift and C# APIs, respectively.

Edit: oops, to be specific, at the moment it starts the VM in a separate OS process and communicates with it over TCP. See Elixir Desktop for an example where the GUI and the VM run in the same OS process, which is a requirement on iOS, for example.
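For a feel of what the BEAM side of that TCP channel can look like, here is a minimal, untested sketch; the port number and the 4-byte framing are arbitrary choices for illustration, not ElixirKit's actual protocol:

```elixir
defmodule MyApp.HostLink do
  # Listen on localhost and wait for the native host process to connect.
  def start(port_number \\ 4040) do
    {:ok, listen} =
      :gen_tcp.listen(port_number, [:binary, packet: 4, active: false, ip: {127, 0, 0, 1}])

    {:ok, socket} = :gen_tcp.accept(listen)
    loop(socket)
  end

  defp loop(socket) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, message} ->
        # Handle a message from the native side, then reply.
        :ok = :gen_tcp.send(socket, "ack: " <> message)
        loop(socket)

      {:error, :closed} ->
        :ok
    end
  end
end
```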

4 Likes

That's cool! I've been meaning to look more into how Livebook works to see if there are any ideas I can borrow.

However, I'm not sure how that relates to SDL; I didn't see any mention of SDL in the Livebook project. I did, however, just remember that Membrane provides an SDL plugin that could maybe be a good starting point for using SDL in a game-development context. Probably not as it stands now, but it may be useful.

As a side note, one of the reasons that writing an SDL NIF wasn't the way I wanted to go is that we already get wxWidgets bindings for free (the :wx application that ships with Erlang/OTP), so all the windowing, input handling, OpenGL context setup, etc. is already done for us. Further, things like logging, timers, and bit manipulation are handled by existing Elixir/Erlang facilities. That doesn't leave much that we don't already have, so throwing out all of the existing options just to avoid recreating a few pieces didn't seem ideal to me.
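To give a sense of scale, opening a window with those bundled :wx bindings from Elixir is only a few calls. A bare-bones, untested sketch (no GL canvas or event handling; the module name, title, and size are arbitrary):

```elixir
defmodule MyGame.Window do
  # Open an empty wxWidgets frame using the :wx bindings bundled with OTP.
  def open do
    wx = :wx.new()
    frame = :wxFrame.new(wx, -1, ~c"My Game", size: {800, 600})
    :wxFrame.show(frame)
    frame
  end
end
```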

The other big thing SDL gives you is different rendering pipelines, which is definitely a big plus.

Wouldn't using TCP be really, really slow, especially if you're using the C/C++ side for rendering? Is there a better way to "send data" to the C backend so it can render things?

I wonder if you could use wxWidgets for the rendering context and then use something like wgpu for the graphics API. Still, that probably wouldn't be portable to something like the Nintendo Switch (hypothetically, anyway).

wgpu would handle all of the windowing and rendering, as far as I understand, so you wouldn't need wxWidgets. It should be possible, but I haven't bothered messing with it. It's probably worth it to get easy access to different rendering pipelines, but I'm fine with OpenGL for now.

For what it's worth, I think the Switch has some support for OpenGL, maybe? I think it also has its own API, which isn't public. That's not super important, though; I doubt it's wildly different from any other option. They all just tell the GPU to render triangles :slight_smile:

There is a CAD system written in Erlang that may be a useful reference: https://www.wings3d.com, source code at GitHub - dgud/wings: Wings3D is an advanced sub-division 3D modeller. I haven't looked at how it's architected.