Running user-entered code in a sandbox?

For my planned project I’m going to want to run user-entered code in a sandbox.

How safe are dune and/or sand?

Or do I really need to do the Lua/luerl thing?


Dune’s default settings are safe, and Lua is safe too. However, neither Dune nor Lua is capable of providing real security sandboxing; they are merely tools for controlling execution boundaries and the execution process.

If you want production-grade secure isolation, I’d suggest digging into virtual machines and lightweight hypervisors such as Xen or KVM.

Lua can provide this as long as you don’t use any I/O primitives. For example, luerl implements the state machine Lua uses for its runtime in Erlang, so there is no real reason it isn’t safe; you could even limit the number of reductions it performs.
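Limiting reductions from the outside can also be done with plain BEAM primitives. This is a hypothetical sketch (the `ReductionLimiter` module and `run_limited/2` are my own invention, not part of luerl or Dune): run the untrusted work in its own process, poll its reduction count, and kill it past a budget.

```elixir
defmodule ReductionLimiter do
  @doc """
  Runs `fun` in a throwaway process and kills it once it has
  used more than `max_reductions` reductions.
  """
  def run_limited(fun, max_reductions) do
    parent = self()
    pid = spawn(fn -> send(parent, {:done, self(), fun.()}) end)
    watch(pid, max_reductions)
  end

  defp watch(pid, max) do
    receive do
      {:done, ^pid, result} -> {:ok, result}
    after
      10 ->
        # Poll the process's reduction count every 10 ms.
        case Process.info(pid, :reductions) do
          {:reductions, n} when n > max ->
            Process.exit(pid, :kill)
            {:error, :reduction_limit}

          _ ->
            watch(pid, max)
        end
    end
  end
end
```

Polling is coarse compared to luerl’s built-in counting (the process can overshoot the budget by up to one poll interval), but it works for any code, not just a Lua chunk.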

I just want to sandbox functions of the form “here is the state of this activity and the new input; return the new state and some output”. They will need tools for functional manipulation of complex structures, such as lenses.
An interesting thing I found was wasm for beam. So one could let people write in any wasm language :-). But it would be nice to avoid moving complex data between different languages.
Dune is obviously intended to be safe for this sort of thing, so I’m not sure what its disclaimer means, beyond “use at your own risk”.
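The “state in, new state and output out” contract described above can be sketched as an Elixir behaviour (the `Activity` and `Counter` names are illustrative, not from any library):

```elixir
defmodule Activity do
  @type state :: map()
  @type input :: term()
  @type output :: term()

  # Every sandboxed activity is a pure step function over its own state.
  @callback step(state, input) :: {state, output}
end

defmodule Counter do
  @behaviour Activity

  @impl true
  def step(state, :inc), do: {%{state | count: state.count + 1}, :ok}
  def step(state, {:set, n}), do: {%{state | count: n}, :ok}
  def step(state, :get), do: {state, state.count}
end
```

Because the step function is pure, the host decides where the state lives and what inputs are fed in; the user code never touches I/O directly.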

Possibly you didn’t quite understand what I meant. I am saying that if this Lua interpreter ever has a CVE, your whole system gets compromised. So executing arbitrary code with just an interpreter is not enough; if you want real security, you’ll need to isolate a lot more.

And yes, Dune is able to limit reductions too.

There are different kinds of safe: Dune is safe for programs that may be buggy, or programs that need to limit the set of available functions. Dune is not safe for programs that are malicious by intention.

This also applies to Lua

How can an interpreter like this have a CVE if all it does is compute the new state using a state machine?

Just like any other interpreter can have a CVE. This can vary from a simple bug in the interpreter implementation to exploiting the memory model of a parallel system, or the branch-prediction behavior of the CPU.

I was doing some preliminary research on the same topic recently. I really don’t want to use Lua because of the 1-based indexing. In the WASM space there is


Another lightweight option would be to use firecracker, like this example.


Can’t recommend lunatic enough, but it is harder to write Rust compared to Erlang. I wrote a bit about it here: From Erlang to Lunatic, and worked on a decently complex little application. You can definitely get a strong level of sandboxing with it, but the development cost is high. The Erlang/Elixir-style actor model is there, but it is much more cumbersome (probably good in the long run, because you need to think hard about your API instead of tossing tuples around).

One issue with that system is that it takes a long time to spin up processes: on the order of hundreds of milliseconds, if memory serves. I think they’re trying to bring that cost down, but it is still high. It is also young as hell, so expect to rewrite everything to conform to new APIs sooner or later.

But if you’re feeling plucky, I found it really had the performance and safety that I wanted (type safety and memory safety), while still having the actor / message passing / supervisor tree model built right in. I still missed Erlang’s level of productivity though :slight_smile:

Another thing to keep in mind is that you can’t use async Rust at all with Lunatic, or any async library. That kept things lean for me, but could be a deal breaker for some.

There are also some (limited) Lua interpreters in Rust, so maybe they could be tied together to get a distributed, sandboxed Lua executor.


Posting to follow, because this is an interesting topic into which I’ve already poured some time.

To address this in a personal side project, I went down the road of writing a small language without I/O or the ability to call Elixir functions. The small language became the side project and I took it in another direction, but even before pivoting, I still had doubts about the attack surface.

In no particular order, here are the issues I think I would have had to address before using it in a public project:

  • reductions count
  • conservative timeouts
  • memory usage
  • CPU usage

Those items have to do with resource exhaustion/DoS on servers (or surprise bills in the cloud), and could have been mitigated (for my side project) with pricing (pay by usage of X, so users are incentivized to keep their scripts efficient; this could even be gamified in the app) or with input size limits (to encourage chunking work into smaller pieces).

But even then, before making the language quirky and stopping the initial project, I thought about sandbox escaping and did not find a meaningful way to address it. The question isn’t so much “can the user arbitrarily call Elixir code through a flaw in my interpreter”, because the system is designed so they cannot.

The question that was troubling me was “can I be certain of that guarantee?”, and I spent a lot of time thinking about it, without success so far. I guess this is why the Dune project adds a disclaimer: listing what you can do in that kind of system is easy, but listing what you cannot do is harder.

In the end, I was leaning toward the idea of dedicated throwaway execution servers / light VMs, like @thomas.fortes suggested in this thread.


As long as you treat your interpreter state machine as pure data and don’t use dangerous things like :erlang.binary_to_term, I don’t think it should be possible to escape the sandbox, as the interpreter literally cannot do anything other than modify state.
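As a toy illustration of this point, here is an interpreter that is nothing but a pure state-to-state function over plain data (the `TinyVM` module and its instruction set are invented for the example):

```elixir
defmodule TinyVM do
  # The state is a plain map of variables; "instructions" are plain tuples.
  # There is no apply/3, no atom construction, no I/O: the only thing the
  # interpreter can do is build a new map from the old one.
  def run(state, ops), do: Enum.reduce(ops, state, &op/2)

  defp op({:set, var, n}, state), do: Map.put(state, var, n)
  defp op({:add, var, n}, state), do: Map.update!(state, var, &(&1 + n))
  defp op({:copy, from, to}, state), do: Map.put(state, to, Map.fetch!(state, from))
end
```

Since every opcode is a total function from map to map, the attack surface reduces to bugs in these clauses (plus resource exhaustion), rather than any channel back into the host.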


You’re totally right about that; I think it was more a lack of confidence on my side than a technical issue. Executing user input is scary, so I approached it with a lot of doubt and questioning… For a public project I’d probably reach for solutions like Lua.
