How to create a sandbox to run untrusted code/modules?

Unfortunately it is still possible, and quite easy in fact, to load a module into the system. Even if you were to remove the code module, the basic BIFs are still there. You would need very tightly controlled access to stop this. And I still wouldn’t trust it. :grinning:
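To make that concrete (a hedged illustration, not code from any real system): even if a checker rejects source that literally mentions a dangerous call such as `os:cmd/1`, ordinary BIFs let the untrusted code rebuild the same call at run time, so a blacklist never sees it.

```erlang
%% Hypothetical attacker snippet: no literal "os:cmd" appears anywhere,
%% yet the forbidden call is reassembled from plain BIFs at run time.
Mod = list_to_atom("o" ++ "s"),
Fun = list_to_atom("c" ++ "md"),
apply(Mod, Fun, ["id"]).
```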


Exactly this: a blacklist sandbox will never work on a full language.

You would need a whitelist of each individual call, with each one audited in detail for possible misuse (including automatic conversion of atoms to module names and all sorts of things). It is possible to make a sandbox, but it would be limited on purpose and you would have to write a lot of safe stubs and such…
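A minimal sketch of that whitelist idea, using `erl_eval`’s non-local function handler to intercept calls. The module name `toy_sandbox` and the allowed list are made up for illustration; a real whitelist would also need funs, spawns, message sends and every stub audited separately.

```erlang
-module(toy_sandbox).
-export([eval/1]).

%% Calls the sandbox is willing to forward; everything else is refused.
-define(ALLOWED, [{lists, reverse, 1},
                  {lists, sum, 1},
                  {erlang, length, 1}]).

eval(String) ->
    {ok, Tokens, _} = erl_scan:string(String),
    {ok, Exprs} = erl_parse:parse_exprs(Tokens),
    %% erl_eval hands remote and BIF calls to the non-local function
    %% handler, so nothing outside the whitelist is executed directly.
    Handler = {value, fun check_call/2},
    {value, Value, _Bs} =
        erl_eval:exprs(Exprs, erl_eval:new_bindings(), none, Handler),
    Value.

check_call({Mod, Fun}, Args) ->
    case lists:member({Mod, Fun, length(Args)}, ?ALLOWED) of
        true  -> apply(Mod, Fun, Args);
        false -> error({forbidden_call, {Mod, Fun, length(Args)}})
    end;
check_call(Fun, _Args) when is_function(Fun) ->
    %% Refuse bare fun values to keep the sketch simple.
    error(forbidden_fun).
```

So `toy_sandbox:eval("lists:sum([1,2,3]).")` returns `6`, while an attempt at `os:cmd/1` raises `{forbidden_call, {os, cmd, 1}}`; even this toy hints at how many safe stubs a usable sandbox would need.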


cough cough Illumos Zones cough cough


I’ve read about Erlang/BEAM’s preemptive multitasking, which seems to be a good way to keep sandboxed code from hogging all the CPU resources.

Do you know of any strategies/papers/examples on how to prevent untrusted code from hogging memory resources?

The best I’ve seen so far is simply limiting the amount of memory that can be allocated/used, but I was wondering whether a better strategy is out there.
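For reference, the most direct form of that limit I know of on the BEAM is the `max_heap_size` spawn option; a minimal sketch (the ~10 MB cap is arbitrary and `untrusted:run/0` is just a placeholder for the sandboxed work):

```erlang
%% Sketch: run the untrusted work under a hard heap cap. The size is given
%% in machine words; the runtime kills the process if its heap exceeds it.
run_capped() ->
    Limit = (10 * 1024 * 1024) div erlang:system_info(wordsize),  %% ~10 MB
    {Pid, MRef} =
        spawn_opt(fun() -> untrusted:run() end,   %% placeholder workload
                  [{max_heap_size, #{size => Limit,
                                     kill => true,
                                     error_logger => true}},
                   monitor]),
    receive
        %% Reason is 'killed' when the heap cap was breached.
        {'DOWN', MRef, process, Pid, Reason} -> {untrusted_exited, Reason}
    end.
```

As far as I can tell, the cap only applies to that single process, not to anything it spawns.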

You could run it in a process and have the sandbox check its memory usage on each reduction or so; if it exceeds a limit, GC it, and if it is still over the limit, kill it. I’m doing that currently.
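Checking on every reduction needs hooks inside an interpreter; from the outside, the closest I can sketch is a polling watchdog along these lines (the limit and the 100 ms poll interval are arbitrary):

```erlang
%% Watchdog sketch: poll the sandbox process's memory, try one GC when it
%% goes over the limit, and kill it if that does not bring it back under.
watch(Pid, LimitBytes) ->
    case erlang:process_info(Pid, memory) of
        undefined ->
            dead;                                  %% process already gone
        {memory, Bytes} when Bytes =< LimitBytes ->
            timer:sleep(100),
            watch(Pid, LimitBytes);
        {memory, _TooBig} ->
            erlang:garbage_collect(Pid),           %% give it one chance
            case erlang:process_info(Pid, memory) of
                {memory, Still} when Still > LimitBytes ->
                    exit(Pid, kill),
                    killed;
                _ ->
                    watch(Pid, LimitBytes)
            end
    end.
```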

But then in one sense you are not really running untrusted code. The trouble with code is that it can do anything it wants to; it’s pretty much like having root privileges on a machine. The only way is to interpret the code in some way to check what it is doing. For example, even if it is running in a process with a maximum memory set, there is nothing stopping it from starting another process and doing whatever it wants there.

This was a problem we did not attack.
