WaspVM - Run WebAssembly in Elixir

As we’re nearing a 1.0 release of WaspVM, I thought I’d post something here for anyone who might be interested in using it. Here it is on GitHub

WaspVM is a WebAssembly virtual machine written in Elixir that you can interface with from your Elixir projects. This makes distributing a program that uses Wasm easier, since you don’t have to create a separate build for each architecture – you just distribute it like a normal pure-Elixir project. Not having to rely on a C NIF also makes running Wasm safer: crashes in the VM won’t bring down your entire application.

We’re going to be using WaspVM as our dApp VM in the Elixium Network, but it’s a general-purpose VM.


Huh, this is really cool. Interesting sandbox method to let basically anyone run untrusted code. Not super performant, since WASM doesn’t map to the BEAM very well, but perfectly good for most untrusted code. I may make use of this. :slight_smile:


Thanks! Yeah, performance won’t reach what you’d get with a closer-to-native language like C or Rust (at least not if you’re only executing one VM at a time), but it lets you sandbox your code like you said. We haven’t actually benchmarked this yet, but I suspect it outperforms C and Rust when running multiple VMs at once (like running 10 different Wasm modules at the same time, which is what we’ll be doing in Elixium). Let me know if you end up doing something cool with this!


I haven’t looked deeply into it yet, but a couple of questions:

How does it handle a call running ‘too’ long? Can you enforce a maximum ‘reduction’/op-call count, or should we just shunt it into its own BEAM actor and kill it if it takes too long?

Is there a way to pause and later resume a running call?

WASM doesn’t define Perfect TCO yet, but does this support PTCO (the BEAM does)?

Is there a way to serialize out the state of the ‘program’ and reload it where it left off later (outside or inside of a function call)?


There’s a system which lets you specify a maximum gas limit to prevent programs from running too long; it’s in one of the pull requests and is scheduled to be added in the next version. Basically, each instruction has an associated cost to run, and this cost accumulates while the program is running – if the accumulated amount exceeds the specified limit, the program halts.
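The metering loop described here can be sketched in plain Elixir. This is a toy model, not WaspVM’s actual implementation – the instruction names and per-instruction costs below are made up purely for illustration:

```elixir
defmodule GasMeter do
  @moduledoc "Toy gas metering: halt when accumulated cost exceeds the limit."

  # Hypothetical per-instruction costs; a real VM defines these itself.
  @costs %{i32_add: 1, i32_mul: 3, call: 10}

  @doc "Runs a list of instruction atoms, accumulating gas until done or out of gas."
  def run(instructions, gas_limit), do: run(instructions, gas_limit, 0)

  defp run([], _limit, used), do: {:ok, used}

  defp run([instr | rest], limit, used) do
    used = used + Map.get(@costs, instr, 1)

    if used > limit do
      # Accumulated cost exceeded the limit: halt the program.
      {:error, :out_of_gas, used}
    else
      run(rest, limit, used)
    end
  end
end

# A program that fits in the budget…
GasMeter.run([:i32_add, :i32_mul], 10)  # => {:ok, 4}
# …and one that exceeds it and halts early.
GasMeter.run([:call, :call], 15)        # => {:error, :out_of_gas, 20}
```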

There’s currently no way to pause a call, but I suppose that should be easy enough to implement, depending on how that’s meant to work.

The frames aren’t actually being dealt with as a stack; they’re just using Elixir’s built-in recursion on every function call. I don’t recall if we’ve tail-call optimized yet, but I do know that we can call Wasm functions recursively millions of frames deep with no issues.

There’s no serialization at the moment, although that’s something that could also be implemented pretty easily. I don’t think it’s a goal of Wasp, though – it depends on the user to validate that the serialized state is valid and non-malicious, which opens up an attack vector.


Elixir/BEAM’s stack is handled like the heap – they share one memory block, starting at opposite ends and growing toward each other. Consequently, if you compare a recursive function that calls itself many times while keeping its args on the stack to one that uses TCO but keeps its args in a list, the TCO version will actually be slightly slower because of the list operations, while both use near about the same memory. So, unlike in on-metal languages, deep recursion isn’t an issue either way. :slight_smile:

As long as the VM’s recursive calls are TCO then you should just about get TCO for free though. :slight_smile:
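For anyone unfamiliar with how the BEAM handles deep recursion, here’s a small illustration (`Depth` is a throwaway module, not part of WaspVM): the body-recursive version keeps a frame per call yet still runs millions of frames deep because the process stack lives on the heap, while the tail-recursive version reuses its frame and runs in constant stack space.

```elixir
defmodule Depth do
  # Body-recursive: each call keeps a frame, but the BEAM grows the
  # process stack in heap memory, so deep recursion just costs memory.
  def count_body(0), do: 0
  def count_body(n), do: 1 + count_body(n - 1)

  # Tail-recursive: the recursive call is the last thing done, so the
  # BEAM reuses the current frame and the stack stays constant.
  def count_tail(n, acc \\ 0)
  def count_tail(0, acc), do: acc
  def count_tail(n, acc), do: count_tail(n - 1, acc + 1)
end

Depth.count_body(1_000_000)  # => 1_000_000 (would blow the stack in C)
Depth.count_tail(1_000_000)  # => 1_000_000 (constant stack space)
```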

I’m not thinking of serializing the program itself back out, I mean its memory state. It would be convenient to be able to load, say, a game-object state that a user programs: it runs around for a bit, eventually gets unloaded and serialized to a database, and then when the user logs back in it gets deserialized and its memory is restored to just how it was left, picking up where it was in its own program.

Ahh, I understand. What we need in order to make this possible is to expose an API for interacting with the Wasm virtual memory, the way browsers allow you to. This isn’t specified in the spec, but it’s a feature that makes sense to have. I’ve created an issue for this here; leave a comment on it if you want to claim the task, otherwise it’s open to anyone.


Essentially, if I can just :erlang.term_to_binary/1 and then :erlang.binary_to_term/1 to serialize an entire interpreter out and back in, that would be awesome. Though a better-defined version that only serializes what is necessary would be far superior. :slight_smile:

Of course the term/binary conversions would fail if the interpreter spawns multiple Erlang processes (I’m not sure it should).
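For reference, that round trip looks like this – assuming the interpreter state is a plain term (the `ip`/`stack`/`memory` map here is hypothetical; PIDs, ports, and refs would not survive serialization across VM restarts, which is the caveat mentioned above):

```elixir
# Hypothetical interpreter state as a plain map of terms.
state = %{ip: 42, stack: [1, 2, 3], memory: <<0, 0, 7, 0>>}

# Serialize out to the external term format…
bin = :erlang.term_to_binary(state)
true = is_binary(bin)

# …and back in: the restored term is structurally identical.
restored = :erlang.binary_to_term(bin)
restored == state  # => true
```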

Direct serialization/deserialization support isn’t something that I feel needs to be in the VM itself. The only state the VM really holds is instruction pointers and memory mappings, and the feature that gives direct access to memory through an API would leave just the instruction pointer as a piece of state that could be exported/imported – but I don’t see any real value in that. At most, it’d let users pause/play the VM as it cycles through instructions (which is something we could support directly, without serialization).

There exists a way to view the VM state in a read-only way if someone wanted to run diagnostics, but importing VM state probably wouldn’t be that useful, unless there’s a use case I’m not seeing?


Like the one I listed above: say someone wants to make a game where they write code to control a little in-game tank. They upload their Wasm to the server and the server runs it; when they log out, their little moving tankbot is eventually paused and serialized out – the full state of the Wasm interpreter – and later, when they log back in, it’s reloaded right where it left off. Lots of little cases like that. :slight_smile:

Right, but that state would be stored in the Wasm module’s memory, which can be read and written from Elixir. Either way, Elixir is going to need to initialize a new VM, so you can write to memory from your Elixir code upon VM initialization.

Yep, being able to read/write that entire memory state, especially if it contains the IP, should be quite good. :slight_smile:


Just released v0.8.1, where this feature and others were added. Here’s a fun blog post on using the new Memory API within a WebAssembly tic-tac-toe game.


board = WaspVM.HostFunction.API.get_memory(ctx, "game_mem", 0, 9)

Oooo awesome!

I like the post, easy to read, follow, and understand. :slight_smile:

As for WaspVM.HostFunction.API.get_memory/4, does that include the instruction pointer and all as well? I.e. if WASM calls ‘into’ an Elixir function, can that Elixir function serialize up the whole state and then pick up again where it left off (assuming no other Elixir functions are on the Wasm stack, and probably some argument indicating whether it was just loaded or not)?

This variable is defined by the defhost macro, and is solely used as a reference that’s passed into the HostFunction API.

As for this, I’m really not a fan of magic variables. Why not have the user prepend it to all defhost argument lists so it is explicitly passed in? Thus:

defhost get_move_for_player(player) do

defhost draw_board do ... end

And so forth should actually be written like:

defhost get_move_for_player(vmctx, player) do

defhost draw_board(vmctx) do ... end

I always try to remember the Python tenet: Explicit is Better than Implicit :slight_smile:

Also, does this mean you can’t call the function from outside the WaspVM interpreter? Like what would you pass in then, the PID like this?

def run_game do
  # Start a fresh VM instance
  {:ok, pid} = WaspVM.start()
  WaspVM.load_file(pid, "priv/wasm/tic_tac_toe.wasm", imports)
  {:ok, gas, result} = WaspVM.execute(pid, "play")
  WaspVM.HostFunction.API.get_memory(pid, "game_mem", 0, 9)
end

Also, the calls return the gas used, but is there a way to set a limit on how much gas is allowed to be used in a call before it either raises an exception or serializes/pauses its state for later resumption?


Thanks! Appreciate the feedback tons :slight_smile:

We’ve decided not to expose internal VM state at all – so instruction pointers won’t be available. Things can easily get messy when allowing people to poke around inside the VM – and it should be possible to do most things by serializing / deserializing memory. If we were to expose instruction pointers and the call stack this would be akin to the BEAM providing functionality to do the same at runtime – although useful in some cases, potentially easily destructive.

This would allow for messy programming – and is also something that falls outside of the WebAssembly spec. For example: if you have an Elixir program that can run and do some work, and then be stopped, serialized, and restarted by passing state back to the BEAM, certain safety checks – such as whether the program is in the middle of writing to a file – would be bypassed (e.g. it’s dangerous to resume the execution of a function halfway through).

In most cases, I agree that magic variables aren’t the way to go. However, in this case I feel it makes sense – host function heads defined by defhost need to strictly resemble the function heads that they’ll be exposed as in the WebAssembly module. It could easily get confusing when you have a function like

defhost get_move_for_player(vmctx, player) do ... end

that you need to interface with in WebAssembly like

(import "Module" "get_move_for_player" (func (param i32) (result i32)))

because the former has 2 params and the latter has only 1. Plug does the same thing with its conn variable in the router.

You can! The only thing is that you’d need to use the functions defined on the VM itself, WaspVM.get_memory/2 and WaspVM.update_memory/3, which take a reference to the VM directly; then you can use WaspVM.Memory to interface with the memory. So the above code would be rewritten as

def run_game do
  # Start a fresh VM instance
  {:ok, pid} = WaspVM.start()
  WaspVM.load_file(pid, "priv/wasm/tic_tac_toe.wasm", imports)
  {:ok, gas, result} = WaspVM.execute(pid, "play")

  # Retrieve an exported memory from the VM
  mem = WaspVM.get_memory(pid, "game_mem")

  # Read bytes from memory
  WaspVM.Memory.get_at(mem, 0, 9)
end

Yep! You can do this by passing a gas_limit when calling WaspVM.execute:

WaspVM.execute(pid, "some_func", [], gas_limit: 100)

If the gas limit is reached, the program will interrupt and return an error.


Then what about adding it as an argument to the defhost? Like via:

defhost get_move_for_player(player), context: ctx do

That just needs a defhost macro definition like:

defmacro defhost(head, mappers \\ [], do: body) do
  context_var = mappers[:context] || Macro.var(:ctx, nil)
  # Then just unquote `context_var` wherever it’s used now
end

That will support both the old implicit style as well as allowing someone not just to make it explicit but also to name the variable whatever they want. This is a pattern I use for similar purposes. :slight_smile:

Can move the options to before the head as well:

defhost [context: ctx], get_move_for_player(player) do

Whichever you think feels more natural (I’ve seen both forms pretty equally; having it in front means you need the [/] wrapping it, whereas having it at the end, before the do, means you don’t).

In addition, both forms let you add more ‘mapping’ variables in the future with trivial ease if you ever need to. And if you think you don’t want to support the implicit forms (I recommend not to support them personally) then you can use that knowledge to generate more efficient code as well! :slight_smile:

Except you can ‘pop’ the first one if it matches a pattern, like being named ctx or so – that’s still awfully implicit though. ^.^;

Awesome!! ^.^

Great! Is there a way to not have it error but instead return a ‘continuation’? Perhaps something like:

case WaspVM.execute(pid, "some_func", [], gas_limit: 100) do
  {:ok, return} -> return # this is a returned variable
  {:continuation, cont} ->
    cont.() # Can just call it again to run it with the same options
    # Or perhaps make it take an argument list like `cont.([])` so you can do something like:
    # cont.(gas_limit: 50)
    # To change certain specific options and pick up where it left off otherwise.
  {:error, reason} -> throw reason # This is a returned hard error
end

One-shot continuations would be fine, although if possible (maybe behind an option if it’s costly – though if memory is a binary it shouldn’t be, since binaries are shared rather than copied on read), the continuation could wrap the entire interpreter state so it could be called multiple times to continue from the save point the continuation itself wraps (this is a pattern I’ve used in a couple of interpreters I wrote in Elixir).
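The continuation idea can be prototyped in plain Elixir. This is a toy stepped interpreter (StepVM is hypothetical, not a WaspVM API): it consumes one unit of ‘gas’ per step and, when the budget runs out, returns a closure wrapping the remaining state, which can be called with a fresh limit to resume exactly where it stopped:

```elixir
defmodule StepVM do
  # The "program" is just a list of numbers to sum, one per step.

  def execute([], acc, _gas_limit), do: {:ok, acc}

  def execute(program, acc, 0) do
    # Budget exhausted: hand back a closure wrapping the remaining state.
    {:continuation, fn new_limit -> execute(program, acc, new_limit) end}
  end

  def execute([n | rest], acc, gas_limit) do
    execute(rest, acc + n, gas_limit - 1)
  end
end

case StepVM.execute([1, 2, 3, 4], 0, 2) do
  {:ok, sum} -> sum
  {:continuation, cont} -> cont.(100)  # resume where it left off
end
# => {:ok, 10}
```

Since the closure captures the remaining program and accumulator as plain immutable terms, it can be invoked more than once, which is what makes the multi-shot variant cheap on the BEAM.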

Also, instead of running the interpreter in another process, have you thought of having the state be passed in and out of everything (like the Lua interpreter), so we can just run it in-process (we can always spawn a process if we need one)? That would make it trivial to do things like clone the existing interpreter state, among other capabilities. :slight_smile:


At ElixirConf 2019 and in chats I can find from 2019, I see a lot about WASM and Elixir/BEAM.
We are now approximately 7–9 months after most of the discussions I can find. Surely a lot must have happened in between?

How far are we from compiling Elixir to be executed in the browser?
Since I’m taking steps to move to Elixir/Phoenix for PWA development, offline support is what I’m missing – so either LiveView could be used with a BEAM running in the browser (?), or we could simply compile Elixir (and LiveView?) to a binary format executed in the browser…

Thanks for helping me have realistic expectations about maturity and the road map …

Best regards