Is there any way to protect Elixir application source code?

I’ve developed a solution that runs on some Raspberry Pis deployed at client sites. The current solution was written in Go. It’s “working”, but I have some problems dealing with bad state that something like Elixir supervisors would completely solve.
The best part of Go is that it compiles to a binary, so the client can’t get the source code. Is there any way to do something like this with Elixir? I’m thinking of porting the app to Elixir, as it’s small.

  1. Even for Go, source code fragments might remain in the compiled binary. In particular, the imports used are easily extractable with the `strings` tool. Even more of the source remains if it is compiled with debug symbols and/or not properly stripped.
  2. You can disable embedding of debug symbols, which as far as I remember is done automatically when building a release.
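For the Elixir side of point 2, stripping is controlled in the release configuration. A minimal sketch of a `mix.exs` (the project name `my_app` is a placeholder; `strip_beams: true` is already the default for `mix release`, shown here explicitly):

```elixir
# Hypothetical mix.exs. `strip_beams: true` removes the debug_info chunk
# (among other non-essential chunks) from the .beam files packaged
# into the release.
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      elixir: "~> 1.14",
      releases: [
        my_app: [
          strip_beams: true
        ]
      ]
    ]
  end
end
```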

Elixir is a compiled language, so you can ship a binary, no source. But as @NobbZ says, there are ways to disassemble it, at least partially. Should stop 99.9% of clients, probably; it’s not like disassembling is super easy.


Compiled? I thought it was interpreted. So, if it’s compiled, even if not perfect, that’s way better than things like Ruby or Python, where I need to actually “ship the code”.

Yeah, but most of the code is compiled. I don’t really care about the imports; the most important part is the logic that I’ve developed.

Hmm, about your second point: were you talking about Elixir or Go?

The second point is purely about Elixir.

Still, it is basically the same. Unless deactivated, both compiled artifacts will include debug symbols, which can be used to decompile into pretty readable source code.

Just because something is “compiled” doesn’t mean it no longer contains source code, or that it isn’t reverse-engineerable.

For each language there are tools to take precautions. But the last thing you should do is trust the compiler to do it for you on default settings…
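To make that concrete: when a .beam file still carries its debug_info chunk, `:beam_lib` can hand back the abstract code, which pretty-printers turn into readable source. A sketch (the module path is a placeholder):

```elixir
# Hypothetical path to a compiled module that still has debug info.
beam_path = ~c"_build/dev/lib/my_app/ebin/Elixir.MyApp.beam"

case :beam_lib.chunks(beam_path, [:debug_info]) do
  {:ok, {module, [debug_info: {:debug_info_v1, backend, data}]}} ->
    # The backend can render the data as Erlang abstract forms,
    # which can then be pretty-printed back to source.
    {:ok, forms} = backend.debug_info(:erlang_v1, module, data, [])
    IO.puts(:erl_prettypr.format(:erl_syntax.form_list(forms)))

  {:error, :beam_lib, reason} ->
    # A stripped .beam has no debug_info chunk, so this branch is taken.
    IO.inspect(reason)
end
```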


Using :beam_lib.strip_release/1 is good enough.
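For reference, `:beam_lib.strip_release/1` walks an unpacked release and strips every .beam under `lib/*/ebin`. A sketch (the release path is a placeholder):

```elixir
# Strip debug_info and other non-essential chunks from all .beam files
# in an unpacked release. The directory path is a placeholder.
{:ok, stripped} = :beam_lib.strip_release(~c"/path/to/my_release")

# Each entry is {module, beam_file_path} for a stripped module.
Enum.each(stripped, fn {mod, path} -> IO.puts("#{mod}: #{path}") end)
```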


There’s a simple, cross-platform, language-agnostic tool that’s perfect for this application: LAWYERS. If you’re worried about your clients stealing proprietary algorithms the way to guard against that is with contract language.

You can put up technical roadblocks, but ultimately you’re handing them the algorithms in a form that’s analyzable and reversible given sufficient time and money.


When you build your release you can choose not to include the source code, so all the client gets are the .beam files, and if you remove the debug info they won’t be able to extract source from them.

I think there is a way to encrypt the .beam files as well. At least Erlang offers that.

This encryption is good for when the .beam files are leaked to someone without access to the decryption keys, but to run the code the decryption keys need to be available to the runtime. The client running the code will therefore have access to them, so we are back to square one: the client can still reverse engineer the .beam files.
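For completeness, what Erlang offers here is encrypted debug info, not whole-file encryption: the debug_info chunk is encrypted at compile time with a key, and `:beam_lib` needs a crypto key fun to read it back. A sketch (the key `"secret"` is a placeholder):

```elixir
# Compile-time (Erlang side): erlc +'{debug_info_key,"secret"}' mod.erl
# encrypts the debug_info chunk with the given key.

# To let :beam_lib decrypt that chunk again, it must be handed the key
# via a crypto key fun:
:beam_lib.crypto_key_fun(fn
  :init -> :ok
  {:debug_info, :des3_cbc, _module, _filename} -> ~c"secret"
  :clear -> :ok
end)
```

Which is exactly the point above: anything that needs to read the chunk at runtime needs the key.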


Elixir has *.ex and *.exs files. *.exs files are interpreted, and *.ex files are compiled. When you build a release, you are compiling and packaging the resulting *.beam files, which are binary. If you ship the release to your client, it generally doesn’t contain uncompiled Elixir code.


I literally spent 2 minutes searching github for “elixir lib lawyers obfuscation”. Then I realised “oh, actual human lawyers”. :man_facepalming:


I would highly recommend checking out Nerves. It handles a bunch of the tricky parts of building releases for Raspberry Pi and other embedded devices (cross-compiling C extensions, etc.).

It also enables things like NervesHub, so you can safely do over-the-air updates and securely do remote diagnostics.


I really need to find some time to dig into this, because I am really curious to know how they open or close the remote device for over-the-air updates and remote diagnostics.

Is it done via HTTPS or SSH?

If via HTTPS, does it use mutual authentication with a public/private key pair, or is it just based on a secret string in a request header?


It happens over HTTPS (actually a WSS connection, since it gets upgraded to a WebSocket). I’m sure @fhunleth or @jjcarstens will know more details than I do, but I know that the OTA update gets signed so that the Raspberry Pi can confirm that it’s from you (you can use a private key that’s embedded in the hardware).


@mmmries is correct - the typical connection path is WSS with client-side and server-side SSL to establish that connection. On our production devices, we use the ATECC508A/608A cryptographic chips for storing the device cert and key.
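Client-side plus server-side SSL means mutual TLS: both ends present certificates. A generic sketch with OTP’s `:ssl` (hostname and file paths are placeholders, and on real Nerves hardware the device key may live in the crypto chip rather than in a file):

```elixir
# Hypothetical mutual-TLS client connection; all paths are placeholders.
{:ok, _socket} =
  :ssl.connect(~c"hub.example.com", 443,
    verify: :verify_peer,                # check the server's cert...
    cacertfile: ~c"/etc/certs/ca.pem",   # ...against this CA
    certfile: ~c"/etc/certs/device.pem", # device (client) certificate
    keyfile: ~c"/etc/certs/device.key"   # device private key
  )
```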

Firmware files are signed, and the device has the public key to verify the file, which prevents installing unknown firmware.
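The general shape of that check (not NervesHub’s or fwup’s actual signing format, just an illustration) can be sketched with OTP’s `:crypto` and an Ed25519 key pair: the private key signs the firmware off-device, and the device holds only the public key:

```elixir
# Sketch only — the firmware binary is a stand-in for the real bytes.
{pub, priv} = :crypto.generate_key(:eddsa, :ed25519)

firmware = "fake firmware payload"

# Off-device: produce a detached signature with the private key.
signature = :crypto.sign(:eddsa, :none, firmware, [priv, :ed25519])

# On-device: only `pub` is present; install iff this returns true.
:crypto.verify(:eddsa, :none, firmware, signature, [pub, :ed25519])
```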

There is a decent amount to it, so I’d recommend digging through the NervesHub documentation a bit for the gist if it’s of interest.

We’re also working towards root-filesystem encryption on our devices as a further step of protection when using Nerves (but this has a little way to go).


How do you resolve the situation where a private key was compromised, and the device therefore can no longer use the corresponding public key to verify the firmware signature?

I think this is up to the implementation. I can think of a few ideas which I would consider:

  • The device supports a list of firmware public keys which it checks any new firmware against (not restricted to just one key). You might preemptively plan for this case: have multiple firmware public keys on the device but mainly use one. In the event of a compromise, you use the backup firmware key to sign a new firmware that removes the compromised key from the list; once the device gets that update, the compromised public key is no longer on the device.
  • NervesHub supports a remote IEx console over the authenticated WSS connection. If the device firmware was built to allow this, an admin in NervesHub could manually update the firmware public key for the new firmware key. This could be a bit of work and may not be enabled for especially hardened devices.
  • If your firmware private key was compromised, you may consider any device with that public key as compromised as well. In that case, communication would be severed, requiring manual intervention to reflash the device with new firmware that includes a new public firmware key.

This would be the only truly secure option to guarantee that the remote device gets back to a 100% trusted state.
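The first bullet (a list of trusted firmware public keys, checked in turn) is straightforward to sketch on top of an Ed25519 detached-signature check. `FirmwareAuth` is a hypothetical helper, not part of Nerves:

```elixir
# Hypothetical helper: accept a firmware if any trusted public key
# verifies its detached Ed25519 signature.
defmodule FirmwareAuth do
  def trusted?(firmware, signature, public_keys) do
    Enum.any?(public_keys, fn pub ->
      :crypto.verify(:eddsa, :none, firmware, signature, [pub, :ed25519])
    end)
  end
end

# Rotation then means shipping a firmware (signed by the backup key)
# whose only change is dropping the compromised key from the list.
```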