Convert a PyTorch model to native Elixir

Hello,

I have seen some references that work is being done to help migrate PyTorch models to Elixir. Could anyone point me to some resources related to this? Whisper was released recently and I am very keen to pull it into some of my projects and see how it performs, but I would really like to stay in Elixir land and not have to shell out to PyTorch.

Was I misremembering that there is a project under way to convert PyTorch models so they can run in Elixir?

I think the Axon library may be what you are after (on my phone, so apologies for the brief answer).

Find an ONNX export of the PyTorch model, or use PyTorch to load the model yourself and export it to ONNX. Once you have an ONNX file, you can use axon_onnx to load it into Elixir.
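As a rough sketch, loading and running an ONNX model could look like the following. It assumes axon_onnx's `AxonOnnx.import/2` and Axon's `Axon.predict/3`; the file name, backend, and input shape are placeholders you would swap for your own model (and do check the current docs, since these APIs have changed between versions):

```elixir
# Sketch only: load an ONNX file into Axon and run a single prediction.
# Assumes {:axon, ...}, {:axon_onnx, ...}, and {:exla, ...} in mix.exs.
Nx.default_backend(EXLA.Backend)

# Import the ONNX graph; returns the Axon model plus its trained parameters.
# "model.onnx" is a placeholder path.
{model, params} = AxonOnnx.import("model.onnx")

# Dummy input matching the shape the model expects (check the ONNX graph);
# {1, 3, 224, 224} here is just an illustrative image-model shape.
input = Nx.broadcast(0.0, {1, 3, 224, 224})

output = Axon.predict(model, params, input)
IO.inspect(Nx.shape(output), label: "output shape")
```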

I suggest starting with the following notebook that I contributed to Axon: axon/onnx_to_axon.livemd at main · elixir-nx/axon · GitHub. If you can get the ONNX-to-Axon side of the process working, then try the same thing on another ONNX model.

If you want to see an example of converting a PyTorch model to ONNX, review the Fast.ai code linked in the above livebook.

**BIG WARNING**: Saving ONNX-imported models to an Axon file will probably fail. See Sean's ElixirConf presentation for why.

Another warning: if you run a model on your CPU via Torchx or EXLA, you should be fine. GPUs are a different story: a model can only run on a GPU if its parameters (and intermediate tensors) fit in the available GPU memory.
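For what it's worth, choosing where the model runs is just backend configuration. A minimal sketch, assuming EXLA (exact option names may differ across Nx/EXLA versions):

```elixir
# config/config.exs — sketch of backend selection, assuming EXLA.
import Config

# Run on the CPU by default:
config :nx, default_backend: EXLA.Backend

# To target a CUDA GPU instead (requires a CUDA-enabled XLA build),
# you could use:
# config :nx, default_backend: {EXLA.Backend, client: :cuda}
```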

Expect a blog post on Axon and ONNX at https://alongtheaxon.com/ covering the onnx_to_axon notebook above; it is on my short-term TODO list.
Good luck.
