Mojo by Chris Lattner looks very similar to the Nx approach for Elixir

Interesting Hacker News item on Mojo, which is Chris Lattner’s take on adapting Python to the numerical computing / machine learning world. Reads a lot like the work that @josevalim, @polvalente and @seanmor5 (and others) have been delivering for Elixir over the last couple of years, although his work seems more focused on direct compilation to MLIR.

I wonder if there is room to see a native MLIR integration layer somewhere in the Elixir future? Perhaps a more evolved JIT?

9 Likes

I totally agree with the reasoning behind this. If Erlang/Elixir had a subset of the language that could compile to native code, it would be much easier to work with the native interface: no need for a secondary language, toolchain, and the other nasty things that always blow up when you try to build the project.

1 Like

Hi @kip!

My understanding so far is that we can describe Mojo as a system-level language (like Rust, Zig, C++) with Python-like syntax and a Rust-like type system, with a Python runtime embedded within itself. The approaches are somewhat different to Nx (which in itself is closer to Python’s JAX).
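To make the Nx/JAX side of this comparison concrete, here is a toy Python sketch (not Nx or JAX itself — the names and the expression encoding are invented for illustration) of the "numerical DSL embedded in a high-level language" approach: the host language runs the function once with symbolic tracers to capture an expression graph, which a backend can then compile or interpret. Nx's `defn` works on a broadly similar tracing principle, lowering captured graphs through pluggable compiler backends.

```python
# Toy tracing sketch: capture a numeric function as a graph, then run it
# through a trivial interpreter "backend". All names here are hypothetical.

class Tracer:
    """Records operations instead of computing them."""
    def __init__(self, expr):
        self.expr = expr
    def __add__(self, other):
        return Tracer(("add", self.expr, to_expr(other)))
    def __mul__(self, other):
        return Tracer(("mul", self.expr, to_expr(other)))

def to_expr(x):
    return x.expr if isinstance(x, Tracer) else ("const", x)

def trace(f, *arg_names):
    """Run f on symbolic tracers to capture its computation graph."""
    return f(*[Tracer(("arg", n)) for n in arg_names]).expr

def evaluate(expr, env):
    """A trivial 'backend': interpret the captured graph."""
    tag = expr[0]
    if tag == "arg":
        return env[expr[1]]
    if tag == "const":
        return expr[1]
    a, b = evaluate(expr[1], env), evaluate(expr[2], env)
    return a + b if tag == "add" else a * b

graph = trace(lambda x, y: x * x + y, "x", "y")
print(evaluate(graph, {"x": 3, "y": 4}))  # 3*3 + 4 = 13
```

A real system would hand the graph to a compiler (XLA for JAX, EXLA and friends for Nx) instead of interpreting it, but the key point is the same: the *host* language stays high-level and the numerical subset is what gets compiled.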

One of the interesting things about Mojo is that it is also described as a front-end (aka syntax) to MLIR: you can specify MLIR types and instructions directly in Mojo, and their goal is to build the entirety of Python from this (including the object system). Note that although they say Mojo is a superset of Python, it definitely isn’t one right now - e.g. you can’t define classes - and I think it will be very, very hard to keep full Python compatibility, especially if you include the C API.
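To illustrate what "a front-end to MLIR" means, here is a hypothetical Python sketch of the front-end's job: turning a surface-level expression into MLIR's textual IR. The `arith.addi`/`arith.muli` op names and the SSA syntax are real MLIR conventions from the standard `arith` dialect, but the little emitter itself is invented for illustration:

```python
# Lower a tiny ('add'|'mul', lhs, rhs) expression tree into MLIR textual
# form, assigning SSA value names (%0, %1, ...) as we go. Illustrative only.

def emit(expr, lines, counter):
    if isinstance(expr, str):          # a function argument, e.g. "%a"
        return expr
    op = {"add": "arith.addi", "mul": "arith.muli"}[expr[0]]
    lhs = emit(expr[1], lines, counter)
    rhs = emit(expr[2], lines, counter)
    name = f"%{counter[0]}"
    counter[0] += 1
    lines.append(f"  {name} = {op} {lhs}, {rhs} : i64")
    return name

def lower(expr):
    lines = ["func.func @f(%a: i64, %b: i64) -> i64 {"]
    result = emit(expr, lines, [0])
    lines.append(f"  return {result} : i64")
    lines.append("}")
    return "\n".join(lines)

# a*a + b becomes two arith ops plus a return:
print(lower(("add", ("mul", "%a", "%a"), "%b")))
```

Mojo goes much further, of course - its types and ops bottom out in MLIR directly rather than being pretty-printed - but the shape of the pipeline (syntax in, MLIR dialects out, with all optimization happening below) is the same.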

In other words, Nx is about embedding low-level bits inside a high-level language, and Mojo is about a low-level language with a high-level runtime inside. I know @jackalcooper is working on MLIR integration but I believe his work so far is more akin to embedding low-level bits inside Elixir.

EDIT: it is also interesting that Mojo is borrowed by default - which brings its semantics closer to a functional programming language - and mutation needs to be explicitly enabled. More info: Modular Docs - Mojo🔥 programming manual

18 Likes

Now in Elixir you can already JIT-compile a native function with Beaver.
A small example: beaver/guides/your-first-beaver-compiler.livemd at main · beaver-lodge/beaver · GitHub


Like José said, MLIR integration’s low-level bits are ready now. I just don’t have enough knowledge (and perhaps time) to pursue it further. In other words, with Beaver you can implement a Mojo-like DSL in pure Elixir.

Anyone interested in this please feel free to message me~

8 Likes

More about Mojo’s implementation details: Modular: Mojo 🔥 - A systems programming language presented at LLVM 2023

New Mojo-like DSL in Elixir: GitHub - beaver-lodge/charms: Write NIF in Elixir for Elixir
example: charms/bench/enif_quick_sort.ex at main · beaver-lodge/charms · GitHub

6 Likes

Heh, that’s pretty neat actually. If done well, you could more or less transpile Elixir to C and eliminate a ton of memory-safety errors, akin to what Rust does.

Really cool idea.

1 Like

extremely cool!

1 Like

Looking through the examples, can you tell me where `vector` is coming from here charms/bench/vec_add_int_list.ex at main · beaver-lodge/charms · GitHub and here charms/bench/vec_add_int_list.ex at main · beaver-lodge/charms · GitHub?


I’m not sure if I’m especially dense - maybe there’s something I’m missing here - but I still have no idea where that would come from. Does it get injected by the `mlir` block? I couldn’t find any documentation.

It would be helpful to see some more examples. I was considering how I could build something straightforward that might be improved by SIMD (like matrix math) but it feels like I’m missing something.

Sorry if this is just my lack of context showing. I think this approach for NIFs is super interesting!

> I was considering how I could build something straightforward that might be improved by SIMD (like matrix math) but it feels like I’m missing something.

The status quo is only the very first step toward enabling this (maybe 10%). It only allows generating MLIR vector types and generating CPU code to process them. To really make things work “out of the box”, we need to compile the vector types to native SIMD instructions on CPU or GPU. (At this point, I am not sure how far I can push it forward. Maybe I need to find ways to raise money to find people to solve the problems I can’t solve.)
The general plan is to implement functions equivalent to Mojo’s SIMD type: SIMD | Modular
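For readers unfamiliar with what a SIMD type buys you, here is a pure-Python stand-in (hypothetical - this is not Mojo's or Charms' actual implementation, and `Vec4` is an invented name) for the *shape* of such an API: a fixed-width vector whose elementwise operators a compiler could lower to single hardware SIMD instructions instead of lane-by-lane loops.

```python
# Emulated 4-lane vector type. In Mojo the analogue would be something
# like SIMD[DType.float32, 4]; here the semantics are faked with a list.

class Vec4:
    WIDTH = 4

    def __init__(self, *xs):
        assert len(xs) == self.WIDTH
        self.xs = list(xs)

    def __add__(self, other):
        # Conceptually one SIMD instruction: elementwise add of 4 lanes.
        return Vec4(*[a + b for a, b in zip(self.xs, other.xs)])

    def __mul__(self, other):
        # Conceptually one SIMD instruction: elementwise multiply.
        return Vec4(*[a * b for a, b in zip(self.xs, other.xs)])

    def reduce_add(self):
        # Horizontal reduction across lanes.
        return sum(self.xs)

v = Vec4(1.0, 2.0, 3.0, 4.0)
w = Vec4(10.0, 20.0, 30.0, 40.0)
print((v * w).reduce_add())  # dot product of the two 4-lane vectors: 300.0
```

The hard part described above is exactly the gap this sketch papers over: making the elementwise ops compile down to real vector instructions on the target CPU or GPU, rather than scalar loops.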

More on this: