Wrote down some notes. Please correct me if something is wrong or missing; I took these while listening :slight_smile:


Nx = Numerical Elixir
Everything is WIP, baby steps!

In a nutshell
Nx is a project that consists of a core library (Nx) and other pieces that depend on that core. Together, they introduce capabilities closely related to machine learning and adjacent fields: Nx adds the ability to work with tensors, improves Elixir’s capabilities and performance when working with numbers, and makes it possible to compile code for the GPU.

The work has been ongoing for 3 months. Everything is still in flux, but the following were mentioned.

There are 3 fronts where work is happening:

1) Nx library (core)

  • Library for multidimensional arrays that are often called tensors.
  • Tensors are typed, not in the type-system sense, but with numerical types: unsigned integers, float32, float64, etc.
  • Earlier, José made a PR to support float16 in Erlang.
  • If you come from Python, it can be thought of as kind of like NumPy. There’s a long way to go, but they’re working on it.
  • Written in pure Elixir
  • Tensors are represented as large binaries (see the sketch after this list).
  • The “slowness in Elixir” with numbers is generally due to immutability, copying, etc. This is addressed through a part of Nx called numerical definitions.
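
For illustration, a minimal sketch of what this looks like (based on recent Nx docs; the exact API was still in flux when these notes were taken):

```elixir
# Typed tensors backed by flat binaries.
t = Nx.tensor([[1, 2, 3], [4, 5, 6]], type: {:u, 8})

Nx.shape(t)      #=> {2, 3}
Nx.type(t)       #=> {:u, 8}

# The underlying representation is one large binary:
Nx.to_binary(t)  #=> <<1, 2, 3, 4, 5, 6>>
```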

2) Numerical Definitions

  • Module in Nx
  • Looks like Elixir code
  • You can define functions with defn; that’s a numerical definition.
  • Inside the definition you write a subset of Elixir: the standard Elixir kernel is replaced with a numerical kernel.
  • There, operators like + and - only work with numbers and tensors.
  • Numerical definitions are not executed immediately. Instead, they build a computation graph of all the tensor operations we want to perform.
  • When you then want to compute a result, all operations are fused together so the tensor is passed through only once, which reduces copying considerably.
  • Kind of like Elixir Streams, but it happens at the syntax level inside defn.
  • defn also allows for “automatic differentiation”, which essentially solves loss-function minimization, e.g. when training neural networks (see the sketch after this list).
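
A minimal sketch of a numerical definition (grad per the current Nx docs; its exact API was still being worked out at the time):

```elixir
defmodule MyDefn do
  import Nx.Defn

  # Inside defn only a subset of Elixir is allowed; operators like +
  # work on tensors and build a computation graph instead of
  # executing eagerly.
  defn softplus(x) do
    Nx.log(Nx.exp(x) + 1)
  end

  # Automatic differentiation: the gradient of tanh at t.
  defn grad_of_tanh(t) do
    grad(t, &Nx.tanh/1)
  end
end
```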

3) EXLA

  • Based on Google’s XLA (Accelerated Linear Algebra).
  • The Nx team wrote bindings to XLA, which is part of the TensorFlow repo.
  • You can give it a computation graph and EXLA compiles it to run efficiently on CPU or GPU.
  • You can change the compiler: the compiler backend is pluggable, so compilers other than XLA can also be integrated (see the sketch after this list).
  • The XLA compiler compiles numerical definitions to run on the GPU.
  • What this means is that a subset of Elixir is running on the GPU.
  • It’s a small subset of Elixir: it still includes things like the pipe operator and conditionals, but pattern matching is limited (only tuples), for example.
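
A sketch of plugging in the compiler (option names per the current Nx/EXLA docs; the configuration API has changed over time, so treat this as illustrative):

```elixir
# Use EXLA as the default compiler for numerical definitions in this
# process; depending on how EXLA was built, compiled code runs on
# CPU or GPU.
Nx.Defn.default_options(compiler: EXLA)

# Or JIT-compile a single defn (MyDefn from the sketch above) with
# an explicit compiler:
fun = Nx.Defn.jit(&MyDefn.softplus/1, compiler: EXLA)
fun.(Nx.tensor([1.0, 2.0, 3.0]))
```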

More information will be available on February 17 at the Lambda Days 2021 conference, where José will be speaking.
