I think the only thing you miss out on from non-Elixir code is `defn` and friends, the macros defined in `Nx.Defn`.
Perhaps there is a way to avoid needing them by using a more verbose, manual approach, though.
Feel free to ping us at #machine-learning in the EEF Slack to discuss this. José probably has the answer off the top of his head (whether it's possible to avoid the `defn` macros in favor of manual, non-macro calls), but I can try to help as well!
Worst case scenario, you can get the same effect by manually compiling anonymous functions with the equivalent of:
```elixir
iex(13)> defmodule NonDefn do
...(13)>   def add(tensor, i) do
...(13)>     Nx.Defn.jit_apply(fn a, b -> IO.inspect(Nx.add(a, b)) end, [tensor, i])
...(13)>   end
...(13)> end
{:module, NonDefn,
 <<70, 79, 82, 49, 0, 0, 6, 132, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 236,
   0, 0, 0, 23, 14, 69, 108, 105, 120, 105, 114, 46, 78, 111, 110, 68, 101, 102,
   110, 8, 95, 95, 105, 110, 102, 111, 95, ...>>, {:add, 2}}
iex(14)> NonDefn.add(10, 11)
#Nx.Tensor<
  s64

  Nx.Defn.Expr
  parameter a:0   s64
  parameter b:1   s64
  c = add a, b    s64
>
#Nx.Tensor<
  s64
  21
>
iex(15)> NonDefn.add(1, 2)
#Nx.Tensor<
  s64

  Nx.Defn.Expr
  parameter a:0   s64
  parameter b:1   s64
  c = add a, b    s64
>
#Nx.Tensor<
  s64
  3
>
```
There are some features you would miss out on, such as `while`, `cond`, and `case`, which rely on macros.
However, the main benefit of having a condensed `Nx.Defn.Expr` graph that can be compiled with the compiler of your choice is still there.
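For simple branching, some of what `cond` gives you inside `defn` can be approximated without macros by staying element-wise, for example with `Nx.select/3` (this does not cover `while` or tensor `case`). A minimal sketch, where the `relu` name and the sample input are just made up for illustration:

```elixir
# Hedged sketch: element-wise branching with Nx.select/3 instead of
# defn's cond macro. Both `relu` and the input tensor are hypothetical.
relu = fn t -> Nx.select(Nx.greater(t, 0), t, 0) end

Nx.Defn.jit_apply(relu, [Nx.tensor([-1, 0, 2])])
#=> #Nx.Tensor<
#=>   s64[3]
#=>   [0, 0, 2]
#=> >
```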