Overall, the experience was pretty good. I found the elixir-nx libraries rather stable in terms of APIs, even though they are pre-1.0. And when I had doubts or stumbled across potential issues, I reached out for help on GitHub or here in the forum.
Nice one Nick! Do you think (with your work) it’s a good book for Elixir users? Is it an easy read? If so, maybe we should run a book club on it!
Paolo Perrotta is a great author - he wrote one of my favourite Ruby books: Metaprogramming Ruby! Here’s my recollection of it in his spotlight on DT:
Do you think (with your work) it’s a good book for Elixir users? Is it an easy read?
It is definitely a good and well-written book, with a practical approach and only a couple of “heavy-math” chapters. In general I liked it and found it easy to follow, even though I didn’t have much experience with numpy. In the first chapters I struggled a bit to replicate the examples in Nx, but mainly because I was completely new to the library and to the concepts; that’s part of the game.
If so maybe we should run a book club on it
I think it can be a valuable read if you want to start with ML. On the other hand, the book is 3 years old and, given the pace of innovation in ML, there may be more up-to-date resources out there. The basics won’t change though, but for example I’d have liked some chapters on more advanced topics such as NLP, transformers, and reinforcement learning (but I guess they deserve a book of their own).
Paolo Perrotta is a great author - he wrote one of my favourite Ruby books: Metaprogramming Ruby!
I never read it, but I saw a talk on Git by him once and it was great.
I tried the Dockyard tutorials at first since I didn’t want to leave Elixir-land, but soon found it easier to understand Nx/Axon after I understood the analogous libraries in Python.
I’d be happy to provide feedback on the livebooks as I progress.
Welcome @shawn_leong! And thanks for the kind words, really appreciated!
found it easier to understand Nx/Axon after I understood the analogous libraries in Python.
Indeed, knowing a bit of numpy and Keras (which are used in the book) can be really helpful to understand Nx/Axon and navigate their APIs.
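As a rough, hypothetical mapping of my own (not from the book), a few common numpy calls and their Nx counterparts:

# numpy: np.array([1, 2, 3])  ->  Nx.tensor([1, 2, 3])
# numpy: np.zeros((3, 1))     ->  Nx.broadcast(0, {3, 1})
# numpy: a @ b (matmul)       ->  Nx.dot(a, b)
# numpy: np.mean(a)           ->  Nx.mean(a)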
I didn’t have any experience in ML before reading the book, and I had never used these Python libraries either (numpy yes, but for other stuff), therefore in some cases I needed to figure out which APIs to use/compose together to reach the same result illustrated in the book, especially in the first chapters. For instance, my first implementation to initialize the weights with zeroes was:
# Given a tensor x of shape {_, n}, returns a tensor
# of shape {n, 1}, each element initialized to 0
defp init_weight(x) do
  n_elements = elem(Nx.shape(x), 1)
  Nx.tile(Nx.tensor([0]), [n_elements, 1])
end
Then, later on, I switched to this one, which I believe is more correct:
# Given a tensor x of shape {_, n}, returns a tensor
# of shape {n, 1}, each element initialized to 0
defnp init_weight(x) do
  Nx.broadcast(Nx.tensor([0]), {elem(Nx.shape(x), 1), 1})
end
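For reference, here is the quick sanity check I’d expect (the x below is just an example tensor, and since init_weight/1 is private it has to be called from inside the module):

x = Nx.iota({5, 3})    # example 5x3 tensor
w = init_weight(x)
Nx.shape(w)            #=> {3, 1}
Nx.to_flat_list(w)     #=> [0, 0, 0]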
I’d be happy to provide feedback on the livebooks as I progress.
I started reading the book and am following along with your livebooks. I have only just started, so I am only at Chapter two at the moment.
By evaluating your code for the Linear Regression example, I noticed that it always goes through all the iterations, whereas the Python code in the book stops when the error is no longer smaller than the previous error.
By using a reduce_while instead of a reduce in the train function, you could mimic that behaviour. Here is how I changed it:
def train(x, y, iterations, lr) when is_list(x) and is_list(y) do
  Enum.reduce_while(0..iterations, 0, fn i, w ->
    current_loss = loss(x, y, w)
    IO.puts("Iteration #{i} => Loss: #{current_loss}")

    cond do
      loss(x, y, w + lr) < current_loss -> {:cont, w + lr}
      loss(x, y, w - lr) < current_loss -> {:cont, w - lr}
      true -> {:halt, w}
    end
  end)
end
It will then stop at the 184th iteration, just like in the book.
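For context, train/4 relies on a loss/3 helper that isn’t shown above; a minimal sketch of it, assuming the book’s chapter 2 setup (predictions are x * w, the loss is the mean squared error), could look like this:

# Hypothetical helpers assumed by train/4 above, following
# the book's chapter 2 linear regression (no bias term)
defp predict(x, w), do: Enum.map(x, &(&1 * w))

defp loss(x, y, w) do
  x
  |> predict(w)
  |> Enum.zip(y)
  |> Enum.map(fn {prediction, label} -> (prediction - label) ** 2 end)
  |> then(&(Enum.sum(&1) / length(&1)))
end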
Hey man. I’d tried ML before that, and it looks like this book was the only one that made sense to me. Others were a lot more complicated and math-heavy.
Learning Elixir now, and I am so happy that you created such a repo. Amazing job!
I’m only on chapter 3, but was trying to use Nx early on, perhaps too early.
I’m getting some fairly different values between the straight Elixir you wrote and the numpy code in the book. If you have any advice or spot what’s off, it would be very appreciated!
Hi @kenichi
thanks for the kind words and sorry for the late reply.
I just tried your implementation: I copy-pasted the Nx version at the end of the original livebook (see the C3Test module) and I’m getting the exact same values as the non-Nx version
I’m far from being an expert, but I’ll give it a try and reply to some of your questions:
How to save a trained model? Is there any language-agnostic standard format?
There are probably multiple ways to do that.
A simple approach is to dump the weights obtained from the training into a file (see Nx.serialize/2); they should be pretty much portable and language/framework-agnostic, but of course the model implementation must be the same. Basically, you need to keep your model definition as code around, and then you can just load the weights and call the predict function.
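A minimal sketch of that idea (trained_params and the file name are hypothetical):

# Persist the trained weights to disk...
binary = Nx.serialize(trained_params)
File.write!("weights.nx", binary)

# ...and later, with the same model definition as code around,
# load the weights back and run predictions
params =
  "weights.nx"
  |> File.read!()
  |> Nx.deserialize()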
Another possible way is to convert the trained model (including the weights) to the ONNX format. In the elixir-nx ecosystem you can use the AxonOnnx library for that.
How to load a pre-trained model written in another language?
Also in this case, there are several possibilities depending on the model.
First thing, maybe you can check if the model is already available via Bumblebee; that would be the easiest way.
If not, you can check if it is possible to export the model to the ONNX format, and then you can try to import it into Axon via the aforementioned AxonOnnx.
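The import side could look roughly like this (the file path is hypothetical; check the AxonOnnx docs for the exact options):

# Import an ONNX model as an Axon model plus its parameters
{model, params} = AxonOnnx.import("path/to/model.onnx")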
Worst case scenario, you have only the weights at hand and therefore need to re-implement the model with Axon; in that case you can check the Bumblebee models for inspiration.
Hi @NickGnd thanks for checking! I carried on into further chapters and haven’t gone back to check, but I have a suspicion that :f64 vs. :f32 might be the thing. I will check out your gist and reply back; might be a bit though
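If :f64 vs. :f32 is indeed the culprit, one quick thing to try could be pinning the type explicitly when building the tensors, since Nx defaults floats to :f32 while numpy defaults to float64 (the values below are just an example):

# Force double precision to match numpy's default float64
t = Nx.tensor([1.0, 2.0, 3.0], type: :f64)
Nx.type(t)
#=> {:f, 64}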
You beat me to it. Haha. I am slowly going through the book in both F# and Elixir, using notebooks for each language. I’ll take a look at your implementation. Thanks for posting it!