If it is of any help, I have been deep into AI for the last two years and I still can’t understand most Hugging Face projects. But I think that’s ok: I also don’t understand most of what is on GitHub, except for things written in certain languages, like Elixir or Erlang.
Therefore, the book will give you similar context. It is unlikely you will fully grasp any Hugging Face model, but there are certain models, architectures, and problems that will make a lot more sense once you read it.
Unfortunately there is no quantization at the moment, but we are working on it. You can fine-tune models, and one of the chapters discusses it, but you will most likely need a GPU, and that’s one particular area where we want to further improve the UX.
I have the “Practical Deep Learning” course from fast.ai on my radar too. There have been some mentions on the forum here of people recommending it as a foundation course.
I’m curious if anyone can comment on how this course and Sean’s book relate to each other, in the context of someone who is proficient in Elixir but has no knowledge of ML/AI (things might be different if you’re coming from a Python background). Is one of the two a better starting point? Will they teach similar things?
I have been experimenting with ML over the years and have completed a few client projects in Python with TensorFlow, in Swift with Create ML, and in Wolfram Language with Mathematica. So I was wondering: are there posts and/or articles for making sense of Elixir’s ML package ecosystem?
Reads like a breeze and also goes deep (looking at you, Chapter 6 ;-). Thank you, I love the editing and overall writing style of the book, and the examples you picked to explain things.
I have finished Part 1 of Howard’s course in multiple versions and made it halfway through Part II. They really cover the same material, but in reverse order. Moriarty starts with the primitive foundations and builds up to complex applications, whereas Howard does the reverse: he has you create an app with his fastai library within 5 to 10 minutes, then works backwards until you are creating your own custom deep learning models. Howard has very convincing reasons for this difference in pedagogy, and I think if you have never learned or used deep learning it’s the best approach. Note that “Practical Deep Learning for Coders” covers classical machine learning, that is, methods that don’t use deep learning, only tangentially.
So whether you should first learn deep learning in Python and then relearn everything in Elixir is a matter of time and preference. Howard, in my humble opinion, is really one of the greatest teachers of programming alive today, so it’s hard not to recommend his course when given the opportunity.