I was actually thinking about the same thing recently. Thanks to the power of Elixir macros, it would not be that hard.
I am more interested in Core ML and the Apple M1's Neural Engine cores. That may be interesting stuff.
Apple recently released an optimized TensorFlow 2.4 (Accelerating TensorFlow Performance on Mac — The TensorFlow Blog), so I believe those changes should be upstreamed in the next few months…
And you can now rent those in the cloud, by the way: M1 Mac mini in the cloud now available for 12 cents/hour - 9to5Mac
I really enjoyed listening
I think some homage to Pelemay is due too. The team at Fukuoka University has been working on similar ideas for a while.
Definitely Kip! In fact I recently created a Pelemay group here on the forum
I’m not sure why, but I thought Pelemay was primarily intended to make use of GPUs in a more general context, to give Elixir an overall speed boost since GPUs have a ton of cores. I see now that it is ML-focused too. I wonder what the differences are between Nx and Pelemay, and whether there might be some sort of collaboration in the future (I’ve updated my post above, btw).
Admittedly, I was half asleep and slightly distracted when listening to the Nx podcast too, and thought it was more of a general speed-up of number-crunching operations as well (in my defence, I didn’t get any sleep last night).
Is it automatic differentiation in Elixir?
CALLED IT
These cores are, however, not suited to performing more generic operations, so giving Elixir an “overall speed-boost” is not something that happens automagically. It’s not going to speed up string concatenation, I/O, regular expressions, JSON parsing, data serialization, and things like that — and those are the bulk of the operations a traditional CRUD web app does. The GPU may have a ton of cores, but they can’t do any of these things, at least not in a performant manner.
But what you can do is optimize certain algorithms: hashing, cryptography, any signal processing (image/audio included), a lot of numerical methods, and of course machine learning / AI.
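To make that concrete, here is a minimal sketch of the kind of numerical definition Nx is expected to support, based on what the podcast described (a `defn` macro restricted to tensor operations so the whole function can be compiled for a GPU backend). The exact API is my assumption until the source code drops:

```elixir
defmodule MyMath do
  import Nx.Defn

  # A "numerical definition": only tensor operations are allowed inside,
  # which is what lets the compiler target a GPU/TPU backend instead of
  # running element-by-element on the BEAM.
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

# Hypothetical usage once the library is available:
# MyMath.softmax(Nx.tensor([1.0, 2.0, 3.0]))
```

Note this is exactly the kind of workload from the list above (pure number crunching), not string handling or I/O — those stay on the BEAM as usual.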
Thanks Hubert
I had always thought that with Erlang processes being so lightweight, they could take more advantage of GPU cores than other languages that use much heavier threads for parallelism and concurrency.
That’s still a nice win. Out of curiosity, would this be the same for any language that has some sort of GPU adaptation, or are there unique advantages to Erlang/Elixir (because Erlang processes are so small, for instance)?
I would actually think more in terms of data-processing pipelines, which can be orchestrated very well in Elixir. Think Streams or GenStages that do things, where some of those things could benefit from being executed on a GPU. Imagine a medical system that takes in files from some scanning machine, runs them through auto-cropping, then detects cancer cells, then produces an enhanced version, then uploads it to some repository for doctors to analyse. Some of these steps can be accelerated by a GPU and some can’t, but having it all in the same nice Stream or GenStage pipeline definitely makes programmers’ lives easier.
That’s me actually guessing how this can be used. I have yet to look into the details of Nx — I will when the source code drops.
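The medical pipeline above could be sketched with plain Streams, where the GPU-heavy step is just one function in the chain. All the stage functions here are hypothetical placeholders — the point is only the shape of the pipeline:

```elixir
defmodule ScanPipeline do
  # Hypothetical pipeline: only detect_cancer_cells/1 would dispatch
  # work to the GPU; the other stages stay on the CPU or do I/O.
  def run(paths) do
    paths
    |> Stream.map(&File.read!/1)           # I/O: read scan files
    |> Stream.map(&auto_crop/1)            # CPU: plain image cropping
    |> Stream.map(&detect_cancer_cells/1)  # GPU-accelerated inference
    |> Stream.map(&enhance/1)              # CPU or GPU, depending on the op
    |> Stream.each(&upload/1)              # I/O: push to the doctors' repo
    |> Stream.run()
  end

  # Placeholder implementations so the module compiles.
  defp auto_crop(image), do: image
  defp detect_cancer_cells(image), do: image
  defp enhance(image), do: image
  defp upload(_image), do: :ok
end
```

If you needed back-pressure between stages (say, the scanner produces files faster than the GPU stage can consume them), swapping the Streams for GenStage producers/consumers keeps the same overall shape.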