Thanks for pointing out that podcast, José - very enlightening as to the goals of Nx.
FWIW I’d classify myself as a “maybe” for the appropriate research project - in particular, if I do another research project with non-deep neural nets, I think that would be an excellent starting point. I have a planned follow-up to spectral-neural-nets, so maybe as a first step I will implement the FFT in Nx and see how it goes.
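As a starting point, even before a proper FFT, a naive O(N²) DFT can be expressed directly with plain Nx tensor ops by splitting the transform into real and imaginary parts. This is only a toy sketch for a real-valued input; the module and function names here are made up for illustration:

```elixir
defmodule NaiveDFT do
  # Naive O(N^2) discrete Fourier transform of a real-valued 1-D tensor.
  # X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N), returned as {real, imag} parts.
  def dft(x) do
    n = Nx.size(x)
    k = Nx.iota({n, 1})
    j = Nx.iota({1, n})

    # angles[k][j] = -2*pi*k*j/N, built by broadcasting {n,1} * {1,n} -> {n,n}
    angles = Nx.multiply(Nx.multiply(k, j), -2.0 * :math.pi() / n)

    re = Nx.dot(Nx.cos(angles), x)
    im = Nx.dot(Nx.sin(angles), x)
    {re, im}
  end
end

# Example: DFT of a unit impulse, e.g. NaiveDFT.dft(Nx.tensor([1.0, 0.0, 0.0, 0.0]))
```

An actual FFT would replace the dense {n, n} matrix with the usual recursive butterfly structure, but the naive version is a handy correctness baseline.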
I’ll be sure to post on the forums with some thoughts if I end up doing so.
I have found Nx a powerful way to take various software pieces from my dissertation and make them distributed.
My work deals with the physics of realistic complex systems, and I need to solve large sets of coupled stochastic differential equations. Having Elixir + OTP + Nx will be a game-changer for me.
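To give a feel for that kind of workload: a single Euler–Maruyama step for a system dX = a(X) dt + b(X) dW fits in a few lines of Nx. This is a toy sketch, not production code - it assumes the Nx.Random module from recent Nx releases, and the module/function names are invented:

```elixir
defmodule SDE do
  # One Euler-Maruyama step for dX = drift(X) dt + diffusion(X) dW.
  # `x` may hold a whole batch of coupled state variables; the Wiener
  # increment dW ~ Normal(0, sqrt(dt)) is drawn elementwise.
  def euler_maruyama_step(x, drift, diffusion, dt, key) do
    {dw, key} = Nx.Random.normal(key, 0.0, :math.sqrt(dt), shape: Nx.shape(x))

    x_next =
      x
      |> Nx.add(Nx.multiply(drift.(x), dt))
      |> Nx.add(Nx.multiply(diffusion.(x), dw))

    {x_next, key}
  end
end

# Example: 1000 Ornstein-Uhlenbeck paths (drift -x, constant diffusion 0.1).
key = Nx.Random.key(42)
x0 = Nx.broadcast(1.0, {1000})
{_x1, _key} = SDE.euler_maruyama_step(x0, &Nx.negate/1, fn _x -> 0.1 end, 0.01, key)
```

Because the step is just tensor arithmetic, the same code could run on an EXLA GPU backend, and OTP takes care of distributing independent realizations across nodes.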
My take, based on your description and the video presentation at Lambda Days, is that Nx has broader potential than machine learning or data science alone.
Let’s consider one factor that drives adoption of numerical libraries: analysis and visualization ecosystems. One of the reasons Python has been so successful is the marriage between libraries such as NumPy, pandas, and Matplotlib. Numerical computing and plotting tend to go well together. Maybe integrating a good plotting tool through an Elixir library can help do the same for Nx.
A good route may be to take an existing open source plotting library that has been stress-tested, use Elixir to simplify access to its API, and provide it as a front-end that people may use to display results computed with Nx. CERN’s ROOT is a good example; its binaries range between 50 MB and 100 MB (similar to Matplotlib).
I think the interesting thing is that Elixir has a decent deployment story, which neither Julia nor Python has. We are transitioning into an era where ML is coming out of the research and academic phase and moving into deployment outside of places like Google and Facebook, so there is a real opportunity here.
tl;dr - Nx-based implementation of A Thousand Brains?
I’ve been following Jeff Hawkins’ work for a couple of decades. He thinks well, works hard, and writes clearly. His introductory book, On Intelligence (2005), lays out his goals and motivations in studying how the neocortex works. It also gives an overview of Hierarchical Temporal Memory (HTM) systems.
Quite recently, Jeff released A Thousand Brains, which lays out Numenta’s progress and current theories in this area. The basic notion is that the human neocortex is composed of something like 150K columns of neurons. Each of these stores contextual and descriptive information on hundreds of items. Collectively, the columns “vote” on whether incoming input matches a given item.
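That voting idea maps quite naturally onto tensor operations. As a toy sketch (all names, shapes, and the scoring scheme are invented for illustration, not Numenta’s actual model): each column scores the input against its stored item prototypes, and the per-item scores are pooled across columns to pick a winner.

```elixir
defmodule Vote do
  # columns: {num_columns, num_items, dim} - each column's item prototypes.
  # input:   {dim}                         - the current observation.
  # Returns the index of the item the columns collectively "vote" for.
  def winner(columns, input) do
    columns
    |> Nx.dot([2], input, [0])   # per-column, per-item scores: {num_columns, num_items}
    |> Nx.sum(axes: [0])         # pool the votes across columns: {num_items}
    |> Nx.argmax()
  end
end
```

A real HTM system uses sparse binary representations rather than dense dot products, but the pooling-across-columns structure is the part that seems well suited to Nx.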
While reading the book, I’ve been wondering whether Nx might have a useful role to play in moving this approach forward. There are HTM implementations in several languages, including C++, Clojure, Java, and Python. However, none of these languages has Elixir’s built-in support for distributed processing, pattern matching, etc. With the advent of Nx, it might be time to think about offering a massively scalable implementation.
Thanks for posting - just read 1K Brains. Despite Hawkins’ incontinent proselytizing of silicon-valley “humanism”, I’m a fan of his technical ideas.
But I don’t know if they work in practice. NuPIC was open-sourced in 2013, yet I have not heard of any success stories by now. Numenta’s open source code hasn’t been updated in a couple of years, and it looks like the HTM meetup community is not active.
While I haven’t seen many application stories for HTM, I’ve seen lots for GPT-3. (some examples)
I wonder how HTM or GPT-3 could be applied to meat-and-potatoes tooling in the Elixir ecosystem: telemetry alerts, optimizing test execution, property-based testing, etc.