I’ve been interested in Artificial Intelligence (AI) since my earliest days in programming. Around 1970, I even got a chance to visit Stanford’s AI lab. I’ve also sat in on a few discussions and presentations over the years. However, because I have no real expertise in the matter, anything I suggest about it should be considered science fiction.
That said, I’d like to reiterate some notions about how Elixir et al might be well positioned to take part in the field’s advancement. First, however, I’d like to offer some historical background.
Background
Traditional AI was mostly dominated by symbolic computation, although researchers have certainly tried other approaches (e.g., genetic algorithms, neural networks, pattern recognition).
In any case, most of the effort went into developing ways to encode and process (i.e., reason about) concepts. As a result, the code for AI systems could also be reasoned about; indeed, many systems were able to “show their work”.
More recent developments have been largely dominated by large language models (LLMs, aka “auto-completion on steroids”). Indeed, much of the public perception of AI is based on this approach. As I understand it, LLMs are basically humongous neural networks, trained via back-propagation on enormous amounts of data (and computation).
This approach is remarkably effective at some tasks, but not so great at others. For example, there are annoying issues such as “hallucinations”. So, although huge amounts of effort (both human and machine) are being invested, the jury is still out on the approach.
One issue which annoys me (as a programmer) is that the models are basically impenetrable piles of weights and the like. Another problem is that the underlying neuron model is quite naive, compared to the ways that biological neurons (e.g., in the neocortex) actually operate. So, I wonder whether neuroscience might be able to offer some clues.
Jeff Hawkins, Numenta, etc.
Jeff Hawkins et al have been studying the neocortex for decades, largely in an effort to reverse engineer its operation. Jeff has given a number of talks about his work and ideas; I’ve managed to attend some in person and watched videos of others. I’ve also read his two popular books, which I found amazingly accessible and thought-provoking:
“On Intelligence” presents Jeff’s Memory-prediction framework, which basically contends that memory, prediction, and recognition are intertwingled brain activities.
“A Thousand Brains” goes over the basics of cortical neuroscience, paying particular attention to the structure and function of cortical columns.
In a nutshell, the human neocortex is organized into several layers of neurons, performing differing but related functions. Sets of neurons form minicolumns, containing up to ~100 neurons each. Sets of 50-100 minicolumns, in turn, form each cortical column. All told, the neocortex contains ~20 billion neurons, but only ~150K cortical columns.
Elixir et al
At this point, you may be asking what any of this has to do with Elixir et al. I’m delighted that you asked (:-).
It strikes me that the BEAM’s design might make it a good match for emulating the neocortex. Distribution, failsoft operation, message handling, and soft real-time support would all be useful. Elixir and its libraries, in turn, provide support for data structures, documentation, lazy evaluation, metaprogramming, Pub/Sub, and so forth. The evolving work on data typing might also turn out to be useful.
So, I’d encourage folks here to pay attention to Jeff’s work and think about possible architectures, constraints, etc. For example, should an Elixir process emulate a column, a minicolumn, or a neuron? Is there a way to handle gazillions of messages without totally bogging down the BEAM? Could spare cycles on (say) cell phones and/or laptops be used to crowd source computation?
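For concreteness, here’s a minimal (and quite hypothetical) sketch of the process-per-minicolumn option, written as an Elixir GenServer. Each minicolumn accumulates incoming “spike” messages and, once past a threshold, fires attenuated spikes at its downstream peers; a Registry (here, Cortex.Registry) maps integer ids to processes. All of the names, thresholds, and message shapes below are invented for illustration:

```elixir
defmodule Cortex.Minicolumn do
  use GenServer

  ## Client API

  # Start a minicolumn process, registered under its integer id.
  def start_link(id) do
    GenServer.start_link(__MODULE__, id, name: via(id))
  end

  # Deliver a spike asynchronously; casts keep senders from blocking,
  # which matters if gazillions of messages are in flight.
  def spike(id, strength), do: GenServer.cast(via(id), {:spike, strength})

  # Wire this minicolumn to a set of downstream minicolumns.
  def connect(id, target_ids), do: GenServer.cast(via(id), {:connect, target_ids})

  defp via(id), do: {:via, Registry, {Cortex.Registry, id}}

  ## Server callbacks

  @impl true
  def init(id) do
    # State: id, accumulated activation, and downstream minicolumn ids.
    {:ok, %{id: id, activation: 0.0, targets: []}}
  end

  @impl true
  def handle_cast({:connect, target_ids}, state) do
    {:noreply, %{state | targets: target_ids}}
  end

  def handle_cast({:spike, strength}, state) do
    activation = state.activation + strength

    if activation >= 1.0 do
      # “Fire”: pass an attenuated spike downstream, then reset.
      Enum.each(state.targets, &spike(&1, 0.5 * activation))
      {:noreply, %{state | activation: 0.0}}
    else
      {:noreply, %{state | activation: activation}}
    end
  end
end
```

Given a started Registry (`Registry.start_link(keys: :unique, name: Cortex.Registry)`), one could spawn a couple of minicolumns, connect(1, [2]), and spike(1, 1.0). Casts are used throughout, so a flood of spikes won’t block the senders; whether the BEAM’s schedulers could keep up at neocortex scale is exactly the open question above.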
(ducks)
-r
P.S. This isn’t the first time I’ve posted about this general area. For example: