Notions on AI and Elixir, redux

I’ve been interested in Artificial Intelligence (AI) since my earliest days in programming. Around 1970, I even got a chance to visit Stanford’s AI lab. I’ve also sat in on a few discussions and presentations over the years. However, because I have no real expertise in the matter, anything I suggest about it should be considered science fiction.

That said, I’d like to reiterate some notions about how Elixir et al might be well positioned to take part in the field’s advancement. First, however, I’d like to offer some historical background.

Background

Traditional AI was mostly dominated by symbolic computation, although researchers have certainly tried other approaches (e.g., genetic algorithms, neural networks, pattern recognition).

In any case, most of the effort went into developing ways to encode and process (i.e., reason about) concepts. As a result, the code for AI systems could also be reasoned about; indeed, many systems were able to “show their work”.

More recent developments have been largely dominated by large language models (LLMs, aka “auto-completion on steroids”). Indeed, much of the public perception of AI is based on this approach. As I understand it, LLMs are basically humongous neural networks, trained using back-propagation, big data (and computation), etc.

This approach is remarkably effective at some tasks, but not so great at others. For example, there are annoying issues such as “hallucinations”. So, although huge amounts of effort (both human and machine) are being invested, the jury is still out on the approach.

One issue that annoys me (as a programmer) is that the models are basically impenetrable piles of weighting values, etc. Another problem is that the underlying neuron model is very naive, compared with the ways that real neurons (e.g., in the neocortex) operate. So, I wonder whether neuroscience might be able to offer some clues.

Jeff Hawkins, Numenta, etc.

Jeff Hawkins et al have been studying the neocortex for decades, largely in an effort to reverse-engineer its operation. Jeff has given a number of talks about his work and ideas; I’ve managed to attend some in person and watched videos of others. I’ve also read his two popular books, which I found amazingly accessible and thought-provoking:

“On Intelligence” presents Jeff’s Memory-prediction framework, which basically contends that memory, prediction, and recognition are intertwingled brain activities.

“A Thousand Brains” goes over the basics of cortical neuroscience, paying particular attention to the structure and function of cortical columns.

In a nutshell, the human neocortex is organized into several layers of neurons, performing differing but related functions. Sets of neurons form minicolumns, containing up to ~100 neurons each. Sets of 50-100 minicolumns, in turn, form each cortical column. All told, the neocortex contains roughly 20 billion neurons, but only ~150K cortical columns.
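To make that containment hierarchy concrete, here’s a hedged Elixir sketch; the module names and fields are purely illustrative, and the counts are just the rough figures above.

```elixir
# Illustrative only: the rough containment hierarchy described above.
defmodule Neuron do
  defstruct [:id, :state]
end

defmodule Minicolumn do
  defstruct id: nil, neurons: []        # up to ~100 neurons each
end

defmodule CorticalColumn do
  defstruct id: nil, minicolumns: []    # ~50-100 minicolumns each
end

# The neocortex as a whole: ~150K cortical columns.
neocortex = for id <- 1..150_000, do: %CorticalColumn{id: id}
```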

Elixir et al

At this point, you may be asking what any of this has to do with Elixir et al. I’m delighted that you asked (:-).

It strikes me that the BEAM’s design might make it a good match for emulating the neocortex. Distributed, fail-soft operation, message handling, and soft real-time support would all be useful. Elixir and its libraries, in turn, provide support for data structures, documentation, lazy evaluation, metaprogramming, Pub/Sub, and so forth. The evolving work on data typing might also turn out to be useful.

So, I’d encourage folks here to pay attention to Jeff’s work and think about possible architectures, constraints, etc. For example, should an Elixir process emulate a column, a minicolumn, or a neuron? Is there a way to handle gazillions of messages without totally bogging down the BEAM? Could spare cycles on (say) cell phones and/or laptops be used to crowd source computation?
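As a conversation starter, here’s a minimal sketch of the process-per-column option, with everything (module name, message shapes, the naive map-based memory) invented for illustration: each column is a GenServer that remembers which pattern followed which, and “predicts” by lookup.

```elixir
defmodule ColumnProcess do
  @moduledoc "One BEAM process per cortical column (one option among those above)."
  use GenServer

  # Columns are registered by id in a Registry named ColumnRegistry,
  # assumed to have been started elsewhere:
  #   Registry.start_link(keys: :unique, name: ColumnRegistry)
  defp via(id), do: {:via, Registry, {ColumnRegistry, id}}

  def start_link(id), do: GenServer.start_link(__MODULE__, id, name: via(id))
  def sense(id, pattern), do: GenServer.cast(via(id), {:sense, pattern})
  def predict(id, pattern), do: GenServer.call(via(id), {:predict, pattern})

  @impl true
  def init(id), do: {:ok, %{id: id, memory: %{}, last: nil}}

  @impl true
  def handle_cast({:sense, pattern}, %{last: last, memory: memory} = state) do
    # Memory and prediction intertwined: remember which pattern followed which.
    memory = if last, do: Map.put(memory, last, pattern), else: memory
    {:noreply, %{state | memory: memory, last: pattern}}
  end

  @impl true
  def handle_call({:predict, pattern}, _from, state) do
    {:reply, Map.get(state.memory, pattern, :unknown), state}
  end
end
```

Spawning ~150K such processes is well within the BEAM’s comfort zone; the harder question (per the message-volume worry above) is the traffic between them.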

(ducks)

-r

P.S. This isn’t the first time I’ve posted about this general area.


Lest there be any confusion, I suspect that Nx is likely to play a key role in any Elixir-based neocortex modeling system. I also realize that most “Elixir-based AI” enthusiasts probably hang out in the Nx Forum. All of this makes @AstonJ’s decision to move my post quite understandable.

However, the topics of AI in general and neocortex modeling in particular are not Nx-specific. More to the point, a useful system might well need help from Broadway, Phoenix Presence and Pub/Sub, etc. So, at least from my perspective, all approaches are worth considering and discussing.

-r


I reckon it might be worth keeping all AI/ML-related threads in this section, Rich, to make it easier for people to find and focus on threads on this topic (I think this will become more important as the community and this area of Elixir grow).

Wonder whether we should add something to that effect to the section name as well? Nx/ML Forum or Nx/AI Forum perhaps? Or just leave it as it is for now?


I’m not sure that any modification to the forum’s name is needed at the moment, given that there aren’t any other framework forums (AFAIK) that are more closely tied to AI-related topics. However, as I’m not a regular contributor here, I don’t think my opinion counts for much.

Incidentally, my own take is that Nx brings a sorely needed set of data structuring capabilities to the BEAM, along with the very cool GPU support, etc. As a recovering scientific programmer, I applaud the ability to define and use “typed data in multiple, named dimensions”. Lists and Maps are fine for lots of uses, but arrays (erm, tensors) are occasionally what is needed, even when there isn’t much math involved.
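For instance, a tiny illustration using Nx’s named-axis support:

```elixir
# A 2-D tensor with a declared element type and named axes.
t = Nx.tensor([[1, 2, 3], [4, 5, 6]], type: {:s, 64}, names: [:rows, :cols])

# Axes can then be referenced by name rather than by position.
Nx.sum(t, axes: [:rows])
#=> a tensor of shape [cols: 3], containing [5, 7, 9]
```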

-r


Regarding the section naming, the other side of the coin is that Nx is not ML- or AI-specific 🙂


@Rich_Morin Maybe this book is tangentially related to this topic: Handbook of Neuroevolution Through Erlang (SpringerLink).

It’s been quite a long time since I read it, and it predates even Elixir. But perhaps it’s worth exploring how Nx affects the concepts presented there.


3 posts were split to a new topic: What can Elixir Nx be used for? (Split thread)

OK, let’s keep it as the Nx Forum, and just move threads here that are related to Nx, even if only tangentially.

I’ll also move the above 3 posts into a new thread, as people may wonder the same thing in future 😄

@Rich_Morin, re your thread - it’s up to you - we can leave it here if you wish, or move it back to Elixir Chat.


Your approach is fine with me; I just hope nobody in the Nx community has a problem with it.


Thank you for your interesting post.

One of the facts about Elixir is that it is a functional programming language. While you can pipe operations via |>, is this programming model, in the end, a match for neural networks, which are basically a bunch of weights?

It seems that current AI approaches are largely imperative, aren’t they? They are about applying a bunch of statistics to a bunch of data and hoping that the next word prediction that comes out is useful.

There seems to be inherently very little that the BEAM could optimize, since the process itself is as simple as repeatedly applying an activation function to data and weights. (This may be an oversimplification.)
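In Nx terms, I mean something like this toy sketch (the shapes and the choice of sigmoid are arbitrary):

```elixir
# One dense "layer": multiply inputs by weights, add bias, squash.
layer = fn x, w, b -> x |> Nx.dot(w) |> Nx.add(b) |> Nx.sigmoid() end

x = Nx.tensor([[0.5, -0.2]])               # 1x2 input
w = Nx.tensor([[0.1, 0.4], [0.7, -0.3]])   # 2x2 weights
b = Nx.tensor([0.0, 0.1])                  # bias

x |> layer.(w, b) |> layer.(w, b)          # "repeatedly" is just more piping
```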

The magic seems to arise out of setting those weights, across billions of parameters.

Is there anything about that which is a match for the functional programming paradigm?

Again, I’m no expert, but I’d agree that “deep learning” seems to be largely about applying a bunch of statistics to a bunch of data and that the results are basically a bunch of weights. Although declarative and/or functional programming approaches may turn out to be useful to this approach, that’s not what I have in mind.

Deep learning can produce amazingly plausible results, but any problems (e.g., hallucinations) are pretty much impossible to debug. Smarter folks than me are working on this issue, but I fear that the approach has a fundamental problem: the model of the neuron is far too simplistic.

So, I’d like the biomimicry to operate at a much higher (and more accurate) level. Specifically, I’d like to pick up on Jeff Hawkins’ notions about the neocortex as an interacting network of about a hundred thousand “brains” (i.e., cortical columns).

Each of these brains recognizes, remembers, and predicts things about its inputs. The brains also communicate with each other, competing and cooperating to generate the overall result. An AI system based on this approach might actually be amenable to analysis, debugging, and experimentation. (At least, that’s my hope…)
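As a (purely hypothetical) sketch of what that competing and cooperating might look like on the BEAM, where each column process casts a vote about what it thinks it is sensing and a simple tally picks the consensus; all of the names and message shapes here are invented:

```elixir
defmodule Consensus do
  # column_pids: processes that reply {:vote, guess} to {:vote_request, from}.
  def run(column_pids, timeout \\ 100) do
    Enum.each(column_pids, &send(&1, {:vote_request, self()}))

    votes =
      for _ <- column_pids do
        receive do
          {:vote, guess} -> guess
        after
          timeout -> :abstain          # a slow column just abstains
        end
      end

    # The most common guess wins; ties are broken arbitrarily.
    votes
    |> Enum.frequencies()
    |> Enum.max_by(fn {_guess, count} -> count end)
  end
end
```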


@Rob1 - my $.02 - for LLM apps, Elixir’s strength is orchestration.


[diagram of the LLM app stack, from GitHub - a16z-infra/llm-app-stack]

For small-scale Numenta-style experiments, I think Elixir would be a great prototyping language. For production, Elixir/Nx has wrappers for C and Rust tensor-math libraries that run on the GPU.
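For example (assuming exla is among your dependencies), routing all Nx operations through the C++-based EXLA backend - which can target a GPU when one is available - is a one-line configuration:

```elixir
# config/config.exs
import Config

config :nx, default_backend: EXLA.Backend
```

The same can be done at runtime with Nx.default_backend(EXLA.Backend).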

For MLOps and production orchestration, the concurrent / distributed power of Elixir is really strong. IMHO!
