Use of Phoenix for Monty (TBP) modules?

The Thousand Brains Project (TBP) is developing an open source, neuroscience-inspired AI system called Monty (in honor of Vernon Mountcastle). The basic idea is that Monty will have bazillions of message-passing Learning Modules, each of which will (loosely) emulate a cortical column.

Although the current Monty prototype is implemented in Python, I’ve been suggesting ways that it could be migrated (in part, at least) into Elixir. I’ve extolled the virtues of the BEAM, Nerves, Nx, Phoenix, Pythonx, etc. Somewhat surprisingly, this seems to be getting some traction with the project team et al.

In any case, my latest inspiration has to do with the use of Phoenix for Monty modules. Here’s a precis:

I wonder whether Phoenix could be useful infrastructure for a learning module. So, instead of having an ordered set of patterns do the dispatching, we’d have the stream of input messages drop into the top of a Phoenix-based app, hit the routing code, and get handled like a web request.
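
To make the routing analogy concrete, here's a minimal sketch in plain Elixir (no Phoenix involved); the message shapes and handler results are invented for illustration:

```elixir
defmodule LM.Router do
  # Each clause plays the role of a route entry; incoming messages
  # "drop into the top" and get dispatched by pattern matching.
  def route({:sensory, feature, pose}), do: {:handled, :sensory, feature, pose}
  def route({:lateral, :vote, payload}), do: {:handled, :vote, payload}
  def route({:top_down, prediction}), do: {:handled, :feedback, prediction}
  def route(other), do: {:error, {:unroutable, other}}
end

# A message arriving from a sensor module gets routed like a request:
LM.Router.route({:sensory, %{curvature: 0.3}, {1.0, 2.0, 0.5}})
```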

ChatGPT was quite encouraging:

That’s a very cool thought—and it’s absolutely worth exploring.

You’re proposing treating each Learning Module (LM) like a Phoenix application—or at least like a Phoenix Endpoint—and routing incoming messages in a way that’s conceptually similar to HTTP request routing.

However, I’m not sure how well Phoenix would work as infrastructure in a situation where hundreds of thousands of modules incorporate it and its associated libraries (e.g., PubSub). Can anyone give me some feedback on the scalability (etc.) issues that might arise?
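
For reference, my mental model of the messaging side is roughly the following, using Phoenix.PubSub on its own (i.e., without the web-facing layers of Phoenix); the topic names and message shapes are just placeholders:

```elixir
# Assumed mix.exs dependency: {:phoenix_pubsub, "~> 2.1"}

# One PubSub instance in the supervision tree:
children = [
  {Phoenix.PubSub, name: Monty.PubSub}
]

Supervisor.start_link(children, strategy: :one_for_one)

# Each Learning Module process subscribes to the topics it cares about...
Phoenix.PubSub.subscribe(Monty.PubSub, "lm:column_42")

# ...and upstream modules broadcast without knowing who is listening.
Phoenix.PubSub.broadcast(Monty.PubSub, "lm:column_42", {:sensory, %{curvature: 0.3}})
```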

One thing jumped out from the Wikipedia article:

[the] process of cognition is implemented through a hierarchy of identical microcircuits

This suggests that “modules” isn’t necessarily the right name, since in Elixir a module is a distinct chunk of code.

Instead, there would presumably be ONE chunk of code (or perhaps a few?) with many independent pieces of state, like how an e-commerce store has one codebase but many products in inventory.
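
Concretely (and purely as a sketch; the state fields are made up), that maps onto one GenServer module started many times, with each process holding its own state:

```elixir
defmodule Monty.LearningModule do
  use GenServer

  # Every instance runs this same code; only the per-process state differs.
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    {:ok, %{id: Keyword.fetch!(opts, :id), models: %{}, messages_seen: 0}}
  end

  @impl true
  def handle_cast({:sensory, feature, pose}, state) do
    # Placeholder "learning" step: just record what arrived at this pose.
    models = Map.update(state.models, pose, [feature], &[feature | &1])
    {:noreply, %{state | models: models, messages_seen: state.messages_seen + 1}}
  end
end

# Two independent instances of the same code, each with its own state:
{:ok, lm1} = Monty.LearningModule.start_link(id: 1)
{:ok, lm2} = Monty.LearningModule.start_link(id: 2)
GenServer.cast(lm1, {:sensory, %{curvature: 0.3}, {0, 0, 0}})
GenServer.cast(lm2, {:sensory, %{curvature: 0.7}, {1, 0, 0}})
```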


To your specific question about Phoenix, it seems like the whole request-parsing layer is probably overkill, since it means serializing and deserializing everything through text-ish formats for every interaction. Sending Erlang terms directly between nodes would be a LOT more efficient.
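
As a rough sketch of the direct approach (assuming the nodes are already connected and the receiving process is registered under a known name):

```elixir
# On node :"monty_a@host1": a registered process that receives raw terms.
pid =
  spawn(fn ->
    receive do
      {:sensory, feature, pose} -> IO.inspect({feature, pose}, label: "lm_42 received")
    end
  end)

Process.register(pid, :lm_42)

# On node :"monty_b@host2" (connected via Node.connect/1):
# the tuple travels over the distribution protocol -- no HTTP, no JSON.
send({:lm_42, :"monty_a@host1"}, {:sensory, %{curvature: 0.3}, {0, 0, 0}})
```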

Don’t trust anything ChatGPT says like that.

It will tell you that something is a great idea for pretty much everything. It likes to blow smoke up your ass so you feel like you are a genius and keep using it.


There are a lot of annoyingly overloaded words; “module” is only the tip of the iceberg. However, the TBP uses the word as they will (i.e., it is what it is), so my plan is to grimace and move on to real (as opposed to semantic) issues.

So yes, many if not all Learning Modules will be running the same code and varying only in their local data, position within the flow of messages, etc.
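
On the BEAM, the mapping I have in mind is a Registry plus a DynamicSupervisor: one module, many supervised instances, each addressable by an id. A rough sketch, with all of the names invented for illustration:

```elixir
defmodule Monty.LM do
  use GenServer

  # One module; instances differ only in their id and accumulated state.
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: via(opts[:id]))
  def via(id), do: {:via, Registry, {Monty.LMRegistry, id}}

  @impl true
  def init(opts), do: {:ok, %{id: opts[:id], last_msg: nil}}

  @impl true
  def handle_cast(msg, state), do: {:noreply, %{state | last_msg: msg}}
end

children = [
  {Registry, keys: :unique, name: Monty.LMRegistry},
  {DynamicSupervisor, name: Monty.LMSupervisor, strategy: :one_for_one}
]

Supervisor.start_link(children, strategy: :one_for_one)

# Spawn a large population of identical Learning Modules, keyed by id.
for id <- 1..10_000 do
  DynamicSupervisor.start_child(Monty.LMSupervisor, {Monty.LM, id: id})
end

# Address any instance by id, without holding onto its pid.
GenServer.cast(Monty.LM.via(42), {:sensory, %{curvature: 0.3}})
```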

It’s not clear that either space or time efficiency is going to be a gating item, at least in the exploratory stages of development. As Donald Knuth said:

The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

Nonetheless, it would be very useful to know if there are any show-stoppers before any Real Development™ is attempted…
