Interactive testbed for cortical modeling?

Following up on Notions on AI and Elixir, redux, I’ve been speculating about what an interactive testbed for cortical modeling might look (and act) like.

In Media for Thinking the Unthinkable, Bret Victor discusses and demonstrates how interactive, symbolic, and visual modes can act in concert to aid in thinking. If you’ve never watched this talk, please do so now (I’ll wait…).

Although realistic modeling of the neocortex isn’t currently (and may never be) practical, a spherical-cow model might still be useful as a testbed:

The spherical cow is a humorous metaphor for highly simplified scientific models of complex phenomena.

Details

The human neocortex contains roughly 16 billion neurons, but only ~150K cortical columns. So, we could model each column as a (BEAM) process. If need be, subsidiary processes could be used to handle finer details.
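As a rough sketch of the column-per-process idea (the module, field names, and activation semantics here are all hypothetical), each column could be a small GenServer holding its own state:

```elixir
defmodule CorticalColumn do
  use GenServer

  # Each cortical column is one BEAM process holding its own activation state.
  def start_link(id), do: GenServer.start_link(__MODULE__, id)

  @impl true
  def init(id), do: {:ok, %{id: id, activation: 0.0}}

  # Asynchronously add input to this column's activation.
  def stimulate(pid, amount), do: GenServer.cast(pid, {:stimulate, amount})

  # Read the current activation (e.g., for the visualization layer).
  def activation(pid), do: GenServer.call(pid, :activation)

  @impl true
  def handle_cast({:stimulate, amount}, state),
    do: {:noreply, %{state | activation: state.activation + amount}}

  @impl true
  def handle_call(:activation, _from, state),
    do: {:reply, state.activation, state}
end
```

Since BEAM processes are cheap (a few KB each), spawning ~150K of these is well within normal operating range for the VM.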

Although the neocortex isn’t rectangular, we could use rectangular data structures to aggregate overall state. For example, a 400 x 400 tensor could contain data on 160K columns. This data could then be presented visually (e.g., one pixel per column).
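As a sketch of that aggregation step (assuming Nx is available; the placeholder activations and the 400 x 400 reshape follow the figures above), per-column values could be gathered into one tensor for display:

```elixir
Mix.install([{:nx, "~> 0.7"}])

# Hypothetical: one activation value per column, as collected from the
# column processes; random values stand in for real state here.
activations = for _ <- 1..160_000, do: :rand.uniform()

frame =
  activations
  |> Nx.tensor(type: :f32)
  |> Nx.reshape({400, 400})

# `frame` can now be rendered as an image, one pixel per column.
```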

Given this visual approach, optical character recognition (OCR) might be a useful test case. Generating test data (e.g., rasterized and/or distorted renderings of fonts) would be easy, making this a good fit for supervised learning.
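Generating such data could start as simply as flipping random bits in a rasterized glyph. The 5 x 5 “T” below is a toy example; a real pipeline would rasterize actual fonts:

```elixir
# A hypothetical 5 x 5 bitmap of the letter "T".
glyph = [
  [1, 1, 1, 1, 1],
  [0, 0, 1, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 1, 0, 0]
]

# Distort a bitmap by flipping each bit with probability p.
distort = fn bitmap, p ->
  for row <- bitmap do
    for bit <- row do
      if :rand.uniform() < p, do: 1 - bit, else: bit
    end
  end
end

sample = distort.(glyph, 0.05)
```

Because the label (“T”) is known for every generated sample, this gives an endless supply of labeled training pairs.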

With appropriate support infrastructure (e.g., for recording and playback), users might be able to explore various data sets, modeling approaches, etc. (And a pony…)

In any case, that’s more than enough speculation for now. As always, helpful comments and suggestions are welcome…

Here are a few (partly opposing) thoughts that popped into my mind when considering process usage with Nx.

  1. On one hand, because there’s no isolation requirement, it would be more natural to just model each cortical column as a plain data structure, without any processes involved. BEAM processes bring some overhead with message processing, and since each process handles its mailbox serially, a cortical column process that needs to communicate with more than one peer can’t do so concurrently.

  2. On the other hand, EXLA would block the GPU device during processing anyway, so serialization would happen at some level. So we could use processes to calculate things in a “game tick” fashion, where the whole network is updated at the same time (and thus in a single batch, or at least fewer batches than the default).

  3. Processes would also allow for easier distribution of the network over a computing cluster.

  4. I’m also thinking that each column could be its own artificial neural network (or similar) computation.
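To make points 1 and 2 concrete, here is a plain-data sketch (field names and the update rule are made up): the whole network lives in one map and is updated in a single pass per “game tick”, so every column reads the same snapshot of the previous state:

```elixir
defmodule PlainNet do
  # The network is a plain map: %{id => %{activation: float, inputs: [id]}}.
  # One tick rebuilds the whole map from the previous snapshot, so update
  # order doesn't matter -- every column sees the same prior state.
  def tick(net) do
    Map.new(net, fn {id, col} ->
      input =
        col.inputs
        |> Enum.map(fn src -> net[src].activation end)
        |> Enum.sum()

      # Toy update rule: decay toward the summed input.
      {id, %{col | activation: 0.5 * col.activation + 0.5 * input}}
    end)
  end
end
```

No processes, no messages; the synchronous-update semantics fall out of building the new map from the old one.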

All of this might just be deep learning with extra steps, but the whole idea piqued my interest.

I followed this from another topic because I had been thinking about a similar idea, though mine is still naive and far from being realized, given my current major.

If I remember correctly, a cortical column is a group of neurons, and recordings from a single column can reveal extremely complex behavior emerging from its many layers of neurons. Therefore, in my original idea, I thought it would be best to set the modeling level at something more specific, such as a cell or even sections of the cell membrane (soma, axon, dendrites, and synapses).

CPU-intensive tasks, such as calculating membrane potentials or ion concentrations for the next tick, could be left to Julia or Rust. Tasks like sending a pulse to connected nodes (local or remote) when a neuron fires an action potential are something the BEAM handles well.
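A minimal sketch of that BEAM side (the threshold, weights, and reset behavior are all made up): a neuron process integrates incoming spikes and, on crossing its threshold, broadcasts to its downstream pids, which could just as well live on remote nodes:

```elixir
defmodule Neuron do
  # Integrate incoming spikes; on crossing the threshold, broadcast a spike
  # to all downstream pids and reset the potential. `send/2` works the same
  # whether the target pid is local or on a remote node.
  def loop(potential \\ 0.0, threshold \\ 1.0, targets \\ []) do
    receive do
      {:connect, pid} ->
        loop(potential, threshold, [pid | targets])

      {:spike, weight} ->
        potential = potential + weight

        if potential >= threshold do
          Enum.each(targets, fn pid -> send(pid, {:spike, 0.5}) end)
          loop(0.0, threshold, targets)
        else
          loop(potential, threshold, targets)
        end
    end
  end
end
```

The heavy numerics could then be ported out (e.g., via NIFs), while the BEAM keeps doing what it is good at: routing many small messages.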

The original trigger for the idea wasn’t a computation project in Elixir like Nx, but rather the following three observations from a biological point of view:

  • Only a small fraction of neurons are in an active state at any given time
  • The individual units are connected in a mesh structure (I didn’t understand the concept of a “graph” in programming when I was in high school)
  • The units are morphologically diverse (different types of neurons have different neurotransmitters and ion channels)
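Those three observations map naturally onto a sparse graph representation; here is a toy sketch (the type atoms, node names, and spike weight are purely illustrative):

```elixir
# The mesh as an adjacency map, with per-neuron type metadata to capture
# morphological diversity (hypothetical type atoms).
graph = %{
  n1: %{type: :pyramidal,   out: [:n2, :n3]},
  n2: %{type: :interneuron, out: [:n3]},
  n3: %{type: :pyramidal,   out: []}
}

# Sparse activity: only a few neurons are active at any moment.
active = MapSet.new([:n1])

# Neurons receiving input from the currently active set.
downstream =
  active
  |> Enum.flat_map(fn id -> graph[id].out end)
  |> MapSet.new()
```

Tracking only the active set (rather than iterating every neuron each tick) is one way the sparsity observation could pay off computationally.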

From this perspective, in my opinion, it would be better to first build a generic simulation or emulation platform; the next step would be to pick a level of detail and delve in.

Finally, I’m not a native English speaker, so please correct me if there are any factual or grammatical errors.
