Bardo - neuroevolution (a powerful and underrated type of AI) through Elixir

Hello, I would like feedback on an experimental neuroevolution library (including substrate encoding) called Bardo, based on the amazing work of Gene Sher.

Neuroevolution is a powerful and underrated type of AI that is well suited to Erlang and Elixir.

Features

  • Topology and Weight Evolving Artificial Neural Networks (TWEANN): Neural networks evolve their structure and weights over time (a toy sketch follows below this list)
  • Efficient ETS-based Storage: Simple and fast in-memory storage with periodic backups
  • Modular Sensor/Actuator Framework: Easily connect networks to different environments
  • Built-in Evolutionary Algorithms: Includes selection algorithms and mutation operators
  • Substrate Encoding: Hypercube-based encoding for efficient pattern recognition
  • Example Environments: XOR, Double Pole Balancing, Flatland, and Simple FX simulations
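
Here’s that toy sketch: a self-contained weight-evolution loop against XOR with a fixed 2-2-1 topology. This is not Bardo’s API, just an illustration of the idea (a real TWEANN also mutates the topology itself, not only the weights):

```elixir
defmodule ToyNE do
  @xor [{{0.0, 0.0}, 0.0}, {{0.0, 1.0}, 1.0}, {{1.0, 0.0}, 1.0}, {{1.0, 1.0}, 0.0}]

  # A genome is a flat list of 9 weights:
  # 4 input->hidden weights, 2 hidden biases, 2 hidden->output weights, 1 output bias.
  def random_genome, do: for(_ <- 1..9, do: :rand.normal())

  def forward([w1, w2, w3, w4, b1, b2, w5, w6, b3], {x1, x2}) do
    h1 = :math.tanh(x1 * w1 + x2 * w2 + b1)
    h2 = :math.tanh(x1 * w3 + x2 * w4 + b2)
    :math.tanh(h1 * w5 + h2 * w6 + b3)
  end

  # Fitness: negative squared error over the XOR truth table (higher is better).
  def fitness(genome) do
    -Enum.reduce(@xor, 0.0, fn {input, target}, acc ->
      acc + :math.pow(forward(genome, input) - target, 2)
    end)
  end

  # Mutation: jitter every weight with Gaussian noise.
  def mutate(genome), do: Enum.map(genome, &(&1 + :rand.normal() * 0.3))

  # Selection: keep the fitter half, refill the population with mutated copies.
  def evolve(pop, 0), do: Enum.max_by(pop, &fitness/1)

  def evolve(pop, generations) do
    survivors = pop |> Enum.sort_by(&fitness/1, :desc) |> Enum.take(div(length(pop), 2))
    evolve(survivors ++ Enum.map(survivors, &mutate/1), generations - 1)
  end
end

population = for _ <- 1..50, do: ToyNE.random_genome()
best = ToyNE.evolve(population, 200)
IO.inspect(ToyNE.fitness(best), label: "best fitness")
```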

License

Distributed under the Apache License 2.0. See LICENSE for more information.

Acknowledgements

This is a vibe-coded port of this project: github - Rober-t/apxr_run

Which was based on this code: Gene Sher - DXNN2

Based on concepts from this amazing book: Handbook of Neuroevolution Through Erlang by Gene Sher.

This is experimental

I’m testing it out on some personal projects but I would love feedback and contributors.. it may be totally crap.. so I would appreciate someone cleverer than me giving it a go.

I understood everything in Gene’s book up until substrate encoding, where it started getting fuzzy.. so I’m trying to really understand it..

It seems vaguely similar to Hierarchical temporal memory - Wikipedia and I wonder if it’s possible to create an HTM-based substrate encoding using sparse representations of data.. or integrate HTM in some way.
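
For anyone else trying to grok it, here is my toy mental model of substrate encoding as a sketch. The cppn function below is a hard-coded stand-in; in a real system it would itself be an evolved network, queried with the coordinates of two substrate neurons to get their connection weight:

```elixir
# Stand-in "CPPN": returns a geometric weight pattern from neuron coordinates.
cppn = fn x1, y1, x2, y2 ->
  :math.sin(x1 * x2) * :math.exp(-abs(y1 - y2))
end

# A 1D input layer at y = 0.0, fully connected to a 1D output layer at y = 1.0.
coords = [-1.0, 0.0, 1.0]

weights =
  for x1 <- coords, x2 <- coords do
    {{x1, 0.0}, {x2, 1.0}, cppn.(x1, 0.0, x2, 1.0)}
  end

IO.inspect(weights, label: "substrate connection weights")
```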

All the best,
hibernatus

10 Likes

Awesome, I was wondering when someone was gonna pick this up! Gene Sher’s work is outstanding and very approachable!

1 Like

Gene just responded with some really interesting feedback on the DXNN2 GitHub issues : )

I personally find this trend of LLM-based agents disconcerting, not least because they are incredibly wasteful and inefficient. Just because we can use the emergent properties of huge natural-language / transformer-based ML to write code doesn’t mean we necessarily should, IMO.

Neuroevolution-based agents seem far better suited than LLM-based agents for most of the use cases that LLM-based agents are being targeted at… NE agents can continuously learn and adapt.. and be evolved to solve a problem efficiently.

LLMs could be used to create scapes, actuators, sensors, and the initial conditions for an NE system, or serve as part of a fitness function, either by writing code or using a DSL. There seem to be some great opportunities to combine NE with transformers and other ML, as Gene has pointed out.
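
For example, the kind of contract an LLM could generate small modules against might look something like this (hypothetical module names, not Bardo’s actual interface):

```elixir
# Hypothetical sensor/actuator contracts, not Bardo's actual interface.
defmodule MySim.Sensor do
  @callback sense(scape_state :: term()) :: [float()]
end

defmodule MySim.Actuator do
  @callback act(outputs :: [float()], scape_state :: term()) :: term()
end

# A small module an LLM could plausibly emit, while evolution tunes the
# network wired between sensor and actuator:
defmodule MySim.PriceSensor do
  @behaviour MySim.Sensor
  @impl true
  def sense(%{prices: prices}), do: Enum.take(prices, 5)
end
```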

I do use various AI tooling, but I’ve noticed that agentic LLM-based systems use an excessive number of tokens (a suspiciously large number).. and tend to diverge and create new work that they then have to clean up.. costing more money…

LLMs seem to be a great business model for big tech, but will you still use them if we have powerful neuroevolution systems?

NE systems that evolve code and agents to solve a problem without needing to send data to a third party or spend money on tokens.. efficient enough to run on your own infrastructure?

I don’t really care that NE systems are black boxes.. the same is true of any sufficiently powerful ML model.

If anyone else has drunk enough of the LLM Kool-Aid and wants to collab on NE… get in touch.

hibernatus@use.startmail.com

8 Likes

There might be a lot to gain from leveraging tinygrad (GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️) for the computation backend. And if you like cutting-edge stuff outside of the mainstream, check out: Predictive Vision in a nutshell – Piekniewski’s blog

Elixir/Erlang are great for writing lexers, parsers, and compilers in. The VM, though, sucks at anything math-heavy, like GEMM.
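
A toy illustration of the gap: a naive pure-Elixir matmul works on boxed floats in lists, while the Nx equivalent can dispatch the same GEMM to a compiled backend such as EXLA:

```elixir
# Naive pure-Elixir matmul over lists of lists.
naive_matmul = fn a, b ->
  bt = Enum.zip_with(b, & &1)  # transpose b
  for row <- a do
    for col <- bt do
      Enum.zip_with(row, col, fn x, y -> x * y end) |> Enum.sum()
    end
  end
end

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
IO.inspect(naive_matmul.(a, b))  # [[19.0, 22.0], [43.0, 50.0]]

# The Nx equivalent (assuming the :nx dependency is available):
# Nx.dot(Nx.tensor(a), Nx.tensor(b))
```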

3 Likes

PVM is interesting!

tinygrad is also interesting, but I’m not sure PVM or tinygrad relate to Elixir/Erlang-based neuroevolution. Ideas could definitely be incorporated, and I want to research sensors/substrate encoding and the recommendations from Gene Sher. Also, Hierarchical temporal memory - Wikipedia seems very interesting… there is so much AI to explore laterally, and maybe there is a way to use novelty search or an open-ended algorithm (as proposed by Kenneth Stanley) to find novel AI/ML approaches.

Erlang was designed for distributed soft real-time systems, not number crunching, granted… but just look at Axon, Nx, and the amazing Elixir machine learning ecosystem that leverages GPU backends… I can’t wait to investigate how I can use Nx and distributed computation to speed up neuroevolution… for now I just need a big VPS to train on and to distribute CPU workloads horizontally, so making it run “fast” isn’t as important. Make it work, make it beautiful.. then make it fast…
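
For instance, a first horizontal-CPU win can be as simple as evaluating each genome’s fitness in parallel across cores; evaluate below is just a placeholder, not a real scape:

```elixir
evaluate = fn genome -> Enum.sum(genome) end  # placeholder fitness function

population = for _ <- 1..1000, do: for(_ <- 1..50, do: :rand.uniform())

# Score every genome concurrently, one task per scheduler.
scored =
  population
  |> Task.async_stream(fn g -> {g, evaluate.(g)} end,
    max_concurrency: System.schedulers_online(),
    ordered: false
  )
  |> Enum.map(fn {:ok, result} -> result end)

{_best_genome, best_fitness} = Enum.max_by(scored, fn {_g, f} -> f end)
IO.inspect(best_fitness, label: "best fitness")
```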

1 Like

Yeah, tinygrad would be there to generate the best kernels for your GEMM. GEMM is the backbone of AI, even when it comes to NE.

I do not believe that tinygrad and Nx with XLA are even comparable at this point. Don’t forget that there is also tinybox. And tinygrad has its own GPU driver, which is better than AMD’s.

At this point I don’t really think there is any other game in town when it comes to deep learning frameworks; it’s only a question of time before all the leaderboards are dominated.

This looks amazing. I have very recently started looking at GE and DXNN; looking forward to going through the code.

I don’t feel tinygrad is the only game out there. TVM and MLIR-based systems are making a lot of progress as well. I am super pumped about Beaver becoming a way to write MLIR from Elixir!

1 Like

Nice, I was not aware of Beaver!

Can Beaver optimize kernels based on memory layout and datatypes?

When I was looking into this, there was no generic way to get that information at runtime, let alone at compile time.

So much of the time spent building these systems goes into developing software for specific hardware. And we do this despite the fact that we have the math to calculate the optimal kernels knowing only a few basic numbers about the hardware: L1-L3 cache sizes, the datatype, and the size of a given matrix.
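
For example, the classic back-of-the-envelope version of that math: pick a GEMM tile size b so that three b x b tiles (blocks of A, B, and C) of 8-byte floats fit in L1 cache. The numbers here are illustrative:

```elixir
# Three b x b tiles of 8-byte floats must fit in L1: 3 * b^2 * 8 <= L1.
l1_bytes = 32 * 1024
bytes_per_elem = 8
b = floor(:math.sqrt(l1_bytes / (3 * bytes_per_elem)))
IO.puts("max tile size: #{b}")  # ~36, so a 32x32 tile is a natural choice
```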

I see Beaver can be used as a backend for Nx: GitHub - beaver-lodge/manx: MLIR backend for Nx

Personally, I don’t want to go to a lower level of abstraction than Nx because I’m interested in neuroevolution, not GPUs or chip design.

Having spoken to Gene, it turns out he has been out of the loop with Elixir ML and didn’t know about Nx or Axon, so there is a big opportunity to leverage Nx for neuroevolution and take advantage of any backend.

The most exciting thing about Nx for me is the potential to distribute GPU load in batches and combine it with CPU workloads to create a non-homogeneous cluster.
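
As a rough sketch of that batching idea (assumes the :nx dependency; the weights and shapes are made up), a forward pass over a whole batch of inputs becomes a single tensor op, so a GPU backend can be swapped in via config:

```elixir
inputs = Nx.tensor(for _ <- 1..128, do: [1.0, 0.0])  # {128, 2} batch of inputs
w1 = Nx.tensor([[0.5, -0.3], [0.8, 0.1]])            # {2, 2} input -> hidden
w2 = Nx.tensor([[0.7], [-0.2]])                      # {2, 1} hidden -> output

# One batched forward pass: every input in the batch in a single op chain.
out = inputs |> Nx.dot(w1) |> Nx.tanh() |> Nx.dot(w2) |> Nx.tanh()
IO.inspect(Nx.shape(out))  # {128, 1}
```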

1 Like

Yes, it’s crazy that something like this exists. They have an Nx backend as well. Nothing feels production-ready; it’s more a super snazzy, exciting thing happening in the ecosystem.

Nx sounds like the right abstraction.

Bardo has some issues, so I would hold off on trying it until I merge in some changes. I’m testing it out on a project I’m working on, and I need to feed the fixes back.

Using Nx is on my todo list : )