I’m excited to announce Jido, a framework providing foundational primitives for building autonomous agent systems in Elixir. While developing agent-based architectures, I found existing tools lacked the right abstractions for building truly autonomous systems that could evolve and adapt at runtime.
Core Architecture
Jido’s architecture is built around four core primitives that map directly to key patterns in autonomous systems:
Actions provide a composable unit of work with rich metadata, enabling agents to introspect capabilities and combine them in novel ways at runtime. Unlike traditional functions, Actions include schemas and metadata that support dynamic reasoning and composition.
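To give a feel for the shape of an Action, here’s a stripped-down sketch (names and options are illustrative for this post, not copied from the docs):

```elixir
defmodule MyApp.Actions.Add do
  @moduledoc "Illustrative Action: adds an amount to a value."

  # Sketch only - option names here are simplified for the example.
  use Jido.Action,
    name: "add",
    description: "Adds an amount to a value",
    schema: [
      value: [type: :integer, required: true],
      amount: [type: :integer, default: 1]
    ]

  # Actions return tagged tuples so they can be validated and composed.
  def run(params, _context) do
    {:ok, %{value: params.value + params.amount}}
  end
end
```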
Workflows extend beyond traditional sequential chains to support dynamic composition patterns. Building on Elixir’s strengths, workflows can be constructed to handle conditional paths, error recovery, and compensation seamlessly.
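Stripped of the framework, the underlying pattern is tagged-tuple composition with an explicit compensation path - plain Elixir here, not the Workflow API itself:

```elixir
defmodule TripWorkflow do
  # Conceptual illustration in plain Elixir, not the Workflow API.
  # Each step returns {:ok, result} or {:error, reason}; `with` walks the
  # happy path and the `else` clause is where compensation/recovery hooks in.
  def book_trip(params) do
    with {:ok, flight} <- reserve_flight(params),
         {:ok, hotel} <- reserve_hotel(flight) do
      {:ok, %{flight: flight, hotel: hotel}}
    else
      {:error, reason} ->
        # Compensation: undo any partial work before surfacing the error.
        cancel_pending(params)
        {:error, reason}
    end
  end

  # Stubs standing in for real Actions.
  defp reserve_flight(%{from: from, to: to}), do: {:ok, %{route: {from, to}}}
  defp reserve_flight(_), do: {:error, :missing_route}
  defp reserve_hotel(flight), do: {:ok, %{near: flight.route}}
  defp cancel_pending(_params), do: :ok
end
```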
Agents maintain schema-validated state while planning and executing workflows. Through directives, agents can modify their own capabilities at runtime - such as enqueueing additional instructions to perform.
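A minimal sketch of an Agent definition (again simplified - the important parts are the validated state schema plus the explicit list of allowed Actions):

```elixir
defmodule MyApp.CalculatorAgent do
  @moduledoc "Illustrative Agent sketch - options simplified for the example."

  use Jido.Agent,
    name: "calculator",
    description: "Keeps a running total",
    # The allowed-actions list constrains what the agent may plan and run.
    actions: [MyApp.Actions.Add],
    # Schema-validated state.
    schema: [
      value: [type: :integer, default: 0]
    ]
end
```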
Sensors operate as independent processes, providing real-time environmental awareness through standardized signals. They exist to support Agents by monitoring external events.
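Minus the Jido plumbing, the idea is simply an independent process that periodically checks something in the outside world and forwards a signal to its agent:

```elixir
defmodule PollingSensor do
  @moduledoc """
  Conceptual sensor, minus the Jido plumbing: an independent process that
  periodically samples an external source and forwards a signal to its agent.
  """
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    state = %{agent: Keyword.fetch!(opts, :agent), interval: opts[:interval] || 5_000}
    schedule_poll(state.interval)
    {:ok, state}
  end

  @impl true
  def handle_info(:poll, state) do
    # Stand-in for reading a queue depth, webhook buffer, metric, etc.
    reading = :rand.uniform(100)
    send(state.agent, {:signal, :reading, reading})
    schedule_poll(state.interval)
    {:noreply, state}
  end

  defp schedule_poll(interval), do: Process.send_after(self(), :poll, interval)
end
```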
SDK & Extension
Jido is designed as an SDK for building agent-based systems. The core framework provides the runtime primitives, while the companion jido_ai package offers integrations with LLM providers like Anthropic’s Claude through the instructor_ex package. Teams building with Jido can integrate any LLM via a custom Action.
Current Status
Version 1.0.0 provides production-ready implementations of all core primitives. Work is underway on enhanced Agent Server and Supervisor capabilities for 1.1. We’re actively using Jido in production and welcome feedback from teams building autonomous systems.
Great question - this whole space is evolving quickly, and shedding my old habits of how these apps get built has been a fantastic mental challenge - so this is my current perspective.
The “use an LLM assistant to execute a tool” trend of late benefits one party - LLM API providers. I’ve read over a LOT of repos and played with a lot of toys.
That’s why this foundational repo has ZERO LLM or AI code in it. At this foundational layer, LLM tool calling as a first-class citizen didn’t feel right.
Don’t get me wrong, I wrote this to integrate with LLMs - but the Elixir way is to build with pure data structures and minimal dependencies on OTP. That’s the purpose of this package.
Agents as GenServers are implemented and tested. I’m not entirely happy with the API yet, so I didn’t include it in this official release. Each agent spins up a DynamicSupervisor and can delegate tasks to other agents, start and stop Signals, and do all sorts of other cool things.
I see a future where we each have thousands of Agents working for us constantly - like a swarm of ants. No other framework or platform comes close to making this happen - so I started from first principles with Elixir.
Enjoy!
EDIT: There will be many examples coming soon to demonstrate the scope of what Agents with Jido are capable of.
If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it’s advised to regularize towards not using any agentic behaviour.
…
Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like “compute the sum of these numbers” or “find the shortest path in this graph”. But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!
This thread is confusing to me. Reading the OP, it seems this is a library that’s kind of sort of like Oban – you can build pipelines of tasks, sort of like ETL systems as well.
But now the discussion got steered toward LLM agents, and apparently I misunderstood the whole thing. Anybody want to put things in context?
The primitives share similarities with Oban - but the use-cases are different.
Agent techniques like “Chain of Thought” (Chain-of-Thought Prompting | Prompt Engineering Guide) string together several actions, with an LLM’s response deciding which action to take next. This simulates human reasoning, leading to higher-quality outputs.
Jido was tailored towards these use cases. But, as I explained above, as I worked through my own prompting pipelines I got annoyed with LLMs being slow and flaky. When I set out to build Jido, I wanted to ensure the building blocks didn’t make assumptions about LLMs being involved, but also supported classical AI planning algorithms - a currently neglected sector of artificial intelligence research.
Think about video game NPCs - the AI used to power them is most often something like behavior trees or zipper trees.
Elixir has a solid base of these algorithms available as Hex packages now, and there was even a talk about them at ElixirConf 2018.
A lot of recent AI research focus has gone towards LLMs, but I find algorithms like behavior trees to be more practical in the real world because they are more reliable in several ways.
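For anyone who hasn’t bumped into them: a behavior tree is just nested sequence/selector nodes evaluated against the world state, which is exactly why it’s so predictable. A bare-bones version in plain Elixir (no library involved):

```elixir
defmodule BehaviorTree do
  @moduledoc "Bare-bones behavior tree evaluator - just data plus recursion."

  # A node is {:sequence, children}, {:selector, children}, or a leaf
  # function of the world state returning :success or :failure.
  def tick({:sequence, children}, world),
    do: if(Enum.all?(children, &(tick(&1, world) == :success)), do: :success, else: :failure)

  def tick({:selector, children}, world),
    do: if(Enum.any?(children, &(tick(&1, world) == :success)), do: :success, else: :failure)

  def tick(leaf, world) when is_function(leaf, 1), do: leaf.(world)
end

# Example NPC logic: attack if an enemy is in range, otherwise patrol.
tree =
  {:selector,
   [
     {:sequence,
      [
        fn w -> if w.enemy_in_range, do: :success, else: :failure end,
        fn _w -> IO.puts("attack"); :success end
      ]},
     fn _w -> IO.puts("patrol"); :success end
   ]}

BehaviorTree.tick(tree, %{enemy_in_range: false})
```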
Really appreciate the questions here - I’m adding clarifications to the README along the way to hopefully answer some of these questions for others in the future.
Another question: can this library be used without LLM agents at all? Or anything AI-related? From what I quickly gleaned, it seems that it can be used similarly to how Oban / Flow are used?
Correct - the foundational library can be used without LLMs. During planning, I felt it was important to simplify down to the base data structures and interaction patterns - get those correct - and then build LLM interaction patterns on top of that.
This example uses instructor_ex - but langchain or any other Elixir AI package could easily be integrated for an LLM response.
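As a rough sketch (check the instructor_ex docs for the exact options - this is illustrative, not copy-paste ready), a custom Action wrapping a structured LLM call looks something like:

```elixir
defmodule MyApp.Actions.Summarize do
  @moduledoc "Sketch of wrapping an instructor_ex call in a custom Action."

  use Jido.Action,
    name: "summarize",
    description: "Summarizes text with an LLM",
    schema: [text: [type: :string, required: true]]

  def run(%{text: text}, _context) do
    # Schemaless response model - instructor_ex coerces the LLM reply into it.
    # Model name and options are illustrative; swap in your provider/config.
    case Instructor.chat_completion(
           model: "gpt-4o-mini",
           response_model: %{summary: :string},
           messages: [%{role: "user", content: "Summarize: " <> text}]
         ) do
      {:ok, %{summary: summary}} -> {:ok, %{summary: summary}}
      {:error, reason} -> {:error, reason}
    end
  end
end
```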
Yes, exactly - or even Broadway. This problem space of data transformation pipelines has a lot of solutions.
Each solution has its strengths. I love and have used each of these tools many times.
Jido won’t be the best at raw throughput, as it does not use GenStage like Broadway or Flow do. Jido does not assume a database; if you need a database-backed job processing system, you should use Oban.
Jido is aimed at the space where a particular agent (or queue?) is built to facilitate a wide variety of output - because an LLM is typically involved. Low throughput, high variety.
Where the Agent space gets crazy-weird is when you let the LLM pick the next action. It’s not random … but it’s not idempotent either. Jido agents natively support a list of “allowed” actions (Agents — Jido v1.0.0) for this reason - it provides a small level of control over this use case.
I purpose-built Directives to help provide some control over these advanced use cases.
This space evolves weekly - and I’m constantly learning and evolving my thinking here as well.