Anubis (formerly Hermes) MCP - Model Context Protocol Implementation for Elixir

update (2025-07-24)

the project got forked and rebranded to anubis-mcp, since i'm not at CloudWalk anymore and can't guarantee they'll keep maintaining the original project. the fork happened from v0.13.0, and i'm tracking the open issues on hermes while developing new features and following the feedback from this thread. thanks for the attention!

original description

Hey folks!

I would like to introduce hermes-mcp, a comprehensive elixir implementation of the Model Context Protocol (https://spec.modelcontextprotocol.io/) that we've been building and using in production at CloudWalk. MCP enables standardized communication between LLMs and external tools, and we're excited to share what we've built with the community.

current state & protocol support

hermes provides both client and server implementations. protocol support currently looks like this:

  • draft & 2024-11-05 spec: complete implementation
  • 2025-03-26 spec: partial (missing OAuth authentication)
  • 2025-06-18 spec: on the roadmap

client architecture

the client supports multiple transport layers:

  • STDIO
  • SSE (Server-Sent Events)
  • WebSocket
  • Streamable HTTP

what’s interesting about the architecture is how it leverages OTP patterns. each client runs as a supervised process tree:
Client.Supervisor
├── Client.Base (handles MCP protocol and state management)
└── Transport Process (manages I/O and acts like a bridge between external world and the client process)

you can use it either as a long-running process or spawn one-off clients for specific tasks. the tree uses the :one_for_all strategy, so if either the client or its transport crashes, both are restarted together and you never end up with a half-broken pair.
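for example, a long-running client can sit directly in your application's supervision tree. the module and option names below are paraphrased from memory of the docs, so treat them as illustrative rather than the exact API:

```elixir
defmodule MyApp.MCPClient do
  # illustrative: the actual `use` options may differ between versions,
  # check the hex docs for the exact API
  use Hermes.Client,
    name: "MyApp",
    version: "1.0.0",
    protocol_version: "2024-11-05",
    capabilities: [:roots]
end

# in MyApp.Application.start/2: the client supervisor starts both the
# protocol process and its transport as a single :one_for_all tree
children = [
  {MyApp.MCPClient, transport: {:stdio, command: "my-mcp-server", args: []}}
]

Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
```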

server architecture

the server implementation supports:

  • STDIO, SSE, and Streamable HTTP transports
  • direct integration with Plug/Phoenix applications
  • component-based design for tools, prompts, and resources (see the sketch after this list)
  • a low-level implementation, or a higher-level one where the library handles most of the requests/notifications for you
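to give a feel for the component-based design: a tool is just a module pulling in use Hermes.Server.Component. the schema DSL and callback shape below are paraphrased from memory, so double-check them against the docs:

```elixir
defmodule MyApp.MCP.Tools.Echo do
  @moduledoc "Echoes the given text back to the caller."

  # illustrative component definition; the exact DSL may differ between versions
  use Hermes.Server.Component, type: :tool

  alias Hermes.Server.Response

  schema do
    field :text, {:required, :string}
  end

  @impl true
  def execute(%{text: text}, frame) do
    {:reply, Response.text(Response.tool(), text), frame}
  end
end
```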

the server supervision tree adapts based on transport type:
Server.Supervisor
├── Session.Supervisor (for HTTP transports)
├── Server.Base (protocol handler)
└── Transport Process
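putting it together, a server module declares its components and is started under your application with a transport option that decides which branch of the tree above is used. again, the names and options here are paraphrased from memory and only meant to show the shape:

```elixir
defmodule MyApp.MCP.Server do
  # illustrative server definition; check the docs for the exact options
  use Hermes.Server,
    name: "my-app",
    version: "1.0.0",
    capabilities: [:tools]

  component MyApp.MCP.Tools.Echo
end

# in the application supervision tree: the transport option selects
# STDIO vs SSE vs Streamable HTTP
children = [
  Hermes.Server.Registry,
  {MyApp.MCP.Server, transport: :streamable_http}
]

Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
```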

production usage

we’re currently running hermes in production at CloudWalk, powering capabilities for JIM, our financial assistant serving hundreds of thousands of users across Brazil. our setup includes:

  • as MCP client: both local and clusterized deployments providing server capabilities to JIM
  • as MCP server (in progress): building a clusterized implementation using Horde for JIM to expose capabilities to external clients like Claude Desktop

why elixir for MCP?

Have you thought about how naturally elixir's concurrency model maps onto MCP's architecture? each session gets its own supervised process, state management stays clean inside GenServers, and the fault tolerance means a single bad request won't bring down your integration. plus, with libraries like Horde, we can easily distribute MCP servers across nodes.
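as a generic illustration of the process-per-session point (plain OTP, not hermes internals):

```elixir
defmodule MCP.Session do
  # stand-in for a per-client session process holding that client's state
  use GenServer

  def start_link(session_id), do: GenServer.start_link(__MODULE__, session_id)

  @impl true
  def init(session_id), do: {:ok, %{id: session_id, state: %{}}}
end

defmodule MCP.SessionSupervisor do
  # one lightweight, supervised process per connected client
  use DynamicSupervisor

  def start_link(_), do: DynamicSupervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: DynamicSupervisor.init(strategy: :one_for_one)

  def start_session(session_id) do
    # if one session crashes, only that session restarts; every other
    # client keeps its connection and state
    DynamicSupervisor.start_child(__MODULE__, {MCP.Session, session_id})
  end
end
```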

looking for community input

some questions we’re exploring:

  • what patterns have you found useful for managing stateful connections with external services?
  • how are you handling protocol version negotiation in your APIs?
  • anyone else working with MCP or similar AI tool protocols?

we’d particularly love feedback on:

  • the component-based server design (using use Hermes.Server.Component)
  • our approach to transport abstraction and supervision tree architecture
  • ideas for the clusterized server implementation and possible edge cases/caveats

The library is available on Hex.

Documentation is on HexDocs (anubis_mcp v0.13.1).

What challenges are you facing with LLM integration that MCP and maybe hermes-mcp might help solve?

17 Likes

Looks cool! Curious if there is a way to add tools that doesn't involve defining modules, i.e. tools derived at runtime?

1 Like

I'm linking to Zach's comment in the Vancouver thread for additional context; I'm guessing the above question is trying to figure out if/how AshAI can auto-generate "Tools derived from Ash Resource actions/DSL" to be served by a Hermes server.

1 Like

I'll also say that @zoedsoupe :clap: has been very responsive, providing answers/fixes/improvements to the questions that came up during my own journey experimenting with Hermes in my own project.

i've started a more in-depth discussion on your open github issue, but it could also benefit from being moved here - although hermes-mcp seems to have more visibility on the github side…

hey @zachdaniel, we now have a runtime server component registration feature: wdyt?

need to update documentation accordingly though

1 Like

It looks good at first glance, will need to see if I can get someone to try it out or find some time myself :smiley:

1 Like

I won't have time myself, but I'm happy to advise someone on integrating these two tools and consider using it as part of Ash AI

I’ve gotten a proof of concept working of a Hermes.Server that can register and call AshAi tools. It’s enough of a “walking skeleton” that it seems worth exploring further.

3 Likes

Hello, I'm playing with the echo example. I may be wrong, but it seems that the MCP "transport" is not registered with a unique name per client, but with {:via, Registry, {Hermes.Server.Registry, {:transport, EchoMCP.Server, :sse}}}, meaning that all connected clients will use the same server process, right?

And then this server makes another GenServer.call in forward_request_to_server to another GenServer, which seems to be the actual server, and which copies the input data once more. There are dynamic session processes, but I don't know how everything works yet.

I think this could be simplified because that transport GenServer looks like a bottleneck.
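to make the bottleneck concern concrete, here's a generic illustration (plain OTP, not hermes code) of how a single named GenServer serializes its callers:

```elixir
defmodule SingleServer do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: {:ok, nil}

  @impl true
  def handle_call(:slow, _from, state) do
    # stands in for a slow tool handler (database query, network call, ...)
    Process.sleep(1_000)
    {:reply, :done, state}
  end
end

{:ok, _} = SingleServer.start_link([])

{elapsed_us, _results} =
  :timer.tc(fn ->
    1..2
    |> Task.async_stream(fn _ -> GenServer.call(SingleServer, :slow) end, max_concurrency: 2)
    |> Enum.to_list()
  end)

# prints ~2000 ms: the second caller waits for the first to finish,
# even though the callers themselves run concurrently
IO.inspect(div(elapsed_us, 1_000), label: "elapsed ms")
```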

we’re kinda discussing that on the following github issue, wdyt?

Well, I'm not really aligned with your conclusions. If we want to implement a tool that is not fast (it searches through a database or makes network calls to an Elasticsearch instance), then all other users will have to wait in line.

Typical handler execution: 10-5000ms (dominated by I/O)

5 seconds is a lot. If you have 3 users using the server at the same time and requests are serialized, the last one waits 15 seconds before the calling LLM displays anything.

1 Like

just found the project yesterday, looks amazing! i think this will be the most robust MCP implementation out there!

thanks! i think we still need some improvements, like the ongoing supervision architecture discussion, protocol negotiation/version alignment, and especially documentation, but i'm doing my best since i'm the only active maintainer/core contributor

1 Like

yeah, but my main point is that MCP is bidirectional, synchronous communication; there's no real "async response" (it's kinda emulated with SSE, but that's not quite the same either). i think the main flaw in this architecture was separating the frame (user server state) from the generic MCP session state, since MCP sessions aren't actually exposed to users: they're an internal concept of "client-specific connection state". hermes aims for an API/design very similar to LiveView, but we can't replicate LiveView's dynamic behaviour since an MCP server is stateful.

the other main problem could be the "base" transport being a single genserver. we may refactor it into a plain code router and turn the Session process into the main implementor of request handling, instead of it being just an opaque process holding partial state.

but the major challenge here is that MCP can have multiple different transport layers, and while some are simpler (like STDIO, which is always 1:1 client/server), HTTP-based transports can easily be N:1 client/server, and once you add sessions that ratio becomes N:N:1 client/session/server with bidirectional/stateful communication, which doesn't seem simple to architect. i'd like to discuss this more and actually ask for help. the last change i made at least reduced the bottleneck by taking the transport layer out of this path: requests are now routed asynchronously to the Base, which answers the SSE handler/HTTP connection directly instead of going base->transport->handler, but of course the Base still has the same problem.

what i'm thinking of to "solve" this problem:

  1. handle the user-defined module's request asynchronously and make the user-defined server answer the transport directly instead of the Base server, turning the Base into a simple router and maybe removing its stateful nature (see the sketch after this list). the problem is that we need to maintain the frame state, which can be changed by the user-defined server, so we'd start needing to handle different frame versions?
  2. merge session/frame into a single process for request handling, which might solve the problem in point 1?
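a generic sketch of the "answer the transport directly" idea from point 1: the routing process hands the caller's from tag to another process and replies from there with GenServer.reply/2, so it never blocks on the user-defined handler (plain OTP, not a proposal for the actual hermes internals):

```elixir
defmodule AsyncRouter do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: {:ok, nil}

  @impl true
  def handle_call({:request, payload}, from, state) do
    # don't block the router: run the (possibly slow) handler in another
    # process and let that process reply to the waiting caller directly
    Task.start(fn ->
      result = handle_request(payload)
      GenServer.reply(from, result)
    end)

    # no reply here, so the router is immediately free for the next request
    {:noreply, state}
  end

  # placeholder for the user-defined server callback; a real app would run
  # this under a Task.Supervisor instead of a bare Task.start/1
  defp handle_request(payload) do
    Process.sleep(100)
    {:ok, payload}
  end
end
```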

i quickly scanned the source code yesterday, and it looks impressive. now i'm wondering whether i still need the internal Tool implementation i'm using with my elixir agent, or whether i should standardize everything on hermes-mcp's handle_tool logic

1 Like

hey everyone!

I'm the core maintainer of hermes-mcp, which has grown to 30k+ downloads and 250+ GitHub stars. Since I'm no longer working at CloudWalk and I'm uncertain about their long-term maintenance plans for the project, I've decided to fork and rebrand it as Anubis MCP :wolf:

What’s the situation:

  • I’ve been the primary developer and maintainer of hermes-mcp
  • Given corporate uncertainties, I want to ensure the community has a guaranteed maintained version
  • Anubis MCP is already published, tracking the same latest release
  • I’m tracking all existing issues and pull requests from the original

What you get:

  • Same battle-tested Elixir MCP implementation
  • Continued active development and community support
  • All the performance and reliability you’re used to
  • Same API, easy migration path

The name Anubis felt fitting - as the Egyptian god of transitions, it seemed appropriate for guiding this project through its own transition. Plus, sometimes you need a deity of change to help you… transition :transgender_flag:

If you’re currently using hermes-mcp, I’d encourage migrating to anubis-mcp for guaranteed ongoing support and development.
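for most projects the migration should amount to a dependency swap plus a module rename. the sketch below assumes the Hermes.* namespace became Anubis.* as part of the rebrand, so double-check against the changelog:

```elixir
# mix.exs: swap the dependency
defp deps do
  [
    # {:hermes_mcp, "~> 0.13"},
    {:anubis_mcp, "~> 0.13"}
  ]
end

# then rename module references accordingly, e.g. (assuming the namespace
# was renamed as part of the rebrand):
#   use Hermes.Server.Component, type: :tool
# becomes
#   use Anubis.Server.Component, type: :tool
```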

Repository: https://github.com/zoedsoupe/anubis-mcp (Elixir Model Context Protocol (MCP) SDK, hermes-mcp fork)
Hex: https://hex.pm/packages/anubis_mcp

Would love your continued support and feedback! And now i have more free time to take care of the more delicate parts of the project, like documentation ^-^

8 Likes

hi @zoedsoupe , just some feedback on the project based on your architecture:

  • would suggest allowing Finch to be overridden for clients, or allowing a custom Finch pool to be provided
  • would suggest using ETS for the session registry instead of agents (see the sketch after this list)
    • might also want to add auto-cleanup for sessions or a TTL mechanism
  • would suggest converting your base server to be fully stateless, as that genserver would be a bottleneck
  • might want to add benchmarks for the performance claims
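to illustrate the ETS + TTL suggestion (a generic sketch, not tied to hermes internals): reads and writes go straight to an ETS table, and one process only does periodic expiry sweeps.

```elixir
defmodule SessionStore do
  use GenServer

  @table :mcp_sessions
  @ttl_ms :timer.minutes(30)
  @sweep_ms :timer.minutes(1)

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # reads/writes hit ETS directly, no GenServer call in the hot path
  def put(session_id, data),
    do: :ets.insert(@table, {session_id, data, System.monotonic_time(:millisecond)})

  def get(session_id) do
    case :ets.lookup(@table, session_id) do
      [{^session_id, data, _touched_at}] -> {:ok, data}
      [] -> :error
    end
  end

  @impl true
  def init(:ok) do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true, write_concurrency: true])
    schedule_sweep()
    {:ok, nil}
  end

  @impl true
  def handle_info(:sweep, state) do
    now = System.monotonic_time(:millisecond)
    # drop sessions that haven't been touched within the TTL
    :ets.select_delete(@table, [{{:_, :_, :"$1"}, [{:<, :"$1", now - @ttl_ms}], [true]}])
    schedule_sweep()
    {:noreply, state}
  end

  defp schedule_sweep, do: Process.send_after(self(), :sweep, @sweep_ms)
end
```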

cool project and nice project name :smiley:

hey @ziinc!

  1. yes! i'd like to make the server architecture more reliable/stable first and then allow users to customize/extend it, e.g. with custom HTTP clients or even session stores, which covers the next topic
  2. yeah, i over-complicated the session management and honestly i should ditch it entirely and merge it with the frame concept since, as i already said in this thread, a "session" from MCP's perspective is an internal technical detail and shouldn't be exposed to users directly (at least not its management). we already have an auto-cleanup/auto-close feature, but ETS is way better for this, and we could extend it as mentioned in 1 with an interface for whoever wants to store sessions in other stores like redis (rough shape sketched after this list)
  3. i don't think that's actually possible; although it seems better, it needs to hold state. but we can think of it as a router, so sessions process the message and answer the transport/SSE process directly
  4. yes! i will be doing that
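the "interface for other stores" idea from point 2 could be as small as a behaviour. this is a hypothetical shape just to show what pluggable backends might look like, not the actual hermes/anubis API:

```elixir
defmodule MCP.SessionStore do
  @moduledoc "Hypothetical behaviour for pluggable session storage (ETS, Redis, ...)."

  @type session_id :: String.t()
  @type session :: map()

  @callback put(session_id, session) :: :ok | {:error, term()}
  @callback fetch(session_id) :: {:ok, session} | :error
  @callback delete(session_id) :: :ok
end

defmodule MCP.SessionStore.ETS do
  @moduledoc "In-memory backend; assumes the :mcp_sessions table is created at boot."
  @behaviour MCP.SessionStore

  @table :mcp_sessions

  @impl true
  def put(id, session) do
    :ets.insert(@table, {id, session})
    :ok
  end

  @impl true
  def fetch(id) do
    case :ets.lookup(@table, id) do
      [{^id, session}] -> {:ok, session}
      [] -> :error
    end
  end

  @impl true
  def delete(id) do
    :ets.delete(@table, id)
    :ok
  end
end
```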

thanks for the feedback, i'll be tackling these over the next weeks and i generally agree with all the points! also a reminder that i forked the original repo to anubis-mcp

2 Likes

Suggestion to update the thread title to “Anubis (formerly Hermes) MCP …” ?