erl_dist_mcp - An MCP server for the Erlang distribution protocol

erl_dist_mcp — Give your AI assistant direct access to your running BEAM nodes

Hey all,

I’ve just released erl_dist_mcp, an MCP server that connects AI assistants directly to your Erlang/BEAM nodes via the distribution protocol. It gives Claude, Cursor, and other MCP-compatible clients the ability to introspect, debug, and trace your running systems.

What it does

It exposes 30+ tools over the Model Context Protocol, including:

  • Process inspection — list, search, and inspect processes, top-N by memory/reductions/message queue
  • OTP introspection — supervision trees, GenServer state and status, application info
  • System monitoring — memory breakdown, scheduler utilisation, ETS tables
  • Function tracing — safe tracing via recon (with dbg fallback), with structured output
  • Log capture — auto-installs an OTP logger handler to capture and retrieve recent log events
  • Code evaluation — sandboxed eval_code and rpc_call (opt-in with --allow-eval)

Output is formatted in your language of choice: Elixir, Erlang, Gleam, or LFE.

Why?

I wanted to be able to say things like “connect to my local node and show me which processes are using the most memory” or “trace calls to MyApp.Repo.query/2 and show me what’s happening” — and have the AI actually do it, rather than telling me how to do it myself.

Getting started

cargo install erl_dist_mcp

Or grab a binary from the releases page (Linux, macOS, Windows).

Add it to your Claude Desktop config (or Cursor, Continue.dev, Cline, Claude Code — any MCP client):

{
  "mcpServers": {
    "erlang": {
      "command": "erl_dist_mcp",
      "args": ["--mode", "elixir"]
    }
  }
}

Then just ask your AI to connect:

Connect to my_app@localhost with cookie MYSECRETCOOKIE
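For the connection to succeed, the target node has to be running distribution with a matching name and cookie. A minimal way to check that locally (the node name and cookie here are just the examples from above):

```elixir
# The target node must be started with a name and cookie, e.g.:
#
#   iex --name my_app@localhost --cookie MYSECRETCOOKIE -S mix
#
# From any other distributed node on the machine, you can confirm
# it is reachable before pointing the assistant at it:
Node.set_cookie(:"my_app@localhost", :MYSECRETCOOKIE)
Node.ping(:"my_app@localhost")
# => :pong when the name and cookie match, :pang otherwise
```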

The README has full setup and usage docs.

Technical details

It’s written in Rust using the erl_dist crate for the distribution protocol and rmcp for the MCP server. No dependencies on the target node — it connects as a hidden node and uses standard RPC via the rex process.
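Concretely, “standard RPC via the rex process” is the same mechanism `:rpc.call/4` uses, so the equivalent from any hidden BEAM node looks like this (node name and cookie are placeholders):

```elixir
# From a node started with --hidden, so it stays out of the visible
# connection mesh, the kind of RPC the server performs is just:
Node.set_cookie(:"my_app@localhost", :MYSECRETCOOKIE)
true = Node.connect(:"my_app@localhost")

# :rpc.call/4 delivers the request to the :rex gen_server that every
# OTP node already runs, which is why nothing needs to be installed
# on the target node:
:rpc.call(:"my_app@localhost", :erlang, :memory, [:total])
```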

Apache-2.0 licensed. Feedback and contributions welcome.


Interesting! This can be very useful.

What does sandboxed code eval mean in this circumstance?


It’s covered in the README, but I think calling it a sandbox is probably overstating it.

The eval_code tool includes safety mechanisms:

  • Process-level sandbox: Evaluation runs in a separate process with resource limits

  • Heap size limit: Prevents memory exhaustion

  • Timeout: Prevents infinite loops

  • Low priority: Reduces impact on system performance

  • Function whitelist: Only safe operations allowed (arithmetic, comparisons, list/map operations)

  • Blocks dangerous operations: No file I/O, network, process spawning, or code loading

Note: These safety mechanisms are NOT a complete sandbox. Skilled attackers may find ways to bypass restrictions. Only use on nodes you control.
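The process-level limits described above can be sketched in a few lines of Elixir. This is an illustration of the approach only, not erl_dist_mcp’s actual implementation; the module name, limits, and timeout are made up, and it covers the heap/timeout/priority mechanisms but not the function whitelist:

```elixir
defmodule SafeEval do
  # Illustrative limit only; the real server's value may differ.
  @max_heap_words 1_000_000

  def run(code, timeout \\ 5_000) do
    parent = self()

    {pid, ref} =
      spawn_monitor(fn ->
        # Kill the evaluator if its heap grows past the limit...
        Process.flag(:max_heap_size, %{
          size: @max_heap_words,
          kill: true,
          error_logger: false
        })

        # ...and keep it from starving real workloads of scheduler time.
        Process.flag(:priority, :low)

        {result, _binding} = Code.eval_string(code)
        send(parent, {:eval_result, self(), result})
      end)

    receive do
      {:eval_result, ^pid, result} ->
        Process.demonitor(ref, [:flush])
        {:ok, result}

      # Covers both heap-limit kills and exceptions raised by the code.
      {:DOWN, ^ref, :process, ^pid, reason} ->
        {:error, reason}
    after
      timeout ->
        Process.exit(pid, :kill)
        Process.demonitor(ref, [:flush])
        {:error, :timeout}
    end
  end
end
```

Note that a heap limit does not cover everything: large refc binaries, for instance, live off-heap, which is part of why the caveat above matters.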


Unfortunately it doesn’t require much skill to wreak havoc: evaluating <<0::really_large_number>> will kill any node, and it can happen by accident. I wouldn’t be too surprised if an “AI assistant” managed to evaluate something like that by mistake, so it might be worth trying to filter that kind of expression too.
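One way to pre-filter that particular class of expression is to parse the code and reject binary segments with absurd literal sizes before evaluating. A rough Elixir sketch (module name and limit are made up; this only catches literal sizes, not computed ones like `<<0::(n * n)>>`, so it narrows rather than closes the hole):

```elixir
defmodule EvalFilter do
  @max_bits 1_000_000

  # Rejects code containing binary segments with huge literal sizes,
  # e.g. <<0::999_999_999_999>>, which would exhaust memory on eval.
  def safe?(code) when is_binary(code) do
    case Code.string_to_quoted(code) do
      {:ok, ast} ->
        {_ast, ok?} =
          Macro.prewalk(ast, true, fn
            # <<value::size>> parses as {:"::", meta, [value, size]}
            {:"::", _, [_value, size]} = node, acc when is_integer(size) ->
              {node, acc and size <= @max_bits}

            node, acc ->
              {node, acc}
          end)

        ok?

      {:error, _} ->
        false
    end
  end
end

EvalFilter.safe?("<<0::8>>")
# => true
EvalFilter.safe?("<<0::999_999_999_999>>")
# => false
```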

You’re right that it’s easy to work around, so maybe it’s better to remove the feature and avoid any false sense of safety.

I’d put code eval behind a config flag. It is incredibly useful in dev, but a liability in other environments, especially prod.
