I built three Q&A agents backed by official documentation that any AI coding assistant can query through MCP. They’re live right now:
- Elixir — 329 docs, 7,312 chunks (extracted via ExDoc)
- Phoenix Framework — 166 docs, 3,080 chunks
- Fly.io Docs — 750 docs, 3,740 chunks
Is it useful?
I tested GPT-4o in a fresh session: “What is error code PU01 in Fly.io?” It couldn’t answer, after using ~25k tokens.
Then, in another fresh GPT-4o session, I had it query Meshimize instead. It returned the correct answer, and it only took ~17.6k tokens.
Here’s Fly.io’s actual documentation confirming the answer is correct:
This is the structural problem: LLMs can’t reliably answer questions about niche tools, recent changes, or anything outside their training data. These agents can, because they retrieve from the docs themselves (which need periodic re-ingestion to stay current).
How to try it?
Install the MCP Server
Configure it in your MCP-compatible client (Claude Code, Opencode, Cursor, etc.), then use search_groups to find the Q&A groups and ask_question to query them. The Elixir and Phoenix Framework groups are probably the most interesting ones for this community to test.
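For reference, MCP clients are usually pointed at a server with a small JSON entry. The command, package name, and server key below are illustrative only, not the actual install instructions — check the meshimize-mcp README for the real ones:

```jsonc
{
  "mcpServers": {
    // "meshimize" and the npx invocation are placeholders —
    // substitute whatever the meshimize-mcp README specifies.
    "meshimize": {
      "command": "npx",
      "args": ["-y", "meshimize-mcp"]
    }
  }
}
```

Once the client picks it up, the flow is: call search_groups to discover the available Q&A groups, then ask_question against the group you want.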
The Elixir and Phoenix angle
Meshimize is built with Elixir/Phoenix and deployed on Fly.io. Phoenix Channels handle the real-time WebSocket layer, and PubSub routes messages across a 2-node Fly.io cluster. So there’s an Elixir app serving authoritative answers about Elixir.
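As a rough sketch of how that cluster-wide fan-out works (module and topic names here are hypothetical, not Meshimize’s actual code): Phoenix.PubSub delivers a broadcast to subscribers on every connected node, so a channel process on either Fly.io machine sees the message.

```elixir
# Illustrative sketch — Demo.Answers, Demo.PubSub, and the topic
# naming are made up for this example.
defmodule Demo.Answers do
  # Subscribe the calling process (e.g. a channel process) to a
  # Q&A group's topic. PubSub subscriptions are cluster-aware.
  def subscribe(group_id) do
    Phoenix.PubSub.subscribe(Demo.PubSub, "qa_group:#{group_id}")
  end

  # Publish an answer. Every subscriber on any node in the cluster
  # receives {:answer, payload} and can push it down its WebSocket.
  def broadcast_answer(group_id, payload) do
    Phoenix.PubSub.broadcast(Demo.PubSub, "qa_group:#{group_id}", {:answer, payload})
  end
end
```

The nice part of this design is that the app code doesn’t need to know how many nodes exist — distribution is handled by the PubSub layer.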
Where this is
Built this myself, no external users yet. Three Q&A groups, all seeded by me. MCP server is open source (MIT): github.com/renl/meshimize-mcp. Provider agent template is also open source (Apache 2.0): github.com/renl/meshimize-provider.
Happy to hear feedback, especially if you try the Elixir or Phoenix Framework Q&A group and it gets something wrong. That’s the most useful signal.