It implements the MCP server protocol to introspect your running application and interact with it. I’m curious to try it out with bigger Phoenix applications.
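For anyone else wanting to kick the tires: as I understand the README, the setup is just a dev-only dependency plus a plug in the endpoint. A minimal sketch, with an illustrative version number (check the official docs for the exact version and plug placement):

```elixir
# mix.exs - dev-only dependency (version is illustrative)
defp deps do
  [
    {:tidewave, "~> 0.1", only: :dev}
  ]
end
```

```elixir
# lib/my_app_web/endpoint.ex - guarded so prod builds compile without it
if Code.ensure_loaded?(Tidewave) do
  plug Tidewave
end
```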
Nice. I’ve been watching things crop up over the last hour or so and didn’t want to spoil anything by revealing the surprise. But I also don’t use Twitter, so I didn’t see the announcement.
I’ve got to figure out how to get all this stuff working in NeoVim (it’s not in the list of supported editors… yet).
Between this and the launch of Qwen3 yesterday (which seems to run well on powerful CPUs, so you may not need a huge GPU to use it), I’m pretty excited to see if I can get my hands on some sort of “agentic” coding experience that doesn’t send all my code to some Big Data oligarchs (and without having to drop $$$ on a GPU farm).
Looks like it won’t work with Zed. The MCP server is running, but nothing happens because Zed only supports MCP through its slash commands. That’s a shame, so I guess someone will have to write a Zed extension; I can look into it this weekend. Or does anyone have other experience with this?
The vibes. But I’m thinking of using it in parallel - like sending your underling off to do some work for you while you do more important stuff, then checking in on how it’s doing. “You made this? … I made this.”
I think the true potential will be unleashed when it’s able to use a language server, browse documentation, inspect available/imported functions, etc. I don’t believe it does this just yet, but that will indeed be revolutionary.
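For what it’s worth, the BEAM already exposes most of the raw material at runtime, so an MCP tool could build on standard introspection. A minimal sketch - the `DocTool` wrapper is hypothetical, but the underlying calls are standard Elixir:

```elixir
defmodule DocTool do
  # Hypothetical MCP-style helper: list a module's exported functions
  # and pull its moduledoc straight from the compiled BEAM file.

  # Returns e.g. [all?: 1, all?: 2, any?: 1, ...] for Enum.
  def functions(module), do: module.__info__(:functions)

  # Extracts the English moduledoc from the docs_v1 chunk, if present.
  def moduledoc(module) do
    case Code.fetch_docs(module) do
      {:docs_v1, _anno, _lang, _fmt, %{"en" => doc}, _meta, _entries} -> doc
      _ -> nil
    end
  end
end

# DocTool.functions(Enum)
# DocTool.moduledoc(Enum) |> IO.puts()
```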
Very interesting experiment. With the whole framework available as an MCP server - coupled with the great documentation in the Elixir ecosystem - I guess we are reaching the stage where we start getting an unfair advantage as web developers. Thanks @josevalim
One question: at this moment, providing context to the agent is the biggest issue we try to resolve as part of vibe coding. Many times, we update the memory/memory bank/rule set to address trivial mistakes that the agent and LLM make in code generation. For example, no matter how many times you warn the agent/LLM, they use old-style HEEx syntax, and when `mix format` changes it they become unhappy and go into an infinite loop over just that. There are many examples like this. Similarly, I have prepared markdown files of the entire Phoenix and LiveView guides - a toned-down, human-edited version - and I keep them in a reference directory attached to the context. Though the token count goes up, nowadays I get much more idiomatic and syntactically correct code.
Now, my question is: with this MCP, which parts become irrelevant and which parts become more important? How do you get the most out of Tidewave? I mean, Tidewave can fetch the docs for the exact versions we are using, so there’s no point keeping the doc markdown files around, is there? Some brainstorming would definitely help. @josevalim and @chrismccord - can you share some of your thoughts please! Especially @chrismccord - I think he has already built some 50 applications with Tidewave.
Congrats Jose and the team.
It seems we are reaching a stage where AI & MCP connectivity will be a standard, mandatory feature in software, or you will lose a large percentage of customers going forward. Yesterday’s Claude announcement shows where consuming multiple services may be heading:
That’s technically how one writes Don Quijote: just throw 100K random words from a dictionary into a linked list and shuffle until publishers pay for it.
Is @mudasobwa referring to the Infinite monkey theorem?
Somebody should redo the theory for these MCP AIs. Instead of Shakespeare, it should be Facebook or something.