Tidewave has just been announced by José Valim

Thanks for introducing Tidewave, I am using it with a Rails application and it definitely provides some nice context to the LLM.

One thing I have noticed when using MCP tooling with my LLM (I am using Avante in Neovim) is high token usage. I wonder whether this is Avante itself doing something I'm not aware of, or whether it's simply the nature of offloading your AI development to your local machine (via MCP) versus using a cloud-based solution (e.g. Devin, Codex).

To be honest, I think MCP locally with an LLM provides some nice benefits in that the LLM can interface with your application directly. For example, I recently used it to create records in my database that it could then debug against, and those were records I could also observe and interact with myself. I think this is nice compared to the cloud-based black-box experience that Devin and other tools provide.
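For anyone curious what that looks like on the wire: MCP messages are JSON-RPC 2.0, and the client invokes a server tool via the "tools/call" method. The tool name and arguments below are purely illustrative (not Tidewave's actual tool surface), just a sketch of the shape of the request:

```python
import json

# Hypothetical MCP tool invocation. "create_record" and its arguments are
# made-up illustrations, NOT Tidewave's real API; only the JSON-RPC 2.0
# envelope and the "tools/call" method come from the MCP spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_record",  # assumed tool name for illustration
        "arguments": {"model": "User", "attributes": {"name": "debug_user"}},
    },
}

# The client serializes this and sends it to the local MCP server;
# the tool's result comes back and gets appended to the LLM's context.
payload = json.dumps(request)
print(payload)
```

The relevant cost detail is that last comment: every tool result is fed back into the model as input tokens.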

However, I think the trade-off is that with the increased local tooling you end up with potentially more input tokens and higher AI platform costs. That said, I would expect compute costs to come down over time, so this may only be temporary (it might also just be a misconfiguration of my Neovim plugins). I would be interested to hear anyone else's experience with this.
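A rough back-of-envelope on why the token bill climbs: in a typical agent loop the whole conversation, including every tool result, is resent as input on each turn, so cumulative input tokens grow roughly quadratically with the number of tool calls. All the numbers here are invented for illustration:

```python
# Sketch of why local MCP tool loops burn input tokens: each turn resends
# the full history, so cumulative input grows roughly quadratically with
# the number of tool calls. Token counts below are made-up assumptions.
SYSTEM_PROMPT = 1_000   # assumed tokens for system prompt + tool schemas
TOOL_RESULT = 2_000     # assumed tokens per tool result added to history

def cumulative_input_tokens(turns: int) -> int:
    total = 0
    history = SYSTEM_PROMPT
    for _ in range(turns):
        total += history          # entire history resent as input this turn
        history += TOOL_RESULT    # tool result appended for the next turn
    return total

print(cumulative_input_tokens(5))   # → 25000
print(cumulative_input_tokens(10))  # → 100000: doubling turns ~quadruples cost
```

Under these made-up numbers, doubling the number of tool calls roughly quadruples the input-token bill, which matches the "more local tooling, more input tokens" experience.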