It can help with most things at this point.
For example, I just asked it to check whether we are missing any log_user_action calls in our codebase at places where we change data, and it found the places we do and filled in the missing log_user_action calls.
We are already doing what we can to avoid this, which is to detect when it runs something for too long and tell it so. Unfortunately, we can't really prevent it from doing this, regardless of how much direction we provide in the prompts.
Slightly changing the subject: I have not had a good experience with GPT-5-mini so far, but GPT-5 has done well. What about you?
For me it was good until yesterday, when I felt a drop in quality. It started making mistakes with the smallest things. I have seen this with OpenAI before; they did this a lot in the past. Of course I didn't measure it myself, but others did back then, so I wouldn't be surprised if something changed.
So in short: gpt-5-mini through GitHub Copilot was good for a few days, but now it's very chaotic.
I tried it for a bit, but ended up going back to Claude when the GPT-5 models started confidently hallucinating functions they insisted were part of the Elixir core modules.
At Octoscreen we just subscribed to a Teams plan for all our devs, but we had some issues.
We should be able to change the email of the main account: that email is not changeable in the Paddle account, it gets written on the invoice, and it's my personal email that tidewave.ai picked up through my GitHub login.
We got billed twice: first for the amount we pay without VAT, and then separately for the VAT. Is this intended?
Hi @preciz, can you please email support@tidewave.ai? You should be able to change your email at tidewave.ai/settings. I will also double-check the invoices; if there were indeed two payments, I will reimburse one of them and correct the email address. Thank you!
I've been testing Tidewave with GitHub Copilot and I'm really impressed. I have two comments:
I asked it to add a new resource to an existing context with the mix phx.gen.live task, but as is often the case, this command asks for confirmation because it suggests creating a new context, so the chat got stuck at the confirmation request. I don't know if there's a way from Tidewave to access the console from the chat and confirm the request; I had to do it manually from a console.
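For what it's worth, a possible workaround (an assumption on my part; please verify against `mix help phx.gen.context` for your Phoenix version) is to have the agent pass the generator flag that merges into the existing context without asking, so it never hits the interactive prompt. Something like:

```shell
# Hypothetical resource names, just for illustration.
# --merge-with-existing-context should tell the Phoenix generator to proceed
# into the existing Accounts context without the interactive confirmation
# (double-check the flag with `mix help phx.gen.context` first).
mix phx.gen.live Accounts Profile profiles bio:text --merge-with-existing-context
```

If the flag is supported in your Phoenix version, adding a note about it to the agent's instructions might keep the chat from getting stuck on that prompt.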
After only 5 messages, the Context Window Usage was 50% full. I understand this depends on the provider, and in the case of Copilot it varies by model, with a maximum of 264k. What would be the best practices for dealing with the context limit?
I'm not an expert on this problem, but could we just fork the current chat from a given point?
Then I could pick a prompt I sent maybe 2-3 prompts ago that I know left things in a good state to continue the work from.
Or could we have an option to automatically reduce the current context window by some method, like filtering, summarization, or replacing parts?
I used Tidewave last night to solve some longstanding UI nits on my app. Not having to manually fight CSS / Tailwind for hours to get the UI I want is well worth a little iterating with the agent! Very nice!!
Quick question… I used my existing Anthropic API key (Sonnet 4.5), and the small changes burned through credits pretty quickly compared to my normal agent workflow inside Zed using the same model. I'm assuming the difference is all of the JS testing code, which of course the Zed agent is not generating. It wasn't too bad, just a few dollars, but they were pretty small changes, and if I were using it all day it would quickly add up!
So I noticed Claude Code integration was recently released as an alternative to using Anthropic API keys, and looking at the documentation it appears to be the recommended path (since it's listed first in the guides and "Bring your own keys" is introduced as "You can also use…"). I was wondering: is the benefit of using Sonnet 4.5 through Claude Code from Tidewave just that we can use our Pro/Max subscriptions and save money, or are there also context/response quality improvements doing it that way versus just using our API keys?