Hello everyone,
After countless nights spent working on my side project, I’m now preparing to onboard pilot users from a potential customer. I’m seeking advice and pointers on how to create a solid production-readiness checklist and improve the reliability of my Absinthe GraphQL API.
Here’s some context:
- Backend: Elixir/Phoenix with Absinthe GraphQL API
- Frontend: JavaScript Single Page Application (SPA)
- Infrastructure: Fully containerized, running on Kubernetes
I’m particularly interested in:
- Best practices for monitoring and observability (metrics, tracing, logging)
- Effective error handling and resilience patterns
- Load testing and performance optimization strategies
- Security considerations specific to GraphQL and containerized applications
- Any common pitfalls or lessons learned from your own production deployments
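On the GraphQL security point, one concrete guardrail worth having before pilot users arrive is query-complexity limiting, so a single deeply nested query can’t take down the API. A minimal sketch using Absinthe’s documented complexity-analysis options (the route path, MyAppWeb.Schema, and the :posts field are placeholders for your own):

```elixir
# In the Phoenix router: cap query cost at the transport layer.
# `analyze_complexity` and `max_complexity` are Absinthe.Plug options.
forward "/api/graphql", Absinthe.Plug,
  schema: MyAppWeb.Schema,
  analyze_complexity: true,
  max_complexity: 200

# In the schema: tell Absinthe how expensive a list field really is.
field :posts, list_of(:post) do
  arg :limit, :integer, default_value: 10

  complexity fn %{limit: limit}, child_complexity ->
    # A list of N items costs N times its child selection.
    limit * child_complexity
  end
end
```

Queries that exceed the cap are rejected with an error before any resolver runs, which is cheap insurance against both accidental and malicious expensive queries.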
If you’ve had experience launching a similar stack or have insights into reliably scaling Absinthe APIs, your input would be incredibly valuable.
Thank you in advance!
If you want near-instant okay-quality feedback I’d do this:
- Run repomix (GitHub - yamadashy/repomix, a tool that packs your entire repository into a single, AI-friendly file) in your project’s directory. It creates a single XML file containing all your project’s files and their relative paths.
- Describe in detail what the goals of your project are. Be very, very detailed. Write this for an hour or two if you must.
- Feed both to Gemini Pro 2.5. Ask it whether your goals and what it sees in the code align.
I have trashed LLMs since they came out – for the record. But Gemini Pro is really good if you give it a detailed prompt and if you’re guiding it towards the goals that you hold dear.
Thanks this is actually great. I’m on the fence for LLMs but haven’t tried Gemini so I’ll give it a shot.
I hate scoreboards but still, check this out: https://web.lmarena.ai/leaderboard
And Gemini Pro just got an update today, so it should be even better. And it was already at the very top position.
I dread the possibility of thinking less well out of sheer laziness, pushing everything to LLMs, so I am working very hard to avoid getting addicted and lazy; a few times lately it has gotten to the point of having to get up from the chair and breathe deeply for a full 5-10 minutes.
But fair is fair, Gemini Pro is excellent and it has saved me significant time on several occasions lately.
But Gemini Pro is really good if you give it a detailed prompt and if you’re guiding it towards the goals that you hold dear.
Is it though? It gave me this code:
# Try to find the last space in this potential prefix.
# String.rindex/2 returns the 0-based index of the last occurrence or nil.
case String.rindex(potential_prefix, " ") do
  nil ->
    # No space was found within the first 'limit' characters.
    # In this case, we return the string truncated exactly at 'limit'.
    potential_prefix

  idx ->
    # A space was found at 'idx'.
    # We slice the original string up to this point to ensure we don't cut a word.
    String.slice(text, 0, idx)
end
Last time I checked, there is no rindex function in the String module.
Edit: and the bloated, useless comments drive me crazy.
Edit2: this is gemini 2.5 pro btw.
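For the record, the same truncate-at-the-last-space logic can be written with functions that actually exist. A sketch using :binary.matches/2 (the module and function names here are my own):

```elixir
defmodule Truncate do
  # Truncate `text` to at most `limit` characters without cutting a word,
  # falling back to a hard cut when the prefix contains no space.
  def at_word_boundary(text, limit) do
    if String.length(text) <= limit do
      text
    else
      prefix = String.slice(text, 0, limit)

      # :binary.matches/2 returns the byte offset of every occurrence,
      # so the last match is the "rindex" the generated code wished for.
      case :binary.matches(prefix, " ") do
        [] ->
          prefix

        matches ->
          {idx, _len} = List.last(matches)
          binary_part(prefix, 0, idx)
      end
    end
  end
end
```

For example, Truncate.at_word_boundary("hello world", 9) returns "hello", while a spaceless input like "abcdefgh" with limit 4 falls back to the hard cut "abcd".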
I don’t work for Google and I’m not here to do PR for them. You also didn’t show your exact prompt, so your result can’t be assessed well against it.
For what I used it, it’s fantastic. I’ve invested a lot in prompting with many details and priorities.
Obviously, at some point one has to wonder whether that investment isn’t costing more time and energy than just doing it yourself. If that is so, then okay, do it yourself. That particular choice was always there.
For what I’m doing in the last several weeks, Gemini Pro is supernaturally good. I have a very good idea of what must be done, I’ve given it literally hundreds of thousands of tokens of context, and it is performing fantastically at the code generation I need from it (long, annoying, verbose code that still has to be thorough and just, you know, verbose for no reason).
And I’m still triple-checking every line of code I asked it to generate. Much more often, I’m asking it for feedback on the code I’m writing.
So yeah, if it doesn’t work for you and you dislike it then fair enough.