Reporting back after a couple of weeks of using Sourcery that @cblavier shared. Initially the tool was a nuisance: it generated too much content and interfered with human-to-human communication. We had to silence most of its options and add a couple of extra handcrafted prompts to make it behave reasonably.
With those additions, the generated comments started to be actually useful. It is not able to detect larger problems such as wrong architecture decisions, but it is good at spotting typos, missing validations, edge cases, etc. It reduces the burden of manual reviews and gives the team feedback faster.
Overall, the tool is in the nice-to-have category, but not yet in the must-have category.
I think this should be highlighted, as it is the silver lining. Such tools are just linters on steroids (and I doubt they are even that consistent); it would be a mistake to assume they can replace the human factor in code reviews.
I also think it’s important to point out that this might be a shortcut for companies that never learned how to set up and use already existing static analysis tools like credo (especially with custom-made check rules).
It even has a GitHub Action if you don’t want to use it only as an offline CLI tool (which is how I use it exclusively, though a GitHub Action is super useful for CI/CD).
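On the custom check rules point, in case it helps anyone: a custom Credo check is just a module that implements run/2 and returns issues. Here is a minimal, untested sketch; the module name and the "flag TODO comments" rule are made up purely for illustration, and it naively scans raw source lines rather than the AST.

```elixir
# lib/my_app/checks/no_todo_comments.ex (hypothetical example check)
defmodule MyApp.Checks.NoTodoComments do
  use Credo.Check,
    base_priority: :low,
    category: :readability

  def run(%SourceFile{} = source_file, params) do
    issue_meta = IssueMeta.for(source_file, params)

    source_file
    |> SourceFile.source()
    |> String.split("\n")
    |> Enum.with_index(1)
    # Naive line scan: flags any line containing "TODO", even inside strings.
    |> Enum.filter(fn {line, _line_no} -> String.contains?(line, "TODO") end)
    |> Enum.map(fn {_line, line_no} ->
      format_issue(issue_meta,
        message: "TODO comments should be turned into tracked issues.",
        line_no: line_no
      )
    end)
  end
end
```

Once that module is compiled in the project, you enable it by adding `{MyApp.Checks.NoTodoComments, []}` to the checks list in `.credo.exs` and running `mix credo` as usual.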