Interesting discussions. One aspect of predictions about widespread agentic AI adoption that often gets overlooked is trustworthiness.
In my opinion, within the next five years, code-generation quality, safeguards, agentic AI SDLC loops, and so on could improve to the point where the risk of introduced vulnerabilities is not significantly greater than that of average human software development.
But the crux of the matter is: how much would you (or anyone) ever trust AI over human developers to fully develop and run business-critical software? And what sort of proof is actually going to convince your average business leader? (My guess: it would require widespread adoption with very few AI-introduced vulnerabilities, so it's a catch-22.) Throw in potential job loss, unionization, legalities (was the data the LLMs were trained on really permissible?), politics, etc., and the way forward is very bumpy.
After the whole world's exposure to LLM hallucinations, AI's trustworthiness is already severely compromised, and, despite the very loud AI evangelists, trepidation about it is at an all-time high. For all the FOMO driving businesses to experiment with it, it will only take one well-publicized AI-introduced vulnerability to set agentic software development back years.
With that in mind, I think current software best practices like static typing (whether static typing counts as a best practice is, of course, debatable!), along with high-level abstractions of all kinds, including non-machine-code languages, remain crucial for human verification, and I don't foresee them going away any time soon.
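To make that concrete, here's a tiny hypothetical TypeScript sketch (my own illustration; `Invoice` and `totalCents` are made-up names) of how a static type lets a human reviewer catch an AI-introduced bug at compile time rather than in production:

```typescript
// Hypothetical example: the type signature constrains generated code.
type Invoice = { id: string; amountCents: number };

function totalCents(invoices: Invoice[]): number {
  // If an AI had mistakenly summed `inv.id` instead of `inv.amountCents`,
  // the compiler would reject it: string is not assignable to number.
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

console.log(totalCents([
  { id: "a1", amountCents: 250 },
  { id: "b2", amountCents: 1000 },
])); // 1250
```

The point isn't this toy function; it's that declared types give a human a cheap, machine-checked surface to verify against, which pure machine-code output wouldn't.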
A humanistic, AI-enhanced software development process (such as Spec-then-Code*) is, for now, the infinitely saner, less apocalyptic way forward for all human parties involved. My $0.02 anyway.
* GitHub - mosofsky/spec-then-code: LLM prompts for structured software development, because quality takes more than just "good vibes" (from "Vibe Coding" to "Vibe Software Engineering").