Exactly.
A bit off topic, but I think the issue with machines is that they bear no consequences. And even if they did one day, there is no real way to condition them by punishing wrongdoing, because punishment implies the loss of something essentially irrecoverable (such as lifetime, or life itself). Unlike a human's, a machine's life will always be easy to replicate or extend, and it will "know" it (besides which, unless we program machines to "think" otherwise, the survival instinct doesn't even apply to them).
So, ultimately, the question that arises is who pays the price. Who pays the price if a Tesla gets something wrong (because reasons) and plows into a crowd? And no, it's not the same thing as a mechanical failure. My guess is that governments will grant manufacturers full immunity, since there will always be an insurance policy for the unfortunate ones.
But still, it's not the same. A person at fault in a traffic accident will carry the guilt for the rest of their life, insurance or not, and that guilt will shape their future decisions. If an AI kills people by accident, no one will care. The passengers in the vehicle won't be, or feel, guilty, because they had no control over the course of events. The manufacturer won't even know about it except as a statistic. The insurers will write it off against their reserves (again, subject to statistics).
Hallucinations or not, it won't be a nice place to live if no one bears any personal consequences.
Yep, a really important topic! As a society, I think most people would agree we want to avoid a not-impossible reality where AI agents are granted personhood rights akin to corporations', leading to even further obfuscation and deflection of accountability and liability.
My personal feeling is that agentic AI will only be acceptable in a sane future if we limit non-deterministic AI to the compile-time stages and bar it from runtime entirely.
In your example, we might allow non-deterministic AI to help create code that is then verified by humans before it’s put into cars, but never allow non-deterministic AI to drive cars, detect humans, etc.
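To make that split concrete, here is a minimal sketch (in Elixir, since that's this forum's home turf; the module name and thresholds are hypothetical, not anything from an actual vehicle). The point is that an LLM may have drafted the module during development, but what ships after human review is ordinary deterministic code, and nothing calls a model while the system is running:

```elixir
# Hypothetical sketch of the "non-deterministic AI at compile time only" rule.
# An LLM might draft this module during development, but after human review
# what ships is plain deterministic code: no model inference at runtime.
defmodule BrakeController do
  # Time-to-collision threshold, in seconds (illustrative value).
  @ttc_threshold_s 2.0

  # Pure function: the same inputs always produce the same decision,
  # so the behaviour is auditable and unit-testable before deployment.
  def decide(distance_m, speed_mps) when speed_mps > 0 do
    if distance_m / speed_mps < @ttc_threshold_s, do: :brake, else: :cruise
  end

  def decide(_distance_m, _speed_mps), do: :cruise
end

# BrakeController.decide(10.0, 20.0)  #=> :brake  (time to collision: 0.5 s)
# BrakeController.decide(100.0, 10.0) #=> :cruise (time to collision: 10 s)
```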
To me, the problem with this new paradigm isn’t trust but a whole new way of working.
Trust isn't needed if we treat bot-generated code like any other kind of code: it needs to be reviewed, and the developer is always responsible for being an expert on the code they ship.
I use Claude Code + Tidewave MCP + my own documentation and examples, and Claude is able to generate useful, correct code and test it by building on what already exists, then suggest refactors if this "inspiration" leads to too much duplication.
The days I use it are absolutely exhausting! The speed boost comes with so much code to read, review, think about, and potentially correct… The code has to be read and reviewed thoroughly, and patterns have to be evaluated and thought about at the system level.
In a single day, writing part of a feature is so, so much gentler than thoroughly reviewing N whole features and their tests… even when the model follows my style (which is even more exhausting: since it looks like code I would have written, errors are harder for me to spot than in code that looks like someone else's, because it all seems too familiar at first glance).
So, to me, the problem of trust is perhaps just pushed a bit further along: what I do not trust is AI-generated code being reviewed by another AI following a spec.
If we are optimistic about future AI advances, then AI will be able to review itself far better than any human could.
And the metrics that matter will be the same as always: cost and accuracy. If an AI drives a car better than human-written algorithms and causes fewer accidents, it will be used.
I’m not that optimistic about all of that, though.
This point may sound valid, but only from a collectivist point of view, one that completely disregards any form of individual tragedy. Put otherwise, it may sound valid until it happens to you.
I 100% agree with that. I prefer to drive myself, and the idea of being driven by an autonomous car is unnerving to me for the reasons you just stated. Whether the car is driven by AI or by algorithms designed and/or reviewed by humans leads to the same moral problem.