Honestly? If it were based on solid algorithms and not on LLM “guessing”, then I would trust even the first tested version more than humans. Even in complicated systems, all you have to do is follow the rules (road signs). This shouldn’t be a big deal.
There are, however, much bigger problems: the amount of energy a car can store, battery life, economics, and fire risk. Those actually worry me much more than the software.
Depends … generic LLMs are terrible, especially because their temperature is usually set to 1.0, and some of them have predefined system prompts, which can cause a lot of problems with false information, the “Voldemort syndrome”, and so on … A well-configured local LLM, on the other hand … well, I still would not trust it, but I would use it as a helpful tool.
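To show what I mean by temperature, here is a minimal sketch with made-up logits (not any particular model’s API): at 1.0 the model’s raw distribution is sampled as-is, while lower values concentrate probability on the top choice.

```python
# Minimal sketch of temperature sampling; the logits are made up.
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    # Scale logits by 1/T; softmax weights are proportional to exp().
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return random.choices(list(logits), weights=weights)[0]

logits = {"return": 2.0, "raise": 1.5, "banana": 0.5}
print(sample(logits, temperature=1.0))  # noticeably random output
print(sample(logits, temperature=0.2))  # almost always "return"
```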
For example, in Zed I’m testing the code prediction feature, and I have to say it’s exactly as useful as it is untrustworthy. Yeah, I know that sounds weird, but if you pay attention to what the tool suggests, you can save lots of time on variable renaming. That said, it’s still definitely far from the expected quality. For example, it has problems with names that share a prefix, so sometimes it suggests renaming a different variable than the one you intended.
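To illustrate the prefix problem (a hypothetical snippet with made-up names, not Zed’s actual output):

```python
# Hypothetical illustration of the shared-prefix issue; the variable
# names are invented, and this is not Zed's exact behavior.

user_id = 42        # suppose you start renaming this to `account_id`
user_id_cache = {}  # an unrelated variable that shares the prefix

# A predictor keying on the common prefix may now also suggest
# renaming `user_id_cache` to `account_id_cache`, which is exactly
# the undesired rename described above.
```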
I would say an “LLM+”, i.e. tighter language-level integration plus some other improvements, would change a lot, while still not being anywhere near true AI. I don’t expect to find an AI coding partner any time soon. LLMs may improve a lot in the near term (see cars before and after WWII). However, after the war, development may drastically slow down due to a lack of developers. It’s not like you can walk into an LLM factory and learn how to use one.
The answer is the same as for any such question: no matter whether we talk about 10, 20, or 100 years, nobody can guess that far ahead, because there are far too many possible situations. For example, Mount Fuji is very dangerous for the people of Japan, as it is still active and the next eruption is expected relatively soon, since its eruptions are rather cyclical. Imagine a world with a fallen Tokyo: it would directly affect at least 40 million people in Japan, and the collapsed economy would not help the rest of the world either.
Another thing that could happen any day is a coronal mass ejection. The Sun produces them quite often, but not all of them are equally strong. However, we have already had a case when a strong one hit Earth (the Carrington Event of 1859). Imagine that suddenly all electronics are damaged, and think about where the factories are. Well … “good luck, AI”.
A single nuclear tsunami could kill about 80% of the people in China, or most of Western Europe could suddenly end up underwater. The US would need to give up on the USD (not likely), or it would start a war with China after losing the economic war we are currently in.
It’s a fact that at least two nuclear-armed countries are currently preparing for war (India vs. Pakistan), and at least three others have declared readiness to use nuclear weapons (Israel, Iran in revenge for an Israeli attack, North Korea). We really don’t need a US ↔ China conflict for things to go the wrong way. It’s like sitting on a powder keg.
Unfortunately, all of these (and many other) scenarios may happen even before 2030. Even if the bigger events somehow don’t occur, the future of LLMs is still a big mystery. There is no way to predict how much the algorithms will be changed for military purposes and what problems that will bring. Consider that the whole Internet is built on old military architecture (ARPANET), which leaves it open to many attacks from the inside.
It’s not just my opinion; as far as I know, LLMs are already officially used by Israel for attacks. Now consider that Israel plans the same on a bigger scale (Iran). Wishing for a peaceful world will not change the facts. We should rather think about how much current and upcoming wars will affect LLMs; that could help us make more long-term predictions.
That said, besides economic and political problems, we may also face problems related to the organisation of work. If so-called “AI” were mandated as a pure idea (as many other ideas are mandated in the EU), we would be in big trouble. Yes, restricting resources to LLMs alone is as foolish as limiting the economy by CO₂ emissions, filtering applicants by country, and so on … So many bad ideas have been pushed through that we need to consider this one as well. In that case, in the short term many developers would not have a job, and in the long term we would have a shortage of developers on the market, and we would need dozens of years to fix that.
What good can happen? We already have editor integration, so for now I don’t see much beyond incremental improvements. But how about widening the topic? Why do we assume that in 20 years we will still be using the same technology, only enhanced? I guess that in the long term we may have many alternatives to the LLM concept. People may find solutions that are perhaps not better, but more space-efficient, or use alternative algorithms to decrease resource usage.
New technologies may require implementing new file formats to store information. Consider that nobody really knows all the characters in the UTF-8 table, yet it works. How about creating a UTF-8-like, language-agnostic LLM table where data would be stored differently? This way we might be able to store and retrieve more data on the same device. Most probably it would also be a more efficient solution.
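To make the UTF-8 analogy concrete, here is a minimal sketch (my own illustration, not a real format) of the key trick UTF-8 uses: variable-length encoding, where frequent, low-numbered entries of a hypothetical table cost one byte and rarer ones cost more.

```python
# Sketch of a UTF-8-style variable-length encoding for entries of an
# imagined "LLM table" (hypothetical format). Like UTF-8, the high
# bit of each byte marks whether another byte follows.

def encode_id(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F      # 7 payload bits per byte
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)         # final byte
            return bytes(out)

def decode_id(data: bytes) -> int:
    n, shift = 0, 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return n

print(encode_id(42).hex())            # '2a'     -> 1 byte
print(encode_id(100_000).hex())       # 'a08d06' -> 3 bytes
print(decode_id(encode_id(100_000)))  # 100000, round-trips cleanly
```

The point, as with UTF-8, is that common entries stay cheap while the table itself can grow arbitrarily large.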
Another concept is controlling LLM tools directly with the brain. Of course, two-way communication would be controversial, but even one-way would be a huge improvement. Just converting thoughts into LLM prompts could drastically improve work. Imagine being able to reorganise all the files stored on your drive in just a few seconds, without typing even one letter …
Imagine that to perform a medical operation, all you need is the knowledge: you directly control the machine with your thoughts while getting suggestions from an LLM-based tool. Creating every report would also be almost fully automated. No matter how tedious the paperwork, it could be automated, as long as you can understand it.
Regardless of the concept, it’s not really a matter of if, but when it will happen. Rather than a revolution, we should focus on evolution: fixing current problems instead of introducing new ones.