How useful do you think AI tools are/will be? (Poll)

Following on from a conversation in the Tidewave thread - how useful do you think AI dev tools are right now and how useful do you think they might become?

There are multiple polls below; please vote in each one :icon_biggrin:

How useful do you think AI tools are to developers right now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that I consider them a necessity
0 voters

How useful do you think AI tools will be 5 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity
0 voters

How useful do you think AI tools will be 10 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity
0 voters

How useful do you think AI tools will be 15 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity
0 voters

How useful do you think AI tools will be 20+ years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity
0 voters

Honestly? If it were based on solid algorithms and not LLM “guessing”, then I would trust even the first tested version more than humans. Even in complicated systems, all you have to do is follow the rules (road signs). This shouldn’t be a big deal.

There are, however, much bigger problems, like the amount of energy a car can store, battery life, the economics, and fire risk. Those worry me much more than the software.

It depends … generic LLMs are terrible, especially because their temperature is usually set to 1.0 and some of them have predefined system prompts, which can cause a lot of problems with false information, “Voldemort syndrome”, and so on … A well-configured local LLM, on the other hand, well … I would still not trust it, but I would use it as a helpful tool (see the sketch below).
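
As a concrete illustration (my own sketch, not from the original post): assuming a local model served behind an OpenAI-compatible endpoint (for example Ollama’s `/v1` API) and the `openai` Python client, “well configured” can be as simple as pinning a low temperature. The URL and model name below are placeholders.

```python
# A minimal sketch, assuming a local model behind an OpenAI-compatible
# endpoint (e.g. Ollama's /v1 API). URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, not OpenAI's cloud
    api_key="not-needed-locally",          # most local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # whichever model you have pulled locally
    messages=[{"role": "user", "content": "Suggest a clearer name for this variable: user_dt"}],
    temperature=0.2,  # low temperature: less "creative", more repeatable answers
)
print(response.choices[0].message.content)
```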

For example, in Zed I’m testing the code-prediction feature, and I have to say it is as useful as it is untrustworthy. Yeah, I know that sounds weird, but if you pay attention to what the tool is suggesting, you can save a lot of time on things like variable renaming. That said, it’s still definitely far from the expected quality. For example, it has problems with names that share a prefix, so sometimes it suggests renaming a different variable than the one you intended.

I would say LLM+: tighter language-level integration and some other improvements would change a lot, even though it would still be nowhere near true AI. I don’t expect to find an AI coding partner any time soon. LLMs may improve a lot in the short term (compare cars before and after WWII). However, after a war, development may drastically slow down due to a lack of developers. It’s not as if you can just walk into an LLM factory and learn how to use the :toolbox:

My answer is the same for each of the following questions. Whether we talk about 10, 20, or 100 years, nobody can guess that far ahead; there are far too many possible situations that could happen. For example, Mount Fuji is very dangerous for the people of Japan, as it is still active and the next eruption is expected relatively soon, since it is rather cyclical. Imagine a world with a fallen Tokyo: it would directly affect at least 40 million people in Japan, and the collapsed economy would not help the rest of the world.

Another thing that could happen any day is a coronal mass ejection. The Sun produces them quite often, but not all of them are equally strong. However, we have already had a case where a major one hit. Imagine that suddenly all electronics are damaged, and think about where the factories are. Well … “good luck, AI”. :joy:

One nuclear tsunami could kill about 80% of the people in China, or most of Western Europe could suddenly be underwater. The US would need to give up on the USD (not likely), or it would start a war with China after losing the economic war we currently have.

It’s a fact that at least two nuclear-armed countries are currently preparing for war (India vs Pakistan), and at least three others have declared they would use nuclear weapons (Israel, Iran in revenge for an Israeli attack, North Korea). We really don’t need a US ↔ China conflict to make things go the wrong way. It’s like sitting on a powder keg.

Unfortunately, all of those (and many other) scenarios may happen even before 2030. Even if the bigger events somehow don’t happen, the future of LLMs is still a big mystery. There is no way to predict how much the algorithms will be changed for military purposes and what problems that will cause. Consider that the whole network is based on old military architecture, which leaves it open to many attacks from the inside.

It’s not just my opinion; as far as I know, LLMs are already officially used by Israel for attacks. Now consider that Israel plans the same on a bigger scale (Iran). Wishing for a peaceful world would not change the facts. We should rather think about how much current and upcoming wars will affect LLMs. That could help us make more long-term predictions.


That said, besides economic and political problems, we may also have problems related to the organisation of work. If so-called “AI” is forced through as a pure idea (as many other ideas are forced through in the EU), we will be in big trouble. Yes, it is as foolish to reserve resources only for LLMs as it is to limit the economy by CO₂ emissions, limit applicants by country, and so on … So many bad ideas have been forced through that we need to consider this one as well. In that case, in the short term many developers would not have a job, and in the long term we would have a shortage of developers on the market and would need decades to fix that.

What good can happen? We already have editor integration, so for now I don’t see much beyond incremental improvements. However, how about we widen the topic? Why do we assume that in 20 years we will still be using the same, merely enhanced, technology? I guess in the long term we may have many alternatives to the LLM concept. People may find solutions that are maybe not better, but more space-efficient, or use alternative algorithms to decrease resource usage.

New technologies may require implementing new file types to store information. Consider that nobody really knows all the characters in the UTF-8 table. How about creating a UTF-8-like, language-agnostic LLM table where data would be saved differently? That way we might be able to store and retrieve more data on the same device. Most probably it would also be a more efficient solution.

The other concept is controlling LLM tools directly with the brain. Of course, two-way communication is controversial, but even one-way would be a huge improvement. Just converting thoughts into LLM prompts could drastically improve work. Imagine being able to reorganise all of the files stored on your drive in just a few seconds, without typing even one letter …

Imagine that to perform a medical operation, all you need is the knowledge, directly controlling the machine with your thoughts while getting suggestions from an LLM-based tool. Creating every report would also be almost fully automated. No matter how tedious the paperwork, it could be automated as long as you can understand it.

Regardless of the concept, it’s not so much a matter of if, but of when it will happen. Rather than a revolution, we should focus on evolution: fixing current problems instead of introducing new ones.

Apropos, can you explain this temperature thing, please?

1 Like

I’ve also read about it recently:

Additionally with the APIs, you can control the “temperature” of the generation, which at a high level controls the creativity of the generation. LLMs by default do not select the next token with the highest probability in order to allow it to give different outputs for each generation, so I prefer to set the temperature to 0.0 so that the output is mostly deterministic, or 0.2 - 0.3 if some light variance is required. Modern LLMs now use a default temperature of 1.0, and I theorize that higher value is accentuating LLM hallucination issues where the text outputs are internally consistent but factually wrong.

Source: As an Experienced LLM User, I Actually Don't Use Generative LLMs Often | Max Woolf's Blog

Of course, on its own this would not magically solve all the issues, but a typical LLM available to end users is configured for conversational purposes.
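
To make the quoted explanation concrete, here is a small self-contained sketch (my own illustration, not code from the blog post) of what the temperature parameter does: it divides the model’s next-token logits before the softmax, so values near 0 behave almost greedily, while 1.0 keeps the model’s raw, more varied distribution.

```python
# Toy illustration of temperature scaling in next-token sampling.
# T -> 0 approaches greedy/argmax ("mostly deterministic"),
# T = 1.0 samples from the model's raw distribution,
# higher T flattens the distribution further.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    if temperature == 0.0:
        # Degenerate case: always pick the highest-scoring token.
        return max(logits, key=logits.get)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax over the scaled logits.
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Made-up scores for three candidate tokens: at T=0.2 you almost always
# get "def"; at T=1.0 the alternatives show up noticeably more often.
logits = {"def": 5.0, "class": 3.5, "import": 2.0}
print([sample_next_token(logits, 0.2) for _ in range(10)])
print([sample_next_token(logits, 1.0) for _ in range(10)])
```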

Where I live we have city-wide self-driving taxi services. I encounter them on the road frequently when I’m driving, and they are my preference if I need a taxi. The truth is, I feel safer around them and inside them than I do with the alternatives. Just the fact that they don’t suffer from selective/directional situational awareness, as humans do, gives them some advantages.

4 Likes

Just replying to say that I think “AI tools” will never be useful, because it’s a generic term that can mean anything, and if it can mean anything, it means precisely nothing.

If we’re talking about LLMs and tools that generate code, I still don’t think they’ll be useful. As I understand it, code needs to be written intentionally, and that’s how we build our mental model of what the code does when it runs. A thing that uses previously written code to try to predict what an “acceptable piece of code” would look like for a given input is not intentionally written, and as anyone who has made it past the first deploy of their career knows, just being able to compile/run doesn’t mean it works.

I think tools that explain what a piece of code does would be possible (like an auto-documenting tool, idk), but I doubt it would work as a business model, given the amount of resources and the cost of keeping it running. Maybe as a self-hosted thing for large enterprises.

My optimistic view of the current situation is that investors stop believing the overpromises of those AI CEOs and the bubble bursts… My pessimistic view is that usage of such tools becomes mandatory and devs become the new gig workers, fixing bugs introduced by AI tools on a bug-hunting platform that will be the Uber of devs.

edit: predictions based purely on gut feeling and on how every job has evolved over the past 10 years; I wouldn’t bet money on it (but I don’t bet money on anything :joy: )

2 Likes

Saw this on LinkedIn and I kinda agree with it (I don’t have the exact wording, but you’ll get the spirit of it):

2025 - 10% of engineers are fired for AI
2026 - 50% of engineers replaced with AI
2027 - 100% of engineers are replaced with AI
2028 - Senior engineers are hired back at exorbitant salaries to rewrite all the AI code

I think eventually AI tools (whatever those are) will replace engineers. Coding is just the clever application of math. But I think we have a long way to go before they replace all of us. However, I don’t see CxOs seeing things the same way. I think they will buy into the hype, replace us, and suffer for it.

2 Likes

Yep. I believe we are deep into this phase of the cycle right now.

I predicted it a year ago, and I have already heard of 2-3 cases of people being hired back at higher salaries. Hilarity is about to ensue and be witnessed in the coming years.

1 Like