Can you please share how exactly you think this will be possible, given the limitations of the technology employed? I understand there is anecdotal evidence of LLMs excelling at superficial, repetitive jobs and thus saving us time, but being good (let alone great) at software engineering takes real intelligence, not just pattern matching.
Indeed. And when you have a whole bunch of such little decisions, or, better, if the entire project is founded on just those, it becomes a liability not an asset.
Agreed, and unfortunately this can also be attributed to certain not-so-benevolent "industry" traits. First, it takes unpacking what the "industry" truly is. A part of it, again unfortunately, is made up of make-money-quick VC shops whose only interest is in capitalizing on trends. Truth, integrity and sustainability mean nothing to that particular part of the "industry", as they will indeed fuel the sound-good projects for as long as they have someone to sell their shares to. As soon as the trend reverses, so will their funding.
We have two such recent examples of the finding-a-greater-fool "phenomenon" other than the current "AI" hype:
- The 2015-2019 crypto-everything, wonder boy-bands era
- The 2020-2023 COVID lockdowns/panic we'll-never-exit-our-rooms-again-so-everyone-better-learn-to-code web app/business overshoot era
When studying an industry, it's often wise to pay attention to the flow of money (as well as the true intent behind it), for that's what has the upper hand over media peddling and, consequently, public opinion in general.
"It's possible to program a computer in English. It's also possible to make an airplane controlled by reins and spurs." – John McCarthy
Stop using Grok, it's hot garbage.
Ok, but as opposed to what? It's not like other LLMs are any different.
They are different. Use Opus 4.5 if you can, otherwise GPT 5.2 Codex or Gemini 3.
I'm using ChatGPT on a regular basis. Where Grok fails, so does GPT.
Currently Gemini Pro 3.0 is the best at everything minus coding. For analysis and discussions and planning, use that. And I'm fairly happy with its code as well.
`If LLMs had been around back when I was in school, I would have used them to write the first draft of every paper. I was always pretty good at revising my essays, but getting that first draft down was the hard part.`
I agree, but the reason my first draft takes so much time to get done is the amount of research and understanding that needs to be achieved before producing anything.
What I have observed while working with people who are able to skip that initial bump when tackling a new problem is that they haven't learned anything. They are just operators of patterns.
`All the little decisions early on add up to unmaintainable software if we're not careful.`
That's easy to say, but how can people be careful when they have skipped the process of learning and understanding the new problem they are trying to solve?
`It kind of seems like the industry is hoping that LLMs will get good enough that they can just rewrite it once it gets bad,`
It sure does, and that frightens me. The number of errors masked by convincing, half-baked solutions will become a future nightmare.
Yeah, I think it depends a lot on the problem as well. Some problems in industry are largely solved, so it's just reapplying tried-and-true patterns to accomplish something we've done before, but there are usually turn-key or off-the-shelf solutions to those problems (Wordpress for the CMS example).
It seems like LLMs are very well-suited to these kinds of "solved" problems. But that's not what engineers get hired to do. Companies don't hire engineers to solve problems that have turn-key solutions. They hire us to find new solutions to problems that don't yet have good or feasible solutions. Or to figure out how to keep the lights on when the last engineer created a really bad solution that is crushing the company under the weight of unrecoverable tech debt. Or to create a better experience for an already solved problem.
LLMs can be helpful if used as a tool that fits into the workflow, but I think a lot of devs are resistant to them just because they're constantly being forced down our throats. I know that's the main reason I tend to roll my eyes at AI stuff, at least. Some of it is pretty cool, but I'd prefer to wait until the tools become more established rather than being forced to be an early adopter of something that is situationally (un)helpful. Though, they have gotten a lot better over the last year.
Seems like once the hype cycle concludes and the bubble pops, LLMs will find their place in everyone's workflows, because then everyone will be able to utilize them based on their actual benefits to the particular problem being solved. But right now it seems like there is an unrealistic amount of pressure on devs to use AI even if it's not all that helpful for the task at hand.
That's the part that's weird to me. I'm new to the tech industry, but I'm a computing history nerd, so I've read about most of the major technology cycles we've been through since the 50s, and it seems like this is the first time that the entire industry is pushing people to use a single tool without really considering its merits at all. Even people that have no vested interest in it. I guess if we compare AI to a programming paradigm and not a tool, then it makes more sense.

Everyone had to know OOP in the early 2000s, even if they were applying for a job doing something like data analytics where OOP makes no sense. And the dominance of OOP in the early 2000s is also eyebrow-raising, because OOP is a pretty bad model for HTTP 1-based applications since HTTP 1 is stateless. FP actually fits much nicer with traditional web apps, but OOP dominated right after the web took over as the next big thing. We also started switching over to primarily functional-style stuff (React Function Components, Elixir, etc.) after stateful connections (HTTP/2, HTTP/3, websockets) started becoming commonplace, which is really weird because stateful connections are actually a pretty good fit for OOP patterns (though I still prefer immutability everywhere personally).
Unfortunately I think that may be because this is a socio-political watershed moment more than a technological one.
Programming is pattern operation. We all assemble patterns: OTP behaviors, Ecto changesets, Phoenix contexts, supervision trees, etc. The question isn't "patterns vs. not patterns," it's whether you can evaluate and adapt patterns to your situation. LLMs can be immensely helpful for that regardless of experience level.
The reason a first draft for a school paper takes so much time is that the student does not know what questions to ask or where to find the answers. Compiling the information and arranging it in a coherent way makes up the vast majority of the overall process.
We don't have any evidence that the research itself is what makes one achieve true learning. I suspect that is false, because everyone learns differently. This is why there is an entire field dedicated to how people learn.
In addition, are you sure that the ability to produce anything useful is contingent upon how much time one has spent researching and how much one has struggled while learning? Because if that is the case, we should forbid Stack Overflow and even Google, and force people to go back to digging up and reading obscure tomes in libraries.
Even if a CMS is "solved," the hard work in real orgs is:
- Integration: auth, SSO, billing, data warehouse, search, logging, permissions
- Migration: content + users + URLs + SEO + analytics + API quirks
- Non-functional requirements: latency, uptime, compliance, privacy, audit, multi-region
- Custom workflows: approvals, roles, editorial pipeline, bespoke content models
- Ongoing change: product tweaks, A/B tests, expanding to and customizing the solution for new markets, regulation, internal re-orgs
That's engineering work, and it's overwhelmingly "glue + adaptation + evolution," not "invent a new CMS." So the assertion that companies don't hire engineers for problems that have turn-key solutions is quite false. In most serious orgs, even turn-key solutions need a lot of customization.
If I had to create a taxonomy of work in this field, it would be:
- Commodity work, such as CRUD, integrations, infra plumbing, standard patterns.
- Local novelty, i.e. a new feature for your product (but not new to the world).
- Frontier novelty, as in, genuinely new algorithms/research-like work.
The fact of the matter is that the overwhelming majority of software engineering falls into the first two categories. Most of the novelty in software engineering is local novelty.
This mindset made sense and could be justified in 2024. In almost-2026, it is not only impractical, but could actively hurt your career, especially for someone new to the industry such as yourself. It is akin to refusing to use Google in 2005, and refusing to use Stack Overflow or similar sites in 2015. You're much better off actively learning how to use LLMs effectively, rather than waiting until some arbitrary time when the tools become more "established".
I'm sure we have some graybeards here who might be compelled to share "well, back in my day..." type stories, but I think even they would be hard-pressed to argue things were actually better back then, when they didn't have modern tooling.
Not sure what you mean here. The tools are numerous and their merits are considered and debated daily, both in the industry and broader society.
I think I know what you're trying to say here regarding technology, but a huge portion of pedagogical research proves the importance of critical-thinking skills that are only developed through effort. Neither Google nor AI nor any particular library "tome" can tell you how to think. The key is being exposed to problems that are ambiguous and working through various potential answers, and even frameworks for answers, yourself, and precisely not having one spoon-fed to you by any other person or device.
Hmm... I would rather use context-aware naming here. I don't believe the code itself is good enough unless you expect the LLM to predict what you want to change in it. However, there's a real point here: instructing the LLM to follow some style of code gives it lots of useful hints. On one side of the coin it's a huge time saver, but on the other it's just another way to waste your time.
Personally I find the Zed editor useful, as I only use "edit predictions". If the short prediction (basically a "smart" form of snippet generator) is wrong, I can just ignore it and work on my own. This is the only way LLMs are helping me, for now. They do not double my code output or my speed; that's a myth, as LLMs are way too unskilled for this.
I believe most people don't see the real value in what you describe. Let me explain what it is and what criteria I believe work best. You are using intuition, and it's a very powerful skill if you manage to control it. This is where LLMs (at least in their current form) are never going to replace people like you.
The criteria are quite obvious... You simply "predict" (or rather, visualise) something without really focusing on thinking about it (that's intuition). So what are the criteria? Simple: from your experience, you can guess what kind of data the LLM was trained on. It's easiest to see with examples...
One thing I would never search for using an LLM is, for example, a list of characters, especially in niche media/titles. It's way easier to just enter "list of characters in ... wiki" into a search engine than to construct a productive prompt for an LLM. Most probably the LLM was not trained on such data, so it's rather going to predict character names. That's just a waste of time.
So where are LLMs good? When you've forgotten a name (like on wiki fan pages). You can describe something in your own words, a lot of words, but none of them hits the correct name used by keyword indexing in search engines. I like to say it's a kind of vibe browsing vs. (old-school) constructive browsing. Neither is wrong. We are humans, so it's more than fine to forget one or two things. However, we were "trained" for the more old-school, constructive tagging of content. We humans simply and naturally like to categorise / put a tag on everything, including the people we just met (which is rather considered a bad habit).
Literally any kind of tool is good when you know how to use it. Companies want to replace cars with trains without laying more rails. It's not that you don't see how to use an LLM well; it's like everyone around you wearing masks yet still infecting everyone with viruses (you really have to know how to properly use a mask, btw).
It's just another effect of the Prussian school system forced on the so-called "Western World" and their slaves... I mean, allies. You are just supposed to follow the orders. They are stupid? Even better! Show your contribution to the community? Sounds like a Soviet Republic? Think twice... where did the "red revolution" really come from? Germany (or rather its influence on Russia). I recommend not following school books, and instead reading what certain people were actually saying and what the plans for the economy were. It's kind of eye-opening that we often fight in opposite camps but live under the same type of regime. So, as said... it's not about using LLMs the right way; it's about using them to replace middle-class people.
We react with fear hearing how many thousands of employees Big Tech fires every year. However, nobody is really looking at the investments. How much do they invest yearly? Hundreds of billions into LLMs alone! Let's do some simple math and we'd see that the fired people could be paid a yearly salary... every month. This is exactly how they are "reducing costs". To be fair, some investment is indeed needed simply to make things cheaper in the future, but if you ever want to reduce your costs by giving me over 12x more money than you spend every month, then really, don't feel bad about it and just send that money, pretty please. I would kindly accept your amazing "investment".
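To make the back-of-envelope math concrete, here is a tiny sketch; every number in it is an assumption picked for illustration, not sourced data:

```python
# Compare an assumed yearly AI capex figure with the payroll of
# everyone laid off. All inputs below are illustrative assumptions.
annual_ai_capex = 300e9       # assumed combined yearly LLM/AI investment, USD
layoffs_per_year = 100_000    # assumed industry-wide headcount reduction
avg_total_comp = 250_000      # assumed fully loaded cost per employee, USD/year

laid_off_payroll = layoffs_per_year * avg_total_comp   # 25 billion USD/year
ratio = annual_ai_capex / laid_off_payroll
print(f"Capex is {ratio:.0f}x the yearly payroll of everyone laid off")
```

Under these made-up inputs the investment is 12x the payroll of the people let go, which is where the "a yearly salary every month" quip comes from.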
People say that I should not touch political topics, but those situations map exactly 1:1, so they are just perfect as examples. Besides that... if the topic is about work, and politicians decide that we would in fact not work because of some virus, then why shouldn't we touch such a topic? Think for yourself and remember my words... How did forcing political ideas end for Europe and the US around a hundred years ago? Recession? Not possible! We have amazing politicians and the perfect GREEN economy! They will save us on a white horse! Wait, did you say "no"? Are you sure?
I kinda like your cynicism. Also, thank you for being brave enough to pull the garbage of the policies we all live under out into the open. Too many tech people try to stay "neutral" (as if living in a vacuum) and never speak publicly of politics at all while, at the same time, the smell is reaching the skies.
You're right to rant about it. It is another bubble, and it is intentional (as always). In finance, you can collateralize the assets you invest in (to take on more debt), but you can't collateralize liabilities (the wages of your staff), so what's going on is totally unsurprising. And the only reason why "They" are doing it again is because they can. We've taught "Them" they can, because the last time around (17 years ago, to be more precise) precisely none of the bad actors ended up in jail with their assets seized. On the contrary, the very taxpayers who were thrown under the bus ended up bailing "Them" out. Sam Altman has been openly speaking of the taxpayer picking up the bill again.
It's a big club and we ain't in it.
Friendly advice: kill all traces of this rebellious thinking in your brain.
"Going against the grain" is a highly unproductive way to engage with the world. Use your own judgement. Give LLMs an honest, detailed and intellectually objective chance, then make a decision. Reject LLMs if you must, but do it after informing yourself.
We have way too many people who roll their eyes at things without evaluating them on their objective merits. Be better than the crowd.
I'll also echo @egeersoz here, who wrote my stance better than I could: the LLM landscape evolved hugely during 2025. Just one year ago I laughed at ChatGPT and a few others; they were almost helpless. Nowadays I can write a prompt of multiple detailed paragraphs to Gemini and get insights that would otherwise take me years, and a lot of connections, to reach.
As per above, train your critical thinking with objective information and scientific experiments. Informed decisions > all other decisions.
Rolling my eyes doesn't mean refusing to use or evaluate the technology, and it doesn't mean being "rebellious". Just that it's exhausting to constantly be bombarded with tertiary crap that isn't useful to my regular tasks. It's similar to any other form of marketing. It's exhausting because it's a constant barrage, and it's impossible to survive without tuning out most of the noise most of the time. And most AI news is in fact nothing more than marketing.
Just rolling over and accepting whatever the company or industry throws at you is not a healthy way to interact with the professional world. Maybe it's different elsewhere, but in the US where I'm located, this approach is how employees end up working excessive unpaid overtime with nothing to show for it but a 1% raise at the end of the year and divorce papers from their spouse. God forbid you actually use PTO for something other than a life-threatening medical emergency. Might as well just lay yourself off at that point. Being defiant or rebellious is also a terrible idea. When I was in my 20s I thought that way, but eventually I realized it's better to let sleeping dogs lie and not burn a bridge, at least until you've crossed to the next island.
I should clarify here that I'm speaking of the US workplace in general and not my current employer. My employer is actually very good about PTO and rest time with family, etc., but in my experience in the US workforce, that's the exception, not the rule.
And FWIW, I've been in the workforce for over 10 years, and I've actually changed careers twice and burned out once, so I'm no stranger to navigating the politics and unreasonable demands of industry. I'm new to the tech industry/professional software dev, but I've been programming my entire adult life and I've been in the field and in the conference room at multiple companies in multiple industries in past careers.

The tech industry seems to be pretty much the same as any other industry as far as how to get hired and keep a job go. It just seems to be loaded full of executives that are abnormally insulated from other industries, which means some decision makers are less predictable. Once the bubble pops, AI will likely become one of those "required skills" that you need to have at least basic competency in for most jobs, and to be an expert in for jobs where it's actually relevant. Similar to Excel for the Accounting field (which I worked in before). You'd be hard-pressed to get any job in the Accounting or Office Administration fields without basic Excel skills like macros and pivot tables, but most jobs don't care if you can do actual data analysis in it. It's just an everyday tool that everyone needs to be familiar with. It's much more important that you actually understand double-entry accounting and how to three-way match an invoice. But even with those fundamentals, you only really need a basic understanding of them to get an entry-level role.
Anyway, I believe I've been pretty fair/moderate in most of my messages on this thread, but that hasn't really gotten through, it seems. Maybe I've been less amicable than I thought. Discussion seems to be very polarized on this topic in general, so it's hard to have a productive discussion about the actual pros and cons of the technology, and I'm sure I'm no exception to that, even if I feel like I am. All things in software are tradeoffs, but AI seems to be one of those topics that invites the extreme sides, like OOP vs FP from a while back. I appreciate you taking the time to offer the objectively good advice to evaluate the technology and then make an informed decision. I just want to point out that there seems to always be an assumption among AI advocates that when someone says they don't like AI or don't see its usefulness in a particular domain/workflow (e.g. code gen), that means they haven't evaluated it for themselves and their decision was not an informed one. What may be super useful for one person might be completely useless for another. CSS is directly useful for my regular work (even though I'm a BE dev). Fragment shaders are not (though knowledge of how they work is remarkably helpful). A game programmer would be on the opposite side of that, though. AI is not a magical wonder-technology that is immune to this fundamental truth. It's probably the most widely applicable technology that's hit the industry to date, but we still have a way to go before we finally find the one true hammer.
Now, as far as discussion goes, I'm actually curious what the different sides in this thread think about the economics of AI. Engineers seem to have a tendency to focus on the technical merits of these things without really considering the unit economics. That's part of how we ended up with $100k+ cloud bills, but that's a whole other can o' worms. Given that none of the AI companies making these models have managed to make an actual profit off of them, and they are still relying on the ponzi-esque VC/grant-infusion business model to keep going, it seems pretty risky for any company to become too heavily reliant on AI tooling unless the development/operation of the models themselves can be made much more financially efficient. If OpenAI or Anthropic control the supply, and most of the industry can't meet their client deliverables without these tools, then things will get bad once investors decide they finally want their return. I haven't been keeping up with the latest gossip in the field though, so maybe this has changed?
You have my sincerest apologies for the negative assumption I have made about your comment. Your grace in responding to that has been humbling. Thank you.
I should clarify I am in no shape or form an AI advocate at all. In fact, I started off extremely cynical about it, and it took me most of 2025 to come around. I am still using LLM agents very cautiously and I have not paid for a subscription to this day; I am happy to copy-paste code in both directions (editor to web UI and back) because it also gives me a little more time to think and ruminate over things. I also happen to believe that vibe-coding, with its extremely fast feedback loop, is very demeaning for the human operator: they barely utilize their brain during the session. They are basically a meat robot that must confirm at certain milestones that the machine should continue. I mean... WTF?
My point was more along the lines of two things: (1) make sure you are not missing out on the chance to skip super boring work that does not develop you in any way (some of the code-gen work that agents do is very useful in this regard), and (2) try to integrate the useful parts of the agents into your daily flow because, let's put it bluntly, we're currently going through an arms race. I don't mean to inject FOMO into anyone, but the productivity boost I have experienced first-hand when using an agent has been a little bit scary, and I am pretty sure many business people will wake up to the fact that they should start pressuring us into being even faster and more productive in general. How (or even if) we will fight back is a topic for another, likely much more grim, thread.
I would say this is both a parasitic and a symbiotic relationship; the LLM providers cannot raise prices too much because they'll lose a lot of customers who use them casually and yet pay subscriptions. For example, a lot of people pay for Claude Code on the $20 plan and yet use it at most 10 hours a month; Anthropic probably saves the most money and makes the biggest profit on these customers.
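The hunch that light subscribers are the profitable ones can be sketched with a toy model; the per-hour inference cost here is a made-up assumption, not a published figure:

```python
# Toy unit-economics model for a flat-rate LLM subscription.
# Both inputs are illustrative assumptions, not any provider's actual figures.
subscription_price = 20.00   # USD/month, e.g. the $20 plan
cost_per_hour = 1.50         # assumed inference cost per hour of active use

def monthly_margin(hours_used: float) -> float:
    """Provider's profit (or loss) on one subscriber for the month."""
    return subscription_price - hours_used * cost_per_hour

print(monthly_margin(10))    # casual user: modest profit
print(monthly_margin(100))   # heavy user: a substantial loss
```

Under these assumptions a 10-hour/month user yields a $5 profit while a 100-hour/month user costs the provider $130, which is exactly why flat-rate plans depend on the casual majority.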
Any capitalist trying to utilize the fairly easy (and I'd say brainless) pattern of "focus on the top X% that pay the most" is doomed to ruin their business.
But then again, a lot of the MBA-infested CEOs have ruined companies many, many times, so who is to say that the market is rational?
As for a prediction, I actually cannot make one. The current situation looks unsustainable any way I look at it. Perhaps the only way out that's good for everyone is even further innovation that makes LLM agents much more energy-efficient to run... and the LLM agent companies finding a way to have the agent do part of its work on the user's machine. IMO their only game is to drastically reduce expenses, because increasing revenue by a factor of 100x is, shall we say, unlikely.
(Which then raises the question: what will they do with those metric tons of GPUs after the bubble pops?)
Thanks for your clarification. I agree about exploring AI in the current landscape because it's very, very likely to become a ubiquitous tool in the industry in one way or another, even if we can't predict exactly how. I just don't personally think it's as critical to be an expert in it as the general discourse/internet seems to suggest, so I think I tend to be a bit reactionary when I feel like people are using that "you'll be left behind" kind of logic. There are still jobs for COBOL developers, after all.
For my part, I'm currently experimenting with using AI assistants for debugging. I'm mostly an R&D programmer in my current role, so the kind of monotonous grunt-work tasks that code gen would help with are few and far between for me (something that's rare for software engineers, it seems). But LLMs are good at tracing through a large codebase and spotting anomalies, so using them to find a thread to pull when I'm trying to root-cause a caching bug or race condition that I don't have any leads on can be a big time saver. I imagine that will probably become my main use case for them, since I don't do much throw-away prototyping.