Change my mind - the optimal AI prompt is the code itself


Can you please share how exactly you think this will be possible given the limitations of the technology employed? I understand there is anecdotal evidence of LLMs excelling at superficial, repetitive jobs and thus saving us time, but being good (let alone great) at software engineering takes real intelligence, not just pattern matching.

Indeed. And when you have a whole bunch of such little decisions, or, better yet, if the entire project is founded on just those, it becomes a liability, not an asset.

Agreed, and unfortunately this can also be attributed to certain not-so-benevolent “industry” traits. First, it takes unpacking what “industry” truly is. A part of it, again unfortunately, is made up of make-money-quick VC shops whose only interest is in capitalizing on trends. Truth, integrity and sustainability mean nothing to that particular part of the “industry” as they will indeed fuel the sound-good projects for as long as they have someone to sell their shares to. As soon as the trend reverses, so will their funding.

We have two such recent examples of the finding-a-greater-fool “phenomenon” other than the current “AI” hype:

  1. The 2015-2019 crypto-everything, wonder boy-bands era
  2. The 2020-2023 COVID lockdowns/panic we’ll-never-exit-our-rooms-again-so-everyone-better-learn-to-code web app/business overshoot era

When studying an industry, it’s often wise to pay attention to the flow of money (as well as the true intent behind it), for that is what holds the upper hand over media peddling and, consequently, public opinion in general.

“It’s possible to program a computer in English. It’s also possible to make an airplane controlled by reins and spurs.” – John McCarthy

4 Likes

Stop using Grok, it’s hot garbage.

1 Like

> Stop using Grok, it’s hot garbage.

Ok, but as opposed to what? It’s not like other LLMs are any different.

They are different. Use Opus 4.5 if you can, otherwise GPT 5.2 Codex or Gemini 3.

I’m using ChatGPT on a regular basis. Where Grok fails, so does GPT.

Currently Gemini Pro 3.0 is the best at everything minus coding. For analysis and discussions and planning, use that. And I’m fairly happy with its code as well.

1 Like

> If LLMs were around back when I was in school I would have used them to write the first draft of every paper. I was always pretty good at revising my essays, but getting that first draft down was the hard part.

I agree, but the reason my first draft takes so much time to get done is the amount of research and understanding that needs to happen before producing anything.
What I have observed while working with people who are able to skip that initial bump when tackling a new problem is that they haven’t learned anything. They are just operators of patterns.

> All the little decisions early on add up to unmaintainable software if we’re not careful.

That’s easy to say - but how can people be careful when they’ve skipped the process of learning and understanding the new problem they’re trying to solve?

> It kind of seems like the industry is hoping that LLMs will get good enough that they can just rewrite it once it gets bad,

It sure does, and that frightens me. The number of errors masked by convincing, half-baked solutions will become a future nightmare.

5 Likes

Yeah, I think it depends a lot on the problem as well. Some problems in industry are largely solved, so it’s just reapplying tried-and-true patterns to accomplish something we’ve done before, but there are usually turn-key or off-the-shelf solutions to those problems (WordPress for the CMS example).

It seems like LLMs are very well-suited to these kinds of “solved” problems. But that’s not what engineers get hired to do. Companies don’t hire engineers to solve problems that have turn-key solutions. They hire us to find new solutions to problems that don’t yet have good or feasible solutions. Or to figure out how to keep the lights on when the last engineer created a really bad solution that is crushing the company under the weight of unrecoverable tech debt. Or to create a better experience for an already-solved problem.

LLMs can be helpful if used as a tool that fits into the workflow, but I think a lot of devs are resistant to it just because it’s constantly being forced down our throats. I know that’s the main reason I tend to roll my eyes at AI stuff at least. Some of it is pretty cool, but I’d prefer to wait until the tools become more established rather than being forced to be an early adopter of something that is situationally (un)helpful. Though, they have gotten a lot better over the last year.

Seems like once the hype cycle concludes and the bubble pops, LLMs will find their place in everyone’s workflows, because then everyone will be able to utilize them based on their actual benefits to the particular problem being solved, but right now it seems like there is an unrealistic amount of pressure on devs to use AI even if it’s not all that helpful for the task at hand.

That’s the part that’s weird to me. I’m new to the tech industry, but I’m a computing history nerd, so I’ve read about most of the major technology cycles we’ve been through since the 50s, and it seems like this is the first time that the entire industry is pushing people to use a single tool without really considering the merits of it at all - even people that have no vested interest in it. I guess if we compare AI to a programming paradigm and not a tool, then it makes more sense. Everyone had to know OOP in the early 2000s even if they were applying for a job doing something like data analytics that makes no sense to use OOP for.

And the dominance of OOP in the early 2000s is also eyebrow-raising, because OOP is a pretty bad model to use for HTTP 1-based applications since HTTP 1 is stateless. FP actually fits much nicer with traditional web apps, but OOP dominated right after the web took over as the next big thing. We also started switching over to primarily functional-style stuff (React Function Components, Elixir, etc.) after stateful connections (HTTP2/3, websockets) started becoming commonplace, which is really weird because stateful connections are actually a pretty good fit for OOP patterns (though I still prefer immutability everywhere personally). :thinking:
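To make that fit concrete, here’s a rough sketch in Elixir terms (module and field names are purely illustrative): a stateless HTTP/1 exchange is naturally a pure function, while a long-lived stateful connection is naturally a process that keeps state between messages - which is about as close to an “object” as it gets.

```elixir
# Stateless HTTP/1 request: request in, response out, nothing retained
# between calls - a natural fit for plain functions.
defmodule StatelessHandler do
  def handle(request) do
    %{status: 200, body: "echo: " <> request.path}
  end
end

# Stateful connection (websocket-ish): a process that keeps state across
# messages, i.e. an "object" in the original message-passing OOP sense.
defmodule ConnState do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, 0)

  @impl true
  def init(count), do: {:ok, count}

  # Each message updates the connection's state, like a method call
  # on a long-lived object.
  @impl true
  def handle_call(:msg, _from, count), do: {:reply, count + 1, count + 1}
end
```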

2 Likes

Unfortunately I think that may be because this is a socio-political watershed moment more than a technological one.

2 Likes

Programming is pattern operation. We all assemble patterns: OTP behaviors, Ecto changesets, Phoenix contexts, supervision trees, etc. The question isn’t “patterns vs not patterns,” it’s whether you can evaluate and adapt patterns to your situation. LLMs can be immensely helpful for that regardless of experience level.
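To illustrate with a concrete (and entirely hypothetical) example, here’s the stock Ecto changeset pattern adapted to one situation - the pattern itself is boilerplate; the judgment is in knowing which validations your domain actually needs:

```elixir
defmodule MyApp.Accounts.User do
  use Ecto.Schema
  import Ecto.Changeset

  # Hypothetical schema, just to show the shape of the pattern.
  schema "users" do
    field :email, :string
    field :age, :integer
  end

  def changeset(user, attrs) do
    user
    |> cast(attrs, [:email, :age])
    |> validate_required([:email])
    # The adaptation: the stock pattern doesn't know that *your*
    # product only serves adults - that's the judgment call.
    |> validate_number(:age, greater_than_or_equal_to: 18)
  end
end
```

An LLM can produce the boilerplate above instantly; whether that last validation belongs there is still on you.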

The reason a first draft for a school paper takes so much time is because the student does not know what questions to ask or where to find the answers. Compiling the information and arranging it in a coherent way makes up the vast majority of the overall process.

We don’t have any evidence that the research itself is what makes one achieve true learning. I suspect that is false, because everyone learns differently. This is why there is an entire field dedicated to how people learn.

In addition, are you sure that the ability to produce anything useful is contingent upon how much time one has spent researching and how much one has struggled while learning? Because if that is the case, we should forbid Stack Overflow and even Google, and force people to go back to digging up and reading obscure tomes in libraries.

Even if a CMS is “solved,” the hard work in real orgs is:

  • Integration: auth, SSO, billing, data warehouse, search, logging, permissions

  • Migration: content + users + URLs + SEO + analytics + API quirks

  • Non-functional requirements: latency, uptime, compliance, privacy, audit, multi-region

  • Custom workflows: approvals, roles, editorial pipeline, bespoke content models

  • Ongoing change: product tweaks, A/B tests, expanding to and customizing the solution for new markets, regulation, internal re-orgs

That’s engineering work, and it’s overwhelmingly “glue + adaptation + evolution,” not “invent a new CMS.” So the assertion that companies don’t hire engineers for problems that have turn-key solutions is quite false. In most serious orgs, even turn-key solutions need a lot of customization.

If I had to create a taxonomy of work in this field, it would be:

  1. Commodity work, such as CRUD, integrations, infra plumbing, standard patterns.
  2. Local novelty, i.e. a new feature for your product (but not new to the world).
  3. Frontier novelty, as in, genuinely new algorithms/research-like work.

The fact of the matter is that the overwhelming majority of software engineering falls into the first two categories. Most of the novelty in software engineering is local novelty.

This mindset made sense and could be justified in 2024. In almost-2026, it is not only impractical, but could actively hurt your career, especially for someone new to the industry such as yourself. It is akin to refusing to use Google in 2005, and refusing to use Stack Overflow or similar sites in 2015. You’re much better off actively learning how to use LLMs effectively, rather than waiting until some arbitrary time when the tools become more “established”.

I’m sure we have some graybeards here who might be compelled to share “well, back in my day…” type stories, but I think even they would be hard-pressed to argue things were actually better back then, when they didn’t have modern tooling.

Not sure what you mean here. The tools are numerous and their merits are considered and debated daily, both in the industry and broader society.

1 Like

I think I know what you’re trying to say here re technology, but a huge portion of pedagogical research proves the importance of critical thinking skills that are only developed through effort. Neither Google nor AI nor any particular library “tome” can tell you how to think. The key is being exposed to problems that are ambiguous and working through various potential answers, and even frameworks for answers, yourself - precisely not having one spoon-fed to you by any other person or device.

4 Likes

Hmm … I would rather call it “context” here. I don’t believe the code itself is good enough unless you expect the LLM to predict what you want to change in it. That said, there’s a real point here: instructing the LLM to follow the existing style of the code gives it lots of useful hints. On one side of the coin it’s a huge time saver, but on the other it’s just another way to waste your time. :sweat_smile:
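For example, the style hints I mean might look something like this (tool-agnostic and entirely illustrative - put it in whatever rules/context file your editor or agent reads):

```text
Follow the existing style of this codebase:
- Small functions composed with |> pipelines; avoid deeply nested case.
- Pattern-match in function heads instead of if/else chains.
- Mirror the naming of the existing context modules when adding functions.
```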

Personally I find the Zed editor useful, as I only use “edit predictions”. If the short prediction (basically a “smart” form of snippet generation) is wrong I can just ignore it and work on my own. This is the only way LLMs are helping me for now. They don’t double the code output or halve the time - that’s a myth, as LLMs are way too unskilled for this. :ninja:

I believe most people don’t see the real value in what you describe. Let me explain what it is and which criteria I believe are the best ones. You are using intuition, and it’s a very powerful skill if you manage to control it. This is where LLMs (at least in their current form) are never going to replace people like you. :+1:

The criteria are quite obvious … You simply “predict” (or rather visualise) something without really focusing on thinking about it - that’s intuition. So what are the criteria? Simple: from experience, you can guess what kind of data the LLM was actually trained on. It’s easiest to see with examples … :gear:

One thing I would never look up using an LLM is, for example, the list of characters in niche media/titles. It’s way easier to just type “list of characters in … wiki” into a search engine than to construct a productive prompt for an LLM. Most probably the LLM was not trained on such data, so it’s just going to make up character names. That’s a waste of time. :hourglass_done:

So where are LLMs good? When you’ve forgotten the right name for something (like wiki fan pages). You can describe something in your own words, a lot of words, but none of them hits the exact term used by the keyword indexing in search engines. I like to call it vibe browsing vs (old-school) constructive browsing. Neither of them is wrong. We are humans, so it’s more than fine to forget one or two things. However, we were “trained” for the more old-school, constructive tagging of content. We humans simply and quite naturally like to categorise / put a tag on everything, including the people we just met (which is rather considered a bad habit). :label:

Literally any kind of tool is good when you know how to use it. Companies want to replace cars with trains without laying more rails. It’s not that we can’t see how to use LLMs well. It’s like everyone around wearing masks yet still infecting everyone with viruses (you really have to know how to use a mask properly, btw). :microbe:

It’s just another effect of the Prussian school system forced on the so-called “Western World” and its slaves … I mean allies. You are just supposed to follow the orders. The orders are stupid? Even better! Show your contribution to the community? Sounds like a Soviet republic? Think twice … where did the “red revolution” really come from? Germany (or rather its influence on Russia). I recommend not following the school books and reading what certain people were actually saying - what the plans for the economy were. It’s kind of eye-opening that we often fight in opposite camps yet live under the same type of regime. So as I said … it’s not about using LLMs the right way - it’s about using them to replace the middle class. :money_bag:

We react with fear hearing how many thousands of employees Big Tech fires every year. However, nobody is really looking at the investments. How much do they invest yearly? Hundreds of billions, into LLMs alone! Do some simple math and you’ll see the fired people could be paid a yearly salary … every month. This is exactly how they are “reducing costs”. To be fair, some investment is indeed needed simply to make things cheaper in the future, but if you ever want to reduce your costs by giving me over 12x more money than you spend on me every month, then really, don’t feel bad about it and just send the money, pretty please. I would kindly accept your amazing “investment”. :rofl:
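Just to make the “over 12x” remark concrete, a back-of-the-envelope sketch - every number below is a made-up round assumption, not a sourced figure:

```elixir
capex_per_year = 300_000_000_000  # assumed yearly Big Tech spend on LLMs
laid_off       = 100_000          # assumed headcount cut in a year
avg_cost       = 200_000          # assumed fully-loaded yearly cost per employee

capex_per_year / (laid_off * avg_cost)
# => 15.0 - under these assumptions, the yearly LLM spend could pay
#    those salaries about 15 times over
```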

People say that I should not touch political topics, but those situations map exactly 1:1, so they are just perfect as examples. Besides … if the topic is about work, and politicians decide that we will in fact not work because of some virus, then why shouldn’t we touch such a topic? Think for yourself and remember my words … How did forcing political ideas end for Europe and the US around a hundred years ago? Recession? Not possible! We have amazing politicians and the perfect GREEN economy! They will save us on a white horse! Wait, did you say “no”? Are you sure? :joy:

3 Likes

I kinda like your cynicism. Also, thank you for being brave enough to pull the garbage of the policies we all live under out into the open. Too many tech people try to stay “neutral” (as if living in a vacuum) and never speak publicly of politics at all while, at the same time, the smell is reaching the skies.

You’re right to rant about it. It is another bubble and it is intentional (as always). In finance, you can collateralize the assets you invest in (to take on more debt) but you can’t collateralize liabilities (the wages for your staff), so what’s going on is totally unsurprising. And the only reason “They” are doing it again is because they can. We’ve taught “Them” they can, because last time around (17 years ago, to be precise) precisely none of the bad actors ended up in jail with their assets seized. On the contrary, the very taxpayers who were thrown under the bus ended up bailing “Them” out. Sam Altman has been openly speaking of the taxpayer picking up the bill again.

It’s a big club and we ain’t in it.

1 Like

Friendly advice: kill all traces of this rebellious thinking in your brain.

ā€œGoing against the grainā€ is a highly unproductive way to engage with the world. Use your own judgement. Give LLMs an honest, detailed and intellectually objective chance – then make a decision. Reject LLMs, but do it after informing yourself.

We have way too many people who roll their eyes at things without evaluating them on their objective merits. Be better than the crowd.

I’ll also echo @egeersoz here, who wrote my stance better than I could: the LLM landscape evolved hugely during 2025. Just one year ago I laughed at ChatGPT and a few others, they were almost helpless. Nowadays I can write multiple detailed paragraphs with a prompt to Gemini and I get insights that I would need years – and a lot of connections – to get.

As per above, train your critical thinking with objective information and scientific experiments. Informed decisions > all other decisions.

3 Likes

Rolling my eyes doesn’t mean refusing to use or evaluate the technology, and it doesn’t mean being “rebellious”. Just that it’s exhausting to constantly be bombarded with tertiary crap that isn’t useful to my regular tasks. It’s similar to any other form of marketing. It’s exhausting because it’s a constant barrage, and it’s impossible to survive without tuning out most of the noise most of the time. And most AI news is in fact nothing more than marketing.

Just rolling over and accepting whatever the company or industry throws at you is not a healthy way to interact with the professional world. Maybe it’s different elsewhere, but in the US where I’m located, this approach is how employees end up working excessive unpaid overtime with nothing to show for it but a 1% raise at the end of the year and divorce papers from their spouse. God forbid you actually use PTO for something other than a life-threatening medical emergency. Might as well just lay yourself off at that point. Being defiant or rebellious is also a terrible idea. When I was in my 20s I thought that way, but eventually I realized it’s better to let sleeping dogs lie and not burn a bridge at least until you’ve crossed to the next island.

I should clarify here that I’m speaking of the US workplace in general and not my current employer. My employer is actually very good about PTO and rest time with family, etc, but in my experience in the US workforce, that’s the exception not the rule.

And FWIW, I’ve been in the workforce for over 10 years, and I’ve actually changed careers twice and burned out once, so I’m no stranger to navigating the politics and unreasonable demands of industry. I’m new to the tech industry/professional software dev, but I’ve been programming my entire adult life and I’ve been in the field and in the conference room at multiple companies in multiple industries in past careers. The tech industry seems to be pretty much the same as any other industry as far as how to get hired and keep a job go. It just seems to be loaded full of executives that are abnormally insulated from other industries, which means some decision makers are less predictable.

Once the bubble pops, AI will likely become one of those “required skills” that you need to have at least basic competency in for most jobs and to be an expert in for jobs that it’s actually relevant for. Similar to Excel for the Accounting field (which I worked in before). You’d be hard-pressed to get any job in the Accounting or Office Administration fields without basic Excel skills like macros and pivot tables, but most jobs don’t care if you can do actual data analysis in it. It’s just an everyday tool that everyone needs to be familiar with. It’s much more important that you actually understand double-entry accounting and how to three-way match an invoice. But even with those fundamentals, you only really need to have a basic understanding of them to get an entry-level role.

Anyway, I believe I’ve been pretty fair/moderate in most of my messages on this thread, but that hasn’t really gotten through it seems. Maybe I’ve been less amicable than I thought. Discussion seems to be very polarized on this topic in general, so it’s hard to have a productive discussion about the actual pros and cons of the technology, and I’m sure I’m no exception to that even if I feel like I am. All things in software are tradeoffs, but AI seems to be one of those topics that invites the extreme sides like OOP vs FP from a while back.

I appreciate you taking the time to offer the objectively good advice to evaluate the technology and then make an informed decision. I just want to point out that there seems to always be an assumption among AI-advocates that when someone says they don’t like AI or don’t see its usefulness in a particular domain/workflow (e.g. code gen), then that means they haven’t evaluated it for themselves and that decision was not an informed one. What may be super useful for one person might be completely useless for another. CSS is directly useful for my regular work (even though I’m a BE dev). Fragment shaders are not (though knowledge of how they work is remarkably helpful). A game programmer would be on the opposite side of that though. AI is not a magical wonder-technology that is immune to this fundamental truth. It’s probably the most widely-applicable technology that’s hit the industry to-date, but we still have a way to go before we finally find the one true hammer. :slight_smile:

Now, as far as discussion goes, I’m actually curious what the different sides in this thread think about the economics of AI. Engineers seem to have a tendency to focus on the technical merits of these things without really considering the unit-economics of it. That’s part of how we ended up with $100k+ cloud bills, but that’s a whole other can o’ worms. Given the fact that none of the AI companies making these models have managed to make an actual profit off of them and are still relying on the ponzi-esque VC/grant infusion business model to keep going, it seems pretty risky for any company to become too heavily reliant on AI tooling unless development/operation of the models themselves can be made much more financially efficient. If OpenAI or Anthropic control the supply, and most of the industry can’t meet their client deliverables without these tools, then things will get bad once investors decide they finally want their return. I haven’t been keeping up with the latest gossip in the field though, so maybe this has changed? :thinking:

3 Likes

You have my sincerest apologies for the negative assumption I have made about your comment. Your grace in responding to that has been humbling. Thank you.

I should clarify I am in no shape or form an AI advocate at all. In fact I started off extremely cynical about it and it took me most of 2025 to come around. I am still using LLM agents very cautiously and I have not paid a subscription to this day; I am happy to copy-paste code in both directions (editor ↔ web UI) because it also gives me a little more time to think and ruminate over things and I happen to believe that vibe-coding, with its extremely fast feedback loop, is very demeaning for the human operator – they barely utilize their brain during the session. They are basically a meat robot that must confirm at certain milestones that the machine should continue. I mean… WTF?

My point was more along the lines of two things: (1) make sure you don’t miss out on the chance to skip super boring work that does not develop you in any way (some of the code-gen work that agents do is very useful in this regard) and (2) try to integrate the useful parts of the agents into your daily flow because, let’s put it bluntly, we’re currently going through an arms race. I don’t mean to inject FOMO into anyone, but the productivity boost I have experienced first-hand when using an agent has been a little bit scary, and I am pretty sure many business people will wake up to the fact that they should start pressuring us into being even faster and more productive in general – how (or even if) we will fight back is a topic for another, likely much more grim, thread.

I would say this is both a parasitic and symbiotic relationship; the LLM providers cannot raise prices too much because they’ll lose a lot of customers who use them casually and yet pay subscriptions. F.ex. a lot of people pay for Claude Code at the $20 plan and yet they use it maximum 10 hours a month – Anthropic probably saves the most money and makes the biggest profit out of these customers.

Any capitalist trying to utilize the fairly easy (and I’d say brainless) pattern of ā€œfocus on the top X% that pay the mostā€ is doomed to ruin their business.

But then again, a lot of the MBA-infested CEOs have ruined companies many, many times, so who is to say that the market is rational?

As for a prediction, I actually cannot make any. The current situation looks unsustainable any way I look at it. Perhaps the only way out that’s good for everyone that I can find is even further innovation that would make the LLM agents much more energy efficient to run… and the LLM agent companies find a way to have the agent do a part of its work on the machine of the user. IMO their only game is to drastically reduce expenses because increasing revenue by a factor of 100x is, shall we say, unlikely.

(Which then raises the question: what will they do with those metric tons of GPUs after the bubble pops?)

2 Likes

Thanks for your clarification. I agree about exploring AI in the current landscape because it’s very, very likely to become a ubiquitous tool in the industry in one way or another, even if we can’t predict exactly how. I just don’t personally think it’s as critical to be an expert in it as the general discourse/internet seems to suggest, so I tend to be a bit reactionary when I feel like people are using that “you’ll be left behind” kind of logic. There are still jobs for COBOL developers after all. :slight_smile:

For my part, I’m currently experimenting with using AI assistants for debugging. I’m mostly an R&D programmer in my current role, so the kind of monotonous grunt work tasks that the code gen would help with are few and far between for me (something that’s rare for software engineers it seems), but LLMs are good at tracing through a large codebase and spotting anomalies, so using them to find a thread to pull when I’m trying to root cause a caching bug or race condition that I don’t have any leads on how to solve can be a big time saver. I imagine that will probably become my main use-case for them since I don’t do much throw-away prototyping.

1 Like