Change my mind - the optimal AI prompt is the code itself


I came across this post earlier that sums up my thoughts on AI:

Note that just two months prior, Karpathy appeared on a prominent podcast where he said LLMs and coding agents today are much more like ghosts and spirits than higher powers, and sounded generally underwhelmed. The fact that he did a 180 so quickly (after Gemini 3 and Opus 4.5 came out) shows how fast the field moves: opinions held one month are likely to be outdated and invalid the next.

1 Like

It’s funny how there are never ever any receipts. Wonder if there are any monetary reasons for an AI influencer to promote this stuff. Wait, no way companies would pay influencers to promote anything.

2 Likes

The ability to assemble a platoon of agents, each of which has a different role and does a part of the bigger work, is viewed as a critical competitive advantage by the devs who have managed to pull it off.

I don’t think someone of Karpathy’s reputation needs receipts. He has made significant contributions to the field and is wealthy enough that he doesn’t need more money.

Yeah, that always stops people trying to get more :sweat_smile:.

4 Likes

It occurred to me that the rift between the two opposing views on this matter could just as well be the result of subjective needs and expectations each specific developer may have.

At the end of the day, when the dust settles, I believe the only relevant metric will remain the same as it has always been: is Peter, who uses tools A, B, C, D and E, overall more productive than Paul, who uses tools B and D? Peter and Paul being specific developers, and the tools being anything at their disposal, including LLMs.

1 Like

The LLMs’ laziness has become uncanny; you might even call it human. There is more work to do before any of them will be able to work unsupervised.

1 Like

Obviously. This is why I giggle when I go to HN and people fight over which LLM works best for whom and barely anyone mentions their work area, process and everything else.

True. LLMs are well on their way to becoming one more tool in the toolbelt, only this one comes with a subscription attached.

1 Like

Sure. All I’m saying is that Karpathy is not some random schmuck. He has been working in the AI field since way before GPT became a thing, and has already proven himself several times over. There’s no need to invent conspiracy theories about his motivations for sharing his thoughts and impressions.

I disagree: if there is no receipt, then it didn’t happen, no matter how much someone wants to say “trust me bro” or believe in it. I use AI for a lot of stuff, but as soon as I read what AI influencers write, either it is far from reality or whatever they are trying to solve is mundane. It’s probably somewhere in between. It would be great if the hype lived up to reality, but there is a huge amount of money already going into this, so no way anyone’s reputation is going to get in the way of it.

1 Like

Sure, you’re free to disagree. But let’s be real: if someone did provide receipts, you’d declare that the receipts aren’t good enough or detailed enough, or that they made things up, or that the problem they are working on is mundane (as you’ve already done). No amount of actual evidence will convince you, so what’s the point?

Why is it anyone’s job to compile mountains of evidence to try to convince you that what they are doing is working beyond their wildest dreams? Forget Karpathy. What do I get out of sharing my positive experiences? After all, I don’t run an AI company and have nothing to sell.

I feel you’re extrapolating a bit here; I never said the evidence isn’t good enough. I even stated that I use AI, so I do have some experience with what is available out there. I do think it’s rather hype-driven right now, though, and at no point do I see anyone being left behind whether they use AI or not. I suspect the people who still aren’t using it will catch up at some point, the same way people moved from hardcover manuals and Usenet to Google and Stack Overflow. There is only so much a probabilistic model can do.

Getting us to 80% was a huge achievement, but it’s the fat tails that might pop this bubble. How much money can we really put in before the return on investment can’t justify it?

1 Like

I continue to have little interest in this topic but I just want to drop in to say that Andrej is not some AI influencer. He was training neural nets before AlexNet and has worked at multiple top labs. He is also an excellent educator, probably the best the field has ever had.

I mention this because I understand that in this hype hellscape we live in it is legitimately hard to tell the difference. Andrej is an OG and deserves respect.

He is not, however, a professional programmer, so I would put less weight on his opinions there. There are many professional programmers who have expressed a similar sentiment, though.

4 Likes

This starts to move outside the bounds of the original topic :sweat_smile: .

I would argue that OTP is the perfect implementation of OOP.

If OOP is defined as methods wrapped around mutable data, I don’t see why that would handle state better. In fact, I would argue it makes state handling worse: how can one reason about the current state of a system when the state is spread across a multitude of “objects”, and the number of paths by which it could have arrived at that state is a combinatorial explosion?
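For what it’s worth, the pro-OTP side of this argument usually rests on state being serialized through a single process rather than mutated from many places. A minimal sketch (the `Counter` module here is hypothetical, just for illustration): the state lives in exactly one process and changes only via the messages that process handles in order, so the current state is always the fold of those messages.

```elixir
defmodule Counter do
  use GenServer

  # Client API: callers never touch the state directly,
  # they can only send messages to the owning process.
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Server callbacks: the only place the state is ever transformed.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, state), do: {:noreply, state + 1}

  @impl true
  def handle_call(:value, _from, state), do: {:reply, state, state}
end

{:ok, pid} = Counter.start_link(0)
Counter.increment(pid)
Counter.value(pid) # => 1
```

Whether that counts as “the perfect implementation of OOP” or as its refutation probably depends on whose definition of OOP you start from.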

I think you forgot about the dot com era.
Everyone was about to become a millionaire with this new tech, the Internet.
It had merits, but no one was sure exactly what they were.
People poured money into products and companies no one understood. There are plenty of examples that I believe map perfectly onto today’s AI situation.

1 Like

I would argue it’s both.

People are genuinely building solutions where AI agents perform tasks with absolutely zero deterministic end result, for no other reason than “it’s AI, so it’s cool”, and so management can say they have started using AI.
That isn’t socio-political; it’s just poor engineering and bad use of tools.

Then we also have the situation of people acting as a proxy for LLMs, which I also argue is poor engineering.
The best explanation im able to provide for “acting as proxy” - is when people start behaving like they did during the stackoverflow era, when people just copy/pasted answers into the code base until they got **** working.
For example people inputting “their” own bad implementation of the modulo operator into the code base. I guess you get what picture im trying to paint here.
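The modulo example is a classic, and it even shows up in Elixir itself: `rem/2` follows the sign of the dividend, so a “modulo” built on it silently differs from floored modulo for negative inputs. A sketch of the kind of bug that gets pasted in unexamined (`bad_mod` is a hypothetical name for illustration):

```elixir
# A hand-rolled "modulo" that is really truncated remainder.
# It agrees with true modulo for non-negative inputs, which is
# exactly why the bug survives casual testing.
bad_mod = fn a, n -> rem(a, n) end

bad_mod.(-7, 3)    # => -1, probably not what the caller wanted
Integer.mod(-7, 3) # => 2, the floored modulo most callers expect
```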

3 Likes