Apples and oranges; I don't think Copilot is comparable to GPT-x. A GPT-style AI will be very helpful for creating code snippets and other things, like possibly checking code for bugs. But a tool like Copilot has to be really good at its job of guessing what you need so it won't add more mental load for the developer, and that is much harder to get right in my opinion.
I agree, Copilot is not good right now. But it's the best we've got in the IDE (except for some experimental tool based on the GPT API).
But it seems to me that most people think that code completion is the only thing it does.
That's not the case. You can make it be a little ChatGPT by writing larger chunks of docs (which also gives it context).
You can make it transform code inline by adding clear comments, e.g.:
@foo %{"bar" => 1, "baz" => 2}
# @foo with atom keys. -- following generated by copilot
@foo_atom_keys %{
  bar: 1,
  baz: 2
}
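The same goes for the "larger chunks of docs" idea mentioned above. A hypothetical sketch (module and function names are invented; the body is only the kind of completion Copilot tends to offer once the intent is spelled out in the @doc, not a real transcript):

defmodule MyApp.MapUtil do
  @doc """
  Converts all string keys in a map to atom keys.
  Nested maps are converted recursively; other values pass through unchanged.
  """
  # With a @doc this explicit, a completion along these lines is what you hope for:
  def atomize_keys(map) when is_map(map) do
    Map.new(map, fn {k, v} -> {String.to_atom(k), atomize_keys(v)} end)
  end

  def atomize_keys(other), do: other
end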
It's not useful for people who do not invest the time to learn to work with it.
I know it's not useful because I did invest time in learning to work with it. Both ChatGPT and Copilot invariably suck and produce bad to extremely bad output that takes away more of my time than they save.
Even you are basically saying this: "After a while you know, when proposed code has the chance to be useful, when I know that it can only be garbage, I don't even look at it."
So it's a range between "chance to be useful" and "can only be garbage". Basically, a junior dev with little understanding of product and code that you need to sit with and mentor.
Everybody I know who invested some time thinks differently.
You now know me, and I'm not impressed
I agree, Copilot is not good right now. But it's the best we've got in the IDE.
From my experience on a JavaScript codebase I was working on, it was consistently and significantly worse than IntelliJ IDEA. It consistently produced garbage.
Reason? Well, even if it's a codebase in a popular language (JavaScript) using a popular framework (React), it's a codebase for something unpopular (Google Cast integration) using internal libraries and APIs (that Copilot has no access to, and never will unless it is offered as a hosted service).
Same for ChatGPT. It's reasonably useful for some simple tasks, but more often than not I'm better off writing actual code instead of fixing the obvious and non-obvious issues in the generated code.
Well, I know it's useful. What now?
This will lead to nothing; people should try this out and see for themselves.
True, I think it would be better for this topic to be closed, as it's turning into the same philosophical arguments about why one language is better than another.
Let's talk about tabs vs. spaces!
oh you're right! this is a new tabs vs. spaces topic. so ChatGPT did invent something new after all!
this is a good point. however, why did GPT-3 only come into existence now if it's hanging so low? seems to me that the advancement of AI is all about the low-hanging-fruit ideas. so the only sensible assumption is that there will be more.
but back to the OP question - I don't think Elixir can do much to make the AI more useful. Unless we use ChatGPT to generate lots and lots of code, put it on GitHub, so ChatGPT 5 is trained on it
also, don't close this. let people have fun. it's not hurting anyone.
I was thinking about this yesterday. It seems that programming languages evolved with various abstractions and tradeoffs designed to make it easier for human programmers to reason about the programs they were making. As a rough example, think of Elixir built on top of Erlang to provide nicer syntax that was more approachable for the generic newcomer. If we are using AI to create code based on our natural language commands ("Make me a webapp that provides simple chat functionality and style it with Bootstrap"), why do we want the AI to output code designed for us to read? Seems to me that AI code generation should all take natural language as input and output assembly code. The input might reasonably need to specify priorities and desired optimizations ("Make me a webapp that can scale to a million connected users with robust concurrency models prioritized over raw speed of request handling") that reflect the same tradeoffs that lead to choosing one programming language over another today, but the AI shouldn't have to output it in a language for us to understand.
I guess this post is more of a "shower thought", but the idea of an "AI coding partner" seems like a temporary middle step before true "AI developers". Kind of like Adobe Illustrator plugins for automating various effects are a middle ground toward DALL-E and co.
Valid question, and I agree that normally it should not. However:
- Requirements for apps are never 100% complete, so maybe it should produce human-readable code, because somebody is bound to have to finish it manually later. (Though that's not a given; if an actual general AI gets so far that we can just tell it "previous requirements + these 5 new things please" and it can deal with it, then that point is moot.)
- There will be a transitional phase during which people will be distrustful of "AI" (and we're in it already). Having the "AI" be able to produce readable code will help us vet it manually, increase our trust in it, and let it do its magic in the future (without us vetting it; unattended).
When that day comes and if I am still working, I'll retire right there and then! Though admittedly, I want to work on something very similar: a recursive hyper-optimizing machine that can even optimize itself (to physical limits of course; we know it can't modify the CPU itself... and even that's not 100% true; microcode can change how a CPU behaves... though also you gotta wonder how far you can get this idea if you put FPGAs into the mix).
You just wrote the reason why AI will not replace humans in any foreseeable future. What you wrote isn't a specification. It's a wish. "Chat scalable to a million users" has about a million other things that you didn't mention:
- does it have group chats or is it one-to-one only?
- is it text only?
- what about formatting? Plain text, Markdown, or custom formatting?
- any limits on text length?
- does it allow file uploads?
  - Any files, or just images? Or images and video?
  - Which formats?
  - What maximum size?
  - Will there be previews? Resizing? Re-encoding?
  - where will all this be stored?
  - do files expire?
- end-to-end encryption?
  - does it apply to any files?
  - does it apply to group chats?
- moderator tools?
  - for creators of group chats?
  - for managers of the chat service?
- any additional tools like link unfurling, image previews, youtube autoplay?
And that's off the top of my head, without going into any serious detail. Because each of these details has about a million other details. And for each of these details, two different humans will give you 15 different opinions.
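To make that concrete: even a single bullet like "What maximum size?" eventually has to become an explicit decision in code. A minimal, hypothetical Elixir sketch (module name, limits, and formats are all invented for illustration):

defmodule Chat.UploadPolicy do
  # Each value below answers exactly one of the questions above;
  # none of them can be inferred statistically, somebody has to decide.
  @max_upload_bytes 25 * 1024 * 1024
  @allowed_types ~w(image/png image/jpeg image/webp video/mp4)
  @retention_days 30

  def allowed?(%{size: size, content_type: type}),
    do: size <= @max_upload_bytes and type in @allowed_types

  def expires_at(%DateTime{} = uploaded_at),
    do: DateTime.add(uploaded_at, @retention_days * 24 * 60 * 60, :second)
end

And every one of those hard-coded values is itself open to the "15 different opinions" problem.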
In general, it's "Draw me 7 red lines, all perpendicular to each other, two of them green, one of them transparent, and one in the shape of a kitten": The Expert (Short Comedy Sketch) - YouTube
Sorry, the original post said "Make me a webapp that can scale to a million connected users", but even more things apply there than in a chat: what app, what data, etc.
I mean, I was going for a brief example of the concept, not something I would expect to be an actual full spec. As with any kind of software interaction, it does what you tell it to do, not what you want it to do. So being explicit and identifying all the required features and corner cases is still going to be important. But I don't see why that's any different from now, other than that when you ask an experienced developer to make a chat app, they will ask for the additional parameters based on prior experience. No reason to assume the hypothetical AI dev won't get to that level at some point.
Apart from, you know, the human actually understanding what you are talking about, and the large language model just inferring statistically what you might mean, based on previous training data, while having zero actual understanding?
Context matters here. The statement you quoted was preceded by:
being explicit and identifying all the required features and corner cases is still going to be important.
Whether the receiver of the information uses "actual understanding" or statistically informed inference doesn't matter. In both cases the sender needs to provide all the relevant information.
To your point that understanding by the developer is different from the inference by the LLM, that's why I said the developer might be able to ask for missing relevant information up front based on prior experience and understanding of the problem space. I don't see why an advanced AI couldn't get to the point of specialized knowledge in the future, though I fully understand that the current iterations are not close to that.
Disagreed, simply because that requires combinatorics on a mind-blowing scale. As @dmitriid outlined above, just for one project idea you can have millions of combinations of yes/no prompts. To encompass what we do with software today we'd likely need something on the order of hundreds of trillions of combinations.
Don't get me wrong, I do like statistics. But it does have limits.
Better hardware and/or better-optimized algorithms will make it possible in TheFuture™? Absolutely. I have my doubts for the next 1-5 years, however.
I don't know why you keep cutting off part of my statement to make it look like I'm arguing a point that I'm not making. You quoted
but the point was the next sentence
My point was not about the merits of human intelligence or the limits of AI. It was simply that, from a practical perspective, the person who wants someone or something else to produce the software always has to provide the requirement specs said software needs to fulfill, but should not care about the process of actually producing the code. Which is NOT the point I originally posted about either. I keep getting dragged further away from the simple idea that when we are using AI to generate code, we don't need to continue using abstractions that were developed explicitly to make it easier for humans to generate code. That's the sum total of my thoughts on this topic, so I think I'll bow out now.
It does.
Because actual understanding means understanding the constraints, legal implications, applicability, etc.
E.g., just the simple fact of "display image on screen in a scalable manner" means zilch without understanding the context. For example, if your scalable app is only/mostly accessed through mobile phones, you can easily downscale and resize all images and save millions on storage. But if the actual target audience is professional photographers who use the app all the time because it's an easy way to access photos on the go, you need to store the actual RAW images as well.
AI has no understanding of that.
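To put the image example in code terms, the "understanding" in question is the difference between two branches that look trivial once somebody has written them down. A hypothetical sketch (module name, audience atoms, and variant names are all invented):

defmodule Photos.StoragePolicy do
  # Which clause is the right one cannot be read off the prompt
  # "display image on screen in a scalable manner";
  # it depends entirely on who the users actually are.
  def variants_to_keep(:mobile_consumers), do: [:thumbnail, :screen_webp]

  def variants_to_keep(:professional_photographers),
    do: [:original_raw, :thumbnail, :screen_webp]
end

Picking the wrong clause is the difference between saving millions on storage and losing your customers' RAW files.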
And anyone who says that it's just a matter of refining prompts has never:
- tried to actually get a full technical description from humans for a task (good luck asking a non-technical product owner whether you should store images in webp or png)
- worked on a project with more than a few moving parts (that team responsible for scalable storage? it's outside the statistical model of your AI, and has its own constraints)
Yeah, it would be nice to have a formal and structured language for dealing with specifications of business logic, one that would be able to go into more technical details should that need arise... oh wait, that is just a programming language.
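(For what it's worth, typespecs and behaviours are about the closest thing Elixir has to such a "specification language" today; a tiny hypothetical sketch with invented names, just to show that writing the spec already means answering the questions from earlier in the thread:)

defmodule Chat.UploadSpec do
  # A behaviour is a formal, machine-checkable slice of a specification,
  # but writing it already forces the decisions discussed above.
  @callback max_upload_bytes() :: pos_integer()
  @callback allowed_type?(content_type :: String.t()) :: boolean()
  @callback retention() :: {:days, pos_integer()} | :forever
end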