Get AI code generation tools to create correct Elixir code, or else 😱

Apples and oranges: I don’t think Copilot is comparable to GPT-x. A GPT-style AI will be very helpful for creating code snippets and other things, like possibly checking code for bugs. But a tool like Copilot has to be really good at guessing what you need, or it adds mental load for the developer instead of removing it, and that is much harder to get right in my opinion.


I agree, Copilot is not good right now, but it’s the best we’ve got in the IDE (apart from some experimental tools based on the GPT API).

But it seems to me that most people think code completion is the only thing it does.
That’s not the case. You can make it act like a little ChatGPT by writing larger chunks of docs (which also gives it context).
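As a hypothetical illustration of that docs-as-prompt idea (the `Slugger` module and `slugify/1` function are made-up names, not from this thread), a detailed `@doc` string acts as an inline prompt, and Copilot will often propose a body roughly like this one:

```elixir
defmodule Slugger do
  @doc """
  Converts a title into a URL slug: lowercases the string,
  replaces runs of non-alphanumeric characters with "-",
  and trims leading and trailing dashes.
  """
  def slugify(title) do
    # With a @doc this specific, the completion has real
    # context to work from instead of the function name alone.
    title
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9]+/, "-")
    |> String.trim("-")
  end
end
```

The more concretely the doc comment pins down the behavior, the less the tool has to guess.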

You can make it transform code inline by adding clear comments, e.g.

@foo %{"bar" => 1, "baz" => 2}

# @foo with atom keys. -- following generated by copilot
@foo_atom_keys %{
  bar: 1,
  baz: 2
}
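As an aside, the same string-keys-to-atom-keys transformation can also be done at runtime rather than pasted in by a tool. A minimal sketch (variable names are mine, not from the thread), using `String.to_existing_atom/1` so arbitrary input cannot grow the atom table:

```elixir
# The permitted atoms are written as literals first, so they are
# guaranteed to exist before String.to_existing_atom/1 is called.
_allowed = [:bar, :baz]

foo = %{"bar" => 1, "baz" => 2}

# Map.new/2 rebuilds the map, converting each string key to an atom.
foo_atom_keys =
  Map.new(foo, fn {key, value} -> {String.to_existing_atom(key), value} end)
```

`String.to_atom/1` would also work but is unsafe on untrusted input, since atoms are never garbage-collected.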

It’s not useful for people who do not invest the time to learn to work with it.

I know it’s not useful because I did invest time in learning to work with it. Both ChatGPT and Copilot invariably suck and produce bad to extremely bad output that takes away more of my time than it saves.

Even you are basically saying this: “After a while you know, when proposed code has the chance to be useful, when I know that it can only be garbage, I don’t even look at it.”

So it’s a range between “chance to be useful” and “can only be garbage”. Basically, a junior dev with little understanding of product and code that you need to sit with and mentor.

Everybody I know who invested some time thinks differently.

You now know me, and I’m not impressed.

I agree, Copilot is not good right now, but it’s the best we’ve got in the IDE.

From my experience on a JavaScript codebase I was working on, it was consistently and significantly worse than IntelliJ IDEA. It consistently produced garbage.

Reason? Well, even though it’s a codebase in a popular language (JavaScript) using a popular framework (React), it’s a codebase for something unpopular (Google Cast integration) using internal libraries and APIs (which Copilot has no access to, and never will unless it is offered as a hosted service).

Same for ChatGPT. It’s reasonably useful for some simple tasks, but more often than not I’m better off writing the actual code myself instead of fixing the obvious and non-obvious issues in the generated code.


Well, I know it’s useful. What now?

This will lead to nothing; people should try it out and see for themselves.


True. I think it would be better for this topic to be closed, as it is turning into the same philosophical argument about why one language is better than another.


Let’s talk about tabs vs. spaces! :smiley:


oh you’re right! this is a new tabs-vs-spaces topic. so ChatGPT did invent something new after all!

this is a good point. however, why did GPT-3 only come into existence now if it’s such low-hanging fruit? it seems to me that the advancement of AI is all about the low-hanging-fruit ideas, so the only sensible assumption is that there will be more.

but back to the OP’s question: I don’t think Elixir can do much to make the AI more useful. Unless we use ChatGPT to generate lots and lots of code and put it on GitHub, so ChatGPT 5 is trained on it :smiley:

also, don’t close this. let people have fun. it’s not hurting anyone.


I was thinking about this yesterday. It seems that programming languages evolved with various abstractions and tradeoffs designed to make it easier for human programmers to reason about the programs they were making. As a rough example, think of Elixir built on top of Erlang to provide nicer syntax that was more approachable for the generic newcomer. If we are using AI to create code based on our natural language commands – “Make me a webapp that provides simple chat functionality and style it with Bootstrap” – why do we want the AI to output code designed for us to read? Seems to me that AI code generation should all take natural language as input and output assembly code. The input might reasonably need to specify priorities and desired optimizations – “Make me a webapp that can scale to a million connected users with robust concurrency models prioritized over raw speed of request handling” – that reflects the same tradeoffs that lead to choosing one programming language over another today, but the AI shouldn’t have to output it in a language for us to understand.

I guess this post is more of a “shower thought”, but the idea of an “AI coding partner” seems like a temporary middle step before true “AI developers”. Kind of like how Adobe Illustrator plugins for automating various effects are a middle ground toward DALL·E and co.


Valid question and I agree that normally it should not. However:

  1. Requirements for apps are never 100% complete, so maybe it should produce human-readable code, because somebody is bound to have to finish it manually later. (Though that’s not a given; if an actual general AI gets so far that we can just tell it “previous requirements + these 5 new things please” and it can deal with it, then that point is moot.)
  2. There will be a transitional phase during which people will be distrustful of “AI” (and we’re in it already). Having the “AI” be able to produce readable code will help us vet it manually and increase our trust in it, so that in the future we can let it do its magic unattended (without us vetting it).

When that day comes, and if I am still working, I’ll retire right there and then! :smiley: Though admittedly, I want to work on something very similar: a recursive hyper-optimizing machine that can even optimize itself (to physical limits, of course; we know it can’t modify the CPU itself… and even that’s not 100% true; microcode can change how a CPU behaves… though you also have to wonder how far you could take this idea if you put FPGAs into the mix).


You just wrote the reason why AI will not replace humans in any foreseeable future. What you wrote isn’t a specification; it’s a wish. “Chat scalable to a million users” leaves about a million other things unspecified:

  • does it have group chats or is it one-to-one only?
  • is it text only?
    • what about formatting? Plain text, Markdown, or custom formatting?
    • any limits on text length?
  • does it allow file uploads?
    • any files, or just images? or images and video?
      • which formats?
      • what maximum size?
      • will there be previews? resizing? re-encoding?
      • where will all this be stored?
      • do files expire?
  • end-to-end encryption?
    • does it apply to any files?
    • does it apply to group chats?
  • moderator tools?
    • for creators of group chats?
    • for managers of the chat service?
  • any additional tools like link unfurling, image previews, youtube autoplay?

And that’s off the top of my head, without going into any serious detail, because each of these details has about a million further details. And for each of them, two different humans will give you 15 different opinions.

In general, it’s “Draw me 7 red lines, all perpendicular to each other, two of them green, one of them transparent, and one in the shape of a kitten”: The Expert (Short Comedy Sketch) - YouTube


Sorry, the original post said “Make me a webapp that can scale to a million connected users”, but even more things apply there than in a chat: what app, what data, etc. :slight_smile:

I mean, I was going for a brief example of the concept, not something I would expect to be an actual full spec. As with any kind of software interaction, it does what you tell it to do, not what you want it to do. So being explicit and identifying all the required features and corner cases is still going to be important. But I don’t see why that’s any different from now, other than that when you ask an experienced developer to make a chat app, they will ask for the additional parameters based on prior experience. No reason to assume the hypothetical AI dev won’t get to that level at some point.

Apart from, you know, the human actually understanding what you are talking about, while the large language model is just inferring statistically what you might mean, based on previous training data, with zero actual understanding?


Context matters here. The statement you quoted was preceded by

being explicit and identifying all the required features and corner cases is still going to be important.

Whether the receiver of the information uses “actual understanding” or statistically informed inference doesn’t matter. In both cases the sender needs to provide all the relevant information.

To your point that understanding by the developer is different from the inference by the LLM, that’s why I said the developer might be able to ask for missing relevant information up front based on prior experience and understanding of the problem space. I don’t see why an advanced AI couldn’t get to the point of specialized knowledge in the future, though I fully understand that the current iterations are not close to that.

Disagreed, simply because that requires combinatorics on a mind-blowing scale. As @dmitriid outlined above, just one project idea can have millions of combinations of yes/no prompts. To encompass what we do with software today, we’d likely need something on the order of hundreds of trillions of combinations.

Don’t get me wrong, I do like statistics. But it does have limits.

Will better hardware and/or better-optimized algorithms make it possible in TheFuture™? Absolutely. I have my doubts about the next 1-5 years, however.

I don’t know why you keep cutting off part of my statement to make it look like I’m arguing a point that I’m not making. You quoted

but the point was the next sentence

My point was not about the merits of human intelligence or the limits of AI. It was simply that from a practical perspective, the person who desires someone or something else to produce the software always has to provide the requirement specs said software needs to fulfill but should not care about the process of actually producing the code. Which is NOT the point I originally posted about either. I keep getting dragged further away from the simple idea that when we are using AI to generate code we don’t need to continue using abstractions that were developed explicitly to make it easier for humans to generate code. That’s the sum total of my thoughts on this topic, so I think I’ll bow out now.

It does.

Because actual understanding means understanding the constraints, legal implications, applicability etc.

E.g. just the simple fact of “display image on screen in a scalable manner” means zilch without understanding the context. E.g., if your scalable app is only/mostly accessed through mobile phones, you can easily downscale and resize all images, and save millions on storage. But if the actual target audience is professional photographers who use the app all the time because it’s an easy way to access photos on the go, you need to store the actual RAW images as well.

AI has no understanding of that.

And anyone who says that it’s just a matter of refining prompts has never:

  • tried to actually get a full technical description of a task from humans (good luck asking a non-technical product owner whether you should store images in WebP or PNG)
  • worked on a project with more than a few moving parts (that team responsible for scalable storage? it’s outside the statistical model of your AI, and it has its own constraints)



Yeah, it would be nice to have a formal and structured language for specifying business logic, one that could go into more technical detail should the need arise… oh wait, that is just a programming language :joy:
