Who is using ChatGPT to drive development and what have you made with Elixir so far?

This is awesome thank you!

The limited amount of training material for Elixir versus many other languages seems to make ChatGPT 4 less effective, especially for newer stuff like the current version of LiveView. I am finding it helpful for quickly learning about Elixir functions, and that’s nice, but the practical recipes it offers me are not very good. For example, I want the user to be able to scroll back and forth in a paginated, streamed CSV file. That’s really straight Elixir, no LiveView or Ecto or anything. And ChatGPT 4 says

  def stream_file(file_path, lines_per_page) do
    File.stream!(file_path)
    |> Enum.chunk_every(lines_per_page)
    |> Stream.cycle()
    |> Stream.with_index()
  end

  def read_page(stream, current_page) do
    stream
    |> Enum.drop(current_page)
    |> Enum.at(0)
  end

…which is useful for identifying some of the key functions involved, but certainly will not actually work. So I say, “Can you check that read_page function? I don’t think it will work.” And it agrees with me and proposes something else that won’t work, but is a little closer.
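For the record, here’s the boring version I ended up writing by hand (my own sketch, not ChatGPT’s, and the signature differs: it takes the path directly). Drop whole pages lazily, then take one page, so the file is only read as far as needed:

    # Hand-written replacement: lazily skip `page` pages of lines,
    # then materialize just the one page we want.
    def read_page(file_path, page, lines_per_page) do
      file_path
      |> File.stream!()
      |> Stream.drop(page * lines_per_page)
      |> Enum.take(lines_per_page)
    end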

I sort of suspect this would work better in Node because there are a whole lot more online examples of EVERYTHING in JS/Node than in Elixir. So I ask for a Node implementation. The Node implementation looks better (literally, because ChatGPT applied code highlighting to the Node version and not to the Elixir one). But interestingly, you see its preference for older paradigms. E.g., returning promises explicitly like it’s 2019 or something:

async readPage(pageNumber) {
    return new Promise((resolve, reject) => {
...
1 Like

I have to keep telling it to stop using the ~L sigil and to use ~H instead, and most of the other problems I hit are HEEx issues too.
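For anyone hitting the same thing, this is the shape current LiveView expects (a minimal sketch of a ~H render; the template content is just a placeholder):

    # ~L (the old LEEx sigil) is deprecated; current LiveView wants ~H,
    # which compiles HEEx and requires the variable to be named `assigns`.
    def render(assigns) do
      ~H"""
      <p>Hello, <%= @name %>!</p>
      """
    end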

1 Like

Ok here’s something cool to share.

Once you get it to understand the standards you can move much faster.

So there is a small build-up, which I bet you could condense into a pre-prompt.

Anyhow, for basic levels of achievement, check out this chat I had on ChatGPT

I had to keep coercing it, but once it got it right the way I wanted, it locked in.

2 Likes

I love using ChatGPT and Copilot. A year ago I would find myself getting weary writing the boilerplate for something I had done hundreds of times before. Now ChatGPT or Copilot (I am using Copilot with Emacs) can generate a really great starting point for me in seconds.

The generated code always looks right and it almost always has subtle problems.

I am not worried about becoming obsolete. I am just faster. The way I think of it is like building the pyramids. We used to throw human bodies at the problem until it was done. ML-driven tools are like bulldozers/cranes/diggers in this analogy. They can do more work in a day than I can do in a year but they are still useless without human influence.

6 Likes

Check out https://cursor.sh

This IDE + an OpenAI API key + Copilot is a killer combination

3 Likes

I’ve been using copilot, and I’m about ready to kick it to the curb.

I read these breathless accounts that the sky is falling for programming, asserting that AI generated a fully functioning Tetris clone in JavaScript, but so what? GitHub has plenty of complete implementations of various games in JavaScript. GitHub is disproportionately JavaScript! When it comes to Elixir code, Copilot is the boss’s nephew assigned to you as a junior pair programmer who once did a thing in –– what’s that programming language called again…? Oh yeah! –– HTML.

When I get to domain-specific stuff, Copilot writes really capable-looking code. Most of the time it doesn’t even error out. But that’s the problem for me. The cognitive load of having to suss out the AI’s “dreaming” is more than if I just powered through the code myself. It insists that there are functions in Elixir’s core modules that simply don’t exist. I’m not talking about the Copilot text saying, “But you’ll have to implement this function yourself”, but telling me straight up that a function exists, is documented, and this is how you use it, when it is clearly, unmistakably wrong.
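To give a flavor of what I mean (the suggested function here is a made-up stand-in, since I didn’t save the real completions):

    # Copilot will confidently offer something like this (hypothetical example;
    # Enum.paginate/2 does not exist anywhere in Elixir's standard library):
    #
    #     Enum.paginate(rows, 25)
    #
    # when the real building block it should reach for is:
    Enum.chunk_every([1, 2, 3, 4, 5], 2)
    #=> [[1, 2], [3, 4], [5]]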

It lays out HTML using Tailwind like a field. It writes decent CSS. It’s really quite good at JavaScript. Domain-specific Elixir is absolutely rubbish.

4 Likes

Another upvote for Cursor here - it’s flat-out amazing. Like, game-changer-level amazing.

It will automatically vectorize your project’s source files and use them on demand to give the model proper context, and inline prompting is just so much better than writing paragraphs into a dedicated ChatGPT browser window and copying stuff back and forth.

It can rewrite existing code and refactor it into a desired style at will, and it can generate code, like a new function in some module, from a mere description of what you want it to do. There is an “auto debug” button on errors that is pretty good at pinpointing the problem, or at least sends you in the right direction in many cases.

And you can “talk” with your entire codebase at will and discuss ideas or ask questions.

Last but not least, it’s a fork of VSCode, so all plugins work just as-is, but you get the AI-first functionality seamlessly on top.

Also, you can bring your own OpenAI key, and then you pay only for API usage instead of having to sign up for a subscription.

I failed at running it through WSL :sweat:
I’ll try again, I’m curious to try it out.

It’s almost like how the cell phone made everyone into a professional photographer.

This most certainly is not true.

2 Likes

Well yeah, it’s an exaggeration, and most people have no clue what an aperture is, or why it’s technically bigger as the number gets smaller, or even what shutter angle is.

But what it did do was make it so you could ignore most of that and still take amazing photos. And thus photography as a professional job is taken for granted, because most people rely on their app to do the hard work of understanding the root of the task.

So in that regard it leveled the playing field by making the solutions, the tools, and the understanding behind those solutions easier to obtain. And thus everyone with a cell phone today has vastly more tools at their disposal to achieve the same results a photographer could get just 20 years ago.

In other words, to get the same level of tooling as standard photography gear 20 years ago, the quality of lens and sensor excepted, you still spent close to a few grand.
Today that’s all in your pocket for less than $500.

If you don’t think that has lowered the bar, making the industry accessible to the masses so that anyone can achieve the same results as a highly paid professional, then I have a bunch of unemployed photographers to introduce you to.

But also, yeah, I do know a thing or two about being a photographer.

Thanks for the pointer. I just downloaded it.

I notice that the IDE itself is not open source. The github repo is issues-only.

How does it compare with open-source alternatives, such as Continue, for example? (There seem to be at least half a dozen active projects with similar goals.) One advantage Continue has is that it allows non-OpenAI models to be used on the back end.

The key to understanding how good such a tool is would be how much of it has been bootstrapped using itself.

I’d love to see what part of the code behind all the copilots has been designed and written using the tools themselves. Meanwhile, all my attempts to use any of them simply remind me of mentoring a junior. They are good at executing tasks like “OK, here are docs, here are tests, here are explicit requirements, please write code which adds two integers,” and of absolutely no use when it comes to my daily routine, like (that’s literally my task today) “implement a function which takes a cron line and produces an infinite stream of DateTime instances.”
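For scale, here’s a stripped-down toy version of that task, the kind of thing I’d just type by hand (it handles only numeric fields and “*”; real cron syntax needs ranges, lists, steps, and the 0 = Sunday weekday convention):

    defmodule CronStream do
      # "30 9 * * 1" -> lazy, infinite stream of matching %DateTime{}s,
      # scanned at one-minute resolution from `from` onwards.
      def stream(cron_line, from \\ DateTime.utc_now()) do
        fields = String.split(cron_line)

        from
        |> Stream.iterate(&DateTime.add(&1, 60, :second))
        |> Stream.filter(&matches?(&1, fields))
      end

      defp matches?(dt, [min, hour, day, month, dow]) do
        field?(min, dt.minute) and field?(hour, dt.hour) and
          field?(day, dt.day) and field?(month, dt.month) and
          field?(dow, Date.day_of_week(dt))
      end

      defp field?("*", _value), do: true
      defp field?(field, value), do: String.to_integer(field) == value
    end

    # CronStream.stream("30 9 * * 1") |> Enum.take(3)
    # => the next three Mondays at 09:30 (Elixir's day_of_week: 1 = Monday)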

2 Likes

I think all the tools are using AI assistance to a greater or lesser degree; whether they are built using the tool itself is unclear. I have personally found that for anything complicated, I keep going back to the chat interface and copy/pasting relevant code to get good results. But it seems like a well-designed chat UX for an IDE should be able to get the best of both worlds: the flexibility of chat, but with access to IDE tooling to summarize context for the LLM.

One tool that is definitely using itself is Aider; the maintainers even document the conversation that produced the diff as part of the commit. However, that’s a command-line tool, and you would think the approach could be improved upon with IDE tooling.

Would love to see a link to such a commit. I went through the last 10 commits in their repo and found zero evidence of the tool itself being used.

The chat interface gives me the exact same result as Copilot in the IDE: none. I have never seen ChatGPT produce anything somewhat complex. I mean, never.

What it can produce without hallucinating, I would type directly as code about ten times faster.

3 Likes

Here’s one

Here’s another from a related repository that aider depends on, maintained by the same author.

I guess they don’t use it on every commit, and it looks like they’re editing the conversation that gets placed in the commit. I remember noticing it on some code that I was looking at, and there appeared to be a full conversation.

Hmm. Very much not my experience (unless we mean different things by complex). I’ve found it to be quite effective at saving time. But yes, you have to check for mistakes. I use it for coding the same way that I use it for text. It gives me a first draft, which I then generally refactor/rewrite a few times (sometimes within the chat itself) until I’m happy with it.

Even Copilot, in the sense of automated text completions, is decent at times for short snippets.

Are you using ChatGPT 4?

Well, this one is contrived (really? using a tool to bring this change into the codebase?) and rather proves the tool’s futility; the commit message is longer, yet less clear, than the diff itself.

No, I am not using anything, because I do not see any positive impact, and I do see a lot of negative impact such a tool might bring to the development process. If you were asking whether my experience is based on the 4th version, then yes, I’ve tried all of them.

Could be they stopped adding the messages to the commits for that reason. I would have. Other than proving that they used the tool itself, it doesn’t really add that much to the commit.

But your question was whether anyone was using it (GPT) on the tooling itself. They certainly appear to be doing that.

I agree that it might have a detrimental impact on the process if people blindly start using AI-generated code. For the time being, you certainly need to know what you’re about when merging in code that’s been AI generated.

I guess we disagree on the positive impact. I find it a time saver.

They certainly tried, and even showed a result where the change itself would have been faster to figure out, type in, commit, and push than waiting for the tool’s response.

This is by no means bootstrapping, and it surely slows the development process down rather than boosting it. So my original question stands.