To date, Elixir is my favorite language. I never think of reaching for another language if I don’t have to… or at least I didn’t, until recently.
Also, once businesses catch on to the productivity improvements, there may just be a mandate to use only AI-supported languages in the office.
I think Elixir should find a way to be properly generated by AI tools, lest organizations and developers start using whatever language is most productive.
The question is: how much boilerplate code do you really write? Elixir, compared to other languages, has little to no boilerplate, and for cases like Phoenix scaffolding there are configurable generators.
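As a sketch of what those generators cover (the context, schema, and field names below are made up for the example, not taken from this thread):

```shell
# Phoenix's built-in generator scaffolds a full CRUD slice — context,
# schema, controller, HTML views, and tests — from a single command.
# Resource names here are illustrative.
mix phx.gen.html Accounts User users name:string email:string:unique
```

The generated files are plain Elixir checked into your repo, so you can read and edit every line — which is part of why the "AI saves you from boilerplate" pitch lands weaker here than in more ceremony-heavy languages.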
I wouldn’t want an AI incapable of problem solving to generate complex code for me, because as tempting as it seems, productivity drops sharply when you have to refactor generated code instead of writing your own new code.
Productivity is about solving a problem in the most efficient way, not about generating code the fastest, so I frankly don’t understand this argument.
You are giving both of these tools too much credit.
As @D4no0 mentioned, generating code just so you can get started with task X is not such a huge deal as people make it out to be. Emacs and VIM have had “snippet managers” for decades where you press a particular combo of keys and e.g. get a function with a for-loop over a list written for you.
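As a concrete sketch of what those snippet managers do, here is what a template looks like in yasnippet (Emacs’s snippet manager); the loop body itself is a made-up example:

```
# -*- mode: snippet -*-
# name: for-each
# key: each
# --
for (const ${1:item} of ${2:items}) {
  $0
}
```

Typing the key and expanding it fills in the skeleton, with `${1:…}`/`${2:…}` as tab stops and `$0` as the final cursor position — deterministic, instant, and no license questions attached.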
GitHub Copilot and ChatGPT are better in that for sure – but not by much.
98% of all commercial programming is reading, understanding and modifying code. Improving the 2% that is the writing of the initial code is not impressive by any stretch of the imagination.
And finally, having something generate bigger scaffolds for you comes with the severe disadvantage that you might not understand that initial code, so you’ll lose more time first trying to understand it and then actually extending it, or even using it on a basic level.
I’m wondering about this. At least in our company’s software, a lot of the time I first have to think about how to implement a feature because it might affect many places in the app. Much of the time goes into design, and actually writing the code is usually just a small part of the equation. It’s really hard for me to believe that 50% productivity claim from Copilot. I write TypeScript, which I think it supports, so maybe I’ll try it out.
I agree, it will be mandatory that code can be generated by AI.
The fact that ChatGPT is so much better than Copilot — at a task it is not even made for — lets us guess where Microsoft will go with it in the future. In a few years all code will be written in dialog with a bot; not a single line of code will be entered by hand.
They… really aren’t. After two weeks of running Copilot on a JS codebase:

- it managed to suggest an autocompletion maybe 5% of the time;
- in maybe another 5% of cases the suggestion was actually useful or correct;
- in all other cases IDEA’s autocompletion was better, and actually correct.
Both Copilot and ChatGPT are barely competent for the smallest and easiest of use cases. In the vast majority of cases it’s like a junior developer misunderstanding the assignment and spitting out some code that you have to thoroughly review to catch all the mistakes.
Don’t mistake rave reviews on YouTube/Twitter that are the result of an early honeymoon period for actual usability of the products.
For what, tbh? AI code generation is currently garbage technology. You also mentioned ChatGPT as a code generation AI/tool. It’s not. It’s a text generation AI and nothing else. Use the correct tool for the job, and try not to believe stupid marketing phrases.
I think this is the same argument that revolves around no-code solutions: a lot of the solutions you build don’t have to be made from scratch and can be solved either at a higher level or by something that was already built. A simple example would be a system like Node-RED, because in most cases the defaults it provides are more than enough for you to get going.
Programming languages exist for those fine-grained requirements, where you can control everything from concurrency to memory management — and an AI tool, by its nature, tries to abstract away that fine-grained control over the code.
For those saying the code generated isn’t good quality or can’t refactor, or is only for boilerplate:
Case in point: When we first launched GitHub Copilot for Individuals in June 2022, more than 27% of developers’ code files on average were generated by GitHub Copilot. Today, GitHub Copilot is behind an average of 46% of a developer’s code across all programming languages—and in Java, that number jumps to 61%.
It hasn’t even been out for a year yet. What will it be like in 2 years? 5 years?
And they just launched for businesses recently. It is coming!
This reminds me of these ads: “it clears 99% of all bacteria”.
Moreover, there is a licensing question about such tools: GitHub Copilot was trained on open-source code, not all of which is licensed for use in private projects — and this applies to all AI tools that steal data from random places.
A specialized AI that would review your whole codebase and find all its problems would be really helpful. I don’t think these generic ones will be proficient enough any time soon, and my understanding is that Copilot is not designed for that at all. Also, getting an AI to understand concurrency well enough to catch all those problems might not be a trivial task.
But one problem I see is this: how can an AI know the design spec well enough to tell that something is actually wrong with your code? Code can look like it’s working fine, yet still be wrong according to a spec that isn’t encoded in the code itself. I think this problem is much harder to crack than it looks from the surface.
I assume you’re just being funny here because yes, professionally speaking you have to review and understand code much more than you have to write it.
Writing is the much easier part.
Nobody is stopping you from using these tools. By all means, go wild with them. We do have to warn you however: just having a bucket load of code you didn’t write – and it mostly works – will hurt you when you have to evolve it at one point. That’s all really.