I thought it would be interesting to discuss how independent languages like Elixir, or those not created by FAANG (or whatever the AI-era equivalent might be), will be able to compete/survive/stay relevant. Assume, for the purpose of this thread, that the AI industry ends up like pretty much every other industry in a capitalist world:
In the beginning there is ‘healthy’ competition (where most languages will probably be treated well as the companies will be competing for customers/users).
Over time smaller companies are acquired/swallowed up/go out of business (some languages may begin to get neglected or dropped at this stage).
I realise there could be lots of different outcomes but for the purpose of this specific thread (and in the interest of being prudent) let’s assume the above pans out - what can programming languages do to stay relevant/survive?
Which one, if any, might become xAI’s favorite language? Or Meta’s? Amazon’s? Dell’s?
Might be missing something but where exactly do you see a threat?
From what I’m seeing, every IT corp and their kitchen sink is building their own “data” centers now. Sure, the industry will consolidate at some point, but do you really think mergers the likes of MS + Amazon are even remotely possible?
In comparison, I haven’t noticed a preference for corporate OSes on AWS. Actually, it’s like everyone’s deploying on (Alpine) Linux there by default. Purely a sound technical choice.
It’s just one possible scenario based on what’s happened in other industries - a few companies end up controlling the market, and from what we’ve seen of big tech, they sure do love their own languages/frameworks; if they control the tools they can ultimately control or influence the competition to some degree or another.
Going back to what independent languages can do: Elixir (and languages like Ruby) already has a head start here IMO, by being intuitive, natural, easy to use, and beautiful.
If people enjoy programming in a certain language, then this could be one way to reduce reliance on AI and the firms controlling it.
If a programming language is easy to use (intuitive and natural), then again, there could be less reliance on advanced AI and the firms controlling it.
If a programming language brings you joy, whether from the process or just from being beautiful, then again, there could be less reason to want to use AI.
In summary: amid all this hype and excitement around AI, focusing on the things that might draw people into using a language directly (or with a very light sprinkle of AI) might be a real lifeline/advantage.
I’ve made this joke a couple of times (hopefully not on the forum yet) but I like to imagine a future where AI slop has gotten out of hand and someone says, “Maybe we should come up with some kind of deterministic language to program computers with, and we can all learn it, like a ‘programming language’ if you will.”
The funny thing is that the languages that are very popular right now may actually be the first to ‘die’/stop being used by humans. When AI gets to a certain acceptable level, who’s going to want to program in JavaScript? (Not me!)
I asked ChatGPT which language it would choose in the scenario where all coding is left up to AI and people just describe what they want it to build. Its first pick was LLVM IR, followed by Rust. At first, I asked if it would be preferable to code in binary, but it said no, that some level of abstraction was useful.
At some point I imagine AI will create its own hyper-optimised language or abstractions over machine code. I actually wonder if there has already been any effort to get an AI to create its own language…
Ya, I’ve heard this idea before and it only makes sense. If “vibe coding” becomes a viable thing (good gravy do I ever hate that term), it makes little sense for the AI to be hammering out code meant for humans. People scoff at that, but I’ve never learned assembler or Erlang bytecode and I still write programs. Not sure if it’s totally equivalent but maybe…?
In any event, Imma keep doing what I’m doing and hope I’m retired before this all gets out of hand. I’m assuming I won’t be, though.
What do you mean by this? “AI” + “vibecoders” taking over before you’re retired? Not buying it.
The other day I spent something like an hour trying to get Grok and ChatGPT to find a bug in a reduced snippet of just HTML + AlpineJS listeners. They both went on to philosophize in the same direction, obviously not reasoning but regurgitating (in variations) what they’d scraped from the web. It was a “stateful” bug (in terms of the browser’s handling of events), and eventually I had to find it myself once they each entered their end-of-any-additional-value response loops.
Now, I’m only trying to imagine the order of magnitude of the problems the AI is going to encounter once it starts digging beneath the surface, with the chances of success of the “feedback → results → feedback” loop asymptotically approaching 0.
A prompt “communication” of this kind comes to mind:
[… weeks of explaining and forgetting/changing one’s own requirements for the nth time later]
=> Great, finally it all works! Now make it support 100,000 concurrent users.
Sure, rewriting everything to run on BEAM now…
=> But, nothing works anymore!
Here’s why this happens…
Just AI in general. I think we’re a long way off “vibe coding” being viable for massive projects. As touched on, I feel programming languages would need to look very different, though I have no idea what that might look like.
I’ve been dipping my toes in because we’re being encouraged to at work (and they are paying for it). My CTO is great and he’s maintaining a healthy degree of skepticism, but he wants us to experiment, which I think is more than fair. I still haven’t tried agentic coding, which is to say I’ve only heard stories like yours, so I have very little context on what the experience is actually like.
I’m more in favor of thinking about what different approach we’ll need in order to organize and supply the requirements and solutions to the AI, assuming it manages to pull things off beyond the surface.
The problem “vibecoding” faces is not the translation from ideas/requirements to code but from the problem domain to the solution domain. Who’s going to do the work in between, if not humans? When a client or a business analyst unloads their requirements onto developers, it’s a two-way communication from then on, not just a multiple-choice question. It’s finding the problem behind the problem that saves the day, not translating a solution into code.
Given the way the LLMs currently work, I simply don’t find it feasible for them to pattern match their way out of such situations, no matter how large the “model”.
And don’t get me wrong, I would love so much for the “AI” to be way smarter and help me more than it’s currently capable of. The bitter taste I’m left with after spending considerable time hoping it will get me somewhere, only to fall back on my own human intelligence, feels so bad, like a complete waste of time. After a couple of such wasteful disappointments, it takes me a while to dare try again.
LISP would be amazing for a dynamic, self-correcting, real-time agent. LISP runtimes lag so far behind various others, though, and that’s especially egregious when you compare them to… the BEAM VM.
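For what it’s worth, Elixir on the BEAM has its own “code as data” story via quoted expressions. A minimal, purely illustrative sketch of the kind of runtime self-rewriting I mean (the “correction” rule here is made up):

```elixir
# Illustrative only: hold logic as an AST, rewrite it, re-evaluate it.
ast = quote do: 10 * 2 + 1

# A made-up “self-correction”: rewrite every `+` node into `-`.
fixed_ast =
  Macro.prewalk(ast, fn
    {:+, meta, args} -> {:-, meta, args}
    node -> node
  end)

IO.puts(Macro.to_string(fixed_ast))  # => 10 * 2 - 1

# Evaluate the rewritten code at runtime.
{result, _binding} = Code.eval_quoted(fixed_ast)
# result == 19
```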
IMO something running on the BEAM VM has much higher chances of being the birth cradle of a true AI (general/super intelligence), by mere virtue of it spawning thousands of mini agents and orchestrating them. Nearly every other runtime will stumble and fall.
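To make that concrete, here’s a minimal sketch of the fan-out/fan-in pattern I have in mind. The agent “work” is a placeholder (just upcasing a string), not real inference, and the module names are made up:

```elixir
defmodule MiniAgent do
  # Each mini agent is just a lightweight BEAM process: it waits for a
  # task, does its (placeholder) work, and reports back.
  def start(orchestrator, id) do
    spawn(fn ->
      receive do
        {:task, input} ->
          send(orchestrator, {:done, id, String.upcase(input)})
      end
    end)
  end
end

defmodule Orchestrator do
  # Fan a list of inputs out to one agent process each, then gather
  # every result.
  def run(inputs) do
    me = self()

    pids =
      inputs
      |> Enum.with_index()
      |> Enum.map(fn {input, id} ->
        pid = MiniAgent.start(me, id)
        send(pid, {:task, input})
        pid
      end)

    Enum.map(pids, fn _ ->
      receive do
        {:done, id, result} -> {id, result}
      end
    end)
  end
end

# Usage: orchestrate 10,000 concurrent mini agents.
# Orchestrator.run(List.duplicate("hello", 10_000))
```

Each BEAM process costs only a few KB, which is why spawning 10,000 of them is unremarkable here, while most other runtimes would need thread pools or async frameworks to approximate it.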