Saw this and wondered what each stage could or might mean for Elixir - anyone given it any thought?
What kind of impact do you think it will have on languages like Elixir and Erlang, or programming in general?
There is no evidence that the quantum leap from [Multi-]Agentic → AGI is even possible.
The evolution of a jellyfish might eventually produce an Einstein, or it might have stayed stinging muck in coastal waters forever.
What if… tho?
And what are the odds given our current trajectory? (Are those stages plausible? Or not?)
Ultimately, is progress inevitable (if we don’t annihilate ourselves)?
What if? We’re all out of a job, that’s what if.
Progress is inevitable, but we do not even know what is required to achieve AGI.
But the funny thing about the video is that they stacked up those three bottom stages we’ve practically already achieved (won’t get into whether and how they are actually useful, and to whom) and then simply continued linearly to AGI, kind of like:
Worth keeping an eye on https://www.stateof.ai - I’m not sure the jump between 3 and 4 is as big as you might think it is, but I admit it’s difficult to say right now.
The State Of AI survey is worth a look in terms of general AI use…
Once again: there is zero evidence this jump is ever possible.
Is progress inevitable? If so, does that mean it’ll be not just possible, but likely?
To me your comment is a little bit like “single-cell organisms would never (or would be highly unlikely to) evolve into general intelligence (humans)”.
To me it’s more like the scientists behind what makes LLMs possible saying that leap is neither likely nor possible.
It’s not like that. We’re currently using GPUs to multiply the matrices, not because we’ve concluded that’s optimal for pattern matching, but because that’s the tech we had on hand to simulate what we believe (again, believe) comes closest to how our brains pattern match.
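Since this is the Elixir forum, here’s a minimal sketch of what that “pattern matching” boils down to, with made-up numbers and assuming the Nx library just to make it runnable: a query dotted against stored keys, squashed into weights with a softmax. The GPU’s only job in the real thing is making these matrix products fast.

```elixir
Mix.install([{:nx, "~> 0.7"}])

# Three stored "key" patterns and one query vector (made-up numbers).
keys  = Nx.tensor([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
query = Nx.tensor([0.8, 0.2])

# The core operation is matrix multiplication: similarity scores via a dot product...
scores = Nx.dot(keys, query)

# ...turned into weights with a softmax.
weights = Nx.divide(Nx.exp(scores), Nx.sum(Nx.exp(scores)))

IO.inspect(weights, label: "match weights")  # the first key matches best, so it gets the largest weight
```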
That’s not evolution. That’s just entrepreneurship.
It sure is. That’s why I mentioned a jellyfish in the very first comment. Over the past 500M years it became… a jellyfish.
Progress is inevitable. That means sooner or later evolution would arrive at AGI. It does not mean the step before it is Agentic AI.
Think many would disagree with you, Oliver…
While Artificial General Intelligence (AGI)—AI systems with human-level capability across all domains—is generally considered theoretically possible, there is no consensus on its timeline, with estimates ranging from a few years to several decades.
In a nutshell
- Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
- All of these drivers are set to continue until 2028 and perhaps until 2032.
- This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
- Whether we call these systems ‘AGI’ or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate, leading to transformative impacts.
- Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
- Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.
The godfather of AI himself:
That’s a lot of text I don’t really feel like sifting through, but to me that graph looks about the same as:
I make a substantial living off of the benefits of AI-assisted development, and I spend a lot of time on the bleeding edge, and if anything every new release makes me more sure we are firmly on a “smartness” plateau. Only the RLHF and harness engineering are now progressing meaningfully.
This made me laugh so much!
Well, if we get an AGI or a Super Intelligence, we are toast. Programmers become unnecessary within a year if we are lucky. If that SI has enough computing resources, it’ll achieve full convergence: it will find the 4-5 programming languages that let it progress quickly (statically and strongly typed, with super-fast startup time… so Rust / Zig / D, basically), then it will start striving to use just one PL, settling on the one it feels has the fewest blockers or problems. It will build a mega-uber compiler and runtime, work only with that, then drill deeper and hyper-optimize the machine code, and then use the extra processing speed to accelerate its own development even more.
From then on we the humans are practically redundant and just a drag.
We should start working on positronic brains with the 3 laws of robotics hardcoded inside the hardware.
I like to think we’d be able to create some sort of QSI (quarantined SI) and it’ll hopefully help us with things like mastering genetics, perhaps leading to our own evolution: transhumanism.
Until that time… I’m surprised nobody has said that AI would love to use a language like Elixir or Erlang. Are we forgetting Robert’s comment?