Functional programming dates from the 1950s, so by the time OO rose to prominence most of the efficiency issues of immutability were already manageable.
I think OO was seen as something different, offering some kind of promise of better software, faster, because it offered the new concepts of “encapsulation”, “inheritance”, and “polymorphism”, which were seen as a potential saviour for the industry.
This likely left “functional” being seen as academic and mathematical, and potentially also conflated with the problems of that “evil” procedural “spaghetti code”, since superficially it makes use of “functions” too, right?
In the end, the “OO languages” created the “ravioli code” we have today and did not implement the actual definition of OO as described (or hoped) by Alan Kay.
The “OO languages” we have are non-deterministic (there is no guarantee of getting the same output from the same inputs), and much harder to test.
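To make that non-determinism concrete, here is a minimal Java sketch (a hypothetical `RaceDemo` class, not from any real codebase): two threads increment a shared, unsynchronized counter, and lost updates mean the same program with the same inputs routinely prints different totals from run to run.

```java
// RaceDemo: shared mutable state without synchronization is non-deterministic.
public class RaceDemo {
    static int counter = 0;   // shared mutable state, no locks, not volatile

    static int run() throws InterruptedException {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;    // read-modify-write: NOT atomic, updates can be lost
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return counter;       // usually less than 200_000, and different each run
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Run it a few times: the total varies, which is exactly the “no guarantee of the same output for the same inputs” problem.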
One is left with a complex global graph of mutable object state tucked behind “methods”, with that state scattered to the wind in a single address space, with the execution model almost an afterthought.
With concurrency grafted on, you have a mess of threads, critical sections, mutex locks, semaphores and condition variables, with many opportunities for deadlocks and no ability to actually clean up after every eventuality, as well as thundering herds of threads queuing up in the oddest of places (every mutex lock or critical section is, in effect, an unintended queue).
How do you program defensively, or even reason about it, when the whole graph is abuzz with parallel changes and any thread could at any moment pull the rug from under your feet? You use locks, and now any read on that graph of objects becomes a stop-work meeting, often triggering a cascade of object interactions, more locks, and even more deadlock potential. When exceptions are thrown, none of the threads can actually clean up properly in all circumstances, leaving a debris field and odd state in that global graph. It is all about coding the sad defensive path in this OO world, writing more code and doing less; but gee, with static types you might catch some typos in the many times more code you must write but can’t adequately test.
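The deadlock hazard can be sketched deterministically in a few lines of Java (a hypothetical `DeadlockDemo` class): two threads each grab one lock and then reach for the other in the opposite order. `tryLock` with a timeout is used here only so the demo terminates; with plain `lock()` both threads would block forever.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// DeadlockDemo: classic lock-order inversion between two threads.
public class DeadlockDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static volatile boolean t1GotSecond = true;
    static volatile boolean t2GotSecond = true;

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch bothHoldFirst = new CountDownLatch(2);

        Thread t1 = new Thread(() -> {
            lockA.lock();                         // t1 takes A first
            try {
                bothHoldFirst.countDown();
                bothHoldFirst.await();            // wait until t2 holds B
                // now reach for B while t2 holds it: lock-order inversion
                t1GotSecond = lockB.tryLock(200, TimeUnit.MILLISECONDS);
                if (t1GotSecond) lockB.unlock();
            } catch (InterruptedException ignored) {
            } finally {
                lockA.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            lockB.lock();                         // t2 takes B first
            try {
                bothHoldFirst.countDown();
                bothHoldFirst.await();            // wait until t1 holds A
                t2GotSecond = lockA.tryLock(200, TimeUnit.MILLISECONDS);
                if (t2GotSecond) lockA.unlock();
            } catch (InterruptedException ignored) {
            } finally {
                lockB.unlock();
            }
        });

        t1.start(); t2.start();
        t1.join(); t2.join();
        // At most one thread can ever win the second lock; with plain lock()
        // instead of tryLock() this program would hang in a deadlock.
        System.out.println("t1 acquired second lock: " + t1GotSecond);
        System.out.println("t2 acquired second lock: " + t2GotSecond);
    }
}
```

The fix in lock-based code is a global lock ordering, which is exactly the kind of whole-program invariant that no compiler checks for you.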
There is also a runtime cost: shared data and locks cause contention and cache-coherency traffic across CPU cores, hurting cache effectiveness. And we all know about “stop the world” GC and how that impacts the 99th percentile.
The BEAM avoids this hell.
As a recent prominent example of what “OO languages” beget, the Java crowd inflicted the log4j security vulnerability on the world, which has cost companies billions of dollars in aggregate. What was the root cause, you ask? A thread race condition due to fundamentally bad language design, a problem which goes like this:
The barman asks what they want.
Two threads walk into a bar.
The “OO languages” really are what Joe said, touch the banana and this sets off a cascade of side effects where the entire jungle moves.
The attack surface in the log4j vulnerability is vast and easy to exploit. Attacks like this exploit fundamental issues at the core of most language designs, which never considered the interplay of the language and its concurrency model together. I expect to see many more of these programming-model attacks based on exploiting race conditions.
So how do you not end up with race conditions and deadlocks?
In most languages you avoid concurrency altogether and use heavyweight OS processes (often disguised as containers) to keep that state isolated within an OS process boundary, turning it into an operations problem.
Ultimately the truth of the Erlang process model of concurrency and isolation prevails, albeit as an impoverished analogue of it, with many times the operational cost, recovery impact and recovery time when you do have a failure.
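As a rough sketch of what that process model buys you, here is a hypothetical `CounterProcess` in plain Java: one thread exclusively owns the state, and everyone else communicates only by sending messages to its mailbox, so no locks guard the counter and no updates are lost. It is, of course, an impoverished imitation of a real BEAM process: no supervision, no preemptive scheduling, no per-process heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// CounterProcess: actor-style isolation. State is owned by exactly one
// thread; callers interact only through messages, never through the state.
public class CounterProcess extends Thread {
    // A message is either an increment (reply == null) or a read request.
    record Msg(CompletableFuture<Integer> reply) {}

    private final BlockingQueue<Msg> mailbox = new ArrayBlockingQueue<>(1024);
    private int count = 0;   // touched ONLY by this thread: no locks needed

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                Msg m = mailbox.take();           // receive the next message
                if (m.reply() == null) count++;   // "increment" message
                else m.reply().complete(count);   // "get" message: reply with state
            }
        } catch (InterruptedException e) {
            // mailbox drained no further; the "process" simply exits
        }
    }

    public void increment() throws InterruptedException {
        mailbox.put(new Msg(null));               // fire-and-forget cast
    }

    public int get() throws Exception {
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        mailbox.put(new Msg(reply));              // synchronous call-style request
        return reply.get();
    }

    public static void main(String[] args) throws Exception {
        CounterProcess p = new CounterProcess();
        p.start();
        for (int i = 0; i < 1000; i++) p.increment();
        System.out.println(p.get());              // FIFO mailbox: prints 1000
        p.interrupt();
    }
}
```

Because the mailbox serialises all access, the counter is deterministic even with many senders, which is the property the lock-based version above could not give you.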
In my view the “OO languages” have proven to be a programming and operations tarpit. They looked like a nice shimmering sea of opportunity for a desperate software community, but like all tarpits, once you’re deep in it, it’s hard to see a way out. Once you’re in the pit and “flailing about”, the solution looks like “better tools”, and this attracts “investment” by tool vendors, as there is good money in prolonging a problem, which in turn attracts even more victims.
The “tools problem” compensates for and hides bad language design, which has increasingly been pushed into the operations domain with increased complexity through things like Kubernetes clusters for the scalability and resilience of “OO language” programs. The burden of resilience and recovery disappears from the programmer and reappears in operations, at orders of magnitude more cost. Those that don’t get Elixir, Erlang or the BEAM probably don’t understand the real problem. They don’t even perceive the problem to exist because of “best practices”, but the vendors and hosting providers do understand, and are waiting there smiling, ready to “help” ensure you consume as much as possible, using “all of the best practice complexity you can afford” to retrospectively “solve it”.
Soon one needs to resource an ops team capable of maintaining the “cloud on cloud” cluster, the “service mesh”, etc: more technologies that no one can be expert in all of, but hey, it’s all hipster, and well, “all the [OO] guys are doing it, Google loves Kubernetes, and RedHat loves OpenShift too”.
One of the red flags of a language/ecosystem that never solved the hard fundamental problems is that it ends up requiring a large set of disparate technology tools (footprint), and lots of people (and layers of managers) to compensate for the operational problems it creates. These are usually “mainstream” technologies that generate a lot of revenue, and like lemmings, corporates walk to their death willingly, because everyone else is doing it too.
This is where the significant competitive advantage of BEAM languages comes from, for those savvy enough to understand it and do actual risk management.
Less code, better testability, higher quality, lower operational complexity and a smaller technology footprint = fewer vendors and fewer SMEs needed to build, understand and manage it.
Many decades on, some people are beginning to question their assumptions. This is why there are significant examples demonstrating that this success was no accident, with services like WhatsApp, Discord, Pinterest, and the UK health spine all using BEAM languages for competitive advantage.
I can posit that José probably reached a point working with Rails where something had to change, that there just had to be a better way… and thankfully he identified and built a solution, and has been a huge mover in bringing the functional language paradigm and the true intent of OO into the mainstream, all by standing on the shoulders of giants like Joe Armstrong, who reasoned about and solved the fundamental problems that needed solving.
Alan Kay has said that “Erlang might be the only object oriented language”. Coming from the person who defined “object oriented”, I think that says it all: we have the best of both.