OO or FP easier for beginners

MOD NOTE: These were extracted from https://elixirforum.com/t/elixir-should-focus-on-newbie-adoption

I’m going to push back on some of the posts above suggesting Elixir is not a good language for learning “programming” in general. I didn’t really solidify many basic concepts of recursion, data structures, and state management until I started working with Elixir and the associated docs and tutorials.
Recursion is an obvious functional programming basic concept, so it makes sense that more OO or imperative languages might not highlight this concept as well. For me, recursion really clicked with Elixir in a way that helped me go back to other languages and suddenly be able to apply that technique effectively.
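As a minimal sketch of the kind of thing that clicked for me (module and function names are made up purely for illustration), a list sum where the base case and the recursive case are just separate clauses:

defmodule MyList do
  # Base case: an empty list sums to 0.
  def sum([]), do: 0

  # Recursive case: add the head to the sum of the tail.
  def sum([head | tail]), do: head + sum(tail)
end

MyList.sum([1, 2, 3])
#=> 6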
Data structure design becomes so obviously critical with pattern matching. Other languages might be able to highlight performance characteristics of this or that structure over another. In terms of conceptualizing the problem space and the logic flow, however, I found the pattern matching paradigm of Elixir really drove home the importance of how you model the data in your application in ways that OO and imperative languages had not. Perhaps a personal failing or just an obvious lack of formal education on my part, but I think it really is an intrinsic benefit of the language design.
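For instance (a made-up sketch, not from any real app), once the data is modelled as tagged maps, the logic flow falls straight out of the function heads:

defmodule Notifier do
  # Each clause matches one shape of event; the data model drives the logic flow.
  def notify(%{channel: :email, address: address, body: body}),
    do: {:ok, "emailing #{address}: #{body}"}

  def notify(%{channel: :sms, number: number, body: body}),
    do: {:ok, "texting #{number}: #{body}"}

  def notify(other),
    do: {:error, {:unknown_event_shape, other}}
end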
State management becomes such an obvious thing with immutable data structures, but in “pure” languages you end up wondering how anything gets done. Elixir really makes this obvious with the use of GenServers and the like to illustrate how to manage and update global state in a way that (mostly) prevents data races and the like. Again, a concept that more formal education may make clear to students, but something that the fundamental design of BEAM languages makes more obvious to the neophyte, imo.
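A minimal sketch of what I mean (illustrative names only): all updates to the state go through the owning GenServer process, one message at a time:

defmodule Counter do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def increment, do: GenServer.call(__MODULE__, :increment)

  @impl true
  def init(initial), do: {:ok, initial}

  # The state is only ever touched here, serially, so there is nothing to race on.
  @impl true
  def handle_call(:increment, _from, count), do: {:reply, count + 1, count + 1}
end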

10 Likes

I strongly believe that functional programming is more “natural”. As someone with zero compsci background who taught themselves to program by flailing around with PHP 3 and 4 with no framework for guidance, I found myself creating different nested data structures to describe my state and, without knowing their names, wanting recursion and even some simple kinds of metaprogramming (I quickly discovered PHP allowed recursion and some very limited “metaprogramming” with “variable variables”). As I already mentioned, after developing and thinking in these concepts, it took me an embarrassingly long time to understand OO (I don’t even wanna say how long, lol). So I believe that if functional concepts are the first thing you learn, there is absolutely nothing strange or hard about them; they are only strange and hard if you are used to a different way of doing things.

I’m assuming that functional programming didn’t prevail for the longest time because it wasn’t that efficient: it would require literally copying entire data structures in order to change them, which was untenable for larger systems. I don’t know my language history that well, though.

12 Likes

I try to stay open in regards to classic OOP, because I have to write it at my current job, but I don’t see in any way how it is a good way to write software, in any use case. I have learned the patterns and the architectures, all of them very abstract about implementation, with everyone implementing them how they think is the right way, only adding more complexity to this pile of already complex code. Usually when I talk to people who have never written FP code, they defend OOP, but I have never seen anyone go the other way around.

I think there are many more things in play when we talk about bad OOP, like side-effects, mutability, and a bad concurrency model; they all very much influence the style in which OOP code is written.

The OO crowd will respond to this by saying it’s because we’re just wanting to stick with the “cool” thing and we’re secretly waiting for OO to be cool again so we can go back to it :joy:

1 Like

This isn’t a problem unique (or even related) to OO. Really this is the problem of implicitness, which is just as present in Elixir’s import as it is in C’s #include. Implicitness is one of those things that’s really nice when you write the code, but can be a PITA when you read the code :slight_smile: Thankfully ctags, and nowadays language servers, make finding function/method/identifier definitions relatively trivial.

1 Like

That’s because the claimed OO languages with their classes and methods are imposters and not actually object oriented.

A famous Joe Armstrong quote seems apt:

The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

Joe provided some pretty good definitions on this forum of why Erlang languages are the only real OO languages.

So I wouldn’t hate too hard on real object oriented programming as that is actually what Elixir on the BEAM is.

4 Likes

While this is all 100% true, when people say “OO” they almost always mean “classes and methods” which is certainly what I mean in this context!

Thanks for moving over the discussion, @benwilson512!

5 Likes

I can see that today the tooling probably makes it easier. LSPs weren’t the norm when I was learning to program, so you would have something like this:

# a.py
class A:
    def abc(self):
        self.thing = []

# b.py
from a import A

class B(A):
    def cde(self):
        self.do_stuff()

    def do_stuff(self):
        pass  # etc etc

# c.py
from b import B

class C(B):
    def xyz(self):
        self.abc()

All in different files and with a lot more complexity. So I would ask the question of “where did the abc method come from?” and I knew enough to understand that it probably came from a parent class, but then I’d go up to the parent class and … surprise! it inherits from another class or maybe even multiple classes. Then it was juggling between different files trying to find where it came from. Sometimes I would search for it and never solve the mystery of where self.abc() came from. It was probably inherited from a class from some other library.

So yes, part of it is implicitness, but this is a sort of implicitness that was really hard to reason about. It wasn’t simply a matter of checking the imports. I don’t think the implicitness seen in Elixir is comparable. With most imports, you know exactly where something is coming from (and it’s very common to reference a module directly rather than importing it - I rarely write statements like import Enum, warn: false or import Enum, only: [map: 2]). You certainly don’t have a situation where a self.abc() can modify your %C{} struct without you explicitly assigning the value it returns to a new variable.

1 Like

LSPs weren’t the norm unless you learnt very recently, but your example situation, where there’s no overridden method, is trivial for ctags, and if that wasn’t around when you were learning to program then (like me) you just used grep before riding home on your dinosaur.

I don’t see why you think the “checking the imports” is simple but “checking the base class(es)” isn’t, and re your point that with most imports you know exactly where something is coming from - I’d argue that with most inheritance/delegation you do too.

Your point about mutability is of course true and a significant difference, but nothing to do with knowing where methods are defined.

For me it’s not so much about it being easy; it’s about how disruptive it is to jump between files. Though I recently discovered a few keyboard shortcuts in NeoVim that allow you not only to jump to a definition but also to jump back afterwards. Pretty nice!

Many of the efficiency issues of immutability were manageable; after all, functional programming dates from the 1950s.

I think OO was seen as something different, offering some kind of promise of better software faster, because it offered new concepts of “encapsulation”, “inheritance”, and “polymorphism” which were seen as a potential saviour for the industry.

This left “functional” most likely being seen as academic, mathematical, and potentially also conflated with the problems of that “evil” procedural “spaghetti code”, since superficially that makes use of “functions” too, right?

In the end, the “OO languages” created the “ravioli code” we have today and did not implement the actual definition of OO as described (or hoped) by Alan Kay.

The “OO languages” we have are non-deterministic (there is no guarantee of getting the same output from the same inputs), and much harder to test.

One is left with a complex global graph of mutable object state tucked behind “methods”, with that state scattered to the wind in a single address space, with the execution model almost an afterthought.

With concurrency grafted on, you have a mess of threads, critical sections, mutex locks, semaphores and condition variables with many opportunities for deadlocks, no ability to actually clean up for every eventuality, as well as dealing with thundering herds of threads queuing up in the oddest of places (every mutex lock or critical section is actually an unintended queue).

How do you program defensively or even reason about it when the whole graph is abuzz with parallel changes and any thread could at any moment do a rug pull from under your feet? You use locks, and now any read on that graph of objects creates a stop-work meeting and often results in a cascade of object interactions, more locks and even more deadlock potential. When considering exceptions, none of the threads can actually clean up properly in all circumstances, leaving a debris field and odd state in that global graph. It is all about coding the sad defensive path in this OO world, writing more code and doing less, but gee, with static types you might catch some typos in the many times more code you must write but can’t adequately test.

There is also a runtime cost, shared data and locks actually lock the bus across CPU cores, and hurt cache effectiveness. We all know about “stop the world” GC and how that impacts the 99th percentile.

The BEAM avoids this hell.

As a recent prominent example of what “OO languages” beget, the Java crowd inflicted the log4j security vulnerability on the world which has cost companies in aggregate billions of dollars. What was the root cause you ask? A thread race condition due to fundamentally bad language design, a problem which goes like this:

The barman asks what they want.
Two threads walk into a bar.

The “OO languages” really are what Joe said, touch the banana and this sets off a cascade of side effects where the entire jungle moves.

The attack surface in the log4j vulnerability is vast and easy to exploit. Attacks like this essentially attack the fundamental issues at the core of most language designs that never considered the interplay of the language and processing model for concurrency together. I expect to see many more of these programming model attacks based on exploiting race conditions.

So how do you not end up with race conditions and deadlocks?

You avoid concurrency altogether in most languages and use heavyweight OS processes (often disguised as containers) to keep that state isolated within an OS process boundary, turning it into an operations problem.

Ultimately the truth of the Erlang process model of concurrency and isolation prevails, albeit an impoverished analogue of it, with many times the operational cost, recovery impact and recovery time when you do have a failure.

In my view the “OO languages” have proven to be a programming and operations tarpit. They looked like a nice shimmering sea of opportunity for a desperate software community, but like all tarpits, once you’re deep in it, it’s hard to see a way out. Once you’re in the pit and “flailing about”, the solution looks like “better tools”, and this attracts “investment” by tool vendors, as there is good money in prolonging a problem, which in turn attracts even more victims.

The “tools problem” compensates for and hides bad language design, which has increasingly been pushed into an operations domain with increased complexity through things like Kubernetes clusters for scalability and resilience of “OO language” programs. The burden of resilience and recovery disappears from the programmer and reappears in operations with orders of magnitude more cost. Those that don’t get Elixir, Erlang or the BEAM probably don’t understand the real problem. They don’t even perceive the problem to exist because of “best practices”, but the vendors and hosting providers do understand, and are waiting there smiling and ready to “help” ensure you consume as much as possible using “all of the best practice complexity you can afford” to retrospectively “solve it”.

Soon one needs to resource an ops team capable of maintaining the “cloud on cloud” cluster, “service mesh”, etc., more technologies that no one can be an expert in all of, but hey, it’s all hipster and, well, “all the [OO] guys are doing it, Google loves Kubernetes, and RedHat loves OpenShift too”.

One of the red flags of a language/ecosystem that never solved the hard fundamental problems is that it ends up requiring a large set of disparate technology tools (footprint), and lots of people (layers of managers) to compensate for the operational problems it creates. These are usually “mainstream” technologies that generate a lot of revenue, and like lemmings corporates walk to their death willingly, because everyone else is doing it too.

This is where the significant competitive advantage of BEAM languages comes from for those that are savvy enough to understand and do actual risk management.
Less code, better testability, higher quality, lower operational complexity and a smaller technology footprint = fewer vendors and fewer SMEs needed to build, understand and manage it.

Many decades on, some people are beginning to question their assumptions. This is why there are some significant examples demonstrating that success was no accident, with services like WhatsApp, Discord, Pinterest, and the UK health spine all using BEAM languages for competitive advantage.

I can posit that Jose probably reached a point working with Rails where something had to change, there just had to be a better way… and thankfully he identified and built a solution, and has been a huge mover in bringing the functional language paradigm and the true intent of OO into the mainstream, all by building on the shoulders of giants like Joe Armstrong, who reasoned about and solved the fundamental problems that needed solving.

Alan Kay has said “Erlang might be the only object oriented language”. Coming from the person who defined “object oriented” I think that says it all, we have the best of both.

7 Likes

Great post!

Yeah, I’ve noticed this too. I had an interview for a Golang position a few years ago, where they told me that they use Kubernetes for fault tolerance and scaling. When they asked what I used at my job, I just said I didn’t have to use anything, the VM was handling all of that for me; they looked at me like I was the last idiot on earth…

I think the concept of what we call bad OOP comes down to this: encapsulation shakes hands with mutability and side-effects.

What I find more baffling than all of this is how errors are treated as separate entities. A typical example:

import java.io.File;
import java.io.IOException;

public class CreateFile {
    public static void main(String[] args) {
        try {
            File myObj = new File("filename.txt");
            if (myObj.createNewFile()) {
                System.out.println("File created: " + myObj.getName());
            } else {
                System.out.println("File already exists.");
            }
        } catch (IOException e) {
            System.out.println("An error occurred.");
            e.printStackTrace();
        }
    }
}

This approach enforces defensive programming, somewhat arbitrarily groups multiple pieces of error handling together, and introduces a completely different kind of result from the function, not to mention that it reminds me of goto-style programming. My only explanation for this implementation is that it is either an afterthought or an actual limitation of the type system, because you need polymorphic types that can hold different values, like the Ok and Error we use by convention both in Elixir and in other languages.
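In contrast, the Elixir convention the post mentions treats the error as an ordinary return value, something along these lines:

# Errors are plain values returned by the function, matched like any other data.
case File.read("filename.txt") do
  {:ok, contents} -> IO.puts("Read #{byte_size(contents)} bytes")
  {:error, reason} -> IO.puts("Could not read file: #{inspect(reason)}")
end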

What is strange to me is the fact that Haskell was released before languages like Java; yes, Haskell is more focused on academic use, but a lot of the concepts that were whacked together elsewhere as an afterthought had already been implemented correctly before.

So, to articulate it correctly: it is the inherently bad design of the features around OOP that makes these languages complex and hard to learn, not only the actual OOP-specific features like inheritance, etc.

2 Likes

Your post is excellent but IMO the sarcasm about statically strongly typed languages detracts from it:

You and @sodapopcan (to whom I already gave an example of what they bring to the table :stuck_out_tongue: but he still seems to wonder what they are good for) seem to underestimate the ability to refactor via a checklist that the compiler is literally handing you, as opposed to crudely searching for certain texts in the codebase and assuming that’s enough – and to be fair to all sides it usually is, which makes the cases where it is not enough all the more dangerous and confusing (some codebases pass modules around and use them dynamically, others use protocols, yet others use apply, and let’s not even mention those that import or alias stuff so it’s not trivial to find even for an LSP sometimes, etc.).

That, plus the ability to strongly statically type and enforce data structures in important places – like configs, a good amount of which are a trap in Erlang land, a mish-mash of nested keyword lists, plain lists and various tuples – actually saves time. I already gave an example to @sodapopcan about HTTPoison / hackney and a few others whose SSL configs, for instance, are one of the things I’ve seen people get wrong no less than 50 times in the 7 years and a few months I’ve been contracting with Elixir.

Also, you’ll need to clarify the “can’t adequately test” part, because after years of writing Golang and Rust I am just not seeing that to be true.


So I propose: let’s argue OO vs. FP without sarcastic jabs at the expense of static strongly typed languages because (1) it’s a different discussion entirely and we’re muddying the topic and (2) I don’t feel that most detractors have made an effort at understanding “the other side”.

I’ve been on both sides, and while I agree a static strong typing system won’t add much value to Elixir in particular, because pattern matching already allows you to assert both on the shape of data and on the types of [certain parts of] it, I would still posit that a better typing system will reduce certain commonly occurring WTFs per minute, and will especially remove a whole class of deployment problems where you have to make a release (or force-rerun a CI/CD action) just to curse “frak, where is that keyword list value supposed to go in the config of this library exactly?”.

Yes, shakes hands, and then some.

BEAM languages also enable a form of “mutability” for process state (these processes are basically the BEAM’s OO objects) through recursion, by passing the new state into a tail call within whatever function is performing the process “receive loop” or “run loop”. Note that we don’t actually have loops in functional languages, only recursion; code is always “moving forward”, never backward, so loops or backward branches are impossible in valid Erlang bytecode.
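A bare-bones sketch of such a receive loop (names invented for illustration):

defmodule CounterLoop do
  def start(initial), do: spawn(fn -> loop(initial) end)

  # The "new state" is simply the argument passed to the tail call.
  defp loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end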

This approach has other benefits at the VM level for safe preemption (soft realtime) and guarantees of always terminating and cleaning up process state, which other languages and VMs can never get. Even when other languages try to adopt the Erlang actor model, they can’t preempt safely to get Erlang’s deterministic response times; they can only flag well-behaved code to please be nice and yield, please be nice and die. But a thread can be running any code anywhere, as state is just floating about: the thread may be in the middle of updating a shared data structure, or using log4j, or stuck in a hard loop, and it can never safely yield or be killed by the VM, with the only way out at that point being a hard restart of the OS server process.

The difference with the BEAM vs other languages is that in other languages objects are divorced from the execution context. Whilst in one sense a traditional OO object may be “owned” through an object relationship (e.g. a manager object), it is not owned by a single execution context. Objects “float around” for use by an execution context (e.g. a thread), often many threads at a time, as program state is not bound to threads, which are usually OS based if you want actual parallelism and use of all cores. This thread and object promiscuity often leads to race conditions and faulty state management that spreads through the program state, often a kind of creeping death that eventually results in a crash or misbehaviour much later on, because there is just no way to clean up and recover from faults in a pragmatic way. This is what Joe Armstrong’s research and thesis pioneered:

“The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang.”

My view is that unless you believe you and your team have zero defects, infinite sigma quality, 100% perfect, the BEAM really is for you and your users.

In BEAM languages the objects exist with their lifecycle bound to the execution context; the object and its state cannot be mutated by all and sundry in a non-serialised way. Only the owning process has the state, and only it evolves the process/object state, in a serialised manner. In other languages it is left up to the programmer to solve the hard synchronisation problems, and they can’t.

Exception handling was tacked onto C++ as an afterthought. It was seen as a necessity because of the horrible C++ constructor approach: an error in the constructor was nasty, and objects had to initialise their superclasses outside the constructor body. This led to two-phase object construction, where it was bad to do anything that might fail.

The other motive for exceptions was the coding effort involved in detecting errors and propagating them “up” to another code block, method or object, which does not have the means to do much about it either without violating all of the encapsulation and understanding what else another object may have done before the exception.

Almost any operator could raise an exception in C++ and the nightmare of exceptions in exception handlers was soon realised.

They had a similar idea to “let it crash” with “let it throw”, but in reality every code path had to be concerned with cleaning up, and cleaning up differently depending on where and what exception occurred when. Because the state is divorced from the execution context, the code must attempt to maintain valid state every step of the way in the face of goto-like exception handlers, whilst other threads may also hit the same pothole or pull the rug out whilst the first thread is in the middle of handling the exception. The combinatorial explosion of root causes in such systems is frightening, and so are the debugging costs.

1 Like

Ah, thanks, I wasn’t sure so I shouldn’t have speculated.

Very nice post, though, thanks!

I’m one of those “full stack” generalists who really enjoys both frontend and backend work. I’ve been more backend focused in my career, but that is more incidental than intentional. I never bought into the Angular/React hype (though I thought JSX was a really good idea from the get-go, but I digress) because it made more sense to me, and still does, to favour the server. I say this because when I was working in Rails, I’d always grab a frontend ticket to avoid doing any work that involved concurrency. When I would have to do it, it was always pretty light stuff, so I’d re-read up on mutexes and semaphores, get the ticket done, then pretty much forget everything I learned by the time I got home. I mean, that’s a bit of a joke, I understand how locks work, but, well, all the things you said are wrong with them, and it never sat right with me (though, yes, I was not trying too hard). Then a few years ago I was working on a more heavily concurrent application built in a Rube Goldberg mishmash of ModernTech™. It was a scheduling application, which is already a pretty tough problem, and it required multiple people to be able to work on the same schedule! That’s around when I started playing with Elixir (though full disclosure, it was LiveView that really got me there) and I was able to prototype a better version of the concurrent part of our app in half a day! TL;DR, your post resonates :slight_smile:

2 Likes

Oh come now, @dimitarvp, you can’t make a jab like that, even in good nature, and not expect me to get dragged back to talking about types :upside_down_face: I absolutely understand the value of types, the only reason I’ve been talking about it so much lately is because this is the first time I’ve felt comfortable talking about it on the internet. Perhaps you were unaware that there are a lot of angry software developers out there :sweat_smile: If you say “I’m ok with dynamic typing because I’ve never really felt any pain” on somewhere like Hacker News, people respond in the extreme to the point that it feels like they’re saying: “Oh, so I guess you also support murdering children, because that’s no worse than not using types.” So I’ve generally stayed out of it.

I’ve also never been able to quite articulate what I enjoy about writing in dynamic languages and a lot of the arguments in favour of dynamic typing I’ve come across online have been really weak. That is, until a recent thread on this forum where there were suddenly a bunch of actual intelligent arguments in favour of it! So that has felt good so I’ve been a bit noisy about it lately. Also the fact that I came to Elixir happy that it was dynamic, and now that is maybe going to change (though I know it’ll be a while).

Do I think string replacement is a better experience than simply hitting “refactor” and filling out some fields? No, of course not. But the verbose way I write code (which I would still want to do even if I had types) makes renames quite fast—I can be finished with a few vim commands, largely thanks to abolish.vim. It’s certainly not as fast as a refactor button, but still pretty fast. I’ve also already said that I find being able to type structs useful!

2 Likes

Haha :003: Okay, fair! Be the kid in the candy store! :heart:

That’s kind of my point, I want us to avoid falling into camps where neither side feels the need to explain well what they like and dislike about their chosen tech while looking down on the other side, and feeling all high and mighty about it. This forum does not stand for that so if I detect a trace of that I call it out. I hope that’s seen as fair as well. Sorry if I projected, I am mostly a pessimist.

On topic, Elixir draws a ton of value from:

  • Pattern-matching;
  • Guards.

Combining these two can nearly eliminate the need for strong types. The problem, however, is that MANY apps and libraries don’t utilise these well enough to properly parse / validate their inputs, so we’re back to the eternal dilemma of “if you don’t make something mandatory, people will always skip it”.
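Something like this sketch (illustrative names, not from any real library) is what I mean by using both to parse/validate inputs at the boundary:

defmodule Prices do
  # Pattern matching asserts the shape; guards assert the types and ranges.
  def parse(%{"amount" => amount, "currency" => currency})
      when is_integer(amount) and amount >= 0 and currency in ["USD", "EUR"] do
    {:ok, {amount, currency}}
  end

  def parse(other), do: {:error, {:invalid_price, other}}
end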

The balance I found when working with Elixir was to not try-hard everywhere; like, I don’t feel the need to add @spec-s to 99% of the Phoenix files and modules; this stuff is well-known and if you make a mistake you’ll find out about it pretty damn soon.

At the other end of the spectrum, though, you have 3rd party API clients, where you have to be fairly thorough: make sure you get the right HTTP code, recognise HTTP 429 and back off, make sure the string format you are forced to parse adheres to what you already know (so your mini-parser had better be coded with the facility to bail early and give a detailed explanation as to why), make sure you have a mechanism to retry 3 times, and have the custom structs that get filled after parsing/validating the external data be well-typed, with good constructor function(s), etc.
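Roughly this kind of shape, sketched with a hypothetical http_get/1 helper standing in for whatever HTTP client you actually use:

defmodule ApiClient do
  @max_attempts 3

  def fetch(url, attempt \\ 1)

  def fetch(_url, attempt) when attempt > @max_attempts, do: {:error, :retries_exhausted}

  def fetch(url, attempt) do
    case http_get(url) do
      {:ok, 200, body} ->
        parse(body)

      {:ok, 429, _body} ->
        # Rate limited: back off and retry, up to @max_attempts times.
        Process.sleep(attempt * 1_000)
        fetch(url, attempt + 1)

      {:ok, status, _body} ->
        {:error, {:unexpected_status, status}}

      {:error, reason} ->
        {:error, reason}
    end
  end

  # Hypothetical stand-ins, not any real library's API.
  defp http_get(_url), do: {:ok, 200, "{}"}
  defp parse(body), do: {:ok, body}
end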

I was on projects where we had something like 10% coverage of @spec-s and we encountered 3 runtime errors in 6 months: 2 of them were because of a junior being onboarded, and the third was because our upstream API provider figured they’d sneak in a change without telling anyone. We called them out, they fixed their docs, and the bug was fixed by my PR something like 40 minutes after their email.

Do strongly statically typed languages help here? Not much, they might not even help at all. At best you’d just make a good parser and serializer using Rust’s amazing serde library/ecosystem and that’s it. You still have to write basically 99% the same code at runtime.

So yeah, Elixir is in a perfect spot: you can opt in to some stricter checks where you truly feel they are needed, and leave the rest alone. And even though I advocate for strong static typing I still write 80% of everything I do in Elixir (though that figure was 90% not that long ago, 2-3 months).

1 Like

I think we also have to make a clear distinction when we talk about types here. While types in languages like Java and Rust are both static types, they are inherently different in their core principles; I could even argue that we are talking about two completely different typing systems from a logical point of view.

There are several reasons I see from my current standpoint:

  1. Types in languages that implemented a correct type system never mix behavior with data - typing is strictly used to enforce a contract. Languages from the Java category use types for classes too, which enforces a contract over something that contains data as well as behavior, moving to an entirely different concept (see the sketch after this list).
  2. Mixing behavior and data leads to implementation details leaking into the type system - this is most probably where all the posts about hard-to-test code come from. It leads to the creation of multiple detached classes in the process, increasing the code size dramatically, and this is exactly the reason you have to use all those design patterns in those OOP languages.
  3. Complex tools to deal with the problem - the tools additionally introduced to support these typed classes, like inheritance, inherently :joy: introduce complexity, because they solve a problem that would never have needed solving in the first place if the concept had been constrained correctly.
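To illustrate point 1 in Elixir terms (a hedged sketch with names of my own): the type describes only the shape of the data, and behaviour is just plain functions elsewhere.

defmodule Invoice do
  # A pure data contract: no behaviour is attached to the type.
  defstruct amount_cents: 0, currency: "USD"
  @type t :: %__MODULE__{amount_cents: non_neg_integer(), currency: String.t()}
end

defmodule Invoicing do
  # Behaviour is an ordinary function over that data, living elsewhere.
  @spec total([Invoice.t()]) :: non_neg_integer()
  def total(invoices), do: invoices |> Enum.map(& &1.amount_cents) |> Enum.sum()
end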
3 Likes

Yeah, strong agree here: Rust does OOP much better than Java in many ways, but mostly in those you enumerated: it doesn’t mix behavior and data that much (you still have methods, but it’s nowhere near the craziness of inheritance in Java), and you don’t need to use various stuff like factories and dependency injection to deal with deficiencies of the language / the runtime.

1 Like

The Rust and Golang methods (if they can be called that) are actually equivalent to static helper methods in languages like Java (or extension methods in C#): static functions that receive a reference to the base data structure as a parameter. IMO they are more syntactic sugar for people used to writing OOP code; the implementation is a pure function.
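The Elixir analogue (my illustration) makes the same point: the “receiver” is just the first argument of a plain function, and the pipe gives it the familiar method-call feel.

defmodule Circle do
  defstruct radius: 1.0

  # The "method" is only a pure function taking the data structure as its first argument.
  def area(%Circle{radius: r}), do: :math.pi() * r * r
end

%Circle{radius: 2.0} |> Circle.area()
#=> 12.566370614359172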