Nice read about OOP (with historical citations from Alan Kay, for example):

Object Oriented Programming is an expensive disaster which must end

Person A: “No Scotsman ever steals.”

Person B: “I know of a Scotsman who stole.”

Person A: “No True Scotsman would ever steal.”

Person A is thus protected from the harmful effects of new information. New information is dangerous, as it might cause someone to change their mind. New information can be rendered safe simply by declaring it to be invalid. Person A believes all Scotsmen are brave and honorable, and you cannot convince them otherwise, for any counter-example you bring up is of some degraded Untrue Scotsman, which has no bearing on whatever they think of True Scotsmen. And this is my experience whenever I argue against Object Oriented Programming (OOP)…


Whoa, that was a long read. I read through it to the end (okay, skimmed here and there) and gained so many insights. It kind of justified my choice of learning an FP language :smile: Thanks for sharing!

Used Java? Interested in Haskell? Then this could be an interesting read for you as well:
EWD is a source of a lot of interesting documents.


I have read the article. It indeed was a very long read.

It is a good article, although I also took the time to read some of the responses to it on various other media (Hacker News, Reddit), where some people have very strong opinions about it. It is very true that the article seems, from time to time, to use the ‘No True Scotsman’ fallacy itself to drive its point home.

I think the main message that is conveyed in the article however is the following important one:

  • Object-Oriented languages are fragmented; nobody really is sure what the fundamental features of an OOP language are.
  • Many of the things that are called features of certain OOP-languages actually exist in other environments as well (such as in certain Functional Programming languages), and arguably are more usable there.
  • OOP has a lot of design patterns that were only invented to solve problems that were caused by OOP itself. There is a whole industry now to ‘teach you how to shoot yourself in the foot the least’.

An example: Inheritance was lauded as ‘the feature that solves everything’. Now, however, we are told that inheritance is bad (hard to keep the ‘full picture’, hard to mold everything in an absolute hierarchy, slow compile times, etc); composition should be used instead. Composition has been there from the beginning in FP, and is the way all problems are solved.
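As a rough illustration of the composition-over-inheritance point above (Engine and Car are invented toy types, not from the article): the Car simply holds an Engine rather than deriving from anything, so either side can change without breaking a hierarchy.

```cpp
#include <string>

// Composition: Car *has an* Engine instead of *being* part of an Engine hierarchy.
// All names here (Engine, Car, describe) are made up for illustration.
struct Engine {
    int horsepower;
    std::string describe() const { return std::to_string(horsepower) + " hp"; }
};

struct Car {
    Engine engine;   // swap in any Engine without touching Car's interface
    std::string describe() const { return "car with " + engine.describe(); }
};
```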

The article is very long, and was already written two years ago. I wonder how much has changed during that time? I also really want to refrain from attacking anything; I’ve seen OOP-(blub)programmers talk about how the article is trash, and FP-(blub)programmers talk about how the article hits the nail right on its head. I don’t know how objective the article is, or how objective the responses of these people are or what their frames of reference are.

I do know that I personally like functional programming, and that it (and with ‘it’ I mean mostly the immutability feature of FP) seems to solve concurrency a lot better than OOP’s mutex and lock based systems. I also have worked with Java, Ruby, Elixir and Haskell projects, and seen that the functional-programming projects remain a whole lot more readable after half a year of hacking has passed.

I therefore like Functional Programming more than I do OOP. And this is exactly the reason I decided to learn C++: It is outside my comfort zone, so it will hopefully increase my frame of reference.

My two cents. :slight_smile:


Well, you can easily do functional programming in C++. The standard library is already more functional than OOP (though not really immutable), and there are tons of libraries for it. ^.^

Let me first put a little disclaimer here that I am still very much in the process of learning C++, and there are many features I haven’t touched at all so far. Most importantly, I haven’t touched the Template system, and I haven’t touched STL or lambdas.

I totally agree with you that you can, to some extent, write code in C++ that behaves a little bit in a functional way (at least in C++11 and beyond; before that, e.g. lambdas did not exist). The language has come a long way from ‘C with Classes’.

Here are the things in my opinion it does right:

  • Starting with C++11 we have lambdas! Wooh!
  • We can probably use namespaces instead of classes to group functions together. (But who does that?)
  • The Template system allows you to write functions that accept types that you did not conceive of yet. (Up to a certain level, create ‘protocols’)
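Those three points can be sketched in a few lines (math_utils, square, and twice are invented names for illustration):

```cpp
#include <string>

namespace math_utils {             // a namespace, not a class, to group functions

    // C++11 lambda, stored in a namespace-scope variable
    auto square = [](int x) { return x * x; };

    // template: works for any T supporting operator+ (int, double, std::string, ...)
    template <typename T>
    T twice(T value) { return value + value; }
}
```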

Here, on the other hand, are the things I find to be lacking somewhat in the language thus far:

  • Functions are not really ‘first class’: C++ has function pointers, function objects and now also lambdas. (This is somewhat better in newer versions, where there is some STL-type that allows you to pass any of them.)
  • C++ calls methods that can clearly alter state ‘functions’, while also, confusingly, calling function objects ‘Functors’. This is miscommunication waiting to happen.
  • You are expected to do all memory-management yourself, which usually includes doing a lot of mutating of stuff. (Copy Constructors, Move Constructors, Operator Assign, Operator Index with proxy objects to differ between reading/writing, etc.)
  • C++ uses nominative typing instead of structural typing.
  • It is impossible in C++ to do polymorphism on the return result of a function.
  • There is a difference in C++ between primitive types and user-defined types (e.g. structs/classes) and how they should be treated.
  • Want to make your object usable as something else? No problem. Just override Operator Type, and it magically works!.. until it doesn’t.
  • And last but definitely not least: C++ is a (the?) language of leaky abstractions. It is very hard to write a simple layer of abstraction for your program, because you have to keep so many extra details in mind (are you dealing with primitives or objects? lvalues vs rvalues? memory management? pointers? constness? conflicting naming schemes used by libraries? Oh, and don’t forget not to break interoperability with this-and-that C interface).

Also, something that I am wondering about which does not fit in the above list, but does have me confused: you cannot ‘monkeypatch’ existing classes, but it seems you can indefinitely add more methods (functions) to them. Why? Am I misunderstanding something here?

But let’s get back on subject: It is definitely interesting to attempt to use a functional style in C++ (and I will probably greatly annoy my teachers in the future by doing so :stuck_out_tongue_winking_eye: ). If you have tips for me to make this as painless and simple as possible, I am all ears.


I read the Effective Java book (maybe not in full) from this article, and then I realized how broken this language is :slight_smile:

1 Like

Easily emulatable though; I used the Boost Phoenix library for that before C++11. Something like:

auto f = [](int a, int b){ return a + b; };

In Phoenix would be:

BOOST_AUTO f = arg1 + arg2;

Except where C++11’s lambda was monomorphic, Phoenix’s was polymorphic (polymorphic lambdas did not come to C++ until C++14, I think), meaning that it can operate over any types that fulfill the access pattern within. That is, the C++11 f(1,2) would return 3, and the same with Phoenix’s, but with Phoenix’s you could also do f("Hello ", "world!") and get back “Hello world!”. In Phoenix you could enforce types if you wanted, but there was no need most of the time. Phoenix used expression templates to encode a special ‘type’ that held no data (unless you bound in surrounding context, in which case it would hold that context data; if you did not, it was a 0-byte sized type) where the type was based on the access within. It was intensely powerful and I still prefer it over C++11 lambdas in many cases (though I use C++11 lambdas exclusively now, since they are a language feature and not a library).
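For comparison, a C++14 generic lambda gives roughly the polymorphism described above. (Note that the string case needs std::string operands; adding two raw string literals would not compile.)

```cpp
#include <string>

// C++14 generic lambda: the language-level analogue of Phoenix's
// polymorphic lambda. One definition, usable at many types.
auto f = [](auto a, auto b) { return a + b; };

// f(1, 2) works over ints, and f over std::strings concatenates them.
```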

We had those before C++11 in many forms; Boost.Phoenix was my favorite, but there were others too. ^.^

Uh, many many people, including I. I rarely touch classes at all.

That is how Phoenix worked, for example; they just need to fulfill a certain ‘contract’ (that is what it is called in C++, not a ‘protocol’). :slight_smile:

It has first-class function pointers; however, function objects, lambdas, etc. are different, and there is an STL type that can bind them all as one thing (std::function<>). Even before that it was fairly trivial, and Boost had boost.function too, which compiled down to the most basic machine instructions (std::function copied Boost’s almost precisely). Everything C++ has now had equivalents before; the standard library just includes a lot of it built in now.

What it calls a functor is just a struct with an operator() function bound to it, nothing special there. And a ‘function’ is anything that is called via the assembly ‘call’ instruction (or equivalent, depending on the CPU being compiled for). ‘Functors’ were only a hack to emulate lambdas, no one should be using them anymore (although you could do polymorphic style lambdas with them, they are what phoenix used behind the scenes).
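A small sketch of that: a hand-written functor (a struct with operator()), a plain function, and a lambda, all stored behind the one std::function signature. (Adder and add_one are invented names for illustration.)

```cpp
#include <functional>

// A 'functor' is just a struct with operator() bound to it.
struct Adder {
    int base;
    int operator()(int x) const { return base + x; }
};

int add_one(int x) { return x + 1; }

// std::function<int(int)> erases the difference between all three callables:
std::function<int(int)> from_functor = Adder{10};
std::function<int(int)> from_pointer = &add_one;
std::function<int(int)> from_lambda  = [](int x) { return x * 2; };
```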

Which I personally prefer: I know exactly what is being done and where, unlike in GC languages where it is more… unexpected. It is an absolute necessity for determinism. And that mutating of stuff is just programming style; I always tried to minimize it myself. I’ve been a fan of immutability for a long, long time in a variety of languages; I’m very glad that Rust defaults to immutable, but even in C++ I like to ‘const’ everything.

And on an aside, yes manual memory management sucks, but if you are manually managing memory in most C++ work then you likely have a bug waiting to happen. I’d been programming in a very Rust style of ownership semantics (using Boost types) since long before Rust was ever thought of. If you have a ‘new’ in your program, you probably have a bug, but ‘new’ is still absolutely required when you need detailed control, such as writing kernel drivers, but in user code it should never be explicitly used, rather either owned pointers or shared pointers or a managed container (like std::vector) should ever be used. There is a reason that C++ should not be a language for most people, it is designed for efficiency of execution, not of programming, those two things are often at odds and I would rarely recommend C++ for anything where performance was not an utter and absolute necessity.
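A minimal sketch of that ownership style, with no explicit new or delete anywhere (Widget, make_widget, and count_widgets are invented names; std::make_unique requires C++14):

```cpp
#include <memory>
#include <vector>

struct Widget { int id; };

// Owned pointer: freed automatically when the owner goes out of scope.
std::unique_ptr<Widget> make_widget(int id) {
    return std::make_unique<Widget>(Widget{id});
}

int count_widgets() {
    std::vector<std::unique_ptr<Widget>> widgets;  // managed container owns everything
    widgets.push_back(make_widget(1));
    widgets.push_back(make_widget(2));
    return static_cast<int>(widgets.size());
}   // all Widgets destroyed here, with no delete in sight
```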

Absolutely so; structural typing to me is just bugs waiting to happen. Just because two random data types have the same structure does not imply that they could be used the same way at all; think of a Matrix<float,1,4> compared to a Quaternion: same internal structure, absolutely different things that you should never even accidentally use in the wrong context. OCaml uses nominative typing everywhere except for structs, where it uses row typing (similar to structural typing but only on parts); that is how it should be done, in my opinion.
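The Matrix-vs-Quaternion point can be sketched like this (RowVector4 and Quaternion here are minimal invented types, not a real linear-algebra library):

```cpp
// Same memory layout, distinct nominal types.
struct RowVector4 { float v[4]; };
struct Quaternion { float v[4]; };

float norm_squared(const Quaternion& q) {
    return q.v[0]*q.v[0] + q.v[1]*q.v[1] + q.v[2]*q.v[2] + q.v[3]*q.v[3];
}

// norm_squared(RowVector4{...}) would not compile: identical layout,
// but nominally a different type, so the mix-up is caught at compile time.
```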

Huh? Can you elaborate? Are you referencing returning a different type based on the arguments (simple to do, templates), or ‘choosing’ a function overload based on a return type (also easy to do via templates or just passing the return container as a reference or passing back a proxy that can cast itself to whatever it is being assigned to, all of which is simple to do).
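One hedged sketch of the ‘proxy that can cast itself’ idea mentioned above (AnswerProxy and answer are invented names): which conversion operator fires depends on the type the caller assigns the result to.

```cpp
#include <string>

// The callee returns a proxy; the caller's target type picks the conversion.
struct AnswerProxy {
    int value;
    operator int() const { return value; }
    operator std::string() const { return std::to_string(value); }
};

AnswerProxy answer() { return AnswerProxy{42}; }

// int n = answer();          // picks operator int
// std::string s = answer();  // picks operator std::string
```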

Eh not really? They are both just a set order and usage of bits in memory. All can go through function overloading, operator overloading, etc… etc… Not sure what you are referencing?

The rules for how operator casting works are very well defined in the language (admittedly a very large rule set) so no, it does not ‘magically work until it doesn’t’, it follows the rules explicitly and precisely, it is very well defined and expected. It is not something that I tend to use myself however; it is more used by OOP’ers to work around the design faults of OOP or to simplify getter code as stupid as that is.

No difference between primitives and objects; they are all just bytes in memory. Lvalues vs rvalues is something to help you not do something stupid: an lvalue is something you can assign to, an rvalue is something that, if you assigned to it, would just vanish unused anyway, which is often a bug.

Memory management is just like handling any other kind of resource, and this is one of my big BIG BIG issues with GC languages: they have a GC to automatically (and slowly) manage memory, but they do not manage any other resource, such as files, networking ports and other I/O in general, locks, synchronization of any form, GPU memory bindings, etc. C++'s RAII handles all of that properly: ownership objects for single-point ownership, shared objects for multi-point ownership, and they work for any resource type, from memory to GPU memory to files to network sockets to mutexes and semaphores. Rust does this right in that it makes RAII enforced; no GC is needed, it works for any resource, not just RAM, and without the significant overhead (in both speed and determinism; the loss of determinism is the big thing for me) of GCs.

And what is wrong with pointers? Doing math on pointers should be an opt-in compiler warning in my opinion, but there is nothing wrong with pointers; just do not ever new or delete with them. If you have a pointer to an object that might vanish later then you will have a weak pointer (which forces you to test whether it is valid before you can access it); if you have a pointer that is always valid then it is a reference, etc.

What about constness? I love constness! I use it almost excessively. ^.^ And with libraries you will always have conflicting naming schemes; I am unsure how this is C++-related, as I see the problem in Elixir at times, and in every language. And what about interoperability with C interfaces? C interfaces are well defined in C++; if you do not follow them then your code cannot be used from C, simple as that.
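As one concrete instance of RAII managing a non-memory resource, std::lock_guard (a standard-library type since C++11) ties a mutex's lifetime to a scope: acquired on construction, released on scope exit, even if the guarded code throws.

```cpp
#include <mutex>

std::mutex counter_mutex;
int counter = 0;

void increment() {
    std::lock_guard<std::mutex> guard(counter_mutex); // locked here
    ++counter;
}                                                     // unlocked here, guaranteed
```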

Uh, I have no clue what you are talking about. C++ has not even a concept of monkeypatching existing classes (or existing anything for that matter), nor can you add methods to classes once they are defined either. You can always define open functions (and that is what I prefer anyway, defining methods I consider a code smell in many ways) that operate on something (whether a class or struct or whatever) but that is certainly not adding a method to them…

Use logic? C++ is a very easy language that is wonderfully type safe if used well, once you learn the thousands of pages of its definition (which I have done, which I would not recommend doing). You mention teachers, in my experience most teachers teach C++ like it is either C or like it is Java, it is not either and trying to program in it like one of those is invariably and utterly beyond stupid. It is a fantastic low-level type-safe language that lets you drop out of the type safe world into C when necessary (and anyone who encourages you to do so, like using raw pointers, is an utter moron, feel free to point them my way).

However, I would not recommend learning C++ unless:

  • You need performance at any, and I do mean any, cost. C++ itself does not bring anything to the table that other languages do not already; it is just a better assembly and should be thought of as such.
  • Or you are going to hack on existing C++ code, for obvious reasons.

If you want a fast-enough low-level language (faster than Java, etc.) that is significantly better designed, then:

  • Use OCaml: It is a fully typed, wonderful language that often compiles to within a single order of magnitude of C++. The one thing that could hurt you is the fact that it has a GC; the determinism loss is usually not an issue in most things, but when it does become an issue then, well, the GC is a huge issue. It also lacks a concept of RAII for other resources, so those have to be manually managed, as in most GC languages.
  • Use Rust: Rust is really C++ done right. It lacks the templates and high-end metaprogramming that C++ has (though that should be coming someday, but not today), but depending on the code you write it either compiles to within an order of magnitude of the speed of C++ or even faster than C++ (due to the lack of aliasing in its borrow system, which is beyond fantastic). In time Rust could easily replace all uses of C++, so if you want to learn a fast language I would highly recommend Rust over C++, and if you need to learn C++ later then use the style you use in Rust in C++.

Wow. OvermindDL1, thank you for that amazing in-depth reply.

It indeed is very obvious that I still am a novice in the C++ language. I would love to learn more about the functional style you are using.

In the course I am following, we first learn the basics (which includes raw memory management, pointer management, etc) and we will continue on to learn about all the other wonderful features. I fully expect that many of the things that feel like a necessity to me right now, turn out to be something that is only used in very extreme special circumstances.

Some elaboration on some of the questions you asked:

It might be the case that templates can be used for this; I do not know. In any case, the strategy of using the first (or, according to some style guides, last) few parameters as ‘output arguments’ by passing things-to-be-mutated as pointers/references seems a bit of a hack to me.
The proxy approach will probably work – I hadn’t thought about that.

Not sure what you are referencing?

I mostly mean the way they are instantiated and passed around: primitives do not really follow RAII (if you do not give them an explicit value they might contain anything), and you use call-by-value on them, whereas doing this for structs/classes is frowned upon because they might be larger (so you use either pointers or references for these instead).
This is probably the kind of thing that feels weird for a greenhorn, but completely obvious and normal for a die-hard language user :smile:

As for the part about monkey-patching: Scratch that. I thought you could still define new functions even when their declarations were missing from the class definition, but this is untrue. Probably an implicit difference between classes and namespaces…

It is definitely true that my two teachers have a tendency (at least thus far) of explaining things from an object-oriented (or, as my teacher put it, object-based) point of view. I really look forward to the next part of the course, where hopefully the exercises are a bit more free in how we pick our implementations: so that instead of ‘make a Matrix class that has a copy constructor’ we get exercises like ‘write a program that solves problem XXX’. Who knows.

Learning Rust is definitely on my Bucket List. I have some experience with Haskell, but I will take some time to look into OCaml.

Again, thank you for your marvelous in-depth reply :heart_eyes: !


Mostly just using const everywhere (which helps the compiler to optimize too) and immutable data structures, the libraries out there for it in C++ are beyond innumerable (or make it yourself for the challenge, it is quite easy to do but illuminating to do too). :slight_smile:

So, they are teaching C, not C++, got it. And yes, all of that is useful for systems programming, but it is highly unlikely that you will be doing systems programming, so almost none of it will be useful. If they teach you RAII, then encode that style into your very being; it is one of the most popular resource-management styles (not just for memory) in any non-GC language (and most GC languages have no destructor semantics, so they cannot support RAII without special scoping like Python or .NET stupidity does, which just adds more work for you to do).

You ‘can’ use templates for this, and if you need absolute speed that is what you do (it can save one function-call overhead cost, not a big deal in almost all things), but the easiest is returning a proxy that can cast itself to a set of known types (or you can template the proxy and return an unknown user-definable type that they define a converter for, but even that can be done without templates with a tagged namespace). And yes, passing the return variable in the inputs is a C’ism; C++ has tuples and tuple deconstruction if you want to return multiple outputs.
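A minimal sketch of the tuple-return approach mentioned above (divmod is an invented example); std::tie unpacks the result on the caller's side:

```cpp
#include <tuple>

// Multiple outputs via std::tuple instead of C-style output parameters.
std::tuple<int, int> divmod(int a, int b) {
    return std::make_tuple(a / b, a % b);
}

// int q, r;
// std::tie(q, r) = divmod(7, 3);   // q == 2, r == 1
```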

Only if you do not ‘construct’ them: you can easily ‘new’ a user-defined type and have it be random memory crap too. You should never ever do that, and you should always default-construct a primitive too. You can use call-by-value on structs/classes with impunity; you just have to know where and when it is useful. Move or forward hoisting can ‘move’ an entire struct, no matter how big, into the function you are calling (or even out of a return!) without copying it. ^.^ There are plenty of times to keep a struct/class as a value and not as allocated heap memory; in the general case do what seems natural, and if you really want to take a potentially large but immutable thing, a const reference tends to be best (in most cases), as that allows the compiler to do automatic forwarding when it optimizes. :slight_smile:
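A hedged sketch of that pass-by-value-plus-move style (Document and add_line are invented names): the struct travels through the call by move, not by copy.

```cpp
#include <string>
#include <utility>
#include <vector>

struct Document { std::vector<std::string> lines; };

// Taking the Document by value on purpose: callers opt into a move.
Document add_line(Document doc, std::string line) {
    doc.lines.push_back(std::move(line));   // move, don't copy, the string
    return doc;                             // moved out on return, not copied
}

// Document d;
// d = add_line(std::move(d), "hello");     // no deep copy along the way
```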

Heh, yep, you can add things to any namespace anywhere (this is vital for tagged namespacing types), but once a class is defined it is defined (work-arounds for that due to old C compat, but if you do those in production code you will be shot, don’t drop into the C world in C++).

If you want to see a matrix library done right in C++, look at Eigen. It is template magic to an extreme but even the most complex matrix transformation and work is inlined at compile-time to be some of the most efficient code you could write, for SSE4 or MMX or whatever the CPU supports. ^.^

For template magic, it is surprisingly readable. A lot of Boost template magic (admittedly an order of magnitude worse in some areas) is like reading an eldritch book: you either go insane or learn to love it. ^.^

If you are a ‘I want to get this done and I want it to work and I want it to work fast’ kind of person, like if you are writing a CPU heavy game, learn Rust, it is a fantastic language, a bit verbose in some areas (still better than C++), but absolutely well made.

Haskell is fantastic to learn to see how functional purity can be perfected, but its compile times can rival even C++'s so be warned. :wink:

OCaml is like Haskell but entirely pragmatic: you are able to (though you shouldn’t) break the typing system if necessary. It compiles down to native code generally on par with C++ (within a single order of magnitude is common; 2x as slow as C++ is average, sometimes faster, sometimes slower, depending on what you do; being anywhere near there is utterly fantastic, at least until you start getting into C++ template magic, which will run circles in speed around almost any other language).

It is a ‘work’ language: unlike Haskell, which was built in the university system, OCaml was built in the workplace (mostly Jane Street and nowadays also Facebook), so it is designed to get stuff done. Instead of HKT’s it has HPT’s, which are a bit more verbose (not bad though, and when implicit modules land in OCaml that wordiness vanishes for most cases) but are more powerful than HKT’s and compile significantly faster.

The OCaml language is designed for compilation efficiency, both in optimized output and in actual compile time. It will trounce Haskell, C++, Rust, Java, .NET, etc. in compiling speed, and its output machine code is still as optimized as C++’s (just without the template magic). It is an amazing design, but the language feels odd at times because of it; it makes sense once you know why. :slight_smile:

It was the weekend and it was fun. I’m an INTJ, so I love debates (even for things that I personally am not for as it still forces me to reason things out). ^.^

1 Like

Just to bring my view on it.

First, a bit of background: my first language was Caml (yes, the one written before OCaml), then embedded C. Then I moved on to the rest.

I personally think that the main problem with “OOP” is that they mixed Liskov’s Abstract Data Types with Simula and Parnas’ hierarchy of programming to build their systems. But by mixing data with the functions associated with it (and mixing their type system into it), they cornered themselves into a dead end where touching one thing, or changing the way things work, breaks the whole system.

Mutexes and locks are not fundamentally OOP, just like Erlang was not really trying to be functional. It just happened that the designers’ goals for what became mainstream OOP were aligned with that type of thinking. (I would say it was bottom-up thinking from hardware/OS limitations, as opposed to a ‘what do we need to build this system properly’ way of looking at it.)

Regarding C++: I see it as a “here is a complete toolkit to do anything you want”. It can support nearly any style, but it tends to require a lot of work and rebuilding of abstractions. Kind of like a Lisp, with more speed but less glue.

I completely agree on Rust being what low level programming should have been earlier. Same type of power but better API and tools.


Precisely, great description of it. :slight_smile:

Some good libraries make it wonderful to work in, but it can indeed be utter hell, especially other peoples code. ^.^

Eh, it still lacks an equivalent to C++'s template system. Its macros help a bit, but do no type-level work. If it added HKT’s or HPT’s then that would fix it. C++ templates can pretend to be either HKT’s/HPT’s and more, although a lot more wordily too. ^.^

1 Like

Another nice read: A reaction I found on
Recognizable; I’m doing a massive refactoring of an overengineered OO code base now,
removing factories, inheritance & subtype polymorphism (ridiculous: “abstract methods”), etc.
Btw, “Object Oriented Programming without Inheritance” from Bjarne Stroustrup is also interesting.

The problem with OO is that it is exactly the opposite of failure: it was immensely successful
(in contrast to the actual benefits it provides).

In the dark days of OO's height of success it was treated almost like a religion by both language 
designers and users.

People thought that the combination of inclusion polymorphism, inheritance, and data hiding was 
such a magical thing that it would solve fundamental problems in the "software crisis".

Crazier, people thought that these features would finally give us true reuse that would allow us to 
write everything only once, and then we'd never have to touch that code again. Today we know
that "reuse" looks more like github than creating a subclass :)

There are many signs of that religion being in decline, but we're far away from it being over, with
many schools and textbooks still teaching it as the natural way to go, the amount of programmers
that learned to program this way, and more importantly, the amount of code and languages out
there that follow its style.

Let me try and make this post more exciting and say something controversial: I feel that the 
religious adherence to OO is one of the most harmful things that has ever happened to 
computer science.

It is responsible for two huge problems (which are even worse when used in combination): over
engineering and what I'll call "state oriented programming".

1) Over Engineering

What makes OO a tool that so easily leads to over engineering?

It is exactly those magical features mentioned above that are responsible, in particular the 
desire to write code once and then never touch it again. OO gives us an endless source 
of possible abstractions that we can add to existing code, for example:

Wrapping: an interface is never perfect for every use, and providing a better interface is 
an enticing way to make code better. For example, a lot of classes out there are nothing
more than a wrapper around the language's list/vector type, but are called a "Manager", 
"System", "Factory" etc. They duplicate most functionality (add/remove) while hiding others, 
making it specific to what type of objects are being managed. This seems good because 
it simplifies the interface.

De-Hard-Coding: to enable the "write once" mentality, a class better be ready for every
future use, meaning anything in both its interface and implementation that a future user 
might want to do differently should be accommodated for, by pulling things out into additional 
classes, interfaces, callbacks, factories.

Objectifying: every single piece of data that can be touched by code must become an 
object. Can't have naked numbers or strings. Besides, naming these new classes 
creates meaning which seems like it makes them easier to deal with.

Hiding & Modularizing: There is an inherent complexity in the dependency graph
of each program in terms of its functionality. Ideally, modularizing code is a clustering 
algorithm over this graph, where the most sparse connections between clusters 
become module boundaries. In practice, the module boundaries are often in the 
wrong spot, produce additional dependencies themselves, but worst of all: they 
become less ideal over time as dependencies change. And since interfaces are 
even harder to change than implementation, they just stay put and deteriorate.

You can iteratively apply the above operations, and in most cases thus produce
code of arbitrary complexity. Worse, because all code appears to be doing
something and has a clear name and function, this extra complexity is often 
invisible. And programmers love creating it because it feels good to create 
what looks like the perfect abstraction for something, and to "clean up" 
whatever ugly interfaces it sits on top of some other programmer made.

Underneath all of this lies the fallacy of thinking that you can predict the future
needs of your code, a promise that was popularized by OO, and has yet to die out.

Alternative ways of dealing with "the future", such as YAGNI, OAOO, and 
"Do the simplest thing that could possibly work" are simply not as attractive to 
programmers, since constant refactoring is hard, much like perfect clustering 
over time (for abstractions and modules) is hard. These are things that computers 
do well, but humans do not, since they are very "brute force" in their nature: 
they require "processing" the entire code base for maximum effectiveness.

Another fallacy this produces is that when over engineering inevitably causes problems 
(because, future), that those problems were caused by bad design up front, and next 
time, we're going to design even better (rather than less, or at least not for things 
you don't know yet).

I keep coming across this video (though some of the arguments aren’t that compelling):

Brian Will: Object-Oriented Programming is Bad
Hacker News

and it’s companions
Object-Oriented Programming is Embarrassing: 4 Short Examples
Object-Oriented Programming is Garbage: 3800 SLOC example

A collection of quotes:
What’s Wrong With Object-Oriented Programming?

I still think a large part of the problem in OO is the pursuit of the elusive benefit of “reuse” - that the cost of OO-style software will somehow magically amortize in the future because existing OO software can be more easily leveraged (much like mechanical or electronic “components”). It’s one thing to generalize code to reduce duplication within the same codebase for current needs, but pushing beyond that - trying to predict the needs of other yet-to-be-built products (or even the needs of the same product in the future) - usually fails to deliver on the “reuse” promise, only resulting in lots of accidental complexity. “Reusable” software is usually extracted out of products - after it has proven itself multiple times in different contexts.
Given the velocity of change it seems to make more sense to choose the conceptual boundaries within a design to support “replaceability”.

For example:
Write code that is easy to delete, not easy to extend.

PS: Medium: Goodbye, Object Oriented Programming


All this stuff about reuse sounds so mismatched to OOP. In strongly typed functional languages - take Haskell’s Hoogle for one big example - you can just search for a ‘type’ and find the dependencies that do what you want. For instance, if you search for 'a list -> 'a -> 'a, you will find the fold/reduce functions in the standard library. The type not only tells what it accepts, it often tells what it does too (if it doesn’t, you probably don’t use enough types).


Nice video about FP


On reuse our hero (if I may call him so) Joe Armstrong said the following famous words:

I think the lack of reusability comes in object-oriented languages, not in functional
languages. Because the problem with object-oriented languages is they've got all this implicit 
environment that they carry around with them. You wanted a banana but what you got was a 
gorilla holding the banana and the entire jungle. If you have referentially transparent code, if 
you have pure functions-all the data comes in its input arguments and everything goes out and
leaves no state behind-it's incredibly reusable. You can just reuse it here, there, and everywhere. 
When you want to use it in a different project, you just cut and paste this code into your new 
project. Programmers have been conned into using all these different programming languages 
and they've  been conned into not using easy ways to connect programs together. 
The Unix pipe mechanism-A pipe B pipe C-is trivially easy to connect things together.
Is that how programmers connect things together? No. They use APIs and they link them
into the same memory space, which is appallingly difficult and isn't cross-language. 
If the language is in the same family it's OK-if they're imperative
languages, that's fine. But suppose one is Prolog and the other is C. They have a completely 
different view of the world, how you handle memory. So you can't just link them together like that. 
You can't reuse things. There must be big commercial interests for whom it is very desirable that 
stuff won't work together.

When you try to maximize reusability / agility in OO code you have to, first of all, expel inheritance and subtype polymorphism. Even the GoF book says “prefer composition”. But of course there is a lot more to writing “agile” code; CS is working hard on that. Hickey also had a nice presentation on agility in code: . Summary

Architectural Agility wins else - push the elephant

So you can talk and talk your heads off on endless tiring scrum meetings (Individuals and Interactions over processes and tools! Responding to Change over following a plan! Working Software over comprehensive documentation! Hahahaha - furious laughter, excuse me) - if you are not working on that you keep pushing an elephant. Tiring, boring and not agile at all.
A cause of all these messed-up code bases is - often uneducated - programmers using patterns / technologies in production code with motives other than business motives, like building a maintainable codebase. And of course that focus on “Working” (instead of maintainable) Software, Individuals and Interactions over […], which is by the way very good for shareholder value (call your company agile, throw out programmers older than 34, and sell the company again). :wink:


This has to be my favourite programming quote ever


This and Graham’s article about the blub programmer remind me of the other perspective that Lamport offers.


FYI: Another anti-OOP opinion piece (nothing new …)