I read the Effective Java book (maybe not all of it) from this article, and then I realized how broken this language is.
Easily emulatable though; I used the Boost Phoenix library for that before C++11. Something like:
auto f = [](int a, int b){ return a + b; };
In Phoenix this would be:
BOOST_AUTO(f, arg1 + arg2);
Except where C++11’s lambda was monomorphic, Phoenix’s was polymorphic (polymorphic lambdas did not come to C++ until C++14, I think), meaning it could operate over any types that fulfill the access pattern within. I.e. the C++11 f(1,2)
would return 3, and so would Phoenix’s, but with Phoenix’s you could also do f("Hello ", "world!")
and get back “Hello world!”. In Phoenix you could enforce types if you wanted, but most of the time there was no need. Phoenix used expression templates to encode a special ‘type’ that held no data (unless you bound in surrounding context, in which case it held that context data; otherwise it was a 0-byte-sized type) where the type was based on the access within. It was intensely powerful and I still prefer it over C++11 lambdas in many cases (though I use C++11 lambdas exclusively now, since they are a language feature and not a library).
We had those before C++11 in many forms; Boost.Phoenix was my favorite, but there were others too. ^.^
Uh, many, many people, including me. I rarely touch classes at all.
That is how Phoenix worked, for example; they just need to fulfill a certain ‘contract’ (as it is called in C++, not a protocol), is all.
It has first-class function pointers; however, function objects, lambdas, etc. are different, and there is a standard library type that can bind them all as one thing (std::function<>). Even before that it was fairly trivial, and Boost had boost::function before too, which compiled down to the most basic machine instructions (std::function copied Boost’s almost precisely). Everything that C++ has now had prior methods; the standard library just includes a lot built in now.
What it calls a functor is just a struct with an operator() function bound to it, nothing special there. And a ‘function’ is anything that is called via the assembly ‘call’ instruction (or equivalent, depending on the CPU being compiled for). ‘Functors’ were only a hack to emulate lambdas; no one should be writing them by hand anymore (although you could do polymorphic-style lambdas with them, and they are what Phoenix used behind the scenes).
Which I personally prefer: I know exactly what is being done and where, unlike GC languages where it is more… unexpected. It is an absolute necessity for determinism. And that mutating of stuff is just programming style; I always tried to minimize it myself. I’ve been a fan of immutability for a long, long time in a variety of languages. I’m very glad that Rust defaults to immutable, but even in C++ I like to ‘const’ everything.
And as an aside, yes, manual memory management sucks, but if you are manually managing memory in most C++ work then you likely have a bug waiting to happen. I had been programming in a very Rust-like style of ownership semantics (using Boost types) since long before Rust was ever thought of. If you have a ‘new’ in your program, you probably have a bug. ‘new’ is still absolutely required when you need detailed control, such as writing kernel drivers, but in user code it should never be explicitly used; rather, owned pointers, shared pointers, or a managed container (like std::vector) should be used. There is a reason that C++ should not be a language for most people: it is designed for efficiency of execution, not of programming. Those two things are often at odds, and I would rarely recommend C++ for anything where performance was not an utter and absolute necessity.
Absolutely so; structural typing to me is just bugs waiting to happen. Just because some two random data types have the same structure does not imply that they could be used the same way at all. Think of a Matrix<float,1,4> compared to a Quaternion: same internal structure, absolutely different things that you should never even accidentally use in the wrong context. OCaml uses nominative typing everywhere except for structs, where it uses row typing (similar to structural typing but only on parts); that is how it should be done, in my opinion.
Huh? Can you elaborate? Are you referring to returning a different type based on the arguments (simple to do: templates), or ‘choosing’ a function overload based on a return type (also easy to do via templates, or by passing the return container as a reference, or by passing back a proxy that can cast itself to whatever it is being assigned to, all of which are simple to do)?
Eh, not really? They are both just a set order and usage of bits in memory. Both can go through function overloading, operator overloading, etc. Not sure what you are referencing?
The rules for how operator casting works are very well defined in the language (admittedly a very large rule set), so no, it does not ‘magically work until it doesn’t’; it follows the rules explicitly and precisely, and it is very well defined and expected. It is not something I tend to use myself, however; it is more used by OOP’ers to work around the design faults of OOP, or to simplify getter code, as stupid as that is.
No difference from primitives or objects; they are all just bytes in memory. lvalue or rvalue is something to help you not do something stupid: an lvalue is something you can assign to, while an rvalue is something that, if you assigned to it, would just vanish unused anyway, which is often a bug.

Memory management is just like handling any other kind of resource, and this is one of my big BIG BIG issues with GC languages: they have a GC to automatically (and slowly) manage memory, but they do not manage any other resource, such as files, network ports and other I/O in general, locks, synchronization of any form, GPU memory bindings, etc. C++'s RAII handles all of that properly: ownership objects for single-point ownership, shared objects for multi-point ownership, and they work for any resource type, from memory to GPU memory to files to network sockets to mutexes and semaphores. Rust does this right in that it makes RAII enforced; no GC needed, it works for any resource, not just RAM, and without the significant overhead (in both speed and determinism; the loss of determinism is the big thing for me) of GCs.

And what is wrong with pointers? Doing math on pointers should be an opt-in compiler warning, in my opinion, but there is nothing wrong with pointers; just do not ever new or delete with them. If you have a pointer to an object that might vanish later, then you want a weak pointer (which forces you to test whether it is valid before you can access it); if you have a pointer that is always valid, then it is a reference; and so on.

What about constness? I love constness! I use it almost excessively. ^.^ And with libraries you will always have conflicting naming schemes; I am unsure how this is C++-related. I see the problem in Elixir at times; I see it in every language. And what about interoperability with C interfaces? C interfaces are well defined in C++; if you do not define one, then it cannot be used from C, simple as that.
Uh, I have no clue what you are talking about. C++ does not even have a concept of monkey-patching existing classes (or existing anything, for that matter), nor can you add methods to classes once they are defined. You can always define free functions (and that is what I prefer anyway; defining methods I consider a code smell in many ways) that operate on something (whether a class or struct or whatever), but that is certainly not adding a method to it…
Use logic? C++ is a very easy language that is wonderfully type-safe if used well, once you learn the thousands of pages of its definition (which I have done, and which I would not recommend doing). You mention teachers; in my experience most teachers teach C++ as if it were either C or Java. It is neither, and trying to program in it like one of those is invariably and utterly beyond stupid. It is a fantastic low-level type-safe language that lets you drop out of the type-safe world into C when necessary (and anyone who encourages you to do so, like using raw pointers, is an utter moron; feel free to point them my way).
However, I would not recommend learning C++ unless:
If you want a fast-enough low-level language (faster than Java, OCaml, etc.) that is significantly better designed, then:
Wow. OvermindDL1, thank you for that amazing in-depth reply.
It indeed is very obvious that I still am a novice in the C++ language. I would love to learn more about the functional style you are using.
In the course I am following, we first learn the basics (which includes raw memory management, pointer management, etc) and we will continue on to learn about all the other wonderful features. I fully expect that many of the things that feel like a necessity to me right now, turn out to be something that is only used in very extreme special circumstances.
Some elaboration on some of the questions you asked:
It might be the case that templates can be used for this; I do not know. The ‘use the first (or, according to some style guides, last) few parameters as output arguments by passing things-to-be-mutated as pointers/references’ strategy in any case seems a bit of a hack to me.
The proxy-approach will probably work – hadn’t thought about that.
[quote]There is a difference in C++ between primitive types and user-defined types(e.g. structs/classes) and how they should be treated.
Not sure what you are referencing?
[/quote]
I mostly mean the way they are instantiated and passed around: primitives do not really follow RAII (if you do not give them an explicit value they might contain anything), and you use call-by-value on them, whereas it is frowned upon to do this for structs/classes because they might be larger (so you use either pointers or references for these instead).
This is probably the kind of thing that feels weird for a greenhorn, but completely obvious and normal for a die-hard language user.
As for the part about monkey-patching: Scratch that. I thought you could still define new functions even when their declarations were missing from the class definition, but this is untrue. Probably an implicit difference between classes and namespaces…
It is definitely true that my two teachers have a tendency (at least thus far) of explaining things from an object-oriented (or, as my teacher put it, object-based) point of view. I really look forward to the next part of the course, where hopefully the exercises are a bit more free in how we pick our implementations, so instead of ‘make a Matrix class that has a copy constructor’ having exercises like ‘make a problem that solves problem XXX’. Who knows.
Learning Rust is definitely on my Bucket List. I have some experience with Haskell, but I will take some time to look into OCaml.
Again, thank you for your marvelous in-depth reply!
It indeed is very obvious that I still am a novice in the C++ language. I would love to learn more about the functional style you are using.
Mostly just using const everywhere (which helps the compiler optimize too) and immutable data structures. The libraries out there for this in C++ are beyond innumerable (or make one yourself for the challenge; it is quite easy to do, but illuminating too).
In the course I am following, we first learn the basics (which includes raw memory management, pointer management, etc) and we will continue on to learn about all the other wonderful features. I fully expect that many of the things that feel like a necessity to me right now, turn out to be something that is only used in very extreme special circumstances.
So they are teaching C, not C++; got it. And yes, all of that is useful for systems programming, but it is highly unlikely that you will be doing systems programming, so almost none of it will be useful. If they teach you RAII, then encode that style into your very being; it is one of the most popular resource-management styles (not just for memory) in any non-GC language (and most GC languages have no destructor semantics, so they cannot support RAII without special scoping like Python or .NET stupidity does, which just adds more work for you).
It might be the case that templates can be used for this; I do not know. The ‘use the first (or, according to some style guides, last) few parameters as output arguments by passing things-to-be-mutated as pointers/references’ strategy in any case seems a bit of a hack to me.
The proxy-approach will probably work – hadn’t thought about that.
You ‘can’ use templates for this, and if you need absolute speed that is what you do (it can save one function-call overhead cost; not a big deal in almost all cases), but the easiest way is returning a proxy that can cast itself to a set of known types (or you can template the proxy and return an unknown user-definable type that they define a converter for; even that can be done without templates with a tagged namespace). And yes, passing the return variable in the inputs is a C’ism; C++ has tuples and tuple deconstruction if you want to return multiple outputs.
I mostly mean the way they are instantiated and passed around: primitives do not really follow RAII (if you do not give them an explicit value they might contain anything), and you use call-by-value on them, whereas it is frowned upon to do this for structs/classes because they might be larger (so you use either pointers or references for these instead).
This is probably the kind of thing that feels weird for a greenhorn, but completely obvious and normal for a die-hard language user.
Only if you do not ‘construct’ them; you can easily ‘new’ a user-defined type and have it be random memory crap too. You should never do that, and you should always default-construct a primitive as well. You can use call-by-value on structs/classes with impunity; you just have to know where and when it is useful. Move or forward hoisting can ‘move’ an entire struct, no matter how big, into the function you are calling (or even returning from!) without copying it. ^.^ There are plenty of times to keep a struct/class as a value and not as allocated heap memory. In the general case, do what seems natural, and if you really want to take a potentially large thing immutably, a const reference tends to be best (in most cases), as that allows the compiler to do automatic forwarding when it optimizes.
As for the part about monkey-patching: Scratch that. I thought you could still define new functions even when their declarations were missing from the class definition, but this is untrue. Probably an implicit difference between classes and namespaces…
Heh, yep, you can add things to any namespace anywhere (this is vital for tagged namespacing types), but once a class is defined, it is defined (there are work-arounds due to old C compatibility, but if you do those in production code you will be shot; don’t drop into the C world in C++).
It is definitely true that my two teachers have a tendency (at least thus far) of explaining things from an object-oriented (or, as my teacher put it, object-based) point of view. I really look forward to the next part of the course, where hopefully the exercises are a bit more free in how we pick our implementations, so instead of ‘make a Matrix class that has a copy constructor’ having exercises like ‘make a problem that solves problem XXX’. Who knows.
If you want to see a matrix library done right in C++, look at Eigen. It is template magic to an extreme, but even the most complex matrix transformations are inlined at compile time into some of the most efficient code you could write, for SSE4 or MMX or whatever the CPU supports. ^.^
For template magic, it is surprisingly readable. A lot of Boost template magic (admittedly an order of magnitude worse in some areas) is like reading an eldritch book: you either go insane or learn to love it. ^.^
Learning Rust is definitely on my Bucket List. I have some experience with Haskell, but I will take some time to look into OCaml.
If you are an ‘I want to get this done, I want it to work, and I want it to work fast’ kind of person, for example if you are writing a CPU-heavy game, learn Rust. It is a fantastic language, a bit verbose in some areas (still better than C++), but absolutely well made.
Haskell is fantastic for learning how functional purity can be perfected, but its compile times can rival even C++'s, so be warned.
OCaml is like Haskell but entirely pragmatic; you are able to (though you shouldn’t) break the typing system if necessary. It compiles down to native code generally on par with C++ (within a single order of magnitude is common; 2x as slow as C++ is about average, sometimes faster, sometimes slower, depending on what you do; being anywhere near there is utterly fantastic, at least until you start getting into C++ template magic, which will run circles in speed around almost any other language). It is a ‘work’ language: unlike Haskell, which was built in the university system, OCaml was built in the workplace (mostly Jane Street and nowadays also Facebook), so it is designed to get stuff done. Instead of HKTs it has HPTs, which are a bit more verbose (not bad though, and when implicit modules land in OCaml that wordiness vanishes for most cases) but are more powerful than HKTs and compile significantly faster. The OCaml language is designed for compilation efficiency, both in optimized output and in actual compiling time. It will trounce Haskell, C++, Rust, Java, .NET, etc. in compiling speed, and its output machine code is still as optimized as C++'s (just without the template magic). It is an amazing design, but the language feels odd at times because of it; it makes sense once you know why.
Again, thank you for your marvelous in-depth reply!
It was the weekend and it was fun. I’m an INTJ, so I love debates (even for things that I personally am not for as it still forces me to reason things out). ^.^
Just to give my view on it.
First, a bit of background: my first language was Caml (yes, the one written before OCaml), then embedded C. Then I moved on to the rest.
I personally think that the main problem with “OOP” is that it mixed Liskov’s abstract data types with Simula and Parnas’ hierarchy of programming to build its systems. But by mixing data with the functions associated with it (and mixing the type system into it), they cornered themselves in a dead end where touching one thing, or changing one way things work, breaks the whole thing.
Mutexes and locks are not fundamentally OOP, just as Erlang was not really trying to be functional. It just happened that the designers’ goals for what became mainstream OOP were aligned with that type of thinking. (I would say it was bottom-up thinking from hardware/OS limitations, as opposed to a “what do we need to build this system properly” way of looking at it.)
Regarding C++: I see it as a “here is a complete toolkit to do anything you want”. It can support nearly any style, but it tends to ask for a lot of work and rebuilding of abstractions. Kind of like a Lisp, with more speed but less glue.
I completely agree on Rust being what low level programming should have been earlier. Same type of power but better API and tools.
Regarding C++: I see it as a “here is a complete toolkit to do anything you want”. It can support nearly any style, but it tends to ask for a lot of work and rebuilding of abstractions. Kind of like a Lisp, with more speed but less glue.
Precisely, great description of it.
Some good libraries make it wonderful to work in, but it can indeed be utter hell, especially other people’s code. ^.^
I completely agree on Rust being what low level programming should have been earlier. Same type of power but better API and tools.
Eh, it still lacks an equivalent to C++'s template system. Its macros help a bit, but there is no type-level work in them. If it added HKTs or HPTs, then that would fix it. C++ templates can pretend to be either HKTs or HPTs and more, although a lot more wordily too. ^.^
Another nice read: A reaction I found on www.quora.com/Was-object-oriented-programming-a-failure
Recognizable; I’m doing a massive refactoring of an overengineered OO code base now.
Removing factories, inheritance, and subtype polymorphism (ridiculous: “abstract methods”), etc.
Btw, “Object Oriented Programming without Inheritance” from Bjarne Stroustrup is interesting also: https://www.youtube.com/watch?v=xcpSLRpOMJM
The problem with OO is that it is exactly the opposite of failure: it was immensely successful (in contrast to the actual benefits it provides).

In the dark days of OO's height of success it was treated almost like a religion by both language designers and users. People thought that the combination of inclusion polymorphism, inheritance, and data hiding was such a magical thing that it would solve fundamental problems in the "software crisis". Crazier, people thought that these features would finally give us true reuse that would allow us to write everything only once, and then we'd never have to touch that code again. Today we know that "reuse" looks more like github than creating a subclass :)

There are many signs of that religion being in decline, but we're far away from it being over, with many schools and textbooks still teaching it as the natural way to go, the amount of programmers that learned to program this way, and more importantly, the amount of code and languages out there that follow its style.

Let me try and make this post more exciting and say something controversial: I feel that the religious adherence to OO is one of the most harmful things that has ever happened to computer science. It is responsible for two huge problems (which are even worse when used in combination): over engineering and what I'll call "state oriented programming".

1) Over Engineering

What makes OO a tool that so easily leads to over engineering? It is exactly those magical features mentioned above that are responsible, in particular the desire to write code once and then never touch it again. OO gives us an endless source of possible abstractions that we can add to existing code, for example:

Wrapping: an interface is never perfect for every use, and providing a better interface is an enticing way to make code better. For example, a lot of classes out there are nothing more than a wrapper around the language's list/vector type, but are called a "Manager", "System", "Factory" etc. They duplicate most functionality (add/remove) while hiding others, making it specific to what type of objects are being managed. This seems good because it simplifies the interface.

De-Hard-Coding: to enable the "write once" mentality, a class better be ready for every future use, meaning anything in both its interface and implementation that a future user might want to do differently should be accommodated for, by pulling things out into additional classes, interfaces, callbacks, factories.

Objectifying: every single piece of data that can be touched by code must become an object. Can't have naked numbers or strings. Besides, naming these new classes creates meaning, which seems like it makes them easier to deal with.

Hiding & Modularizing: there is an inherent complexity in the dependency graph of each program in terms of its functionality. Ideally, modularizing code is a clustering algorithm over this graph, where the sparsest connections between clusters become module boundaries. In practice, the module boundaries are often in the wrong spot and produce additional dependencies themselves, but worst of all: they become less ideal over time as dependencies change. And since interfaces are even harder to change than implementations, they just stay put and deteriorate.

You can iteratively apply the above operations, and in most cases thus produce code of arbitrary complexity. Worse, because all code appears to be doing something and has a clear name and function, this extra complexity is often invisible. And programmers love creating it, because it feels good to create what looks like the perfect abstraction for something, and to "clean up" whatever ugly interfaces it sits on top of that some other programmer made.

Underneath all of this lies the fallacy of thinking that you can predict the future needs of your code, a promise that was popularized by OO and has yet to die out. Alternative ways of dealing with "the future", such as YAGNI, OAOO, and "Do the simplest thing that could possibly work", are simply not as attractive to programmers, since constant refactoring is hard, much like perfect clustering over time (for abstractions and modules) is hard. These are things that computers do well, but humans do not, since they are very "brute force" in their nature: they require "processing" the entire code base for maximum effectiveness.

Another fallacy this produces is that when over engineering inevitably causes problems (because, future), those problems were caused by bad design up front, and next time we're going to design even better (rather than less, or at least not for things you don't know yet).
I keep coming across this video (though some of the arguments aren’t that compelling):
Brian Will: Object-Oriented Programming is Bad
Medium
Reddit
Hacker News
and its companions
Object-Oriented Programming is Embarrassing: 4 Short Examples
Object-Oriented Programming is Garbage: 3800 SLOC example
A collection of quotes:
What’s Wrong With Object-Oriented Programming?
I still think a large part of the problem in OO is the pursuit of the elusive benefit of “reuse” - the idea that the cost of OO-style software will somehow magically amortize in the future because existing OO software can be more easily leveraged later (much like mechanical or electronic “components”). It’s one thing to generalize code to reduce duplication within the same codebase for current needs, but pushing beyond that, trying to predict the needs of other yet-to-be-built products (or even the needs of the same product in the future), usually fails to deliver on the “reuse” promise, only resulting in lots of accidental complexity. “Reusable” software is usually extracted out of products - after it has proven itself multiple times in different contexts.
Given the velocity of change it seems to make more sense to choose the conceptual boundaries within a design to support “replaceability”.
For example:
Write code that is easy to delete, not easy to extend.
All this stuff about reuse sounds so mismatched to OOP. In strongly typed functional languages (take Haskell’s Hoogle for one big example), you can just search for a ‘type’ and find the dependencies that do what you want. If you search for 'a list -> 'a -> 'a, you will find the fold/reduce functions in the standard library. The type not only tells you what it accepts, it often tells you what it does too (if it doesn’t, you probably don’t use enough types).
Nice video about FP
On reuse, our hero (if I may call him so) Joe Armstrong said the following famous words:
I think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle. If you have referentially transparent code, if you have pure functions - all the data comes in its input arguments and everything goes out and leaves no state behind - it's incredibly reusable. You can just reuse it here, there, and everywhere. When you want to use it in a different project, you just cut and paste this code into your new project. Programmers have been conned into using all these different programming languages and they've been conned into not using easy ways to connect programs together.

The Unix pipe mechanism - A pipe B pipe C - is trivially easy to connect things together. Is that how programmers connect things together? No. They use APIs and they link them into the same memory space, which is appallingly difficult and isn't cross-language. If the language is in the same family it's OK - if they're imperative languages, that's fine. But suppose one is Prolog and the other is C. They have a completely different view of the world, how you handle memory. So you can't just link them together like that. You can't reuse things. There must be big commercial interests for whom it is very desirable that stuff won't work together.
When you try to maximize reusability / agility in OO code, the first thing you have to do is expel inheritance and subtype polymorphism. Even the GoF book says “prefer composition”. But of course there is a lot more to writing “agile” code; CS is working hard on that. Hickey also had a nice presentation on agility in code: https://www.youtube.com/watch?v=rI8tNMsozo0 . Summary:
https://raw.githubusercontent.com/richhickey/slides/master/simplicitymatters.pdf
Architectural Agility wins else - push the elephant
So you can talk and talk your heads off in endless tiring scrum meetings (Individuals and Interactions over processes and tools! Responding to Change over following a plan! Working Software over comprehensive documentation! Hahahaha, furious laughter, excuse me) - if you are not working on that, you keep pushing an elephant. Tiring, boring, and not agile at all.
A cause of all these messed-up code bases is (often uneducated) programmers using patterns and technologies in production code with motives other than business motives, like building a maintainable codebase. And of course that focus on “Working” (instead of maintainable) Software, Individuals and Interactions over […], which is by the way very good for shareholder value (call your company agile, throw out programmers older than 34, and sell the company again).
You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
This has to be my favourite programming quote ever
This and Graham’s article about the blub programmer remind me of the other perspective that Lamport offers.
For quite a while, I’ve been disturbed by the emphasis on language in computer science. One result of that emphasis is programmers who are C++ experts but can’t write programs that do what they’re supposed to. The typical computer science response is that programmers need to use the right programming/specification/development language instead of/in addition to C++. The typical industrial response is to provide the programmer with better debugging tools, on the theory that we can obtain good programs by putting a monkey at a keyboard and automatically finding the errors in its code.
I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering—namely, mathematics. But how should that be done in practice? This note represents a small step towards an answer. It doesn’t discuss how to teach computer science; it simply addresses the preliminary question of what is computation.
FYI: Another anti-OOP opinion piece (nothing new …)
OOP is considered by many to be the crown jewel of computer science. The final solution to code organization. The end to all of our…
Does Object-Oriented Programming really make it easier for programmers to develop? Or is an alternative like functional programming a better way to go?
Developers who hate on OOP don’t know how to use it
Object-Oriented Programming seems to be receiving a lot of hate recently, usually from inexperienced developers who have just “learned” Functional Programming and then want to hate on anything that doesn’t support functional purity. Unfortunately, this crowd always seems to fit the same mold. They are usually web developers, fairly young, seemingly intelligent but extremely impatient and easily swayed by new technology (e.g. JavaScript developers fit this mold precisely) but the most important trait that sets them apart from developers who embrace OOP, is not persevering long enough to actually learn how to use it.
Interesting! I believe that the statement about using Mathematics as ‘general programming language’ might have been the inspiration for the language Fortress which contains/contained some very cool ideas:
Unfortunately, development on it ceased after a couple of years. Nevertheless, I believe that Fortress in turn was one of the inspirations for Rust (although I do not remember the source of that claim).
I find the writer of that article undervalues what Joe Armstrong said (a more complete citation from Armstrong’s text is below). A nice read about OOP (with historic citations from A. Kay, for example).
This is a nice presentation from mpj who proposes not to use inheritance at all: https://www.youtube.com/watch?v=wfMtDGfHWpA . Another problem (besides inheritance & dependent use of patterns) I saw in OO code is the (over)use of dependency injection, introducing needless instance variables that cause coupling between object members (methods etc.): https://blog.ploeh.dk/2017/01/27/from-dependency-injection-to-dependency-rejection/
Following what Armstrong says, I found myself working towards a ports and adapters architecture https://blog.ploeh.dk/2016/03/18/functional-architecture-is-ports-and-adapters/ . I turned parent objects into helpers and passed them to methods in parameter objects.
Alexander Stepanov: “inheritance is bad unless base class has no members”
undervalues what Joe Armstrong said
Why OO Sucks
web archive 2001 snapshot
Recent (2019 April) HN discussion
2015 August reddit discussion
Older (2012 July) HN discussion
https://blog.ploeh.dk/2017/01/27/from-dependency-injection-to-dependency-rejection/
I think the core article is Dependency Rejection.
I like how Mark Seemann’s posts often revolve around pushing the impure code to the edges of the system and how he uses Haskell for creating functional reference implementations to inform the design of production code.
However, I would expect that lots of people would be uncomfortable with how
```fsharp
// Reservation -> int option
let tryAcceptComposition reservation =
    reservation.Date
    |> DB.readReservations connectionString
    |> flip (tryAccept 10) reservation
    |> Option.map (DB.createReservation connectionString)
```
tends to expose implementation details that they are used to burying (encapsulating/injecting) deep in an object somewhere.
Functional architecture - The pits of success - Mark Seemann
At the time (1990s) OO felt like a step up from structured programming in terms of organizing a code base.
But as Rich Hickey outlines in Value of Values (2012 July):
So we adopted an approach to programming that was based around the manipulation of places. It totally made sense. And the keyword there, the key aspect to that is it made sense. It used to make sense. Those limitations are gone.
OO had a head start because it worked within the given limitations of the computer hardware at the time - limitations that held other approaches back. However since then:
The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.
Henry Petroski
It’s largely advances in hardware that have made it possible to create increasingly complex systems - more complex than the complexity that needed to be tamed back in the 1990s.
FP may not be the saviour, but class-based OO needs a lot more (continued) scrutiny and should not be automatically grandfathered in as the best way to move forward.
From an interview with Tony Hoare (2002)
Programming languages on the whole are very much more complicated than they used to be: object orientation, inheritance, and other features are still not really being thought through from the point of view of a coherent and scientifically well-based discipline or a theory of correctness. My original postulate, which I have been pursuing as a scientist all my life, is that one uses the criteria of correctness as a means of converging on a decent programming language design—one which doesn’t set traps for its users, and ones in which the different components of the program correspond clearly to different components of its specification, so you can reason compositionally about it.
There seem to be some fundamental issues with OO, outside advances in hardware. But we’re stuck with large OO codebases that need maintenance.