The complexity of Haskell vs. Elixir's simplicity

I wrote this comment on r/haskell, and it’s not popular there. :wink: But I think I’m on to something…

Haskell reminds me of Java, and even Ruby: it’s got a couple of awesome features — strong static typing with inference, plus purity. (Ruby has dynamic behavior and “developer friendliness”. Java has “strong” static typing.)

But you can’t just have an excellent type/safety system at the application developer level. To make it work, it’s turtles all the way down. Lens only exists because the native type system makes that (normal) kind of work too hard. And so there’s a seemingly never ending rabbit hole of complexity to learn, all in service of the original goal. The complexity also arises from the need to master implementation details of the Haskell system.

In Ruby, the cost of the “awesome experience” is the need to become an expert test writer, and essentially write every app twice.

In Java, it’s the cost of more and more additions and complex syntax (generics, etc.) to support the original strongly typed goal.

Contrast this with languages which have decided to go another, less pure way: Elixir and Elm.

Learning Elixir, you’ll be blown away by how quickly you can master the whole system and the libraries. I was surprised to realize I was done learning, and I could just spend my time creating business value.

In Elm, they drew a line in the sand at typeclasses — they sealed up the recursively descending rabbit hole of complexity right there.

I do really, really like programming in Haskell — at the experienced-newbie level. I enjoy being able to refactor, and the enforcement of exhaustive case checking (with the correct options enabled). But would I switch to another, simpler ecosystem which provides these same benefits? Yeah.


Thank you for your post!

Having programmed in all these languages myself, I find myself disagreeing with you on a couple of points:

First, let me start off with mentioning that every language has its warts. Being a polyglot programmer is, in that regard, both a blessing and a curse: I miss all the features the other languages have, while being slightly annoyed at times at the odd implementation-details or things that exist because ‘it seemed like a good idea at a time’ (and are kept for backwards-compatibility).

I love all the languages (truth be told, some more than others, but it is definitely only a partial order):

  1. About the complexity of the abstractions in Haskell. Yes, you are right, Haskell has mechanisms that can be very difficult to understand. However, I have the strong feeling that this is caused more by a lack of good documentation and tutorials than because ‘abstraction is bad’. Be aware that complexity is not the same as difficulty; often, Haskell turns out to be harder for developers whose minds already contain the leaky abstractions that are present in other languages: for instance, working with recursion is only more difficult than working with loops if you already know loops.
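For instance, a sketch of what ‘recursion instead of loops’ looks like in Elixir (the MySum name is made up for illustration):

```elixir
defmodule MySum do
  # Pattern matching on the shape of the list replaces the loop entirely:
  # one clause for the empty list, one for head-plus-rest.
  def sum([]), do: 0
  def sum([head | tail]), do: head + sum(tail)
end

MySum.sum([1, 2, 3, 4])  # => 10
```

If loops are all you know, this looks harder; if you learned this first, a `for` loop with a mutable accumulator is the strange one.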

And I have to say: In the last four years, the amount of beginner and beginner-intermediate material about Haskell (and pure functional programming in general) has skyrocketed; probably in part caused by more and more people from other communities looking at the functional paradigm for inspiration.

Haskell’s type system does not exist ‘because it is nice’ (although, of course, tail recursion is its own reward :stuck_out_tongue_winking_eye:), but because it allows you to write code that is as decoupled as can be: For instance, you can write algorithms that work on certain kinds of containers (like: ‘all things that you can add a new element to’) without knowing:

a) what type of actual container it is, nor
b) what things are stored inside.
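Elixir offers a rough analogue of that kind of decoupling through the Enumerable protocol: a function written against it knows neither the concrete container nor what is stored inside. A minimal sketch (Summarize is a made-up module name):

```elixir
defmodule Summarize do
  # Works on anything that implements the Enumerable protocol,
  # without knowing the container type or the element type.
  def count_and_first(enumerable) do
    {Enum.count(enumerable), Enum.at(enumerable, 0)}
  end
end

Summarize.count_and_first([10, 20, 30])  # a list:  {3, 10}
Summarize.count_and_first(1..100)        # a range: {100, 1}
```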

And because this not only works on containers, but computational statements are also a ‘container type’, it is possible to leverage the compiler to do insane things, like taking a piece of code that used to run sequentially and running it as parallelized as possible.
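Elixir gets nothing like this from a type system, but as a loose analogy of ‘the same code, parallelized’: swapping Enum.map for Task.async_stream parallelizes a pipeline without touching the per-element code (a sketch, not the compiler-driven rewriting described above):

```elixir
square = fn x -> x * x end

# Sequential:
[1, 2, 3] |> Enum.map(square)
# => [1, 4, 9]

# Parallel (one Task per element); the per-element code is unchanged,
# and results come back in order by default:
[1, 2, 3]
|> Task.async_stream(square)
|> Enum.map(fn {:ok, v} -> v end)
# => [1, 4, 9]
```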

That said, Haskell definitely has some rough edges. Most of them are caused by its age (because now we know more than we knew back then), resulting in:

  1. A Prelude with some partial (non-total) functions, which basically go against how you are supposed to program in the language.
  2. Some things have very counter-intuitive names, especially when coming from a different programming language (class is for instance used to create a Haskell typeclass. If it had either been called interface or protocol, then a lot more people would have understood it the first time).
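Elixir has the same partial/total distinction, for what it’s worth: hd/1 raises on the empty list (much like Prelude’s head), while List.first/1 is total. A small sketch (the partial wrapper is just for illustration):

```elixir
# hd/1 is partial: it raises ArgumentError on the empty list.
partial = fn list ->
  try do
    {:ok, hd(list)}
  rescue
    ArgumentError -> :error
  end
end

partial.([1, 2, 3])  # => {:ok, 1}
partial.([])         # => :error

# List.first/1 is total: it returns nil for the empty list.
List.first([])       # => nil
```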

Ok, setting aside my first and maybe most important point (that the difficulty is mostly one of getting started for new programmers, because of a lack of good tutorials, which seems to be changing), let’s attempt to answer some of your points directly:

Lenses exist because they allow you to decouple what you want to read/change from how it is stored, meaning that you do not have to hard-code your accessors (which in many other languages, you are forced to). In Elixir, we basically have lenses too: It’s called the Access behaviour.
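For the curious, a minimal sketch of that decoupling with Access: the path to the data is a plain value, separate from the data itself (the person map is the same example used later in this thread):

```elixir
person = %{name: "some_name", measurements: %{inseam: 42.8}}

# The path is just data, decoupled from how the value is stored:
path = [Access.key(:measurements), Access.key(:inseam)]

get_in(person, path)        # => 42.8
put_in(person, path, 6.28)  # returns an updated copy; `person` is unchanged
```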

To be honest, I have never found the need to be mindful about the implementation details of Haskell while writing my code, (other than a general sense that Haskell is non-strict in its evaluation, which is not really an implementation detail). The only times you’d dive into this, is when you want to get that last 20% of speed or memory-optimization, which you almost never need. (The root of all evil, etc.)

Ruby is a language that is easy, but definitely not simple: the objects (the proverbial ground you stand on) are shifting under your feet all the time. It is very hard to reason about a Ruby-based system, both for humans and for e.g. automated optimizers. This indeed means a lot of tests need to be written.

It is also the reason that I really like Ruby for writing prototypes, or short command-line-interface tools. But for the same reason, I dislike working with long-running Ruby (on Rails or otherwise) applications: It takes a lot of discipline to keep your code clean in such an unrestricted programming context.

As for Java: Java definitely is somewhere lower on my poset of liked programming languages, mostly because of Java’s approach to Growing a Language, and because of its enforced ‘static’ typing.
As one of my colleagues often says: Java is not so much a programming language, as it is a platform to build on top of, because it has such a large ecosystem.

Yes, new Java versions definitely make an attempt at making it a better language :slight_smile: .

Let it be known that Elm is a fully pure language. (Especially since Elm 0.19, since it is now impossible to write your own Native modules)
I still see Elm as ‘Haskell-lite’, and that is meant in the most affectionate way. Because it did not have to care about being backwards-compatible with Haskell’s existing things, it was able to rename a lot of things, and make some different design decisions.

This point is wrong for two reasons:

  1. It has never been said that Elm will not ever support typeclasses. It is definitely true that they are not there right now, but depending on how Elm as a language develops, they might still be added. Evan Czaplicki is more concerned with keeping Elm simple (frequently removing features from the language, which is still possible in its current pre-1.0 phase). It might very well turn out that typeclasses would greatly simplify some things. But there is definitely a lot of other stuff to implement first (and building the compiler logic that allows typeclasses to exist, well, that definitely is hard :sweat_smile:).
  2. Elixir has typeclasses. They are called Protocols.
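For readers who haven’t seen them, a minimal protocol sketch (Describable is a made-up name; whether this counts as a full typeclass is debated below, but it shows the per-type dispatch):

```elixir
defprotocol Describable do
  def describe(value)
end

# One implementation per data type, defined separately from the protocol:
defimpl Describable, for: Integer do
  def describe(n), do: "the integer #{n}"
end

defimpl Describable, for: List do
  def describe(l), do: "a list of #{length(l)} elements"
end

Describable.describe(42)      # => "the integer 42"
Describable.describe([1, 2])  # => "a list of 2 elements"
```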

With all that said, though, I am currently able to work professionally with Elixir (and Elm), whereas Haskell hasn’t found a place in my professional workspace yet.
But this is probably because we are mostly a web-development shop.
The reason then that we use Elixir, is because it is a very pragmatic language (as is Erlang; which, as Joe Armstrong has repeatedly said, only ended up as a functional language ‘by accident’ because it was the most practical solution for the problem that Ericsson had at the time).

In the end, all programming languages are a tool. Programming in Haskell (or in any language) is not the goal, but a means to build something truly marvellous.


“Naaah,” I respond, digging into my own box of smelly opinions … :wink:

Haskell has great ideas but years and years of it being out there in the wild shows that it is more complex than is useful for nearly all developers working nearly all problems. This is not necessarily a failure, it is just a choice, and I do not think any amount of documentation will change the impact of those design decisions. Haskell is about the language, not the application of it. The designers of the language seem to understand this, fwiw, and are similarly ok with it.

I agree with the OP that Elixir (and, for that matter, for its day, Erlang) strikes a far more pragmatic road to walk down as a developer. That Elixir emerged with a few lighthouse frameworks (Phoenix, Nerves) from rather early on shows the application-not-the-language emphasis. This is also not entirely a good thing, but it does make it usually more practical to use.

Which is probably the biggest thing Haskell will end up giving to the world. Microsoft language designers pay a lot of attention to Haskell, it seems, for instance.

Probably just because Haskell is a poorer fit most of the time, and the rest of the time languages already in use for the other cases are acceptably not-bad-at-that-problem and so get selected even then.

I see a lot of non-web companies around here that are using “non-traditional” languages, but very few are using Haskell … even though the languages they choose are often either FP or draw heavily from FP ideas. shrug

LOL, yeah, I know that feeling … :slight_smile:


FWIW, this also sounds an awful lot like Ecto Changesets too. I realize that’s a library-level concept, not a core language feature, but the parallel came to mind.

I’ve used both languages extensively for five years or so now. My personal take is that development in Haskell is more pleasant when all else is equal, but frequently all else is not equal, and I think most development teams would be better served by Elixir just from the standpoint of hiring and training developers. For personal projects I usually reach for Haskell first but not always.

The thing about lens is, it is awesome and not really that hard to learn, but its documentation is terrifying. I saw this comment the other day that explains it so much better. It can feel overwhelming until you’ve developed the intuitions about how to use the various functions, but you only need a few. Once you’ve mastered it though you have a tool that has less syntactic overhead compared to put_in/get_in, plus rigorous type-safety.


Yep! I’ve been using Haskell well over a year in production but still can’t understand most r/haskell posts.

Haskell is the programming language equivalent of the fable about the king with the mouse problem who solved it by getting cats. But then he had a cat problem, which he solved by getting dogs. Etc., etc.


Just as a note, have you tried OCaml? It has Haskell’s power but is more Python-y in syntax (even 95% of type declarations are ‘optional’, yet still inferred). The whole ML family is also much older than Haskell; OCaml is just the latest direct incarnation of it, and even then it’s over 20 years old. ^.^

Otherwise yeah, what other people said here too.

Plus Kotlin is basically eating Java, it’s already done so in the android world…

Also, Elm does have typeclasses: its number type is one such incarnation of them. They are just not user-definable typeclasses, but it does have them (with quite a surprising number of bugs popping up around its incarnation of them).

Eh, protocols are similar conceptually, but they are not typeclasses: they require a value to dispatch on, so they are dispatchers, not typeclasses, which can work purely on types without values.

Lenses are a library in almost every language that has them. Also Ecto Changesets are nothing like Lenses. A ‘lens’ in the Elixir world would be something like:

defmodule Lens do
  defstruct [get: nil, set: nil]

  # Compose an outer lens with an inner one: the result reads and
  # writes through the outer lens first, then the inner one.
  def compose(%Lens{get: outer_get, set: outer_set}, %Lens{get: inner_get, set: inner_set}) do
    %Lens{
      get: &inner_get.(outer_get.(&1)),
      set: &outer_set.(&1, inner_set.(outer_get.(&1), &2))
    }
  end
end

Now let’s have an object setup like:

person = %{name: "some_name", measurements: %{inseam: 42.8}}

We want some lenses to access both the inseam and the measurement (I’ll ignore name for now):

lens_person_measurements = %Lens{
  get: & &1.measurements,
  set: & %{&1 | measurements: &2}
}

lens_measurements_inseam = %Lens{
  get: & &1.inseam,
  set: & %{&1 | inseam: &2}
}

Now that we have the lenses set up to access each of those values, we can combine them to create a thing that accesses the entire depth in a single operation:

lens_person_measurements_inseam = Lens.compose(lens_person_measurements, lens_measurements_inseam)

Which we can now use like:

iex(11)> lens_person_measurements_inseam.get.(person)
42.8
iex(12)> new_person = lens_person_measurements_inseam.set.(person, 6.28)
%{measurements: %{inseam: 6.28}, name: "some_name"}
iex(13)> lens_person_measurements_inseam.get.(new_person)
6.28

Now, with a statically typed language you can build the Lens’s automatically, like what Elm does with ports, or what OCaml does with PPXs, but with a dynamically typed language you can’t. However, Elixir’s Access behaviour can still let you specify them manually and more concisely, for example for the same person object:

iex(14)> access_person_measurements_inseam = [:measurements, :inseam]
[:measurements, :inseam]
iex(15)> get_in(person, access_person_measurements_inseam)
42.8
iex(16)> new_person = put_in(person, access_person_measurements_inseam, 6.28)
%{measurements: %{inseam: 6.28}, name: "some_name"}
iex(17)> get_in(new_person, access_person_measurements_inseam)
6.28

Now admittedly a Lens can do a lot more than just ‘simple lookups’: it can do arbitrary set/get (which you can compose into ‘update’ as well), whether on a map, list, tuple, a database call, or any combination thereof. The ‘user’ of the lens doesn’t know or care what it is actually doing, and it remains entirely type-safe. :slight_smile:

EDIT: As an aside, I had a beautiful-to-use Lens library in Elixir before the most recent OTP version killed off tuple calls, to my great annoyance… And no, adding a compile option is not sufficient, as most people just wouldn’t do that… >.<

Thus you are left with the ugly mess of blah.get/set.(..) calls everywhere… >.<


To me, it looks like basic lenses simply bring the standard Elixir map/struct accessors to Haskell. So it’s hard for me to get so super-excited about them.


Two things:

First, Haskell already has record/dict accessors, so that’s meh.

And Second, you missed a big thing that I said:

Unlike Elixir’s Access, which is built only for structs/maps/lists/tuples, a proper Lens library can handle any kind of get/set operation, even databases, environment, anything, all with the same interface and API without needing to know anything about what you are accessing.
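To illustrate that point, here is a sketch of a ‘lens’ (written as a plain map of functions so this snippet stands alone, rather than using the Lens struct from earlier) whose get/set talk to an Agent process instead of a plain map; the caller uses exactly the same interface either way:

```elixir
# The "storage" is a live process, not a data structure.
{:ok, agent} = Agent.start_link(fn -> %{inseam: 42.8} end)

# Same get/set shape as a map-backed lens, but backed by the process:
lens_agent_inseam = %{
  get: fn pid -> Agent.get(pid, & &1.inseam) end,
  set: fn pid, v ->
    Agent.update(pid, &%{&1 | inseam: v})
    pid
  end
}

lens_agent_inseam.get.(agent)        # => 42.8
lens_agent_inseam.set.(agent, 6.28)
lens_agent_inseam.get.(agent)        # => 6.28
```

The caller never learns whether it is reading a map, a process, or (in principle) a database row.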

But as you can see, you can still write Lens’s in Elixir, just a bit more verbose now without tuple calls. ^.^;


The problem with number is related to Elm pretending to have arbitrary-size integers while secretly still using JS’s built-in IEEE-like floats underneath. I would not call number a typeclass because it is not parametrized (it is still of kind * and not * -> * or higher): a type cannot be both number as well as something else.

But yeah, I am unsure what to call number if not a typeclass. My point was, in any case, that it is not possible to define your own typeclasses (nor instantiate them for your custom types), which means that you need to maintain a large amount of boilerplate code manually. And yes, typeclasses are not a first-class feature, which means that you should not over-use them even in languages that have them, but they are still really expressive.

Haskell’s typeclasses also need to haul around a vtable (a dictionary) if you call a typeclass function in a way where more than one instance could be reached at the same time. Typeclasses are only able to work on ‘just types’ and be compiled away in a compiled language, and then only sometimes. The same is true for Rust’s traits.

That said, no, the comparison is not 1:1. But that was also not my point:

My point, comparing Elixir Protocols to Haskell Typeclasses, is that having a feature that somewhat looks like Typeclasses does not make your language devolve into a ‘recursively descending rabbit hole of complexity’ right away, but that it is rather a very expressive and widely-used and accepted feature of the language.

When comparing Haskell with OCaml I do want to point out for people that do not know, that OCaml is not pure (and AFAIK has no way to enforce purity), which might matter to you.


In what sense?

As for your original post, how did you learn? Did your company provide internal training, books, etc.? I’m curious mostly because you say you’ve now used it for a year in production but you don’t seem at all comfortable with it and you seem to have almost no concrete points against it. I would expect specifics from someone who’s sat down and actually used it.


I’ll add that OCaml’s guiding light has never been ergonomics and you can feel it when you use it.

Tons of the features that make Haskell compilation slower (solved by using ghcid, by the way, for anyone who hasn’t used Haskell ever / in a long time) are there to make your development better. Even the simple things like free ordering of definitions, partial application of type constructors without extra libs, top-level pattern matching, etc… It’s death by a thousand cuts even before you get to inevitably missing typeclasses and other big features.

As a general comment on a lot of things in threads like these that toss around “practical” and “pragmatic”:

It’s an interesting sort of accepted way to argue in favor of less rigorous methods and languages. Some would argue that having more static verification is more practical and pragmatic, but these words are almost always used as “Well, those are not the bugs that happen in my code [read: that I remember]”. My reality looks a lot different. There’s very little that’s practical about not having more static guarantees and I miss it everywhere I don’t have it.

It’s a matter of degree too. When I saw what people could do with dependent types I wanted them immediately. So I tried Idris and just didn’t get how to use them. I later tried again and got just a little bit closer to getting it. I’m not “with it” enough to sit down and think in dependent types, but I definitely want to be and I definitely will be. Because it just adds more static guarantees, even on the value level (by proxy of being able to have the value level in the type level).

Let’s face it: Humans are the best at programming but still extremely bad at it. This puts us in the position that we pretty much have to do it, because even while we’re terrible at it we still don’t have good stand-ins that can take our place. Having the machines help us along the way seems the most sensible way forward to me.


@gon782 I recently came across the following talk:

which was very interesting, since it brings up some important points about the Haskell development environment that could be improved;
some quick examples being:

  • The learning cliff.
  • There are too many libraries, so it is difficult to find good/maintained libraries that do what you want amid the cruft.
  • The Haskell community is divided about a couple of core issues, such as if it is better to manage your dependencies (and even ‘how to install Haskell’) using Cabal or Stack.
  • Some libraries make odd choices of when to use partial functions, and when this inevitably does bring down your application, the error messages are extremely cryptic and there usually isn’t a stacktrace at all.

Besides this, the talk also covers web-development-specific things, where their team had issues changing their Rails mindset to a Haskell one. (And even though it has been a year, in my personal opinion, some of that Rails mindset still shines through in what they attempt to do in Haskell, but most of their points are very valid and definitely interesting to hear.)

I really like what you’ve written in your latest post, by the way. I couldn’t have said it better :slight_smile: .


I like the talk and I can’t claim to have gone through the same thing as they have in terms of scale at all. DB stuff in particular I think is something I feel spoiled with in terms of Ecto.

While there are things I dislike about Haskell the language (records, etc.) they’re fairly few, because I think as a language there really isn’t a general purpose one that can rival it. The real issue is that I’ve always felt that I needed a Haskell mentor that could answer all of these peripheral questions (many of which are brought up in the talk). There’s only one other language I can think of where a mentor is essentially required because you have to really be one with the zeitgeist/community/current best choice and it’s C++.

I’ve found posts like this one really good:

To some extent it’s about someone making a choice for you and letting you move forward and you trusting that that choice is good enough. In certain areas we might only have bad choices and that’s another issue, but often there will be a best choice somewhere and without someone to tell you you’re just fumbling in the dark.

One of the reasons I want a Haskell job is because I want a Haskell senior to tell me all of these things so that I, as a “Haskell for fun” user can unlock all of these secrets I imagine are out there. In contrast, I learned Elixir pretty much on my own and have managed to find my way into a very productive situation both in terms of output and financials (granted, this is really a function of the contact network I established from working with Erlang).

So yeah, I agree that there are issues, but they’re not usually about the language. That makes it almost infinitely more frustrating, because you know that they’re avoidable and solvable, yet they remain. The divide between cabal (+ nix…) and stack is perhaps the most obvious one because it hits you the moment you decide to start (and also come back if you’ve been away for a while). It has much bigger consequences than you’d imagine at first as well because there are things that plain just won’t work for one or the other.

Edit: A special note about the editor integration, etc.:

A while ago I stopped using HIE and went back to basics: I started using only ghcid (+ a separate ghci process) and Hole Driven Development (this video shows basically zero initial knowledge with holes driving the implementation entirely, which you’d rarely actually do, though it shows the power of it as a guide and the experience of doing it, as mentioned in the video, is actually better now). The compiler can answer a lot of questions for you and the dev cycle you get with ghcid telling you stuff like this on save gets you extremely far.

If you feel like it, you can use ghcid together with editors to have them display the same information in-editor, but I’ve found that I just really never run GhcidStart in neovim anymore.

This becomes even better with valid function suggestions for holes, with which you’ll get actual suggestions for functions to use based on the type of the hole. This’ll increase the discoverability of library functions a lot and it’s something GHC provides on its own, so it’s usable from the most basic tooling.


A type being able to fulfill multiple interfaces is not part of typeclasses but rather part of witness collapsing (remember that typeclasses are just less powerful witnesses), which conceptually is like passing the same thing into multiple arguments of different types, so that can still be emulated in Elm. :slight_smile:

You ‘can’ still do witnesses in Elm, not as succinct as in other languages, but it is still entirely doable, so you can still get typeclass functionality, just with more writing. :slight_smile:

Yep, that vtable ‘is’ an incarnation of the global witness table, though optimally, in a full witness system (the way OCaml does it), the global witness table would not exist and all calls would be statically known, with no virtual lookups needed. You can still ‘bake’ a global witness table in OCaml if absolutely necessary; both the module system and the object system can be used for that (or GADTs, if you know all types ahead of time).

+1 I don’t get Elm’s reluctance to add exceedingly useful QoL things. It reminds me of Go: trying to be as low-level as possible, which makes both easy to use, but very, very boring, hard to read, and brittle (in logic).

Indeed! OCaml tries to be as pure as Erlang, which is to say: purity where it matters, dropping it (but safely) where that lets you get ‘real work done efficiently’. :slight_smile:

It’s designed to be unambiguous and easy to parse, which it is: there are never any questions about what a construct is at any point, unlike Haskell and Elixir and so forth.

OCaml supports this with interfaces fine; it’s just not the default, because OCaml relies on the ordering of definitions allowing you to override the visibility of prior ones. This lets you do something like have a module extend another module and change and/or add functionality to it (or remove it), which is not something you can do in Haskell, and this has fantastic boons for keeping code short and reusable.

That’s because Haskell generates functions for the heads, which pollutes the function namespace. I prefer OCaml’s style of opt-in function generation by far, especially as I don’t use it most of the time (which means I can make head names far more generic and readable).

Uh, doesn’t OCaml have quite a lot of pattern matching? A common test would be something like this (at the top level):

let 4 = 2 + 2 (* Test that addition works as expected *)

You can even destructure, match, even include guards and all via the match or function calls:

let blah = function
| 1 -> "One"
| 2 -> "Nope"
| x when x<0 -> "ACK!  Too low!  But I have a guard!"
| _ -> "Higher positive"

(* match is identical syntax except you pass in the initial value like `match value with ...` *)

So I’m not seeing these issues at all… I find that I’m as productive in OCaml as I am in Python (and indeed even with many libraries, except all type-safe), and I definitely would not be able to say that about Haskell. I am curious where you got the above thoughts, though, as they seem to lack the reasoning from the other side as to ‘why’ those decisions were made (for example, OCaml doesn’t lack typeclasses ‘just because’; rather, it has a significantly more powerful capability that is slightly more verbose, and even the verbosity will vanish once a certain PR is finished up), or are just outright wrong (like matching: a big thing about OCaml is destructuring matching with guards, a la Erlang/Elixir).
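For comparison, a sketch of the Erlang/Elixir analogue of that OCaml function: multiple heads plus a guard (Blah is just a placeholder name, matching the OCaml example above):

```elixir
defmodule Blah do
  # Clauses are tried top to bottom; the `when` guard works
  # just like OCaml's `when`.
  def blah(1), do: "One"
  def blah(2), do: "Nope"
  def blah(x) when x < 0, do: "ACK!  Too low!  But I have a guard!"
  def blah(_), do: "Higher positive"
end

Blah.blah(-3)  # => "ACK!  Too low!  But I have a guard!"
Blah.blah(7)   # => "Higher positive"
```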


No, I think I’m fairly certain why most of the QoL things that aren’t in ocaml are missing and I even gave one of those in the post.

You mean the thing that never ever ships, just like multicore? It’s the Duke Nukem of language features at this point.

I like ocaml and I understand what it’s doing, I’ve even praised the singular dedication they’ve shown in never implementing anything that doesn’t fit into those ideas, but it just isn’t as concerned with QoL as I’d like it to be.

About top-level pattern matching: it’s a nitpick, but I’d just prefer split clauses with pattern matching right on the parameters. I didn’t use to care, but at this point I just think it’s unnecessary not to have it.

I’ll reiterate what I’ve said elsewhere: I think OCaml is about as much power as you can get with the least amount of tools, because when stuff is added it fits extremely well. I’d just rather they lowered the power-to-feature ratio in favor of comfort a bit. But it’s not in their philosophy to do that and that’s fine. We still use OCaml for some of our projects and I’m very happy about it because it beats Javascript by heaps.


Except they aren’t missing. OCaml has overriding in lieu of free ordering, or you can just use and instead of let for all top-level definitions except the first, and then you get free ordering; Haskell’s definitions are as if they were in an implicit and chain with no way to break out. OCaml doesn’t like to pollute the function namespace, as that makes ‘return’ type deduction not what one would expect, unlike what Haskell does (i.e. in OCaml it is opt-in). And OCaml very much has pattern matching that is near the power of Erlang’s; I’m not actually sure whether Haskell even has the capability of guards on pattern matching?

Not in mainline, but they are in branches. Unlike Haskell, which likes to break things on every single version, OCaml actually takes backwards compatibility seriously; they don’t just immediately toss in everything that is handed to them, but rather test it extensively, including against existing code-bases, to make sure it introduces no issues and will properly cover everything it is intended to cover into the unbounded future. This is very much unlike Haskell, which has 40 ways of doing things, 3 string types in core alone, and kludge upon kludge just to get its type system working.

They are very concerned with QoL, not just with immediate usage but also with maintainability and for people to not worry that their code will just outright break on later versions, unlike Haskell.

A PPX can change that, but that seems like a lot of needless verbosity when a simple | indicates a new head of the same function: nicely readable and perfectly consistent with match expressions, unlike Haskell, which uses entirely different syntax between its function heads and case.

I’m curious how that is considered ‘the least amount of tools’. Everything you can do with functors and modules you can do via flat record packing/unpacking (with a whole TON more verbosity), you get row typing via objects (which are just row-typed records), you have polymorphic variants (also entirely sugar over a normal, more verbose tag system), among a great deal more. OCaml is designed for ease of use and ease of reading (where Haskell often looks like line noise, and runs slower to boot…). There are no surprise slow lazy calls (laziness is opt-in via lazy, thus only where you need it, whereas Haskell is opt-out), it compiles significantly faster, it has a single package-manager system, and it doesn’t have an ambiguous syntax that relies on whitespace to try to disambiguate (at which Haskell still fails at times), which makes copying code actually sensible; it has the ability to extend modules, etc., etc.

However yes, OCaml’s parallelism story sucks; the Python model is not acceptable. But then again OCaml usually replaces my Python work, not my Erlang/Elixir or C++/Rust work. Use the right language in the right place and all. Haskell does have some cool concurrency models, but then again it is a move-fast-and-break-things language. And it’s not like OCaml is standing still: the current work is focused on making Windows a first-class citizen, which is no small undertaking.


Haskell does have guards on pattern matches:
f x | x < 0     = -1
    | x == 0    = 0
    | otherwise = 1

It’s just that I also prefer, if ever so slightly, to be able to do this:

update model (ServerMsg msg) = handleServerMessage model msg
update model (ClientMsg msg) = handleClientMessage model msg

instead of this

let update model = function
  | Server_msg msg -> handle_server_message model msg
  | Client_msg msg -> handle_client_message model msg

Much as the picture you’re painting is an exaggeration, I’ll grant that Haskell has at least gained new features since the millennium. For real, though, I don’t really see where you’re going with this, as breaking changes haven’t been something I’ve noticed in Haskell. If you mean new language features changing things, those live exclusively in language extensions (which OCaml should probably adopt, by the way, so new features can actually enter the language instead of languishing in repositories for 4+ years).

Modular implicits have been coming Soon™ for two or three years now. If ergonomics/QoL were a serious priority, they’d have shipped long ago; but I guess the safest way to ensure no breakage is to literally never release the feature.

Text & ByteString aren’t in core; they’re just packages. That’s either worse or better depending on who you ask, I suppose. If you mean it’s confusing for newbies: yeah, that’s what half of this thread is about at the moment, but luckily these have “best practices” now. If it’s text, just use Text; if it’s binary data that’s not meant to be treated as text, just use ByteString.
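That best practice in miniature, as a hedged sketch using the `text` and `bytestring` packages (the values here are just placeholders):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Text for human-readable text, ByteString for raw bytes; crossing the
-- boundary is always an explicit encode/decode, never implicit.
import qualified Data.ByteString as BS
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

greeting :: T.Text
greeting = T.toUpper "hello, world"

asBytes :: BS.ByteString
asBytes = TE.encodeUtf8 greeting

main :: IO ()
main = do
  putStrLn (T.unpack greeting)
  print (BS.length asBytes)
```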

… once the compiler has gotten its pound of flesh and something has absolutely no performance hit whatsoever?

Jokes aside, I’m not even saying OCaml is somehow much worse than every other language. All things considered it’s probably in my top 5 languages, maybe even top 3. It’s just that, in my opinion, you inevitably feel like you’ve gone back in time for a bit.

All in all, I just can’t agree that they’ve really considered ergonomics or put the programmer first in many of their choices. I love the features they have put in, but at some point obviously useful things like ad-hoc polymorphism just weren’t prioritized highly enough. At this rate Haskell will likely implement dependent types before OCaml releases modular implicits. And that’s work done by, as far as I know, one person, and it takes the language into the future rather than just catching up.


I actually prefer the latter; I even put a case inside a def in Elixir at times. It makes it obvious that the clauses all belong together as heads of the same function, unlike the style above and the usual Elixir/Erlang style, where you can’t always tell clauses are heads of the same function without mentally parsing the function’s name and arguments. I.e., it makes the grouping contextually obvious without any mental work, which is a huge boon when scanning through code quickly. :slight_smile:

OCaml core is pretty open to taking in new features as long as there is sufficient testing; they’re quite stringent on that. In Haskell I’m speaking more of old code breaking, not of new features.

Modular implicits are a fairly small thing overall: they save you from typing a single word on each witness-patterned function call. Not a huge boon, though a nice QoL improvement, mostly for witness-based operators (I tend to go for piping, though).
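In Haskell terms the trade-off looks roughly like this. A small sketch (the names are made up, not from any library) contrasting an explicit witness argument with the typeclass mechanism that modular implicits would approximate:

```haskell
-- Explicit witness: the extra argument that modular implicits would elide.
describeWith :: (a -> String) -> a -> String
describeWith showFn x = "value: " ++ showFn x

-- Implicit witness: the typeclass constraint lets the compiler pass
-- the dictionary for us.
describe :: Show a => a -> String
describe x = "value: " ++ show x

main :: IO ()
main = do
  putStrLn (describeWith show (42 :: Int))
  putStrLn (describe (42 :: Int))
```

The difference per call site really is just that one extra word, which matches the point above that it’s a QoL gain rather than a structural one.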

Most of my Haskell work ended ~5 years ago, so I haven’t kept up much since, apart from the occasional compiling and updating/fixing of something that broke under new versions of GHC. Still, its massive pile of type extensions hasn’t gotten any smaller by far, nor has its compile speed improved (it still rivals that of C++ template compilation when HKTs are used in Haskell).

Ooo? They have an actual design for dependent types coming out? I’d like to read up on that a bit. I’ve tried implementing dependent typing a few times in languages I’ve made, but all the corner cases start driving me crazy, and Idris is not quite mentally parseable… ^.^;

Overall, though, I still prefer languages that are slow to take in features, as long as the features are well designed. Take C++: I waited 12 years for C++11 to come out just to get something as trivial as auto. Even though I could get 90% of the way there with template magic and certain design patterns, auto was trivial for the compiler (it already had that information), yet it still took 12 years to arrive. OCaml seems to move very fast compared to what I’m used to. ^.^;

There’s been a lot of progress, and in the latest status update I think Richard Eisenberg said he’s hopeful for actual dependent types in GHC in 2019; but there’s no telling whether I’m up to date on that at all.

You can do the basics right now, essentially:
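A minimal sketch of those basics, using the DataKinds and GADTs extensions available in GHC today (the names here are illustrative, not from any particular library): a length-indexed vector whose type tracks its length, so taking the head of an empty vector is a compile-time error rather than a runtime one.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals, promoted to the kind level via DataKinds.
data Nat = Z | S Nat

-- A vector indexed by its length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Only accepts non-empty vectors: `vhead VNil` won't even typecheck.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))
```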

Yeah, fair point. Everything is relative.