The complexity of Haskell vs. Elixir's simplicity

language-implementation
haskell

#21

It’s a matter of where the complexity is placed.

In Elixir's case the language itself and the tooling are dead simple, but the BEAM runtime and ecosystem are very complex, and there's a lot of compile-time complexity.

OCaml has a very small runtime and is dead easy to parse. In terms of code reuse, functors go a long way if you abuse the sh*t out of them, even though they're not all that great appearance-wise and add a lot of complexity for the application developer compared to, say, type classes. MLI files make code reviews and documentation easy.

As for its concurrency story, honestly it shouldn't be compared to Python's: Lwt + libevent or uwt is actually more performant than NodeJS, and nobody talks about Node's GIL.

I'm going to use the aforementioned excuses that the Python people do, but honestly I just find it hypocritical, because nobody says this about Node.

Honestly, the main case where OCaml is actually as bad as people think in a multicore world is when you have a shared-state bottleneck: you're using a mutex and there are a lot of writes that actually take a long time. Node is in the same boat.

If we're talking C10K, it's on par with Node, and for data-parallel work, using fork, it's comparable to Go.

Don't even get me started on how great Mirage is in a parallel world, since you're just spawning a bajillion tiny unikernels that only use the resources they need.

The ecosystem is just a PITA because you're constantly reinventing the wheel, and there isn't a lot of documentation in the cooler projects; you're expected to just mess around with them, read the MLI files, and maybe send a message on the forum or email the maintainer.

Multicore is usable now for the most part, apart from a handful of backwards-compatibility bugs, and it has improved rapidly. The only things holding back widespread use are the lack of documentation and the fact that it's targeted at people who know how to, and really want to, implement schedulers.

TL;DR: Elixir is simpler.

Modular implicits can be used; they just haven't been integrated into the main version of the language, mostly because the community's attitude is "fuck that, I'll just use ppx rewriters for the time-saving parts, or the record system, or first-class modules to imitate whichever aspect of type classes I want", and it honestly prefers the modularity over the conveniences of type classes.

OCaml's problem isn't a lack of performance or language features. The community is dominated by fewer than a hundred systems hackers and compiler nerds who don't mind reinventing the wheel. Its weakness is a lack of empathy for normal application programmers, who don't want to read a lot of code they aren't writing, contribute to a handful of open-source projects, have a GitHub full of tiny libraries that maybe ten people use, or implement a truckload of common algorithms.

Erlang has a similar story, but it makes it so very easy to build cool shit.


#22

In Erlang's defense, it has tons of libraries for the stuff a telco programmer cares about: Megaco for telephony, ASN.1 for weird protocols, httpd/httpc for a simplistic webby interface, Diameter for authentication, SNMP for management, all built in.

We just happen to muck around in a very different space.


#23

Back at the point:

  1. Haskell is actually super simple. The basics are just sugared typed lambda calculus (actually sequent calculus, but that's one of those things I haven't looked into yet).

  2. That simplicity leads to rigor, which leads to people being able to do very complex things on solid foundations.

So at some level you can choose your poison in Haskell. No need to jump into the deep end.
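To make the "sugared lambda calculus" point concrete, here's a minimal sketch (all names are illustrative, not from any post above): named definitions and do-notation both desugar to plain lambdas, application, and `>>=`:

```haskell
-- Desugaring sketch: each pair below means the same thing.
double :: Int -> Int
double x = x * 2

-- A named equation is just sugar for a lambda bound to a name:
double' :: Int -> Int
double' = \x -> x * 2

-- do-notation is sugar for chained >>= calls:
greetDo :: Maybe String
greetDo = do
  name <- Just "world"
  pure ("hello " ++ name)

greetBind :: Maybe String
greetBind = Just "world" >>= \name -> pure ("hello " ++ name)

main :: IO ()
main = print (double 21, double' 21, greetDo, greetBind)
```

So most of the surface syntax is a thin layer over application and lambda, which is what makes the core feel small.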

Elixir, to me, can be incredibly complex when type checking isn't possible and something errors very deep in the stack.

Where Elixir wins against Haskell is the culture: great tooling and libs, and a big early focus on the web, too…


#24

Java is not nearly as expressive as Ruby, while Haskell is one of the most expressive languages.

Haskell is not complex at all. It just has some initial barrier. Fundamentally it's simpler than most languages.
Once you know how to deal with monads (you don't even need to understand them deeply), you can write a lot of useful things, just like in other functional programming languages.
Once you know more about typeclasses and monad transformers, you can write beautiful programs.
As for Template Haskell etc., it's not a hard requirement for the language. And referential transparency lessens the need for macros. (Think about the `if` macro in the Elixir tutorial: in Haskell it could actually be implemented as a function, thanks to the language's lazy nature.)
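That lazy-`if` point can be sketched in a few lines (the name `if'` is just illustrative; it's not in the Prelude):

```haskell
-- Because Haskell is lazy by default, a plain function can play the
-- role of Elixir's `if` macro: the branch not taken is never evaluated.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

main :: IO ()
main = do
  -- The untaken branch can even be `undefined`; it is never forced.
  putStrLn (if' True  "then-branch" undefined)
  putStrLn (if' False undefined     "else-branch")
```

In a strict language both branches would be evaluated before the call, so this has to be a macro or special form there.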

The great thing of Elixir is not only based on the language itself, but also the platform (Erlang/OTP), great tooling etc.

Compared to Haskell, Erlang/OTP makes Elixir a more practical language. There are projects like Cloud Haskell that could become an alternative to Erlang/OTP in the future, but they're far less mature right now.

Compared to Erlang itself, Elixir's tooling and documentation are far easier to learn and use.


#25

Oh absolutely, Node as well. OCaml has no JIT; it compiles to machine code, so given two languages with a GIL it should on average be faster for equivalent-quality code. I still can't wait for Multicore OCaml with no GIL to finally be merged in (piece by piece it is coming in… slowly…).

Yep: OCaml, NodeJS, Python, Crystal, Ruby, etc. With all of them you can of course fork out more processes, but a single process will only ever have one executing thread at a time (with many possible I/O threads, of course).

Heh, Mirage looks awesome. I've still yet to touch it, but it always looks so enticing when I run across it. MirageOS is a library OS for building unikernels, written mostly in OCaml and made to run OCaml in a massively parallel way, whether directly on bare metal, on a hypervisor, whatever. Think Nerves for the BEAM, but for OCaml, and even more low-level and faster, from my understanding.

Heh, there are quite a number of PPX’s that have basically become ‘Standards’ for projects nowadays I’ve been noticing… ^.^;

I've used so many of those weird protocols too! Using ASN.1 was a slight bit of a pain in Mix: I had to write my own Mix compiler just to call into Erlang's compiler, even though Erlang can already do it, because Mix's Erlang plugin ignores those definition files… >.<

Eh, not really. Take just one of its 'base' features, HKTs: those are way complicated to compile and even more amazingly complex to compile efficiently (which Haskell does not). Haskell is a very non-simple language, from choosing rather inefficient FP constructs as defaults (HKTs, lazy-first, etc.) to horrible syntactic choices like significant whitespace… >.>
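For readers unfamiliar with the term, a minimal illustration of what "HKTs" (higher-kinded types) means here: abstracting over a type constructor `f` rather than a concrete type (the class and function names below are made up for the example):

```haskell
-- `f` is a higher-kinded type variable of kind * -> *, so instances
-- are type constructors like Maybe or [], not concrete types like Int.
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)

instance MyFunctor [] where
  myFmap = map

main :: IO ()
main = do
  print (myFmap (+1) (Just 41))  -- Just 42
  print (myFmap (*2) [1, 2, 3])  -- [2,4,6]
```

This is the feature the post is calling expensive to compile efficiently: the same code has to work over any such `f`.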

And then you get into 'fun' stuff like the GHC compiler extensions that just about every Haskell program uses with impunity nowadays, some of which are not quite 'type safe'.

Yeah I agree here, types in a language make debugging SO much easier. Debugging in Elixir can be an absolute royal pain at times, debuggers and tracing or not.

If it were simple, then you wouldn't need non-standard GHC language extensions to do some kinds of work (which, as an example, OCaml does not need).

Harder to parse, harder to implement, harder to reuse code because of significant whitespace, lack of built-in code reuse à la OCaml functors, etc., etc., etc. Doesn't really seem 'not complex'?

Monads aren't just a Haskell thing; they're a concept that 'most' languages use with impunity these days, even if they don't call them that.
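As a concrete instance of that claim: the Maybe monad is essentially the same pattern as optional chaining or nullable propagation in other languages, just named and abstracted (the lookup tables here are made-up example data):

```haskell
-- Chaining computations that may fail, without nested null checks.
-- `lookup` is from the Prelude; it returns Nothing on a missing key.
userEmail :: String -> Maybe String
userEmail name = do
  uid   <- lookup name [("alice", 1 :: Int)]
  email <- lookup uid  [(1, "alice@example.com")]
  pure email

main :: IO ()
main = do
  print (userEmail "alice")  -- Just "alice@example.com"
  print (userEmail "bob")    -- Nothing
```

The `do` block short-circuits on the first `Nothing`, which is exactly what `?.` chains or early null returns do elsewhere.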

Except that, depending on the type of what you pass in, sometimes it wouldn't be lazy, and you wouldn't be able to tell at the function definition site (without using compiler extensions). :wink:


#26

All valid points.

There is definitely a niche market for some enterprising Jose-Valim-type to clean Haskell up a lot, and make it look like Ruby :stuck_out_tongue:

The big picture remains. There are pitfalls, but it’s not the crazy beast ppl make it out to be


#27

Which is perfect, IMO. As an end user (a polyglot app developer) I have no concept of the BEAM’s complexity and I don’t need to.

Cf. Haskell, Ruby, and git, which all but require a firm knowledge of the implementation details. For me, that’s important evidence of poor or unfinished design.


#28

If only the basics were sufficient, I’d agree with you. :wink:

IMO, Practical Haskell is a morass of hacks and implementation leaks. Consider just a simple function invocation:

m n o p q

That syntax, which doesn't differentiate between the function and the operands, is not a product of top-down design, but rather of bottom-up implementation leakage (the language being based on simple curried functions).

Coding in Haskell often feels like JVM Bytecode - a syntax defined according to the needs of the underlying machine.

But my needs are different from the machine's. I think at a higher level of abstraction: no files, relative paths, etc. Instead, interfaces, abstractions, classes, and objects (if OO). Smalltalk got this.


#29

Funny that’s how I felt about Rust when I tried learning it this year! I enjoyed some initial experimentation after reading a good amount but when I started poking around pull requests, issues, RFCs and community posts, I got so confused and frankly intimidated by how much I couldn’t understand that I decided to drop it.

Elixir really is amazing in this way. It doesn’t take long to learn because the core of the language is just so simple. Yes there are a few small warts which take some learning but the language design itself is so easy to wrap your head around. Looking over the issues and pull requests on the Elixir GitHub, I can understand everything, and that is a powerful, and rewarding feeling. One that I think is undervalued in today’s world of many languages.


#30

I’m completely with you. I have the same experience with the Haskell subreddit, r/haskell. I can’t understand most posts - they’re meaningless to me. Yet I’ve been writing apps with Haskell for years.


#31

But that’s what I love about it…

Arguments and functions are somehow fungible.

Larger abstractions (e.g. Elixir behaviours) are just records with lambdas…
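A minimal sketch of that "records with lambdas" idea in Haskell terms (the `Logger` interface here is hypothetical, invented for the example):

```haskell
-- A behaviour-like abstraction as a plain record of lambdas:
-- any value of type Logger is an "implementation" of the interface.
data Logger = Logger
  { logInfo  :: String -> String
  , logError :: String -> String
  }

-- One concrete implementation; swapping in another is just
-- constructing a different record value.
plainLogger :: Logger
plainLogger = Logger
  { logInfo  = \msg -> "[info] "  ++ msg
  , logError = \msg -> "[error] " ++ msg
  }

main :: IO ()
main = do
  putStrLn (logInfo  plainLogger "started")
  putStrLn (logError plainLogger "oops")
```

No class hierarchy needed: the record is the interface and the value is the implementation.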

Etc. etc.

Maybe there’s a better balance to strike though.


#32

Sure it differentiates them fine: the first token is the function, the rest are the arguments. That is not implementation leakage but rather getting rid of superfluous things like commas, which are just noise in 'getting work done'. A lot of languages use this pattern of function calls, including ones without automatic currying.
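Concretely, application is left-associative, so `m n o p q` parses as `(((m n) o) p) q`, and partial application falls out of currying for free (the functions below are placeholders, not from the thread):

```haskell
-- Application is left-associative: `add3 1 2 3` is `((add3 1) 2) 3`.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Supplying only some arguments yields a new function, no extra
-- syntax required:
addTen :: Int -> Int -> Int
addTen = add3 10

main :: IO ()
main = do
  print (add3 1 2 3)      -- 6
  print (((add3 1) 2) 3)  -- same call, fully parenthesized
  print (addTen 5 7)      -- 22
```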

Eh, not really. I'd say it more feels like it wraps the type system in most cases, which is a bit of a mixture of sometimes-conflicting ideas (hence why I prefer OCaml's type system).

Interfaces/abstractions are something Haskell does quite well (though not really in my preferred way). Classes/objects from the traditional OOP world (not true OOP as it was originally defined, though Smalltalk does this better than most) are useful in some cases, but they are also way, WAY overused in places where they're just not suited. That's when you start running into conceptual hell (being unable to follow the code flow because poorly designed abstractions obscure which path code can take through the object calls) and the thousand-paper-cuts-of-death (OOP-style abstractions are hell to optimize, so you end up doing a lot of indirect function calls on the machine, which is death for performance-sensitive programs that overuse virtual calls).

Actually, BEAM behaviours are more like typed module interfaces. Records of lambdas worked really well with tuple calls, which were removed just recently; they're a great, common OCaml idiom that is trivially optimizable but utterly fantastic at abstraction (unlike OOP abstractions).


#33

Sorry, but I couldn't disagree more with this comment: there are few languages less "hacky" and with fewer leaky abstractions than Haskell. Syntax-wise, the ML family of PLs is hard to beat, as their designers reached a global optimum balancing minimalism and usability. Just take a look at the BNF definition of Haskell 98 and compare it to any other language :slight_smile:

Of course, all these conversations are mostly based on personal opinions… but I don't think your example provides a well-founded criticism of Haskell, at least from a language-design perspective. When Milner designed ML in the mid-70s he wanted, for various good reasons, to have both the lambda-calculus notation for function application and infix notation for standard algebraic expressions and boolean relations. This was a deliberate decision that worked fantastically well. Then, to allow extensibility, its descendants included ways to define your own infix functions, in Haskell's case with some constraints on which characters are allowed in their names. Back to your example: if o were an operator, it would be spelled something like +, or &&& for a user-defined one, making the operands quite evident. There is no need to add any additional tokens to the grammar rules.