Learning Elixir, first impressions ( plz don't kill me ! )

OTP is the core framework and main value proposition for both Elixir and Erlang. It’s so core it’s part of the kernel. Without OTP Elixir probably doesn’t exist and Erlang is just a weird Prolog variant nobody remembers. It is to Elixir what the DOM is to JavaScript.

On the sidebar: languages have their purposes, and being a reference design for a pure functional language is not one of Elixir’s. Characterizing Elixir’s feature gap relative to Haskell as a negative or a con is misguided and a disservice. Both are functional. Haskell exists to be purely functional at the expense of pragmatism. Elixir exists to be robust, fault-tolerant and concurrent at the expense of purity. Being a functional language is a means to an end in Elixir and Erlang; it’s not the reason they exist, unlike Haskell. Some of the things you list as negatives are impossible without a static, strictly enforced type system. Elixir’s types are dynamic, checked at runtime rather than enforced by a compiler.

2 Likes

Yep, took it. I am now reading an Elixir book ( Which book to read? - #7 by peerreynders ) and I am starting to think that Erlang / Elixir might be what I have been looking for my whole life as a backend developer but never knew existed.

And now that I have made my peace with Erlang / Elixir and see it for what it truly is ( a concurrency-oriented programming language with some FP features ) I am rather excited :smiley:

This is mainly thanks to Python’s prominence in the field of AI, a battle Ruby lost long ago.
Also, nice list, thanks !

This is exactly how I felt. Still, now I want to learn Elixir’s strengths and I think I will love them. I have already started reading a book and it sure sounds convincing!

As I have stated before, I now understand that my expectations played quite a role here. Now I understand Elixir’s main purpose and I am already on the learning wagon :smiley:

3 Likes

This isn’t quite right. Erlang and Elixir are definitely functional languages. Not inspired by, not functional features, but fully functional. There are no loops, no mutable variables, no “classes” or mutable state of any kind. Most imperative techniques are simply not applicable. It isn’t pure, but neither are the Lisps, and it’s more functional than most of them.

5 Likes

Actually:

4 Likes

Well, let’s just agree to disagree then. No worries, we can still be friends :stuck_out_tongue:

Now this is an interesting turn of events :smiley:

There is no disagreement. I’m simply explaining how these terms are used in computer science and in industry. If you want to use them differently after knowing that, it’s fine with me.

6 Likes

The with keyword, used with pattern matching on success and error tagged tuples (e.g. {:ok, value} and {:error, error}), could be considered Elixir’s built-in equivalent to the Maybe monad.

That would be a Result monad (a Maybe would be value | nil).
with is quite a long way from a Result monad.

OK is closer, because it forces the use of only {:ok, value} | {:error, reason}. It’s what I use, and at this point I don’t feel like I’m missing anything more. However, all of its checks happen at runtime, so if @Fl4m3Ph03n1x is familiar with Result types in other languages he might still not regard it as a solution. Combining OK with Dialyzer is even better, but it’s still a compromise. In the end I have made good use of my functional programming knowledge to work in Elixir, but when I miss stuff from other functional languages I just have to remind myself how helpful the process model is.
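For anyone following along, here is a minimal sketch of the pattern being discussed: each step returns an {:ok, value} | {:error, reason} tagged tuple, and with chains the happy path. The module and function names are made up for illustration:

```elixir
defmodule Pipeline do
  # Each step returns {:ok, value} | {:error, reason}.
  def parse(s) do
    case Integer.parse(s) do
      {n, ""} -> {:ok, n}
      _ -> {:error, :not_an_integer}
    end
  end

  def validate(n) when n > 0, do: {:ok, n}
  def validate(_), do: {:error, :not_positive}

  # `with` chains the success cases; any non-matching value
  # falls through to the else clauses (the "early out").
  def run(input) do
    with {:ok, n} <- parse(input),
         {:ok, n} <- validate(n) do
      {:ok, n * 2}
    else
      {:error, reason} -> {:error, reason}
    end
  end
end

Pipeline.run("21")   # => {:ok, 42}
Pipeline.run("abc")  # => {:error, :not_an_integer}
Pipeline.run("-5")   # => {:error, :not_positive}
```

Note that nothing stops a step from returning some third shape of tuple; that lack of enforcement is exactly the gap OK and Dialyzer try to close.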

2 Likes

The good thing about Elixir (and Erlang) lies in the concurrency - it’s all about creating parallel processes and sending messages - and this (as Alan Kay has pointed out on numerous occasions) is the essence of OO programming.

OO programming is all about objects. Objects are things that respond to messages (or should be) - to get an object to do something you send it a message - how it does it is totally irrelevant - think of objects as black boxes, to get them to do something you send them a message, they reply by sending a message back.

How they work is irrelevant - whether the code in the black box is functional or imperative is irrelevant - all that is important is that they do what they are supposed to do.

Unfortunately the first big OO language based on this model (Smalltalk) talked about objects and messages, but messages in Smalltalk were not real messages - they were disguised synchronous function calls. This mistake was repeated in C++ and Java, and the “idea” of OO programming morphed into some weird idea that OO programming had something to do with the organisation of code into classes and methods.

Erlang and Elixir support the lightweight creation of millions of isolated processes - everything works by messaging between processes - the design of a system involves observing the concurrency you want in your application and mapping it onto processes.
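As a rough sketch of what "mapping concurrency onto processes" looks like in practice (the numbers and the squaring work are arbitrary placeholders):

```elixir
# Spawn one lightweight process per unit of work; each computes
# in isolation and reports back with a message.
parent = self()

for id <- 1..1000 do
  spawn(fn ->
    send(parent, {:result, id, id * id})
  end)
end

# Collect the replies; mailbox arrival order may differ from
# spawn order, which is fine - each message is self-describing.
results =
  for _ <- 1..1000 do
    receive do
      {:result, id, sq} -> {id, sq}
    end
  end

length(results)  # => 1000
```

A thousand processes here is nothing; the BEAM comfortably runs millions, which is what makes "one process per connection/user/entity" a reasonable default design.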

A web server in Elixir for 10,000 users is not “one web server with 10,000 users” (like Apache or Jigsaw or all the rest), it’s “10,000 web servers with one user each” - this is a radical departure from conventional practice.

The fact that Erlang/Elixir processes are described in a simple functional language is more or less an accident - they started off in a logic language (Prolog), and C-nodes (for example) can be written in ANY language - the important point about Elixir (or any BEAM-based language) is that the underlying VM can handle extremely large numbers of parallel processes.

For a long time I’ve said “Erlang is the only true OO language” (now I guess I can add Elixir).

The basis of OO programs are:

 - isolation between objects (we do this)
 - late binding (we decide what to do when a message arrives at a process)
 - polymorphism (all objects can respond to the same message - for example, you could think of sending a "print-yourself" message to any object and it would know how to do it)

Of lesser importance is

- division into classes and methods
- syntax
- the programming model (ie functional or imperative) 

Once we have split a system into large numbers of small communicating processes the rest is relatively easy - each process should be rather simple and do rather little, which makes programming easy.

What Erlang (and Elixir) added to programming was the idea of the link. This was Mike Williams’s idea - it extends error handling over process boundaries, and with links and processes we have all we need to build supervision trees and so on.

Supervisors, gen_servers and all that jazz are just simple libraries that hide a bit of detail from the user - they are just built in a rather simple way with links and parallel processes.
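To make that concrete, here is a tiny illustration of the link primitive that supervision is built on: a process that traps exits receives the crash of a linked process as an ordinary message instead of dying with it.

```elixir
# Trap exits: linked-process crashes arrive as messages
# of the form {:EXIT, pid, reason} instead of killing us.
Process.flag(:trap_exit, true)

child = spawn_link(fn -> exit(:boom) end)

receive do
  {:EXIT, ^child, reason} ->
    # A supervisor is essentially this, plus restart logic
    # and some bookkeeping.
    IO.puts("child #{inspect(child)} died: #{inspect(reason)}")
end
```

Everything else - restart strategies, max restart intensity, child specs - is library code layered on this one mechanism.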

Erlang was not designed as an FP language - but as a tool for building long-lived fault-tolerant systems.

Central to fault-tolerance is the notion of remote error handling. If an entire machine fails the fault must be corrected on a DIFFERENT machine. It cannot be corrected locally because the local machine is dead.

This means that to program for fault-tolerance we need distribution and messaging to be easy to program - so basically any design for fault-tolerance will end up looking like Erlang.

The whole point of Erlang was to make it easy to program fault-tolerant systems, a side effect is that it’s also easy to program scalable systems.

The differentiator between Erlang and Elixir and “all the rest” is the concurrency and fault-tolerance mechanisms - it’s not about monads and syntax and whether it’s a pure FPL.

Now, would you like to handle 10,000 users in a single thread using callbacks to emulate concurrency, or would you like to make 10,000 concurrent processes, each of which is simple and has no callbacks?

Each process waits for a message that it is interested in then performs a computation and sleeps waiting for the next message.
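That loop is just a tail-recursive function wrapped around receive. A minimal counter process as a sketch (Counter is a made-up example, not a library module):

```elixir
defmodule Counter do
  # The process sleeps in `receive` until a message arrives,
  # handles it, then tail-calls itself with the new state.
  def loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)

      {:get, from} ->
        send(from, {:count, count})
        loop(count)
    end
  end
end

pid = spawn(fn -> Counter.loop(0) end)
send(pid, {:increment, 2})
send(pid, {:increment, 3})
send(pid, {:get, self()})

receive do
  {:count, n} -> n  # => 5
end
```

A gen_server is this loop with the receive/reply plumbing factored out, which is why the earlier point stands: the behaviours are simple libraries, not magic.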

I think a big problem in evangelising Erlang/Elixir is that you have to explain how having large numbers of parallel processes solving your problem helps. Since no other common languages support concurrency in any meaningful way, the need for it is not understood.

“But I can do everything in callbacks in a single thread” they’ll say - and they do - and it is painfully difficult - and you ask “what happens if the callback gets into a loop or raises an exception?” - if they don’t understand the question you’ll have some explaining to do - if they do understand the question then you can tell them that in a strange far-off land there was a way to program concurrency that did not involve callbacks.

72 Likes

I guess I could shorten the previous posting.

Please don’t sell Elixir as an FPL - it’s not; it’s a CPL (Concurrent Programming Language).

Don’t get into “my FPL is purer than yours” arguments. Don’t even think about talking about Monads. Change the subject.

“What’s a CPL?”
“You know, the kind of stuff the WhatsApp servers are written in …”

:slight_smile:

30 Likes

Killjoy :unamused:

:grinning:

There used to be, via ‘Tuple Calls’, but that feature was removed from the BEAM at the behest of Elixir in the most recent version; now it can only be used if a compiler option is passed in.

I was looking for this! But yes, OCaml does not have automatic currying at definition site. In OCaml a proper inlineable efficient function definition and usage would be:

let add (a, b) = a + b

let result = add (1, 2)  (* result = 3 *)

Conceptually this is a single-argument function that takes a tuple, but the OCaml compiler actually fully sets that up as a tuple-less stack-based function invocation like you would do in C++. The ‘normal’ function definition in OCaml would be:

(* short form *)
let add a b = a + b
(* long form *)
let add = fun a -> fun b -> a + b

let result = add 1 2  (* result = 3 *)

Conceptually this is a function that takes an integer and returns a function that takes an integer and returns an integer. However, OCaml’s compiler is built for the very purpose of optimizing this case back into the prior one (which is why it is so blazing fast), but it will create thunks as necessary for any partial application. :slight_smile:

The ‘tuple’ form is not partially applicable though, hence why most don’t use it. In SML (OCaml’s predecessor) the compiler does not optimize the ‘curry-style’ down to normal calls very well, so it is common to see people use the tuple style, as it is just faster in comparison. That is why OCaml went the other way: it pretends that currying is not currying and instead makes normal functions that it partially applies via thunks on demand.

And thus yes, ‘currying is a mere frill’ is entirely true. :slight_smile:

In my MLElixir little sandbox thing I had that same ‘currying’ style as OCaml, they are normal calls but they pretend they are not via automatic thunk generation as necessary (although the syntax I find even nicer than OCaml’s as you can partially apply any of the arguments, not just the start or end arguments ^.^).

As a long-time FP’er, ‘horrible’ in this context means that you cannot pipe into a partial application, but since Elixir doesn’t have partial application it’s not as big of an issue (and thus Elixir arbitrarily chose to pipe into the first argument, which I do find a touch odd honestly, as Erlang prefers topic-last arguments like most functional languages, but eh, it works). :slight_smile:
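For completeness, the closest thing Elixir has to partial application is the capture operator, which builds a one-off anonymous function with any argument slot left open (the padding example here is just an illustration):

```elixir
# &fun(&1, args...) creates an anonymous function whose &1
# placeholder can sit in any position, so you can "partially
# apply" the first, middle, or last argument.
pad = &String.pad_leading(&1, 5, "0")
pad.("42")  # => "00042"

# In a pipeline, then/2 lets the piped value fill &1 wherever
# it appears, even when it isn't the first argument.
"42" |> then(&String.pad_leading(&1, 5, "0"))  # => "00042"
```

It is more verbose than true currying, but it sidesteps the topic-first vs topic-last argument debate entirely.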

It’s an aspect of the BEAM. It could have been worked around by Elixir pretty trivially via tuple calls, but I ‘think’ those are slightly slower to call than the anonymous function syntax, I’m not certain though, need benchmarking…

I find with to be more of an ‘early-out’ construct than anything monad-y, but it does handle many use-cases regardless.

And LFE for another big BEAM language! ^.^

But yes, the BEAM is far more proper OOP than any more traditional OOP language.

Lol, I’m swiping this. ^.^

2 Likes

Hey again!

First of all, I would like to thank everyone for their time and support in this discussion. The community’s response has been truly overwhelming and surprisingly positive considering the tone of my first post, which I wrote in a moment of frustration and anguish.

Since I have joined the community I have learned the true purpose of Elixir / Erlang and I have read a lot about its strengths. I am now invested in learning more and I can’t wait to bring all this knowledge to my company. I want you all to know that it was the effort the community poured into this topic that enlightened me.

Now I hope I can one day pay back, but so far I still have a long road ahead of me and that’s fine - at least now I know it.

As this discussion grows, so does the time I need to invest into reading every single post. A great number of individuals have poured their brilliant responses here and I have read them all, but it is my belief that I learned my lesson.

Now I have a plethora of other lessons I still need to learn and little time to invest in each. It is for this reason that I will stop answering comments in this discussion. It is my belief that I already got everything I could from here, and I want to invest my time learning other lessons that Elixir /Erlang both have to offer me.

If you are reading this late, or arrive here after this post, don’t feel angry or discouraged because the original poster didn’t reply to you. I am just 1 person and there are a lot of you. Instead, if you feel that something still needs to be said, say it for any future readers who may find this discussion interesting.

Thank you all!

13 Likes

Thank you for the useful links. I really appreciate it.

1 Like

I’ve often wondered why this choice too.

1 Like

Not sure why this decision has been made, but I can tell you it’s one of my favourite Elixir properties, because I’ve always found the ordering in Erlang stdlib confusing. Let me explain this with a simple example using :lists.map. Here’s how we map a list:

:lists.map(
  fn el -> do_something(el) end,
  fetch_elements_from_somewhere(...)
)

I’ve always had problems reading such code, because the order of arguments is unnatural to me. The first argument explains how each element is transformed. My problem here is that at this point I don’t know what each element is, so the information is partial. Perhaps it’s just how my mind is wired, but I usually want to know what is being transformed before knowing how it is being transformed. Thus, the reading order for me is: line 1, line 3, line 2.

I used the same order when writing code. I’d start with this:

:lists.map(
  ,
)

Then, I’d proceed with:

:lists.map(
  ,
  fetch_elements_from_somewhere(...)
)

And only then would I add the mapper function.

The story is even more complicated with :lists.foldl, where reading/writing order for me is 1, 4, 3, 2 :slight_smile:

Again, perhaps it’s just how my mind works, but I find Elixir’s convention more natural, because it helps me read/write code in a more linear fashion.
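For contrast with the :lists.map example above, the Elixir version reads strictly top-to-bottom (the two helper functions are stand-ins for the hypothetical ones in that example):

```elixir
# Stand-ins for the hypothetical functions in the example above.
fetch_elements_from_somewhere = fn -> [1, 2, 3] end
do_something = fn el -> el * 10 end

# Subject first, transformation second: the reading order
# matches the line order, and the pipe fills the first argument.
fetch_elements_from_somewhere.()
|> Enum.map(fn el -> do_something.(el) end)
# => [10, 20, 30]
```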

8 Likes

It may be related to the Elixir idiom of passing optional args as a trailing keyword list, which also supports the syntactic sugar of omitting the brackets.

defmodule Foo do
  def bar(list, opts \\ []) do
    # ...
  end
end

Which allows it to be used as either:

  • Foo.bar(list)
  • Foo.bar(list, [an: arg])
  • Foo.bar(list, an: arg)
  • list |> Foo.bar([an: arg]) |> ...
5 Likes

Good point, I think that requires the piped-in arg to be first.

I just want to say that for someone who is not used to other languages that have something like this, Elixir’s way is far more intuitive. The piped-in arg is on the left of the call it’s being piped into, so why on earth should it become the rightmost argument of the call??? It’s coming in from the left, it should go into the left “slot” :wink:

Well… Just turn the pipe operator around the other way and set up the language so function execution proceeds from bottom to top:

last_operation()   <|
middle_operation() <|
first_operation()  <|
starting_data

(P.S. Don’t actually do this)

5 Likes

Or you can just be like Clojure and add an additional > to the end of the pipe to indicate that you’re piping to the end!

data
|> pipe_first(with, other, values)
|>> pipe_last(with, other, values)

(P.S. Or this)

2 Likes