Tuple Calls

We both agree that it would make more sense for piping to work on the last argument because of currying (this would make quite a few functional patterns, such as applying applicative functors, much easier to express in Elixir!). However, this choice can no longer be altered, as it is deeply ingrained in the language.

The difference between tuple calls and piping, and the reason I call them ‘OOP-style’, is that you are dispatching functions based on the data type you happen to have at hand:

my_data_type
|> MyOneModule.bar(extra_argument)
|> MyOtherModule.baz()

Here, it is clear what code is called.

my_data_type.bar(extra_argument).baz()

Here, which bar/2 is called depends on what my_data_type is actually bound to. Furthermore, which baz/1 is called depends on what that bar/2 returns. This is the kind of implicitness inherent in OOP programs, because in objects code and data live together, and it is exactly what we try to avoid in functional programming.
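To make that implicitness concrete, here is a sketch (all module names hypothetical): which bar/2 runs is decided by the tag the data happens to carry, not by anything visible at the call site.

```elixir
# Hypothetical sketch of the implicit dispatch: the data carries a
# module tag, and the call target follows the tag, not the call site.
defmodule MyOneModule do
  def bar({MyOneModule, v}, extra), do: {MyOneModule, v + extra}
end

defmodule MyOtherModule do
  def bar({MyOtherModule, v}, extra), do: {MyOtherModule, v * extra}
end

# Explicit version of the dispatch: pick the module out of the data.
call = fn {mod, _} = data, extra -> mod.bar(data, extra) end

call.({MyOneModule, 10}, 5)    # => {MyOneModule, 15}
call.({MyOtherModule, 10}, 5)  # => {MyOtherModule, 50}
```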


It is identical to dispatching based on a module ID as well:

defmodule Blah do
  def send_message(msg) do
    send(NamedProcess, msg)
    __MODULE__  # return the module so the calls can be chained
  end
end

a = Blah
a.send_message(1).send_message(42).send_message(6.28).send_message("Hello world")

send_message just happens to be a function that returns the right thing in the right format, though in this case it is more obscure than a tuple call, where it is obvious that push should return the updated structure (which it indeed does). Really though, if someone chained calls that returned different tuples to different modules, that would be quite obscure and not recommended, but I do not even recommend doing that with pipes (when the type changes mid-pipe, I add intermediate bindings unless it is a trailing Enum.into/2 or something similarly obvious).

It is no different than just calling a binding that has a module name bound to it, like a = IO; a.inspect(42). :wink:

Yep, just like a=IO; a.inspect(42) does.

First-class modules in OCaml are certainly not even remotely OOP, and yet this is exactly how they are implemented in OCaml as well: internally a first-class module is also a tuple of data, though it does not need to carry the ‘type’ along as its first element the way Erlang’s tuple calls do, since the compiler already knows the type. It is a common pattern (the standard way is to name it SomeName.S, where S is the module signature of that module).


Criticism of tuple dispatch for the curious:

[erlang-questions] Proposal to remove tuple dispatches from Erlang (from José)

One major project using them is Webmachine, which used to be based on parameterized modules.



Yes, dynamic module dispatch is bad enough. Adding additional parameters makes it so much worse. This kind of dynamic module calling is generally very rare in Elixir and Erlang (outside of behaviours) and if it’s used, it’s usually wrapped in libraries.

Because dynamic calls are so rare, the dynamism of Elixir is a much smaller issue than one might think initially. That’s probably also why I don’t miss a type system most of the time. The biggest source of “dynamism problems”, in my experience, stems from the fact that we have no idea what code foo.bar.baz is going to execute; it’s entirely based on runtime values. With fully qualified module calls, that’s not an issue: you always know what code will be executed.


And as I’ve mentioned back in the ol’ Erlang days (a long, long time ago!), essentially all of the tuple-call ‘cons’ go away if you type things properly (even with dialyzer, though in-source typing would be clearer). It is still immutable; there is no mutation, not even a hint of one (unless you are sending messages off to another process or so, but functions have the same issue there). The only thing I consider a real issue is that there are then two ways to call the same function, which I do agree is annoying, but even that could be resolved if a function declaration that takes a tuple call as its final argument could not be called ‘normally’ (meaning you could also clean up error messages related to its arity and so forth).

I am quite a fan of the removal of parameterized modules; they added too many out-of-place bindings. Removing them took the implicit aspects of tuple calls away, so only the explicit ways were left (I like explicit).


That’s awesome! I may have to start using that. Concerns over implicitness would be largely alleviated with better IDE-esque tooling.

Not too thrilled about the tuple syntax, though. It would be nicer as a struct, for protocols.


Honestly, the variable ambiguity argument seems to be overdone in Elixir. Proper variable names and typespecs go a long way towards communicating your code. As long as you only use a single type to represent a concept, there shouldn’t be an issue with creating descriptive/unambiguous variables.
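As a small sketch of that point (module and names hypothetical), a descriptive binding name plus a typespec documents what flows through a function without a static type system:

```elixir
# Hypothetical example: the name `user` and the @spec together make
# the type unambiguous, even though nothing enforces it at runtime.
defmodule Accounts do
  @type user :: %{name: String.t(), age: non_neg_integer}

  @spec adult?(user) :: boolean
  def adult?(%{age: age} = _user), do: age >= 18
end

Accounts.adult?(%{name: "Ana", age: 21})  # => true
```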


Structs don’t exist on the BEAM. ^.^

True, true, yeah, but I still like type systems enforcing it. Way, WAY too many bugs leak through with dynamic typing everywhere.


No, it is not. :slight_smile: One implicitly passes arguments, the other does not. To see the issue with this approach and how it couples behaviour and data, just consider what would happen if you have queue.push(42) and then someone wants to add a new queue.new_function(bar) that you haven’t defined. The coupling makes it impossible to extend without introducing a series of extension mechanisms, such as monkey patching or inheritance, all of them unnecessary if you don’t couple them in the first place. None of those issues happen with behaviours or pipes, where all arguments are still explicitly given.
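The decoupled alternative can be sketched in a few lines (Q and QExt are hypothetical names): because every argument is explicit, anyone can add a ‘new_function’ over the same data without touching the original module.

```elixir
# Behaviour and data kept separate; all arguments explicit.
defmodule Q do
  def new, do: []
  def push(q, item), do: [item | q]
end

defmodule QExt do
  # A function added by someone else, with no monkey patching or
  # inheritance needed — Q itself is untouched.
  def peek([top | _rest]), do: top
end

Q.new() |> Q.push(42) |> QExt.peek()  # => 42
```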

And as a consequence they are not even fully supported by tools. For example, HiPE will not perform tuple dispatches. For all purposes, it is an undocumented feature and it may be removed from Erlang at any time.


Elixir could just pattern match at the dot.


I can’t keep quiet here. :grinning: Yes, they kept tuple calls for backwards compatibility after they removed parametrized modules, but IMAO they should have removed those as well, backwards compatibility or not. They were always an implementation hack. :rage: It is a pity that some of the libraries, like dict, work with them, but my only defence there is that they came later.

And I’m whispering here, but I am not really a fan of pipes either. Shh. :hushed:


Heh, I view it that the atom-call is implicitly passing nothing, since there is no data to pass in. ^.^
Looks the same from that viewpoint. :slight_smile:

Exactly why there should be a @behaviour that could be checked by Dialyzer. :slight_smile:

Yeah, OCaml gets around that via its include keyword; you can do something kind of like this (pseudo-code, but the basic gist):

module MyExpandedList = struct
  include List (* Standard lib's List! *)

  let my_special_function a b c = a + b + c
end

(* Then you can use it directly, intercompatible with normal List function calls,
   or you can 'overwrite' the normal List in the current scope via: *)
module List = MyExpandedList (* Only for the current scope *)

Discourse highlights OCaml very poorly… ((* … *) delimits comments).

Oh, I would never ever suggest tuple calls to replace piping; I consider them just a refinement on behaviours that carries data along without needing two bindings (just one).

I know, still makes me sad. :wink:

Adds overhead, but yeah, it could; that is how Elixir does [indexing] currently, transforming it into a case, same with ifs and more.
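The ‘pattern match at the dot’ idea can be hand-rolled as a sketch (all names hypothetical): a branch on the callee’s shape, with the tuple-call case appending the tuple itself as the trailing argument.

```elixir
# Hypothetical sketch of dot dispatch as an explicit branch.
defmodule Tagged do
  def value({Tagged, v}), do: v
end

dispatch = fn
  # Tuple call: append the tuple itself as the trailing argument.
  {mod, _data} = tuple, fun, args -> apply(mod, fun, args ++ [tuple])
  # Plain atom: an ordinary remote call.
  mod, fun, args when is_atom(mod) -> apply(mod, fun, args)
end

dispatch.({Tagged, 42}, :value, [])   # => 42
dispatch.(Enum, :count, [[1, 2, 3]])  # => 3
```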

Lol, I remember how vocal you were on the mailing lists back then about that. :wink:

I really think they were originally added to mimic OCaml’s first-class modules, though you could actually mimic those a lot better nowadays via maps (and it gives a single point of call too!):

defmodule Vwoop do
  def new(v) do
    %{
      get: fn -> v end,
      add: fn x -> Vwoop.new(x + v) end,
      new: &Vwoop.new/1
    }
  end
end

Used as:

iex> v = Vwoop.new(21)
iex> v.get.()
21
iex> v = v.add.(21)
iex> v.get.()
42

How’s that for more proper use of immutable closures? :wink:
But yeah, that is much more ‘normalish’ of OCaml’s First Class Modules (hmm, immutable prototype system?).
Could macro it for simplicity too (though you are always left with that dot before the ()).

I only think the pipes pipe into the wrong position; even the BEAM puts the ‘implicit tuple call’ at the end (plus, last I checked the interpreter, it optimized the recursive stack to keep the ‘changing arguments’ at the end and the unchanging arguments at the start). But piping itself is awesome; consider that |> is one of only two non-math/list operator functions OCaml defines by default (the other being @@, which really should have been <| if OCaml’s operator precedence allowed it). :slight_smile:


That is not true. You need the fewest instructions when the result of the last function call is the first argument of the next function call. Not that it would matter much in real code, though.


It’s just tradition for me. Every language I’ve seen with piping, whether in core syntax or via libraries, from C++ to Haskell to OCaml and many others, puts it in the last position.

I do agree that the first position would make the most ‘logical’ sense, but just about everyone followed the ML way of a function taking a single argument to a single return, so even the non-ML languages followed suit; they had a programmatic reason that was required, since they did not have the concept of macros (at the time, anyway). :slight_smile:

Oh wait, I misread that; not talking about pipes. ^.^;
I recall that in the BEAM (at least when I last read it; I’ve no reason to think this has changed) when arguments can be reused between function calls (say, a loop function calling itself, or a function calling another function that takes some of the same arguments), it only popped the stack back to the ‘removed’ arguments and replaced only what was necessary, so in essence it did this:

def blah(a, b, c, d), do: vwoop(a, b, c, d+1)

That would only pop d off the stack, calculate d+1 onto the stack, then call vwoop.

def blah(a, b, c, d), do: vwoop(a+1, b, c, d)

This one would pop all the arguments off the stack, calculate a+1, put it on the stack, then put b, c, and d back on, then call vwoop.

As for pipes, in some code it does not really matter, as many calls look like:

data
|> blah0()
|> blah1(a)
|> blah2(b)
|> blah3(c)

But in the case where you have stuff like:

data
|> blah0(a)
|> blah1(a, b)
|> blah2(a, b)
|> blah3(a, whatever)

Then piping into the last position would always be more efficient, unless the piped value never changed (and even then I doubt the BEAM could optimize it knowing it is the same value, since the same binding is not used again).


BEAM is a register-based virtual machine, not a stack-based one. There are X and Y registers (1024 of each). X registers are where regular operations happen. Y registers are slots on the BEAM stack. Arguments to a function are passed in the X0 to X(n-1) registers. The return value is always in the X0 register. The X registers are caller-saved.

EDIT: registers are 0-indexed, not 1-indexed.


Given the following code:

defmodule Test do
  def test(x, y) do
    x + y
  end

  def test_first do
    x1 = test(1, 1)
    x2 = test(x1, 1)
    test(x2, 1)
  end

  def test_last do
    x1 = test(1, 1)
    x2 = test(1, x1)
    test(1, x2)
  end
end

We get the following assembly (stripped down for brevity)

{function, test_first, 0, 9}.

{function, test_last, 0, 11}.

The last-argument-chaining version has the extra move instruction, moving the return value from X0 into the last argument position, X1. But as I said before, this level of difference probably won’t matter in real-life programs, since there are additional instruction fusions and optimisations going on in the loader.


Exactly my point. :slight_smile:

It only really matters on a tight looping/recursive function, where it did provide a sizable performance difference when I tested it long ago.

(Sent too soon, so edit:)
And I know the BEAM is register-based, but when it executed the code it ran it as a stack (or the JIT did, one or the other…).


The main drawback of piping with the first argument is that it conflicts with partial function application.

Partial function application is a very powerful tool, and should be more well-known within the Elixir community:

iex> [1,2,3] |> FunLand.map(Currying.curry(&+/2)) |> FunLand.apply_with([10, 11, 12])
[11, 12, 13, 12, 13, 14, 13, 14, 15]
iex> import Currying
iex> [curry(&+/2), curry(&-/2), curry(&*/2)] |> FunLand.apply_with([4,5,6]) |> FunLand.apply_with([10,11,12])
[14, 15, 16, 15, 16, 17, 16, 17, 18, -6, -7, -8, -5, -6, -7, -4, -5, -6, 40, 44,
 48, 50, 55, 60, 60, 66, 72]
iex> maybe_num1 = FunLandic.Maybe.just(10)
iex> maybe_num2 = FunLandic.Maybe.just(20)
iex> FunLand.map(maybe_num1, Currying.curry(&+/2)) |> FunLand.apply_with(maybe_num2)
iex> maybe_num2 = FunLandic.Maybe.nothing()
iex> FunLand.map(maybe_num1, Currying.curry(&+/2)) |> FunLand.apply_with(maybe_num2)

Partial function application (and its flip side, currying) are very powerful functional programming techniques, but they are unsupported by Erlang (probably because of its basis in Prolog, which is also where the ‘functions with different arities are different functions’ notion, definitely related to this, came from). The only possibility is a library-level wrapper, which the compiler will not really be able to optimize.

I know of two approaches:

  1. Create a macro that defines extra function clauses taking fewer parameters than required, each returning a closure onto the full function. This is what the curry library does. Main drawback: impossible to use for functions with multiple clauses.
  2. Check the actual arity of the called function, and if the amount of supplied arguments is less, create a new anonymous function where the rest of the parameters could be passed in. This is what the Currying library does. (disclaimer: I wrote Currying)

Neither approach is likely very performant, as they are library-level constructs rather than language-level ones.
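For the curious, approach 2 can be sketched in a few lines (MiniCurry is a hypothetical name, not the actual Currying implementation): check the function’s arity with Function.info/2 and, when arguments are missing, hand back a closure awaiting the rest.

```elixir
# Hypothetical sketch of arity-checked partial application.
defmodule MiniCurry do
  def partial(fun, args) do
    {:arity, arity} = Function.info(fun, :arity)

    if length(args) >= arity do
      apply(fun, args)
    else
      # Not enough arguments yet: return a fresh anonymous function
      # that captures the ones supplied so far.
      fn next -> partial(fun, args ++ [next]) end
    end
  end
end

add3 = fn a, b, c -> a + b + c end
MiniCurry.partial(add3, [1]).(2).(3)  # => 6
```

Each missing argument costs an intermediate lambda, which is exactly the overhead the compiler cannot optimize away.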

That being said, Elixir’s & shorthand function syntax is a good alternative for most situations. (I don’t remember where, but José mentioned it as “Elixir’s best feature” somewhere. Whether he was jesting or serious, I do not know, though. :stuck_out_tongue_winking_eye:)
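For comparison, the & capture syntax covers the common partial-application case by fixing some arguments and leaving numbered holes (a minimal sketch):

```elixir
# Partially apply by capturing: fix one argument, leave a &1 hole.
add = fn a, b -> a + b end
add10 = &add.(10, &1)
add10.(5)  # => 15

# The same trick with an operator expression:
greet = &("Hello, " <> &1)
greet.("world")  # => "Hello, world"
```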


I don’t remember saying that but probably jesting as I don’t consider it the best feature. :slight_smile:

In any case, currying was thrown out of the window when we decided to be fully compatible with Erlang (i.e. name/arity) and also due to its performance costs. Dynamic loading and lack of a static type system means it is hard to avoid creating intermediate lambdas - which makes currying expensive. For better or worse, it is an idiom that is unlikely to ever be first-class in Elixir.


I still think there could be an Elixir-with-typing for that. ^.^