What would you remove from Elixir?

I suppose I would argue that there’s a bunch of macro stuff which is great for language designers but probably not good for library designers.

I and many people using Elixir have two responses when using macro-based libraries:

At first -> cool, this looks appealing

Sometime later ->
this API isn’t very clear to me
this error message is pretty cryptic

I love with (the special form) etc.

But agree strongly with the beginner’s guide caution about macros: i.e. “use with caution”.

So much so that they should be seen as bad practice unless you are Mr. Valim (a little extreme, but not far from it… ; ))

That is a different thing. In general I would also advise not to use macros unless you know what you are doing and know you can handle them. I have written some libraries that heavily utilise macros (ecto_function), but I have also removed a lot of macros from libraries that didn’t need them (prometheus_ex). So macros are great, but you shouldn’t write one yourself.
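For readers following the macro discussion, here is a minimal, hypothetical sketch of the kind of thing that genuinely needs a macro: control flow where the do-block must not be evaluated eagerly. The `MyMacros`/`my_unless` names are made up for illustration; most helpers do not need this.

```elixir
defmodule MyMacros do
  # Has to be a macro: the block should only run when the condition is false.
  defmacro my_unless(condition, do: block) do
    quote do
      if unquote(condition), do: nil, else: unquote(block)
    end
  end
end

require MyMacros

result =
  MyMacros.my_unless 1 > 2 do
    :ran
  end
```

A plain function could not do this, because both arguments would already be evaluated before the function body runs.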


So write macros responsibly.

Don’t Write Macros But Do Learn How They Work Jesse Anderson, ElixirConf 2017


No, it’s closer to reduce but more succinct.

I agree, but Elixir prepends arguments instead of appending them, and it doesn’t handle anonymous functions cleanly like that.

I personally don’t know why they are maps; they should be tagged tuples on the BEAM, that is the usual ‘structural’ type after all… >.>

They should instead be called row-typed records or ‘objects’ (not OOP objects).

They should really look closer at it; that interface gains you a lot of safety and features that are not otherwise possible via a lower-level SQL interface.

Not always; macros allow you to add to the language things it is not otherwise capable of, as efficiently or at all.

No, it already can be that; the issue is that using it would then require writing 42 |> (&someFunction(&1, 1, 2)).(). |> is a normal infix operator: you can implement it as a normal function or as an operator and it works fine either way, but using a macro allows a more ‘normal’ function-call syntax. Honestly I would rather kind of prefer the currying syntax, but you’d have to hope it gets optimized out, and honestly it probably wouldn’t (though a macro could fix that).
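To make that concrete, here is a small sketch using only the standard library of what piping into a non-first argument currently requires:

```elixir
# |> always feeds the left-hand side in as the *first* argument:
result1 = 5 |> Kernel.+(1) |> Kernel.*(2)

# To pipe into any other position you have to wrap a capture and
# immediately call it, which is the awkward form mentioned above:
result2 = 42 |> (&div(100, &1)).()
```

Here `result1` is `(5 + 1) * 2` and `result2` is `div(100, 42)`.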

Structs and Records should have just been the same thing, a set of tagged tuples with information compiled into a module for introspection about it.

Eh technically some of those are special forms, although with could have been implemented with a better syntax as a macro in my opinion… >.>

I don’t; the comma-turds (an Erlang-ism) littering its expressions and the formatter making it look horrid are two big issues I have with it, both of which significantly harm readability for me (maybe not for others, but for me the formatter is just so bad on it…). I only use it so as not to pull in other libraries (that do it better), to reduce the dependencies I use.


Everything is just a reduce ; ) but I was messing around with maps with a colleague, and saw that map is a much closer analogy


Thing is, random_library_authors think they know of some new language feature that will solve our woes - mostly not the case : (

The semantics are good, the syntax is indeed in need of a spring clean : )

Finally I thought of something!

I would remove the current Enum.map and similar functions, and replace them with ones that make it more clear that we are destroying the structure and returning a list while mapping. And I would like to introduce a new built-in Protocol that allows you to iterate over things while keeping their structure. (So a transforming map rather than a destructive one)
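A quick illustration of the “destructive” behaviour being described, using only the standard library:

```elixir
# Enum.map always returns a list, whatever shape the input had:
set = MapSet.new([1, 2, 3])
mapped = Enum.map(set, &(&1 * 10))

# To keep the original shape you have to rebuild it explicitly:
restored = Enum.into(mapped, MapSet.new())
```

The MapSet shape is lost by `Enum.map` and only comes back via the extra `Enum.into` step.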


This might not be the most obvious place, but the reasoning behind that behaviour of the Enumerable protocol is explained in the docs for the Collectable protocol. So using the Enum module does already imply you might lose shape, as it’s working with a protocol built on that premise.



Except map cannot filter or transform the return type to a different container type, whereas for can.
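A small example of that difference: for can filter and pour the results into a different collectable in one pass, which Enum.map alone cannot do:

```elixir
# Keep only the even numbers and collect the squares into a map:
squares =
  for n <- 1..10, rem(n, 2) == 0, into: %{} do
    {n, n * n}
  end
```

The filter clause and the `:into` option are both part of the comprehension itself; with Enum you would need a map, a filter, and an into as separate passes.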

This is actually a great argument for making macros less easy, and there are indeed some great arguments for that. However, I think tooling could fix it far better while keeping macros easy; that tooling would not easily exist, though, without the language being strongly statically typed, which Elixir very much is not right now.

The syntax comes from them being Special Forms instead of just macros. Really, most of Elixir could be implemented in Elixir directly as macros, as long as we have our currently existing case special form or some method to build it. ^.^

So… monad interfaces? :wink:

Ugh yeh, forgot about that sort of thing.

I use for loops once or twice a year in any language. Thankfully…


I was talking about “user defined” operators, not about a different form of parsing |>, which is a plain old macro, so you can override it as you please. Currying is hard in BEAM languages (without the help of additional syntax), as there is no way to differentiate between foo/1 and foo/2 in that case, and if currying were built into Elixir without any “special syntax”, then adding a new function with a lower arity than all existing ones could be a breaking change.

TBH I thought it was implemented as a regular macro. I do not know why it is in Kernel.SpecialForms o.O

The only reason why Erlang uses tuples instead of maps is that records are much older than maps (which became a thing in OTP 17). Both solutions have advantages and disadvantages, and I think that using maps makes it more uniform (Erlang uses #{} for maps and #name{} for records, which can make it a little confusing) and simpler implementation-wise (at least for me). And I think the Elixir way is better on “modern” OTPs.
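For context, a tiny demonstration that Elixir structs really are maps underneath (the Point module is a made-up example):

```elixir
defmodule Point do
  defstruct x: 0, y: 0
end

p = %Point{x: 1}

# A struct is a plain map carrying its name in the __struct__ key,
# rather than a tagged tuple as with Erlang records.
true = is_map(p)
Point = p.__struct__
```

This is the uniformity being argued for: any map-based tooling works on structs for free.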


It is because it uses commas to separate expressions instead of a normal block, as would be consistent with the rest of Elixir (both with and for are horribly broken in this way, I say…)

Or if you have type information. ^.^

Not really; most structs are pretty small, meaning the record is a lot faster to access and update, and you don’t have to worry about it containing ‘extra fields’ like row-typing would allow (bugs waiting to happen!).

Oh, yeah. I had forgotten about that. Yeah, I think that using a plain block would be better as well and would be more “do-like”. Maybe in Elixir 2.0, as I think it would be a good choice. I do not think it would be possible with for though.

Typed Erlang/Elixir is a completely different beast; however, I think it would be ambiguous, as in this situation:

def foo(a), do: fn b -> a * b end
def foo(a, b), do: a + b

fun = foo(a) # will this be a curried `foo/2` or the lambda returned by `foo/1`?

And that is why small maps (up to 32 keys) have exactly those semantics under the hood (thanks to @michalmuskala) since OTP 21. The disadvantage of records is that in that case you need to know the definitions upfront (that is why Erlang utilises the -include directive with .hrl files). With one map per module it can be easier. We could make records work in a similar way, but:

  • it would require a record definition even for reading; you could not use foo.bar to access a field value, so we would need different syntax for such actions, like foo%MyRecord.bar (following Erlang) or other sigil magic
  • with OTP <21 it would collide with the tuple_calls feature of the BEAM (as Elixir uses the same syntax for accessing fields and for calling functions)

So with current syntax “structures defined in form of records” would make a lot of things a lot harder.
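To illustrate why the definitions must be known upfront, here is what records look like through Elixir’s Record module today (the Shapes module is a made-up example): the accessors are compile-time macros over a tagged tuple, so every reader needs the definition in scope.

```elixir
defmodule Shapes do
  require Record
  # Generates circle/0..2 macros; field positions are fixed at compile time.
  Record.defrecord(:circle, radius: 0)

  def radius(c), do: circle(c, :radius)
end

# Callers must `require` the module to use the record macros:
require Shapes
c = Shapes.circle(radius: 2)

# Underneath it is just a tagged tuple:
true = is_tuple(c)
{:circle, 2} = c
```

There is no runtime `c.radius` access here: without the compiled definition, the tuple positions mean nothing, which is exactly the trade-off described above.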

I have my own project that replaces for with a macro comprehension. ^.^

Mine decorates what you pass in with types (or ‘Access’ if not otherwise known) so it can generate optimal code for the specific types being used, meaning it’s faster than for. It’s in one of my playground libraries and I should probably pull it out into its own project as it is quite functional…

My fairly trivial benchmark:

defmodule Helpers do
  use ExCore.Comprehension

  # map * 2

  def elixir_0(l) do
    for x <- l,
        do: x * 2
  end

  def ex_core_0(l) do
    comp do
      x <- list l
      x * 2
    end
  end

  # Into map value to value*2 after adding 1

  def elixir_1(l) do
    for x <- l,
        y = x + 1,
        into: %{},
        do: {x, y * 2}
  end

  def ex_core_1(l) do
    comp do
      x <- list l
      y = x + 1
      {x, y * 2} -> %{} # line 35
    end
  end
end

inputs = %{
  "List - 10000 - map*2" => {:lists.seq(0, 10000), &Helpers.elixir_0/1, &Helpers.ex_core_0/1},
  "List - 10000 - into map +1 even *2" => {:lists.seq(0, 10000), &Helpers.elixir_1/1, &Helpers.ex_core_1/1}
}

actions = %{
  "Elixir.for"  => fn {input, elx, _core} -> elx.(input) end,
  "ExCore.comp" => fn {input, _elx, core} -> core.(input) end
}

Benchee.run actions, inputs: inputs, time: 5, warmup: 5, print: %{fast_warning: false}

And the results locally right now:

Operating System: Linux
CPU Information: Blah
Number of Available Cores: 6
Available memory: 16.430148 GB
Elixir 1.6.6
Erlang 21.1.1
Benchmark suite executing with the following configuration:
warmup: 5.00 s
time: 5.00 s
parallel: 1
inputs: List - 10000 - into map +1 even *2, List - 10000 - map*2
Estimated total run time: 40.00 s

Benchmarking with input List - 10000 - into map +1 even *2:
Benchmarking Elixir.for...
Benchmarking ExCore.comp...

Benchmarking with input List - 10000 - map*2:
Benchmarking Elixir.for...
Benchmarking ExCore.comp...

##### With input List - 10000 - into map +1 even *2 #####
Name                  ips        average  deviation         median
ExCore.comp        370.81        2.70 ms     ±2.76%        2.67 ms
Elixir.for         245.68        4.07 ms    ±21.72%        3.90 ms

ExCore.comp        370.81
Elixir.for         245.68 - 1.51x slower

##### With input List - 10000 - map*2 #####
Name                  ips        average  deviation         median
ExCore.comp        2.50 K      399.55 μs     ±9.28%      405.00 μs
Elixir.for         1.92 K      521.94 μs     ±7.26%      535.00 μs

ExCore.comp        2.50 K
Elixir.for         1.92 K - 1.31x slower

Interestingly it won’t run on Elixir 1.7.4; I get a syntax error. @josevalim, did something change in a backwards-incompatible way in Elixir 1.7.4? o.O

╰─➤  mix bench comprehension                                                                                                      1 ↵
** (SyntaxError) bench/comprehension_bench.exs:35: unexpected operator ->. If you want to define multiple clauses, the first expression must use ->. Syntax error before: '->'
    (elixir) lib/code.ex:767: Code.require_file/2
    (mix) lib/mix/tasks/run.ex:146: Mix.Tasks.Run.run/5
    (mix) lib/mix/tasks/run.ex:85: Mix.Tasks.Run.run/1
    (elixir) lib/enum.ex:1314: Enum."-map/2-lists^map/1-0-"/2
    (mix) lib/mix/task.ex:355: Mix.Task.run_alias/3
    (mix) lib/mix/task.ex:279: Mix.Task.run/2

I notated line 35 as # line 35 in the above benchmark source. Feel free to clone git clone https://github.com/OvermindDL1/ex_core.git && mix deps.get && mix compile && mix bench comprehension

Backwards incompatibility thread opened at: Elixir 1.7.4 backwards incompatibility

Seems awesome. I would only like to know how it behaves when you also want to filter some entries like you can do with for, e.g.: for a <- 1..10, rem(a, 2) == 0, do: a * a.

It would be nice to get rid of that feeling I get using other languages and platforms that “this would be better in Elixir.” :joy:

With comp it was going to be just a matcher name like:

comp do
  x <- list l
  x <- filter rem(x, 2) == 0 # Only evens
  x * 2 # Double the evens
end

The ‘type’ definition was the code-generation command, so list generated optimal code for list iteration, filter would generate optimal code for running the test, etc. I was considering whether to make it user-extensible as well, for custom types that could avoid incurring the ‘Access’ cost for generic enumerations.


At least 1/3 of the core’s API.

I’d remove the required argument from start_link, that is: give it a default value of nil. It is very common that:

  • there is no sensible default value to give to the module
  • the starting value is always the same, in which case it should not be specified by the caller
  • the module has to set up its state after it is started (for instance because state setup takes too long or might fail), which means it will not have a usable state until after init returns
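A hedged sketch of the proposal above (the Defaulted module name is made up): simply defaulting the argument already gives callers the option to omit it.

```elixir
defmodule Defaulted do
  use GenServer

  # Proposed shape: the start argument defaults to nil.
  def start_link(arg \\ nil), do: GenServer.start_link(__MODULE__, arg)

  @impl true
  def init(arg), do: {:ok, arg}
end

{:ok, pid} = Defaulted.start_link()
state = :sys.get_state(pid)
```

Callers with real runtime configuration still pass it; everyone else writes `start_link()`.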

In general you should pass runtime configuration there.

Why not? Also, if we take the above into account, you will often change that value in tests.

For this you have the handle_continue/2 callback, as you shouldn’t do long initialisations in init/1 either.
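A minimal sketch of that pattern (OTP 21+; the module and state shape are made up): init/1 returns immediately and the slow setup happens in handle_continue/2, which runs before any other message is processed.

```elixir
defmodule SlowStarter do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    # Return quickly so the caller/supervisor is not blocked...
    {:ok, %{opts: opts, data: nil}, {:continue, :load}}
  end

  @impl true
  def handle_continue(:load, state) do
    # ...and do the expensive work here, after init/1 has returned.
    {:noreply, %{state | data: :loaded}}
  end

  @impl true
  def handle_call(:data, _from, state), do: {:reply, state.data, state}
end

{:ok, pid} = SlowStarter.start_link([])
data = GenServer.call(pid, :data)
```

Because the continue is handled before the first call, `data` is already `:loaded` here even though init/1 returned instantly.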
