That is why I prefer writing Elixir-style for-comprehensions across multiple lines.
Doing that inside a function call, with a comma at the end of every line but the last, is irritating though, so a while back I made my own version of comprehensions (named `comp`, since `for` is taken). Thanks to a bit of extra type information you can supply, it outperforms the stock `for` and picks up a few extra abilities along the way.
I prefer the extra typing because it makes the intent clearer both to me and to the compiler, and it lets the macro generate code that is at worst equal to Elixir's stock `for` in performance and at best a LOT faster! ^.^
Examples from my very limited docs (I have not published it anywhere, I just use it in my own stuff; I really should clean up my 'core' library and release it after removing the cruft):
iex> comp do
...> x <- list [1, 2, 3]
...> x
...> end
[1, 2, 3]
iex> comp do
...> x <- list [1, 2, 3]
...> x * 2
...> end
[2, 4, 6]
iex> l = [1, 2, 3]
iex> comp do
...> x <- list [1, 2, 3]
...> y <- list l
...> x * y
...> end
[1, 2, 3, 2, 4, 6, 3, 6, 9]
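For comparison, the same three results with the stock `for` (this part is plain standard Elixir, nothing from my library):
iex> for x <- [1, 2, 3], do: x
[1, 2, 3]
iex> for x <- [1, 2, 3], do: x * 2
[2, 4, 6]
iex> l = [1, 2, 3]
iex> for x <- [1, 2, 3], y <- l, do: x * y
[1, 2, 3, 2, 4, 6, 3, 6, 9]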
And a stupid-simple benchmark I made when I was initially developing it:
defmodule Helpers do
  use ExCore.Comprehension

  # map * 2
  def elixir_0(l) do
    for\
      x <- l,
      do: x * 2
  end

  def ex_core_0(l) do
    comp do
      x <- list l
      x * 2
    end
  end

  # Into map value to value*2 after adding 1
  def elixir_1(l) do
    for\
      x <- l,
      y = x + 1,
      into: %{},
      do: {x, y * 2}
  end

  def ex_core_1(l) do
    comp do
      x <- list l
      y = x + 1
      {x, y * 2} -> %{}
    end
  end
end
inputs = %{
  "List - 10000 - map*2" => {:lists.seq(0, 10000), &Helpers.elixir_0/1, &Helpers.ex_core_0/1},
  "List - 10000 - into map +1 even *2" => {:lists.seq(0, 10000), &Helpers.elixir_1/1, &Helpers.ex_core_1/1}
}

actions = %{
  "Elixir.for" => fn {input, elx, _core} -> elx.(input) end,
  "ExCore.comp" => fn {input, _elx, core} -> core.(input) end
}

Benchee.run actions, inputs: inputs, time: 5, warmup: 5, print: %{fast_warning: false}
And the results:
Operating System: Linux
CPU Information: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Number of Available Cores: 2
Available memory: 8.011072 GB
Elixir 1.6.0-dev
Erlang 20.1
Benchmark suite executing with the following configuration:
warmup: 5.00 s
time: 5.00 s
parallel: 1
inputs: List - 10000 - into map +1 even *2, List - 10000 - map*2
Estimated total run time: 40.00 s
Benchmarking with input List - 10000 - into map +1 even *2:
Benchmarking Elixir.for...
Benchmarking ExCore.comp...
Benchmarking with input List - 10000 - map*2:
Benchmarking Elixir.for...
Benchmarking ExCore.comp...
##### With input List - 10000 - into map +1 even *2 #####
Name                 ips        average  deviation     median
ExCore.comp       342.58        2.92 ms     ±4.24%    2.89 ms
Elixir.for        307.20        3.26 ms     ±5.52%    3.21 ms

Comparison:
ExCore.comp       342.58
Elixir.for        307.20 - 1.12x slower

##### With input List - 10000 - map*2 #####
Name                 ips        average  deviation     median
ExCore.comp       2.48 K      403.16 μs    ±17.93%  403.00 μs
Elixir.for        1.99 K      501.74 μs    ±12.10%  512.00 μs

Comparison:
ExCore.comp       2.48 K
Elixir.for        1.99 K - 1.24x slower
And here is the code it generated, if you are curious how it works and why it is so fast:
(
  (
    defp($comp_ex_core_1_1_32(l, acc)) do
      :maps.from_list($comp_ex_core_1_1_32_2(l, acc, l))
    end
    (
      defp($comp_ex_core_1_1_32_2(l, acc, [])) do
        _ = l
        acc
      end
      defp($comp_ex_core_1_1_32_2(l, acc, [x | list]) when true) do
        _ = l
        acc = [(
          y = x + 1
          {x, y * 2}
        ) | acc]
        $comp_ex_core_1_1_32_2(l, acc, list)
      end
      defp($comp_ex_core_1_1_32_2(l, acc, [_ | list])) do
        $comp_ex_core_1_1_32_2(l, acc, list)
      end
    )
  )
  (
    defp($comp_ex_core_0_1_15(l, acc)) do
      :lists.reverse($comp_ex_core_0_1_15_2(l, acc, l))
    end
    (
      defp($comp_ex_core_0_1_15_2(l, acc, [])) do
        _ = l
        acc
      end
      defp($comp_ex_core_0_1_15_2(l, acc, [x | list]) when true) do
        _ = l
        acc = [x * 2 | acc]
        $comp_ex_core_0_1_15_2(l, acc, list)
      end
      defp($comp_ex_core_0_1_15_2(l, acc, [_ | list])) do
        $comp_ex_core_0_1_15_2(l, acc, list)
      end
    )
  )
)
That should be fairly optimal Erlang (after the usual optimization passes), if my memory does not fail me. I cannot run the generated code through the formatter because Elixir is not homoiconic and its AST can represent things that the language itself cannot, so the above is just Elixir's "best effort" at outputting valid Elixir (which failed in this case; it should have unquoted the generated names and so forth).
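If you strip away the generated names, the map*2 case above is doing the same thing as this hand-written tail-recursive accumulator (a readable sketch of the pattern only, not the macro's literal output; the module and function names here are just for illustration):
defmodule ReadableSketch do
  # Entry point: start with an empty accumulator.
  def map_times_two(l), do: do_map(l, [])

  # Prepend x * 2 for each element, then do a single :lists.reverse/1 at the
  # very end to restore the original order, the same shape as the generated code.
  defp do_map([], acc), do: :lists.reverse(acc)
  defp do_map([x | rest], acc), do: do_map(rest, [x * 2 | acc])
end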
The basic syntax is as follows (a combined example follows the block):
comp do # Standard body wrapping
  # The body can contain a variety of things, but there are 3 basic formats (`<>` marks required parts, `[]` marks optional ones):

  <match> [when guard] <- <cmd> <something>
  # Matches each element produced by the `something` expression, interpreted via `cmd`. There are a few
  # built-in commands like map/list/filter and such, with `Access` as a fallback for any iterable type, and
  # any custom user module that follows the Access-style calls works too (which is how the `Access`
  # fallback is implemented).
  # The `when` guard is optional of course; if the match (or guard) fails, that element is skipped and the
  # loop moves on, as you would expect.
  # If this is the last expression in the body, the value of this binding is what is returned for this iteration.

  [binding =] <expr>
  # A plain binding expression, not a filter like in Elixir's `for` (use the `filter` command for that).
  # If this is the last expression in the body, its value is returned, which is useful when you end with a bare
  # expression without a binding (though you can always have an expression purely for its side effects
  # elsewhere in the body).

  expr -> <type>
  # If present, this must be the last expression in the body (or it errors). The expression can be any
  # expression that returns a list (I need to support typing this better...) and it is returned 'as' the given
  # type (like Elixir's `for` with `into:`). You can specify an example of the type like `%{}` or `[]`, or its
  # typespec name like `map()` or `list()` (or a custom module). For types it knows about internally it will
  # generate optimized code.
end # Standard body ending
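Putting those three forms together, a combined call might look roughly like this (hypothetical, since the library is unpublished; I am assuming the guard and the `%{}` type marker behave exactly as described above):
comp do
  # Generator with an optional guard; non-integers are skipped per the match/guard rules above.
  x when is_integer(x) <- list [1, :skip, 2, 3]
  # Plain binding expression (not a filter).
  y = x + 10
  # Must be last: return the results 'as' a map.
  {x, y} -> %{}
end
# Expected (assuming the skip semantics described above): %{1 => 11, 2 => 12, 3 => 13}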
I much prefer this style for a large, body-style iteration. Though I still wish we had Erlang's comprehension syntax 'as well'; something like `[X * 2 || X <- L]` is SO much shorter and more readable (though very constrained in what it can do), so it would be perfect for 'most' use-cases, like the one a few posts above.