Really excited for this one, and I just love the focus on performance and stability the community has these days … it’s a real asset to the BEAM ecosystem to have the Elixir community banging away the way it is.
I have more ideas for optimisations directly in the compiler or the VM that would especially benefit Elixir, and some more that could be done in the Elixir compiler, but there are only 24 hours in a day.
I don’t really keep track of those. Maybe I should. Here are some of the things:
Most Enum calls return lists. Could we take advantage of that information and emit better code?
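To illustrate what I mean (a hypothetical sketch, not something the compiler does today):

```elixir
defmodule EnumReturnTypes do
  # Enum.map/2 is documented to always return a list, so in a pipeline
  # like this the compiler could, in principle, know `doubled` is a list:
  def double_and_sum(input) do
    doubled = Enum.map(input, &(&1 * 2))
    # With that knowledge, redundant is_list-style checks in code that
    # subsequently consumes `doubled` could be skipped.
    Enum.sum(doubled)
  end
end
```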
When you pattern match on a struct we know its type - can we use that to emit better code, especially if the struct is later updated with %Foo{foo | bar: :baz}?
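For example (Foo here is just a made-up struct for illustration):

```elixir
defmodule Foo do
  defstruct bar: nil, baz: nil
end

defmodule StructUpdate do
  # The match on %Foo{} already guarantees foo's __struct__ is Foo,
  # so the runtime check that the %Foo{foo | ...} update syntax performs
  # (raising if foo is not a %Foo{}) is redundant and could be elided.
  def set_bar(%Foo{} = foo) do
    %Foo{foo | bar: :baz}
  end
end
```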
Merge map accesses - foo.bar and foo.baz could be compiled to a single pattern match on foo.
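In other words, something like this (a sketch of the idea, ignoring the small difference in errors raised on missing keys):

```elixir
defmodule MapAccess do
  # Today this compiles to two separate map lookups:
  def sum_fields(foo), do: foo.bar + foo.baz

  # but it could be compiled as if written with a single pattern match:
  def sum_fields_merged(foo) do
    %{bar: bar, baz: baz} = foo
    bar + baz
  end
end
```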
Compile for comprehensions that don’t do any filtering into Enum.map instead of Enum.reduce.
Compile for comprehensions that discard their result into Enum.each instead of Enum.reduce with nil as the accumulator.
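The two for ideas above, sketched (the rewrites in the comments are what the compiler could emit, not what it emits today):

```elixir
defmodule ForRewrites do
  # No filters and the result is used: equivalent to Enum.map/2,
  # so it could compile to Enum.map(list, &(&1 * 2)).
  def doubles(list) do
    for x <- list, do: x * 2
  end

  # Result discarded: could compile to Enum.each(list, &IO.puts/1)
  # instead of Enum.reduce with a throwaway accumulator.
  def print_all(list) do
    for x <- list, do: IO.puts(x)
    :ok
  end
end
```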
Keep track of binary/bitstring types in beam_type.erl - this should improve, in some cases, the code Elixir emits for string interpolation.
Improve the cases where core passes can eliminate tuple allocations - e.g. the alternative compilation of with (the second pattern described here).
Propagate type information in beam_type.erl through jumps and function calls (this is a more general one that would benefit Erlang as well).
Allow the compiler to eliminate closures in some situations (that’s partially explained here).
Include the get_map_elements instruction in basic blocks - this allows optimising register usage - PR #1506.
Improve the sharing optimisation - this especially benefits the new way with is compiled - PR #1511.
Eliminate unnecessary stack allocations in map pattern matches - Issue #452.
Eliminate extra move instructions in binary pattern matches - Issue #444.
Those are the ones that come to mind right now. If I remember more or think of something else, I’ll post here.