Elixir v1.20.0-rc.2 and v1.20.0-rc.3 released

Overall, the compiler finds more bugs, for free, and it has never been faster:

  • Infers types across clauses, finding more bugs and dead code

  • Compiles ~10% faster and has a new interpreted mode (up to 5x faster, scaling with the number of cores). For more information, follow the benchmarks

  • Modifying a struct definition recompiles fewer files (it no longer requires files that only pattern match or update structs to recompile)

1. Enhancements

Elixir

  • [Code] Add module_definition: :interpreted option to Code which allows module definitions to be evaluated instead of compiled. In some applications/architectures, this can lead to drastic improvements to compilation times. Note this does not affect the generated .beam file, which will have the same performance/behaviour as before
  • [Code] Make module purging opt-in and move temporary module deletion to the background to speed up compilation times
  • [Integer] Add Integer.popcount/1
  • [Kernel] Move struct validation in patterns and updates to the type checker; this means adding and removing struct fields will cause fewer files to be recompiled
  • [Kernel] Add type inference across clauses. For example, if one clause has the guard x when is_integer(x), the type checker knows x may no longer be an integer in the clauses that follow
  • [Kernel] Detect and warn on redundant clauses
  • [List] Add List.first!/1 and List.last!/1
  • Add Software Bill of Materials guide to the Documentation
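
The cross-clause inference and redundant-clause detection above can be sketched with a small module (the module and function names here are made up for illustration; the exact warning text depends on the release):

```elixir
defmodule ClauseDemo do
  # After this clause, the type checker infers that any `x` reaching
  # the clauses below can no longer be an integer.
  def describe(x) when is_integer(x), do: "integer"
  def describe(x) when is_binary(x), do: "string"

  # A later clause guarded by is_integer/1 again could never match,
  # so v1.20 would flag it as redundant:
  #
  #   def describe(x) when is_integer(x), do: "unreachable"
end
```

Calling `ClauseDemo.describe(1)` or `ClauseDemo.describe("a")` behaves exactly as before; the new checks only change what the compiler can prove and warn about.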

Mix

  • [mix compile] Support the module_definition: :interpreted option described above under Code, allowing module definitions to be evaluated instead of compiled. In some applications/architectures, this can lead to drastic improvements to compilation times without affecting the generated .beam files
  • [mix deps] Parallelize dep lock status checks during deps.loadpaths, improving boot times in projects with many git dependencies
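
Based on the entries above, the interpreted mode could presumably be enabled project-wide through compiler options. This is a sketch under that assumption (the exact option surface for module_definition: :interpreted should be checked against the v1.20 docs; the app name is hypothetical):

```elixir
# mix.exs — a sketch, assuming the option is exposed via elixirc_options
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      elixir: "~> 1.20-rc",
      # Evaluate module definitions instead of compiling them.
      # Per the changelog, the generated .beam files are unaffected.
      elixirc_options: [module_definition: :interpreted],
      deps: []
    ]
  end
end
```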

2. Potential breaking changes

Elixir

  • map.foo() (accessing a map field with parens) and mod.foo (invoking a function without parens) will now raise instead of emitting runtime warnings, aligning them with the type system behaviour
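
A minimal sketch of the two forms affected (values are made up; the commented-out lines are the ones that previously only warned at runtime and now raise):

```elixir
mod = System
mod.version()    # fine: remote zero-arity call with parens
# mod.version    # invoking a function without parens: now raises

map = %{version: "1.20.0-rc.2"}
map.version      # fine: map field access without parens
# map.version()  # accessing a map field with parens: now raises
```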

3. Bug fixes

IEx

  • [IEx] Ensure warnings emitted during IEx parsing are properly displayed/printed
  • [IEx] Ensure pry works across remote nodes

Mix

  • [mix compile.erlang] Sort Erlang modules topologically before compilation for proper dependency resolution

As with previous RCs, give it a try and let us know about compilation times and any false or unclear warnings!


Files: 2994, Lines: 412276, Code: 349918, Comments: 6592
CPU: Apple M1 Max (10-cores), 32GB RAM
MIX_OS_DEPS_COMPILE_PARTITION_COUNT not set

Elixir 1.20.0-rc.2-otp-28

mix deps.compile 312.59s user 60.35s system 226% cpu 2:44.85 total
mix compile --force 120.38s user 22.69s system 429% cpu 33.347 total

Elixir 1.19.5-otp-28

mix deps.compile 286.97s user 57.77s system 241% cpu 2:22.93 total
mix compile --force 114.93s user 21.28s system 414% cpu 32.892 total


All run with Erlang 28.4, using time mix compile --force --profile time

1.19.5-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 492 modules in 8234ms
[profile] Finished writing modules to disk in 53ms
[profile] Finished after compile callback in 1095ms
[profile] Finished group pass check of 492 modules in 201ms
Executed in   36.69 secs    fish           external
   usr time   41.91 secs    0.42 millis   41.91 secs
   sys time    7.75 secs    4.52 millis    7.74 secs

1.20.0-rc.0

Application does not compile

1.20.0-rc.1-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 492 modules in 8444ms
[profile] Finished writing modules to disk in 50ms
[profile] Finished after compile callback in 1193ms
[profile] Finished group pass check of 492 modules in 212ms
Executed in   38.67 secs    fish           external
   usr time   42.52 secs    0.44 millis   42.51 secs
   sys time    8.45 secs    4.03 millis    8.45 secs

1.20.0-rc.2-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 492 modules in 6833ms
[profile] Finished writing modules to disk in 40ms
[profile] Finished after compile callback in 344ms
[profile] Finished group pass check of 492 modules in 322ms
Executed in   34.96 secs    fish           external
   usr time   26.77 secs    0.29 millis   26.77 secs
   sys time    5.98 secs    4.01 millis    5.98 secs

@josevalim Type checker crash: FunctionClauseError in Module.Types.Pattern.badpattern/2 (rc.2) · Issue #15131 · elixir-lang/elixir · GitHub


Already running it in prod; my claude.md file in my monorepo has guardrails for tests and now for compile-time type errors. Thank you!

Hey! I ran into a compilation issue with open_api_spex on rc.2 that doesn’t happen on rc.1.

On rc.1 the whole thing compiles fine in ~29s. On rc.2 (and current main at 6fd161d) it just hangs forever on two modules: cast.ex and deprecated_cast.ex. I waited 5+ minutes before giving up and killing it. The compiler prints the “it’s taking more than 10s” message for both and never moves past them.

Both modules have a ton of function clauses (~20+) doing pattern matching on nested structs and maps with overlapping shapes, stuff like:

def cast(%__MODULE__{value: nil, schema: %{nullable: true}}), do: ...
def cast(%__MODULE__{value: nil, schema: %{nullable: false}} = ctx), do: ...
def cast(%__MODULE__{value: nil, schema: %{oneOf: list}} = ctx) when is_list(list), do: ...
def cast(%__MODULE__{schema: %{type: :object}} = ctx), do: ...
def cast(%__MODULE__{schema: %{type: :string}} = ctx), do: ...
# ... many more
def cast(%{} = ctx), do: cast(struct(__MODULE__, ctx))

My guess is the type checker is getting tripped up by all these overlapping map/struct patterns and the compile time is blowing up as a result.


Phew, there was an extreme regression in compiler performance for the project I used to evaluate the last RC

timestamp             elixir_version      stage     partition     seconds
--------------------  ------------------  --------  ------------  -------
2026-03-05T13:39:45Z  1.18.4-otp-28       deps      unset         77.32
2026-03-05T13:39:58Z  1.18.4-otp-28       project   default       12.77
2026-03-05T13:41:12Z  1.19.3-otp-28       deps      unset         74.10
2026-03-05T13:41:56Z  1.19.3-otp-28       deps      2             43.29
2026-03-05T13:42:29Z  1.19.3-otp-28       deps      4             33.32
2026-03-05T13:42:43Z  1.19.3-otp-28       project   default       13.82
2026-03-05T13:43:53Z  1.19.5-otp-28       deps      unset         70.23
2026-03-05T13:44:35Z  1.19.5-otp-28       deps      2             42.40
2026-03-05T13:45:08Z  1.19.5-otp-28       deps      4             32.47
2026-03-05T13:45:21Z  1.19.5-otp-28       project   default       13.20
2026-03-05T13:46:33Z  1.20.0-rc.1-otp-28  deps      unset         72.00
2026-03-05T13:47:16Z  1.20.0-rc.1-otp-28  deps      2             42.63
2026-03-05T13:47:49Z  1.20.0-rc.1-otp-28  deps      4             33.29
2026-03-05T13:48:03Z  1.20.0-rc.1-otp-28  project   default       13.74
2026-03-05T13:49:11Z  1.20.0-rc.2-otp-28  deps      unset         68.06
2026-03-05T13:49:53Z  1.20.0-rc.2-otp-28  deps      2             42.51
2026-03-05T13:50:26Z  1.20.0-rc.2-otp-28  deps      4             32.89
2026-03-05T13:53:14Z  1.20.0-rc.2-otp-28  project   default       167.24

I used time mix compile --force --profile time to get an idea of the cause. One file containing ~35 objects and input objects for a GraphQL API jumped from ~220ms to 74s. The file doesn’t contain any macros that would explain the massive increase in compile time. Is there anything I can do to get a better idea of what is causing this?

I didn’t check out the warnings yet, because I think this is a more glaring issue

Edit: Sorry, this was not supposed to be a reply


Yes, some of the new checks are expensive and we got bug reports; we are looking into optimizing them right now. :slight_smile:


We have released v1.20.0-rc.3 with many performance improvements around the type system. See the CHANGELOG below.

For those using Absinthe, I have created this pull request which I recommend using until v1.9.1 is out: perf: Break function dispatch into groups by josevalim · Pull Request #1414 · absinthe-graphql/absinthe · GitHub
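
For context, my understanding of the idea behind that pull request (this is a hand-written illustration, not the actual Absinthe patch): rather than one function with many overlapping clauses, dispatch on a discriminating field first, so the type checker verifies several small clause groups instead of one large combinatorial one.

```elixir
defmodule GroupedDispatch do
  # Top level: one small set of clauses keyed on the discriminator.
  def cast(%{type: :object} = node), do: cast_object(node)
  def cast(%{type: :string} = node), do: cast_string(node)

  # Each group is now an independent, cheap-to-check function.
  defp cast_object(%{fields: fields}) when is_map(fields), do: {:ok, fields}
  defp cast_object(_node), do: :error

  defp cast_string(%{value: value}) when is_binary(value), do: {:ok, value}
  defp cast_string(_node), do: :error
end
```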


1. Enhancements

IEx

  • [IEx] Optimize autocompleting modules

2. Bug fixes

Elixir

  • [Enum] Fix Enum.slice/2 for ranges with step > 1 sliced by step > 1
  • [File] Preserve directory permissions in File.cp_r/3
  • [File] Fix File.cp_r/3 infinite loop with symlink cycles
  • [File] Fix File.cp_r/3 infinite loop when copying into subdirectory of source
  • [File] Warn when defining @type record(), fixes CI on Erlang/OTP 29
  • [File] Fix File.Stream Enumerable.count for files without trailing newline
  • [Float] Fix Float.parse/1 inconsistent error handling for non-scientific notation overflow
  • [Kernel] Process fields even when structs are unknown (regression)
  • [Kernel] Improve performance on several corner cases in the type system (regression)
  • [Kernel] Fix regression when using Kernel.in/2 in defguard (regression)

Apple M1 Max (10-cores), MIX_OS_DEPS_COMPILE_PARTITION_COUNT not set

Elixir 1.19.5-otp-28 + Erlang 28.4
mix deps.compile --force 284.31s user 54.21s system 231% cpu 2:25.97 total
mix compile --force 114.06s user 20.10s system 416% cpu 32.182 total

Elixir 1.20.0-rc.3-otp-28 + Erlang 28.4
mix deps.compile --force 235.06s user 39.24s system 202% cpu 2:15.61 total
mix compile --force 87.31s user 15.89s system 377% cpu 27.335 total

:exploding_head:


1.19.5-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 499 modules in 8490ms
[profile] Finished writing modules to disk in 49ms
[profile] Finished after compile callback in 972ms
[profile] Finished group pass check of 499 modules in 203ms
Executed in   10.29 secs    fish           external
   usr time   42.28 secs    0.27 millis   42.28 secs
   sys time    7.64 secs    2.13 millis    7.64 secs

1.20.0-rc.2-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 499 modules in 7269ms
[profile] Finished writing modules to disk in 25ms
[profile] Finished after compile callback in 307ms
[profile] Finished group pass check of 499 modules in 235ms
Executed in    8.46 secs    fish           external
   usr time   25.73 secs    0.32 millis   25.73 secs
   sys time    4.49 secs    2.26 millis    4.49 secs

1.20.0-rc.3-otp-28

[profile] Finished cycle resolution in 0ms
[profile] Finished compilation cycle of 499 modules in 7055ms
[profile] Finished writing modules to disk in 28ms
[profile] Finished after compile callback in 250ms
[profile] Finished group pass check of 499 modules in 231ms
Executed in    8.09 secs    fish           external
   usr time   25.52 secs    0.27 millis   25.52 secs
   sys time    4.48 secs    1.97 millis    4.48 secs

The times I reported previously also included the time to compile the dependencies, while these do not; I’m not sure which is more correct/helpful.


Performance has definitely improved from rc.2 for the project I’m benchmarking against.
Previous values above

timestamp             elixir_version      stage     partition     seconds
--------------------  ------------------  --------  ------------  -------
2026-03-10T09:04:59Z  1.20.0-rc.3-otp-28  deps      unset         68.38
2026-03-10T09:05:41Z  1.20.0-rc.3-otp-28  deps      2             41.69
2026-03-10T09:06:13Z  1.20.0-rc.3-otp-28  deps      4             31.59
2026-03-10T09:06:29Z  1.20.0-rc.3-otp-28  project   default       16.25
--------------------  ------------------  --------  ------------  -------
2026-03-10T09:08:03Z  1.20.0-rc.3-otp-28  deps      unset         69.89
2026-03-10T09:08:45Z  1.20.0-rc.3-otp-28  deps      2             42.17
2026-03-10T09:09:17Z  1.20.0-rc.3-otp-28  deps      4             32.01
2026-03-10T09:09:34Z  1.20.0-rc.3-otp-28  project   default       17.05

Also I should have done so earlier, but here’s the script I use for creating these tables with ease.


Did you use the GraphQL patch mentioned above? That should bring further improvements. On another project we saw compilation time go from 15s to 10s with the GraphQL patch!


My bad, I didn’t notice the pull request in the comment. There is a huge difference after switching to that PR in both rc.2 and rc.3 :heart:

timestamp             elixir_version      stage     partition     seconds
--------------------  ------------------  --------  ------------  -------
2026-03-10T14:22:43Z  1.18.4-otp-28       project   default       15.11
2026-03-10T14:25:25Z  1.19.3-otp-28       project   default       13.61
2026-03-10T14:28:04Z  1.19.5-otp-28       project   default       13.03
2026-03-10T14:30:46Z  1.20.0-rc.1-otp-28  project   default       13.47
2026-03-10T14:33:19Z  1.20.0-rc.2-otp-28  project   default       12.28
2026-03-10T14:35:50Z  1.20.0-rc.3-otp-28  project   default       10.39

elixir 1.19.3-otp-28 => 1 minutes and 45 seconds elapsed.

elixir 1.20.0-rc.3-otp-28 => 2 minutes and 39 seconds elapsed.

The main culprit for the slowdown:
Verifying OpenApiSpex.Cast.Object (it's taking more than 10s)

(version {:open_api_spex, "~> 3.22.2"})


Please open up an issue if you can isolate it! It may be as simple as putting that file in a separate project. And it will be very important for us to push further improvements to the type system.

I created an issue and a PR (the PR does not improve compile times; it just reduces the number of compiler complaints)

Tested open_api_spex:

elixir 1.19.5-otp-28      -> 0 minutes and 5 seconds elapsed.
elixir 1.20.0-rc.3-otp-28 -> 0 minutes and 36 seconds elapsed.

Created an issue here: 701
(will try to further investigate over the weekend - but mainly I think longer compile times are due to macros)

Script used to calculate compile times:

#!/bin/bash

# clean the current build directory
rm -fr ./_build
rm -fr deps

mix deps.get

SECONDS=0
MIX_ENV=prod mix compile --no-optional-deps
duration=$SECONDS
echo "$((duration / 60)) minutes and $((duration % 60)) seconds elapsed."

Thank you! I did benchmark open_api_spex but for some reason it compiles much faster here. I will try to reproduce the large slowdown.