Elixir v1.16.0-rc.0 released

Code snippets in diagnostics

Elixir v1.15 introduced a new compiler diagnostic format and the ability to print multiple error diagnostics per compilation (in addition to multiple warnings).

With Elixir v1.16, we also include code snippets in exceptions and diagnostics raised by the compiler. For example, a syntax error now includes a pointer to where the error happened:

** (SyntaxError) invalid syntax found on lib/my_app.ex:1:17:
    error: syntax error before: '*'
  1 │ [1, 2, 3, 4, 5, *]
    │                 ^
    └─ lib/my_app.ex:1:17

For mismatched delimiters, it now shows both delimiters:

** (MismatchedDelimiterError) mismatched delimiter found on lib/my_app.ex:1:18:
    error: unexpected token: )
  1 │ [1, 2, 3, 4, 5, 6)
    │ │                └ mismatched closing delimiter (expected "]")
    │ └ unclosed delimiter
    └─ lib/my_app.ex:1:18

Error and warning diagnostics also include code snippets. When possible, we show precise spans, such as on undefined variables:

  error: undefined variable "unknown_var"
5 │     a - unknown_var
  │         ^^^^^^^^^^^
  └─ lib/sample.ex:5:9: Sample.foo/1

Otherwise the whole line is underlined:

error: function names should start with lowercase characters or underscore, invalid name CamelCase
3 │   def CamelCase do
  │   ^^^^^^^^^^^^^^^^
  └─ lib/sample.ex:3
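These richer exceptions can also be observed programmatically. A minimal sketch, assuming `Code.string_to_quoted!/1` raises the new MismatchedDelimiterError for input like the example above:

```elixir
# Parsing a string with a mismatched closing delimiter now raises a
# dedicated exception (assumption based on the release notes above);
# Exception.message/1 includes the annotated snippet
try do
  Code.string_to_quoted!("[1, 2, 3, 4, 5, 6)")
rescue
  e -> IO.puts(Exception.message(e))
end
```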

A huge thank you to Vinícius Muller for working on the new diagnostics.

Revamped documentation

Elixir’s Getting Started guide has been moved into the Elixir repository and incorporated into ExDoc. This was an opportunity to revisit and unify all official guides and references.

We have also incorporated and extended the work on Understanding Code Smells in Elixir Functional Language, by Lucas Vegi and Marco Tulio Valente, from ASERG/DCC/UFMG, into the official documentation in the form of anti-patterns. The anti-patterns are divided into four categories: code-related, design-related, process-related, and meta-programming. Our goal is to give all developers examples of potential anti-patterns, with context and examples of how to improve their codebases.

Another ExDoc feature we have incorporated in this release is the addition of cheatsheets, starting with a cheatsheet for the Enum module. If you would like to contribute future cheatsheets to Elixir itself, feel free to start a discussion with an issue.

Finally, we have started enriching our documentation with Mermaid.js diagrams. You can find examples in the GenServer and Supervisor docs.

v1.16.0-rc.0 (2023-10-31)

1. Enhancements


  • [EEx] Include relative file information in diagnostics


  • [Code] Automatically include columns in parsing options
  • [Code] Introduce MismatchedDelimiterError for handling mismatched delimiter exceptions
  • [Code.Fragment] Handle anonymous calls in fragments
  • [Code.Formatter] Trim trailing whitespace on heredocs with \r\n
  • [Kernel] Suggest module names based on suffix and casing errors when the module does not exist in UndefinedFunctionError
  • [Kernel.ParallelCompiler] Introduce Kernel.ParallelCompiler.pmap/2 to compile multiple additional entries in parallel
  • [Kernel.SpecialForms] Warn if True/False/Nil are used as aliases and there is no such alias
  • [Macro] Add Macro.compile_apply/4
  • [Module] Add support for @nifs annotation from Erlang/OTP 25
  • [Module] Add support for missing @dialyzer configuration
  • [String] Update to Unicode 15.1.0
  • [Task] Add :limit option to Task.yield_many/2


  • [mix] Add MIX_PROFILE to profile a list of comma separated tasks
  • [mix compile.elixir] Optimize scenario where there are thousands of files in lib/ and one of them is changed
  • [mix test] Allow testing multiple file:line at once, such as mix test test/foo_test.exs:13 test/bar_test.exs:27
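As a quick sketch of the new :limit option on Task.yield_many/2 (the option name comes from the changelog entry above; the timeout value here is arbitrary):

```elixir
tasks = for i <- 1..5, do: Task.async(fn -> i * i end)

# Return as soon as at least two tasks have replied,
# instead of waiting on all five (or the full timeout)
replies = Task.yield_many(tasks, limit: 2, timeout: 5_000)
```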

2. Bug fixes


  • [Code.Fragment] Fix crash in Code.Fragment.surround_context/2 when matching on ->
  • [IO] Raise when using IO.binwrite/2 on terminated device (mirroring IO.write/2)
  • [Kernel] Do not expand aliases recursively (the alias stored in Macro.Env is already expanded)
  • [Kernel] Ensure dbg module is a compile-time dependency
  • [Kernel] Warn when a private function or macro uses unquote/1 and the function/macro itself is unused
  • [Kernel] Do not define an alias for nested modules starting with Elixir. in their definition
  • [Kernel.ParallelCompiler] Consider a module has been defined in @after_compile callbacks to avoid deadlocks
  • [Path] Ensure Path.relative_to/2 returns a relative path when the given argument does not share a common prefix with cwd


  • [ExUnit] Raise on incorrectly dedented doctests


  • [Mix] Ensure files with duplicate modules are recompiled whenever any of the files change

3. Soft deprecations (no warnings emitted)


  • [File] Deprecate File.stream!(file, options, line_or_bytes) in favor of keeping the options as last argument, as in File.stream!(file, line_or_bytes, options)
  • [Kernel.ParallelCompiler] Deprecate Kernel.ParallelCompiler.async/1 in favor of Kernel.ParallelCompiler.pmap/2
  • [Path] Deprecate Path.safe_relative_to/2 in favor of Path.safe_relative/2
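For the File.stream!/3 change, the migration is just an argument swap. A small sketch (file name hypothetical):

```elixir
# Write a throwaway file so the stream has something to read
File.write!("data.txt", "hello\nworld\n")

# Old (soft-deprecated): File.stream!("data.txt", [], :line)
# New: line_or_bytes comes second, options come last
lines =
  "data.txt"
  |> File.stream!(:line, [])
  |> Enum.map(&String.trim/1)

# lines == ["hello", "world"]
```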

4. Hard deprecations


  • [Date] Deprecate inferring a range with negative step, call Date.range/3 with a negative step instead
  • [Enum] Deprecate passing a range with negative step on Enum.slice/2, give first..last//1 instead
  • [Kernel] ~R/.../ is deprecated in favor of ~r/.../. This is because ~R/.../ still allowed escape codes, which did not fit the definition of uppercase sigils
  • [String] Deprecate passing a range with negative step on String.slice/2, give first..last//1 instead


  • [ExUnit.Formatter] Deprecate format_time/2, use format_times/1 instead


  • [mix compile.leex] Require :leex to be added as a compiler to run the leex compiler
  • [mix compile.yecc] Require :yecc to be added as a compiler to run the yecc compiler
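For the range deprecations above, the replacements look roughly like this (example values are mine):

```elixir
# A range like 3..1 infers a negative step, which String.slice/2 now
# deprecates; pass an explicit step with first..last//step instead
"ell" = String.slice("hello", 1..3//1)

# Instead of Date.range/2 inferring a negative step,
# pass the negative step explicitly to Date.range/3
range = Date.range(~D[2024-01-05], ~D[2024-01-01], -1)
5 = Enum.count(range)
```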

For people reading left-to-right (most of literate humanity, I’d argue), this reads like “replies come before requests, WTF?”.

My Mermaid.js is rusty so I can’t offer a PR at the moment (plus I don’t have much free time, if any), but I’d suggest the diagram be redone so it gives a clearer impression that (1) several clients send requests in parallel, (2) they get replies, and (3) replies come one by one and never in parallel.


Is there more information on the @nifs attribute? Module — Elixir v1.16.0-rc.0

If I am reading the Erlang documentation right, this is supposed to be specified like the following:

defmodule Thing do
  @nifs [foo: 1, bar: 2]

  def foo(_thing1), do: :erlang.nif_error(:not_loaded)

  def bar(_thing1, _thing2), do: :erlang.nif_error(:not_loaded)
end

Also worth capturing that while sending requests may appear to happen in parallel, they actually get serialized to “near the end” of the process message box through a clever lock-free algorithm in the BEAM. Furthermore, the BEAM punishes callers when a process/GenServer message box grows large by reducing their scheduling reduction count, which in effect creates a kind of back pressure so that existing queued work can be processed ahead of new work.

The subsequent processing of those messages within the GenServer is also serial: once a message is received by the GenServer process, all further message processing is blocked while the GenServer is busy doing work, waiting on IO, or otherwise detained from servicing the message box in the receive call within the GenServer “runloop”.

Hence, with any non-trivial GenServer processing, it is typical to use a “hot potato” approach and spawn/dispatch the actual request processing to yet another process, so as to allow the GenServer process to get back to servicing the message queue as fast as possible.

If you don’t reduce the time spent between handle_* callbacks and receive, then the latency experienced by clients of the GenServer increases in direct proportion to every instruction/reduction spent outside of receive.

This is why we use :noreply in handle_call to allow returning to receive, and reply from another process, such as a Task, that actually does the work.
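A minimal sketch of that pattern (module and function names are illustrative):

```elixir
defmodule Work.Server do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call({:work, input}, from, state) do
    # Hand the slow work to another process ("hot potato") so the
    # GenServer loop can return to receive immediately...
    Task.start(fn ->
      # stand-in for the real work
      result = input * 2
      # ...and reply from that process once the work is done
      GenServer.reply(from, result)
    end)

    {:noreply, state}
  end
end
```

The caller still makes an ordinary GenServer.call/2; only the server side changes. It just no longer blocks the GenServer loop while the work runs.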


Yeah, I do scatter-gather processing like this often (and not only in Elixir). The GenServer becomes a bottleneck that is used both for back pressure and to enforce rate limits (in case of having to use a 3rd-party API with quotas, for example). And then the GenServer delegates the actual work to Tasks spawned by a DynamicSupervisor. Sometimes I also put an upper limit on those spawned tasks, but I have rarely found a need for it because the BEAM is extremely tolerant of a huge number of processes. The limits were dictated, 100% of the time, by an actual resource on the machine that can’t take too much hammering (like a hard drive) or by external API rate limits.

Yes, it’s kind of a tension between requests that have been accepted by the GenServer and converted to process contexts sitting in a scheduling queue, versus requests not yet accepted sitting in the message queue.

At the point the BEAM schedulers are fully saturated, you just need to ensure there is enough “process work” queued to minimise stalls. That kind of suggests it’s better to convert process messages into processes when you can, to avoid stalling, but not beyond the inherent capacity of the system (back pressure is necessary in this case).

Back pressure, and indeed fail-fast semantics for external callers that are inducing more load than the available resources can service, are also not that well understood.

Failure to fail fast and handle these conditions is an architectural flaw I see in many systems. It results in latencies spiking while network buffers and memory use blow out in each hop/tier of the application, often leading to hosts crashing with out-of-memory errors. I am mostly talking about typical non-Erlang systems here, such as Java and “service bus” architectures, which I’ve found are fragile as fsck when there is a hiccup.

Failing fast and refusing work, so that memory, CPU, and network pressure are capped at what can actually be serviced, is how we save systems; not pretending we can absorb unbounded demand, which leads to unintended queues manifesting in the strangest of places in an application architecture. Typically this afflicts non-Erlang/Elixir systems, as the BEAM forces you to deal with these things in a sound way using processes and message passing, rather than the queuing being obfuscated in multi-threaded hell.


I know! I couldn’t make the diagram behave otherwise, so we need to decide to either remove it or keep it, warts and all.

If someone wants to try a PR, it will be very welcome. I have dealt with enough nits this week but I am also glad to remove it if we all consider it more misleading than helpful. :smiley:


Yes, correct, I will add an example. :slight_smile:


FWIW, this has been removed for a while.


I have mixed feelings about cheatsheets. The more documentation, the better. But doesn’t the existing cheatsheet just duplicate examples from @doc? Personally, I can’t tell when a code snippet should go into a cheatsheet rather than into the docs of the function.

Both. They are meant to be consumed differently. Cheatsheets are meant to provide a quick glance at how to use a function or an API. Docs provide several examples, context, etc., which makes them harder to scroll and/or glance through.


I believe this is an illustrative image with a simplified view, but it is useful, so you should leave it. Maybe add a small note saying exactly that: it’s a simplified version of how it works in reality.

OK, thanks José, I’ll take your word for it.

Do you know if it was a recent BEAM change? My reading indicated it still seemed to be a feature in 2019: senders can still lose reductions in some circumstances, potentially with remote message sends.

It has been a while, I would read the major release notes here: Otp 25.0 - Erlang/OTP, Otp 24.0 - Erlang/OTP, etc.

There is a penalty based on the message size but no longer on the message queue of the receiving process.


I think this is a cool, visually styled cheatsheet for Enum by @angelikatyborska:


Found where they removed that, it was in OTP 21.

The reduction cost of sending messages is now constant. It will no longer scale according to the length of the receiving process’ message queue.

However, there was a change added (exactly as you said) in OTP 22 to punish senders of large messages:


Processes sending messages are now punished with a reduction cost based on message size. That is, a process sending a large message will yield earlier than before.


I think of docs vs. cheatsheets like man pages vs. a program like tldr (funny, that project even has the word ‘cheatsheet’ in its description).

The docs give a broad, deep overview of every aspect of what a module is and how it works, but a cheatsheet is for when you know what something is and just need a few quick examples of how to use it. Which, for me, is more often what I’m searching for.

Cheatsheets are hugely underrated IMO and I would love to see them in more places. (I want to use them more myself FWIW, I’m not just asking for free work from others. :stuck_out_tongue:)


Running 1.16 in a couple of places now; smooth sailing as usual, from Raspberry Pis to my laptop and the cloud.

Just needed one change in one Phoenix app: in one HEEx template I was calling a function without parentheses, and that fails to compile now :slight_smile:


If you renumbered Client 3 to Client 1 and vice versa, the overall inversion of the LTR ordering (to RTL) would be inferable from the numbering, and with the new RTL orientation, the request/response ordering would be correct. But whether the diagram-wide inversion of LTR orientation is equally or more confusing than the partial (request/response) orientation mismatch… :person_shrugging: