Code coverage tools for Elixir?


I have a test suite and I need to know the coverage of the project.
I have played around with mix test --cover, but I find the native Erlang coverage analysis tool to be insufficient at best.

The native coverage tool reports neither branch coverage nor function coverage. Its only metric seems to be “relevant lines”, and I have no idea how those are calculated. For all I know, this is just the most basic form of test coverage: checking whether a given line of text was executed.

What have you tried?

I have tried Coverex, but the results were disastrous. Not only does it suffer from the same issues as the native tool, it also doesn’t seem to produce correct results, as it counts imported modules as untested.

Or maybe it is doing a great job and my code is poorly tested, but I can’t know for sure because it doesn’t tell me how it evaluates my code. Have 40% coverage in a file? What am I missing? I can’t know; the tool won’t tell me.

I am now using ExCoveralls. It is considerably better than the previous options and lets me easily configure which folders to ignore, but it uses the native coverage tool under the hood, so it suffers from pretty much the same issues.
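For reference, a typical ExCoveralls setup looks roughly like this (a sketch; the app name, version, and dependency range are placeholders):

```elixir
# mix.exs — sketch; :my_app and the version numbers are placeholders
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      # route coverage through ExCoveralls instead of the default tool
      test_coverage: [tool: ExCoveralls],
      preferred_cli_env: [coveralls: :test, "coveralls.html": :test],
      deps: [{:excoveralls, "~> 0.18", only: :test}]
    ]
  end
end
```

The folders to ignore go into a coveralls.json file at the project root, under the "skip_files" key.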

What do you want?

I was hoping to find something along the lines of Istanbul, or in this case nyc:

Its test coverage analysis tells me everything I need to know, metrics and all:

Branches, Functions, Lines, Statements, everything you need to know is there.


  1. Is there any tool that provides Istanbul-style code coverage metrics for Elixir instead of using the native Erlang one?
  2. If not, is there a way to configure the native coverage tool to give me more information?
  3. Which metrics does the native coverage tool use?

No, as IstanbulJS is ECMAScript-specific. There is no “general purpose” coverage analysis tool, as this kind of metric is highly language/platform dependent.

You can check out the :cover module to see what you can get out of it. As far as I know, it is the only coverage tool available.

I already did. That module is what I refer to as the native coverage tool, and all the drawbacks I describe are actually drawbacks of :cover, which is why I was trying to find an alternative.

So sad this is the only choice we have …

Have you tried


The point is that in Erlang there is not much difference between statements and lines, so I would roughly say that line = statement. The second stat is branches, which also does not make much sense, as most of the time Erlang will use pattern matching, so it will be split into two categories:

  • function clauses
  • lines

Both of these are available in the :cover module.

Functions, as well as modules, are available in :cover, so that shouldn’t be much of an issue either.
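To illustrate, querying :cover directly from an iex -S mix session looks roughly like this (a sketch; MyApp.Sample is a placeholder for one of your own modules, which must be cover-compiled first):

```elixir
# Sketch: module- and function-level analysis with Erlang's :cover.
# MyApp.Sample is a hypothetical module name.
:cover.start()
{:ok, _} = :cover.compile_beam(MyApp.Sample)

# ... exercise the module here, e.g. by calling its functions or running tests ...

# {covered, not_covered} line counts, broken down per function
{:ok, per_function} = :cover.analyse(MyApp.Sample, :coverage, :function)

# and the same aggregated per module
{:ok, {_mod, {_covered, _not_covered}}} = :cover.analyse(MyApp.Sample, :coverage, :module)
```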

So, as you can see, these metrics either do not make sense or are already supported by Erlang’s :cover.


No. Lines are not statements.

Second stat is branch which also do not make much sense, as most of the time Erlang will use pattern match,

No, branches !== pattern matching. A single if can have several branches. This has nothing to do with pattern matching or multi-clause functions.

The fact that in Elixir you can have multi-clause functions thanks to pattern matching, and thereby avoid long if or cond statements, is a nice feature. But pattern matching together with guards can only take you so far, and even though all multi-clause functions can be converted to if counterparts, the reverse is not possible.

I recommend you dive deeper into the world of coverage metrics. Only then will you understand that they do make sense.


Erlang does not have a notion of “statement” in its .beam files; only lines are stored. While in theory you could use the 'Abst' chunk to extract “statements”, it would be infeasible due to macros (especially in Elixir), and even then there is no mapping between 'Abst' values and 'Code' segments (only 'Line' has such a relation).

So unless you write an EEP to introduce such a connection between “statements” and the 'Code' chunk, it is not possible with the current VM implementation.

Yes, usually (and in Elixir always) 2.

It is possible, but not always feasible.

Any code of the form:

if a, do: b, else: c

can be written as:

def do_branch(true), do: b
def do_branch(false), do: c


And it doesn’t matter what a is.
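As a runnable illustration of that rewrite (with placeholder values :b and :c):

```elixir
defmodule BranchDemo do
  # multi-clause equivalent of `if a, do: :b, else: :c`
  def do_branch(true), do: :b
  def do_branch(false), do: :c
end

# the truth value of `a` selects the clause
BranchDemo.do_branch(1 > 0)
```

One caveat: `if` treats any non-false, non-nil value as truthy, so a faithful rewrite has to normalize the condition first (e.g. pass `!!a`, or add a catch-all clause).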


This is interesting. It appears that even though no .beam file is created, you are still correct!

I would agree with you if guards weren’t limited. The limitations of guards directly imply that some conditions cannot be converted into the pattern-matching form. For example, Map.has_key?/2 can’t be used in a guard. If you have

if Map.has_key?(a, :key), do: b, else: c

then you are stuck.

Interesting discussion overall, can’t wait to see where it leads!

Conclusions so far:

  1. Differentiating between lines and statements makes no sense in Erlang/Elixir, because the BEAM doesn’t have the concept of statements. For the BEAM, every line is a statement.

FWIW the current cover tool does not use the assembly stored in .beam files. What it does is to retrieve the Erlang AST for the module, transform it and recompile.

The transformation consists primarily of introducing calls to :ets.update_counter/3 each time the line annotation of the AST changes. The analysis is just counting how many lines could be called vs how many were actually called.

An Elixir version that works on the Elixir AST in a similar way wouldn’t be extremely hard to do. It could also inject the counter calls for each branch, etc., to achieve the desired coverage.
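To make the idea concrete, here is a minimal sketch (not how :cover actually works internally, and with a hypothetical module and table name) that walks a quoted expression with Macro.postwalk/2 and bumps an ETS counter for every call node that carries a :line annotation:

```elixir
defmodule CoverSketch do
  @table :cover_sketch_lines

  # Walk the AST bottom-up; every call node carrying a :line annotation
  # gets prefixed with a counter bump for that line, mirroring how
  # :cover splices :ets.update_counter calls into the Erlang AST.
  def instrument(ast) do
    Macro.postwalk(ast, fn
      {name, meta, args} = node when is_atom(name) and is_list(args) ->
        case Keyword.fetch(meta, :line) do
          {:ok, line} ->
            quote do
              :ets.update_counter(unquote(@table), unquote(line), 1, {unquote(line), 0})
              unquote(node)
            end

          :error ->
            node
        end

      node ->
        node
    end)
  end

  # Evaluate an instrumented AST; return {result, [{line, hits}]}.
  def run(ast) do
    if :ets.whereis(@table) == :undefined do
      :ets.new(@table, [:named_table, :public])
    end

    {result, _bindings} = ast |> instrument() |> Code.eval_quoted()
    {result, Enum.sort(:ets.tab2list(@table))}
  end
end

# Code.string_to_quoted!/1 keeps :line annotations (a bare quote does not)
ast = Code.string_to_quoted!("x = 1 + 2\nx * 10")
{result, hits} = CoverSketch.run(ast)
# result is 30; hits is a [{line, count}] list for the executed lines
```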


Not exactly. The difference is that JS is interpreted, so it has no notion of “compiled code”. In Erlang, on the other hand, there is such a possibility, and a single line can result in multiple VM “statements”. For example:

Integer.to_string(a + 1)

will result in two instructions:


So you can see that it is hard to match statements to instructions, especially as, in theory, the compiler is free to reorder commands as it pleases as long as the result is the same (this is especially visible in C/C++, where it is written into the standard, with the result that foo(a++, a++) is undefined behaviour).

I have badly misread while I was on the phone. Apologies.


I believe what @hauleth was driving at is that you can restructure that code as

def handle_has_key(true), do: b
def handle_has_key(false), do: c

def main do
  handle_has_key(Map.has_key?(a, :key))
end
i.e. pass the result of the check into the function

Of course, I’m not advocating writing all your code in this manner; I’m just pointing out that it is possible.


It would be extremely hard to do because of two things:

  1. the very tenuous mapping between Elixir’s AST and the macro-expanded version. This makes it very hard to count which branches we covered, because the branches in the macro-expanded code are not necessarily related to the branches in the original code

  2. It’s very hard to get the Elixir AST for a given module. You’d have to use files as compilation units, and not modules.

Maybe it’s not as hard as I think, but it’s certainly not very easy. It’s much easier to get access to, and manipulate, the Erlang abstract format. I mean, I can’t even find a way to expand something inside a defmodule into Elixir AST. That’s probably because defmodule acts more like a function that creates a module and injects it into the running BEAM instance than like a macro that expands into something more basic.

I guess this probably means there isn’t even a “fully expanded AST” for me to look at. But there must be something that the Elixir compiler feeds into the Erlang compiler, right?

Just a brief update, because I needed to get solid code coverage for a decent Phoenix app (an umbrella app with 5 sub-apps, 17k LOC).

On the app side, we’ve been using ExCoveralls: it is reliable, can be run in umbrella apps with partitioned parallel testing, and can output in different formats. :ok_hand:

Then we also needed a good UI to drill down into our application code and see what to test to improve our coverage. I tested:

  • the UI is only OK, and not really convenient or pretty. I could not get GitHub status checks or comments to work (in order to get instant code coverage feedback when opening a pull request). Integration into the CI was not really convenient either, because we had to disable partitioned testing to run all tests at once and upload a single report.

  • was disappointing :frowning_face: Integration within the CI was OK: we had to generate multiple JSON reports (one per partition) with ExCoveralls, then format them, merge them, and upload the whole thing with cc_test_reporter (a binary provided by CodeClimate). GitHub feedback works well, but you can’t really explore your source code tree within the CodeClimate UI. No go :no_good_man:

  • is our choice. Integration within the CI is really simple: you can upload multiple reports for a single build (one per test partition) and everything is merged server-side. GitHub feedback is really insightful, and the UI gives you the ability to drill down however you want to build your test coverage strategy :medal_sports:
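The partition-then-merge part looks roughly like this (a sketch; it assumes your ExCoveralls version forwards mix test options such as --partitions, and the upload step depends on the service you use):

```shell
# Run the suite in two partitions, producing one JSON report each
# (mix test reads MIX_TEST_PARTITION to pick which partition to run).
MIX_TEST_PARTITION=1 mix coveralls.json --partitions 2
mv cover/excoveralls.json cover/excoveralls-1.json
MIX_TEST_PARTITION=2 mix coveralls.json --partitions 2
mv cover/excoveralls.json cover/excoveralls-2.json
# Then upload both reports; the coverage service merges them server-side.
```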

Codecov sunburst GIF here


Does it give you a coverage and status badge?

Sure :+1: Look at my repo here:

Hey there, I just wrote this article: