Is there a way to predetermine which tests (and why) would be run on `mix test --stale`?

I’m currently in the process of taking over a very large codebase in my new job and already made my first simple changes within a day or two.

The problem is that, as it stands, the tests mostly run with async: false, and the full suite takes a whole 10 to 15 minutes.

The changes I made are covered by roughly 10 tests, which could run in a split second. Still, due to a lot of dependencies which I still have to debug/analyze, hundreds or thousands of tests are run, which also takes 5 to 10 minutes per iteration.

Is there an easy way to discover why these tests are run? How do they depend on the changed module? mix xref graph --sink does not tell me anything about the tests.


Reading through mix help test's section on --stale:

## The --stale option

The --stale command line option attempts to run only the test files which
reference modules that have changed since the last time you ran this task with
--stale.

The first time this task is run with --stale, all tests are run and a manifest
is generated. On subsequent runs, a test file is marked "stale" if any modules
it references (and any modules those modules reference, recursively) were
modified since the last run with --stale. A test file is also marked "stale" if
it has been changed since the last run with --stale.

The --stale option is extremely useful for software iteration, allowing you to
run only the relevant tests as you perform changes to the codebase.

…it seems you have to consistently use mix test --stale for the “only re-run tests whose referenced modules (or the test files themselves) have changed” check to work.

So you probably have to endure one full run first.
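A minimal sketch of that workflow (the directory path in the last command is a hypothetical example, not from this codebase):

```shell
# One full run with --stale generates the manifest that later runs compare against:
mix test --stale

# From then on, only test files whose referenced modules (recursively) changed,
# or which changed themselves, are considered stale:
mix test --stale

# --stale composes with other selectors, e.g. only stale tests under one path:
mix test --stale test/my_app_web
```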

Additionally, you can tag your tests with @tag :mytag and then run only them with mix test --only mytag.
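For instance, assuming you tag the relevant tests :auth (a made-up tag name for illustration):

```shell
# Given tests tagged in ExUnit like:
#
#   @tag :auth
#   test "rejects expired tokens" do ... end
#
# run only the tagged tests:
mix test --only auth

# or, conversely, skip them during a full run:
mix test --exclude auth
```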


I endured the first run, and subsequent stale runs still take that long, as there is some coupling that is probably unnecessary.

I admit, the module I changed is part of user authentication and therefore probably affects all web-related tests somehow; but for less obvious cases, are there tools to debug that?

Before I wade through all web-related tests and manually check whether they require authentication and mock it out, what tools do I have at hand to verify that beforehand? That way I could concentrate first on the tests that would give the biggest gain.

At the same time, such an analysis might give additional insight into unwanted coupling, discovered “accidentally” through such --stale test runs.


I always wanted to get more proficient with mix xref but sadly I admit I still haven’t. I’ve skimmed some threads in the past where people had partial, small successes untangling dependencies, but IMO right now you have a better chance asking colleagues and eye-balling it.

Sorry I can’t be of more help. :confused:

Still, you can start off with mix xref graph --label compile-connected; this seems to (partially) address runtime dependencies, not only compile-time ones.
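To see which files (transitively) depend on the module you changed, something like the following might help — lib/my_app/auth.ex is a hypothetical path, and flag availability depends on your Elixir version (check mix help xref):

```shell
# Everything that eventually depends on the changed file:
mix xref graph --sink lib/my_app/auth.ex

# The same, but as a flat list of file names only:
mix xref graph --sink lib/my_app/auth.ex --only-nodes

# Restrict to compile-connected dependencies — the ones that force
# recompilation (and hence --stale staleness) downstream:
mix xref graph --sink lib/my_app/auth.ex --label compile-connected
```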

A super heavy gun would be to use comby for finding references in code.

Recently I used this to be able to track and collect all functions (of a single module) used in a project:

# In the wrapper script below, $1 and $2 are its positional arguments:
# $1 = "ex"  (file extension for the matcher — Elixir)
# $2 = "Api" (the module whose function calls to collect)

comby -matcher ".$1" -match-only -json-lines "$2:[~.|::|->]:[fn](:[_])" '' | jq '.matches[].environment[] | select(.variable == "fn")' | jq -r '.value' | sort | uniq

Or, to give an example (since the above is a wrapper script I made):

comby -matcher ".ex" -match-only -json-lines "Api:[~.|::|->]:[fn](:[_])" '' | jq '.matches[].environment[] | select(.variable == "fn")' | jq -r '.value' | sort | uniq
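The jq half of that pipeline can be tried standalone. Below is a fabricated one-line sample mimicking the shape of comby’s -json-lines output (only the fields the filter touches; real output has more), just to show what the extraction step does:

```shell
# Fabricated sample record (shape assumed from the jq filter above):
sample='{"matches":[{"environment":[{"variable":"fn","value":"get_user"}]},{"environment":[{"variable":"fn","value":"get_user"}]},{"environment":[{"variable":"fn","value":"list_users"}]}]}'

# Pick out every captured :[fn] value, then deduplicate:
echo "$sample" \
  | jq '.matches[].environment[] | select(.variable == "fn")' \
  | jq -r '.value' | sort | uniq
# prints:
#   get_user
#   list_users
```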

I also used it after doing cd $project/test first. This helped me establish which functions I should extract into a behaviour, because I wanted to mock that module with Mox (its tests currently hit a real production API directly).