One of my projects uses the configuration in mix.exs during compilation to specialise the generation of a set of functions. But I'm not sure how to test this part of the app, since the app is compiled before the tests run (of course).
Any ideas on how to build a test harness that will allow me to change configuration, recompile and then run tests?
Use case: my lib reads JSON files representing locale definitions (number formats, date/time formats, pluralisation and so on) during compilation. To optimise performance, I read these files at compile time, do some parsing and generate a bunch of functions. There are 514 locales, so doing this for the full data set isn't helpful except for testing. The required locales are therefore configured in mix.exs, and only those locales are compiled into the code. When a requested locale is configured but not available locally, it is also downloaded and then processed. I need to test changing the configuration, confirming that the unit tests still pass, and testing locale downloading.
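To make the compile-time generation concrete, here is a minimal sketch of the pattern described above. The module name, locale list and data maps are illustrative assumptions, not the library's real code; in the actual project the locale list would come from the mix.exs configuration and the maps would be parsed from the per-locale JSON files.

```elixir
# Hypothetical sketch: generate one function clause per configured
# locale at compile time, so lookups become pattern matches instead
# of runtime file reads. Names and data are illustrative only.
defmodule MyLib.LocaleFormats do
  # In the real library this list comes from the mix.exs config;
  # hard-coded here so the sketch is self-contained.
  @configured_locales ["en", "fr"]

  # Stand-in for data parsed from the per-locale JSON files.
  @locale_data %{
    "en" => %{decimal_separator: ".", grouping_separator: ","},
    "fr" => %{decimal_separator: ",", grouping_separator: " "}
  }

  # One generated clause per configured locale.
  for locale <- @configured_locales do
    data = Map.fetch!(@locale_data, locale)

    def formats(unquote(locale)), do: unquote(Macro.escape(data))
  end

  # Locales that were not compiled in fall through to an error.
  def formats(locale), do: {:error, {:unknown_locale, locale}}
end
```

Because the clauses are fixed at compile time, changing the configured locales requires a recompile, which is exactly what makes this hard to cover from ExUnit.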
I typically have the functions that need the configuration receive it as an argument instead of calling Mix.Project.config (or similar) directly. That way you can test them as usual, and you only need to worry about the Mix integration in one place (which will likely be a Mix task or similar anyway).
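A minimal sketch of that decoupling, with illustrative function names: the worker function takes the locale lists as plain arguments, and only a thin entry point touches Mix.Project.config.

```elixir
defmodule LocaleConfig do
  # Pure function: easy to unit test with any configuration you like.
  # Returns {configured_and_available, configured_but_missing}.
  def split_locales(requested, available) do
    Enum.split_with(requested, &(&1 in available))
  end

  # The Mix integration lives in exactly one place.
  def split_configured_locales(available) do
    Mix.Project.config()
    |> Keyword.get(:locales, [])
    |> split_locales(available)
  end
end
```

In tests you exercise the pure function directly, e.g. `LocaleConfig.split_locales(["en", "zz"], ["en", "fr"])` returns `{["en"], ["zz"]}`, with no Mix project needed.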
If you really want to test it, then you can look at how Phoenix tests its new project generator, which means setting up a temporary directory and running mix commands with System.cmd. Those are expensive though, so I would recommend 1 or 2 integration tests and still finding a way to check multiple configurations in isolation.
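The Phoenix-style integration test might look roughly like this: write a throwaway project with the configuration under test into a temporary directory, shell out to `mix compile` with System.cmd/3, and assert on the result. This is a sketch under the assumption that `mix` is on the PATH; the project contents and names are illustrative.

```elixir
defmodule CompileHarness do
  # Writes a throwaway project configured with the given locales,
  # compiles it in a temp directory, and returns {output, exit_status}.
  def compile_with_locales(locales) do
    tmp = Path.join(System.tmp_dir!(), "harness_#{System.unique_integer([:positive])}")
    File.mkdir_p!(tmp)
    File.write!(Path.join(tmp, "mix.exs"), mix_exs(locales))

    try do
      System.cmd("mix", ["compile"], cd: tmp, stderr_to_stdout: true)
    after
      File.rm_rf!(tmp)
    end
  end

  # Renders a mix.exs carrying the configuration under test.
  def mix_exs(locales) do
    """
    defmodule Harness.MixProject do
      use Mix.Project

      def project do
        [app: :harness, version: "0.1.0", locales: #{inspect(locales)}]
      end
    end
    """
  end
end
```

An integration test can then assert that `compile_with_locales(["en", "fr"])` returns exit status 0, and that compiling with an unknown locale triggers the download path. Since each run spawns a full `mix compile`, keeping these to one or two cases is sensible.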
José, thank you as always. I have decoupled the configuration in the way you suggest, and my test coverage is pretty good. The major area I can't cover using ExUnit (as best I can tell) is the code that, at compile time, detects that a locale isn't present, downloads it and then generates functions from it - all during the compilation phase.
I was trying to avoid setting up a special test harness, but I've already had a couple of 'bad' Hex releases because of this test gap, so I think I need to sort it out now.
I'll check out what Phoenix does, as you suggest. This would be a test run just before a release, so the lengthy test cycle is OK. In any case, conditionally compiling 514 locales, which then automatically generates nearly 12,000 tests, isn't exactly speedy either.
Hi, I'm dealing with a similar problem while testing a library I'm working on. I'm not sure what you mean by this. Can you please provide an example?