I am writing the library TypeCheck which has an optional dependency on StreamData.
From time to time, part of the code (wrongly) ends up calling StreamData directly, rather than (correctly) being wrapped in an `if Code.ensure_compiled?(StreamData) do ... end` check.
In the code and test suite of the library itself this never results in any errors, because optional dependencies are installed for any library that mentions them in its top-level `mix.exs` deps.
However, when someone uses a version of the library with this problem, they will get compiler warnings unless they install the optional dependency themselves.
As a result, this mistake has now twice only been noticed after a version of the library was published.
To make sure that this does not happen again, I would like to create a regression test as part of the test suite of TypeCheck itself. But how can this be done?
Is there a way to wrap an extra ‘sample mix project’ in a test? Or is another approach possible?
With the help of that code example, I was able to tackle the task.
In the end, I made it a bit simpler: since we won't need multiple of these tests running side by side (at the same time), it was fine to commit the actual project files rather than creating them dynamically.
If someone is interested, the resulting test became:
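A regression test along these lines can be sketched as follows. Note this is an illustrative sketch, not the library's actual test: the `test/support/sample_app` fixture path and module name are assumptions. The idea is to compile a minimal Mix project that depends on TypeCheck but deliberately omits StreamData, treating compiler warnings as errors:

```elixir
defmodule TypeCheck.OptionalDependencyRegressionTest do
  use ExUnit.Case, async: false

  # Hypothetical fixture: a minimal Mix project that depends on TypeCheck
  # (e.g. via a `path:` dependency) and deliberately does NOT list StreamData.
  @fixture_path Path.expand("support/sample_app", __DIR__)

  test "compiling a project without StreamData produces no warnings" do
    # Fetch the fixture project's dependencies first.
    {_output, 0} = System.cmd("mix", ["deps.get"], cd: @fixture_path)

    # `--warnings-as-errors` turns any accidental direct StreamData
    # reference into a non-zero exit status.
    {output, status} =
      System.cmd("mix", ["compile", "--force", "--warnings-as-errors"],
        cd: @fixture_path,
        stderr_to_stdout: true
      )

    assert status == 0, "Compilation emitted warnings:\n#{output}"
  end
end
```

Because the fixture project is compiled in its own OS process via `System.cmd/3`, it sees exactly the dependency set a downstream user would, which is what makes the warning reproducible inside TypeCheck's own test suite.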
Thank you for noticing! This has been fixed.
(Interesting how `assert` works correctly even outside of a test, except that errors will not be highlighted as nicely.)