So I have a personal project with 2 types of tests: unit tests and integration tests. It is my personal belief that this is a rather common scenario for people who value testing or where testing makes sense.
How I do it
I use the config folder to put all of the configurations and my folder looks like the following:
│ ├── config.exs
│ ├── dev.exs
│ ├── prod.exs
│ ├── integration.exs
│ └── test.exs
Each file has the configuration for the environment it targets.
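For example, the integration config might point a key at a fixture instead of the real data (the key and path here are illustrative, not from my actual project):

```elixir
# config/integration.exs -- hypothetical values for illustration
import Config

# Point the store at a fixture file instead of the real data file
config :market_manager, :products, "test/fixtures/products.json"
```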
In my code, I make use of such configurations like the following:
defmodule MarketManager.Store.FileSystem do
  @products_filename Application.compile_env!(:market_manager, :products)
  # Rest of code...
end
This way, if I run a test with MIX_ENV=integration, it will fetch the values from integration.exs, for example.
The problem with this approach is that you either have to prepend MIX_ENV=blabla to every command, or you end up with a more convoluted mix.exs file in order to avoid typing all that jargon before each command.
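For completeness, the mix.exs route looks roughly like this: an alias plus a preferred CLI env, so one command implies the right environment. The alias name "test.integration" is made up for illustration, and note that on recent Elixir versions preferred_cli_env has moved under a `def cli do ... end` function:

```elixir
# mix.exs (excerpt) -- "test.integration" is an illustrative alias name
def project do
  [
    app: :market_manager,
    # On recent Elixir versions this option lives under `def cli`
    preferred_cli_env: ["test.integration": :integration],
    aliases: aliases()
  ]
end

defp aliases do
  [
    # `mix test.integration` now runs under MIX_ENV=integration
    "test.integration": ["test --only integration"]
  ]
end
```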
How do you do it?
I have seen some people in this forum mention they use a “Repo” file for this kind of thing, where they pass in dependencies. Others use a Settings file that is a wrapper for configuration.
How do you guys do it?
I don’t do this anymore; I mark my integration tests with
@moduletag :integration. I currently don’t have any tests that hit the headless browser, though; I keep a separate repo for those, because our frontend isn’t in Elixir !!?!!
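For reference, the tagging approach is: exclude the tag by default in test_helper.exs and opt in from the command line (module and test names below are illustrative):

```elixir
# test/test_helper.exs
ExUnit.start(exclude: [:integration])

# test/market_manager/store_integration_test.exs (illustrative name)
defmodule MarketManager.StoreIntegrationTest do
  use ExUnit.Case, async: false

  # Every test in this module is skipped unless explicitly included
  @moduletag :integration

  test "talks to the real store" do
    # ...
  end
end
```

Now `mix test` skips these tests, while `mix test --include integration` runs everything and `mix test --only integration` runs just them.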
Imagine you are on a backend and you need to hit a DB or an external API. You need to check that things are being saved in the DB and that the requests are being made properly.
Do you consider these types of tests different from ones where you just test the public API of a module?
Do you consider them E2E or integration tests? If so, how do you configure them?
I’d start with the question: Why do you need another mix env for your integration tests in the first place?
I’m aware that many things differ between unit and integration tests, but if
config.exs seems like the only way to provide them you might want to re-evaluate your options on dependency injection.
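One such option, sketched here with hypothetical names, is to pass the dependency in as an argument with a production default, so no config is involved at all:

```elixir
defmodule MarketManager.Manager do
  # Hypothetical example: the store module is a parameter with a
  # production default, so a test can hand in a stub module without
  # touching config.exs at all
  def activate(syndicate, store \\ MarketManager.Store.FileSystem) do
    store.list_products(syndicate)
  end
end

# In a test, pass any module exposing the same functions:
# MarketManager.Manager.activate("some_syndicate", StoreStub)
```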
Very good question!
Perhaps this is more related to the way I pass in dependencies. Imagine you have a module MyModule:
- In some tests you want to stub it, as in, you want its functions to return pre-determined values.
- In other tests you want to use it for real, but with a given configuration; for example, you want to execute all of its code but have it call a dummy server instead of the real, paid one.
Given this scenario, how do you organize your code?
How do you inject your dependencies?
Are there any patterns you’d recommend?
yeah I guess they’re technically E2E tests. Sorry. Due to this insane division where I work, we have started calling “frontend” everything that isn’t in the datacenter, so our database lives in the frontend. I keep forgetting that everyone else doesn’t use this terminology the same way. Sorry for the confusion.
All of those questions boil down to “how can I make my code do different things (at runtime)”. If you can do that the stubbing/predefined responses part becomes simple, as it’s just one of those “different things” (likely a mox module). To get to that point imagine building a system, where in production you need to be able to switch out those parts, which you want to be able to switch out in your tests. E.g. the code you posted in your first post won’t cut it, as the dependency is set at compile time, so it’s not changeable at runtime.
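A minimal step in that direction, keeping the names from the original example, is reading the config at runtime instead of baking it in at compile time:

```elixir
defmodule MarketManager.Store.FileSystem do
  # Read on every call, so a test can swap the value at runtime with
  # Application.put_env(:market_manager, :products, "fixture.json")
  defp products_filename do
    Application.fetch_env!(:market_manager, :products)
  end

  # Rest of code...
end
```

Note that Application.put_env changes global state, so tests relying on it cannot safely run async, which connects to the point about Mox below.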
I just need to get fake answers (stubs) from a collaborator and to sometimes call a fake server. Nothing changes at runtime, not the way I see it.
Could you elaborate why runtime is the heart of the issue here? I seem to not understand your point of view.
That’s imho a big misconception. But there are layers to the problem here:
Currently you’re changing the code executed by having a different config when running tests. This is fine if your tests need a single static setup of dependencies applied.
Now you’re wondering how to move to a place where a single static setup of dependencies is no longer enough for all your tests. One way to do that is changing the setup between sets of tests that share a common setup.
You’re doing this via MIX_ENV (it could be something else you check in a config.exs file) plus multiple separate test runs, which is the only way to affect config early enough that you can depend on it at compile time. This is great in that your application is statically configured by config and nothing is dynamic at runtime. It also means all the tests you run in one such batch have to deal with the same setup.
If the above is not flexible enough, you need to start setting up dependencies at runtime. And that point is reached earlier than one might expect. E.g. using Mox means you’re setting up a dependency at runtime (the module might be the same, but the functions are different), and you either need to run tests with async: false to have the correct mocked implementation called, or you need to pass in something from your test to the call site which makes it call the mocked implementation. The same is true for any two concrete implementations you want to switch between within the same test run. Whether you’re actually using Mox is not relevant.
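A sketch of the Mox setup being described (the behaviour, mock, and function names are illustrative, not from the poster's project):

```elixir
# A behaviour both the real store and the mock implement
defmodule MarketManager.Store do
  @callback list_products(String.t()) :: {:ok, [map()]} | {:error, term()}
end

# test/test_helper.exs
Mox.defmock(MarketManager.StoreMock, for: MarketManager.Store)

# In a test: the expectation is set up at runtime, per test process
defmodule MarketManager.ManagerTest do
  use ExUnit.Case, async: true
  import Mox

  # Fail the test if an expectation set below is never called
  setup :verify_on_exit!

  test "lists products from the store" do
    expect(MarketManager.StoreMock, :list_products, fn "red_veil" ->
      {:ok, []}
    end)

    # The code under test must be told (or configured) to call the mock
    assert {:ok, []} = MarketManager.StoreMock.list_products("red_veil")
  end
end
```

The last line is the crux of the argument above: something has to route the call to MarketManager.StoreMock rather than the real module, and that routing is a runtime decision.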
So unless you can configure everything statically at compile time, you’re in a place where you’re handling multiple interchangeable implementations at runtime, and you’re imo better off acknowledging that fact in your codebase instead of trying to ignore it, even if in production only one implementation is ever used.
As a matter of curiosity, I do use Mox xD
I think I now understand a little better. After some research I found this article, which depicts several ways to run tests and do DI at what I believe is runtime according to your definition (please correct me if I am wrong).
A really good article that I recommend. It makes it especially clear that there is no silver bullet.