Testing processes that act on database inputs

Okay, I hope I can simplify my setup enough to express this question.

I have a logging module. This logging module is global. Every log that gets constructed gets saved into an Ecto repo with one table.

  schema "logs" do
    field(:level, LogLevelType)
    field(:verbosity, :integer)
    field(:message, :string)
    field(:function, :string)
    field(:file, :string)
    field(:line, :integer)
    field(:module, AtomType)
    field(:version, VersionType)
    field(:commit, :string)
    field(:target, :string)
    field(:env, :string)
  end

and it defines an API similar to Elixir's Logger, with debug, info, warn, and error functions. As I said, each Log gets saved to this repo. This is to persist logs across application shutdowns, network disconnects/splits, etc. Each Log in the repo gets removed at a later moment in time, after being dispatched via AMQP in a fire-and-forget manner.
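As a sketch of what I mean (module names here are invented, and the real version would build a changeset and insert it into the repo instead of just returning the map), the Logger-like API can be generated per level:

```elixir
defmodule MyApp.DBLogger do
  # Hypothetical sketch: a Logger-like API where every entry becomes a row.
  # A real implementation would pipe the map below into a changeset and
  # MyApp.Repo.insert/1; here it is just returned so the sketch stands alone.
  for level <- [:debug, :info, :warn, :error] do
    def unquote(level)(message, meta \\ []) do
      meta
      |> Map.new()
      |> Map.merge(%{level: unquote(level), message: message})
      # |> MyApp.Log.changeset() |> MyApp.Repo.insert()
    end
  end
end
```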

Hopefully that is a simple enough overview of the system.

Now my issue is I want to unit test these parts, but Ecto's sandbox API is tripping me up. Since there is a constant process watching the database for inserted logs, I'm not sure what sandbox mode I should use. I can't choose :manual, obviously: since my logger is global, I can't manually manage ownership. {:shared, pid} has the same issue, only it prints an ugly GenServer timeout. :auto might as well just not use the sandbox at all.

I’m thinking this is just an application design issue. I guess I must need a behaviour for my logging module? I don’t know, though. I would appreciate any help.

Having active processes doing things to the db while your tests are running is mostly a pain for which I haven’t found a good solution either. So, if I understand the problem correctly, I would probably go for a combination of:

  1. Not starting the monitoring processes when you want to test the database,
  2. Using a behaviour with mocks in most of your other tests (excluding the unit tests of the monitoring processes themselves; there you probably want to test the db implementation itself)
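For point 2, a minimal sketch could look like this (all module names are made up; a library like Mox can also generate the test double directly from the behaviour instead of hand-writing a no-op module):

```elixir
defmodule MyApp.LogSink do
  # Hypothetical behaviour: anything that can accept a log entry.
  @callback log(level :: atom(), message :: String.t(), meta :: keyword()) :: :ok
end

defmodule MyApp.DBSink do
  @behaviour MyApp.LogSink

  @impl true
  def log(_level, _message, _meta) do
    # the real implementation would build a changeset and Repo.insert it
    :ok
  end
end

defmodule MyApp.NoopSink do
  # Test double: swallows everything, never touches the database.
  @behaviour MyApp.LogSink

  @impl true
  def log(_level, _message, _meta), do: :ok
end

# config/test.exs would contain:  config :my_app, :log_sink, MyApp.NoopSink
# Call sites then resolve the sink at runtime:
sink = Application.get_env(:my_app, :log_sink, MyApp.NoopSink)
:ok = sink.log(:info, "hello", [])
```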

For a quick test, maybe you can use Ecto.Adapters.SQL.Sandbox.unboxed_run/2?

test "logs" do
  Ecto.Adapters.SQL.Sandbox.unboxed_run(YourApp.Repo, fn ->
    # creates the log for testing
    %YourApp.Log{...} = log = fixture(:log, level: ..., verbosity: 7)

    # asserts ...

    # cleans up
  end)
end

This way the log would actually be persisted into the database, and not just exist for the duration of a transaction.


The fixture from the test above can be defined as:

def fixture(:log, attrs) do
  %YourApp.Log{
    level: attrs[:level] || fixture(:log_level, attrs[:log_level] || []),
    verbosity: attrs[:verbosity] || :rand.uniform(10),
    message: attrs[:message] || "asdfasdf",
    # etc
  }
end

Thanks for the replies.
I’ve implemented varying levels of the potential solutions suggested previously.

1 not starting the monitoring process

I want to try this again. I’ve restructured a bit so that this may be more feasible, especially combined with 2.

2 defining a behavior for both logging and the worker process

This I think is probably the “proper” solution. I’ve just never been able to accomplish it successfully. I have a global process that dispatches events. Something in Elixir that has bothered me is that you can only define behaviours for functions. A lot of the behavior I need to be able to exchange is servers/processes that need to exchange a certain set of messages. I know I could abstract that out further, but then I have 500 sloc and what have I gained? It’s now harder to read, the docs are spread out across multiple files, the initial setup and runtime config is huge, and the entire process for storing and dispatching a message that says “hello world” is way too complex, when all I’m testing is that I didn’t make a typo in the logger repo context module.


I didn’t know about this, but it is essentially what I’m doing right now anyway. I don’t start the repo in sandbox mode at all right now. I just alias my test task to do ecto.drop, ecto.migrate, test, which feels absurd.

Anyway I realize I must be doing something wrong, just no solution I come up with feels right. My project has never really had great test coverage because of this sort of thing. The logging system is just one example. Really for me it boils down to testing any global or named process in Elixir. I’ve seen questions like this come up all the time but never seen an answer that actually works.

Obviously one solution is I can just not use named processes, and then use a registry of sorts to wire up communication between processes manually. I’ve done this before, and after you get more than about 4 global processes it just gets absurd to work with in practice. That’s not to say it won’t work; it’s just another layer of abstraction that doesn’t offer any real tangible benefits.

Thanks for reading my small(big) rant. Hopefully that didn’t scare anybody off.

You can provide names by default (for example, __MODULE__ as the name), but allow them to be overwritten in tests. This way you can have the best of both.
Most of the time when I need named GenServers I do this, because it makes testing way easier.
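As a sketch (names invented), that default-but-overridable name pattern looks like:

```elixir
defmodule MyApp.Worker do
  use GenServer

  # Defaults to a global name, but tests can pass their own:
  #   MyApp.Worker.start_link(name: :my_test_worker)
  def start_link(opts \\ []) do
    {name, opts} = Keyword.pop(opts, :name, __MODULE__)
    GenServer.start_link(__MODULE__, opts, name: name)
  end

  # The API takes the server name too, defaulting to the global one.
  def ping(server \\ __MODULE__), do: GenServer.call(server, :ping)

  @impl true
  def init(opts), do: {:ok, opts}

  @impl true
  def handle_call(:ping, _from, state), do: {:reply, :pong, state}
end
```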


This is something I forgot about. I tried this a long time ago when I had just started doing Elixir development, but I don’t think I knew enough about the subject at hand to do it correctly. I’d like to revisit this also. I suspect I will hit a different version of the same problem, but that will require some more explanation.

This app has another, completely separate Ecto repo for configuration: things like boolean flags that modify behavior at runtime. So I have a global GenServer I’d like to test. For the sake of simplicity, we will call it Firmware. Firmware communicates with some external piece of hardware, so actual communication needs to be mocked out; assume this is done to some acceptable extent (I believe it is, at least). Now I might be inclined to write a test that looks like the following:

test "does something different when some flag is set" do
  Config.set_value(:firmware_inject_one, false)
  # Opts would get passed to `GenServer.start_link()` here.
  {:ok, fw} = Firmware.start_link(name: :fw_flag_test)
  # flag defaults to false
  assert Firmware.add_numbers(:fw_flag_test, 1, 2) == 3
  # where this is a global process. This is the issue I see.
  Config.set_value(:firmware_inject_one, true)
  assert Firmware.add_numbers(:fw_flag_test, 1, 2) == 4
end

Obviously that is just a silly example, but it points out a major flaw in the overridable __MODULE__ name default: in this case, me modifying that global config would change the behavior of the globally named Firmware process (or of any parallel tests).

So now here’s the problem with that: each instance of the Firmware process needs to be passed a process identifier (pid or name) for the Config process (in this case the Ecto repo, but keep reading). This doesn’t sound like a terrible idea at first. But once you start needing more than one other global service/process, you need to pass that in also… it actually turns out EVERY named process needs to be passed to everything, everywhere. We pretty much can’t use global names for this to work…

A naive solution might be to not run any tests in parallel.

Well, if you have a globally named module, you have a singleton, and a singleton is quite hard to test. The best option (that I can think of right now) is that in Elixir you can change things with the config at compile time, so even the singleton can have different behaviour depending on the config.
If that’s not sufficient, but you still don’t want to pass everything around, I would go and look at the Registry and via tuples (https://hexdocs.pm/elixir/master/Registry.html#module-using-in-via). If the modules that depend on the Config know the via tuple of the Config, they can ask for its location (pid) at runtime.
This also means that you can provide a different Config in your tests.
You’ll still have to be careful with running tests in parallel of course, but it might be enough for you?
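A sketch of that via-tuple approach, assuming a Registry named MyApp.Registry is running and with all module names invented:

```elixir
defmodule MyApp.Config do
  # Hypothetical Config server registered through a Registry via-tuple,
  # so each test can start and look up its own isolated instance by id.
  use GenServer

  def via(id), do: {:via, Registry, {MyApp.Registry, {__MODULE__, id}}}

  def start_link(id), do: GenServer.start_link(__MODULE__, %{}, name: via(id))

  def set_value(id, key, value), do: GenServer.call(via(id), {:set, key, value})
  def get_value(id, key), do: GenServer.call(via(id), {:get, key})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:set, k, v}, _from, state), do: {:reply, :ok, Map.put(state, k, v)}
  def handle_call({:get, k}, _from, state), do: {:reply, Map.get(state, k), state}
end

# The Registry would normally live in your supervision tree:
# {:ok, _} = Registry.start_link(keys: :unique, name: MyApp.Registry)
# {:ok, _} = MyApp.Config.start_link(:fw_flag_test)
# :ok = MyApp.Config.set_value(:fw_flag_test, :firmware_inject_one, true)
```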

As a last remark, I want to say that I would first try passing the dependencies explicitly. It might be some more work, but it makes a lot of things easier, and it can also point to smells in your design.
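A sketch of explicit dependency passing for the Firmware example above; here an Agent stands in for the real Config repo, and all module names are invented:

```elixir
defmodule MyApp.TestConfig do
  # Stand-in for the real Config repo, just for this sketch.
  use Agent

  def start_link(name), do: Agent.start_link(fn -> %{} end, name: name)
  def set_value(config, key, value), do: Agent.update(config, &Map.put(&1, key, value))
  def get_value(config, key), do: Agent.get(config, &Map.get(&1, key))
end

defmodule MyApp.Firmware do
  use GenServer

  # The Config's pid/name is an explicit dependency, so parallel tests
  # can each start an isolated Config/Firmware pair.
  def start_link(opts) do
    {name, opts} = Keyword.pop(opts, :name, __MODULE__)
    GenServer.start_link(__MODULE__, opts, name: name)
  end

  def add_numbers(server, a, b), do: GenServer.call(server, {:add, a, b})

  @impl true
  def init(opts), do: {:ok, %{config: Keyword.fetch!(opts, :config)}}

  @impl true
  def handle_call({:add, a, b}, _from, %{config: config} = state) do
    extra = if MyApp.TestConfig.get_value(config, :firmware_inject_one), do: 1, else: 0
    {:reply, a + b + extra, state}
  end
end
```

With this shape, the test from the earlier post can flip the flag on its own private Config without touching any globally named process.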