Sandbox-like testing for read-only replica

Hello everyone!

I’m currently experimenting with Postgres database replicas and have successfully set up an environment with a master database for writes and a read-only slave database, each with its own Ecto.Repo. The application only lists the master’s Repo in the ecto_repos config, since no migrations can run against the slave Repo at all.

While this setup works in practice, I’m stuck on finding the ideal testing approach for it. So far I’ve used the SQL Sandbox for all tests that need data set up in the DB, but I can’t expect the slave DB to have valid data at test runtime: everything runs within a transaction, so while the master DB can be populated without a problem, the slave DB always stays empty during these tests. Unsandboxed runs work just fine, but without the benefit of a clean database on each run; I wonder if there’s a way around that.

I even considered placing the tests that read from the replica in a separate file and using setup_all and on_exit callbacks to dump and load the whole database, but besides sounding like a terrible idea, I couldn’t find a way to do it anyway (mix ecto.dump / ecto.load only capture the table structure, not the data).

Has anyone ever faced a similar situation? Is there a best practice for this sort of test? Any insight would be immensely appreciated, thanks a bunch! :slight_smile:

I’m experiencing the same issue and haven’t found a solution for it yet. :frowning:

Unsandboxed runs work just fine, but without the benefit of a clean database on each run; I wonder if there’s a way around that.

Currently, as a workaround, I’m calling Repo.delete_all/2 (https://hexdocs.pm/ecto/Ecto.Repo.html#c:delete_all/2) in my fixtures to clean up before each test.
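For reference, a minimal sketch of that workaround as an ExUnit fixture. MyApp.ReadRepo and the schema modules are assumed names, and this fragment belongs inside a test case module with a running database:

```elixir
# Hypothetical fixture: wipe the replica-visible tables before each test.
setup do
  # Delete child tables before their parents to satisfy
  # foreign-key constraints.
  for schema <- [MyApp.Comment, MyApp.Post, MyApp.User] do
    MyApp.ReadRepo.delete_all(schema)
  end

  :ok
end
```

Note this only gives you a clean slate; unlike the sandbox, concurrent test cases can still see each other's writes.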

What if, in your test environment, you just pointed your slave Repo at the master database? Otherwise I believe you’re basically trying to test DB replication itself, which is something you shouldn’t need to test. I think I saw a library posted the other day that was meant to assist with a master/slave configuration; that might be helpful if anyone has a link to it.
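For illustration, that suggestion might look like this in config/test.exs (names and credentials are hypothetical):

```elixir
# config/test.exs — point the read repo at the same test database
# as the write repo, so reads see the data the tests inserted.
import Config

config :my_app, MyApp.ReadRepo,
  username: "postgres",
  password: "postgres",
  database: "my_app_test",
  hostname: "localhost",
  pool: Ecto.Adapters.SQL.Sandbox
```

As noted in the replies below, sharing the database alone isn't enough under the sandbox, because each repo still checks out its own isolated transaction.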

1 Like

This is exactly what I did with the Ecto config (both ReadRepo and WriteRepo have the same config); however, the transactions seem to block that route. An insert on WriteRepo is isolated from a get on ReadRepo. Not using the sandbox pool works fine too, but has the major downsides stated above.

See: Problem: Failing tests with multiple Ecto repo's using same database (click the pencil with the edit date to see the original post)

PS: A dirty test was to alias WriteRepo as ReadRepo in my code. When I did that, the tests ran fine. However, I can’t do that in production :wink:

Why not just do

@repo Application.get_env(:my_app, :read_repo)

Then in config/config.exs:

config :my_app, :read_repo, MyApp.ReadRepo

And in config/test.exs:

config :my_app, :read_repo, MyApp.WriteRepo

You could do something similar for :write_repo as well, but it’s not really necessary since you don’t need to change its definition based on the environment.
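For illustration, a context module could then read its repo from config at compile time (MyApp.Accounts and MyApp.User are assumed names):

```elixir
defmodule MyApp.Accounts do
  # Resolved at compile time: MyApp.ReadRepo in prod,
  # MyApp.WriteRepo under test.
  @repo Application.get_env(:my_app, :read_repo)

  def list_users do
    @repo.all(MyApp.User)
  end
end
```

Because the attribute is read at compile time, changing the config requires recompiling the module.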

1 Like

@axelson Never thought of it. Brilliant solution. A slave has become a master :wink:

1 Like

Great! Glad it works for you.

Here’s another option (thanks @axelson for the config idea):

defmodule MyApp.Repo do
  use Ecto.Repo,
    otp_app: :my_app,
    adapter: Ecto.Adapters.Postgres

  # Read at compile time: changing this config requires recompiling the app.
  @replica Application.get_env(:my_app, :repos)[:replica]

  def replica do
    @replica
  end
end

This way you can just do

Repo.replica.all(query)

etc.
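The matching config might look like this (hypothetical sketch; under test, the replica entry simply points back at the primary Repo so sandboxed writes are visible to reads):

```elixir
# config/config.exs
config :my_app, :repos, replica: MyApp.ReadRepo

# config/test.exs
config :my_app, :repos, replica: MyApp.Repo
```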

3 Likes