Multitenancy with Ecto


I’m implementing a system with multitenancy via multiple databases. For business reasons I can’t do it with schema prefixes, which I’d prefer. In any case, I was able to make it work using put_dynamic_repo, but I’m concerned it might be a bit too “magic”.

Since I’m doing this in a Phoenix app, I set the repo in the request process; any sub-process will have to set the repo again, which is easy to forget.

I’ve been pointed to an alternative, which is to wrap the Ecto API and require a pid for every call. That seems more explicit, but also both more labor-intensive and more brittle, since any change in the API breaks the code.

I’m torn about which is the best option (and maybe there’s another one I’m missing). Any advice would be great.

Here’s what I’m using to manage the repos. Note that the tenants are agencies and I’m using a GenServer to cache the dynamic repos that were created. There’s also a function to create a new database and run the migrations if needed.

defmodule MyApp.RepoManager do
  use GenServer

  alias MyApp.{Application, Repo}

  def start_link(settings) when is_list(settings) do
    GenServer.start_link(__MODULE__, settings, name: __MODULE__)
  end

  def set_agency_repo(agency, ensure_exists \\ false) do
    if ensure_exists do
      ensure_repo_exists(agency)
    end

    repo_pid = GenServer.call(__MODULE__, {:get_dynamic_repo, agency})
    Repo.put_dynamic_repo(repo_pid)

    {:ok, repo_pid}
  end

  def unset_agency_repo do
    repo_pid = Repo.put_dynamic_repo(Repo)

    {:ok, repo_pid}
  end

  def init(_opts) do
    {:ok, %{repos: %{}}}
  end

  def handle_call({:get_dynamic_repo, agency}, _from, state) do
    case state.repos[get_database_name(agency)] do
      nil ->
        {:ok, repo_pid} = Repo.start_link(get_connection_options(agency))

        {:reply, repo_pid,
         %{state | repos: Map.put_new(state.repos, get_database_name(agency), repo_pid)}}

      repo_pid ->
        {:reply, repo_pid, state}
    end
  end

  defp get_database_name(agency) do
    # Body elided in the original post; derives a per-agency database name,
    # e.g. something like "myapp_#{agency.slug}"
    "myapp_#{agency.slug}"
  end

  defp get_connection_options(agency) do
    [
      name: nil,
      pool_size: 2,
      database: get_database_name(agency)
    ] ++ Application.db_config()
  end

  defp ensure_repo_exists(agency) do
    options = get_connection_options(agency)
    # Create the database if it does not exist (assumption: the adapter's storage_up/1)
    _ = Repo.__adapter__().storage_up(options)
    {:ok, repo_pid} = Repo.start_link(options)
    Repo.put_dynamic_repo(repo_pid)
    Ecto.Migrator.run(Repo, :up, all: true, dynamic_repo: repo_pid)
  end

  # TODO: Manage pool size better
  # TODO: Support removing repos
end


If you imitate the Ecto.Repo behaviour, just with a pid as the first parameter wherever needed, this would be just as brittle as the MyApp.Repo API. I’m not sure why you’d feel one is more brittle than the other.

What I meant was that if the Ecto internal API changes (Ecto.Adapter.Schema and the like), it would break my app, which is different from the public API.

But is that the approach you would recommend? What issues are there with using put_dynamic_repo that I’m missing?

Why would you need to depend on ecto internal api? The public api of ecto doesn’t change without a major version bump and your wrapper could just manage the “dynamic repo” without making callers be concerned with it.

Ok, let me share some code to ensure we’re talking about the same thing:

defmodule MyApp.DynamicRepo do
  def insert(pid, struct, opts \\ []) do
    Ecto.Repo.Schema.insert(pid, pid, struct, opts)
  end

  def update(pid, struct, opts \\ []) do
    Ecto.Repo.Schema.update(pid, pid, struct, opts)
  end

  def delete(pid, struct, opts \\ []) do
    Ecto.Repo.Schema.delete(pid, pid, struct, opts)
  end
end
Something like this. Is Ecto.Repo.Schema considered public? Also, it means that every call to Ecto needs to go through this wrapper.

Regardless of the brittleness or not, is this the preferred approach for multitenancy with multiple databases?


Therefore you should rather make your wrapper set the dynamic repo on each call, like this:

def insert(pid, struct, opts \\ []) do
  current = MyApp.Repo.get_dynamic_repo()
  MyApp.Repo.put_dynamic_repo(pid)
  result = MyApp.Repo.insert(struct, opts)
  MyApp.Repo.put_dynamic_repo(current)
  result
end
Dynamic repo support was added for the usecase of multiple dynamically setup repos, so yes.

May I clarify what constitutes a database in this context? For example, a Postgres server can host multiple databases and therefore only one Repo would be necessary. But perhaps you are referring to different servers (rather than databases) which would require multiple Repos.

Ah, yes. It might need to run in different servers, yes.

Ah, I see. Why do you think this is better than calling put_dynamic_repo at the beginning of the request, which would affect all Ecto calls?

Because this leaves the process’s current dynamic repo intact. You could skip it if you’re not using Ecto directly anymore. Otherwise, afaik, this is all you can do using the public API.

My clients are currently doing the same thing. I did an initial sketch of this, and our current draft is to use a small wrapper around put_dynamic_repo in the top-level context functions. Basically each top-level context function has to do something like Tenant.with(tenant_id, fn -> do_something() end). Given their usage patterns, perhaps we’ll end up setting a global default from some Phoenix plug or Absinthe middleware, because a single request handler doesn’t have to deal with multiple tenants, so there should be no risk of working against the wrong db.
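A minimal sketch of that wrapper, reusing the RepoManager posted earlier in the thread (the function name with_tenant and the save/restore logic are my own, not the actual draft):

```elixir
defmodule MyApp.Tenant do
  alias MyApp.{Repo, RepoManager}

  # Switch the calling process to the tenant's dynamic repo, run the
  # function, then restore whatever dynamic repo was set before, even if
  # the function raises.
  def with_tenant(tenant_id, fun) when is_function(fun, 0) do
    previous = Repo.get_dynamic_repo()
    {:ok, _repo_pid} = RepoManager.set_agency_repo(tenant_id)

    try do
      fun.()
    after
      Repo.put_dynamic_repo(previous)
    end
  end
end
```

Restoring in an `after` clause keeps the request process usable for the next tenant-scoped call even when the wrapped function fails.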

As an aside, there was a small bug in the Ecto SQL sandbox which didn’t honor the dynamic_repo setting. The fix has been merged and I think it will be included in the upcoming Ecto 3.5.0.


Exactly, that was my thought process. My main concern is that a sub-process doesn’t inherit the dynamic repo. That’s why the wrapper you mention might be a good option, even though it’s more code to write every time.
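For anyone following along, the dynamic repo is tracked per process, so a sketch of what a spawned task has to do (the MyApp.Order schema is made up):

```elixir
# put_dynamic_repo/1 only affects the calling process, so a Task started
# from a request process must capture the repo pid and set it again:
repo_pid = MyApp.Repo.get_dynamic_repo()

task =
  Task.async(fn ->
    MyApp.Repo.put_dynamic_repo(repo_pid)
    MyApp.Repo.all(MyApp.Order)
  end)

Task.await(task)
```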

This is a bit annoying, but I’m not particularly worried about it: since there’s no global default, failing to set the repo will raise an exception, and that should be caught by tests.

How do you ensure that there is no global default? At the supervisor level?

We set the name: nil repo option, and yeah, there’s no global tenant repo instance running. The set of current tenants is read from somewhere (in the current draft it’s the “main” database, powered by a different repo which is global) and the corresponding instances are started during boot, but they are nameless, so they won’t be used by default.
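As a sketch of that boot step (MyApp.MainRepo, the MyApp.Tenant schema, and the database naming are assumptions about the setup described above):

```elixir
# Start one nameless repo instance per known tenant at boot. Because of
# name: nil, none of them becomes the default, so nothing touches a tenant
# db unless a process explicitly opts in with put_dynamic_repo/1.
for tenant <- MyApp.MainRepo.all(MyApp.Tenant) do
  {:ok, _pid} =
    MyApp.Repo.start_link(
      name: nil,
      pool_size: 2,
      database: "myapp_#{tenant.slug}"
    )
end
```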

Right, that was what I was thinking. Do you have any “smart” way of handling migrations, so that you can have tenant-only migrations and “main” database migrations that run separately?

You could have a look at Triplex for inspiration; it handles multi-tenancy using schemas.


We have some helper functions, basically wrappers around Ecto.Migrator, which first migrate the main db, then all known tenants. It’s hard to say yet if that will suffice for production, but they seem to be good enough for local dev. We defined a couple of mix aliases, such as “ecto.reset” and “ecto.migrate” to make it fit with the flow of other projects.
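A sketch of such a helper (the tenant lookup, schema, and database naming are assumptions, not the actual code):

```elixir
defmodule MyApp.Migrations do
  # Migrate the main database first, then every known tenant database,
  # pointing Ecto.Migrator at each tenant's repo via the :dynamic_repo option.
  def migrate_all do
    Ecto.Migrator.run(MyApp.MainRepo, :up, all: true)

    for tenant <- MyApp.MainRepo.all(MyApp.Tenant) do
      {:ok, repo_pid} =
        MyApp.Repo.start_link(name: nil, database: "myapp_#{tenant.slug}")

      Ecto.Migrator.run(MyApp.Repo, :up, all: true, dynamic_repo: repo_pid)
      Supervisor.stop(repo_pid)
    end

    :ok
  end
end
```

A mix alias such as `"ecto.migrate": ["run -e MyApp.Migrations.migrate_all"]` can then make this fit the usual project flow.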

For tests, we precreate a single tenant db in test_helper.exs (or migrate it if the db already exists). That way, most of the tests can work on that single tenant db, and so they can run in a sandbox with async: true. There is one test module which tests dynamic addition and removal of repos, and that one has to be async: false b/c afaik db creation & migration can’t run on a sandbox.
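A sketch of that test_helper.exs setup (the tenant fixture is hypothetical; ensure_exists: true is the flag from the RepoManager posted above):

```elixir
# test_helper.exs: create (or just migrate) a single tenant database before
# the suite runs; most tests then work against it through the sandbox.
test_agency = %{slug: "test"}  # hypothetical fixture for the lone test tenant

{:ok, _repo_pid} = MyApp.RepoManager.set_agency_repo(test_agency, true)

ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :manual)
```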

There seems to be enough material here for a blog post, I only need to find some time to write it up 🙂


When testing, where do you clean up that tenant db, since you don’t have the on_exit callback available in test_helper.exs? Or am I missing something?

You can just leave it there, is that the idea?

Yeah, we leave it there, which is actually good b/c we don’t have to recreate it on every test run.
