Making sense of Ecto pool_size


I’m trying to understand exactly how the Repo pool_size config, and connection pooling in general, works.

My specific issue: I run a Phoenix app in k8s with 10 pods, each configured with pool_size: 10, yet the database immediately climbs to its maximum of 400 active connections, and I’m not sure how that is possible. What could lead to more than 100 connections being used when only 10 pods are running?
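For context, here is roughly where that cap lives; a sketch with assumed app/module names. Note that pool_size limits connections per BEAM node, not per cluster:

```elixir
# config/runtime.exs (sketch, names assumed)
# pool_size caps connections *per BEAM node*, so the cluster-wide total
# is roughly pods x pool_size, plus any extra pools or tooling.
config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  pool_size: 10
```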

Does async code start its own pool? For example, I have something like this (simplified) to warm up caches:

  Task.async_stream(entries, &warm_cache/1,
    max_concurrency: System.schedulers_online(),
    ordered: false,
    timeout: :timer.minutes(2),
    on_timeout: :kill_task
  )
  |> Stream.run()
Are database connections in this task not limited by the pool?

I’m using Oban :tumbler_glass: for job processing, but according to the docs it only uses one extra connection.

Any ideas? :pray:


Starting MyApp.Repo as part of your supervision tree creates a pool of 10 connections to your db. Times 10 for the number of pods/BEAM instances running, plus one Oban pubsub connection per pod, so another 10. You should land on 110 connections, unless you have more tooling besides Ecto/Oban connecting to your db, or you’re starting more instances of your Repo.
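Put differently: the Repo’s pool is the only thing opening query connections on a node, and everything on that node (including Task.async_stream workers) checks connections out of it; callers beyond pool_size just queue. A sketch of the supervision tree, with assumed module names:

```elixir
# lib/my_app/application.ex (sketch, names assumed)
def start(_type, _args) do
  children = [
    # One Repo per node: starts pool_size (here 10) DB connections.
    MyApp.Repo,
    # Oban is supervised alongside it; adds one pubsub connection per node.
    {Oban, Application.fetch_env!(:my_app, Oban)}
  ]

  Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
end
```

With 10 pods that works out to 10 × (10 + 1) = 110 connections.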


Thanks, I added a log statement to the init callback of MyApp.Repo, and now I see that it gets started multiple times during startup. Now I just need to figure out why :slight_smile:
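For anyone debugging the same thing, a sketch of that kind of logging via the Ecto.Repo init/2 callback (assumed, not the exact code from above):

```elixir
defmodule MyApp.Repo do
  use Ecto.Repo, otp_app: :my_app, adapter: Ecto.Adapters.Postgres

  require Logger

  # Called every time the Repo (and thus a fresh connection pool) is
  # started, so duplicate starts show up as repeated log lines.
  @impl true
  def init(context, config) do
    Logger.info("MyApp.Repo starting (context: #{inspect(context)})")
    {:ok, config}
  end
end
```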

Do you run migrations separately or something like that?

Yes, looks like it was due to migrations. That code is a few years old and I was calling Mix.Task.run("ecto.migrate", ["--log-sql"]) during application boot. Running migrations with Ecto.Migrator now; that seems to work better :ok_hand:
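For reference, a minimal Ecto.Migrator setup along the lines of the Phoenix release docs (module names assumed). Ecto.Migrator.with_repo/2 starts a small temporary pool (2 connections by default) instead of booting a second full Repo:

```elixir
defmodule MyApp.Release do
  @app :my_app

  # Run pending migrations without a Mix task or a second full Repo.
  def migrate do
    Application.load(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end
```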

Oban only adds one connection per node for pubsub, not one per queue. So with an Ecto pool size of 10 you’ll have 11 connections per node.
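So with the default Postgres notifier, the per-node count is pool_size + 1 no matter how many queues are defined; a config sketch with assumed names:

```elixir
# config/config.exs (sketch, names assumed)
# The default Oban.Notifiers.Postgres notifier holds a single
# LISTEN/NOTIFY connection outside the Repo pool, independent of
# how many queues are configured.
config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [default: 10, mailers: 5]
```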