What pool_size in umbrella app with postgres schemas?

Hi folks,

I have an umbrella app that uses Postgres schemas to isolate the data for each OTP app. I’m unsure what the best approach is for defining the pool_size for each OTP app.

Say we have 7 OTP apps, each with a separate schema, and each app defines a pool_size of 5. That gives a total pool size of 7 × 5 = 35. With 3 server nodes, that becomes 3 × 35 = 105 connections.
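For context, here’s a minimal sketch of what that per-app configuration might look like in the umbrella’s shared config. The repo and app names (`AppOne.Repo` etc.) are hypothetical, and the `:after_connect` hook for setting the schema’s search_path is one common pattern with Postgrex, not necessarily what you’re doing:

```elixir
# config/config.exs at the umbrella root — hypothetical app/repo names.
import Config

# Each OTP app has its own Ecto repo pointing at the same database,
# isolated via its own Postgres schema (set through search_path).
config :app_one, AppOne.Repo,
  database: "umbrella_db",
  after_connect: {Postgrex, :query!, ["SET search_path TO app_one", []]},
  pool_size: 5

config :app_two, AppTwo.Repo,
  database: "umbrella_db",
  after_connect: {Postgrex, :query!, ["SET search_path TO app_two", []]},
  pool_size: 5

# …and so on for the remaining apps. With 7 repos at pool_size: 5,
# each node opens 7 × 5 = 35 connections; 3 nodes open 105 in total.
```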

My question is this: Is it normal to have a pool size this large? Or are there other best practices that I should be aware of?

Sorry if this is a stupid question - I’m not really sure what I’m doing here.


105 is entirely normal. I’ve gone well above that on one of my especially large sites (peaking above 40k concurrent users, though that was not with Elixir/BEAM).


You might be aware of this already (I think you sort of know what you’re doing ;)), but the more connections you have in the pool, the more resources your PostgreSQL database allocates. You might want to play with the work_mem setting in that case, though lowering it can also slow things down… Basically, the more connections you keep open and allow to talk to the database concurrently, the more RAM you need. 105 doesn’t sound bad, however.
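One thing worth sanity-checking on the server side: Postgres defaults max_connections to 100, so a combined pool of 105 would exceed an untuned server. A quick way to inspect this from iex, assuming a hypothetical `AppOne.Repo` (standard Postgres settings and views, nothing app-specific):

```elixir
# Run from `iex -S mix` inside the umbrella; AppOne.Repo is hypothetical.

# The hard cap on server connections (Postgres default: 100).
Ecto.Adapters.SQL.query!(AppOne.Repo, "SHOW max_connections", [])

# Per-sort/hash memory budget; each busy connection can use several
# multiples of this, so more connections means more RAM.
Ecto.Adapters.SQL.query!(AppOne.Repo, "SHOW work_mem", [])

# Connections currently open, grouped by database and user.
Ecto.Adapters.SQL.query!(
  AppOne.Repo,
  "SELECT datname, usename, count(*) FROM pg_stat_activity GROUP BY 1, 2",
  []
)
```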


Thanks folks. That puts my mind at rest!

That’s a very abstract question :slight_smile:
You might be running PG on a $5 droplet or on an 8-way box with a few TB of RAM.
You might have simple selects that use indexes and return a few rows, or you might be running complex queries against TB-sized tables with a bunch of joins.