I’ve been experiencing occasional timeouts in production with my queries. After going through the docs and a few resources here, I came up with the configuration below. Kindly advise whether it will suffice for production.
Configuration file
config :debitor, Debitor.Repo,
  url: database_url,
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
  timeout: 60_000
Repo call
Repo.all(query, timeout: 120_000)
Thanks.
I would probably leave the default timeout alone, and only increase it for specific Repo calls. Regularly needing to support queries up to 60 seconds seems like an issue.
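Concretely, that could look like the sketch below. It reuses Debitor.Repo and the env-var pattern from the post above; the config sets no :timeout, so the library default of 15 seconds applies to ordinary queries, and only the call that is known to be slow gets a longer limit. (slow_report_query is a hypothetical name for illustration.)

```elixir
# Sketch: leave :timeout at its default (15_000 ms) in the Repo config.
config :debitor, Debitor.Repo,
  url: database_url,
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")

# Raise the timeout only for the specific slow call:
Repo.all(slow_report_query, timeout: 120_000)
```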
@benwilson512, thanks a lot sir, you’ve been a great source of inspiration and a mentor. I’m most grateful.
How do I use queue_timeout and queue_target, and what’s the difference? I haven’t been successful in finding any material on queue_target. How do I use it with Ecto? Like the other options, should it be configured on the Repo?
Not the most intuitive documentation location, so here’s the link explaining those config options: DBConnection — db_connection v2.6.0
Thanks for the documentation. I’m trying to wrap my head around those parameters. How does DBConnection fit into the Ecto architecture? As for the queue options, I believe I have to set them alongside the rest of the Repo configuration in the config file, like above.
Thanks.
db_connection is what ecto_sql builds on. It is the lower-level library that Ecto’s DB handling code (namely the ecto_sql library) wraps in a more convenient package for us.
And yes, you simply add them to the Repo configuration. Note that the actual DBConnection option names are queue_target and queue_interval; there is no queue_timeout. If you expect long transactions, then it’s worth simply increasing queue_target to something larger and ignoring the other option. In my hobby and professional apps I usually bump it all the way up to 5000 (ms) to give the app breathing room to wait for a connection from the pool under high load. Having Ecto error out in such a situation isn’t very useful to me, so I first increase queue_target and then start analysing why my transactions take so long.
Alternatively, you can just increase pool_size. I rarely accept the default of 10 and usually set it to 20.
EDIT: Here’s my Repo config in one of my hobbyist projects that does ingestion of public retail datasets:
config :retail, Retail.Repo,
username: "postgres",
password: "postgres",
database: "retail_dev",
hostname: "localhost",
pool_size: 20,
queue_target: 5000,
migration_timestamps: [type: :utc_datetime_usec]
It makes a lot of sense now. Thanks a lot for the awesome explanation as well as the configuration details. I will apply the changes to my production config file and monitor.