01:00:15.016 [error] Postgrex.Protocol (#PID<0.2310.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.26161.221> timed out because it queued and checked out the connection for longer than 15000ms
#PID<0.26161.221> was at location:
:prim_inet.recv0/3
(postgrex) lib/postgrex/protocol.ex:2837: Postgrex.Protocol.msg_recv/4
(postgrex) lib/postgrex/protocol.ex:1879: Postgrex.Protocol.recv_bind/3
(postgrex) lib/postgrex/protocol.ex:1771: Postgrex.Protocol.bind_execute/4
(db_connection) lib/db_connection/holder.ex:316: DBConnection.Holder.holder_apply/4
(db_connection) lib/db_connection.ex:1255: DBConnection.run_execute/5
(db_connection) lib/db_connection.ex:1342: DBConnection.run/6
(db_connection) lib/db_connection.ex:539: DBConnection.parsed_prepare_execute/5
I have been getting this error in production; it is the biggest contributor to my prod 5XX errors.
I tried searching around and found multiple solutions, but I am not sure which one to apply, and I don’t want to break the system by trying an incorrect one.
Here are the two solutions I found:
timeout: 60_000,
pool_timeout: 60_000
And
ownership_timeout: 60_000
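From what I found, all of these seem to go in the same place, the Repo config, though I’m not sure that’s right. A sketch of how I understand it (my_app and MyApp.Repo are placeholders for my actual app and repo):

config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  pool_size: 10,
  # first solution I found:
  timeout: 60_000,
  pool_timeout: 60_000,
  # second solution I found:
  ownership_timeout: 60_000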
Can you please help by explaining these options, which one to use, and where to set them?
It’s possible that you didn’t set up the config right. I’ve run into this issue several times before, and in every case I didn’t have to adjust the default timeout; it was just a bad config.
To start, you can try to connect to the DB manually from the CLI, using the same parameters you provided to Ecto.
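Or test the same parameters directly through Postgrex from IEx, bypassing Ecto entirely (host and credentials below are placeholders; use exactly what your Repo config has):

{:ok, pid} =
  Postgrex.start_link(
    hostname: "localhost",
    username: "postgres",
    password: "postgres",
    database: "my_app_prod"
  )

# If the parameters are wrong, this is where it fails loudly.
Postgrex.query!(pid, "SELECT 1", [])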
Before increasing those timeouts, have you investigated what operation is taking more than 15 seconds to complete? Usually that’s an indication of database issues, from missing indexes to queries that need optimization.
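One way to find the culprits is to attach a telemetry handler that logs slow queries. A sketch, assuming your repo is MyApp.Repo (so Ecto emits [:my_app, :repo, :query] events) and a 5-second threshold:

defmodule SlowQueryLogger do
  require Logger

  @threshold_ms 5_000

  def attach do
    :telemetry.attach(
      "slow-query-logger",
      [:my_app, :repo, :query],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(_event, measurements, metadata, _config) do
    # total_time is reported in native units; convert before comparing.
    total_ms = System.convert_time_unit(measurements.total_time, :native, :millisecond)

    if total_ms > @threshold_ms do
      Logger.warning("Slow query (#{total_ms}ms): #{metadata.query}")
    end
  end
end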
Or extra indexes. I just had a case of a migration with two duplicate indexes and a HUGE payload to insert. The B-tree didn’t have time to reshuffle between insertions, so inserts had to wait, hence the timeout.
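To spot duplicates like that, listing a table’s indexes is a good start. A sketch, assuming an Ecto SQL repo named MyApp.Repo and a placeholder table name:

MyApp.Repo.query!(
  "SELECT indexname, indexdef FROM pg_indexes WHERE tablename = $1",
  ["my_table"]
)

Two rows with near-identical indexdef means you’re paying for the same index twice on every insert.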
Thanks for the tip.
I removed the indices that I had created, and no more timeouts!
I was doing a massive load (~9,000,000 rows) into the database, in Livebook, through the method in this link.
After the load, I recreated the indices on the table.
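Roughly the shape of it (table, index, and column names are illustrative, and the chunked insert is my paraphrase, not the exact snippet from the link):

alias MyApp.Repo

# Drop the index first so the B-tree isn’t reshuffled on every insert.
Repo.query!("DROP INDEX IF EXISTS my_table_value_index")

# rows is an enumerable of maps; insert in chunks to keep
# each statement well under any query timeout.
rows
|> Stream.chunk_every(5_000)
|> Enum.each(fn chunk ->
  Repo.insert_all("my_table", chunk, timeout: :infinity)
end)

# Recreate the index once the data is in.
Repo.query!("CREATE INDEX my_table_value_index ON my_table (value)")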
I don’t remember exactly; it was something in the range of 6 to 10 hours (one of those tables was about 200 GB without indexes, so it might have been more than 100M rows). It doesn’t matter as long as your PG instance is configured accordingly and has the necessary resources to run. I left it running in prod and went home.