I’m testing a Phoenix application that basically inserts a random number into an :ets table and into a PostgreSQL database.
I’m getting the errors below when I run a benchmark with siege or wrk:

> Postgrex.Protocol (#PID<0.2622.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.23411.2> exited

(not sure why this happens), as well as:

> Connection not available and request was dropped from queue after 131ms. You can configure how long requests wait in the queue using :queue_target and :queue_interval

I believe this is due to the pool size (currently 30). How can I fine-tune it for a site that will receive about 1–2k requests per second?
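Not an answer, but for anyone landing here, this is where those knobs live. A minimal sketch with hypothetical app/repo names and values; the rough sizing rule is pool_size ≈ requests/sec × average query time in seconds:

```elixir
# config/prod.exs — hypothetical names and values, tune for your workload.
# At 2k req/s with ~5 ms queries you only need ~10 busy connections, so a
# pool of 30 may already be enough; raising queue_target/queue_interval
# makes DBConnection tolerate short bursts instead of dropping requests
# from the queue after ~131 ms as in the error above.
import Config

config :my_app, MyApp.Repo,
  pool_size: 30,
  queue_target: 500,      # ms a checkout may wait before the queue counts as slow
  queue_interval: 2_000   # ms window over which queue_target is evaluated
```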
I’ve also read a number of posts about the BEAM and memory leaks. Is that something I should worry about with Phoenix? At its core is the Cowboy server, which relies on Ranch and spawns a process for each request; is that process garbage collected once the request is done? And can I use SweetXml without anything to worry about? I read somewhere about a possible memory leak.
Also, I plan on cleaning the :ets table every 2–5 minutes, as I’m using it to store sessions (there’s no way to use the built-in Phoenix session here, since these are USSD requests, *num#). Each tuple has a field that stores a timestamp, and I intend to delete entries based on it using either :ets.select_delete/2 or :ets.match_delete/2. Which function is more efficient?
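For a "delete everything older than X" sweep, select_delete is the right tool: match_delete can only match exact patterns, while select_delete takes a match spec with guards (like `<` on the timestamp) and runs it inside the table without copying rows to the caller. A sketch, assuming a hypothetical `{session_id, data, inserted_at_ms}` tuple layout:

```elixir
# Hypothetical layout: {session_id, data, inserted_at_ms}.
table = :ets.new(:sessions, [:set, :public])
:ets.insert(table, {"old", :data, System.system_time(:millisecond) - 600_000})
:ets.insert(table, {"fresh", :data, System.system_time(:millisecond)})

# Delete every row whose third element is older than 5 minutes.
cutoff = System.system_time(:millisecond) - 300_000

deleted =
  :ets.select_delete(table, [
    # match any 3-tuple, bind the timestamp to $1, delete when $1 < cutoff
    {{:_, :_, :"$1"}, [{:<, :"$1", cutoff}], [true]}
  ])
```

You’d typically run this from a GenServer that ticks every few minutes with `Process.send_after/3`.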
For the HTTP client, I plan on using either hackney or httpc (does httpc support HTTP/2?) to POST some XML and extract a few values with the xpath support provided by SweetXml, disabling connection pooling as it won’t be necessary in this instance. Anything to note here?
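The extraction step you describe can be sketched like this. SweetXml’s `~x` sigil wraps OTP’s :xmerl, so to keep the sketch dependency-free it uses :xmerl_xpath directly on a made-up response (the XML shape, element names, and value are all hypothetical); with SweetXml the last three lines collapse to one `xpath/2` call:

```elixir
# Hypothetical USSD-gateway XML response; in the real app this would be
# the body returned by the hackney/httpc POST.
xml =
  ~s(<response><session id="abc123"><msisdn>233200000000</msisdn></session></response>)
  |> String.to_charlist()

# Parse and run an XPath query with OTP's built-in :xmerl.
{doc, _rest} = :xmerl_scan.string(xml)
[text_node] = :xmerl_xpath.string(String.to_charlist("//msisdn/text()"), doc)

# The xmlText record stores its value (a charlist) in element 4.
msisdn = text_node |> elem(4) |> List.to_string()
```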
Thanks, anxious and curious newbie.