I have a pretty simple Phoenix API that takes CSV data as input, translates it into JSON objects, and then makes an HTTP request for each object. In normal use it's called with a small amount of data and works correctly and quickly.
Occasionally I need to upload (running the server locally) a batch of data representing hundreds of times more data than the usual use case. This produces hundreds of times more JSON objects, but the code path is exactly the same: it just sends those objects one at a time as individual HTTP requests.
All of this is just done in the controller. I’m not worried about getting a response in the batch case, I have plenty of logging and can see what’s going on there.
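For context, the shape of the loop is roughly this (a hypothetical sketch, not the real code: `parse_csv/1`, `import/1`, and the commented-out request are stand-ins; the actual logic lives in `import_controller.ex`):

```elixir
defmodule ImportSketch do
  # Turn a CSV string into a list of maps, one map per data row,
  # keyed by the header row. Naive split-based parsing for illustration.
  def parse_csv(csv) do
    [header | rows] =
      csv
      |> String.split("\n", trim: true)
      |> Enum.map(&String.split(&1, ","))

    Enum.map(rows, fn row -> header |> Enum.zip(row) |> Map.new() end)
  end

  # Synchronously send each object as its own HTTP request,
  # all inside the controller action handling the upload.
  def import(csv) do
    csv
    |> parse_csv()
    |> Enum.each(fn obj ->
      # HTTPoison.post!(url, Jason.encode!(obj),
      #   [{"content-type", "application/json"}])
      obj
    end)
  end
end
```

With the small everyday inputs this finishes in well under a second; with the batch upload the same loop runs for minutes inside the request process.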
But it seems that after about a minute the processing just stops. I get no more log messages, but also no error messages from either Elixir/Phoenix or the server I'm posting the objects to. Looking at Observer, nothing seems out of the ordinary: nothing spikes or anything like that, and the server remains responsive if I hit a different API route. If I remove the actual HTTP requests and add sleeps instead, the issue persists, which rules out my initial guess that I was hitting an issue with HTTPoison pools (or killing the server I'm making requests to).
My best guess is that Phoenix (or something else) has killed the process handling the request. Is this expected behaviour? Is there a way to configure things so this doesn't happen, or to increase how long my controller function stays alive?
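If a timeout is the culprit, this is the kind of setting I'm imagining being able to change. A sketch, assuming Plug.Cowboy (Cowboy 2.x) is serving the endpoint and that Cowboy's `idle_timeout` is the relevant knob; its default of 60,000 ms lines up suspiciously with "about a minute". The app and endpoint names are guesses based on the repo layout:

```elixir
# config/dev.exs (sketch; :callum_runs / CallumRunsWeb.Endpoint assumed)
config :callum_runs, CallumRunsWeb.Endpoint,
  http: [
    port: 4000,
    protocol_options: [
      # Cowboy 2 closes a connection after this many ms with no data
      # flowing on it; the default is 60_000.
      idle_timeout: :timer.minutes(10)
    ]
  ]
```

Even if that works, I suspect the more idiomatic fix is to move the batch work out of the request process entirely (e.g. a `Task` or background job) and return immediately, since I don't need the response in the batch case anyway.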
In case it helps, the endpoint I'm hitting is the `import` function in `import_controller.ex` in my callum_runs repo on GitHub. The logging that I see stop abruptly is this line: https://github.com/mcintyre94/callum_runs/blob/main/lib/callum_runs_web/controllers/import_controller.ex#L70