Lifecycle of a Plug.Conn process owner

Hi everyone,

TL;DR
I ran into a puzzling issue with a Plug project where, in the production environment (but not locally), my process seemed to exit a few lines after I called Conn.send_resp/3, in the middle of a function. Is this the nominal behavior, and if so, where can I find documentation about the lifecycle of my Plug.Conn process owner?

Some context
I’m currently working on a reverse proxy implementation based on https://github.com/slogsdon/elixir-reverse-proxy. This reverse proxy will be used in https://github.com/applidium/pericles to replace the existing Ruby proxy, which suffers from slow clients. More details on Pericles and its proxy are available here: https://www.slideshare.net/FABERNOVELTECHNOLOGIES/apidays-2018-api-development-lifecycle-the-secret-ingredient-behind-restful-apis-specifications.

The issue
I added a step after process_response (https://github.com/slogsdon/elixir-reverse-proxy). This last step handles the reporting: it stores the request/response in a database so it can later be matched against a JSON schema. It looks like this:

method
|> client.request(url, body, headers, timeout: 5_000, follow_redirect: true)
|> process_response(conn)
|> create_report()

When I deployed to production, the proxy played its role by sending back the data, but no report was created. Execution seemed to stop somewhere in the middle of my create_report function.

I added a log on every line and found that execution always stopped at the same place, the line initializing my Ecto schema. I tried wrapping it in a rescue and adding a catch :exit, but neither was ever triggered. After a few hours of debugging I tried commenting out the Conn.send_resp line in process_response, and that magically fixed everything.

I solved my problem by generating the report first and then sending the response.
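
In code, the fix was essentially to swap the last two steps, something along these lines (a rough sketch of the pipeline, not the exact production code):

response = client.request(method, url, body, headers, timeout: 5_000, follow_redirect: true)

# store the report while the request process is still doing work
create_report(response)

# only then send the response back to the client
process_response(response, conn)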

Would anyone have an explanation for this issue, or even better, documentation describing the situation I faced?


Are you using the cowboy (default) Plug adapter? If so, it might be related to how cowboy handles HTTP requests – it starts one process for each request. I’m not sure here, but I think that once you call send_resp, the cowboy process handling the request supposedly dies, since there is no more work left for it to do. To be sure, you can read the source code for :cowboy_req.reply (it is called from Plug’s send_resp, and the branch that gets executed just builds the response).
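
If you want to check that empirically, a tiny throwaway plug like the one below (module and log messages are made up) could show whether the same process keeps running after send_resp – compare the pids and see whether the second log line ever appears:

defmodule LifecycleProbe do
  import Plug.Conn
  require Logger

  def init(opts), do: opts

  def call(conn, _opts) do
    # log the pid handling the request and the connection owner
    Logger.info("before send_resp: self=#{inspect(self())} owner=#{inspect(conn.owner)}")
    conn = send_resp(conn, 200, "ok")
    # if the process is killed right after replying, this line never shows up
    Logger.info("after send_resp: self=#{inspect(self())}")
    conn
  end
end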

I added a log on every line and found that execution always stopped at the same place, the line initializing my Ecto schema.

This is strange, though. What do you mean by “initializing my Ecto schema”? What happens on the line right after?

UPD

I’m not sure here, but I think that once you call send_resp, the cowboy process handling the request supposedly dies, since there is no more work left for it to do.

Seems like I was wrong about that (from the Cowboy documentation):

Cowboy implements the keep-alive mechanism by reusing the same process for all requests. This allows Cowboy to save memory.

So you can safely ignore me. :man_facepalming:

The process per request approach was introduced in cowboy 2.


Thanks for the answer!

The process per request approach was introduced in cowboy 2.

I am indeed using Plug.Adapters.Cowboy2, so this process-per-request behavior could explain it. Thanks a lot for the link to the documentation.
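
For reference, the proxy is started with the Cowboy 2 adapter roughly like this (ReverseProxy.Router stands in for my actual plug module):

Plug.Adapters.Cowboy2.http(ReverseProxy.Router, [], port: 4000)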

This is strange, though. What do you mean by “initializing my Ecto schema”? What happens on the line right after?

My create_report function (with debug logs) looked like this:

Logger.info("Reporting")
decoded_body = decode_body(response)
Logger.info("Decoded body #{decoded_body}")
changeset = Report.changeset(%Report{}, %{
...
}
Logger.info("Changeset #{inspect(changeset)"}

And in my several attempts I always got the first two logs, but never the last one. Maybe it was pure coincidence that Cowboy2 always terminated the process at the same line, but I started to wonder whether the Erlang scheduler waited for the first call to an external module before terminating my process.
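
For what it's worth, another way I could have side-stepped the ordering issue entirely is to run the reporting in a separately supervised task, so it no longer depends on the request process staying alive after send_resp. A minimal sketch, assuming a Task.Supervisor named ReportTaskSupervisor (a name I made up) is started in the application's supervision tree:

# in the application supervision tree
children = [
  {Task.Supervisor, name: ReportTaskSupervisor}
]
Supervisor.start_link(children, strategy: :one_for_one)

# in the proxy, after building the response
Task.Supervisor.start_child(ReportTaskSupervisor, fn ->
  create_report(response)
end)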