Proxy vs API for obtaining request metrics from different locations

Bear with me.

I originally started building an availability monitor with Elixir. I had a great system set up: streaming rows from a Postgres DB, using Stream to perform requests concurrently, build up results, and chunk them into a bulk insert query. 9 lines of code. Done. Beautiful.

Then I discovered that the HTTP request libraries in Erlang/Elixir aren't that interested in lower-level details such as DNS lookup, TLS handshake, and so on.

Turns out Go has some great libraries for this kind of tracing. Obviously, I could perform requests using curl, but I don't want to drop down to system commands and deal with the many headaches my other requirements (timeouts, retries, exponential backoff) would cause there. So I went with Go and the go-httptrace library. I also used go-fasthttp to build a simple API server.

I can have the Go server perform requests for me and send me back the metrics I need. For example, I send:

{"url": ""}

My API server returns:

{"dns_lookup": 10, "tls_handshake": 150, ... }

One of my requirements is to measure these metrics from different regions across the world. My Elixir worker will delegate requests to these servers based on the location configuration of that particular check (e.g. domain1 is checked from London and Carolina, domain2 from Amsterdam).

If this seems a bit naive, it's because it is - I'm new to this kind of thing. My thinking now is that what I actually need is a set of proxy servers in different regions, which would attach request/response metrics to the requests I make through them from my Elixir app.

Though the Elixir -> Go API server works, it seems to complicate the request/response cycle (now there's one in my Elixir app and another on the Go server), whereas a proxy would simply return custom headers I could use to obtain the metrics I cannot get from the Elixir HTTP request libraries.

Could somebody weigh in who has experience in this kind of thing? Would you run with an HTTP API or would you configure proxy servers?