Phoenix application statistics

I am sure someone has already asked this question many times; pardon me for asking it again. This question comes from someone with decent experience writing Elixir but almost no knowledge of the BEAM and related internals.

We have a Phoenix application, https://github.com/evercam/evercam-server, that mostly deals with CCTV cameras. My main reason for posting here is to gather information on how I can collect my application's internal stats.

  1. We have almost 1000 cameras, and an HTTP request cycle works like this.

While using HTTPoison with Hackney, the first request goes to the camera for a JPEG, and the next request sends that JPEG to SeaweedFS (a distributed file system). Both requests can fail, and both can take time.
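As a hedged sketch, the two-hop flow could look roughly like this with HTTPoison (the module name, URLs, and timeout are made up; both hops return tagged tuples so either failure can be handled):

```elixir
# Hypothetical sketch of the two-hop flow: fetch a JPEG from the camera,
# then push it to SeaweedFS. Module name, URLs, and timeout are made up.
defmodule Snapshot do
  def store(camera_url, seaweed_url) do
    with {:ok, %HTTPoison.Response{status_code: 200, body: jpeg}} <-
           HTTPoison.get(camera_url, [], recv_timeout: 5_000),
         {:ok, %HTTPoison.Response{status_code: status}} when status in 200..299 <-
           HTTPoison.post(seaweed_url, jpeg, [{"Content-Type", "image/jpeg"}]) do
      :ok
    else
      # Either hop failed: non-2xx status, timeout, or connection error.
      error -> {:error, error}
    end
  end
end
```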

What I am looking for here is a way to monitor all my HTTP requests.

I have been reading this: http://big-elephants.com/2019-05/gun/, which lists some downsides of Hackney that I was unaware of.

There are also https://github.com/elixir-mint/mint, Tesla, and HTTPotion.

How can I use any of these libraries while also keeping track of all the HTTP requests?

In one of my applications, I make a lot of outgoing requests. I use Prometheus + Grafana for monitoring, and the Prometheus Plug package already covers the incoming requests.

For the outgoing requests, I wrote a small wrapper around HTTPoison which emits :telemetry events, which I then use to feed those requests into the Prometheus metrics.

Is that all open source, or on GitHub?

No, but I can see if I can extract some code to demonstrate it.

Yes please that would be awesome.

Starting with my deps. There might be newer versions than the ones I declared here, but that doesn’t matter too much.

{:httpoison, "~> 1.5"},
{:telemetry, "~> 0.4"},
{:prometheus_ex, "~> 3.0"},
{:prometheus_plugs, "~> 1.1"},

I then have an HttpClient module with a single call/5 function.

  def call(method, url, body, headers, opts) do
    pool = Keyword.get(opts, :pool, :default)

    options = [ssl: [{:versions, [:'tlsv1.2']}], hackney: [pool: pool]]

    # Measure the duration in microseconds so it matches the
    # duration_microseconds field pushed to telemetry below.
    start = :erlang.monotonic_time(:microsecond)

    resp =
      case method do
        :get -> HTTPoison.get(url, headers, options)
        :post -> HTTPoison.post(url, body, headers, options)
      end

    stop = :erlang.monotonic_time(:microsecond)
    diff = stop - start

    with {:ok, response} <- resp,
         %HTTPoison.Response{status_code: status} when status in 200..299 <- response do
      push_to_telemetry(method, url, status, diff)
      {:ok, response}
    else
      %HTTPoison.Response{} = response ->
        push_to_telemetry(method, url, response.status_code, diff)
        {:error, response}

      {:error, %HTTPoison.Error{} = err} ->
        push_to_telemetry(method, url, 502, diff)
        {:error, err}

      err ->
        push_to_telemetry(method, url, 502, diff)
        {:error, err}
    end
  end
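A hypothetical call site for the call/5 wrapper above might look like this (the pool names and URLs are made up, and the pools would need to be configured in Hackney):

```elixir
# Fetch the JPEG from a camera, then push it to SeaweedFS,
# each through its own Hackney pool (pool names are made up).
{:ok, %HTTPoison.Response{body: jpeg}} =
  HttpClient.call(:get, "http://camera-17.local/snapshot.jpg", "", [], pool: :camera_pool)

{:ok, _resp} =
  HttpClient.call(:post, "http://seaweedfs.local/submit", jpeg,
    [{"Content-Type", "image/jpeg"}], pool: :storage_pool)
```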

Not the cleanest code, but it’s just enough wrapping for my needs. Note where it calls the push_to_telemetry/4 function; the function itself looks like this.

  defp push_to_telemetry(method, url, status, duration) do
    parsed_url = URI.parse(url)

    payload = %{
      authority: parsed_url.authority,
      fragment: parsed_url.fragment,
      host: parsed_url.host,
      path: parsed_url.path,
      port: parsed_url.port,
      query: parsed_url.query,
      scheme: parsed_url.scheme,
      duration_microseconds: duration,
      status: status,
      method: method
    }

    :telemetry.execute([:myapp, :http, :remote, :request], payload, %{})
  end

And off it goes to telemetry.
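For reference, URI.parse/1 (plain stdlib, no deps) splits a URL into exactly the fields used in the payload above; the camera URL here is made up:

```elixir
uri = URI.parse("http://cam-17.example.net:8080/snapshot.jpg?res=1080")

uri.scheme  # => "http"
uri.host    # => "cam-17.example.net"
uri.port    # => 8080
uri.path    # => "/snapshot.jpg"
uri.query   # => "res=1080"
```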

For the next part, I assume that you are familiar with telemetry and how people commonly set it up. There are quite a few articles out there describing the setup, even in combination with the Prometheus package, so I’ll just post my custom HttpInstrumenter module.

defmodule MyApp.Metrics.HttpInstrumenter do
  require Logger
  require Prometheus.Contrib.HTTP

  use Prometheus.Metric

  def setup() do
    Counter.declare(
      name: :myapp_http_remote_requests_total,
      help: "Total number of HTTP requests made to remote sources",
      labels: [:status_class, :status_code, :method, :host, :request_path, :scheme]
    )

    Histogram.declare(
      name: :myapp_http_remote_request_duration_microseconds,
      help: "The remote HTTP request latencies in microseconds",
      labels: [:status_class, :status_code, :method, :host, :request_path, :scheme],
      buckets: Prometheus.Contrib.HTTP.microseconds_duration_buckets()
    )

    events = [
      [:myapp, :http, :remote, :request]
    ]

    :telemetry.attach_many(__MODULE__, events, &handle_event/4, nil)
  end

  def handle_event([:myapp, :http, :remote, :request], payload, _metadata, _config) do
    labels = [
      Prometheus.Contrib.HTTP.status_class(payload.status),
      payload.status,
      format_method(payload.method),
      payload.host,
      payload.path,
      payload.scheme
    ]

    Counter.inc(
      name: :myapp_http_remote_requests_total,
      labels: labels
    )

    Histogram.observe(
      [
        name: :myapp_http_remote_request_duration_microseconds,
        labels: labels
      ],
      payload.duration_microseconds
    )
  end

  defp format_method(:get), do: "GET"
  defp format_method(:post), do: "POST"
  defp format_method(:put), do: "PUT"
  defp format_method(:delete), do: "DELETE"
  defp format_method(:info), do: "INFO"
end

That module translates the telemetry event into Prometheus metrics. If you don’t use Prometheus, the same approach can feed other metrics platforms.
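One wiring detail: setup/0 has to run once at boot, before anything makes HTTP requests. A hedged sketch of an application callback, assuming a standard Phoenix supervision tree (the MyApp/MyAppWeb names are placeholders):

```elixir
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # Declare the Prometheus metrics and attach the telemetry handler
    # before the supervision tree starts handling traffic.
    MyApp.Metrics.HttpInstrumenter.setup()

    children = [
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

The metrics are then scraped through whatever exporter you already have; with prometheus_plugs that is typically a small module that does `use Prometheus.PlugExporter`, plugged into the endpoint.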

I hope this gives you some inspiration.


Thanks a lot. I think this is exactly what I was looking for. :slight_smile:
