Prometheus.io Elixir integrations

I’ve just released stable versions of my Prometheus Elixir libs:

The Elixir client is based on the Erlang client, which exports dozens of metrics such as VM memory metrics, VM system info metrics, Mnesia metrics, etc.

For Linux users there is also a process info collector.
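For anyone wanting to try these, the packages are on hex.pm; a minimal mix.exs sketch (the version constraints here are illustrative, pin whatever is current):

```elixir
# mix.exs -- deps for the Elixir client and the Plug integration.
# Package names are the hex.pm ones; versions are illustrative.
defp deps do
  [
    {:prometheus_ex, "~> 1.0"},    # Elixir client (wraps prometheus.erl)
    {:prometheus_plugs, "~> 1.0"}  # Plug exporter/instrumenters
  ]
end
```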

I would be glad to answer any questions!

17 Likes

For those new to monitoring (like myself), I wrote a blog post on how to set this up:

http://aldusleaf.org/monitoring-elixir-apps-in-2016-prometheus-and-grafana/

3 Likes

Thank you! Excellent post. Just a note: prometheus_plugs can also instrument a single plug or its own pipeline:

defmodule EnsureAuthenticatedInstrumenter do
  use Prometheus.PlugInstrumenter

  plug Guardian.Plug.EnsureAuthenticated

  def label_value(:authenticated, {conn, _}) do
    conn.status != 401
  end
end

Basically it uses Plug.Builder and is similar to it in this respect.
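Since it behaves like Plug.Builder, the instrumenter module above can be plugged into a router pipeline in place of the plug it wraps (router and pipeline names here are hypothetical):

```elixir
# Hypothetical router sketch: plug the instrumenter where the wrapped
# plug would normally go. It runs Guardian.Plug.EnsureAuthenticated
# internally and records the :authenticated label from the result.
defmodule MyAppWeb.Router do
  use Phoenix.Router

  pipeline :api do
    plug EnsureAuthenticatedInstrumenter
  end
end
```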

1 Like

I just pushed version 3.0.1 of Prometheus.erl with a huge performance update:

Run mix deps.update prometheus to update!

2 Likes

Update:

  • Prometheus.ex - Mnesia contrib (e.g. calculate the full space occupied by a table on disk);
  • Prometheus Plugs - content negotiation is now turned on, i.e. the exporter renders text when the request comes from a browser and Protobuf when it comes from the Prometheus server. The format is set to :auto by default.
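If you want to pin the format instead of negotiating it, that should be doable via the exporter's config; a sketch, assuming an exporter module defined with `use Prometheus.PlugExporter` (the module name MetricsPlugExporter is made up):

```elixir
# config/config.exs -- exporter settings (module name assumed).
# :auto negotiates text vs. Protobuf from the Accept header;
# set :text or :protobuf to force one format.
config :prometheus, MetricsPlugExporter,
  path: "/metrics",
  format: :auto,
  registry: :default
```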

I also have Slack channel (#prometheus) now: Browser link.


8 Likes

Just released new version:

  • Boolean metric;
  • Text format rendering optimization (> 30% faster).

Github: https://github.com/deadtrickster/prometheus.ex
Hex.pm: https://hex.pm/packages/prometheus_ex/1.3.0
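A quick sketch of how the new Boolean metric might be used (module and metric names are made up; the metric is exported as a 0/1 gauge):

```elixir
defmodule MaintenanceInstrumenter do
  use Prometheus.Metric

  # Call once at application startup; declare/1 tolerates restarts.
  def setup() do
    Boolean.declare([name: :my_app_maintenance_mode,
                     help: "Whether maintenance mode is enabled."])
  end

  # Flip the flag from wherever maintenance mode is toggled.
  def set(enabled?) do
    Boolean.set([name: :my_app_maintenance_mode], enabled?)
  end
end
```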

4 Likes

Many thanks for your work. It’s very useful to have integrations with Prometheus (even if I haven’t used Elixir in production).

1 Like

Thanks for your work on this. I found it pretty easy to get set up with, and the docs were helpful. The pull-based model Prometheus uses makes it really easy to start sending metrics with very little overhead.

To people evaluating open-source metrics solutions: I’d recommend it based on my early experience.

2 Likes

This is really cool @deadtrickster, thank you! Is there any kind of guide I can follow if I want to push my own metrics, which are more business-oriented, like user signups etc.? :slight_smile:

It’s all there in the README. For user signups, you probably want a counter. Lifting directly from the API docs, and doing some renaming:

defmodule UserSignupInstrumenter do

  use Prometheus.Metric

  ## to be called at app/supervisor startup.
  ## to tolerate restarts use declare.
  def setup() do
    Counter.declare([name: :my_service_user_signups_total,
                     help: "User signups count.",
                     labels: [:country]])
  end

  def inc(country) do
    Counter.inc([name: :my_service_user_signups_total,
                labels: [country]])
  end

end

Then from the places in your code where you handle user signups:

def handle_signup(user, other, stuff) do
  # create the user here
  country = MyGeoIPService.lookup_country(user)
  UserSignupInstrumenter.inc(country)
end

I made up a country label, so you can track user signups on a per-country basis. Not sure how useful it might be or not :slight_smile:
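As the comment in setup/0 says, the declaration should run at app/supervisor startup; one place to put it is your Application callback (the MyApp names are hypothetical):

```elixir
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # Declare metrics before any traffic arrives; declare/1 is
    # idempotent, so this survives supervisor restarts.
    UserSignupInstrumenter.setup()

    children = []
    Supervisor.start_link(children, strategy: :one_for_one,
                          name: MyApp.Supervisor)
  end
end
```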

Thanks @orestis!

The user instrumenter can be rewritten in a more declarative style starting from 1.4.1:

defmodule UserSignupInstrumenter do

  use Prometheus.Metric

  @counter [name: :my_service_user_signups_total,
            help: "User signups count.",
            labels: [:country]]

  def inc(country) do
    Counter.inc([name: :my_service_user_signups_total,
                labels: [country]])
  end

end

and UserSignupInstrumenter__declare_prometheus_metrics__() will be generated from that automatically. I found that useful when I have many metrics.

Cool thanks @orestis & @deadtrickster, appreciate it, gonna give it a try for our next project :smile:.

And don’t hesitate to join our #prometheus channel on the elixir-lang Slack team!

2 Likes

Starting from version 2.1, Cowboy has a special metrics stream handler - https://github.com/ninenines/cowboy/blob/master/src/cowboy_metrics_h.erl.
The Prometheus Cowboy integration now uses this interface to instrument Cowboy directly.
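For reference, wiring that up by hand looks roughly like this (a sketch based on the cowboy_metrics_h interface; the callback name is assumed from the prometheus-cowboy package, which does the equivalent for you):

```elixir
# Install cowboy_metrics_h in front of the default stream handler and
# point its callback at the Prometheus instrumenter. `dispatch` is an
# already-compiled cowboy_router dispatch table (not shown here).
:cowboy.start_clear(:http, [port: 8080], %{
  env: %{dispatch: dispatch},
  stream_handlers: [:cowboy_metrics_h, :cowboy_stream_h],
  metrics_callback: &:prometheus_cowboy2_instrumenter.observe/1
})
```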

4 Likes

Q1:
I’m using

      {:prometheus_ex, "~> 2.0"},
      {:prometheus_phoenix, "~> 1.2.0"},

and doc for https://github.com/deadtrickster/prometheus-phoenix/tree/v1.2.0 states that it has 2 metrics:

  • phoenix_controller_call_duration_microseconds - Whole controller pipeline execution time.
  • phoenix_controller_render_duration_microseconds - View rendering time.

Instead, the /metrics page shows me:

# TYPE phoenix_controller_call_duration_microseconds histogram
# HELP phoenix_controller_call_duration_microseconds Whole controller pipeline execution time in microseconds.
# TYPE phoenix_controller_render_duration_microseconds histogram
# HELP phoenix_controller_render_duration_microseconds View rendering time in microseconds.
# TYPE phoenix_channel_receive_duration_microseconds histogram
# HELP phoenix_channel_receive_duration_microseconds Phoenix channel receive handler time in microseconds
# TYPE phoenix_channel_join_duration_microseconds histogram
# HELP phoenix_channel_join_duration_microseconds Phoenix channel join handler time in microseconds

Where do they come from? They have no values. After upgrading to prometheus_phoenix 1.2.1, the metrics are the same but WITH values. Where does all that come from?

Q2:

My config for PhoenixInstrumenter:

config :prometheus, AppWeb.PhoenixInstrumenter,
  controller_call_labels: [:controller, :action],
  duration_buckets: [10, 25, 50, 100, 250, 500, 1000, 2500, 5000,
                     10_000, 25_000, 50_000, 100_000, 250_000, 500_000,
                     1_000_000, 2_500_000, 5_000_000, 10_000_000, 20_000_000, 30_000_000],
  registry: :default,
  duration_unit: :microseconds

Why is the key called controller_call_labels? Other sample configs have something like

config :prometheus, AppWeb.PipelineInstrumenter,
  labels: [:status_class, :method, :host, :scheme, :request_path],

but not controller_call_labels. How is it set?

Q3:
If I add/remove values in duration_buckets, the changes only show up after a certain period of time. Are they cached somewhere?

Thanks ahead for the answers!

Q1: I probably forgot to update the docs; the new metrics are for Phoenix channels.
Q2: Because you can have different sets of labels for each instrumenter.
Q3: Where/how do you observe the changes?

@deadtrickster thank you for the answers!
I observe the changes on the /metrics page. Just now I left:

config :prometheus, AppWeb.PipelineInstrumenter,
  labels: [:status_class, :method, :host, :scheme, :request_path],
  duration_buckets: [10, 100, 1_000, 10_000, 100_000,
                     5_000_000, 10_000_000, 20_000_000, 30_000_000],

and the metrics still have:

 http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="10.0"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="100.0"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.0e3"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.0e4"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.0e5"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="3.0e5"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="5.0e5"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="7.5e5"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.0e6"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.5e6"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="2.0e6"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="3.0e6"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="5.0e6"} 0
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="1.0e7"} 1
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="2.0e7"} 1
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="3.0e7"} 1
http_request_duration_microseconds_bucket{status_class="success",method="GET",host="localhost",scheme="http",request_path="/",le="+Inf"} 1
http_request_duration_microseconds_count{status_class="success",method="GET",host="localhost",scheme="http",request_path="/"} 1

Another question, Q4:
I have a different instrumenter which doesn’t show up on the /metrics page:

defmodule AppWeb.ExampleInstrumenter do
  use Prometheus.Metric

  @histogram [name: :http_request_duration_ms,
              labels: [:method],
              buckets: [100, 300, 500, 750, 1000],
              help: "Http Request execution time"]

  def instrument(%{time: time, method: method}) do
    Histogram.observe([name: :http_request_duration_ms, labels: [method]], time)
  end
end

It’s included in the Endpoint’s instrumenters:

config :app, AppWeb.Endpoint,
  instrumenters: [AppWeb.ExampleInstrumenter]

Where else should it go? The /metrics page doesn’t have the http_request_duration_ms metric.

Thanks again!

Hi @deadtrickster!
The problem with {:prometheus_phoenix, "~> 1.2.0"} was not only that 2 metrics were missing from the docs. The problem was that until I used version 1.2.1, the /metrics page wasn’t showing any data for Phoenix.

Hi @deadtrickster. Thanks for all the replies.
I use your library, and I need to add a label to all metrics. I need to add ‘hostname’, and before the response I set this value (in a handler). My question is: is there a configuration for this, i.e. to add a label to all default metrics?

Thanks!

Hernán-