I am looking for a way to benchmark and record metrics from a Phoenix/Elixir app as part of my prototyping/development process. I'd like to record and compare test runs against each other to see how they perform between iterations. Ideally, I could export the metrics to a `.csv` or `.json` file and import them into Pandas/Seaborn for visualization (or even possibly Livebook). `benchee` is great, but it is a better fit for benchmarking specific functions or modules than a whole application. I've been looking at `beamchmark`, but it relies on setting things up in an `.exs` file as a scenario.
My goal is to be able to compare and visualize changes to see whether there are performance improvements. For example (see the sketch after this list):
- Test Run 1:
  - `vm.memory.total`
  - `vm.total_run_queue_lengths.cpu`
  - … other metrics and custom telemetry
- Test Run 2:
  - `vm.memory.total`
  - `vm.total_run_queue_lengths.cpu`
  - … other metrics and custom telemetry
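Roughly, this is what I imagine the capture step looking like: a minimal sketch that listens to the VM events `:telemetry_poller` emits (Phoenix wires this poller up by default) and tags each sample with a run identifier. The handler id and run name are placeholders I made up:

```elixir
# vm.memory.total arrives as the :total measurement of the
# [:vm, :memory] event; the run-queue lengths come in on
# [:vm, :total_run_queue_lengths].
:telemetry.attach_many(
  "test-run-recorder",              # handler id (placeholder)
  [
    [:vm, :memory],
    [:vm, :total_run_queue_lengths]
  ],
  fn event, measurements, _metadata, config ->
    # Just print for now; a file-logging version is sketched below.
    IO.inspect({config.run, event, measurements}, label: "sample")
  end,
  %{run: "test-run-1"}              # tag each sample with the run id
)
```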
What I am considering as an approach is to record some of the metrics produced by `phoenix_live_dashboard` or `:telemetry` and log them to a file for the test run. Exporting the telemetry via the `prom_ex` Prometheus exporter packages all the data points nicely, but I'd need to scrape them with Prometheus and then record them to a file. Alternatively, I was considering using the `telemetry_influxdb` reporter to record the telemetry events to InfluxDB.
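For the file-logging route, something like the following is what I have in mind: a rough sketch that appends each sample as one JSON object per line, so a run's file can go straight into Pandas via `pd.read_json(path, lines=True)`. The module name and file path are made up, and it assumes `Jason` is available (it is in a default Phoenix app):

```elixir
defmodule MetricsFileRecorder do
  @moduledoc """
  Rough sketch: append telemetry samples to a JSON-lines file,
  one object per line, with a timestamp. Names and paths are
  placeholders.
  """

  def attach(path) do
    :telemetry.attach_many(
      "metrics-file-recorder",
      [[:vm, :memory], [:vm, :total_run_queue_lengths]],
      &__MODULE__.handle_event/4,
      %{path: path}
    )
  end

  def handle_event(event, measurements, _metadata, %{path: path}) do
    sample = %{
      event: Enum.join(event, "."),
      time: System.system_time(:millisecond),
      measurements: measurements
    }

    File.write!(path, Jason.encode!(sample) <> "\n", [:append])
  end
end

# Per test run, e.g. in a setup block:
# MetricsFileRecorder.attach("run_1.jsonl")
```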
The reason I am initially leaning toward logging things to a file is that it seems easier to manipulate, review, and analyze after test runs, although I'm open to other ideas on what might enable easier/better analysis for rapid iterations during the development cycle.
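That file format would also allow a quick post-run review in IEx or Livebook without leaving Elixir, along the lines of this (again assuming the hypothetical `run_1.jsonl` from the sketch above):

```elixir
# Pull one run back in and narrow to a single event for eyeballing
# or charting in Livebook (e.g. with VegaLite).
"run_1.jsonl"
|> File.stream!()
|> Stream.map(&Jason.decode!/1)
|> Enum.filter(&(&1["event"] == "vm.memory"))
```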
Additionally, are there any examples of using performance metrics in test suites?