How does one approach gathering runtime statistics about function calls from modules that belong to a specific app?
In my app, using a combination of the `decorator` and `telemetry` packages, I am able to collect statistics about function calls like this:
```elixir
defmodule MyApp.FunctionTracing do
  use Decorator.Define, span: 0

  def span(body, context) do
    quote do
      metadata = %{
        function:
          "#{unquote(context.module)}.#{unquote(context.name)}/#{unquote(length(context.args))}"
      }

      :telemetry.span([:my_app, :function_call], metadata, fn ->
        {unquote(body), metadata}
      end)
    end
  end
end

defmodule MyApp.MyModule do
  use MyApp.FunctionTracing

  @decorate span()
  def create_session(args) do
    # ...
  end
end
```
This is then exposed via a family of `telemetry` packages in the form of a `GET /metrics` endpoint for a Prometheus collector to come and scrape.
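For context, the Prometheus side of this can be wired up roughly like the sketch below, assuming the `telemetry_metrics` and `telemetry_metrics_prometheus` packages (names and options are illustrative; adapt to whichever package family you actually use):

```elixir
# Somewhere in the application supervision tree.
# :telemetry.span/3 emits a [:my_app, :function_call, :stop] event on success,
# so counting those events yields a per-function call count, tagged by the
# :function key that the decorator puts into the span metadata.
children = [
  {TelemetryMetricsPrometheus,
   metrics: [
     Telemetry.Metrics.counter("my_app.function_call.stop.duration",
       tags: [:function]
     )
   ]}
]
```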
I can do the above manually for functions that I suspect may not be called. Could someone think of a way to scale this approach? E.g., is there a way I could tell the Elixir compiler: please, when compiling ALL modules that belong to my own app, wrap their functions in such a way?
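One step up in scale, if I understand the `decorator` package correctly, is its `@decorate_all` attribute, which applies a decorator to every function defined after it, so it would be one line per module rather than one per function (sketch, not full automation):

```elixir
defmodule MyApp.MyModule do
  use MyApp.FunctionTracing

  # Decorates every function defined below this line.
  @decorate_all span()

  def create_session(_args) do
    # ...
    :ok
  end

  def delete_session(_args) do
    # ...
    :ok
  end
end
```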
The end goal here is to highlight & eliminate dead code from a fairly large codebase. Having all functions instrumented this way, I’d deploy the code to staging or production, collect statistics for a week or two, then use them to base decisions about code cleanup.
Or is this too crazy of a thing to want for such a use case? If it’s too crazy, what would be a good alternative?
I’ve heard one could go a long way with tools already available in the Erlang VM / stdlib for tracing and such. If so, could someone point me in the right direction? At this point, all I’m interested in is counting all function calls coming from modules that belong to my own app.
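For what it’s worth, the VM ships a call-count profiler, `:cprof`, that does exactly this kind of counting without any code changes. A minimal sketch, run from a shell attached to the node (overhead is modest since it uses local breakpoints, but test before leaving it on in production for weeks):

```elixir
# Start counting calls for every function in one module.
# (:cprof.start/0 would instrument all loaded modules instead.)
:cprof.start(MyApp.MyModule)

# ... let traffic flow for a while ...

:cprof.pause()

# Per the Erlang docs, analyse/1 returns
# {module, module_call_count, [{{mod, fun, arity}, count}, ...]}.
{_mod, _total, per_function} = :cprof.analyse(MyApp.MyModule)

:cprof.stop()
```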
Ideally, after a week or two of “counting”, I could filter out all functions with 0 calls and begin cleaning up the code.
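Putting those pieces together, a rough sketch of listing never-called functions across the whole app (assuming `:my_app` is the OTP application name, and using `:cprof` as above):

```elixir
# All modules belonging to the :my_app OTP application.
{:ok, modules} = :application.get_key(:my_app, :modules)

Enum.each(modules, &:cprof.start/1)

# ... let the system run for the observation window, then:
:cprof.pause()

never_called =
  Enum.flat_map(modules, fn mod ->
    {^mod, _mod_count, funcs} = :cprof.analyse(mod)
    counted = Map.new(funcs)

    # Compare against the module's full function list so that functions
    # missing from the analysis are treated as having 0 calls. Note that
    # module_info(:functions) also lists compiler-generated functions
    # such as __info__/1 and module_info/0,1, which you'd want to ignore.
    for {fun, arity} <- mod.module_info(:functions),
        Map.get(counted, {mod, fun, arity}, 0) == 0,
        do: {mod, fun, arity}
  end)

:cprof.stop()
```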