Filename and linenumber as comment in generated sql

I’m starting down a bit of a “side quest” at work. We’d like to tie a comment containing the filename and line number to each SQL query, for easier tracing from DB tools back into the Elixir source of the query. Has anyone else explored this path? It seems like a common enough desire, so the fact that it’s not already easy to do makes me concerned this will be hard. I’d expect some changes and/or config in Ecto to capture the caller position and pass that down into Postgrex.

If someone has made this work, I’d be very grateful for whatever you can share. Otherwise, I’ll be digging in with a co-worker to try to fit the pieces together.


Hey @gregvaughn if you search for stacktrace on this page here Ecto.Repo — Ecto v3.10.3 you’ll see that you can supply a stacktrace: true option to your repo config. When you do, your Ecto Repo telemetry events include the stack trace of where the query was called from. We use this both when logging queries and when we emit OTel traces related to queries.

Does this accomplish what you’re looking for?
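For concreteness, here’s a minimal sketch of that wiring: the repo option plus a telemetry handler that reads the stack trace from the event metadata. The app name `:my_app`, `MyApp.Repo`, and the handler id are assumed names, not anything from this thread:

```elixir
# config/config.exs -- assumed otp_app and repo names
config :my_app, MyApp.Repo,
  stacktrace: true

# In your application start code: attach a handler to the repo's query event.
# The event prefix [:my_app, :repo, :query] is derived from the repo name.
:telemetry.attach(
  "log-query-caller",
  [:my_app, :repo, :query],
  fn _event, _measurements, metadata, _config ->
    # With stacktrace: true, metadata[:stacktrace] holds the stack trace
    # captured at the repo call site.
    case metadata[:stacktrace] do
      nil -> :ok
      stacktrace -> IO.puts(Exception.format_stacktrace(stacktrace))
    end
  end,
  nil
)
```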


hi @gregvaughn !

There was this at the Ecto mailing list a little bit ago: and the last comment mentioned GitHub - dkuku/sqlcommenter. Maybe @dkuku could give some insight?

Other mentions of comments in the mailing list:

I can definitely see how this can add sanity to a codebase :slightly_smiling_face:, please keep us informed!

@benwilson512 that’s awesome! TIL, thank you!

I wrote a blog post 2 days ago about how this package can be used: Enhancing SQL Traceability with Sqlcommenter in Elixir - DEV Community


Thanks for the replies!

@benwilson512 no, I wasn’t aware of that stacktrace setting. Very intriguing. It’s not exactly what we were looking for (based on some people’s prior tech stack familiarity) but it may ultimately serve the purpose.

@pdgonzalez872 thanks for those links. I didn’t even realize the ecto mailing list was still used. It brings up concerns about query caching in postgres.

@dkuku have you done any performance analysis of your technique in sqlcommenter? I see a couple of low-hanging Elixir optimizations, but if it bypasses the postgres query caching, that’s important to know.


I just wrote the library. It does escape the params, otherwise it would be possible to do SQL injection using custom params from, let’s say, a web request.
It’s a bit problematic to hook it into Ecto because you don’t have access to the underlying query; we’re probably missing a comment macro in Ecto. You may be able to achieve this using fragments?
The problem with my approach is that it returns a Postgrex result and not the actual data, so there is another step needed.
I also did not know about the stacktrace trick, but you can add the stacktrace inside the actual function:

  def all_traced(queryable, opts \\ []) do
    {metadata, opts} = Keyword.pop(opts, :metadata, %{})
    {:current_stacktrace, stacktrace} = Process.info(self(), :current_stacktrace)

    stacktrace =
      stacktrace |> Enum.drop(2) |> Enum.take(1) |> Exception.format_stacktrace() |> String.trim()

    {query, params} = __MODULE__.to_sql(:all, queryable)
    query = Sqlcommenter.append_to_query(query, put_in(metadata, [:stacktrace], stacktrace))
    __MODULE__.query(query, params, opts)
    # here a step is needed to convert the result from Postgrex.Result into Ecto structs
  end

Hello! I am curious if anyone has had any movement on this topic?

It seems there are some concerns around using prepared statements along with query comments that contain a bit of dynamic data, for example a request_id field. However, it seems the server will parse the query before creating and then caching the query plan, so my assumption is that the comments are discarded during the parsing stage and the planner will only see the query AST? Am I wrong in my understanding of how prepared queries are cached and how comments can affect the planner cache?
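My understanding matches that: Postgres strips comments during parsing, so the planner never sees them. The caching concern is on the client side instead: with the extended protocol, a prepared statement is keyed on the exact query text, so a comment containing a per-request value produces a distinct statement text every call. A sketch of a sqlcommenter-style comment builder to illustrate (this is an illustrative function, not dkuku’s actual API; the URL-encoded `key='value'` trailing-comment shape follows the sqlcommenter spec):

```elixir
defmodule CommentSketch do
  # Illustrative only: builds a sqlcommenter-style trailing comment.
  # Keys and values are URL-encoded and values single-quoted, per the spec.
  def append(sql, meta) do
    comment =
      meta
      |> Enum.sort()
      |> Enum.map_join(",", fn {k, v} ->
        "#{URI.encode_www_form(to_string(k))}='#{URI.encode_www_form(to_string(v))}'"
      end)

    sql <> " /*" <> comment <> "*/"
  end
end

# A stable value (file/line) yields one statement text, so one cached plan;
# a per-request value (request_id) yields a new statement text on every call,
# which defeats a client-side prepared-statement cache:
CommentSketch.append("SELECT * FROM users", file: "lib/accounts.ex", line: 42)
```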

At my company we abandoned the approach. We use Ecto telemetry plus Logger metadata to correlate the source code location with SQL in our application logs, rather than trying to do that in the database logs.
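For anyone who wants to try the same route, a sketch of the telemetry-plus-Logger wiring (the module, handler id, and event prefix are assumed names; it requires stacktrace: true on the repo so the event metadata includes the caller):

```elixir
defmodule MyApp.QueryLogger do
  require Logger

  def attach do
    :telemetry.attach(
      "query-source-logger",
      # Assumed otp_app/repo names; the prefix derives from the repo name.
      [:my_app, :repo, :query],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(_event, measurements, metadata, _config) do
    # Naively take the first stack frame; in practice you may want to drop
    # leading Ecto frames to reach the application call site.
    source =
      case metadata[:stacktrace] do
        [{_mod, _fun, _arity, location} | _] ->
          "#{location[:file]}:#{location[:line]}"

        _ ->
          "unknown"
      end

    Logger.info("query from #{source}",
      query: metadata[:query],
      duration_us:
        System.convert_time_unit(measurements[:total_time] || 0, :native, :microsecond)
    )
  end
end
```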

Basically what Ben said. I’ll go mark him as the solution now.