Preventing GenServer terminating errors from being reported to Sentry

On a regular basis I’m seeing errors reported to Sentry such as "GenServer #PID<0.106725.0> terminating".

The “crash reason” reported in Sentry is {%RuntimeError{message: "cannot fetch records from Kafka (topic=spaces partition=0 offset=520536). Reason: :not_leader_for_partition"}, ...}.

We filter out the above RuntimeError (see code below) so it’s not reported to Sentry, since it’s transient and resolves itself.

But we still end up with the GenServer error being reported.

It actually doesn’t look like an exception, so I’m not sure how we can filter it out.

Our working filtering code looks like this:

defmodule Shared.Infrastructure.Errors.EventFilter do
  @behaviour Sentry.EventFilter
  require Logger

  @spec exclude_exception?(Exception.t(), atom()) :: boolean()
  def exclude_exception?(exception, source) do
    (kafka_not_leader_for_partition_error?(exception) ||
       kafka_cannot_fetch_records?(exception) ||
       kafka_cannot_resolve_offset?(exception) ||
       invalid_path_error?(exception) ||
       malformed_request_error?(exception) ||
       no_route_error?(exception) ||
       invalid_query_error?(exception))
    |> tap(&maybe_log(exception, source, &1))
  end

  defp kafka_not_leader_for_partition_error?(exception) do
    exception_type?(exception, RuntimeError) &&
      message_includes?(exception, [
        "cannot fetch records from Kafka",
        "Reason: :not_leader_for_partition"
      ])
  end

  # <SNIP>

  defp maybe_log(exception, source, excluded) do
    if excluded do
      Logger.info(
        "Sentry exception excluded from being reported (source: #{inspect(source)}): #{inspect(exception)}"
      )
    end
  end
end
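
For completeness, the filter module is registered via Sentry’s filter config option, roughly like this (from memory, so the exact config key may differ between sentry-elixir versions):

config :sentry, filter: Shared.Infrastructure.Errors.EventFilter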

Any suggestions or pointers welcome…

Is this your GenServer crashing, or is it part of some library?

I think it’s a GenServer in Broadway, given the “crash reason”. (I don’t know where the crash reason comes from, but it looks like it’s the exception that causes the GenServer process to terminate.)
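
That intuition seems to check out: when a GenServer callback raises, the process exits with {exception, stacktrace}, and that tuple is what shows up as the crash reason. A quick standalone sketch (module name made up, not from our code):

defmodule CrashDemo do
  use GenServer

  @impl true
  def init(:ok), do: {:ok, nil}

  @impl true
  def handle_cast(:boom, _state) do
    # Simulate the transient Kafka error being raised inside a callback.
    raise "cannot fetch records from Kafka (...). Reason: :not_leader_for_partition"
  end
end

# Start unlinked so the caller survives the crash, then watch the exit reason.
{:ok, pid} = GenServer.start(CrashDemo, :ok)
ref = Process.monitor(pid)
GenServer.cast(pid, :boom)

receive do
  {:DOWN, ^ref, :process, ^pid, {%RuntimeError{} = error, _stacktrace}} ->
    IO.puts("crash reason: #{error.message}")
end

Running that also produces the same “GenServer #PID<...> terminating” log report, since the GenServer logs the crash before exiting.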

Ah okay, then this probably means you don’t have control over the process, so you can’t stop it from crashing on a somewhat expected error. In that case you probably want to filter this error from all logs, not just Sentry. Adding a primary logger filter might be the solution!
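
A rough sketch of what that could look like; the module name and the matching logic are made up here, so adapt them to your error. Attach the filter early, e.g. in Application.start/2:

defmodule Shared.Infrastructure.Errors.LogFilter do
  # Primary :logger filter: return :stop to drop a matching event,
  # :ignore to let other filters and handlers decide.
  def filter(%{msg: msg}, _opts) do
    if transient_kafka_error?(msg), do: :stop, else: :ignore
  end

  # Crash reports arrive as {:report, report}; plain messages as {:string, chardata}.
  defp transient_kafka_error?({:string, chardata}) do
    chardata |> IO.chardata_to_string() |> String.contains?("not_leader_for_partition")
  end

  defp transient_kafka_error?({:report, report}) do
    report |> inspect() |> String.contains?("not_leader_for_partition")
  end

  defp transient_kafka_error?(_other), do: false
end

# e.g. in Application.start/2:
:logger.add_primary_filter(
  :drop_transient_kafka_errors,
  {&Shared.Infrastructure.Errors.LogFilter.filter/2, []}
)

Since primary filters run before any handler, a dropped event never reaches the console logger or Sentry’s logger integration.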