Error tracking tools and "let it crash" philosophy

In Elixir and Phoenix applications, we tend to let it crash when there is an error and we can't return a meaningful response to the user.

E.g. a controller action may fetch a user by id with Users.get_user!(user_id) from a route param, and if no user is found, the Phoenix.Ecto library converts the resulting Ecto.NoResultsError into a 404 Not Found page.
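A minimal sketch of such a controller action (module and context names like MyApp.Users are hypothetical placeholders) might look like this:

```elixir
defmodule MyAppWeb.UserController do
  use MyAppWeb, :controller

  # Users.get_user!/1 raises Ecto.NoResultsError when no row matches.
  # The phoenix_ecto library implements the Plug.Exception protocol for
  # that error, so Phoenix renders a 404 instead of a 500.
  def show(conn, %{"id" => id}) do
    user = MyApp.Users.get_user!(id)
    render(conn, :show, user: user)
  end
end
```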

However, when using an error tracking tool such as AppSignal, Sentry, etc., couldn't that lead to many useless error reports, bloating the logs, e.g. when bots frequently request URLs with invalid route params?

And what about DDoS attacks against URLs with invalid route params? Won't that cause a lot of resource consumption through mass API calls to the chosen error tracking service?


Assuming correct configuration, Sentry or AppSignal won't see those errors, as they have already been handled by other means.

Those error trackers/aggregators are there to give you insight into errors that haven't been handled, or that have been sent to the tracker explicitly.
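For the "sent explicitly" case, as a sketch, you can report a rescued exception yourself with Sentry's capture API (risky_operation/0 is a hypothetical placeholder):

```elixir
try do
  risky_operation()
rescue
  e ->
    # Explicitly report a handled error to Sentry, with its stacktrace.
    Sentry.capture_exception(e, stacktrace: __STACKTRACE__)
    {:error, :operation_failed}
end
```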


Sentry (and, I'm assuming, most error tracking services) allows you to filter errors before they're sent to the third-party service.

You can set up your own custom filter with Sentry, as documented here.
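As a sketch of what such a custom filter can look like (module name MyApp.SentryEventFilter is a placeholder; note that newer versions of the sentry library may favor a before_send callback instead), you implement the Sentry.EventFilter behaviour and point the config at it:

```elixir
# config/config.exs
config :sentry, filter: MyApp.SentryEventFilter

defmodule MyApp.SentryEventFilter do
  @behaviour Sentry.EventFilter

  # Don't report routing errors raised from the Plug/Phoenix integration.
  def exclude_exception?(%Phoenix.Router.NoRouteError{}, :plug), do: true

  # Report everything else.
  def exclude_exception?(_exception, _source), do: false
end
```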

But even without any special configuration on your part, Sentry will by default use essentially the following filter (seen here). You can see that it ignores exceptions of type Phoenix.Router.NoRouteError as well as a few others.

defmodule Sentry.DefaultEventFilter do
  @behaviour Sentry.EventFilter

  @ignored_exceptions [Phoenix.Router.NoRouteError] # …others included here, too

  def exclude_exception?(%x{}, :plug) when x in @ignored_exceptions do
    true
  end

  # …etc.
end

This should all happen in memory and be quite efficient, so I don’t imagine it being a big concern from a DDoS standpoint.