Some Sentry errors from LiveView have no context

Most of our production errors in Sentry have the expected context: HTTP request, params, cookies, etc. This is great. Some errors do not, which makes them impossible to diagnose.

For example, I have this particular view that crashes under circumstances yet unknown. I’m getting events with the context and the following stacktrace:

This has all the relevant information.

The “same” exception occurs with a different call stack, and then I get no information:

Using LiveView 0.18.2.

What is going on?
What can be done about it?



Sentry uses the process dictionary to store context, similar to Logger. The Sentry plug sets the context on the process that handles the HTTP request and the first LiveView render. I guess your problem arises when the error happens in the LiveView’s channel process (apparently in handle_info), which has no context set by Sentry.

def mount(_params, _session, socket) do
  ctx = Sentry.Context.get_all()

  if connected?(socket) do
    IO.inspect(ctx, label: "live view channel context")
  else
    IO.inspect(ctx, label: "live view first http request context")
  end

  {:ok, socket}
end
If that’s indeed the case, then you can probably add an on_mount hook that copies the context from the HTTP request into the channel process.
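A minimal sketch of that idea, assuming sentry-elixir’s Sentry.Context API. Since the process dictionary isn’t shared between processes, the context has to travel through something both processes can see, e.g. the session. The module and key names here are hypothetical:

```elixir
# Hypothetical plug: runs in the HTTP request process (after the Sentry plug
# has populated the pdict) and stashes the context into the session so the
# LiveView channel process can pick it up later. Assumes fetch_session has
# already run in the pipeline.
defmodule MyAppWeb.SentrySessionPlug do
  def init(opts), do: opts

  def call(conn, _opts) do
    Plug.Conn.put_session(conn, :sentry_context, Sentry.Context.get_all())
  end
end

# Hypothetical on_mount hook: in the connected (channel) mount, restore the
# stashed context into this process's pdict so crashes here get reported
# with the same user/extra info as HTTP errors.
defmodule MyAppWeb.SentryContextHook do
  import Phoenix.LiveView, only: [connected?: 1]

  def on_mount(:default, _params, session, socket) do
    with true <- connected?(socket),
         %{user: user, extra: extra} <- session["sentry_context"] do
      Sentry.Context.set_user_context(user)
      Sentry.Context.set_extra_context(extra)
    end

    {:cont, socket}
  end
end
```

You would then register the hook in the router or in each LiveView, e.g. `on_mount MyAppWeb.SentryContextHook` inside a `live_session`.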


If it helps, here is some code that we used in Bytepack to store LiveView information:

The audit context is passed to the context/model layer. The request context is stored in Sentry’s pdict.


Just to make this a bit more explicit: when things fail on the websocket connection, there is no HTTP request to have information about in the first place. Cookies (of the initial upgrade request) are also not exposed to channels. Relevant information would need to be fetched through other means, which I’m not sure the Sentry library has integrations for yet.
