Handling validation of function parameters

I’m working on the next iteration of the API for my nerves_neopixel library and I’m curious how people feel about having an API where there are descriptive error types for invalid arguments as opposed to just using some basic guard clauses and/or just taking whatever the client passes in and “letting it crash” if they put in something that turns out to be invalid?

Should I move the validations inside the GenServer callbacks? I was trying to convince myself that that’s “private” and I should only worry about stuff coming through this public API.

This GenServer is ultimately calling a C port (that I wrote, so I can change it) which re-validates things, and that got me thinking: should I just remove all the other validation and have the C code return an error message that gets passed all the way up the stack and back to the client?

  @doc """
  Set the color of the pixel at a given point on the virtual canvas.
  """
  @spec set_pixel(Point.t(), Color.t()) ::
          :ok
          | {:error, :invalid, :point}
          | {:error, :invalid, :color}
  def set_pixel(%Point{} = point, %Color{} = color) do
    with :ok <- validate_point(point),
         :ok <- validate_color(color) do
      GenServer.cast(HAL, {:set_pixel, point, color})
    end
  end

  # ...

  defp validate_point(point, tag \\ :point)
  defp validate_point(%Point{x: x, y: y}, _tag) when x in 0..65535 and y in 0..65535, do: :ok
  defp validate_point(_point, tag), do: {:error, :invalid, tag}

  defp validate_color(%Color{r: r, g: g, b: b, w: w})
       when r in 0..255 and g in 0..255 and b in 0..255 and w in 0..255,
       do: :ok
  defp validate_color(_), do: {:error, :invalid, :color}

My preference is to validate external data and turn it into a “known” data structure at the edges, then let it crash once you know you have good data. I think this applies most strongly to truly external data such as console input or TCP data, but it also applies to the public API of your library.
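A minimal sketch of that edge-validation idea, reusing the `Color` shape from the snippet above (the constructor name `new/4` and returning a tagged tuple are my assumptions, not the library’s actual API): parse untrusted values into a known struct once, then pass the struct around without re-checking.

```elixir
defmodule Color do
  defstruct r: 0, g: 0, b: 0, w: 0

  # Header clause declares the default for the optional white channel.
  def new(r, g, b, w \\ 0)

  # Happy path: all channels in range, so build the "known good" struct.
  def new(r, g, b, w)
      when r in 0..255 and g in 0..255 and b in 0..255 and w in 0..255 do
    {:ok, %__MODULE__{r: r, g: g, b: b, w: w}}
  end

  # Anything else is rejected at the edge with a descriptive error.
  def new(_r, _g, _b, _w), do: {:error, :invalid, :color}
end
```

Downstream code that receives a `%Color{}` can then pattern-match on it and “let it crash” without repeating the range checks.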

I personally don’t care too much about the error message from a library. E.g. :badarg is fine, since I can then look up the documentation.

Whether or not you “let it crash” on bad input depends on the consequences of the crash, and on whether it can only happen during development or also in production. It is used quite a bit in OTP libraries, so I guess it’s OK to do so :smiley:


I think client/server models where invalid client input cannot be detected until it’s arrived at the server (e.g. servers that wrap external programs) should raise in the client.

It’s the client that has bad data and should crash and be restarted in a known good state, not your server; and the backtrace should include the call site where they invoked your client with bad data. So I would propagate the error as a reply in your handle_call callbacks.
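A sketch of raising in the client (the module and function names here are hypothetical, and the server is passed in explicitly so the example is self-contained): the public function makes the call, and if the server replies with an error, it raises in the *caller’s* process, so the backtrace includes the bad call site rather than taking the server down.

```elixir
defmodule Neopixel.Client do
  # `server` would normally default to the HAL name used in the thread.
  def set_pixel!(server, point, color) do
    case GenServer.call(server, {:set_pixel, point, color}) do
      :ok ->
        :ok

      {:error, :invalid, field} ->
        # Raise here, in the calling process, with a descriptive message.
        raise ArgumentError, "set_pixel!/3 received an invalid #{field}"
    end
  end
end
```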

This implies that your analogous handle_cast callbacks with invalid data would simply fail silently, instead of crashing the server. If that doesn’t sound like the behaviour you’re looking for, then you probably do want to take down the server in both cases.

Yeah, I was planning to change the handle_cast to handle_call as well, since I’d want to get back a response about whether it worked. I had been using handle_cast initially because I had the Port code set up to either just work or crash the Port program, so there was never anything coming back as a response. I think in addition to getting back error messages, I would want to use calls so I have back-pressure from the C port and don’t overwhelm the GenServer mailbox with casts.
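Here’s a hedged sketch of that cast-to-call move. The port handling is stubbed out (opening the C port and the wire format are assumptions, and plain maps stand in for the `%Point{}`/`%Color{}` structs so the example is self-contained); the point is that `handle_call` returns the validation result to the caller, and each caller blocks until its request is served, which is exactly the back-pressure mentioned above.

```elixir
defmodule Neopixel.HAL do
  use GenServer

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  @impl true
  def init(:ok) do
    # A real implementation would open the C port here, e.g.:
    #   port = Port.open({:spawn_executable, path}, [:binary, packet: 2])
    {:ok, %{port: nil}}
  end

  @impl true
  def handle_call({:set_pixel, point, color}, _from, state) do
    with :ok <- validate_point(point),
         :ok <- validate_color(color) do
      # A real implementation would forward the command to the port here,
      # e.g. Port.command(state.port, frame), and relay its response.
      {:reply, :ok, state}
    else
      {:error, :invalid, _field} = error -> {:reply, error, state}
    end
  end

  defp validate_point(%{x: x, y: y}) when x in 0..65535 and y in 0..65535, do: :ok
  defp validate_point(_), do: {:error, :invalid, :point}

  defp validate_color(%{r: r, g: g, b: b, w: w})
       when r in 0..255 and g in 0..255 and b in 0..255 and w in 0..255,
       do: :ok

  defp validate_color(_), do: {:error, :invalid, :color}
end
```

With `call`, a client that floods the server can only ever have one request in flight per process, whereas `cast` would let the mailbox grow without bound.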

I love descriptive errors, and you can still raise the match exception or function head exception with such messages too. :slight_smile:

But otherwise, yep, agreed with all of this. :slight_smile: