Difference between using assert and pattern matching in a unit test

I came across the following code when browsing the FireZone GitHub repository:

  use ExUnit.Case, async: true

  alias FzVpn.Interface
  alias FzVpn.Server

  test "delete interface" do
    name = "wg-delete"
    :ok = Interface.set(name, %{})

    assert :ok == Interface.delete(name)
  end

If I understand correctly, the line :ok = Interface.set(name, %{}) will raise a MatchError if Interface.set(name, %{}) doesn’t return :ok. Why not simply use an assert statement, as is done in the very next line?

Probably because the first line cannot fail, so the author decided that a simple pattern match is better.

I think the question is more “why bother with the :ok = ?” I know there was some discussion about that around here recent-ish-ly, but I’m sorry, I can’t find it. It does bring the error (that should never happen) up to the test code, which is maybe convenient? It also does indicate, as already pointed out, that that line can’t (or at least shouldn’t ever) fail. Maybe there is something more obvious I’m missing.

Otherwise, I would caution against adding asserts to lines that aren’t the explicit subject under test, as it makes things less clear. I.e., in this case, Interface.set is not what is being tested; it’s just part of the setup. You could otherwise technically assert or refute every single line!

3 Likes

PS, welcome!

PS: you can assert on matches too.
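
For example (a sketch reusing the Interface module from the snippet above, not code from the Firezone repo), the setup line could itself be written as an assertion on the match:

  test "delete interface" do
    name = "wg-delete"

    # assert accepts a pattern match; on failure this reports an ExUnit
    # assertion error instead of raising a bare MatchError
    assert :ok = Interface.set(name, %{})

    assert :ok == Interface.delete(name)
  end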

I have a different take. I encourage asserting as much as possible, because asserts give better error messages than pattern matches. Probably don’t assert something that is basically an unfailable primitive from another library (for example, assert :ok = MyPubSub.subscribe(...) is silly), but if it’s something like assert {:ok, inserted} = Db.insert(...), then yeah, go ahead and do it even if it’s part of your setup.
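
To make the difference concrete, a sketch (Db.insert/1 is hypothetical, standing in for any setup call that returns {:ok, struct} or {:error, reason}):

  # Bare match: a failure raises a MatchError whose message only shows the
  # right-hand-side value, with no test-oriented formatting.
  {:ok, inserted} = Db.insert(%{name: "wg-demo"})

  # Asserted match: a failure is reported as a regular ExUnit assertion,
  # showing the code that failed and the value it was matched against.
  assert {:ok, inserted} = Db.insert(%{name: "wg-demo"})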

2 Likes

It appears that [at least] three people whose opinions I respect don’t agree with me, lol. Is the error message that much better? I feel like seeing an assertion failure in a setup could cause a red herring situation when running all tests. It would be a pretty minor red herring, of course, and furthermore, this is all pretty low stakes. Really my preference here comes down to being able to identify the important assertions with as little brainpower as possible.

Now four. :003: (Generously assuming you respect my opinion of course. :icon_rolleyes:)

I use the assert :ok == a_prerequisite_function(...) thing religiously. Anything that requires something else besides what you included in setup_all and setup should also be tested. Every step should be assured to succeed or yell loudly if it doesn’t.

Example #1

I was writing a function that inserted several records in the DB and they had to be inter-connected in a specific manner and had to have X amount of fields filled with certain values. Absolutely every single detail of those expectations is tested. A week later I forgot to enforce one of these data connections (in a second function that was 80% identical to the first one) and the test blew up 2 minutes later. Fixed the problem on the spot, before a commit even.

A project growing and growing can invalidate even the most basic of assumptions – seen it happen many times.

Example #2

People have called me crazy and told me I am wasting time for making sure that the Repo.insert result indeed puts the right column values in the DB. But they forgot that their changeset function (the one called before the insert) was actually modifying the data given to it; they assumed the values would be inserted as-is, which of course they were not.
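
A sketch of that kind of check (Account and the email-normalizing changeset are hypothetical; the point is to assert on what the insert actually returned rather than on the attrs you passed in):

  test "insert persists the values we think it does" do
    attrs = %{email: "User@Example.com", role: "member"}

    assert {:ok, account} = Repo.insert(Account.changeset(%Account{}, attrs))

    # the changeset may have transformed the input (here: downcased the
    # email), so check the persisted struct instead of assuming the attrs
    # went in unchanged
    assert account.email == "user@example.com"
    assert account.role == "member"
  end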

It can happen to each and every one of us. The tests were there to slap my colleagues and say “hey doofus, you thought you had value X but actually you have Y, go check why”.

My weekends have been saved no less than 50 times in my career by having semi-paranoid tests.

Example #3

Make sure that when inserting, omitting a required field gives you an error in your changeset. Sure enough, just a few days later I added a new field to an Ecto.Schema and forgot to put its name in the required fields list. Running the tests: boom! The changeset complained “field X can’t be null”. I went into the schema module file, added the field, and everything was green.
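
That kind of test can be as small as this (Device is a hypothetical schema; changeset.errors is plain Ecto, populated by validate_required):

  test "omitting a required field is rejected by the changeset" do
    changeset = Device.changeset(%Device{}, %{})

    refute changeset.valid?
    # validate_required adds a "can't be blank" error for the missing field
    assert {"can't be blank", _} = changeset.errors[:name]
  end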

Is that a minor gripe? Maybe. But we should not forget that code is secondary. Data rules everything. Data must be consistent. Data’s invariants must always hold. The code is a servant of the data.

This does not contradict what I and others have said. Personally I’d prefer to be informed if an assumption about the preconditions of the code that I am testing is broken.

I totally respect your position too.

1 Like

  test "delete interface" do

As per the test title, the author is testing interface deletion. When writing a test, we should focus only on what that test is supposed to do; the key is to keep the test simple and focused on the current scenario so readers can understand it better.
That is why he has an assertion only on the delete.

1 Like

Ha, I’m certainly including you! It was @ityonemo and the two likes on his message, of which you were one :slight_smile:

Without diving too much into testing philosophy, we are mostly in agreement here. In fact, I’m having a bit of an “oh yes, of course” moment re: the :ok pattern match, because yes, I want things to blow up. I just have yet to find myself in a scenario where I wouldn’t be using insert! or, more commonly, a factory (which is a topic I don’t feel like diving into right now). So yes, I definitely always want my setups to fail loudly if they fail, I’m just saying I’d rather not do so with an assertion.

Yeah, I get it. I don’t always use the non-bang functions either; there are many scenarios where I don’t assert on the return values of Repo.insert and Repo.update (the reverse of what I outlined above, namely when there’s no special treatment of the values) and just use the bang functions. This too is a strong signal in the test code: “these calls must succeed or the sky is falling”.
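
For example (a sketch; Devices.delete/1 and the Device schema are hypothetical, while Repo.insert!/1 is Ecto’s raising variant):

  test "delete device" do
    # insert! raises immediately if the setup row cannot be created, so
    # there is no return value worth asserting on
    device = Repo.insert!(%Device{name: "wg-delete"})

    assert :ok == Devices.delete(device)
  end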

what about

  use ExUnit.Case, async: true

  alias FzVpn.Interface
  alias FzVpn.Server

  test "delete interface" do
    name = "wg-delete"
    with :ok <- Interface.set(name, %{}) do
      assert :ok == Interface.delete(name)
    else
      _ -> raise RuntimeError, "test failed because of setup rather than on delete"
    end
  end

or something similar? This way you get a meaningful error if there’s an issue with the setup step, but your test is still focused.

1 Like