Property-testing Phoenix applications

Hello everybody!

I discovered property-testing with stream_data a few months ago. Since then, I’ve successfully rewritten the tests of a library, and I’m eager to use property-testing everywhere I can.

I’ve tried to property-test a Phoenix context boundary, but I’m facing performance issues:

defmodule Kakte.AccountsTest do
  use Kakte.DataCase

  alias Kakte.Accounts
  alias Kakte.Repo

  [...]

  property "list_users/0 returns all users" do
    # user_attrs/0 is a generator for valid user attributes.
    check all attrs_list <- uniq_list_of(user_attrs(), length: 5) do
      Repo.transaction(fn ->
        users =
          Enum.map(attrs_list, fn attrs ->
            {:ok, user} = Accounts.register(attrs)
            user
          end)

        users_list = Accounts.list_users()

        assert length(users) == 5
        Enum.each(users, fn user -> assert user in users_list end)

        Repo.rollback(:done)
      end)
    end
  end

  [...]
end

Running this test takes more than 2.5 seconds on my machine. I know a property test is expected to take longer to execute due to the higher number of runs, but 2.5 seconds for one test is way too long IMO. I’ve tried replacing the test body with assert true and it runs quickly, so I presume the slowness comes from the many repo transactions, but maybe I am mistaken.

I have basically three questions:

  1. Is it a good idea to use property-testing everywhere?
  2. Do you property-test your Phoenix applications?
  3. How can I manage to speed up this test?

For information, the context source is here on GitHub.

It depends on what you are testing in this case, and whether it can be accomplished with unit tests. Maybe you can split the test above into property tests on the changeset functions and a unit test for “list_users/0 returns all users”. However, if you do need to touch the database in your property tests (for example, if you use stateful property tests for state machines backed by Postgres), then try reducing the complexity of the tests (with the :numtests and :max_size PropEr/PropCheck options and the sized macro) so that they don’t run for too long during development.
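As a sketch of that split, the changeset half can be property-tested without touching the database at all (this assumes a hypothetical Kakte.Accounts.User.changeset/2 and reuses the user_attrs/0 generator from your post):

```elixir
defmodule Kakte.Accounts.UserChangesetTest do
  # Pure test: no Repo, so it can run async and fast.
  use ExUnit.Case, async: true
  use ExUnitProperties

  alias Kakte.Accounts.User

  property "changeset is valid for well-formed attributes" do
    # user_attrs/0 is the same generator for valid attributes
    # used in the original Accounts test.
    check all attrs <- user_attrs() do
      changeset = User.changeset(%User{}, attrs)
      assert changeset.valid?
    end
  end
end
```

Then a single plain unit test can cover the “list_users/0 returns all users” behaviour with one or two inserted rows, which is where the database cost actually lives.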

but 2.5 seconds for one test is way too long IMO

It might be okay if you leave them to run overnight on some remote testing server, then you’d have a higher confidence that the code works as expected.

UPDATE: There are similar options for stream_data as well.
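For reference, stream_data reads its defaults from the application environment, so something like this in config/test.exs caps the number of runs globally during development (the numbers here are just illustrative):

```elixir
# config/test.exs
# stream_data picks up :max_runs from the :stream_data app environment,
# so every `check all` without an explicit :max_runs uses this value.
config :stream_data, max_runs: 25
```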


Yes, that seems to be a good idea. Thank you.

That’s what I did to get a bit more speed, but I feel less confident with fewer tests.

How would you do that? With an environment variable, like MIX_MAX_RUNS=100 mix test and writing something like this?

defmodule Kakte.AccountsTest do
  [...]

  # System.get_env/1 returns a string (or nil), so convert before use.
  @max_runs String.to_integer(System.get_env("MIX_MAX_RUNS") || "5")

  property "list_users/0 returns all users" do
    # user_attrs/0 is a generator for valid user attributes.
    check all attrs_list <- uniq_list_of(user_attrs(), length: 5), max_runs: @max_runs do
      [...]
    end
  end
end

I’d create an additional testing mix environment (in addition to test) that is closer to prod than dev: it would use remote databases (not a single local one as with dev or the default test) to surface possible race conditions, run a higher number of test iterations, replace mocks with real internal services, etc.
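One way to wire that up (a sketch; the environment name, app name, and values are all illustrative) is a dedicated config file selected via MIX_ENV:

```elixir
# config/extended_test.exs -- hypothetical extra environment,
# selected with: MIX_ENV=extended_test mix test
import Config

# Point the repo at a shared/remote test database instead of a local one.
config :kakte, Kakte.Repo,
  url: System.get_env("EXTENDED_TEST_DATABASE_URL")

# Crank up the number of property-test iterations for the long run.
config :stream_data, max_runs: 1_000
</imports>
```

You’d keep the default test environment fast for day-to-day development and only run the extended environment on CI or overnight.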

This is the trade off with property tests. You’re asking the computer to do a lot of work for you. But in return you’re gaining greater confidence in your overall system because your tests are working to find edge conditions in your logic. If you don’t feel like the trade-off is worth it then that’s a pretty clear indicator that your logic probably isn’t complex enough to warrant spending the time on a property test for it.

A potentially better use for property tests might be to actually drive this from an API or web page. Testing with more pieces in integration might help you find more edge cases. You can either generate API requests or use state-machine models to drive a fake browser, etc.
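A minimal sketch of the generated-API-requests idea, assuming a standard Phoenix ConnCase, a hypothetical POST /api/users route, and the user_attrs/0 generator from the original post (the expected status codes are illustrative):

```elixir
defmodule KakteWeb.UserApiPropertyTest do
  # Assumes the ConnCase generated by `mix phx.new`.
  use KakteWeb.ConnCase
  use ExUnitProperties

  property "POST /api/users always responds sanely to generated payloads" do
    check all attrs <- user_attrs(), max_runs: 20 do
      conn = post(build_conn(), "/api/users", %{"user" => attrs})

      # The endpoint may accept or reject the payload,
      # but it must never crash or return an unexpected status.
      assert conn.status in [201, 422]
    end
  end
end
```

Because this exercises the router, controller, context, and changeset together, a single property can flush out edge cases that isolated unit tests would miss.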

Property tests provide robustness, but you pay for it in test time. I think people get hung up on test times, and I would always take robustness over test speed. That said, not every application needs that level of robustness, and you still want to ensure that you’re spending your test time on the pieces of your system that need it.

Most of that time is probably due to you hitting the database. You’ll get much better times for testing pure computation.
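For contrast, a property over pure computation typically finishes in milliseconds even with the default number of runs. A minimal self-contained example (nothing here is from the original app):

```elixir
defmodule PureComputationTest do
  use ExUnit.Case, async: true
  use ExUnitProperties

  # No Repo calls anywhere: each run is pure in-memory work,
  # so hundreds of runs cost almost nothing.
  property "upcase then downcase equals downcase for ASCII strings" do
    check all s <- string(:ascii) do
      assert s |> String.upcase() |> String.downcase() == String.downcase(s)
    end
  end
end
```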
