Ash Api testing setup (intermittent unrelated test failures in application)

I’m having a hard time setting up testing for a new Ash Framework project. App details:

  1. It’s a Phoenix app, with the server boss_site and the web app boss_site_web
  2. I’m just starting to build out the Ash model in a COE context

I had a hard time getting the Ash Api to a point where I could do some basic testing (not much documentation on how to set it up, and what’s there doesn’t really explain what’s going on). I wanted to start with some basic unit tests around the code interface.
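
For context, here is roughly what the API and resource under test might look like. This is a sketch, not my actual code — the attribute names, the data layer, and the shape of the create action are assumptions; only the COE.Walk / COE.Walk.ActivityStereotype names and the create("Meetings") interface come from the tests below.

```elixir
# Hypothetical sketch (Ash 2.x DSL) of the API and resource being tested.
defmodule COE.Walk do
  use Ash.Api

  resources do
    resource COE.Walk.ActivityStereotype
  end
end

defmodule COE.Walk.ActivityStereotype do
  use Ash.Resource,
    data_layer: AshPostgres.DataLayer

  attributes do
    uuid_primary_key :id
    attribute :name, :string, allow_nil?: false
  end

  actions do
    defaults [:create, :read]
  end

  code_interface do
    define_for COE.Walk
    # Exposes COE.Walk.ActivityStereotype.create("Meetings")
    define :create, args: [:name]
  end
end
```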

I’m writing up my tribulations in case someone else runs into the same problems. Also, I have a question (at the end).

I did find this post, which had me trying use Ash.Api in my unit tests. That didn’t work out well. Here’s a simple unit test:

defmodule COE.WalkTest do
  @moduledoc false

  use Ash.Api
  use ExUnit.Case, async: true

  doctest COE.Walk.ActivityStereotype

  test "activity stereotypes can be created" do
    {:ok, s} = COE.Walk.ActivityStereotype.create("Meetings")
    assert(s.id != nil)
  end
end

While the tests passed, I’d end up with lots of warnings.

% mix test test/coe/activity_stereotype_test.exs
warning: Api COE.WalkTest is not present in

    config :boss_site, ash_apis: [COE.Walk].

To resolve this warning, do one of the following.

1. Add the api to your configured api modules. The following snippet can be used.

    config :boss_site, ash_apis: [COE.Walk, COE.WalkTest]

2. Add the option `validate_config_inclusion?: false` to `use Ash.Api`

3. Configure all apis not to warn, with `config :ash, :validate_api_config_inclusion?, false`

  test/coe/activity_stereotype_test.exs:1: COE.WalkTest.__verify_spark_dsl__/1
  (elixir 1.15.7) lib/enum.ex:984: Enum."-each/2-lists^foreach/1-0-"/2
  (elixir 1.15.7) lib/module/parallel_checker.ex:271: Module.ParallelChecker.check_module/3
  (elixir 1.15.7) lib/module/parallel_checker.ex:82: anonymous fn/6 in Module.ParallelChecker.spawn/4

.
Finished in 0.3 seconds (0.3s async, 0.00s sync)
1 test, 0 failures

Clearly something is off here: I don’t want to add all of my unit test modules to ash_apis. Options 2 and 3 didn’t sound healthy either. So I took out use Ash.Api, but then I just got (Ash.Error.Unknown) Unknown Error and a lot of diagnostics, because all the important bits were missing.

After a lot more playing around, I got to a point where my Ash Framework unit tests worked, but “everything else” broke. The errors gave me a few clues to go on:

% mix test

  1) test renders 404 (BossSiteWeb.ErrorJSONTest)
     test/boss_site_web/controllers/error_json_test.exs:6
     ** (MatchError) no match of right hand side value: {:error, {:badarg, [{:ets, :lookup_element, [Ecto.Repo.Registry, #PID<0.627.0>, 4], [error_info: %{cause: :badkey, module: :erl_stdlib_errors}]}, {Ecto.Repo.Registry, :lookup, 1, [file: ~c"lib/ecto/repo/registry.ex", line: 27]}, {Ecto.Adapters.SQL.Sandbox, :lookup_meta!, 1, [file: ~c"lib/ecto/adapters/sql/sandbox.ex", line: 581]}, {Ecto.Adapters.SQL.Sandbox, :checkout, 2, [file: ~c"lib/ecto/adapters/sql/sandbox.ex", line: 494]}, {Ecto.Adapters.SQL.Sandbox, :"-start_owner!/2-fun-0-", 3, [file: ~c"lib/ecto/adapters/sql/sandbox.ex", line: 416]}, {Agent.Server, :init, 1, [file: ~c"lib/agent/server.ex", line: 8]}, {:gen_server, :init_it, 2, [file: ~c"gen_server.erl", line: 962]}, {:gen_server, :init_it, 6, [file: ~c"gen_server.erl", line: 917]}, {:proc_lib, :init_p_do_apply, 3, [file: ~c"proc_lib.erl", line: 241]}]}}
     stacktrace:
       (ecto_sql 3.11.0) lib/ecto/adapters/sql/sandbox.ex:413: Ecto.Adapters.SQL.Sandbox.start_owner!/2
       (boss_site 0.1.0) test/support/data_case.ex:39: BossSite.DataCase.setup_sandbox/1
       (boss_site 0.1.0) test/support/conn_case.ex:35: BossSiteWeb.ConnCase.__ex_unit_setup_0/1
       (boss_site 0.1.0) test/support/conn_case.ex:1: BossSiteWeb.ConnCase.__ex_unit__/2
       test/boss_site_web/controllers/error_json_test.exs:1: BossSiteWeb.ErrorJSONTest.__ex_unit__/2
<snip>
..
Finished in 0.1 seconds (0.1s async, 0.03s sync)
4 features, 7 tests, 4 failures, 4 excluded

The hints about sandboxing eventually led me to Testing With Postgres on Ash-HQ. I added its setup to my test_helper.exs (see the defmodule bit):

# /test/test_helper.exs
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(BossSite.Repo, :manual)

# Wallaby setup
Application.put_env(:wallaby, :base_url, BossSiteWeb.Endpoint.url)
{:ok, _} = Application.ensure_all_started(:wallaby)

# ExUnit behaviors (such as excluding Wallaby UX tests by default)
ExUnit.configure(exclude: [ux: true])

defmodule COE.DataCase do
  @moduledoc """
  This module defines the setup for tests requiring access to the application's data layer.

  You may define functions here to be used as helpers in your tests.

  Finally, if the test case interacts with the database, we enable the SQL sandbox, so changes done to the database are reverted at the end
  of every test. If you are using PostgreSQL, you can even run database tests asynchronously by setting `use COE.DataCase, async: true`,
  although this option is not recommended for other databases.
  """

  use ExUnit.CaseTemplate

  using do
    quote do
      alias BossSite.Repo

      import Ecto
      import Ecto.Changeset
      import Ecto.Query
      import COE.DataCase
    end
  end

  setup tags do
    pid = Ecto.Adapters.SQL.Sandbox.start_owner!(BossSite.Repo, shared: not tags[:async])
    on_exit(fn -> Ecto.Adapters.SQL.Sandbox.stop_owner(pid) end)
    :ok
  end
end
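
One detail worth calling out in that setup block: `shared: not tags[:async]` flips the sandbox between shared mode (for synchronous tests, where any process may use the owner’s checked-out connection) and explicit-allowance mode (for async tests). A rough sketch of the difference — `other_pid` here is a hypothetical collaborating process, not something from my app:

```elixir
# Shared mode: any process spawned during the test (a Task, a request
# handler, etc.) can implicitly use the checked-out connection.
Ecto.Adapters.SQL.Sandbox.start_owner!(BossSite.Repo, shared: true)

# Non-shared mode (safe for async: true): other processes must be
# granted access to the owner's connection explicitly.
owner = Ecto.Adapters.SQL.Sandbox.start_owner!(BossSite.Repo)
Ecto.Adapters.SQL.Sandbox.allow(BossSite.Repo, owner, other_pid)
```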

And then use COE.DataCase in my Ash unit tests:

defmodule COE.WalkTest do
  @moduledoc false

  # COE.DataCase is an ExUnit.CaseTemplate, so it already pulls in ExUnit.Case
  use COE.DataCase, async: true

  doctest COE.Walk.ActivityStereotype

  test "activity stereotypes can be created" do
    {:ok, s} = COE.Walk.ActivityStereotype.create("Meetings")
    assert(s.id != nil)
  end
end

And I finally landed on something that works. Phew.

% mix test --include ux
Including tags: [:ux]

.........
Finished in 13.1 seconds (13.1s async, 0.00s sync)
4 features, 7 tests, 0 failures

That took several hours of very frustrating experimentation. My question:

  1. Where does defmodule COE.DataCase belong? I can’t seem to put it in a separate .exs file (it doesn’t get compiled in) and adding it to the non-test code seems off.

A few suggestions, to make this easier on folks in the future:

  1. We would really, really benefit from a nice “Ash Framework testing guide” – I’m sure there are lots of interesting strategies and ‘gotchas’…
  2. At the least, a tutorial that gets us to the point where we can test a few simple cases (like the above) would be great, even one or two simple cases
  3. I still don’t understand what defmodule COE.DataCase really does, at a fine-grained level – an explanation of what’s going on behind the scenes would be super informative (why “setup tags”? and I was surprised to see Ecto here versus Ash… and I still don’t understand why not setting up the sandbox caused all my simple HTML/JSON endpoint tests to break…)
  4. At the very least, linking the article about testing with Postgres to the tutorial would be good, with big red letters that say if you don’t do this, all your tests are probably going to fail and you’ll spend hours trying to figure out why :slight_smile:

defmodule COE.DataCase goes in its own .ex file, and that .ex file should go in test/support. Then you add test/support to your elixirc_paths in your mix.exs project config:

  def project do
    [
      ....
      elixirc_paths: elixirc_paths(Mix.env()),
      ....
    ]
  end

  # Specifies which paths to compile per environment.
  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_), do: ["lib"]
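
Concretely, that means moving the module out of test_helper.exs into something like the file below (the exact path and filename are a convention, not a requirement):

```elixir
# test/support/data_case.ex -- compiled only in the :test env,
# because test/support is in elixirc_paths for :test.
defmodule COE.DataCase do
  use ExUnit.CaseTemplate

  # ...same `using` block and sandbox `setup` as in the original post...
end
```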

I agree, we need a full guide on testing that covers all of this. The reason it hasn’t been done up to this point is probably that most people using Ash are using it with an app they generated with mix phx.new, which gives you the required setup and scaffolding for this automatically.

AshPostgres is backed by Ecto, so the underlying logic for sandboxing is handled “below” Ash. We could of course wrap it in something we control, but that would obfuscate more than it would help. Sandboxing in this context ensures that each test runs in its own transaction, so no test interferes with another test’s data. Without it, you’d have to clean up data after each test and run every test serially.
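
That also explains why the plain HTML/JSON endpoint tests failed earlier: a Phoenix-generated ConnCase checks out a sandbox owner for every connection test — you can see it in the stack trace above (conn_case.ex calling BossSite.DataCase.setup_sandbox/1). The generated test/support/conn_case.ex contains something along these lines (paraphrased from the phx.new template; details vary by Phoenix version):

```elixir
# From a phx.new-generated conn_case.ex (paraphrased): every test that
# uses ConnCase checks out a sandboxed DB connection via DataCase.
# That is why a missing or misconfigured sandbox breaks even endpoint
# tests that never touch the database.
setup tags do
  BossSite.DataCase.setup_sandbox(tags)
  {:ok, conn: Phoenix.ConnTest.build_conn()}
end
```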
