Ash 3.4 Migration Error: (Protocol.UndefinedError) protocol Enumerable not implemented for false of type Atom

I’m working with the realworld demo, and I’m trying to add a table using mix ash.codegen or mix ash_postgres.generate_migrations, but each gives me this error:

Compiling 1 file (.ex)
Getting extensions in current project...
Running codegen for AshPostgres.DataLayer...
** (Protocol.UndefinedError) protocol Enumerable not implemented for false of type Atom. This protocol is implemented for the following type(s): DBConnection.PrepareStream, DBConnection.Stream, Date.Range, Ecto.Adapters.SQL.Stream, File.Stream, Function, GenEvent.Stream, HashDict, HashSet, IO.Stream, Iter, Jason.OrderedObject, List, Map, MapSet, Phoenix.LiveView.LiveStream, Postgrex.Stream, Range, Rewrite, Stream, StreamData
    (elixir 1.16.2) lib/enum.ex:1: Enumerable.impl_for!/1
    (elixir 1.16.2) lib/enum.ex:194: Enumerable.member?/2
    (elixir 1.16.2) lib/enum.ex:2006: Enum.member?/2
    (elixir 1.16.2) lib/enum.ex:4402: Enum.reject_list/2
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:2892: AshPostgres.MigrationGenerator.identities/1
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:2603: AshPostgres.MigrationGenerator.do_snapshot/3
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:2596: AshPostgres.MigrationGenerator.get_snapshots/2
    (elixir 1.16.2) lib/enum.ex:4326: Enum.flat_map_list/2

I have no idea where to start debugging this, but I’ve found this topic that looks similar: AshGraphql Errors Encountered During Ash 3.0 Migration

From my mix.exs

elixir: "~> 1.16.2",
...
:phoenix, "~> 1.7.14",
:ash, "~> 3.4",
:ash_postgres, "~> 2.1"

Any help is appreciated, thanks.

Run mix deps.update spark and you should be good to go. :slight_smile:

Thanks, after trying that, I get this error…

** (Jason.DecodeError) unexpected byte at position 1168: 0x22 ("\"")
    (jason 1.4.4) lib/jason.ex:92: Jason.decode!/2
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:3075: AshPostgres.MigrationGenerator.load_snapshot/1
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:496: anonymous fn/3 in AshPostgres.MigrationGenerator.deduplicate_snapshots/4
    (elixir 1.16.2) lib/map.ex:257: Map.do_map/2
    (elixir 1.16.2) lib/map.ex:257: Map.do_map/2
    (elixir 1.16.2) lib/map.ex:251: Map.new_from_map/2
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:491: AshPostgres.MigrationGenerator.deduplicate_snapshots/4
    (ash_postgres 2.1.19) lib/migration_generator/migration_generator.ex:353: anonymous fn/4 in AshPostgres.MigrationGenerator.create_migrations/4

From my mix.exs

{:jason, "~> 1.4"}

:thinking: Somehow you have invalid JSON in one of your snapshots. Have you modified your resource snapshots by hand?

Thanks for the quick reply Zach,

No, I haven’t modified my resource snapshots by hand.

When I first tried to run the migration to create a new table, it gave me an odd error - that a name must be provided, although I had provided one - and instead of running the migration, it ran ‘Extension Migrations’, creating:

20240813065843_install_3_extensions.exs

defmodule Realworld.Repo.Migrations.Install3Extensions20240813065842 do
  @moduledoc """
  Installs any extensions that are mentioned in the repo's `installed_extensions/0` callback

  This file was autogenerated with `mix ash_postgres.generate_migrations`
  """

  use Ecto.Migration

  def up do
    execute("CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"")
    execute("CREATE EXTENSION IF NOT EXISTS \"citext\"")

    execute("""
    CREATE OR REPLACE FUNCTION uuid_generate_v7()
    RETURNS UUID
    AS $$
    DECLARE
      timestamp    TIMESTAMPTZ;
      microseconds INT;
    BEGIN
      timestamp    = clock_timestamp();
      microseconds = (cast(extract(microseconds FROM timestamp)::INT - (floor(extract(milliseconds FROM timestamp))::INT * 1000) AS DOUBLE PRECISION) * 4.096)::INT;

      RETURN encode(
        set_byte(
          set_byte(
            overlay(uuid_send(gen_random_uuid()) placing substring(int8send(floor(extract(epoch FROM timestamp) * 1000)::BIGINT) FROM 3) FROM 1 FOR 6
          ),
          6, (b'0111' || (microseconds >> 8)::bit(4))::bit(8)::int
        ),
        7, microseconds::bit(8)::int
      ),
      'hex')::UUID;
    END
    $$
    LANGUAGE PLPGSQL
    VOLATILE;
    """)

    execute("""
    CREATE OR REPLACE FUNCTION timestamp_from_uuid_v7(_uuid uuid)
    RETURNS TIMESTAMP WITHOUT TIME ZONE
    AS $$
      SELECT to_timestamp(('x0000' || substr(_uuid::TEXT, 1, 8) || substr(_uuid::TEXT, 10, 4))::BIT(64)::BIGINT::NUMERIC / 1000);
    $$
    LANGUAGE SQL
    IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF;
    """)
  end

  def down do
    # Uncomment this if you actually want to uninstall the extensions
    # when this migration is rolled back:
    # execute("DROP EXTENSION IF EXISTS \"uuid-ossp\"")
    # execute("DROP EXTENSION IF EXISTS \"citext\"")
    execute("DROP FUNCTION IF EXISTS uuid_generate_v7(), timestamp_from_uuid_v7(uuid)")
  end
end

(From memory, I installed citext for case-insensitive email signup.)

and it also created:

priv/resource_snapshots/repo/extensions.json

{
  "ash_functions_version": 4,
  "installed": [
    "uuid-ossp",
    "citext",
    "ash-functions"
  ]
}

and it then gave the same error as the one shown above:

** (Jason.DecodeError) unexpected byte at position 1168: 0x22 ("\"")

However, that error was replaced by the one in my original post when I ran the migration to create a table again.

Yeah, so those should be fine, they are generated as part of upgrading.

Are there snapshots for the various tables in the resource snapshots folder? Can I see the new resource you’re adding?

This is where my no-shame noobness comes in…

I haven’t created the resource yet. I read another post here where someone said they had, and I must have somehow skipped this part in a tutorial. I was thinking that if I had to first create a resource, Ash would give me a warning when attempting a migration that a resource was missing, and I’d then add it before trying the migration again. Instead I got that Jason error.

One reason I hadn’t created the resource was that I was still thinking about how to populate the table - it holds a user’s personal info that they can edit, and I want it to be partly populated from their account (e.g. email) and user profile (e.g. real name) - so I thought it easier to create the table first and then try to understand how to work with it via Ash.

I guess that’s why the snapshot for the table I was trying to create via the migration hasn’t been created (to answer your first question)?

Ash won’t warn you that a resource is missing when running migrations. The general flow is to work on your resources, then run mix ash.codegen add_something to generate or modify the underlying tables, and then mix ash.migrate to migrate them.
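To make that concrete, here’s a minimal sketch of that flow. The domain, resource, and table names below are hypothetical, not from the realworld demo:

# hypothetical names for illustration only
defmodule Realworld.Profiles do
  use Ash.Domain

  resources do
    resource Realworld.Profiles.Profile
  end
end

defmodule Realworld.Profiles.Profile do
  use Ash.Resource,
    domain: Realworld.Profiles,
    data_layer: AshPostgres.DataLayer

  postgres do
    # the table that codegen will write a migration and snapshot for
    table "profiles"
    repo Realworld.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :real_name, :string
  end
end

With the resource defined (and the domain listed in your ash_domains config), mix ash.codegen add_profiles writes the snapshot and migration for the profiles table, and mix ash.migrate applies it.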

Ultimately the main thing I’m trying to determine here is which file has invalid JSON in it. The extensions file looks fine to me, so there must be something else going wrong here. That error is in loading existing snapshots, so it must be a snapshot for a resource that is having the problem.

I cloned down the realworld repo and ran this in iex to decode all the JSON files that could be relevant:

Path.wildcard("priv/**")
|> Enum.filter(&(Path.extname(&1) == ".json"))
|> Enum.each(fn file ->
  # decode! raises on the first file containing invalid JSON
  file |> File.read!() |> Jason.decode!()
end)

and all the files there currently successfully decode. So one of the new files has invalid JSON.
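If you want to see exactly which file is the culprit, something like this sketch (run from your project root in iex -S mix) should list every JSON file under priv that fails to decode:

Path.wildcard("priv/**")
|> Enum.filter(&(Path.extname(&1) == ".json"))
|> Enum.filter(fn file ->
  # keep only the files whose contents are not valid JSON
  match?({:error, _}, file |> File.read!() |> Jason.decode())
end)
|> IO.inspect(label: "files with invalid JSON")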

Your first guess was right - I have edited the snapshots, indirectly, via a global find and replace when I first started playing with the realworld demo, to see if I could use it as a starting base to learn Ash and prototype my app by changing resource names etc. and resetting the db with them.

Sorry, it’s after 2am here, so I’m a bit tired and forgot that obvious fact when answering your first question.

I’m still surprised I’ve messed up the formatting in a snapshot, but it’s possible, although I imagine you’re going to tell me that by changing them at all (via that lazy global find and replace across the whole demo) I’ve messed up something akin to a checksum? Either way, I’d better track down that invalid JSON.

I really appreciate your patience and help, and thanks for explaining some of the general workflow when using Ash. That’s the bit I’m (obviously) working out now after reading through the docs several times, and I intend to start my app from scratch once I’ve learned enough from them and the demo.

Part of my problem is trying to modify an existing app without enough learning to do so, while only doing so because it has more to teach than the doc’s examples at this time.

Some higher-level overview docs of the Ash workflow would be helpful (sorry if I’ve missed them), perhaps with comparisons to using Phoenix, e.g. what typical workflow Ash replaces or changes. I pick up parts of that mixed into the existing docs, but a dedicated page under the Development section, next to Project Structure — ash v3.4.1 and Generators — ash v3.4.1, would tie a lot of the docs together - perhaps as https://hexdocs.pm/ash/workflow.html?

That could describe how/when to use the generators/Igniter, from creating resources to running migrations, and how to ‘go in reverse’, e.g. if you want to change a resource, do you change it and run another migration, or roll the last one back first, etc.

It’s noob stuff, but done right like the rest of the docs, you’d no doubt find a balance that helps people like me and more advanced users wanting to go beyond Phoenix to use Ash with less learning friction.

Sorry to suggest more work for you! Message me if you ever need any feedback on such a page or docs from a noobs perspective and I’ll be happy to help give feedback.

I’m looking forward to your book for some complete examples of common dev use cases too.

The feedback is much appreciated :slight_smile: We don’t checksum the snapshots so it is totally fine to edit them. My guess is that somewhere in that find and replace an errant " was added somehow.
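Once you know which file it is, the byte position in the DecodeError points straight at the problem. A quick sketch like this (the path is a placeholder - substitute whichever snapshot fails to decode) prints the text around that byte:

# placeholder path - point this at the snapshot that fails to decode
path = "path/to/the/failing_snapshot.json"
position = 1168  # the byte offset reported by the Jason.DecodeError

contents = File.read!(path)
start = max(position - 40, 0)
# print roughly 80 bytes of context around the offending byte
IO.puts(binary_part(contents, start, min(80, byte_size(contents) - start)))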

I definitely agree that higher-level docs showing the workflows, patterns, ways of thinking, etc. are warranted :slight_smile: We’re working on a book that will help with that, and we’ll also be improving the documentation over time!
