Check a complex "unique" constraint with Ecto

In my Ecto schemas, I have a User that has_many Items. An Item has a boolean field active:

User module:

  schema "users" do
    field :name, :string
    has_many :items, Item
  end

Item module:

  schema "items" do
    field(:active, :boolean)
    belongs_to(:user, User)
  end

How can I ensure that a User has at most one item active? Thus, if a user already has an active item, the insertion of another active item should result in an error. The insertion of a new item with active: false should succeed, though.

It seems that exclusion_constraint could be used for that, but I haven’t found any docs showing how to use it for anything other than overlapping time intervals.

A partial unique index can do that as well:

  create unique_index(:items, [:user_id], where: "active = TRUE")
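On the Ecto side, you’d typically pair that index with unique_constraint/3 so a violation comes back as a changeset error instead of a raised Postgrex exception. A minimal sketch, assuming the default index name Ecto derives from unique_index/3:

  # in the Item module; assumes `import Ecto.Changeset`
  def changeset(item, attrs) do
    item
    |> cast(attrs, [:active, :user_id])
    |> unique_constraint(:user_id,
      name: :items_user_id_index,
      message: "user already has an active item"
    )
  end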

The easiest way to solve this is probably to store nil/NULL instead of false. Then you can use a regular unique index, because NULL values are never considered equal to each other, so only the non-NULL (true) rows can conflict.
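If you go that route, a plain unique index over both columns is enough. A sketch, assuming active is only ever written as true or NULL:

  # migration sketch: (user_id, NULL) rows never collide with each other,
  # but a second (user_id, TRUE) row is rejected
  create unique_index(:items, [:user_id, :active])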


Adding onto this, you can coerce false to nil in your Item changesets to prevent accidentally writing a false.

I think this approach may be even cleaner.
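A sketch of that coercion using update_change/3 (assumes import Ecto.Changeset):

  def changeset(item, attrs) do
    item
    |> cast(attrs, [:active, :user_id])
    # never persist false; store NULL so the unique index ignores the row
    |> update_change(:active, fn
      false -> nil
      other -> other
    end)
  end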


Either way, once one of those is in place you can add an :active_item association like so to your User schema for easy access:

  schema "users" do
    field :name, :string
    has_many :items, Item
    has_one :active_item, Item, where: [active: true]

    timestamps()
  end
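For example, after preloading through your app’s Repo you get the single active item (or nil) directly:

  user = Repo.preload(user, :active_item)
  user.active_item
  #=> %Item{active: true} or nil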

Thank you! I didn’t know that creating a unique index effectively enforces a constraint.

Nice, didn’t know that you could use the where option like this. :+1:


Yep! IIRC it will even populate the active field for you when the struct is built through certain association helpers, though I forget exactly which.
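If it’s the :defaults option you’re thinking of, a sketch would look like this (an assumption on my part, not verified):

  # `:defaults` populates fields when the struct is built through the
  # association, e.g. with Ecto.build_assoc(user, :active_item)
  has_one :active_item, Item, where: [active: true], defaults: [active: true]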

When making a partial index like this, consider whether you’ll ever want to query for “every row for a user_id” in production. If so, make sure there’s also an index that covers inactive records.

One less-visible way you might end up running those kinds of queries is if you cascade-delete records when a User is deleted.
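For example, a migration sketch keeping a plain index on the foreign key alongside the partial unique one:

  # covers "all items for this user" scans, including cascading deletes
  create index(:items, [:user_id])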

The tricky part is that the system will work fine without that index… for a while. If the all-rows-for-a-user queries aren’t common, possibly a LONG while. And then suddenly :items has 100 million rows in it, your DBA is very very unhappy, and your operations team is figuring out how many dozens of hours of degraded performance it will take to build the needed index… :scream_cat:


I’d never thought of that before, thanks!