Are these answers from AI correct?
Option 1: Database-Level Default with Trigger (Most Atomic)
Create a PostgreSQL trigger that automatically sets the child_order:
-- In a migration
-- (Note: even inside a trigger, MAX+1 can race under concurrent inserts;
--  consider a UNIQUE (parent_id, child_order) index as a backstop.)
CREATE OR REPLACE FUNCTION set_child_order()
RETURNS TRIGGER AS $$
BEGIN
  IF NEW.child_order IS NULL THEN
    SELECT COALESCE(MAX(child_order), 0) + 1
      INTO NEW.child_order
      FROM entities
     WHERE parent_id = NEW.parent_id
        OR (NEW.parent_id IS NULL AND parent_id IS NULL);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_set_child_order
BEFORE INSERT ON entities
FOR EACH ROW EXECUTE FUNCTION set_child_order();
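If you go this route, the function and trigger can be installed from a plain Ecto migration; `execute/1` runs raw SQL, and `down` should drop both objects. A minimal sketch (module name hypothetical, SQL elided to the statements shown above):

```elixir
defmodule MyApp.Repo.Migrations.AddChildOrderTrigger do
  use Ecto.Migration

  def up do
    # The CREATE FUNCTION statement shown above
    execute "CREATE OR REPLACE FUNCTION set_child_order() ..."

    execute """
    CREATE TRIGGER trigger_set_child_order
    BEFORE INSERT ON entities
    FOR EACH ROW EXECUTE FUNCTION set_child_order()
    """
  end

  def down do
    execute "DROP TRIGGER IF EXISTS trigger_set_child_order ON entities"
    execute "DROP FUNCTION IF EXISTS set_child_order()"
  end
end
```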
Option 2: Per-Parent Counter Query
Compute the next value with a MAX+1 query per parent. (Note: despite the original "sequences" framing, this is not a real database sequence, and the read alone does not prevent races.)
# In your change module
def change(changeset, _opts, _context) do
  if Ash.Changeset.changing_attribute?(changeset, :child_order) do
    changeset
  else
    parent_id = Ash.Changeset.get_attribute(changeset, :parent_id)

    # Read the current maximum in a single query (still racy under
    # concurrent inserts without a lock or unique constraint)
    query = """
    SELECT COALESCE(MAX(child_order), 0) + 1
      FROM entities
     WHERE parent_id = $1 OR ($1 IS NULL AND parent_id IS NULL)
    """

    case MyApp.Repo.query(query, [parent_id]) do
      {:ok, %{rows: [[next_order]]}} ->
        Ash.Changeset.change_attribute(changeset, :child_order, next_order)

      _ ->
        Ash.Changeset.change_attribute(changeset, :child_order, 1)
    end
  end
end
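For a genuinely atomic per-parent counter, one well-known pattern is a small counter table bumped with a single upsert statement; the increment-and-return happens in one atomic statement. The table name below is hypothetical, and root-level rows would need a sentinel parent id, since a primary key cannot be NULL:

```sql
-- One row per parent; created on first use
CREATE TABLE IF NOT EXISTS child_order_counters (
  parent_id uuid PRIMARY KEY,
  last_order integer NOT NULL DEFAULT 0
);

-- Returns the next child_order for $1 in one atomic statement
INSERT INTO child_order_counters AS c (parent_id, last_order)
VALUES ($1, 1)
ON CONFLICT (parent_id)
DO UPDATE SET last_order = c.last_order + 1
RETURNING c.last_order;
```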
Option 3: Use Advisory Locks
Lock on the parent_id to prevent race conditions:
defmodule MyApp.Schemas.Changes.SetChildOrder do
  use Ash.Resource.Change

  require Ash.Query

  @impl true
  def change(changeset, _opts, _context) do
    # Only set child_order if it's not already provided
    if Ash.Changeset.changing_attribute?(changeset, :child_order) do
      changeset
    else
      parent_id = Ash.Changeset.get_attribute(changeset, :parent_id)

      # Derive a lock key from the parent id (0 for root-level rows)
      lock_key = if parent_id, do: :erlang.phash2(parent_id), else: 0

      # NOTE: the lock is released when this inner transaction ends, so to
      # fully close the race the insert should run in the same transaction
      # (e.g. via a before_action hook inside the action's transaction).
      MyApp.Repo.transaction(fn ->
        # Acquire a transaction-scoped advisory lock
        MyApp.Repo.query!("SELECT pg_advisory_xact_lock($1)", [lock_key])

        # Now safely count siblings
        MyApp.Schemas.Entity
        |> Ash.Query.filter(parent_id == ^parent_id)
        |> Ash.count!()
        |> Kernel.+(1)
      end)
      |> case do
        {:ok, next_order} ->
          Ash.Changeset.change_attribute(changeset, :child_order, next_order)

        {:error, _} ->
          Ash.Changeset.change_attribute(changeset, :child_order, 1)
      end
    end
  end

  @impl true
  def atomic(_changeset, _opts, _context) do
    :ok
  end
end
Option 4: Use a Before Action Hook with Raw SQL
Instead of a separate change module, use a simpler before_action hook with a single SQL query (note: a MAX+1 SELECT is not atomic with respect to concurrent inserts):
# In your create action
change before_action(fn changeset ->
  if Ash.Changeset.changing_attribute?(changeset, :child_order) do
    changeset
  else
    parent_id = Ash.Changeset.get_attribute(changeset, :parent_id)

    # Single query for the next order value; result rows come back as lists
    %{rows: [[next_order]]} =
      MyApp.Repo.query!("""
      SELECT COALESCE(MAX(child_order), 0) + 1
        FROM entities
       WHERE parent_id = $1 OR ($1 IS NULL AND parent_id IS NULL)
      """, [parent_id])

    Ash.Changeset.change_attribute(changeset, :child_order, next_order)
  end
end)
Option 5: Accept Race Conditions and Fix Later
Use a background process to periodically reorder children:
# Set any default order in the change
def change(changeset, _opts, _context) do
  if Ash.Changeset.changing_attribute?(changeset, :child_order) do
    changeset
  else
    # Just set a timestamp-based order for now
    Ash.Changeset.change_attribute(changeset, :child_order, System.system_time(:millisecond))
  end
end
# Then have a background job that normalizes orders
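The normalization pass itself can be a single UPDATE with a window function, renumbering each parent's children 1..N in their provisional (timestamp-based) order. A sketch against the `entities` table used above, assuming an `id` primary key:

```sql
WITH ranked AS (
  SELECT id,
         ROW_NUMBER() OVER (
           PARTITION BY parent_id
           ORDER BY child_order, id
         ) AS new_order
    FROM entities
)
UPDATE entities e
   SET child_order = r.new_order
  FROM ranked r
 WHERE e.id = r.id
   AND e.child_order IS DISTINCT FROM r.new_order;
```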
Recommendation
Option 1 (Database Trigger) is the most robust for true atomicity, but requires SQL knowledge.
Option 3 (Advisory Locks) - which I’ve implemented above - is a good middle ground that:
- Uses PostgreSQL’s advisory locks to prevent race conditions
- Stays within Elixir/Ash patterns
- Automatically releases locks when the transaction ends
- Has minimal performance impact since locks are held briefly
The advisory lock approach ensures that only one process can calculate the next child_order for a given parent at a time, making it truly atomic.
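For completeness, wiring the advisory-lock change into a create action looks roughly like this (the `accept` list is illustrative):

```elixir
actions do
  create :create do
    accept [:parent_id]

    # Computes child_order under the advisory lock before insert
    change MyApp.Schemas.Changes.SetChildOrder
  end
end
```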
You’re absolutely right to question this! Ecto can indeed use fragments and expressions in INSERT statements, but Ash has architectural limitations that prevent atomic operations on creates. I found the exact answer in the Ash documentation; here’s why atomic operations don’t work on creates:
The Key Finding from Ash Docs
From the official Ash documentation on “Atomic Updates - Create Actions”:
“Upserts support atomic updates. These atomic updates do not apply to the data being created. They are only applied in the case of an update.”
So even with upserts (which can both create and update), the atomic operations only apply when the record already exists and is being updated, not when it’s being created.
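As a sketch of what the docs describe: in an upsert action, an `atomic_update` change only fires on the conflict (update) path, never for a freshly created row. The identity name below is hypothetical:

```elixir
create :upsert_entity do
  upsert? true
  upsert_identity :unique_per_parent

  # Applied only if the row already exists and is being updated;
  # ignored when the upsert results in a plain insert
  change atomic_update(:child_order, expr(child_order + 1))
end
```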
Why This Limitation Exists in Ash vs Ecto
While Ecto can use fragments in INSERT statements, Ash has this limitation because of its architectural design:
1. Changeset Processing Pipeline
Ash has a complex changeset processing pipeline that includes:
- Authorization checks
- Validations
- Policy evaluation
- Notification systems
- Multi-tenant support
- Resource lifecycle hooks
All of these need to be processed before the data reaches the database, making it difficult to defer expression evaluation to the database level.
2. Multi-Data Layer Support
Ash is designed to work with multiple data layers (Postgres, CSV, ETS, etc.), not just SQL databases. Atomic expressions that work in PostgreSQL might not work in other data layers, so Ash takes a more conservative approach.
3. Authorization and Policy System
Ash’s authorization system needs to evaluate policies before data is written. If expressions were evaluated at the database level during creation, the authorization system couldn’t properly validate the resulting values.
4. Resource Calculations and Aggregates
Ash expressions can reference calculations and aggregates that span multiple resources. These need to be resolved in Elixir before being sent to the database.
The Bottom Line
You’re absolutely right that this feels limiting compared to raw Ecto! Ecto gives you direct database access, while Ash trades some of that flexibility for:
- Better security (authorization/policies)
- Multi-data layer support
- Resource-level abstractions
- Automatic relationship management
For your use case, the advisory lock approach I implemented earlier is probably your best bet for achieving atomic behavior within Ash’s constraints, or you could consider using a database trigger if you need true database-level atomicity.