In Ecto programming, could this be described as an anti-pattern?
```elixir
# Database operation
Repo.some_operation(...)
rescue
  error ->
    Logger.error("Failed to do X: #{inspect(error)}")
    {:error, "Generic message"} # ← System failure as business error
```
It seems like the answer is yes. I’m in the process of ripping out a bunch of this in a codebase.
FYI, the Dialyzer unmatched_return warning was the hint that I had a problem with my error handling. I’m a believer now.
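For context, a minimal sketch of how that hint surfaces (assuming the dialyxir Mix task and a hypothetical touch_last_seen/1 helper): enabling the :unmatched_returns flag makes Dialyzer complain wherever a {:ok, _} | {:error, _} result from Repo.* is silently discarded, which is exactly where swallowed errors tend to hide.

```elixir
# mix.exs (dialyxir) – enabling the flag; the rest of the config is assumed.
def project do
  [
    # ...
    dialyzer: [flags: [:unmatched_returns]]
  ]
end

# Code Dialyzer will now flag: the {:ok, _} | {:error, _} tuple returned by
# Repo.update/2 is never pattern matched, so a failure vanishes silently.
def touch_last_seen(user) do
  Repo.update(Ecto.Changeset.change(user, last_seen_at: DateTime.utc_now()))
  :ok
end
```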
EDIT: Here's ChatGPT o3-pro's answer to the question
Short answer – yes, it is an anti‑pattern.
Catching every exception that can bubble out of Repo.* and then returning a domain‑style {:error, …} tuple does more harm than good in most Elixir/Ecto codebases. Below is a breakdown of why it’s problematic and what to do instead.
1. What Ecto and Elixir’s “happy path” expect
| Condition | Normal, non‑bang call (Repo.insert/2, Repo.update/2, …) | Bang call (Repo.insert!/2, …) |
|---|---|---|
| Validation / constraint failure (expected business error) | {:error, %Ecto.Changeset{…}} | Raises Ecto.InvalidChangesetError |
| DB connectivity, malformed SQL, serialization failure, etc. (system error) | Raises (DBConnection.ConnectionError, Postgrex.Error, …) | Raises (same) |
Take‑away:
- Business failures are already represented by return tuples.
- System failures are already raised. You almost never need a rescue to tell them apart; the sketch below shows both contracts in practice.
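A quick illustration of the two contracts (a minimal sketch, assuming a User schema whose changeset requires an email and a Repo backed by Postgres):

```elixir
# Business failure: the non-bang call returns a tagged tuple; nothing raises.
{:error, %Ecto.Changeset{valid?: false}} =
  Repo.insert(User.changeset(%User{}, %{email: nil}))

# System failure: the same call with the database unreachable raises, e.g.
# ** (DBConnection.ConnectionError) connection not available and request was
#    dropped from queue after 2000ms
```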
2. Why wrapping raised system errors in a generic {:error, _} is harmful
- **Loses signal.** The caller can no longer discriminate between user‑facing problems (e.g., "email already taken") and infrastructure problems (e.g., "couldn't reach the database"). Both look like {:error, "Generic message"}.
- **Breaks fault tolerance.** Elixir's supervision philosophy is "let it crash." A system error usually should crash the process so the supervisor can restart it (and you get a crash report with a stack trace). Converting it into an ordinary return value prevents that safety net.
- **Hides the stack trace and root cause.** You log inspect(error), but unless you also log __STACKTRACE__, you've lost the most useful bit. Even then, the caller still has no context.
- **Encourages defensive coding everywhere else.** Downstream functions must now handle phantom {:error, "Generic message"} tuples that were never in the Ecto contract, leading to boilerplate and accidental swallowing of real domain errors (see the sketch below).
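To make that last point concrete, here is a sketch (with a hypothetical Accounts.create_user/1 that uses the blanket rescue) of what callers end up writing once the generic tuple exists:

```elixir
# Every caller now needs a clause for a tuple that was never part of the
# Ecto contract, and real changeset errors are easy to swallow along the way.
case Accounts.create_user(params) do
  {:ok, user} ->
    {:ok, user}

  {:error, %Ecto.Changeset{} = changeset} ->
    {:error, changeset}

  {:error, "Generic message"} ->
    # Duplicate email? Dead database? Impossible to tell from here.
    {:error, :unknown}
end
```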
3. Better patterns
3.1 Use non‑bang APIs and pattern match
```elixir
case Repo.insert(changeset) do
  {:ok, struct} ->
    {:ok, struct}

  {:error, %Ecto.Changeset{} = cs} ->
    {:error, cs} # business error – propagate the changeset
end
```
If Repo.insert/2 raises (network down, bad migration, etc.), let it crash. Your supervisor, the Logger backend (or Sentry/AppSignal/etc.), and any retry strategy will handle the rest.
3.2 Use bang APIs *inside* supervised tasks
If you conceptually want “success or crash,” call the bang version in a short‑lived process you own (Task, GenServer handle‑call, etc.):
```elixir
def create_user!(params) do
  changeset = User.changeset(%User{}, params)
  Repo.insert!(changeset) # raises on infra failure OR bad changeset
end
```
The caller can choose whether to wrap that in try/rescue or link/monitor the task.
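For example, a minimal sketch of the monitor-the-task option (assuming a MyApp.TaskSupervisor is already started in your supervision tree):

```elixir
# Run the bang call in an unlinked, supervised task; a crash stays inside
# the task and the caller decides how to react to it.
task =
  Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn ->
    create_user!(params)
  end)

case Task.yield(task, 5_000) || Task.shutdown(task) do
  {:ok, user} -> {:ok, user}          # task finished normally
  {:exit, reason} -> {:error, reason} # infra failure or bad changeset crashed it
  nil -> {:error, :timeout}           # took too long; task was shut down
end
```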
3.3 Translate only *expected* DB error codes
Occasionally you truly need to map a low‑level PG error to a domain error (e.g., unique‑violation for an idempotency key). Catch just that error and translate, preserving everything else:
```elixir
try do
  Repo.insert!(row)
rescue
  e in Postgrex.Error ->
    if e.postgres.code == :unique_violation do
      {:error, :duplicate_key}
    else
      reraise e, __STACKTRACE__
    end
end
```
4. What to do with logging
- Let your crash reporter or Logger backend capture uncaught errors globally.
- Log business errors where they are handled (often in Phoenix controllers, Absinthe resolvers, or the calling service layer), not inside a blanket rescue; see the controller sketch below.
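A sketch of that boundary (a hypothetical Accounts context in a Phoenix controller; the module and action names are assumptions):

```elixir
# The {:error, changeset} business case is handled (and logged, if useful)
# right here at the boundary; infrastructure failures still crash the request
# process and are picked up by the crash reporter instead.
# Assumes `require Logger` in the controller module.
def create(conn, %{"user" => user_params}) do
  case Accounts.create_user(user_params) do
    {:ok, user} ->
      conn
      |> put_status(:created)
      |> render(:show, user: user)

    {:error, %Ecto.Changeset{} = changeset} ->
      Logger.warning("user creation rejected: #{inspect(changeset.errors)}")

      conn
      |> put_status(:unprocessable_entity)
      |> render(:error, changeset: changeset)
  end
end
```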
5. When might rescuing be appropriate?
- External API timeout that you must degrade gracefully (still distinct from business logic).
- Known, isolated DB errors where retry or fallback is viable (e.g., transient serialization failures in a saga pattern).
- Migration scripts / mix tasks where you want the script to continue processing subsequent items.
Even there, rescue only the specific exception types you care about and keep the original Exception.message/1 so the operator can see what went wrong.
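For instance, a sketch of a narrow retry for the transient serialization-failure case (a hypothetical helper; the retry policy and the assumption that the failure surfaces as a Postgrex.Error are mine):

```elixir
# Retry the transaction a few times on a transient serialization failure
# (Postgres SQLSTATE 40001); everything else is logged with its original
# message and re-raised so supervision still sees it.
# Assumes `require Logger` in the enclosing module.
defp run_with_serialization_retry(fun, attempts \\ 3) do
  Repo.transaction(fun)
rescue
  e in Postgrex.Error ->
    if e.postgres.code == :serialization_failure and attempts > 1 do
      run_with_serialization_retry(fun, attempts - 1)
    else
      Logger.error("transaction failed: #{Exception.message(e)}")
      reraise e, __STACKTRACE__
    end
end
```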
Bottom line
Rescuing all Repo.* exceptions and returning {:error, _} conflates system failures with domain logic, obscures diagnostics, and undercuts OTP supervision.
Prefer the natural Ecto contracts—pattern‑match on {:error, changeset} for business cases and let unexpected failures crash (or handle them narrowly).
Adopting this discipline keeps your codebase idiomatic, debuggable, and resilient.