Ecto transactions, db_connection Postgres id and exceptions

Hi,

Three really quick questions:

  1. Is there any easy way to get the Postgres connection id so I can put it in my logs?

  2. It appears that certain errors (I have mostly seen serialization errors do this) cause Postgres to close the socket for that connection. This results in an error like:

%DBConnection.ConnectionError{
  message: "transaction rolling back",
  reason: :error,
  severity: :error
}

Usually I would get a %Postgrex.Error{} as the exception. Is that a timing problem, with Postgres closing the connection before the correct error code is sent, or is it something else?

  3. Is there a “maximum depth” for nested transactions?

Thanks

  1. No, there’s no easy way
  2. Could you please provide an example of code which raises this error?
  3. Transactions are not nested, so there’s no maximum depth (or rather, the depth is always 1)
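To illustrate point 3, here is a minimal sketch of what "not nested" means in practice. `MyApp.Repo` and the `Post` schema are hypothetical names; the behavior shown (inner `Repo.transaction/1` calls joining the outer transaction rather than issuing a new `BEGIN`) is how Ecto handles nesting:

```elixir
MyApp.Repo.transaction(fn ->
  # This inner call does NOT open a second database transaction.
  # It runs inside the same (outer) transaction on the same connection,
  # so there is no depth to exhaust.
  MyApp.Repo.transaction(fn ->
    MyApp.Repo.insert!(%Post{title: "nested"})
  end)
end)
```

Note that because everything shares one transaction, a rollback in the inner function marks the whole outer transaction as failed as well.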

Thanks,

In this case a serialization error inside a Postgres trigger caused the failure:

2022-08-01 14:20:11.478 EDT [38653] ERROR:  could not serialize access due to concurrent update
2022-08-01 14:20:11.478 EDT [38653] CONTEXT:  SQL statement "update collections
	        set scoped_seq = scoped_seq + 1
	      where id = collection_id
	  returning scoped_seq"
	PL/pgSQL function advance_seq(uuid) line 5 at SQL statement
	SQL statement "select advance_seq(new.collection_id)"
	PL/pgSQL function initialize_file() line 11 at SQL statement
2022-08-01 14:20:11.478 EDT [38653] STATEMENT:  INSERT INTO "file_system" ("collection_id","fs_type","metadata","name","node_id","parent_id","version") VALUES ($1,$2,$3,$4,$5,$6,$7) RETURNING "rev_id","is_latest","base_id","internal_id"

Before I found the cause, the disconnect happened in about 10% of cases. The trigger was:

create trigger fs_initialize_trigger before insert on file_system
for each row when (new.internal_id is null) execute procedure initialize_file();

initialize_file() calls the failing function advance_seq():

create function advance_seq(collection_id uuid) returns bigint as $$
declare
  rev_id bigint;
begin
     update collections
        set scoped_seq = scoped_seq + 1
      where id = collection_id
  returning scoped_seq into strict rev_id;

  return rev_id;
end
$$ language plpgsql;

You can change the isolation level to avoid this error. I’d suggest setting a different isolation level and having the trigger read after the writes. I’m also not sure that Postgres sends the trigger’s error description back to the client.
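A minimal sketch of setting the isolation level for a single Ecto transaction, assuming a hypothetical `MyApp.Repo` and `file_changeset`. Postgres requires `SET TRANSACTION` to run before any query in the transaction, so it goes first; `READ COMMITTED` is shown only as an example of a level that avoids `could not serialize access` failures:

```elixir
MyApp.Repo.transaction(fn ->
  # Must be the first statement inside the transaction.
  MyApp.Repo.query!("SET TRANSACTION ISOLATION LEVEL READ COMMITTED")

  # The insert that fires fs_initialize_trigger now runs at the
  # lower isolation level, so the concurrent update of collections
  # blocks instead of raising a serialization error.
  MyApp.Repo.insert!(file_changeset)
end)
```

The trade-off is that the counter update in advance_seq() then relies on row-level locking rather than serializability checks, which is usually what you want for a sequence-like counter anyway.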