I think one thing you’re missing here is that running things inside a transaction by default only prevents you from partially writing data, i.e. either all of the changes you made during the transaction are committed, or none are. It does not, by default, prevent other transactions from modifying the database in a way that is visible to you during the transaction.
In the thread you linked, my issue was that I had a long-running import spread across different processes, but I wanted to keep those inserts in a single transaction so that no partial import would be persisted in case of an error.
Take a look at the Postgres docs for this. In particular, note that at the default isolation level (READ COMMITTED),
> In effect, a SELECT query sees a snapshot of the database as of the instant the query begins to run. […] note that two successive SELECT commands can see different data, even though they are within a single transaction, if other transactions commit changes after the first SELECT starts and before the second SELECT starts.
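To make that concrete, here’s a minimal sketch of the effect using Ecto; `MyApp.Repo` and the `accounts` table are hypothetical stand-ins:

```elixir
# Two reads inside one default-isolation (READ COMMITTED) transaction can
# disagree if another session commits in between.
MyApp.Repo.transaction(fn ->
  %{rows: [[first]]} =
    MyApp.Repo.query!("SELECT balance FROM accounts WHERE id = $1", [1])

  # ...meanwhile another session commits, e.g.:
  # UPDATE accounts SET balance = balance + 10 WHERE id = 1;

  %{rows: [[second]]} =
    MyApp.Repo.query!("SELECT balance FROM accounts WHERE id = $1", [1])

  # `second` may differ from `first`, because each SELECT takes a fresh
  # snapshot at this isolation level.
  {first, second}
end)
```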
If you want to ensure that the data returned from two different contexts is consistent, you would have to raise the isolation level of your transaction (e.g. to REPEATABLE READ, where the whole transaction reads from a single snapshot), which may have an impact on how well your database handles concurrent load.
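Here’s a sketch of how that might look, again with the hypothetical `MyApp.Repo` and made-up `posts`/`comments` tables; note that Postgres requires `SET TRANSACTION` to run before the first query of the transaction:

```elixir
MyApp.Repo.transaction(fn ->
  # Must come before any other statement in this transaction.
  MyApp.Repo.query!("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")

  # From the first query onward, every read in this transaction sees the
  # same snapshot, so the two result sets are mutually consistent even if
  # they are fetched by different contexts/resolvers.
  %{rows: posts} = MyApp.Repo.query!("SELECT id, title FROM posts")
  %{rows: comments} = MyApp.Repo.query!("SELECT id, post_id, body FROM comments")

  {posts, comments}
end)
```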
A nicer solution, IMO, would be to either create a view in the database that aggregates the data needed for a particular GraphQL endpoint and use that directly, splitting the result into individual structs on the application side, or to explicitly construct a single query which grabs both sets of data at once.
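For the view variant, a hypothetical migration might look like this (the table and column names are just illustrative):

```elixir
defmodule MyApp.Repo.Migrations.CreatePostSummaries do
  use Ecto.Migration

  def up do
    execute """
    CREATE VIEW post_summaries AS
    SELECT p.id, p.title, count(c.id) AS comment_count
    FROM posts p
    LEFT JOIN comments c ON c.post_id = p.id
    GROUP BY p.id
    """
  end

  def down do
    execute "DROP VIEW post_summaries"
  end
end
```

You can then point a read-only Ecto schema at `post_summaries` like any other table, and because the view is resolved within a single SELECT, each row you get back is internally consistent without touching the isolation level.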
You may argue that the data is in different contexts and this would break the context separation, but if the sets of data depend on each other to the point where they can become inconsistent over a single GraphQL query, then they likely belong in the same context or at least deserve explicit coupling at the database level. Keeping sets of data consistent in the face of concurrent updates is, after all, exactly what these databases were built for.