Commanded and non-commanded part of the system: how to organize databases and testing

I am learning Commanded by building a simple project that’s just yet another wrapper around a ChatGPT conversation, plus buttons to tune the conversation. I’ve got to the point where something works: I can emit commands, events are generated, and projections are built (via Ecto projections)… but I ran into issues with testing it properly, possibly because the database organization isn’t ideal in the first place.

**Context and origin of the problem**
Not all of the system state is in CQRS. Personal information is not something to keep in a store from which it’s pretty hard to remove anything (because of GDPR and general respect for people’s wish to delete their own data). And then there’s a bunch of “standard” public web service functionality: authentication/login/accounts, password recovery and similar stuff that I wouldn’t be excited to implement in CQRS from scratch.

This “non-core” part of the system I grabbed from ; it is a more or less traditional Phoenix LiveView app starter with PostgreSQL/Ecto and a bunch of tests. The Commanded part I consider the “core” of the system, and it lives a fairly standalone life, except that its Ecto projections are displayed in the LiveView app and the LiveView app emits commands into this “core”.

Thus I ended up with three conceptual databases stored in two PostgreSQL schemas:

  1. Original LiveView app repository
  2. A separate schema for the EventStore. I wasn’t sure how this should be done; the documentation and the way the mix tasks work seemed [to me] to indicate that it’s a good idea to keep it as a separate database, so the main source of truth can get some special, separate care
  3. Projections living in the same schema as the original LiveView app, designed to be accessed by that same LiveView app via Ecto
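For concreteness, the layout above would look roughly like this in config (all module and database names here are placeholders, not my actual project’s names):

```elixir
# config/config.exs — a sketch of the "three conceptual databases,
# two physical ones" layout described above.

# 1. + 3. The LiveView app repo: CRUD tables and read-model
#    projections share one database.
config :my_app, MyApp.Repo,
  database: "my_app_dev"

# 2. The EventStore gets its own database, so the source of truth
#    can be backed up and managed separately.
config :my_app, MyApp.EventStore,
  database: "my_app_eventstore_dev"

config :my_app, event_stores: [MyApp.EventStore]
```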

**The problem**
Problems started when I wrote my first Commanded tests following the Commanded wiki and the conduit example app. Tests of the original LiveView app and the Commanded tests seem to use quite incompatible approaches.

The traditional LiveView app uses Ecto’s SQL Sandbox for testing. If I understand correctly, every test runs within its own transaction, which makes it easy to clean up state (the transaction is just rolled back) and also allows running tests in parallel (separate transactions don’t see each other’s effects).
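This is the setup the Phoenix generators produce; a minimal sketch, assuming a repo called `MyApp.Repo`:

```elixir
# test/test_helper.exs — standard Ecto SQL Sandbox setup
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :manual)

# In a case template (e.g. the generated DataCase), each test checks
# out its own connection; everything written during the test is
# rolled back when the owner process stops.
setup tags do
  pid = Ecto.Adapters.SQL.Sandbox.start_owner!(MyApp.Repo, shared: not tags[:async])
  on_exit(fn -> Ecto.Adapters.SQL.Sandbox.stop_owner(pid) end)
  :ok
end
```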

Commanded tests, judging from the Commanded wiki and conduit, run on the live [test] databases and just clean everything via SQL’s TRUNCATE at the start (or end) of each test.
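The reset helper in conduit has roughly this shape (module and table names below are placeholders, and the exact `EventStore` API differs between library versions, so treat this as a sketch rather than copy-paste code):

```elixir
# test/support/storage.ex — reset-everything helper in the style of
# conduit's Storage module.
defmodule MyApp.Storage do
  @doc "Stop the app, wipe both stores, restart. Called per test."
  def reset! do
    :ok = Application.stop(:my_app)
    :ok = Application.stop(:commanded)
    :ok = Application.stop(:eventstore)

    reset_eventstore()
    reset_readstore()

    {:ok, _} = Application.ensure_all_started(:my_app)
  end

  defp reset_eventstore do
    # Re-initialize the event store schema from scratch.
    {:ok, conn} =
      EventStore.configuration()
      |> EventStore.Config.parse()
      |> Postgrex.start_link()

    EventStore.Storage.Initializer.reset!(conn)
  end

  defp reset_readstore do
    # Truncate every projection table in the read store.
    config = Application.get_env(:my_app, MyApp.Repo)
    {:ok, conn} = Postgrex.start_link(config)

    Postgrex.query!(
      conn,
      "TRUNCATE TABLE projection_versions, my_projection RESTART IDENTITY;",
      []
    )
  end
end
```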

Naturally, it is quite difficult to run both kinds of tests in parallel. After all, a non-Commanded test may go over pages/functionality that touch Commanded state which “might just have been deleted by a Commanded test”.

**How to organize things properly**
While my problem at hand is specifically about testing, I’d love to use the opportunity to learn and figure out how things should be organized the proper way (somehow learning seems more efficient after you’ve stepped into issues :slight_smile: )

It would be great to get advice on the following things:

  1. Does the whole idea of having both Commanded and non-Commanded CRUD architectures in the same not-too-big service make sense in the first place? Is this how other people do it, or do you implement all the logins/signups/emails from scratch in CQRS?
  2. Does it make sense to separate the data repositories as I did (CRUD parts living in the same repo as projections, EventStore separately), or should it be different, e.g. three completely separate schemas or just one to rule them all?
  3. How do you (or would you) organize tests for both CRUD and Commanded parts of the system?
    • Shall I just stop running plain mix test and have a batch script that calls the different test batches sequentially (e.g. mix test test/crud and mix test test/core)?
    • Or would you get rid of the SQL Sandbox and just use the Commanded “reset everything” way for both kinds of tests (maybe with --max-cases 1 so tests don’t have the DB purged in the middle of running)?
    • Or would it make more sense the other way around: give each CRUD test case its own clean EventStore (probably in memory, as one real DB schema per test case would probably be too difficult), and, just for the sake of tests, make Commanded projections use a separate test database?
    • Or something completely different?
  4. Given that Commanded tests (per the wiki and conduit approaches) wipe everything at the start (or end) of each test, how are they executed in parallel? Or is the usual way to run Commanded tests via mix test --max-cases 1?
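For completeness, the “batch script” option from question 3 would be something as simple as this (assuming the CRUD and Commanded tests are separated into test/crud and test/core, which is my hypothetical layout, not an established convention):

```shell
#!/bin/sh
# Run the sandbox-based CRUD suite first (can stay parallel),
# then the truncate-based Commanded suite serialized.
set -e
mix test test/crud
mix test test/core --max-cases 1
```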