How to have commands constrained on both ends with Commanded

I’m playing with the thought of implementing a small game using event sourcing, to learn more about it.

To that end I looked into Commanded’s documentation and watched a talk about it. From both it became pretty clear to me how to model things that are constrained on one end, like a bank withdrawal.

But how could one model transactions that are constrained on both ends?

In the context of the game, the first thing that came to my mind was a player dropping an item into a chest: the player needs to have the item in their inventory, and it needs to fit into the receiving chest.

Currently, the only way I can see after reading through Commanded’s documentation is to do it like this:

  1. Create a command that moves the item from the inventory to the chest.
  2. Check whether the inventory contains the item in the `execute` function.
  3. Create the event that moves it into the chest.
  4. Create the command that puts the item in the chest from the process manager.
  5. Check whether there is enough space in the chest (for this example, assume there is not).
  6. Fire an event that rolls the transaction back.
  7. Handle that event back in the process manager.
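Here is roughly how I imagine that flow in code. All module, command, and event names below are my own invention; only the `execute`/`apply` and process-manager callback shapes come from Commanded (v1-style):

```elixir
defmodule Game.Inventory do
  defstruct [:player_id, items: []]

  # Step 1: the command.
  defmodule DropItem do
    defstruct [:player_id, :item_id, :chest_id]
  end

  defmodule ItemDropped do
    defstruct [:player_id, :item_id, :chest_id]
  end

  # Step 2: check the inventory invariant inside execute/2.
  def execute(%__MODULE__{items: items}, %DropItem{item_id: item_id} = cmd) do
    if item_id in items do
      # Step 3: emit the event that moves the item out of the inventory.
      %ItemDropped{player_id: cmd.player_id, item_id: item_id, chest_id: cmd.chest_id}
    else
      {:error, :item_not_in_inventory}
    end
  end

  def apply(%__MODULE__{} = inventory, %ItemDropped{item_id: item_id}) do
    %__MODULE__{inventory | items: List.delete(inventory.items, item_id)}
  end
end

defmodule Game.ItemTransfer do
  use Commanded.ProcessManagers.ProcessManager,
    application: Game.App,
    name: "ItemTransfer"

  defstruct [:item_id]

  def interested?(%Game.Inventory.ItemDropped{item_id: id}), do: {:start, id}
  def interested?(%ItemPlacedInChest{item_id: id}), do: {:stop, id}
  def interested?(%ChestFull{item_id: id}), do: {:continue, id}

  # Step 4: dispatch the command that puts the item in the chest.
  def handle(%__MODULE__{}, %Game.Inventory.ItemDropped{} = event) do
    %PlaceItemInChest{chest_id: event.chest_id, item_id: event.item_id}
  end

  # Steps 5-7: the chest aggregate checked its capacity in its own
  # execute/2 (step 5), emitted a ChestFull event (step 6), and here the
  # process manager compensates by returning the item (step 7).
  def handle(%__MODULE__{}, %ChestFull{} = event) do
    %ReturnItemToInventory{player_id: event.player_id, item_id: event.item_id}
  end
end
```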

But is this atomic? Or could there be a race where the player has already picked up another item and no longer has room left in their inventory?

Are there possibilities I currently do not see because I have no experience in this field?

Is there a way to make my flow from above atomic? (Or is it already?)

Thanks in advance!


Aggregates are used to enforce business invariants, but you also want to keep them as small as possible to allow concurrency of operations, much like the trade-offs involved in deciding how to separate GenServer processes.

When you have inter-aggregate dependencies you have to use eventual consistency, as you outlined, which can lead to race conditions. To alleviate this, a command is validated up front by querying a read model to ensure that it is likely to succeed. In your example you would check the player’s inventory and the receiving chest before attempting to drop the item. This limits failures to small race conditions, which must be handled by executing a compensating command to undo the event that has already occurred but you’ve now determined is invalid, much like a reversing transaction in accounting. These operations are not atomic; they are potentially long-running sagas.
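As a sketch of the up-front validation: the read-model modules, router, and command names here are placeholders I’ve made up, not part of Commanded’s API.

```elixir
defmodule Game.DropItemValidator do
  # Validate against read models before dispatching, accepting that a
  # small race window remains, which the compensating command covers.
  def drop_item(player_id, item_id, chest_id) do
    cond do
      not InventoryReadModel.holds?(player_id, item_id) ->
        {:error, :item_not_in_inventory}

      not ChestReadModel.has_space?(chest_id) ->
        {:error, :chest_full}

      true ->
        # Likely to succeed, but not guaranteed: the aggregates still
        # enforce their own invariants and may trigger compensation.
        Game.Router.dispatch(%DropItem{
          player_id: player_id,
          item_id: item_id,
          chest_id: chest_id
        })
    end
  end
end
```

The read models may be stale by the time the command executes, which is exactly why the compensating command is still required; the pre-check only shrinks the failure window.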

The reason for this approach is that limiting the size of aggregates and using eventual consistency allows infinite scaling (in theory, at least), because you can distribute the aggregate processes amongst any number of nodes without relying on distributed transactions (which don’t scale). This is explained in more detail in Pat Helland’s position paper “Life beyond Distributed Transactions: an Apostate’s Opinion”.
