I have written such a system. The premise is as follows:
Every Deadline is a wrapper around a potential notification, computed from an Event Date & Time, and carries the following fields:
- Provider ID
- Event Path
- Event Date & Time (Date & Time with Time Zone)
- Trigger Date & Time (UTC)
The Event Date & Time carries a real time zone, while the Trigger Date & Time is in UTC; the Trigger Date & Time is when the first notification is sent.
The Deadline Provider (a delegate) in each client application is responsible, when notified by the Deadlines system, for replying with exactly one of the following:
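A record along these lines could be sketched as an Elixir struct. The module and field names below are illustrative, not the original schema; the only fixed points from the description above are the four fields and the UTC trigger:

```elixir
defmodule Deadlines.Deadline do
  @moduledoc """
  Sketch of a Deadline: a potential notification derived from an event's
  date & time. Field names are hypothetical.
  """

  @enforce_keys [:provider_id, :event_path, :event_at, :trigger_at]
  defstruct [
    :provider_id,        # which Deadline Provider owns this deadline
    :event_path,         # application-defined path identifying the event
    :event_at,           # DateTime carrying its original time zone
    :trigger_at,         # DateTime in UTC: when the next notification fires
    notifications_sent: 0
  ]

  @doc "Build a deadline whose first trigger is `offset_seconds` after the event, in UTC."
  def new(provider_id, event_path, %DateTime{} = event_at, offset_seconds \\ 0) do
    trigger_at =
      event_at
      |> DateTime.shift_zone!("Etc/UTC")
      |> DateTime.add(offset_seconds, :second)

    %__MODULE__{
      provider_id: provider_id,
      event_path: event_path,
      event_at: event_at,
      trigger_at: trigger_at
    }
  end
end
```

Converting to `Etc/UTC` needs no external time zone database, since the offsets are already stored in the source `DateTime`.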
- Retire the Deadline with no further notifications
- Increment the retry counter of the Deadline and re-notify at X date & time (in UTC), updating the Trigger Date & Time, based on the number of notifications already sent and the existing Event Date & Time
Note that on every invocation the Deadline Provider is given the original Event Date & Time and the number of notifications already sent; this lets us insert code that decides whether to postpone a deadline, and to when, based on business logic.
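One way to express that contract is a behaviour with a single callback that receives the original event time and the notification count, and returns exactly one of the two replies. The module names and the backoff policy below are hypothetical, purely for illustration:

```elixir
defmodule Deadlines.Provider do
  @moduledoc """
  Hypothetical behaviour for a Deadline Provider. On each notification it
  must reply with exactly one of:
    * :retire               - no further notifications
    * {:retry, %DateTime{}} - re-notify at the given UTC date & time
  """

  @callback handle_deadline(
              event_path :: String.t(),
              event_at :: DateTime.t(),
              notifications_sent :: non_neg_integer()
            ) :: :retire | {:retry, DateTime.t()}
end

defmodule Forms.SubmissionProvider do
  @behaviour Deadlines.Provider

  # Illustrative policy: nag at most three times, backing off
  # 1, 3, then 7 days after each notification; then give up.
  @impl true
  def handle_deadline(_event_path, _event_at, sent) when sent >= 3, do: :retire

  def handle_deadline(_event_path, _event_at, sent) do
    days = Enum.at([1, 3, 7], sent)
    {:retry, DateTime.add(DateTime.utc_now(), days * 86_400, :second)}
  end
end
```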
We create/update deadlines alongside normal operations, for example:
When changing a form from Pending to Active, create a deadline for Submission expiring 14 days from now
When changing a form from Active to Submitted, remove the same deadline only if it exists and is still active
If the user does nothing, then 14 days later, the deadline will hit, and we can notify the user via email (via the registered Deadline Provider)
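The pairing of state changes with deadline bookkeeping might look like the sketch below. To keep it runnable on its own, a plain map stands in for the real deadline server, and the function names are hypothetical:

```elixir
defmodule Forms.Lifecycle do
  # A plain map keyed by event path stands in for the deadline server here,
  # so only the bookkeeping pattern is shown.

  def activate(form, deadlines) do
    # Pending -> Active: submission is due 14 days from now.
    trigger_at = DateTime.add(DateTime.utc_now(), 14 * 86_400, :second)
    path = "/forms/#{form.id}/submission"
    deadline = %{path: path, trigger_at: trigger_at, active?: true}
    {%{form | status: :active}, Map.put(deadlines, path, deadline)}
  end

  def submit(form, deadlines) do
    # Active -> Submitted: retire the deadline only if it exists
    # and is still active; otherwise leave the store untouched.
    path = "/forms/#{form.id}/submission"

    deadlines =
      case deadlines do
        %{^path => %{active?: true}} -> Map.delete(deadlines, path)
        _ -> deadlines
      end

    {%{form | status: :submitted}, deadlines}
  end
end
```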
This allows us to very easily manage thousands of outgoing notifications a day in one of the systems.
At runtime, we split out access with a GenStateMachine per Provider, which is responsible for sending all messages regarding its deadlines. The machine has the following states:
:waiting — the initial state: the server is quiescent with no further immediate action. It transitions to :loading to load the next batch of deadlines. A newly started server remains in this state momentarily, then transitions to :loading on timeout.
:loading — the server is loading the next batch of notifications and waits for the internal load event to fire. All calls to create/destroy deadlines are postponed until the server re-enters the :waiting state.
:sending — the server is notifying the provider of deadlines that have become due. All calls to create/destroy deadlines are postponed until the server re-enters the :waiting state.
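A minimal skeleton of such a per-provider machine, written here against Erlang's :gen_statem directly so it runs without the GenStateMachine dependency (the load/send internals are stubbed; timeouts and state names follow the description above):

```elixir
defmodule Deadlines.Server do
  @behaviour :gen_statem

  def start_link(provider_id),
    do: :gen_statem.start_link(__MODULE__, provider_id, [])

  @impl true
  def callback_mode, do: :state_functions

  @impl true
  def init(provider_id) do
    # Start quiescent; after a short timeout, move on to loading.
    {:ok, :waiting, %{provider_id: provider_id, batch: []},
     [{:state_timeout, 100, :load}]}
  end

  # :waiting - quiescent; create/destroy calls are served directly.
  def waiting(:state_timeout, :load, data) do
    send(self(), :loaded)   # stub: pretend the batch load completes instantly
    {:next_state, :loading, data}
  end

  def waiting({:call, from}, {:create, deadline}, data),
    do: {:keep_state, %{data | batch: [deadline | data.batch]}, [{:reply, from, :ok}]}

  # :loading - wait for the internal load event; postpone all calls.
  def loading(:info, :loaded, data),
    do: {:next_state, :sending, data, [{:next_event, :internal, :notify}]}

  def loading({:call, _from}, _req, _data),
    do: {:keep_state_and_data, [:postpone]}

  # :sending - notify the provider of due deadlines; still postponing calls.
  def sending(:internal, :notify, data) do
    # stub: here we would call the provider per due deadline and apply its
    # :retire / {:retry, at} reply, then go quiescent until the next batch.
    {:next_state, :waiting, data, [{:state_timeout, 60_000, :load}]}
  end

  def sending({:call, _from}, _req, _data),
    do: {:keep_state_and_data, [:postpone]}
end
```

The :postpone action is what makes the write-through-cache behaviour cheap: callers never see the loading/sending phases, their requests are simply replayed once the machine is back in :waiting.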
The use of GenStateMachine, a wrapper around gen_statem, essentially transforms the deadline server into an intelligent write-through cache, and we have used this subsystem happily for multiple years.
In my opinion, Oban is a task-execution framework, not a business-logic layer. You should dispatch tasks for immediate execution when a deadline fires, rather than using the task-execution framework to manage the business logic itself; the latter gives you much less control and forfeits support you could otherwise get, such as type checking from Dialyzer, explicit calendar operations, etc.
For the Postgres-savvy — we have had to write a custom Ecto type to store a timestamp with time zone properly, since in Postgres, per the documentation:

> For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's TimeZone parameter, and is converted to UTC using the offset for the timezone zone.

In other words, Postgres keeps only the UTC instant and discards the original time zone.
So, our solution is to create a Composite Type (field names here are illustrative, not our exact definition):

CREATE TYPE user_datetime AS (
    wall_time  timestamp,  -- local wall-clock time, uninterpreted
    time_zone  text        -- IANA zone name, e.g. 'Europe/London'
);
This then allows us to capture 100% of the relevant information about the original Deadline, such as when and where exactly the original event did or will arise.
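Independent of the storage wiring, the round trip such a custom type has to implement is small: split a DateTime into wall-clock time plus zone name on the way in, and rebuild it on the way out. A sketch of just that logic (the real version also needs the Ecto/Postgrex composite-type plumbing, and a time zone database such as tzdata for zones other than Etc/UTC):

```elixir
defmodule UserDatetime do
  @moduledoc """
  Dump/load logic a custom Ecto type for the composite would wrap.
  This sketch sticks to Etc/UTC so it is self-contained; other zones
  require a configured time zone database.
  """

  # DateTime -> {naive wall time, zone name}: what gets stored.
  def dump(%DateTime{time_zone: tz} = dt), do: {DateTime.to_naive(dt), tz}

  # {naive wall time, zone name} -> DateTime: what gets read back.
  def load({%NaiveDateTime{} = naive, tz}), do: DateTime.from_naive(naive, tz)
end
```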