I’m pretty sure that it is possible to “louse things up” in Play if you can’t be bothered to become competent in the fundamental principles that the framework is based on.
Erlang processes (and by extension OTP) are at the core of what Elixir/Phoenix are all about - if you aren’t leveraging that, why bother using Erlang/Elixir/Phoenix?
The naming of the manifesto is also unfortunate - to me, at least, “reactive” implies that they lifted the moniker from “Reactive Extensions”, which could be construed as favoring an event-stream based approach - something that isn’t borne out in the text of the manifesto.
The message-driven part of the manifesto could really be more comprehensive. As developers we have been told that we should “never be blocked”, so it’s high time that our designs and their implementations also embody that notion. Message passing is part of the solution, but our thinking needs to move from a (more or less) “centralized control flow” to a “distributed control flow”. Message passing is supposed to be “asynchronous”, but that doesn’t mean it’s impossible to implement an inefficient “synchronous” solution on top of it. For example, one of Rich Hickey’s criticisms of the Actor model is:
It is a much more complex programming model, requiring 2-message conversations for the simplest data reads, and forcing the use of blocking message receives, which introduce the potential for deadlock.
The material that I’ve been going through suggests to me: “handle_cast when possible, handle_call when needed”, i.e. organize the design like a bucket brigade rather than allocating one person to each bucket - so that we minimize emulating function calls with “2-message conversations” (and so that processes block simply because they have nothing to do, not because they are waiting on a reply).
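To make the distinction concrete, here is a minimal GenServer sketch (module and function names are mine, purely illustrative): writes go through `handle_cast`, which never blocks the caller, while the one operation that genuinely needs an answer uses `handle_call` and knowingly pays for the two-message request/reply conversation.

```elixir
defmodule Brigade.Counter do
  # Hypothetical counter illustrating "handle_cast when possible,
  # handle_call when needed".
  use GenServer

  def start_link(initial \\ 0),
    do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

  # Fire-and-forget: the caller is never blocked waiting for a reply.
  def bump(n \\ 1), do: GenServer.cast(__MODULE__, {:bump, n})

  # A read genuinely needs an answer, so it pays for the
  # two-message (request/reply) conversation.
  def value, do: GenServer.call(__MODULE__, :value)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast({:bump, n}, count), do: {:noreply, count + n}

  @impl true
  def handle_call(:value, _from, count), do: {:reply, count, count}
end
```

Because casts and calls land in the same mailbox, a `value/0` issued after several `bump/1` casts still observes them in order - the caller only blocks on the read, never on the writes.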
As for the Play framework possibly being more “reactive-by-default” than Phoenix - I’d need some convincing. I’m all for tools/environments that make the “right thing easy and the wrong thing hard”, but there are some concerns for which tools cannot supply a reasonable default.
Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling,
One sentence and two hard problems:
- choosing boundaries
- putting loose coupling where it needs to be
Choosing boundaries:
Choosing the optimal bounded context for a Domain-Driven Design is essential - otherwise there will be a rough road ahead. Will languages/frameworks/tools ever make this choice trivial? In object-orientation we’ve had the Single Responsibility Principle to guide us toward the “optimal” boundary around an object. But it isn’t uncommon for developers to succumb to the temptation to add just one more method to the object (and then another one …). Then we run into Command Query Separation and find out that sometimes it is too much responsibility for a single object to both mutate and represent its own state. So choosing the right boundaries, in general, can be a hard problem - possibly one without a general tooling solution.
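A tiny sketch of what Command Query Separation looks like in practice (all names here are hypothetical, not from any quoted source): commands produce a new state and answer nothing, queries answer a question and change nothing.

```elixir
defmodule Inventory do
  # Hypothetical Command-Query Separation sketch:
  # commands return the updated struct, queries never mutate.
  defstruct items: %{}

  # Command: changes state, returns only the new inventory.
  def add(%__MODULE__{items: items} = inv, sku, qty) do
    %{inv | items: Map.update(items, sku, qty, &(&1 + qty))}
  end

  # Query: answers a question, leaves the inventory untouched.
  def on_hand(%__MODULE__{items: items}, sku), do: Map.get(items, sku, 0)
end
```

The point isn’t the code - it’s that no tool decided where the command/query boundary belongs; a person did.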
Loose coupling:
Nicolai M. Josuttis, SOA in Practice, p. 9:
Loose coupling is the concept of reducing system dependencies. Because business processes are distributed over multiple backends, it is important to minimize the effects of modifications and failures. Otherwise, modifications become too risky, and system failures might break the overall system landscape. Note, however, that there is a price for loose coupling: complexity. Loosely coupled distributed systems are harder to develop, maintain, and debug.
So generally loose coupling adds complexity - in the right place (e.g. on relevant boundaries) it pays dividends; in the wrong place it’s just a burden on product development and maintenance. Can tooling ever decide where loose coupling is appropriate?
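For what a message-passing boundary buys you, here is a bare-processes sketch (names and message shape are illustrative assumptions): the producer knows only a pid and the shape of one message, so the consumer’s implementation can be swapped out entirely without touching the producer.

```elixir
defmodule LooselyCoupled do
  # The only coupling between the two sides is the message contract
  # {:order_placed, id} - the asynchronous boundary the manifesto
  # describes. All names here are hypothetical.

  def start_consumer(notify_pid) do
    spawn(fn -> loop(notify_pid) end)
  end

  defp loop(notify_pid) do
    receive do
      {:order_placed, id} ->
        # The producer moved on long ago; handling is asynchronous.
        send(notify_pid, {:handled, id})
        loop(notify_pid)
    end
  end

  # Producer: knows a pid and a message shape, not the consumer's code.
  def place_order(consumer, id), do: send(consumer, {:order_placed, id})
end
```

The complexity Josuttis warns about is visible even here: the happy path is easy, but debugging “who sent what, and did anyone receive it?” is strictly harder than reading a function call.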
Sometimes we just can’t expect our tools to do all the work for us.