On the Play framework website they put "We are Reactive". What about Phoenix?
According to http://www.reactivemanifesto.org/, Phoenix does meet those criteria, but I guess nobody really cares whether Phoenix is reactive or not. Why? I think this guy is spot on about why: https://www.quora.com/What-is-the-significance-of-the-Reactive-Manifesto/answer/Alexey-Migutsky And that fits too well into the corporate Java world.
Agree with @sztosz, but notes anyway from the above link:
Responsive: The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.
Yeah, Phoenix is definitely responsive, and substantially more so than a Java system could ever hope to be, simply by its very design of distinct processing units, unlike the monolithic Java heap.
Resilient: The system stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.
Uh, the BEAM puts anything Java could ever hope for in this area to shame, so yes, Phoenix is far more resilient than any Java library by its nature of running on the EVM.
Elastic: The system stays responsive under varying workload. Reactive Systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs. This implies designs that have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Reactive Systems support predictive, as well as Reactive, scaling algorithms by providing relevant live performance measures. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.
Also very much yes. By default Phoenix has no such ‘contention points’ or anything of the sort; the BEAM is a preemptive scheduler with millions of ‘green’ threads (in Java parlance; try doing preemptive green threads on Java, I dare you), so yes, everything stays responsive. The only place you could run into responsiveness issues is your own code (like a single GenServer that does heavy work without farming it out to workers, or something stupid like that).
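To make the ‘millions of green threads’ claim concrete, here is a minimal sketch (plain Elixir, no Phoenix required) that spawns 100,000 processes and round-trips a message through each; the numbers and message names here are arbitrary, just for illustration:

```elixir
# Spawn 100_000 processes; each waits for a :ping and answers with a :pong.
parent = self()

pids =
  for i <- 1..100_000 do
    spawn(fn ->
      receive do
        :ping -> send(parent, {:pong, i})
      end
    end)
  end

# Ping every process, then collect every reply.
Enum.each(pids, &send(&1, :ping))

replies =
  for _ <- 1..100_000 do
    receive do
      {:pong, _i} -> :ok
    end
  end

IO.puts("collected #{length(replies)} replies")
```

On a typical machine this finishes in well under a second; the equivalent with OS threads would not get off the ground.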
Message Driven: Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. This boundary also provides the means to delegate failures as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.
Uhhh, this is the very definition of how the BEAM, Erlang, Elixir, and Phoenix work, also significantly more so than any Java program could hope to be (mostly due to Java’s inept memory model in this regard).
Large systems are composed of smaller ones and therefore depend on the Reactive properties of their constituents. This means that Reactive Systems apply design principles so these properties apply at all levels of scale, making them composable. The largest systems in the world rely upon architectures based on these properties and serve the needs of billions of people daily. It is time to apply these design principles consciously from the start instead of rediscovering them each time.
Does this not also perfectly describe the EVM’s Applications concept that everything is built within?
Erlang was reactive long before the reactive manifesto was written.
And Akka is a copy of the BEAM for the JVM.
What really riled me with the first version of the reactive manifesto is when they touted these things as being “new”, when we in the Erlang community had been doing them for over 20 years. And we were definitely not the first. Reactive programming has been done since the dawn of computers.
Of course our great failing was not inventing a set of good catchy terms for describing known features.
EDIT: That is why I never signed it
Well, Phoenix does have the issue with the last point of the reactive manifesto, which is:
Message Driven: Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency.
This actually is true about Phoenix itself. It is a pretty decently written piece of software; it’s got a pubsub/bus component of its own that handles message passing (and is used internally by parts of the system, such as channels).
But this may or may not be true about applications built using Phoenix. It does not force you to build your app in a reactive way, even though the framework itself may be written this way.
There is nothing stopping you from closely coupling all of your modules, never starting any workers, never sending any messages between your components, never doing anything more than procedural code. In fact, if you pick up a book on Phoenix (there’s one, so not much choice), much of the code you will be writing is procedural in nature, until the last chapters when they start decoupling things.
This is not bad per se. But one cannot label Phoenix as a framework to build ‘reactive’ apps (as in reactive manifesto), because you can very well do just the opposite :).
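For illustration, a toy pub/sub can be sketched in a few lines with Elixir’s standard-library Registry. This is only loosely in the spirit of Phoenix’s pubsub component, not its actual implementation; the registry name, topic, and message shapes are all invented:

```elixir
# A duplicate-key Registry lets many processes subscribe to the same topic.
{:ok, _} = Registry.start_link(keys: :duplicate, name: MyPubSub)

# Subscriber side: register the current process under a topic.
{:ok, _} = Registry.register(MyPubSub, "room:lobby", [])

# Publisher side: dispatch a message to every process on the topic.
Registry.dispatch(MyPubSub, "room:lobby", fn entries ->
  for {pid, _} <- entries, do: send(pid, {:broadcast, "room:lobby", "hi"})
end)

receive do
  {:broadcast, topic, msg} -> IO.puts("#{topic}: #{msg}")
end
```

The point is how little coupling there is: publisher and subscriber share only a topic string, never a module reference.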
Interfaces between most ‘components’ are like GenServers and such. If it is ‘Pure’ functions with no side effects and no state and no synchronization, then message passing is an utter and complete waste, just call the function in-process. However OTP itself is entirely message driven. For Phoenix a single connection is a single process (sometimes even more!) and cannot communicate to any other process without messages. Connecting to the DB involves other processes (messages), asking the pubsub for anything involves messages. Everything is either pure or messages. ^.^
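A small sketch of that split, with an invented Counter module: pure functions are simply called in-process, while state sits behind a GenServer and is reached only via messages:

```elixir
# State lives behind a GenServer; the only way in is a message.
defmodule Counter do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)   # async message
  def value(pid), do: GenServer.call(pid, :value)           # sync message

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, n), do: {:noreply, n + 1}

  @impl true
  def handle_call(:value, _from, n), do: {:reply, n, n}
end

# Pure function: no state, no synchronization -- just call it in-process.
double = fn x -> x * 2 end
8 = double.(4)

# Stateful component: every interaction is message passing under the hood.
{:ok, pid} = Counter.start_link(0)
Counter.increment(pid)
Counter.increment(pid)
2 = Counter.value(pid)
```

Wrapping `double` in a GenServer would be exactly the waste described above; wrapping the counter in a bare module would lose the isolation.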
I’m pretty sure that it is possible to “louse things up” in Play if you can’t be bothered to become competent in the fundamental principles that the framework is based on.
Erlang processes (and by extension OTP) are at the core of what Elixir/Phoenix are all about - if you aren’t leveraging that, why bother using Erlang/Elixir/Phoenix?
The naming of the manifesto is also unfortunate - at least to me personally “reactive” implies that they lifted the moniker from “Reactive-Extensions”, which could be construed as favoring an event-stream based approach - which isn’t something that is borne out in the text of the manifesto.
The message-driven part of the manifesto could really be more comprehensive. As developers, we have been told that we should “never be blocked”, so it’s high time that our designs and their implementations also embody that notion. Message passing is part of the solution, but our thinking needs to move from a (more-or-less) “centralized control flow” to a “distributed control flow”. Message passing is supposed to be “asynchronous”, but that doesn’t mean it’s impossible to implement an inefficient “synchronous” solution. For example, one of Rich Hickey’s criticisms of the Actor model is:
It is a much more complex programming model, requiring 2-message conversations for the simplest data reads, and forcing the use of blocking message receives, which introduce the potential for deadlock.
The material that I’ve been going through suggests to me: “handle_cast when possible, handle_call when needed”, i.e. organize the design like a bucket brigade rather than allocating one person to each bucket - so that we minimize emulating function calls with “2-message conversations” (and processes blocking simply because they have nothing to do).
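The bucket-brigade idea can be sketched with plain processes (the module and stage functions are invented): each stage forwards the bucket one-way with a send, roughly the shape of a handle_cast, and never waits for a reply from the next stage:

```elixir
# Each stage transforms the bucket and passes it on; no stage ever
# blocks on a reply from the stage ahead of it.
defmodule Brigade do
  def stage(fun, next) do
    spawn(fn -> loop(fun, next) end)
  end

  defp loop(fun, next) do
    receive do
      {:bucket, value} ->
        send(next, {:bucket, fun.(value)})
        loop(fun, next)
    end
  end
end

# Build the pipeline back-to-front, ending at this process.
sink = self()
stage2 = Brigade.stage(fn x -> x * 10 end, sink)
stage1 = Brigade.stage(fn x -> x + 1 end, stage2)

send(stage1, {:bucket, 4})

receive do
  {:bucket, result} -> IO.puts("result: #{result}")  # (4 + 1) * 10 = 50
end
```

Contrast with a call-based chain, where stage1 would sit blocked waiting for stage2’s reply: a 2-message conversation per hop.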
In terms of the Play framework being possibly more “reactive-by-default” than Phoenix? I’d need some convincing. I’m all for tools/environments that make the “right thing easy and the wrong thing hard” - but there are some concerns where tools cannot find a reasonable default.
Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling,
One sentence and two hard problems:
- choosing boundaries
- putting loose coupling where it needs to be
Choosing the optimal bounded context for a Domain-Driven Design is essential - otherwise there will be a rough road ahead. Will languages/frameworks/tools ever make this choice trivial? In object-orientation we’ve had the Single Responsibility Principle to guide us toward the “optimal” boundary around an object. But it isn’t uncommon for developers to succumb to the temptation to add just one more method to the object (and then another one …). Then we run into Command Query Separation and find out that sometimes it is too much responsibility for a single object to both mutate and represent its own state. So choosing the right boundaries, in general, can be a hard problem - possibly one without a general tooling solution.
Nicolai M. Josuttis: p.9 SOA in Practice:
Loose coupling is the concept of reducing system dependencies. Because business processes are distributed over multiple backends, it is important to minimize the effects of modifications and failures. Otherwise, modifications become too risky, and system failures might break the overall system landscape. Note, however, that there is a price for loose coupling: complexity. Loosely coupled distributed systems are harder to develop, maintain, and debug.
So generally loose coupling adds complexity - in the right place (e.g. on relevant boundaries) it pays dividends; in the wrong place it’s just a burden on product development and maintenance. Can tooling ever decide where loose coupling is appropriate?
Sometimes we just can’t expect our tools to do all the work for us.
It’s also worth mentioning that Erlang/OTP makes building those sorts of loosely coupled systems way easier than, say, Ruby, where I come from. It has been designed with this decoupling in mind.
As I read your answers I couldn’t help but remember Virding’s First Rule of Programming:
Any sufficiently complicated concurrent program in another language contains
an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.
With all respect to Greenspun.
Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified bug-ridden slow implementation of half of Common Lisp.
Remind me again, who made Lisp Flavoured Erlang? …
… maybe it’s the best of both worlds.
Hell yeah it does!
But let’s actually take a step back. The original question was whether Phoenix is reactive, and we started discussing it as though it would be a bad thing if it weren’t.
The concept behind Play framework’s design is very similar. You have a limited pool of Java threads, and you want to maximize their usage. So, for the most part, they do not wait for I/O.
Whenever you query a database, read a file, etc., the thread your code was executing on is reused by another request handler. Control is returned when the operation finishes, possibly to another request handler that needs computing resources.
This particular problem is not much of an issue on the Erlang VM, so it is not as critical that our applications be reactive in that sense.
Reactive code is, by its nature, harder to read and write: async calls, callbacks, promises, etc. It is way easier to write code that at least looks sequential and executes sequentially.
In Phoenix/Ecto, for example, when you query the database, the connection, query execution, etc. happen in a different process than your code. This sort of fits the reactive pattern. But the original process does wait for the query execution and does nothing in the meantime. It is not interrupted by other workers. Whenever Cowboy figures out that it needs to handle more requests in the meantime, it will just spawn more worker processes.
The reason behind the above architecture is that processes on the BEAM are very lightweight compared to system-level threads, and the Erlang VM is smarter in the way it schedules them (for one thing, it understands when a process is waiting for a message and doing nothing, and simply does not schedule it).
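The wait-in-another-process pattern described above can be sketched with bare processes; `fake_query` here stands in for the Ecto/DB round trip and is entirely invented:

```elixir
# Pretend DB layer: sleeps to simulate I/O latency, then returns rows.
fake_query = fn sql ->
  Process.sleep(50)
  {:result, sql, [:row1, :row2]}
end

# A "request" process hands the query to a separate worker process and
# then blocks in `receive` until the result comes back. The code reads
# sequentially even though the I/O happens elsewhere.
request = fn id ->
  caller = self()
  spawn(fn -> send(caller, fake_query.("SELECT #{id}")) end)

  receive do
    {:result, _sql, rows} -> {id, rows}
  end
end

# Two independent "requests": each blocks only on its own worker,
# never on anyone else's I/O.
results = Enum.map(1..2, request)
IO.inspect(results)
```

While a request process sits in that `receive`, the scheduler simply ignores it; it costs nothing until its message arrives.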
These are all good things about Phoenix and the Erlang/Elixir ecosystem of apps. We don’t have to be reactive in the same manner that Play framework or Node.js apps need to be, and this is a good thing.
Actually it boils down to the “stock link” in the upper right hand corner on the play framework home page which links it to http://www.reactivemanifesto.org/ - and the fact that the Phoenix home page doesn’t brandish the same link.
While the Play framework may be dealing with the exact issues that you mention there is no requirement in the “Reactive Manifesto” that “reactive systems” have to be dealing with the issues on exactly those terms (which is why I personally find the term “Reactive Manifesto” misleading). The manifesto is really about systems that are in general responsive, resilient, elastic, and message-driven. @OvermindDL1’s analysis comes to the conclusion that Phoenix lends itself to building responsive, resilient, elastic and message driven systems.
There is the separate issue whether Phoenix should associate itself with the Reactive Manifesto in the way the Play framework has.
I believe that when Dave Thomas and his co-signers published the Manifesto for Agile Software Development they were sincerely declaring their commitment towards positive change in the software industry. They weren’t primarily motivated by attempting to create the commercial consulting gravy train that later ensued (though of course some of them benefited from it) and ultimately led to the state where “Agile is Dead, Long Live Agility” (youtube).
My personal impression is that the “Reactive Manifesto” doesn’t come from a genuine realization that “there’s something rotten in the state of Denmark”. It seems to be an attempt to spark something similar to the commercial effect that grew alongside the “agile movement” - but more in the vein of the SOA(-tooling) era.
I personally think that Rich Hickey got it completely back-to-front wrong here. Using calls you are always synchronous and will always block, so the chance of deadlock is enormous. Using actors/Erlang processes means that the basis for communication of all kinds is asynchronous; you have to be explicit when you want to be synchronous. Granted, this requires rethinking how you structure your system, but when you get it right you actually have to work at making the system block.
It is however a real rethink and in many cases you have to seriously change how you do things. For example it is a lot less of “what is the answer” and much more of “send me your answer” and this will change your system. Think email: you mail a request then go out and do other things and when the reply has been sent to you and you have the time to read it then you will process it.
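The email analogy might be sketched like this in Elixir (all names invented): mail the question, go do other things, read the answer when it arrives and you have time:

```elixir
# A process that answers one question sent to it by mail.
answerer =
  spawn(fn ->
    receive do
      {:question, from, q} ->
        send(from, {:answer, q, String.upcase(q)})
    end
  end)

send(answerer, {:question, self(), "hello"})   # mail the request...

other_work = Enum.sum(1..100)                  # ...go do other things...

receive do                                     # ...read the reply later
  {:answer, "hello", reply} ->
    IO.puts("got #{reply} after doing other work (#{other_work})")
end
```

Nothing blocked until the moment we actually needed the answer; the send itself returned immediately.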
I will try to find some very simple erlang code which illustrates this way of thinking, it is a simple telephony example.
I will not get into the discussion of reactive and phoenix as I don’t know enough phoenix to say something sensible in it.
I am interested in this example.