I wasn’t actually thinking about SPAs specifically - any page, once in the browser, is a separate application that uses the browser as its runtime. Sending form data back to the server is an application using an interface.
> duplication of logic.
Eliminating that duplication is often pursued because keeping the duplicated rules in sync costs effort, and therefore money. But that ignores the fact that the rules serve entirely different purposes on each side: on the client they govern how client state is presented to the user; on the server they decide whether or not data is allowed to modify server state. So it can be argued that it is necessary duplication - as inconvenient as that may be.
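To make that concrete, here is a minimal sketch (assuming a hypothetical email field and signup handler - all names are illustrative): the same rule appears on both sides, but the client uses it to shape presentation while the server uses it to guard its state.

```typescript
// A sketch only: the rule is duplicated, but each copy serves a different purpose.
const emailLooksValid = (value: string): boolean => /^[^@\s]+@[^@\s]+$/.test(value);

// Client side: the rule only shapes how the page presents its own state.
function renderEmailHint(value: string): string {
  return emailLooksValid(value) ? "" : "Please enter a valid email address.";
}

// Server side: the same rule decides whether the request may modify server state.
function handleSignup(payload: { email: string }): { status: number; body: string } {
  if (!emailLooksValid(payload.email)) {
    return { status: 422, body: "invalid email" }; // rejected - server state untouched
  }
  // ...persist the account here...
  return { status: 201, body: "created" };
}
```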
I see Phoenix/LiveView as promising in that regard.
I see nothing wrong with the server providing information to the page via events that allow it to change its own state - but I draw the line at sending page fragments, which essentially boils down to the server monkey patching the page; it violates the page’s (the application’s) autonomy.
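Roughly, the distinction looks like this (a sketch, assuming a hypothetical WebSocket endpoint; the URL, event names, and payload shapes are all made up):

```typescript
// Hypothetical endpoint and event names - purely illustrative.
const socket = new WebSocket("wss://example.test/updates");

let pageState = { unreadCount: 0 }; // state owned by the page

// Acceptable: the server describes what happened; the page decides how its
// own state - and therefore its presentation - changes in response.
socket.addEventListener("message", (e) => {
  const event = JSON.parse(e.data) as { type: string; payload: { count: number } };
  if (event.type === "inbox:new-messages") {
    pageState = { unreadCount: pageState.unreadCount + event.payload.count };
    const badge = document.querySelector("#unread");
    if (badge) badge.textContent = String(pageState.unreadCount);
  }
});

// The line being drawn: the server ships a rendered fragment and the page
// splices it in verbatim - the server is effectively monkey patching the page.
function applyServerFragment(selector: string, html: string): void {
  const target = document.querySelector(selector);
  if (target) target.innerHTML = html;
}
```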
At their core, Elm/React/Cycle.js have the right idea:
- Events change the state of the page
- The new state is transformed into the visual representation presented to the user
That is simple. Monkey patching all over the place (“mutation heaven”) - not so much. Now, the quality of the frameworks themselves is an entirely different discussion.
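The loop is small enough to sketch (assuming a hypothetical counter page; the Model/Msg/update/view names follow Elm conventions but aren’t tied to any particular framework):

```typescript
// Hypothetical counter page illustrating the events -> state -> view loop.
type Model = { count: number };
type Msg = { kind: "increment" } | { kind: "decrement" };

// Events change the state of the page...
function update(model: Model, msg: Msg): Model {
  switch (msg.kind) {
    case "increment": return { count: model.count + 1 };
    case "decrement": return { count: model.count - 1 };
  }
}

// ...and the new state is transformed into the visual representation.
function view(model: Model): string {
  return `<button data-msg="decrement">-</button>
          <span>${model.count}</span>
          <button data-msg="increment">+</button>`;
}

// Every event flows through update, and the page is re-rendered from the
// resulting state - no in-place patching of existing DOM fragments.
let model: Model = { count: 0 };
function dispatch(msg: Msg): void {
  model = update(model, msg);
  document.body.innerHTML = view(model);
}

document.body.addEventListener("click", (e) => {
  const kind = (e.target as HTMLElement).dataset.msg;
  if (kind === "increment") dispatch({ kind: "increment" });
  else if (kind === "decrement") dispatch({ kind: "decrement" });
});
```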
> isn’t that more about dividing an application into “well-bounded problems”,
My point was that when it comes to design, boundaries have an impact everywhere. They are a line in the sand where you have to watch carefully:
- What (shape of) data am I exposing to the outside that is going to limit what I can do on the inside in the future? (see the sketch below this list)
- What dependencies am I pulling in from the outside that are going to have a permanent or future impact on the inside?
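The first question in particular is concrete enough to sketch (assuming a hypothetical orders endpoint; OrderDto/OrderRecord are illustrative names): the shape exposed across the boundary is a commitment that outlives the internal representation.

```typescript
// Exposed at the boundary: clients will come to depend on exactly these fields.
interface OrderDto {
  id: string;
  total: number;    // exposing a pre-computed total means the server can never
  currency: string; // stop producing it without breaking consumers
}

// Internal model: free to change only as long as it can still produce an OrderDto.
interface OrderRecord {
  id: string;
  lineItems: { priceCents: number; quantity: number }[];
  currency: string;
}

function toDto(order: OrderRecord): OrderDto {
  const totalCents = order.lineItems.reduce(
    (sum, item) => sum + item.priceCents * item.quantity,
    0,
  );
  return { id: order.id, total: totalCents / 100, currency: order.currency };
}
```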
Nobody can deny that there are separate boundaries around the server and the browser. The same is true for the applications that live on them. Drawing a bigger boundary around both of them and calling it a web application doesn’t eradicate those boundaries - they are still relevant.
I don’t see much room for graceful degradation - it is still going to require some form of JavaScript to work. Given that “progressive enhancement” (which doesn’t seem to be that commonly practiced) requires a lot more work and planning (and therefore more complexity), I simply don’t see it happening. As it is, service providers seem to have accepted the client being DOA if JavaScript is disabled in the browser.
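For reference, the kind of extra work progressive enhancement asks for looks roughly like this (a sketch, assuming a plain HTML form with id “signup” that already works via a normal POST; the selectors and messages are made up):

```typescript
// The script is pure enhancement - without JavaScript the browser still submits
// the form as a regular POST and the server renders the response page.
const form = document.querySelector<HTMLFormElement>("form#signup");

if (form) {
  form.addEventListener("submit", async (e) => {
    e.preventDefault(); // only reached when JS runs; otherwise the plain POST happens
    const response = await fetch(form.action, {
      method: "POST",
      body: new FormData(form),
    });
    // Enhance in place instead of forcing a full page reload.
    const result = document.querySelector("#signup-result");
    if (result) {
      result.textContent = response.ok
        ? "Thanks for signing up!"
        : "Something went wrong - please try again.";
    }
  });
}
```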
In my view, tearing the page apart and constantly flinging bits of it over the network increases the number of moving parts and dependencies. I’m sensing the “shorter time-to-initial-success” effect at work here.