Escaping the SPA rabbit hole with modern Rails – Jorge Manrubia – Medium

Not sure if he was referring to this pattern but I’ve used this idea for “components” in EEx:

I’m pretty sure (though I haven’t tried it yet) that mixing in the ~E sigil (like LiveView will) instead of a bunch of files can make it cleaner too, if well organized.

This means using inline templates instead of EEx files? Fair enough. But what prevents us from doing that right now?

Yeah, it’s just a way to organize the templates and call/nest them more like “React” components, which is what I assumed @LostKobrakai meant. We can do that today. Looking at his issue in the Elixir repo, he’s having trouble passing two anonymous function callbacks representing “slots” to a “component” Elixir function; the syntax for that looks like it would need to wait for Elixir 1.9.
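For illustration, here’s a minimal sketch of that idea as it works today (the module and function names are invented): each “component” is just a function compiled from an EEx string, and “slots” are passed in as zero-arity anonymous functions:

```elixir
defmodule Components do
  require EEx

  # A "card" component with two slots (header and body), each supplied
  # as a zero-arity function so the caller controls the slot content.
  EEx.function_from_string(
    :def,
    :card,
    "<div class=\"card\"><header><%= header.() %></header><section><%= body.() %></section></div>",
    [:header, :body]
  )
end

html =
  Components.card(
    fn -> "Hello" end,
    fn -> "<em>World</em>" end
  )

IO.puts(html)
# => <div class="card"><header>Hello</header><section><em>World</em></section></div>
```

Nesting works the same way: a slot function can itself call another component, e.g. `fn -> Components.card(...) end`. Note that plain EEx does not HTML-escape output the way `Phoenix.HTML.Engine` does.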

Don’t get me wrong, I get it, but when it comes to performance concerns there are at least two shades:

  • premature optimization is the root of all evil - i.e. you can always optimize later for the largest gains
  • development approaches that optimize “productivity” in a way that is oblivious to the strengths and weaknesses of the delivery platform, making any sensible “after the fact” optimization attempt impossible.

We are prone to favour tools/frameworks that support our preferred way of thinking and working. And when we get what we want, we often fail to scrutinize whether the tool/framework provider has an approach that harmonizes with the underlying platform (which we don’t want to deal with directly) or whether they blatantly ignore it, just to make things work their way.

From the point of view of pieces like “Optimising the front end for the browser”, lots of JS frameworks aren’t working with the browser but fighting it.

Just look at the discussions that treat WebAssembly like it’s the second coming - it doesn’t matter what language you write your code in, WebAssembly isn’t going to alter the way browsers fundamentally work; their strengths and weaknesses will still be the same.

ActiveX, Java Applets, Silverlight, and Flash were initially successful in some situations but have all fallen by the wayside - so now we expect the JS- or WebAssembly-based version of that approach to succeed in the long term?


This is mostly true; I’m not arguing. It’s just that, more often than not, the devs who wanted to do the optimization/rework regardless of these hurdles were never given a time budget for it.

This is crushingly accurate. There’s only one cure: we must learn several different paradigms - say, imperative/OOP, FP, logic programming, flow-based programming, etc. The human brain gets really smart when it sees different approaches to the same problem; we practically grow new synapses in our sleep in such an environment.

The trouble I see - and I keep seeing it almost every day - is that many people just want to learn the few things that serve their day job well enough and then stop there forever.

In order for one to see different points of view and educate themselves, they actually have to… you know, see and learn them. And this assumes that people want that, which is very often not the case.

I see no way out of this. I guess group segregation is the only result of this situation, and that’s accidental emergent behaviour anyway.

This and your following JS-related statements are also very accurate. It’s OK when people want some creative freedom; it’s a very valid statement to say: “let us have some amazing transpiled language with awesome libs, and then we’ll think about how to compile that to maximally efficient and human-readable JS”. However, that last part never really happens in JS fantasy land. Are there any actual exceptions besides BuckleScript, by the way?

So I am OK with the frontenders fighting the browser if that is only the first stage of their war against the ancient cluttered mess that is JS. But it is always both the first and the last stage. And that’s not OK, because then you have a ton of other poor guys and girls who have to maintain their crap while they excitedly code the next dumb hyped “framework”.

(This is not unique to JS at all. It’s just that it has been very pronounced in the JS ecosystem lately. For example, back in the day some Java devs did their damnedest to invent mutable strings and ended up deeply regretting it.)

Creative freedom is good and must be encouraged. But this creativity in the frontend land only accumulates bills to no end – and the people responsible do their best to never pay any technical debt and just move on to their next toy. They are like spoiled children who get easily bored.



I was particularly thinking that browsers are optimized to create the DOM and layout by parsing HTML (and CSS), and that the intent of the DOM API was to alter minor parts of the established document - not to recreate the document (and layout) from the ground up in JS (or WebAssembly, for that matter; just because you can doesn’t mean you should).

It would be ironic if one day, if/when the dust settles, SPAs were declared a browser anti-pattern.

From a programmer’s point of view I see the appeal of SPAs, but lately I’ve been wondering whether that is just a brute-force way of dealing with browsers - one that precipitates all the complexity we are observing.


I think you are all discussing here apples vs oranges.

The SPA is simply way better in many cases. And it emerged naturally - because we all saw that having HTML rendered by a backend server and then enhancing it with jQuery or Backbone is not really a good long-term solution. Then component-based frameworks like Angular and React were created, and thanks to them building a rich UI application is no longer a pain.

Buuut. If you don’t need a big app, with all its interactivity and state kept on the client, then of course you don’t have to rewrite your app in any frontend framework. Using raw HTML and CSS is still fine then.

(Although frameworks like Next.js can really let you have your cake and eat it too: fast SSR and initial CSR, without sacrificing the ease of building a web app in a frontend component-based framework.)

P.S. Splitting your “app” into a backend server that holds all the business logic behind some APIs, and a web app (no matter whether client- or server-side rendered), is always a good idea, IMHO.


The thing I haven’t seen mentioned here is that SPAs are not just an end in themselves; they are also a byproduct of the growth of dual-interface web/API systems. SPAs let you build a single back-end and then use it both from the browser and programmatically. For many - but certainly not all - cases this is cheaper, simpler, and gives better quality, since you have a single set of interactions to maintain.

APIs developed this way also tend to be, in my experience, a bit more usable and comprehensive, since the browser gives a real client, real feedback, and real use cases early in a project, whereas just “imagining” an API from scratch can be a challenge.

Be careful not to end up with an API that is basically just the retrieve/update data operations per page for a logged-in end user, without any bulk retrieval or bulk update capabilities. In other words, an API that is tied to the current version of the GUI and doesn’t support anything else.

You’re absolutely right, the UI doesn’t always cover 100% of the possible client needs.

An SPA is a suitable approach under particular circumstances and requirements; it’s certainly not a way to squeeze every last bit of goodness out of the client’s browser. And for that matter, React’s existence hasn’t displaced Facebook’s use of BigPipe.

One potential problem with SPA is that it requires a good amount of investment up front, both in learning and adopting the technology and in building a product with it. The investment is significant enough that people will easily turn it into a golden hammer just to get some (repeat) return on their initial investment. If you “use what you know” and you always use an SPA stack you’ll likely introduce excess complexity somewhere.

I suspect that Google in 2010 was mainly targeting (“enterprisy”) desktops with high-bandwidth/low-latency network connections when they started developing AngularJS, so payload bloat wasn’t really an issue. But with the shift towards the mobile web, performance budgets made an already bad situation even worse, complexity-wise.

More complexity is added by code splitting to minimize the uncanny valley between first contentful paint and time to interactive.

Because of these drawbacks and the accrued accidental complexity people are actively pursuing leaner, more browser-streamlined and web-oriented alternatives. For example:

Now I’m not saying that any of these is a silver bullet or even the next big thing, but there are many reasons why SPAs are far from a default solution. No approach is perfect or universal, and usually preference should be given to approaches that take advantage of the strengths of the browser platform and the architecture of the web.

In this context there are some interesting, though admittedly biased, intercoolerjs blog posts:

which is fast ssr and initial csr

Not everybody is happy being coerced into running (and being locked into) JavaScript on the server side.

is always a good idea, IMHO.

It’s the boundary that is important, i.e. keeping the business logic separate from the web server front end. However, that doesn’t automatically imply that a full (public) web-facing API is necessary. That really only pays off:

  • if there is something else that is going to consume it (i.e. don’t fall for the “reusability” argument - YAGNI)
  • if it’s needed for some essential optimization.

This resembles Trailblazer Cells. We have a production dashboard that uses it in a Hanami application, but in the end I still prefer the usual Phoenix way, which is much closer to the traditional Rails way.


I believe in a Progressive Enhancement rebirth to face the new requirements of web access via mobile browsers; this is what we are seeing in the articles of Addy Osmani, such as this one:

Given this context, web components could offer an important piece of the puzzle.


Because of SPA implementations it is often judged that “PWA has nothing to do with Progressive Enhancement”.

Some of the marketing suggests otherwise - the claim is that the app can be progressively enhanced; PWA simply adds “the network is an enhancement”.


… and a day later …

FYI: The Lean Web video from Boston CSS


Have you ever considered that components may actually not be a great fit under all circumstances?

The possibility first occurred to me with Elm, where it’s a recurring theme to dissuade newcomers from “componentizing”, i.e. bundling a bit of model with a few functions and a bit of view.

Vue components are popular because they collect bits of HTML, CSS and JavaScript into one neat package that ostensibly is easier to reason about. All these bits and pieces are tightly coupled and we’re OK with that because it all just belongs together. And then Vue turns around and turns it all into JavaScript! As a consequence you are locked into JavaScript:

  • On the browser that means that component won’t get painted until that JavaScript code finally has a chance to run.
  • On the server side you are locked into JavaScript (most likely node.js) even for generating HTML and CSS.

So web components couldn’t be any better, right? They have “component” in their name. Turns out, it depends on how you use them:

The power of progressive enhancement (demo)

In the HTML you’ll find this:

  <h2 slot="trigger">
  <div slot="panel">
<script src="main.js" defer async></script>

For one, the JavaScript is loaded asynchronously so as not to block HTML/CSS rendering. But additionally, the markup that we would expect to be part of the component is actually part of the document. The initial state for the component is found inside the document rather than being supplied by some JavaScript-based initialization (JS state in connectedCallback, AJAX, etc.). Once the component is activated it takes control of that area of the document and does what it needs to do.

This is very different from the component thinking associated with SPA design because this design process is based on progressive enhancement.

With this approach you can use EEx (not node.js/JavaScript) to blast the necessary markup in place on the server side while shipping the interactivity with the JavaScript source. The downside is that you have split your “components” in two:

  • The EEx view that generates the server side mark up
  • The web component to provide interactivity/behaviour in the browser.

Now tooling could bring those two parts together again (keeping in mind that often “better tooling” may make some things easier but can drive up the complexity elsewhere).
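As a hypothetical sketch of the server-side half of such a split (the `<my-disclosure>` element name and the assigns are invented for illustration), an EEx template can emit the slot-bearing light-DOM markup that the browser-side web component later takes control of:

```elixir
# Hypothetical sketch: the EEx (server-side) half of a split "component".
# The template emits the initial state into the document itself; a
# browser-side custom element (<my-disclosure>) would later enhance it.
template = """
<my-disclosure>
  <h2 slot="trigger"><%= title %></h2>
  <div slot="panel"><%= body %></div>
</my-disclosure>
"""

html = EEx.eval_string(template, title: "Details", body: "Initial state lives in the document.")
IO.puts(html)
```

The browser-side half would be the custom element definition that reads this markup when it is upgraded, so the first paint never waits on JavaScript.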


It’s a good point about “tooling” adding complexity elsewhere. My experience with React, especially using UI component libraries like React Material, was an awkward amount of coupling between the responsive/JavaScript elements, state, and the UI/view elements. I’d like a UI library to take the minimal amount of effort to build things. The idea being that if the perceptibly easiest path is copy-paste-driven development using something like MUI or Bootstrap, we should at least make that workflow maintainable, because it will be done regardless of technical merits - especially as the front-end JavaScript world is a large source of new developers. If I build an app using a UI library and the ecosystem inevitably shifts, do I have to wait for my UI library’s developers to update their components, and then hope that the overindulgent abstraction doesn’t necessitate a heavy change cost?

Seems to me the solution is somewhere in the vague idea of a good separation between responsiveness, state management, and UI/presentation.

So it’s interesting to see Google using Web Components as wrappers for their own Material design libraries instead of re-implementing the whole thing for every JavaScript framework. Salesforce is also making similar moves to web components (albeit in typical Salesforce fashion by coupling everything to their infrastructure). From a naive perspective it looks like web standards are just integrating what we wanted from React/Vue anyway. How soon before the value of React and Vue is overshadowed by the simplicity of just using vanilla JavaScript and standards like web components?


I don’t think it’s going to work that way. I think it’s more along the lines of “You might not need React/Vue”.

  • Edge doesn’t even support custom elements yet.
  • While web components aren’t especially hard, I don’t think they’re easy either - the browser’s web API is rarely easy. Hence why lit-html exists.
  • Even at the lit-html level, using web components naively won’t automatically yield positive results. They are still JavaScript-based, so standalone implementations (rather than ones that cooperate with the document), where a component is responsible for putting the first paint on the screen, will still be less than optimal. What’s worse, certain implementation styles won’t work well with some frameworks, especially if their SSR technology is used.

It really comes down to knowing what the tradeoffs of each tool in question are and what tradeoffs provide the most value in any particular situation. For example: “The secret of the VDOM isn’t about performance, it’s about simplicity”. That simplicity comes at a cost of running diffs on the client device and on some clients that cost is inconsequential - on others, not so much. If you only have some interactivity a VDOM might be overkill. So Polymer’s PWA starter kit could be enough (though apparently going full Redux is OK; :man_shrugging:).

I’d like a UI library to take the minimal amount of effort to build things.

I think one of the issues is the perspective of native WYSIWYG UI design tools. Native UIs are collocated with the functionality they interact with, so seeing the widgets as a representation of the internal component seems natural. SPAs try to force that kind of environment on the browser by migrating application functionality to the client and then manipulating document fragments as part of a visual component - entirely in JavaScript. But browsers are optimized to deal with pages in layers: markup/visual design/interaction. Now, tools could transform a component design view into a page implementation, but what kind of design-time information do you have to provide to be able to generate semantic markup (in most cases tools don’t bother)?

So while UX design principles apply to both native and web UIs, the design process for a web application page seems radically different from that for a native UI, due to the functional characteristics of a browser (and the web in general).

Seems to me the solution is somewhere in the vague idea of a good separation between responsiveness, state management, and UI/presentation.

There is no one solution. Each problem has a different optimal solution. In an effort to adopt the one approach that covers the most scenarios a large part of the industry has landed on SPAs. But flexibility comes at a complexity cost (among other tradeoffs) and under-utilized flexibility is a waste. There is a full spectrum of rendering options but there is always the temptation to use whatever seems the most flexible - “just in case”.


Well, Edge is switching to the Chromium engine soon, so… that will soon be fixed. ^.^


Progressive Performance (Chrome Dev Summit 2016)

An entertaining 2016 talk about the constraints that mobile devices operate under to “deliver the web experience”. They are fighting “the laws of physics” every step of the way.

Improve a phone’s performance - by running it on an ice pack.

The 2018 version of the talk isn’t as entertaining but explains why matters aren’t going to improve much in the near future - largely because the performance of the average phone isn’t actually increasing that much due to the economics of silicon foundries:

The New Mobile Reality - Alex Russell (Google)