Escaping the SPA rabbit hole with modern Rails – Jorge Manrubia – Medium

That is correct: almost any JS “runner” / “packer” – or whatever the snake-oil sales term of the month is – outputs a source map, but people often decide to “optimize” and end up shooting themselves in the foot when they can’t debug a problem on the prod website. Teams of 10+ people scrambling to modify the prod deployment pipeline so the source map can be included are quite the funny thing to watch. :003:

We’re entirely in agreement here. There are no universal rules – or at least there are very few of them.

I personally am not a fan, but I do see the benefits; akin to FP in general and Elixir in particular, you get isolated ids / classes that don’t interfere with each other. That reduces the possibility of spaghetti dependencies, which is always a good thing. I am just a bit concerned about the size of the resulting concatenated and minified CSS file, is all. But I’ll immediately agree that having a slightly bigger CSS file in exchange for fewer layout problems to debug is a huge win.

I cannot explain it, but I’d theorize that people believe the ecosystem around React is complex enough already and are afraid their job is getting that much harder? I might be hugely off the mark though.

Yep! :023:

Lol, at work I use source maps with impunity… ^.^;

Though nicely, the JavaScript is something I almost never have to debug; it’s usually Elixir (woo, production tracing!) or CSS. Most of my JavaScript is generated from OCaml though, so… ^.^;

Nearly every shred of my CSS is custom. It was originally a modified Surface library, but by now nearly every selector has been touched by me to unify things and get things to work across IE11 (and IE10 a lot of the time!) up to Firefox and Chrome and Safari. A set of Elixir helpers to generate the appropriate EEx (my Surface module) makes sure things stay in sync. :slight_smile:

Eh, developer happiness and an actual delivered product are not mutually exclusive (although they are often at odds for some reason).

Many take developer happiness to extremes, so it really depends on context and on what exactly is meant when we discuss it – in my case, I draw the line at being able to concentrate on the business features and technical stability (performance comes last) and not being annoyed by minutiae at every step. I don’t care about what physical games can be played at an office or that we have discounts for lunch at a local restaurant. That’s not developer happiness.

Developer happiness to me is when I don’t have to type 50 lines of code and imports to be able to do a list filter, map and reduce – as I was doing in Java way back. Developer happiness is creating 3 small functions with my custom filtering / mapping / reducing logic and just calling Enum.filter(...) |> Enum.map_reduce(...).
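
To make that concrete, here is a minimal sketch of such a pipeline (the orders/totals domain is made up purely for illustration):

defmodule Report do
  # Three small, single-purpose functions with the custom logic…
  defp paid?(order), do: order.status == :paid
  defp total(order), do: order.total
  defp add(total, acc), do: {total, acc + total}

  # …and one pipeline that composes them.
  def paid_totals(orders) do
    orders
    |> Enum.filter(&paid?/1)
    |> Enum.map(&total/1)
    |> Enum.map_reduce(0, &add/2)
  end
end

# Report.paid_totals([%{status: :paid, total: 5}, %{status: :open, total: 9}])
# #=> {[5], 5}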

So I would urge us not to go to the other extreme, which would be “get the job done no matter how awful your work will become due to the chosen tech for the project”. That’s leading us nowhere as well.

1 Like

Can you explain a bit more about what components are in EEx?

Not sure if he was referring to this pattern but I’ve used this idea for “components” in EEx: https://blog.danielberkompas.com/2017/01/17/reusable-templates-in-phoenix/
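
For reference, the core of that pattern is roughly this (a sketch, not the post’s exact code; MyAppWeb.SharedView and panel.html are illustrative names):

defmodule MyAppWeb.SharedView do
  use MyAppWeb, :view

  # Render panel.html, handing it the given block as @inner_content.
  def panel(assigns \\ %{}, do: content) do
    render("panel.html", Map.put(assigns, :inner_content, content))
  end
end

Then, from any template:

<%= MyAppWeb.SharedView.panel %{title: "Details"} do %>
  <p>This markup becomes @inner_content inside panel.html.</p>
<% end %>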

I’m pretty sure (but haven’t tried yet) that mixing in the ~E sigil (like LiveView will) instead of a bunch of files can make it cleaner too, if well organized.
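
For example, a tiny inline “component” might look like this (a sketch; ~E comes from Phoenix.HTML, and badge/1 is an illustrative name):

defmodule MyAppWeb.Components do
  import Phoenix.HTML

  # Compile-time EEx via the sigil - no separate .eex file needed.
  def badge(label) do
    ~E"""
    <span class="badge"><%= label %></span>
    """
  end
end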

This means using inline templates instead of EEx files? Fair enough. But what prevents us from doing that right now?

Yeah, it’s just a way to organize the templates and call/nest them more like “React” components, which is what I assumed @LostKobrakai meant. We can do that today. Looking at his issue in the Elixir repo, he’s having trouble passing 2 anonymous function callbacks representing “slots” to a “component” Elixir function, with syntax that looks like it would need to wait for Elixir 1.9.
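
In the meantime, passing the slots as plain anonymous functions works today; a sketch (assuming Phoenix.HTML’s ~E sigil is in scope; all names are illustrative):

# Two "slots" passed as zero-arity callbacks and invoked in the template.
def panel_with_slots(trigger, body) do
  ~E"""
  <div class="panel">
    <div class="trigger"><%= trigger.() %></div>
    <div class="body"><%= body.() %></div>
  </div>
  """
end

# panel_with_slots(fn -> ~E"<h2>Show more</h2>" end,
#                  fn -> ~E"<p>The details…</p>" end)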

Don’t get me wrong, I get it, but when it comes to performance concerns there are at least two shades:

  • premature optimization is the root of all evil - i.e. you can always optimize later for the largest gains
  • development approaches that optimize “productivity” in a way that is oblivious to the strengths and weaknesses of the delivery platform, making any sensible “after the fact” optimization attempt impossible.

We are prone to favour tools/frameworks that support our preferred way of thinking and working. And when we get what we want, we often fail to scrutinize whether the tool/framework provider has an approach that harmonizes with the underlying platform (which we don’t want to deal with directly) or whether they blatantly ignore it, just to make things work their way.

From the point of view of pieces like “Optimising the front end for the browser”, lots of JS frameworks aren’t working with the browser but are fighting it.

Just look at the discussions that treat WebAssembly like it’s the second coming – it doesn’t matter what language you write your code in, WebAssembly isn’t going to alter the way browsers fundamentally work; their strengths and weaknesses will still be the same.

ActiveX, Java Applets, Silverlight, and Flash were initially successful in some situations but have all fallen by the wayside – so now we expect the JS- or WebAssembly-based version of that approach to be successful in the long term?

3 Likes

This is mostly true, I am not arguing. It’s just that, more often than not, the devs who wanted to do the optimization / rework regardless of these hurdles were never given a time budget for it.

This is crushingly accurate. There’s only one cure – we must learn several different paradigms: say, imperative/OOP, FP, logic programming, flow-based programming, etc. The human brain gets really smart when it sees different approaches to the same problem; we practically grow new synapses while we sleep when we are in such an environment.

The trouble I see – and keep seeing it almost every day – is that many people just want to learn several things that are serving their day job well enough and then stop there forever.

In order for one to see different points of view and educate themselves they actually have to… you know, see and learn them. And this is assuming that people want that. Which is very often wrong.

I see no way out of this. I guess group segregation is the only result of this situation, and that’s accidental emergent behaviour anyway.

This and your following JS-related statements are also very accurate. It’s OK when people want some creative freedom; it’s a very valid statement to say: “let us have some amazing transpiled language with awesome libs, and then we’ll think about how to compile that to maximally efficient and human-readable JS”. However, the last part never really happens in JS fantasy land. Are there any actual exceptions besides BuckleScript, by the way?

So I am OK with the frontenders fighting the browser if that is only the first stage of their war against the ancient cluttered mess that JS is. But it is always both the first and the last stage. And that’s not OK, because then you have a ton of other poor guys and girls who have to maintain their crap while they excitedly code the next dumb hyped “framework”.

(This is not unique to JS at all. It’s just that it is very pronounced in the JS ecosystem lately. F.ex. back in the day some Java devs did their damnedest to invent mutable strings and ended up deeply regretting it.)


Creative freedom is good and must be encouraged. But this creativity in frontend land only accumulates bills to no end – and the people responsible do their best to never pay down any technical debt and just move on to their next toy. They are like spoiled children who get bored easily.

</random-rambling>

3 Likes

I was particularly thinking of the fact that browsers are optimized to create the DOM and layout by parsing HTML (and CSS), and that the intent of the DOM API was to alter minor parts of the established document – not to recreate the document (and layout) from the ground up in JS (or WebAssembly for that matter; just because you can, doesn’t mean you should).

It would be ironic if one day, if/when the dust settles, SPAs were declared a browser anti-pattern.

From a programmer’s point of view I see the appeal of SPAs, but lately I’ve been wondering whether they are just a brute-force way of dealing with browsers – precipitating all the complexity that we are observing.

5 Likes

I think you are all comparing apples to oranges here.

SPAs are simply way better in many cases. And they emerged naturally – because we all saw that having HTML rendered by the backend server and then enhancing it with jQuery or Backbone is not really a good long-term solution. Then component-based frameworks like Angular and React were created – and thanks to them, building a rich UI application is no longer a pain.

Buuut. If you don’t need a big app, with all its interactivity and state kept on the client, then of course you don’t have to rewrite your app in any frontend framework. Using raw HTML and CSS is still fine then.

(Although frameworks like Next.js can really let you have your cake and eat it too: fast SSR and initial CSR, without sacrificing the ease of building a web app in a frontend component-based framework.)

P.S. Splitting your “app” into a backend server that holds all the business logic behind some APIs, and a web app (no matter whether client- or server-side rendered), is always a good idea, IMHO.

3 Likes

The thing I haven’t seen mentioned here is that SPAs are not just an end in themselves; they are also a byproduct of the growth of dual-interface web/API systems. SPAs let you build a single back-end and then use it both from the browser and programmatically. For many – but certainly not all – cases, this is cheaper, simpler, and gives better quality, since you have a single set of interactions to maintain.

APIs developed this way also tend to be, in my experience, a bit more usable and comprehensive, since the browser gives a real client, real feedback, and real use cases early in a project, whereas just “imagining” an API from scratch can be a challenge.

Be careful not to end up with an API that is basically the retrieve/update data operations per page for a logged-in end user, without any bulk retrieval or bulk update capabilities. In other words, an API tied to the current version of the GUI that doesn’t support anything else.

You’re absolutely right, the UI doesn’t always cover 100% of the possible client needs.

SPAs are suitable under some circumstances, when faced with particular requirements – they’re certainly not an approach to squeeze every last bit of goodness out of the client’s browser. And for that matter, React’s existence hasn’t displaced Facebook’s use of BigPipe.

One potential problem with SPAs is that they require a good amount of investment up front, both in learning and adopting the technology and in building a product with it. The investment is significant enough that people will easily turn it into a golden hammer just to get some (repeat) return on their initial investment. If you “use what you know” and you always use an SPA stack, you’ll likely introduce excess complexity somewhere.

I suspect that Google in 2010 was mainly targeting (“enterprisy”) desktops with high-bandwidth/low-latency network connections when it started developing AngularJS, so payload bloat wasn’t really an issue. But with the shift towards the mobile web, performance budgets made an already bad situation even worse, complexity-wise.

More complexity is added by code splitting to minimize the uncanny valley between first contentful paint and time to interactive.

Because of these drawbacks and the accrued accidental complexity, people are actively pursuing leaner, more browser-streamlined and web-oriented alternatives. For example:

Now I’m not saying that any of these are a silver bullet or even the next big thing, but there are many reasons why SPAs are far from a default solution. No approach is perfect or universal, and preference should usually be given to approaches that take advantage of the strengths of the browser platform and the architecture of the web.

In this context there are some interesting, though admittedly biased, intercoolerjs blog posts:

fast SSR and initial CSR

Not everybody is happy being coerced into running (and being locked into) JavaScript on the server side.

is always a good idea, IMHO.

It’s the boundary that is important, i.e. keeping the business logic separate from the web server front end. However, that doesn’t automatically imply that a full (public) web-facing API is necessary. That really only pays off:

  • if there is something else that is going to consume it (i.e. don’t fall for the “reusability” argument - YAGNI)
  • if it’s needed for some essential optimization.
9 Likes

This resembles Trailblazer Cells. We have a production dashboard that uses it in a Hanami application, but in the end I still prefer the usual Phoenix way, which is much like the traditional Rails way.

1 Like

I believe in a rebirth of Progressive Enhancement in the face of the new requirements of web access via mobile browsers; this is what we are seeing in the articles of Addy Osmani, such as this one.

Given this context, web components could offer an important piece of the puzzle.

1 Like

Because of SPA implementations it is often judged that “PWA has nothing to do with Progressive Enhancement”.

Some of the marketing suggests otherwise - the claim is that the app can be progressively enhanced; PWA simply adds “the network is an enhancement”.

2 Likes

… and a day later …

FYI: The Lean Web video from Boston CSS

2 Likes

Have you ever considered that components may actually not be a great fit under all circumstances?

The possibility first occurred to me with Elm, where it’s a recurring theme to dissuade newcomers from “componentizing”, i.e. bundling a bit of model with a few functions and a bit of view.

Vue components are popular because they collect bits of HTML, CSS and JavaScript into one neat package that ostensibly is easier to reason about. All these bits and pieces are tightly coupled and we’re OK with that because it all just belongs together. And then Vue turns around and turns it all into JavaScript! As a consequence you are locked into JavaScript:

  • In the browser, that means the component won’t get painted until that JavaScript code finally has a chance to run.
  • On the server, you are locked into JavaScript (most likely node.js) even for generating the HTML and CSS.

So web components couldn’t be any better, right? They have “component” in their name. Turns out, it depends on how you use them:

The power of progressive enhancement (demo)

In the HTML you’ll find this:

<toggle-panel>
  <h2 slot="trigger">
     <span>...blah...</span>
  </h2>
  <div slot="panel">
    <p>...blahBlah...</p>
    <p>...blahBlahBlah...</p>
    <p>...blahBlahBlahBlah...</p>
  </div>
</toggle-panel>
...
<script src="main.js" defer async></script>

For one, the JavaScript is loaded asynchronously so as not to block HTML/CSS rendering. But additionally, the markup that we would expect to be part of the component is actually part of the document. The initial state for the component is found inside the document rather than being supplied by some JavaScript-based initialization (JS state in connectedCallback, AJAX, etc.). Once the component is activated, it takes control of that area of the document and does what it needs to do.

This is very different from the component thinking associated with SPA design because this design process is based on progressive enhancement.

With this approach you can use EEx (not node.js/JavaScript) to blast the necessary markup in place on the server side while shipping the interactivity with the JavaScript source. The downside is that you have split your “components” in two (see the sketch after this list):

  • The EEx view that generates the server-side markup
  • The web component to provide interactivity/behaviour in the browser.
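
Here is a sketch of the server-side half in Elixir (assuming Phoenix.HTML’s ~E sigil is in scope; the helper name is illustrative) – it emits the same <toggle-panel> markup shown above, so the document carries the initial state and the component’s JavaScript only adds behaviour:

# EEx generates the markup up front; the web component upgrades it later.
def toggle_panel(trigger, panel) do
  ~E"""
  <toggle-panel>
    <h2 slot="trigger"><%= trigger %></h2>
    <div slot="panel"><%= panel %></div>
  </toggle-panel>
  """
end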

Now tooling could bring those two parts together again (keeping in mind that “better tooling” often makes some things easier but can drive up the complexity elsewhere).

5 Likes