Are browser-based client applications important?

Prologue: I’m making this a separate topic because I don’t want to be responsible for diverting any more of Chris’s valuable time - especially as the core notion of this topic isn’t actually about his work.


Conclusion for the impatient:

As mobile becomes the primary means of consuming web (i.e. internet-accessible) content, it is becoming increasingly important for businesses and organizations to unify the means of deploying that content. For the time being “native apps” are still seen as the superior delivery platform.

However, over the long haul supporting two separate mobile platforms in addition to the desktop browser is only affordable for more affluent organizations - and likely wasteful even for them. For a minority of use cases “native apps” will always be superior - but for the majority of cases it is more important to have a more cost-effective, unified solution, for which currently the “browser-based client application” is the best candidate. That is not to say that all use cases require a fully featured rich UI - in many cases web sites designed “mobile first” can be sufficient.

It is still imperative that the capabilities of browser-based client applications are developed even further so that the gap to “native apps” can be narrowed. That way the consumption platform for the majority of use cases can be unified. To that end browsers already have access to the Push and Geolocation APIs, and others will likely be added in the future (Geofencing has been abandoned for the time being).
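For illustration, feature-detecting those APIs before relying on them might look something like this (a minimal sketch; permission handling and the server round trip are left out):

```javascript
// Minimal sketch: only enhance when the APIs are actually present.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    pos => console.log(pos.coords.latitude, pos.coords.longitude),
    err => console.warn('position unavailable:', err.message)
  );
}

if ('serviceWorker' in navigator && 'PushManager' in window) {
  // Push requires a registered service worker and user permission.
  navigator.serviceWorker.ready
    .then(reg => reg.pushManager.getSubscription())
    .then(sub => {
      // `sub` is null until the user has subscribed;
      // otherwise it would be sent to the application server.
    });
}
```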

Essentially the mobile web needs to get to the point where it is no longer perceived as playing second fiddle to native apps without resorting to “hybrid” approaches. (And really that is what PWAs seem to be about).


Long-winded lead-up:

The avalanche has already started. It is too late for the pebbles to vote.

My intent wasn’t to imply that whatever the “big boys” are pushing is “the best”. Far from it - they have enough resources to push sub-optimal solutions to ridiculous extremes. But I’m also not blind to the fact that technologies promoted and endorsed by them can get a foot in the door with a significant segment of the rest of the industry (e.g. Go). Typically the only way they can be countered is with “disruptive” technologies - simply ignoring what they are up to isn’t going to have beneficial consequences, though it is important not to succumb to their games of fire and motion.

This is what I find most disconcerting - whiplash technology choices as a response to the JS ecosystem as it exists today. The browser is an essential part of the human-facing web experience and JS is as much a part of that as HTML/CSS. So it seems incredibly counterproductive to develop or even nurse an aversion to any of the three. And I argue that (vanilla) ECMAScript has improved a lot in recent years (… still not perfect, nothing ever will be).

Now a slight change in direction:

When somebody calls themselves a “Rails developer” or a “PHP developer”, is that a bad thing? (Deliberately sticking to “web technologies” here.) In itself it isn’t, especially as long as they find themselves in situations where they are handed problems that these technologies were designed to solve, and as long as they stay within the design constraints of the tool. Where it gets interesting is when the tool is an ill fit for a particular problem - what will this developer do?

  • Distort the problem to fit the tool?
  • Declare it can’t be done because the tool doesn’t fit the problem?
  • Adapt and pull in technologies from outside of the purview of “X developer” - possibly stepping outside of “X”?

To me, it’s incongruous for anyone to work with the “human-facing web” and to somehow expect to get away without JavaScript - I don’t care what “X developer” label they adorn themselves with.

Similarly, we have somehow arrived at a situation where it seems justifiable to know only one way of rendering something in a browser - it seems that while one would expect an “(insert your favourite framework) developer” to have some competence in JavaScript, also expecting an in-depth understanding of HTML, CSS, and browser APIs (and of how a browser functions in general) is pushing it.

What I’m getting at is that there is a full range of implementation choices when it comes to “rendering in the browser”, from static pages to SPAs. But somehow the basic static page gets mostly ignored (because it’s static), which means that people immediately jump onto some batteries-included JS framework. So time is pumped primarily into tackling the framework’s learning curve while investment into learning the browser fundamentals is sacrificed.

When it comes to web technology I believe that it is necessary for implementers to have a better awareness of a broader spectrum of the different types of browser-based solutions - most of which involve JavaScript of some kind. That doesn’t mean knowing every SPA framework under the sun but knowing how to achieve “just a little bit of interactivity” with a modest amount of HTML/CSS(/JS).
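For example, “just a little bit of interactivity” can be a handful of lines of vanilla JS - a sketch, assuming a hypothetical data-toggle attribute convention:

```javascript
// Progressive enhancement: wire up any element marked data-toggle
// to show/hide the element whose id it names.
document.querySelectorAll('[data-toggle]').forEach(button => {
  const target = document.getElementById(button.dataset.toggle);
  button.addEventListener('click', () => {
    target.hidden = !target.hidden;
    button.setAttribute('aria-expanded', String(!target.hidden));
  });
});
```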

Nicholas Zakas: Enough with the JavaScript Already (2013) (slides, GitHub)

That being said, do I think that browser-based client applications are important?

Yes. But that doesn’t mean everything needs to be an SPA. I’ll be the first to admit that there seems to be a void right now for anyone needing a “modestly interactive” browser-based solution. I think the JS ecosystem is suffering from “libraries run amok” - i.e. mindless library creation and use. Somehow this has led to the acceptance of bloat and complexity - often without much value in return. But to denounce (and ignore) the importance of JavaScript because of the current state of its ecosystem is essentially “throwing out the baby with the bathwater”.

« inject above conclusion here »

By extension anyone who is betting on the web but is unwilling to “play inside the browser” (which for the time being invariably leads to some use of ECMAScript somewhere) is painting themselves (and their users/customers/clients) into a corner.

10 Likes

I certainly believe browser based client applications are important, and will continue to be for some time. The web developer community mind share alone is enough to make them so, even if several of the larger libraries weren’t backed by such large companies. As new standards and browser features evolve, these libraries will be the way many developers and companies make use of them. Partially offline applications will likely only get better and more important, and mobile hardware is increasingly being exposed to mobile browsers.

What has me excited about LiveView is that it is a new tool at a different point of the spectrum between static content and a full SPA, in the void you mentioned. I’m excited about WebAssembly for similar reasons, though I see it more as an enabler for new tools.

As a web developer, I feel that not knowing JavaScript and at least one large library (or several small ones) is equivalent to not knowing SQL, or basic networking, or basic deployment and hosting options. Are there products and user experiences you can build without really knowing those? Sure. Would some knowledge in those areas make them better or help build them faster? Almost certainly.

The other thing I didn’t see in the previous thread, but which seemed obvious to me, is that all of these can still be mixed. There is no reason the logic backing a REST or GraphQL backend can’t be used server side to drive LiveView. There is also no reason a page can’t have some dynamic parts driven by LiveView and other parts driven by React or Vue. In fact, I can think of several applications I’ve worked on where such a blend would have been ideal.

That was a long-winded way of saying I think I agree, and I’m hoping opinions on this thread will tease even more nuance into the discussion.

1 Like

Interesting thread 🙂

I actually think the gap is going to widen.

Why? Custom hardware.

Not sure if you have been following Apple recently, but they have been adding more and more custom hardware to their platforms, quite deliberately, I imagine, to increasingly separate them from everything else out there. I think they said their cameras can now perform 1 trillion calculations per second, and they demoed some really cool ‘computational photography’. They now have a dedicated chip for machine learning, which can do ‘live’ machine learning (on the fly). Many of those abilities and much of that hardware can be tapped into by developers. The kinds of apps possible on such devices are becoming less likely to be seen on your standard PC.

So I think we’re going to see a split - apps suited to ‘browsers’ and others suited to custom hardware within devices like iPhones and iPads. It just happens that, imo, a significant proportion of apps suited to browsers are exactly the kind that will benefit from technologies like Drab and LiveView (and the great thing for us, is Elixir and Phoenix can power the backends for all those other apps too).

Why do we have to choose? Each has their benefits, and I can’t see either dying anytime soon tbh.

1 Like

I think we’re in agreement here on most points.

I really just see the LiveView/Drab approach as a simple means of adding more polish for the static-is-enough or sprinkling-of-JS use case which I personally think is a lot more prevalent than people want to admit.

I’m not denouncing JS or SPA where they are warranted at all…but I am saying that they aren’t warranted in many places where they are currently used.

When you talk about disruption, I agree with you 100% and that’s one of the reasons I’m excited about the potential here - because I believe there is a very significant unmet demand here. From spending time almost daily on HN and other dev forums, I can tell you that there are a whole lot of people who want this if it works. That will be disruptive.

Just to frame it another way - the point of Phoenix is to do incredibly interactive applications. A lot of that comes from what you can do on the server, but the sheer volume, speed, stability and scalability of client connections is the main selling point. If you are building a site that uses websockets and you aren’t using Phoenix on the backend, you are already making things harder on yourself.
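For what it’s worth, the client side of that is pretty small too - a minimal phoenix.js sketch (the endpoint, topic, and event names here are illustrative):

```javascript
import { Socket } from "phoenix";

// Connect and join a channel; Phoenix multiplexes many channels
// over a single websocket connection.
const socket = new Socket("/socket", { params: { token: window.userToken } });
socket.connect();

const channel = socket.channel("room:lobby", {});
channel.on("new_msg", payload => console.log(payload.body));
channel.join()
  .receive("ok", () => console.log("joined"))
  .receive("error", resp => console.warn("unable to join", resp));
```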

LiveView/Drab just makes that rich client experience available for projects that normally would not need it, call for it or have the budget for it. That’s how I see it at least.

2 Likes

Whether you are a frontend or a backend developer you need to know something about “the other side”, i.e. a frontend developer should know something about user auth and databases, and a backend developer should know something about CSS and JS. But I don’t believe in the full-stack developer ideal. I think the most productive teams are the ones where there are dedicated frontend and backend developers who master their languages/frameworks in depth. To achieve that kind of mastery of a technology there is a limit to how much you can choose to focus on.

For solo projects or smaller teams where you cannot divide the work by specialization, I really think there is a place for tools that hide complexity behind some abstraction. You can do that either way: “hiding” the backend or the frontend. Backend-as-a-service solutions like Firebase hide a lot of backend complexity from people who are primarily frontend devs and increase their productivity by letting them focus on what they’re best at. I see Phoenix’s LiveView and Rails’ Turbolinks as attempts at the same in the other direction: hiding frontend complexity from people who are primarily backend devs.

If you have the team size and the resources you don’t need these kinds of abstractions and the cost that comes with them. But if you don’t I think they can be a huge productivity gain.

2 Likes

I’m not saying that native apps are superfluous - they have their place, especially when it comes to exploiting platform specific features to satisfy some narrow market. That is not the space I’m talking about. I’m talking about use cases that have historically targeted the desktop browser but now find themselves in the position of being (most) often visited from mobile platforms.

apps suited to browsers are exactly the kind that will benefit

Now, just to be clear: in the context of this topic I use “app” to mean an application targeting primarily the mobile market, or at the very least one where the mobile platform represents a significant portion of the market.

For the time being, mobile connections have to be expected - apart from being metered (i.e. pay-per-usage-volume) - to be at times unreliable and capable of supporting only low bandwidth.

Why do we have to choose? Each has their benefits

I’m approaching this from the perspective of the organization footing the bill for a use case where the mobile platforms represent a significant portion of the target market. In the long run they don’t want to be paying for developing and maintaining browser, iOS, and Android versions of their solution.

For example, I have one of those loyalty program apps on my phone which displays a scancode for my account. For the time being the company maintains a separate site for desktop use and separate iOS and Android apps for mobile. To a certain degree this trifurcation is motivated by status because providing native apps is still seen as a superior form of customer service (via UX). But there is another issue. Inside the store where I use the app I can almost never get a signal.

Technically the display requirements and functionality of the app are something that could easily be implemented via web technology - as long as you have a connection. Fortunately the native app caches my scancode so I can get my loyalty program benefits even without a connection.

But the fact is that these days you can have exactly the same functionality from a browser-based application. A web page’s service worker can cache offline assets while it has a connection, preparing to serve them when access to the home server is absent.
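A minimal service worker sketch of that cache-then-serve behaviour (the cache name and asset list are illustrative; the scancode asset is my stand-in for the loyalty card):

```javascript
// sw.js -- cache a few assets at install time, serve them cache-first.
const CACHE = 'loyalty-v1';
const ASSETS = ['/', '/styles.css', '/app.js', '/scancode.svg'];

self.addEventListener('install', event => {
  // Populate the cache while we still have a connection.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```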

Then there is the other issue:

Building Progressive Web Apps: Chapter 1; The Current Mobile Landscape; How We Behave on Mobile

Majority of U.S. consumers still download zero apps per month, says comScore

The 2017 U.S. Mobile App Report

So it seems due to the mobile trend:

The issue is that we are no longer in the pre-2007 landscape (iPhone launch).

The influence of the mobile market has unbalanced that choice towards hiding the backend via service providers - there really isn’t anything server-based that can counter that to an equivalent extent. So when minimizing the total project skill set for a single developer or a small team the trend is to favour front end skills - as much as that may irk us back end enthusiasts.

So the only choice is to embrace the Generalizing Specialists mindset and acquire a broad set of general front end skills.

First we got “mobile-first” forcing an adjustment to page layout. Now offline-first is “a thing”, forcing an adjustment in the design of page dynamics.

From bradfrost.com

2 Likes

lie-fi
😁

Apparently that term goes back to at least 2008.

Jake Archibald: Instant Loading: Building offline-first Progressive Web Apps - Google I/O 2016 (more about service worker in particular).

1 Like

JavaScript Concurrency and the DOM - Kristofer Baxter and Malte Ubl

There is an interesting segment describing how the performance gap between mobile phones at extreme ends of the spectrum will keep growing primarily because the low end is simply getting cheaper rather than more performant.

It also references a project called clooney:

Now this has nothing to do with lightweight processes, as it simply wraps Web Workers, which add a bit of overhead (a bare-bones sketch of the underlying message passing follows the list):

  • startup 10ms
  • termination 5ms
  • thread hop delay 1ms - 15ms
  • 4MB V8 isolate
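As promised, plain Web Worker message passing - the interface clooney hides behind a friendlier one (file names are illustrative):

```javascript
// main.js -- spawn a worker and exchange messages with it.
const worker = new Worker('worker.js');
worker.onmessage = event => console.log('sum:', event.data);
worker.postMessage({ numbers: [1, 2, 3, 4] });
```

```javascript
// worker.js -- runs off the main thread.
self.onmessage = event => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  self.postMessage(sum);
};
```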

However clooney was likely a precursor to this project

which was developed as a basis for this talk about using the actor model in the browser:

Architecting Web Apps - Lights, Camera, Action! (Paul Lewis, Surma)

The talk cross references another one:
A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread (Shubhie Panicker, Jason Miller)

FYI: developit/Jason Miller is the maintainer of Preact.

Bottom line: Google is looking to get a browser-based scheduler into the web standards.
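Until something like that lands, the closest standardized stand-in is cooperative chunking via requestIdleCallback - a rough sketch (the task queue is hypothetical):

```javascript
// Cooperative scheduling on the main thread: run queued units of work
// only while the browser reports idle time. `tasks` is a hypothetical
// queue of small functions.
const tasks = [];

function processTasks(deadline) {
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    tasks.shift()(); // run one small unit of work
  }
  if (tasks.length > 0) {
    requestIdleCallback(processTasks); // yield, then continue later
  }
}

requestIdleCallback(processTasks);
```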

Bonus: Comparing Rendering strategies: Rendering on the Web

1 Like

FYI:

Caveat: Unfortunately the course focuses on the App Shell PWA architecture (i.e. SPA) rather than on a progressively enhanced page with the network as a progressive enhancement, which Alex Russell seems to be pushing for (the course is 2 years old but seems to be regularly updated).

Edit: The course has been deprecated and replaced:

Aside: The “Developer Experience” Bait-and-Switch

JavaScript is the web’s CO2. We need some of it, but too much puts the entire ecosystem at risk. Those who emit the most are furthest from suffering the consequences — until the ecosystem collapses. The web will not succeed in the markets and form-factors where computing is headed unless we get JS emissions under control.


Beyond single-page apps: alternative architectures for your PWA (Google I/O '18)

The StackOverflow sample PWA doesn’t seem to be using a rendering framework, though it uses Workbox to simplify working with the non-DOM browser APIs (it does, however, use server-side HTML partials to update the page in the browser rather than reloading the entire page).
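The partial-update pattern itself is simple; a hypothetical sketch (the endpoint and element id are mine, not taken from the sample app):

```javascript
// Fetch a server-rendered HTML fragment and swap it into the page
// instead of reloading the page or re-rendering client-side.
async function refreshQuestionList() {
  const response = await fetch('/questions/partial');
  document.getElementById('question-list').innerHTML =
    await response.text();
}
```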

Note: this should be treated as product placement for Google’s (web) Firebase and Cloud Functions for Firebase.


PWA starter kit seems to have seen a lot of activity before the presentation in May 2018.

So whether it’s going to be maintained in the long run is anybody’s guess. As such it serves as a vehicle to demonstrate how Polymer’s lit-html web components can be used to build a PWA (which can optionally use Redux for state management).
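To give a flavour of lit-html itself, a minimal sketch (assuming lit-html’s html and render exports; the counter example is mine, not from the starter kit):

```javascript
import { html, render } from 'lit-html';

// A template function plus render(); lit-html only touches the DOM
// parts bound to values that actually changed.
const view = count => html`
  <button @click=${() => update(count + 1)}>
    Clicked ${count} times
  </button>
`;

function update(count) {
  render(view(count), document.body);
}

update(0);
```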

The Redux option seems a bit odd given You Might Not Need Redux (2016) but it probably has to do with “tool addiction”.

There are lightweight alternatives like redux-zero (or unistore (original) with lit-html).


Opinionated:

2 Likes

Squoosh is a tech demo PWA that can be accessed at

https://squoosh.app/

It allows for the visual comparison of the effects of different compression algorithms and settings.

Complex JS-heavy Web Apps, Avoiding the Slow (Chrome Dev Summit 2018)

While the design of this app focused on being “lean” for the user’s benefit, the development effort required seems much less lean.

Also:

  • Restaurant analogy to justify code splitting 22:16 (though applications have always used splash screens to distract the user from the fact that the application hasn’t finished loading yet).
  • Obligatory swipe at React 25:04
  • Favourable mention of rollup.js 27:09

Squoosh implements access to (Android) Web Share Target API:
https://youtu.be/lNOP5dcLZF4

4 Likes

Google sponsored free Udacity course:

As it states, it focuses on ServiceWorker and IndexedDB.

I suspect they would be using Jake Archibald’s idb because using the bare-bones event-based interface can only be described as maximally unwieldy (clearly the IndexedDB spec predates promises in the ES spec).
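To illustrate, the event-based interface pretty much forces you to hand-roll promise wrappers like this - roughly what idb does far more thoroughly (the store name is illustrative):

```javascript
// Wrap the event-based IndexedDB open() in a promise.
function openDatabase(name, version) {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, version);
    request.onupgradeneeded = () =>
      request.result.createObjectStore('keyval');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```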

Also

https://serviceworke.rs/

which is the Mozilla supported cookbook for the Service Worker API.

1 Like

https://proxx.app/
GitHub - GoogleChromeLabs/proxx: A game of proximity

Now for my juicy hot take: Our current generation of frameworks makes off-main-thread architectures hard and diminishes its returns. UI frameworks are supposed to do UI work and therefore have the right to run on the UI thread. In reality, however, the work they are doing is a mixture of UI work and other related, but ultimately non-UI work.

On PROXX we actually opted out of VDOM diffing and implemented the DOM manipulations ourselves, because the phones at the lower end of the spectrum couldn’t cope with the amount of diffing work.
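In other words, something along these lines - a hypothetical sketch, not the actual PROXX code (cellElements and renderValue are assumed helpers):

```javascript
// Update only the one cell that changed instead of diffing a whole
// virtual tree. `cellElements` holds cached references to the real DOM
// nodes; `renderValue` maps a cell's state to its display text.
function revealCell(x, y, value) {
  const cell = cellElements[y][x];
  cell.textContent = renderValue(value);
  cell.classList.add('revealed');
}
```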