That’s what I was hoping for too at first. LiveView could handle the fallback case seamlessly, which means that devs like you and me would write the code once, and the client would either take advantage of JS if available or make do with traditional HTML functionality. I think OvermindDL1 wants exactly the same thing: you code once in Elixir, all the use cases just work, and with much less work your website can be made compliant with whatever accessibility law from country X you’ve never heard of.
Edit: oh well, the website just updated the page with the info that the other posts were split into another topic yesterday, right after I posted that…
@chrismccord - do you have an ETA for a preview of LiveView?
I’ve asked a couple of times already, so I apologise for eagerness/redundancy, but it’d be really helpful for internal planning if we had at least a tentative timeframe to know when it might land.
Obviously, we’ll be grateful that it even lands at all, but fundamentally it may change our approach to a project if we’re targeting, say, React instead of LiveView in the meantime, vs. if we knew there was going to be a release in the next month or two. In the latter case, it might make sense for us to work on other parts of the project first and tackle UI last rather than rewrite down the line.
That’s exactly what I am doing since I have the luxury of a nice multi-month backlog of required backend/DB work to do in my context app anyway. LiveView as already demo’ed is a perfect match for my requirements and I’m sure it will save me months.
I have a feeling the initial version will be out before my backlog dries up given the latest blog post says “We’re close to an initial release” and he said the plan was to release a “v0.1” version then iterate rather than polish it perfectly first. I guess if it’s not out when my context app backlog dries up then it’s back to React.
I think we might see it soon too. Chris has been pretty good about keeping to his estimates, and even though he doesn’t give concrete dates (as missing them could cause issues for people), he does give you an idea, e.g. “after Phoenix 1.4 is out” and “currently balancing the book and LV”, etc.
As @tme_317 mentioned, if you feel LV is going to be a good fit, why waste time working on stuff in React that you may not end up using? So my vote is that I’d work on the other stuff for now too.
Exactly the approach I’m taking at the moment. I’ve got a small prototype built out with Drab to check the premise works well for my use-case, but beyond that, I’m leaving anything frontend for the time being.
I prefer dividing my work in backend and frontend “sprints” anyway…
I can completely appreciate not giving definitive dates. I’m very much of the ‘it’s done when it’s done’ brigade. If it hasn’t yet been released, I’m sure there’s a reason for it.
All I’m really holding out for at this stage is a rough-ish timeline. Like - are there a bunch of pretty massive features that still need implementing, or is it cosmetic stuff? Are there parts of Phoenix that a release is blocking on? Does Chris want to align LV’s release with, say, Phoenix 1.5? Having some idea would make planning commercial projects / selling it to other team members that much easier.
Of course, none of this is directly Chris’ concern, nor should third parties factor into/pressure a release timetable. It’d just be useful to know!
Hopefully doesn’t come to that. In my current project, LV all but eradicates client-side JS. With React being quite an opinionated view layer that tends to bundle both mark-up and business logic, refactoring to LV later would be tantamount to a rewrite in my case. So React isn’t really an ideal plan B this time around.
Instead of an SPA, I’ll prob stick with plain HTML and a thin JS layer on top. Hopefully that’ll be a bit easier to refactor for LV when it lands.
Perhaps it has not been a concern since the mid-2000s for most front-end JS devs. For security-conscious individuals though, disabling JS and re-enabling it only as needed is an easy way to prevent all kinds of undesirable garbage from executing on one’s computer. When a website depends on 30 different scripts, and I only need to enable 2 of them to get full functionality of the site, why waste my local machine’s resources executing the other 28 scripts that are just there to track me and who knows what else?
In terms of dev-hours I’d agree that for most web apps today it isn’t worth it to put much effort into design for users with JS disabled beyond some basic graceful degradation, especially for fancy stuff like graphs / charts. I think this is partly the fault of our tools though - one thing about LiveView that seems interesting is the potential to make it easier to fallback to server-side-rendering only.
I realize this probably is not a priority for the project especially in the beginning, but it’s certainly interesting to think about giving devs an easier fallback path without requiring a ton of extra work.
I had an example of how a fallback might work in the deleted JS thread, but it got really messy, fast.
The problem is, there’s no real substitute for the JS API in the absence of JS. Let’s say an onClick handler is attached to a (non-submit) button that should send a GraphQL request via a WebSocket, get some data, and update the local state, which in turn refreshes the DOM without a full page reload. How do you simulate that in plain HTML? Form handlers won’t work: you’d need to explicitly submit a form, either by hitting Enter inside <input> tags or by clicking a button with type=submit. Even then, you’re limited to POST data, which would mean preserving some kind of state. This was the domain of ASP.NET ViewState, years back, and even that (IIRC) required some level of JS buy-in.
Unless you’re doing the absolute most basics - submitting forms, and clicking <a> links - there’s really no ‘graceful degradation’ these days for even rudimentary parity with basic app expectations.
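To make that gap concrete, this is roughly all plain HTML gives you (the /items path and the hidden state field here are hypothetical, just a sketch of the pattern being described):

```html
<!-- No-JS fallback: every interaction is a full-page POST round-trip. -->
<!-- Any state must be threaded through hidden fields or the URL,      -->
<!-- the ViewState-style approach mentioned above.                      -->
<form action="/items" method="post">
  <input type="hidden" name="page_state" value="...serialized state..." />
  <input type="text" name="query" />
  <button type="submit">Search</button>
</form>
```

There’s no way to express “fetch some data over a socket and patch the DOM” in that vocabulary; the best you can do is reload the whole page.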
JS is 99%+ supported, by default. No plugins, no workarounds: it works in 995 out of 1,000 visits. For the other 5 in 1,000 edge cases, you need to make a business decision as to a) why those users have JS disabled and b) whether there’s ROI in working around it.
I’d argue that for the majority of cases, in all but very few exceptions, a simple message asking users to enable JS would be sufficient.
If there’s a need to consume parts of your site/app outside of JS, that’s what REST endpoints are for.
It’s just not a design consideration today. Back in 1998 when I started, absolutely it was a concern. Even in the mid-2000s, when JS support was patchy and non-standard, there was an argument for it. But in 2019, when virtually every mainstream web browser released in the last decade has it enabled by default? This is an old problem that really doesn’t warrant workarounds, for all but the slimmest of use-cases.
(Apologies @AstonJ if this risks repeating the previous JS thread… this is intended to conclude the point rather than branch off into a separate discussion!)
If JS is disabled the client still gets the initial HTML render (no white screen of death) but no live updating as the channel/LV process won’t be subsequently instantiated.
I can’t find it now, but I thought he mentioned in this podcast that if you were just doing a form, you could write a fallback where it POSTs to a controller without needing form state held in the LV process. Of course, then you’d need extra code to render the validation error messages server-side after the POST. I doubt any use-case more complex than a standard form is possible with JS disabled.
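The controller fallback described there might look something like this in Phoenix (module, context, and route names are all made up for illustration; this is a sketch of the pattern, not anything LiveView itself provides):

```elixir
# Hypothetical no-JS fallback: the same form a LiveView would handle
# over the channel also POSTs to a plain controller action.
defmodule MyAppWeb.SignupController do
  use MyAppWeb, :controller

  def create(conn, %{"user" => params}) do
    case MyApp.Accounts.create_user(params) do
      {:ok, user} ->
        redirect(conn, to: Routes.user_path(conn, :show, user))

      {:error, changeset} ->
        # The "extra code" mentioned above: re-render server-side
        # with the validation errors from the changeset.
        render(conn, "new.html", changeset: changeset)
    end
  end
end
```

Duplicating the happy/error paths like this per form is exactly the overhead the post is pointing at.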
My last job (very much big corporate) had some terminals super locked down like that too though.
Not uncommon in my experiences… ^.^;
On my own devices I use uMatrix though, I like fine-grained whitelisting. Can’t recommend it enough! ^.^
Depends on who it’s being made for. And it definitely doesn’t hurt to keep ADA compliance in mind anymore as it’s been hitting some places hard lately!
That is why you should always ‘start’ with the basics so you know they work, then just ‘enhance’ after that with whatever pretty things or special features. Like, I use GraphQL, but not on the client-page side. ^.^
Honestly such messages are amazingly rude to people that use screen readers. I saw one here go off on a rant once… >.>
Eh, you really aren’t in that demographic though, and the demographic you are in isn’t the entire world. Working at a college really shows me how very important it is to have proper fallbacks (I didn’t really care either before working here, to be honest…).
Eh, it’s not an “old” thing for most (although I am stuck on GUI-less consoles quite often); rather it’s a compliance thing, so it is actually useful. People with visual and/or motor disabilities are far, far more common than one would think.
Yes I thought I recalled the comment on the client getting an initial HTML render, which would be wonderful as far as I’m concerned. I wasn’t sure if I had heard that correctly though, like if it was in Chris’ video or in that Elixir Mix episode or elsewhere.
Either way I’m pretty stoked to mess around with this thing. It’s great how much focus there has generally been lately in the community around creating interfaces (scenic etc).
The initial request is just a plain old HTTP request and HTML response, so screen readers and crawlers will see the LiveView’s static HTML. After render, the browser reconnects and the LiveView process is spawned on the server.
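That two-phase render can be sketched roughly like this (module names and the PubSub topic are invented for illustration; a sketch of the lifecycle, not a definitive API):

```elixir
# Sketch: a LiveView renders static HTML on the initial HTTP request,
# then, once the browser reconnects over the WebSocket, the same
# mount runs again in the spawned LiveView process.
defmodule MyAppWeb.DashboardLive do
  use Phoenix.LiveView

  def mount(_session, socket) do
    # connected?/1 is false on the initial static render (what screen
    # readers and crawlers see), true after the channel connects.
    if connected?(socket) do
      Phoenix.PubSub.subscribe(MyApp.PubSub, "stats")
    end

    {:ok, assign(socket, stats: MyApp.Stats.current())}
  end

  def handle_info({:stats, stats}, socket) do
    # Live updates only happen once the process exists.
    {:noreply, assign(socket, stats: stats)}
  end

  def render(assigns) do
    ~L"<p>Visitors: <%= @stats.visitors %></p>"
  end
end
```

So without JS you simply stop after the first `mount`: the page is fully readable, it just never goes live.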
A company (or just an individual developer who values their time) building apps needs to consider a) whether that 0.2–0.7% represents their target demographic, and b) whether they have the time/resources/interest to create (often large-scale, and sometimes unfeasible) workarounds for JS-only functionality to serve that traffic.
GraphQL on the client-side is one of its major selling points.
In ReactQL, the out-of-the-box demo component I provide pulls stories from HackerNews via SSR and delivers the result in React-rendered HTML.
Subsequent requests to a GraphQL API server (or routing) are performed on the client, cutting out the need to round-trip to a server to rebuild HTML.
This leverages the capabilities of 99%+ of browsers in exactly what they were designed to do.
^^^ Server rendered initial HTML answers that concern.
Well, yeah. If your demographic isn’t ‘the world’, then design accordingly. That goes without saying.
None of the apps I’ve built the past decade, for myself or for clients, have been for government departments with locked-down JS.
Right, but it’s still exceedingly uncommon to consume a website - as an individual user - outside of Chrome/Safari/Edge, even with a screen reader. And if you’re surfing without a GUI, you can’t expect a site to accommodate this slim use-case beyond basic HTML.
Fewer sites/apps these days even make sense in an HTTP-verb-only context. There wouldn’t even be ‘static’ ways to represent the functionality of most of the things I typically build now, beyond the initial HTML render.
That’s from a UK-centric survey, worldwide the value seems to average 1.1% with some countries averaging well over 2% and TOR users averaging over 10%, so it all depends on your focus as well.
Yep, that’s the useful thing about using it everywhere: you know it works and is well tested, and it’s trivial for a client to progressively enhance to use it while still working without JS.
Not at the college here (well, not any JS above some ancient standard; not even React works with it).
Those sound like browser add-ons; the ones we are required to use are not, as they reach a required level of ADA compliance for the college.
That’s been my life for the past… wow, it’s been over 10 years of work now… >.>