High latency and intermittent connectivity are two of LiveView's particular weak spots, though the documentation doesn't call this out as such. In particular, fully offline usage is flat-out not supported, which matters for mobile users whenever offline makes sense for the domain (it's somewhat nonsensical for a chat product, for example).
Nothing like real-world user experience to answer your question.
So hop on a train, car, or bus commute and keep using your LiveView project, and you will get first-hand experience with it. That's what I am planning to do when my MVP is ready.
As a train commuter in a country with very bad mobile internet connectivity, even inside the capital, I can tell you, based on using other normal sites, that LiveView will perform badly in these scenarios, especially when I see people relying on LiveView to manipulate everything in the DOM.
While it's an amazing product, the Phrenzy around it did it a disservice, because the LiveView demos floating around are pretty much all bad examples of using it in the real world.
My advice is to keep LiveView to a bare minimum in your web app. Use it only in places where you need to apply business logic to what you are doing, and forget all the other fancy uses. Even so, I have yet to see how bad the user experience will be, and I am building my app so that, as much as possible, I can easily go back to normal Controllers.
So my advice is to cool down our hype around it, put on the shoes of the end user, go into the field, and use what we are building with LiveView.
Ah, and remember that not all your users have a powerful laptop or mobile device like yours, so when going into the field, pick up your old laptop and phone and use them instead. Nothing beats real user experience, so please don't come back to tell me that we have tools in the browser to simulate the same thing, or that we can use the built-in LiveView latency simulator to throttle it.
YUP. Phrenzy was neat but impractical, unfortunately.
LiveView works great when it is used for UI interactions that must involve the server, because those will take the same time no matter how you write them. Form interactions are generally fine, and live data pushed from the server is great.
No matter how much you use it, you MUST avail yourself of the various status classes that Phoenix adds to elements to provide user feedback. A modal that takes 500ms to pop up can feel completely dysfunctional, or it can feel completely fine, depending entirely on whether the modal button gets a spinner instantaneously. Phoenix makes this easy, but you do have to do it. There have been really interesting studies on what people perceive latency to be, and you can get away with quite a bit as long as there is some instantaneous feedback that lets people know their click or tap was registered.
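As a concrete sketch of that feedback (the button markup and the `with-spinner`/`spinner` class names here are hypothetical; `phx-click-loading` is the class LiveView applies to the element while its click event is in flight):

```heex
<!-- Hypothetical trigger button: while the "open_modal" event is in
     flight, LiveView adds the phx-click-loading class to this element. -->
<button phx-click="open_modal" class="with-spinner">
  Open modal
  <span class="spinner" aria-hidden="true"></span>
</button>
```

```css
/* Hide the spinner normally; reveal it the instant the click is pending,
   so the user gets feedback before the server round trip completes. */
.with-spinner .spinner { display: none; }
.with-spinner.phx-click-loading .spinner { display: inline-block; }
```

No round trip is needed for the spinner itself, which is exactly why the feedback feels instantaneous even when the modal takes 500ms to arrive.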
And as a last note, we're doing a wonky thing where all of our live views actually make GraphQL calls to fetch their data. This means that everything they can do, a dedicated mobile app can also do without any additional work on the backend. For specific mobile scenarios that need to be highly optimized, we just went ahead and built a dedicated mobile app.
Even with feedback this is dysfunctional. Unless the modal needs to present the user with some data from the server, that is, if it is just a pop-up for me to read something, then in my opinion it is just bad UI and UX.
No matter how many studies you read to feel more comfortable with your choices, at the end of the day the app that wins is the one that is less clumsy and sluggish for the user, aka the one with better UX/UI.
Please bear in mind that, with time, some users of a bad UI/UX tend to feel more comfortable using it, so they may even say it's a good one down the road, because by then they know how to navigate around the bad UI/UX.
So I stand by my opinion: use LiveView, but be wise about where you use it, because UI/UX will play a major part in the adoption and retention of users. The exception is if you are a big player in your market, because in those situations users tend to complain but keep using the product. This may be because they don't have an alternative, they are locked into it, or they don't trust the alternatives. Off the top of my head I can think of some big players in e-commerce and cloud that fit this description, and they got away with it.
As a matter of pragmatism, however, if your team is 90% Elixir people and your user base is exclusively businesses with solid internet, it might be a fine provisional choice even for, e.g., confirmation dialogues.
At the same time, we just built a collapsible side nav and, to your point, it isn't good UI/UX to manage that collapsing state via LV. Confirmation dialogues and a host of other "display only" UI interactions probably fall into the same category.
I think the jury is still out on whether LV plus some spartan JS can be a solid SPA-like replacement (being online is still required). From a "we need to deliver features fast and are a mostly Elixir team" perspective it's quite nice. If we could hire half a dozen React developers, though, I'm not sure we'd stick with it.
Yes. Some API gateways or firewalls may not even be designed to work with WebSockets.
Also, in some countries the internet infrastructure doesn't even support them, though, if I am not mistaken, LiveView can fall back to HTTP long polling.
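For reference, Phoenix does ship a long-poll transport that you can enable alongside the WebSocket one. A sketch of the endpoint configuration, assuming the usual generator defaults (the `/live` socket path and `@session_options`); note that the client side also has to be configured to actually use or fall back to long polling:

```elixir
# endpoint.ex (sketch): expose both transports for the LiveView socket.
# Clients behind firewalls or proxies that block WebSocket upgrades can
# then connect over plain HTTP long polling instead.
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options]],
  longpoll: [connect_info: [session: @session_options]]
```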
As I said in my first reply, nothing beats real-world experience; studies have the value they have and may or may not reflect the real world of your users.
Plus, you are asking to measure 3G/4G reliability, which is very volatile. For example, in the UK, despite all the studies saying it's good, in practice it's just crap; when you have bad coverage even in the major cities, including the capital, I cannot understand how those studies show such good results.
This doesn't make sense, as WebSockets run on top of TCP, which is surely supported by every network connected to the Internet. Some oppressive regimes might be using deep packet inspection and man-in-the-middle attacks to disrupt/disconnect certain types of protocols, though.
I've jokingly said before that if browsers implemented a canonical way to style the blank page that appears when you navigate (a full-window overlay whose background you could style with whatever CSS you wanted, plus a spinner of sorts in the middle that you could, again, style with CSS) and would just swap the body after the full load, 90% of the cases where you feel that a web page/app needs a SPA would go away.
I completely agree with you; in fact, I think perceived latency is much more important than real latency, and I don't need studies to prove that, I just need to use websites (though the studies can help drive the point home).
Right now I'm not on super-speedy internet, and I'm far away from all the major datacenters. Some days ago, someone posted a link in another thread here to a page with Turbolinks enabled (I don't have the link). With Turbolinks, you click the link, nothing happens for a little bit, and then it starts loading. Without any JS at all it felt like a much better user experience, because you had instantaneous feedback on the navigation. The same happens with a lot of SPA/JS-framework websites out there that rely on a VDOM: there's no instantaneous feedback, and many times you have that split second where you wonder whether you actually clicked the thing and whether it was registered. Even when you try to immediately replace a button or section with a spinner, or disable it, you can still "register" that lapse where nothing happens (as can almost anything that relies on render cycles rather than point-specific manipulation of the DOM).
If your animation takes 500ms to pop up a full-window spinner and 250ms to close it, it will look perfectly normal as long as it starts immediately upon interaction.