Shortcomings in LiveView - are there any I should look out for?

Hello everybody,

I’m currently considering using LiveView for a new SaaS project. For the past several years I have been using Elixir to build powerful APIs with Absinthe, Apollo, and React.
By using LiveView, I hope to build out features faster and more simply.
I have done some research, but I really want to hear from the community: what are the shortcomings, and what should I beware of?

Furthermore, I was curious how integration with other JavaScript libraries is handled. Say I need to trigger events in Google Analytics, or interact with Mapbox. Can you simply add custom JS and trigger events in LiveView?
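
For example (just a sketch of what I imagine, using a hypothetical "Map" hook and an initMapbox helper of my own, so please correct me if this is not how it works):

// assets/js/app.js - my rough idea of the client-side hook approach
import {Socket} from "phoenix"
import {LiveSocket} from "phoenix_live_view"

let Hooks = {}
Hooks.Map = {
  mounted() {
    // this.el is the element annotated with phx-hook="Map" in the template
    let map = initMapbox(this.el) // hypothetical helper wrapping the Mapbox JS API
    // push an event back to the LiveView when the user interacts with the map
    map.on("click", e => this.pushEvent("map_clicked", {lngLat: e.lngLat}))
  }
}

let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
let liveSocket = new LiveSocket("/live", Socket, {hooks: Hooks, params: {_csrf_token: csrfToken}})
liveSocket.connect()

Is that roughly the intended way, or is there a better pattern?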

6 Likes

First of all, LiveView will not work if the user loses their internet connection. So applications that should also be usable offline aren’t really possible with LiveView (or will at least be less capable, and in the end you would probably write them mostly in JS anyway). Other than that, the only disadvantage I see is higher network usage, but that should be a negligible disadvantage in 99% of cases.

3 Likes

I had high hopes for LiveView and really like the dev experience so far - it’s straightforward to build interactive apps with it. However, after throwing together several small sites I’m doubtful that it will be the next big thing in webdev.

The biggest shortcomings:

Latency

I’m currently in South East Asia while most servers are located in the US or Europe, and each time the UX of a full page load is horrible. The static HTML gets rendered and it looks like I can use the website - but then LiveView connects to the server again and I have to wait another second or more until I can interact.

For example, the first few times I visited any of the Phoenix Phrenzy projects, I was often unsure whether the website had finished loading or not - even after the second mount/3.

I’m somewhat used to this behavior now (letting a page load for a while before I get full interactivity) but in the beginning it was a very bad user experience. Avoid LiveView on any landing pages.

Limited use cases

Many LiveView examples do it a disservice - I would never use it for anything game-related. Flappy Bird might work on localhost or near a data center, but it will be nearly unplayable for any user outside your region. Please take latency into consideration when planning your app, and use the built-in latency test.

Flexibility

Adopting LiveView ties most of your views to its logic. This is fine for simple web apps or admin backends, but it will make mobile apps harder down the road. Which brings me to my next point:

Ecosystem

The ecosystems around Vue or React are a lot more mature: more components, more tutorials, more innovation, more mindshare, etc. In my opinion these JavaScript frameworks offer a lot of benefits for most medium-sized apps with dedicated frontend/backend teams.

Despite its shortcomings, I’ll continue to use LiveView and I’m excited to see what the future will bring. As a solo full-stack dev it has made my life a lot easier and allows me to build features that were out of my reach before.

I hope the community will find ways to deal with these shortcomings (e.g. better loading-state feedback for users with high latency, client-side transitions to hide loading times…). I’m particularly interested in real-life experiences with different use cases - but I think it will take some time to see where LiveView is a good fit and where it should be avoided at all costs.

24 Likes

I haven’t run into the latency issues in my tinkering, but IMO it’s just a matter of “right tool for the job”. For me, as a primarily back-end focused developer who thinks about problems from the perspective of the server and datastores, it opens all sorts of doors by sparing me from having to dive into frontend JS technology.

But I’m not going to be making games in my spare time anytime soon.

1 Like

Thanks for the feedback! High-latency clients are definitely a UX concern, but one thing I want to highlight is that we take these considerations seriously, and since the LV contest we have added features to ensure users are given feedback while awaiting page loads and event acknowledgements. This happens through three features:

  1. We dispatch phx:page-loading-start and phx:page-loading-stop events. This allows you to use tools like nprogress to show page-loading state before users can interact with the page, which is critical for proper UX on fast or slow connections. New projects using --live have nprogress wired up out of the box (see the sketch just after this list). For “page-level” events, such as submitting a form, we dispatch phx:page-loading-start and stop, and you can annotate other bindings with phx-page-loading to trigger your page-loading feedback for events you expect to be page-level.

  2. We apply CSS loading-state classes to interacted elements (phx-click’d elements, forms, etc.) and those elements maintain the class until an acknowledgement for that interaction is returned by the server. The classes are maintained even if in-flight updates from the server are received in the meantime. Projects using --live include basic CSS classes to show you how to use these (we simply dim the inputs).

  3. We have phx-disable-with for optimistic UI on the client, swapping out the element’s content while awaiting an acknowledgment, for immediate feedback beyond CSS classes. Likewise, you can make use of the CSS loading states to toggle content, but phx-disable-with is handy for quick feedback.
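
For the page-loading events in point 1, the wiring in a generated assets/js/app.js looks roughly like this (assuming nprogress is installed, as it is in new --live projects):

// show a progress bar on live navigation, form submits, and other page-level events
import NProgress from "nprogress"

window.addEventListener("phx:page-loading-start", info => NProgress.start())
window.addEventListener("phx:page-loading-stop", info => NProgress.done())

Any binding you annotate with phx-page-loading will trigger the same start/stop events, so your loading indicator covers those interactions too.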

Proper UX is one of our biggest concerns. We want LiveView apps to be good citizens on the web, and you can see this desire in our live routes, where we enforce that live navigation maintains reachable URLs, so things like open-in-new-tab and sharing links just work without breaking the model the way SPAs often do. We also have an enableLatencySim() function on the client to test-drive your LiveView apps with simulated latency, exactly to ensure proper UX.
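
For example, from the browser console (new --live projects expose the socket as window.liveSocket), something along these lines:

// simulate 1000ms of round-trip latency for all LiveView traffic
liveSocket.enableLatencySim(1000)
// interact with the page to feel the loading states, then turn it off again
liveSocket.disableLatencySim()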

I hope that gives some insight. These features have been around for some time now, but it’s possible your experience predates their usage, or predates --live, so folks didn’t use them by default. If folks aren’t using these features today, then they are doing it wrong :slight_smile:

48 Likes

I agree completely.

3 Likes

Thank you very much for this helpful answer. I hope my post didn’t come across as too negative - high latency is not a flaw in LiveView but in the speed of light. :smile:

The defaults in new --live projects are indeed helpful in mitigating latency - that said, I still feel that nprogress and the default spinning wheel don’t provide enough feedback to new users (in the case of a first-time visit with high latency and/or a slow connection).

I know SPAs with huge loading bars aren’t popular but on a slow connection this user journey:

visit website -> loading bar -> interactive website

beats this one:

visit website -> static non-interactive website -> nprogress loading bar -> interactive website

The crux is that - as a user - I expect to interact with the static HTML after it’s rendered the first time… and then the second mount/3 gets invoked. Despite the progress bar on top, this doesn’t feel 100% intuitive.

On the other hand, these needs might not warrant new page-loading defaults (which come with other tradeoffs). I’m sure you are more experienced in evaluating how opinionated a framework should be - hopefully these impressions shed some light on use cases outside of Western countries and help to improve LiveView. :slightly_smiling_face:

phx-page-loading looks like a great way to improve UX in this regard and might even help to solve most of my issues. Excited to play around with it, thank you for the hint!

And a tip for other readers: I had some success with ‘priming’ visitors - e.g. limiting the use of LiveView to pages with app functionality. Visitors won’t tolerate slow-loading landing pages, but most will give app.domain.com some time to render.

3 Likes

It’s good to hear it’s taken seriously.

I think we need to treat high latency as a reality of there being a global internet. We’re bound by the speed of light, so latency isn’t likely to improve much.

If you have a web server sitting on the US east coast, it’s going to take about 90-150ms to reach various European countries on a high-speed wired connection in a best-case scenario. I know non-LV sites suffer the same fate, but for some reason it feels snappier to load a Turbolinks site with 150ms latency vs LV. Maybe it has something to do with how WebSockets work, I’m not sure honestly.

Based on very light analytics, about 50% of the traffic to my site is from outside the US / Canada. I wonder what the country breakdown is for these forums. Higher latency is a constant, not an exception.

We used to ship with a CSS class that dimmed the entire page, but it was deemed too jarring for users. Note that you can write your own CSS that uses the existing phx-disconnected class to toggle the main content and show a full loading page just like an SPA, but our generators don’t do this out of the box. Another minor nuance is that we set cursor: wait; and pointer-events: none; to give feedback that no interaction can happen, and also to prevent any interaction on links/buttons from happening. I.e. new app.scss files generated with --live have this:

.phx-disconnected{
  cursor: wait;
}
.phx-disconnected *{
  pointer-events: none;
}

So you can imagine defining a couple more lines of code to do the SPA-style content swap :slight_smile:

11 Likes

There is another trick that you can use for private pages only, which is to not render the content on disconnected render. In your live.html.eex, you can do this:

<%= if connected?(@socket) do %>
  <%= @inner_content %>
<% else %>
  <div class="loading">...</div>
<% end %>

Now you don’t send the contents twice, and the user has to wait for the page to complete after the initial load (as they would in a SPA). Given how LV optimizes data sending, LV ends up sending less data over the wire than both a complete server render and a comparable SPA (which may have to send large parts of the app upfront).

However, keep in mind this means no content is sent on the “disconnected render”. So if you do this for public pages, it means no SEO. But it works fine for private content.

32 Likes

I’ll play with more prominent loading info via phx:page-loading-start first - it’s less obtrusive than a full-screen CSS class and hopefully provides enough feedback for users with slow connections. It’s great to know about this other option though.

Thank you so much for all these helpful explanations, it really makes me happy to be a part of this community!

3 Likes

Your mention there does have me wondering how easy it would be to gracefully degrade the static page.

In my opinion none of the shortcomings mentioned can be attributed to either Phoenix or LiveView.
LiveView does an awesome job of bringing real webpage interactivity back to manageable proportions.

Users need an internet connection to visit any webpage.
Maybe the internet is a global thing, but most online services are not intended to be used globally.
Those that are need to put the required infrastructure in place.

Mounting a LiveView is not a latency problem; it is a UX design problem that needs proper attention.
I think LiveView currently offers too many out-of-the-box solutions to solve everyone’s problems.
It would be better to keep an open mind and to tailor any solution to a specific context.
Or maybe have a UX specialist do his/her thing (they’re probably more talented).
A little less plug-and-play would be a good thing (not talking about the useful hooks mentioned).

LiveView is a new kid on the block and it is not backed by either Google or Facebook.
Of course it doesn’t have the same kind of ecosystem, but it does have an awesome community.
I think Elixir developers in general know their stuff and don’t **** all over the internet.
Sound advice on many Elixir related problems is relatively easy to come by (huge plus).

Organizing code (or anything else) is always a problem.
Any project will derail at some point unless it is carefully managed.
Elixir, Phoenix and also LiveView (with LiveComponents) are power tools.
It’s up to developers to use them wisely and to keep learning from mistakes.

To tie any business logic to any view (using any technology) is just bad practice.
Sometimes business logic should not be implemented inside a frontend application.
Personally I prefer TDD for business logic and then:

defp deps do
  [
    {:phoenix, "~> 1.5.0"},
    {:phoenix_live_view, "~> 0.13.0"},
    ...
    {:business_logic, "~> 1.0"},
    ...
  ]
end

To be honest I am also struggling with organizing my frontend code.
I am very confident though that Phoenix is the right tool and LiveView is the cream on top.

8 Likes

Thank you for all your replies, they’re very insightful and highly appreciated.
I think I’m gonna give LV a shot and see how it goes. :grinning:

4 Likes

This sums up what I think about Phoenix and LiveView: the cream on top is not always necessary, but it is a nice thing to have to make those hard days happier :smiley:

To be clearer, I don’t think we should invest in rewriting every single view to LiveView, but instead use LiveView where it makes sense: the places where users are used to getting a real-time experience. For everything else, the default Phoenix views are good enough that it’s not worth the trouble of introducing LiveView, IMHO.

7 Likes

Years ago when Turbolinks came out, it really spoiled people for how fast pages could load, without having to do anything except add one line of JS to your app (for the most part).

There’s immense value in using LV or Turbolinks on every page you have, because it means your browser doesn’t need to re-parse the <head> of your pages on each page transition, which means it can avoid re-parsing your CSS and JS bundles. That makes a huge difference in load speeds vs what you get out of the box with Phoenix, because you’re not bound by the response time of the web server - you’re bound by your browser re-parsing lots of CSS and JS.

It’s a night-and-day difference, to the point where you would never want to go without one of these strategies again.

3 Likes

I honestly can’t tell what exactly you are implying.
Do you believe everybody should be forced to use LiveView for everything?
I think Kelvin’s comment is spot on.

No one is forcing you to do anything you don’t want to do.

I’m just saying that using LV or Turbolinks makes an enormous difference for the better on any site where you expect visitors to transition between pages (pretty much every website).

3 Likes

Transitioning between webpages is just a single aspect.
There’s a lot more to Phoenix, and sometimes a “static” view is the right solution.