Shortcomings in LiveView - are there any I should look out for?

I haven’t run into the latency issues in my tinkering, but IMO it’s just “right tool for the job”. For me, as a primarily back-end focused developer who thinks about problems from the perspective of the server and datastores, it opens all sorts of doors by sparing me from having to dive into the frontend JS ecosystem.

But I’m not going to be making games in my spare time anytime soon.


Thanks for the feedback! High-latency clients are definitely a UX concern, but one thing I want to highlight is that we take these considerations seriously, and since the LV contest we have added features to ensure users are given feedback while awaiting page loads and event acknowledgements. This happens through three features:

  1. We dispatch phx:page-loading-start and phx:page-loading-stop events. This allows you to use tools like nprogress to show page-loading state before users can interact with the page, which is critical for proper UX on a fast or slow connection. New projects using --live have nprogress out of the box. For “page-level” events, such as submitting a form, we dispatch phx:page-loading-start and stop, and you can annotate other bindings with phx-page-loading to trigger your page-loading feedback for events you expect to be page-level.

  2. We apply css loading-state classes to interacted elements (phx-click’d elements, forms, etc.), and those elements keep the class until an acknowledgement for that interaction is returned by the server. The classes are maintained even if in-flight updates from the server are received in the meantime. Projects using --live ship basic css classes to show you how to use these (we simply dim the inputs).

  3. We have phx-disable-with for optimistic UI on the client: it swaps out the element’s content while awaiting an acknowledgement, for immediate feedback beyond css classes. Likewise, you can make use of the css loading states to toggle content, but phx-disable-with is handy for quick feedback.
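To make feature 1 concrete, here is a minimal sketch of wiring the page-loading events to a progress indicator. It is hedged: `bus` stands in for `window` and `progress` stands in for the real NProgress object, so the flow runs standalone outside a browser.

```javascript
// Sketch: react to LiveView's phx:page-loading-start/stop events.
// `bus` is a stand-in for `window`; `progress` is a stand-in for NProgress.
const bus = new EventTarget();

const progress = {
  visible: false,
  start() { this.visible = true; },  // stand-in for NProgress.start()
  done() { this.visible = false; },  // stand-in for NProgress.done()
};

// LiveView dispatches these events around page loads and page-level
// events (form submits, bindings annotated with phx-page-loading).
bus.addEventListener("phx:page-loading-start", () => progress.start());
bus.addEventListener("phx:page-loading-stop", () => progress.done());

// Simulate one round trip to the server:
bus.dispatchEvent(new Event("phx:page-loading-start"));
console.log(progress.visible); // true while awaiting the server
bus.dispatchEvent(new Event("phx:page-loading-stop"));
console.log(progress.visible); // false once acknowledged
```

In a real app you would replace `bus` with `window` and the stub with the actual NProgress calls; the listener shape stays the same.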

Proper UX is one of our biggest concerns. We want LiveView apps to be good citizens on the web, and you can see this desire in our live routes, where we enforce that live navigation maintains reachable URLs, so things like “open in new tab” and sharing links just work without breaking the model the way SPAs often do. We also have an enableLatencySim() function on the client to test-drive your LiveView apps with simulated latency, precisely to ensure proper UX.

I hope that gives some insight. These features have been around for some time now, but it’s possible your experience predates their usage, or predates --live, so folks didn’t use them by default. If folks aren’t using these features today, then they are doing it wrong :slight_smile:


I agree completely.


Thank you very much for this helpful answer. I hope my post didn’t come across as too negative - high latency is not a flaw in LiveView but in the speed of light. :smile:

The defaults in new --live projects are indeed helpful in mitigating latency - that said, I still feel that nprogress & the default spinning wheel don’t provide enough feedback to new users (in the case of a first-time visit over a high-latency and/or slow connection).

I know SPAs with huge loading bars aren’t popular but on a slow connection this user journey:

visit website -> loading bar -> interactive website

beats this one:

visit website -> static non-interactive website -> nprogress loading bar -> interactive website

The crux is that - as a user - I expect to interact with the static HTML after it’s rendered the first time… and then the second mount/3 gets invoked. Despite the progress bar on top, this doesn’t feel 100% intuitive.

On the other hand, these needs might not warrant new page-loading defaults (with other tradeoffs). I’m sure you are more experienced in evaluating how opinionated a framework should be - hopefully these impressions shed some light on use cases outside of Western countries and help to improve LiveView. :slightly_smiling_face:

phx-page-loading looks like a great way to improve UX in this regard and might even help to solve most of my issues. Excited to play around with it, thank you for the hint!

And a tip for other readers: I had some success with ‘priming’ visitors - e.g. limiting the use of LiveView to pages with app functionality. Visitors won’t tolerate long-loading landing pages, but most will give an app page some time to render.


It’s good to hear it’s taken seriously.

I think we need to treat high latency as a reality of there being a global internet. We’re bound by the speed of light, so latency isn’t likely to improve much.

If you have a web server on the US east coast, it’s going to take about 90-150ms to reach various European countries on a high-speed wired connection in a best-case scenario. I know non-LV sites suffer the same fate, but for some reason a Turbolinks site with 150ms latency feels snappier to load than LV. Maybe it has something to do with how websockets work, I’m not sure honestly.
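Those numbers pass a back-of-the-envelope sanity check. The figures below are my own assumptions, not from the posts above: roughly 6500 km one-way from the US east coast to central Europe, and a signal speed in fiber of about 200,000 km/s (around two thirds of c), with a request needing a round trip.

```javascript
// Back-of-the-envelope floor for US east coast -> Europe round-trip latency.
// Assumed figures: ~6500 km one-way, ~200,000 km/s signal speed in fiber.
const distanceKm = 6500; // rough great-circle distance (assumption)
const kmPerMs = 200;     // 200,000 km/s expressed per millisecond
const roundTripMs = (2 * distanceKm) / kmPerMs;
console.log(roundTripMs); // 65
```

So even a perfectly straight fiber gives a ~65 ms floor; real routes add distance, hops, and processing, which is consistent with the observed 90-150 ms.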

Based on very light analytics, about 50% of the traffic to my site is from outside the US / Canada. I wonder what the country breakdown is for these forums. Higher latency is a constant, not an exception.

We used to ship with a css class that dimmed the entire page, but it was deemed too jarring for users. Note that you can write your own css that uses the existing phx-disconnected class to toggle the main content and show a full loading page just like an SPA, but our generators don’t do this out of the box. Another minor nuance is we set cursor: wait; and pointer-events: none; to give feedback that no interaction can happen, and also to prevent any interaction on links/buttons from happening. i.e. new app.scss files generated with --live have this:

.phx-disconnected{
  cursor: wait;
}
.phx-disconnected *{
  pointer-events: none;
}

So you can imagine defining a couple more lines of code to do the SPA style content swap :slight_smile:
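For the curious, those “couple more lines” might look something like the following hedged sketch. The `.main-content` and `.loading-screen` class names are hypothetical - your layout would need to provide elements with those classes inside the LiveView container that receives phx-disconnected:

```css
/* Hide the loading screen while the socket is connected... */
.loading-screen { display: none; }

/* ...and swap it in, SPA-style, whenever the socket is not connected. */
.phx-disconnected .main-content { display: none; }
.phx-disconnected .loading-screen { display: block; }
```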


There is another trick that you can use for private pages only, which is to not render the content on disconnected render. In your live.html.eex, you can do this:

<%= if connected?(@socket) do %>
  <%= @inner_content %>
<% else %>
  <div class="loading">...</div>
<% end %>

Now you don’t send the contents twice, and the user has to wait for the page to complete after the initial load (as they would in a SPA). Given how LV optimizes data sending, LV ends up sending less data over the wire than a complete server render and than a similar SPA (which may have to send large parts of the app upfront).

However, keep in mind this means no content is sent on the “disconnected render”. So if you do this for public pages, it means no SEO. But it works fine for private content.


I’ll play with more prominent loading info via phx:page-loading-start first - it’s less obtrusive than a full-screen css class and hopefully provides enough feedback for users on slow connections. It’s great to know about this other option though.

Thank you so much for all these helpful explanations, it really makes me happy to be a part of this community!


Your mention there does have me wondering how easy it would be to gracefully degrade the static page.

In my opinion none of the shortcomings mentioned can be attributed to either Phoenix or LiveView.
LiveView does an awesome job of bringing real webpage interactivity back to manageable proportions.

Users need an internet connection to visit any webpage.
Maybe the internet is a global thing, but most online services are not intended to be used globally.
Those that are need to put the required infrastructure in place.

Mounting a LiveView is not a latency problem; it is a UX design problem that needs proper attention.
I think LiveView currently offers too many out-of-the-box solutions to solve everyone’s problems.
It would be better to keep an open mind and to tailor any solution to a specific context.
Or maybe have a UX specialist do his/her thing (they’re probably more talented).
A little less plug-and-play would be a good thing (not talking about the useful hooks mentioned).

LiveView is a new kid on the block and it is not backed by either Google or Facebook.
Of course it doesn’t have the same kind of ecosystem, but it does have an awesome community.
I think Elixir developers in general know their stuff and don’t **** all over the internet.
Sound advice on many Elixir related problems is relatively easy to come by (huge plus).

Organizing code (or anything else) is always a problem.
Any project will derail at some point unless it is carefully managed.
Elixir, Phoenix and also LiveView (with LiveComponents) are power tools.
It’s up to developers to use them wisely and to keep learning from mistakes.

To tie any business logic to any view (using any technology) is just bad practice.
Sometimes business logic should not be implemented inside a frontend application.
Personally I prefer TDD for business logic and then:

defp deps do
  [
    {:phoenix, "~> 1.5.0"},
    {:phoenix_live_view, "~> 0.13.0"},
    {:business_logic, "~> 1.0"}
  ]
end

To be honest I am also struggling with organizing my frontend code.
I am very confident though that Phoenix is the right tool and LiveView is the cream on top.


Thank you for all your replies, it’s very insightful, and highly appreciated.
I think I’m gonna give LV a shot and see how it goes. :grinning:


This sums up what I think about Phoenix and LiveView: the cream on top is not always necessary, but it is a nice thing to have to make those hard days happier :smiley:

To be clearer, I don’t think we should invest in rewriting every single view to LiveView, but instead use LiveView where it makes sense: the places where users expect a real-time experience. For everything else, the default Phoenix views are good enough that it isn’t worth the trouble of introducing LiveView, IMHO.


Years ago when Turbolinks came out, it really spoiled people for how fast pages could load without having to do anything except add 1 line of JS to your app (for the most part).

There’s immense value in using LV or Turbolinks on every page you have because it means your browser doesn’t need to re-parse the <head> of your pages on each page transition, which means it can avoid re-parsing your CSS and JS bundles. That makes a huge difference in load speeds vs what you get out of the box with Phoenix, because you’re not bound by the response of the web server. You’re bound by your browser re-parsing lots of CSS and JS.

It’s a night and day difference to the point where you would never want to not use either strategy ever again.


I honestly can’t tell what exactly you are implying.
Do you believe everybody should be forced to use LiveView for everything?
I think Kelvin’s comment is spot on.

No one is forcing you to do anything you don’t want to do.

I’m just mentioning that using LV or Turbolinks makes an enormous difference for the better on any site where you expect visitors to transition between pages (which is pretty much every website).


Transitioning between webpages is just a single aspect.
There’s a lot more to Phoenix, and sometimes a “static” view is the right solution.

I think LiveView certainly has the technical potential to replace React/Vue, and for small teams that are already focusing on Elixir it will be a huge boon. I have been working on a project for the last couple of years that has mainly involved converting a large amount of terrible, unorganized jQuery into a large amount of organized React+Redux, where I have been the sole BE support for a team of 7 FE engineers. It is an internal tool with standard stuff like async datatables, multi-step forms, modals, etc. Nothing fancy. In my estimation, I easily could have accomplished the same thing on my own with a designer using LiveView, probably in less time. However, I would guess that it is currently a lot easier to find 7 React devs than 1 Elixir dev.


Have you tried? I don’t think that’s the case. It’s easy to find incompetent React devs, but good frontend engineers with solid React experience are also very hard to get. There are also many more companies looking for them.


I was involved in the process, yes. I’m not sure our experience was representative though, because it’s a niche industry that tends to attract people with a specific interest in working in it, and deters others. I certainly agree it was very difficult to find extremely competent applicants, but my sense is that’s generally the case when hiring in larger numbers?