How to correctly handle latency with LiveView

I’ve been using LiveView in my daily job and side hustle for over a year now. So far I love it and find that my productivity has increased a lot with it. But there is one specific part of it that I find lacking in good practices and non-complex solutions: latency handling.

For example, let’s say I want to open a modal in my LiveView. There are two good ways to do that that I can think of. One is changing the route to a route that will show the modal.

For example, I’m on the route /my_page, and if I click a button to show a modal, I do a patch navigation to /my_page/modal that will show the modal.

That works, but it doesn’t handle latency well because it will only show the modal after a round-trip to the server is done. So the user experience can suffer and feel sluggish or non-responsive depending on the user’s latency. (Sure, I can show a loading svg in the button with phx-click-loading or something like that, but that is still a worse experience than just showing the modal right away.)
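For reference, a minimal sketch of this route-driven approach (the routes, the :modal live action, and the modal component are placeholders for illustration, not my actual code):

# router.ex: a patchable route whose live action means "modal open"
live "/my_page", MyPageLive, :index
live "/my_page/modal", MyPageLive, :modal

# my_page_live.ex: derive modal visibility from the live action, so a
# reconnect or redeploy re-renders the modal in the correct state
def handle_params(_params, _uri, socket) do
  {:noreply, assign(socket, :show_modal, socket.assigns.live_action == :modal)}
end

# my_page_live.html.heex
<.link patch={~p"/my_page/modal"}>Open modal</.link>
<.modal :if={@show_modal} id="my-modal" on_cancel={JS.patch(~p"/my_page")}>
  ...
</.modal>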

Another way is to use the JS module and do something like JS.show to show the modal instantly, entirely from the client side.

This handles latency well, but it fails in crash/server-restart scenarios: in those cases LiveView will not know that the modal should be open and will just re-render the page with it closed, meaning that if a user is doing something inside a modal when I decide to deploy a new version, that user will lose all their work because of the restart.
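For completeness, the purely client-side version is roughly this (the #my-modal id is just an example):

<button type="button" phx-click={JS.show(to: "#my-modal")}>Open modal</button>

<div id="my-modal" style="display: none;">
  ...
</div>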

Another example is a select component that I created because I wanted a select that a user can search.

My first version was a component that would receive the FormField, see which value is selected, and show that as the selected value in the component.

The issue with that approach, again, is latency: if the latency is high, the selected value will take some time to show up as the component’s selected value.
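A stripped-down sketch of that first, server-driven version (the component internals and assign names are made up for illustration):

# The selected value comes straight from the FormField the parent passes in,
# so it only updates after the server has processed the change event.
def update(%{field: %Phoenix.HTML.FormField{} = field} = assigns, socket) do
  {:ok,
   socket
   |> assign(assigns)
   |> assign(:selected, field.value)}
end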

After that, I used the JS module to update it from the client side, so the selected value would be updated in real time regardless of the latency.

That works, but if the latency is high, when the new selected-value event reaches the server it will update the FormField, call the component’s update function, and override the selected value.

Because of that, if I change the selected value multiple times really fast under high latency, when those events reach the server they will replace the selected value on the client too.


A workaround for that is to add a flag to the component that is set to true after the first update call, so the subsequent ones are ignored:

# Ignore any update after the first one, so a late server round-trip
# can't override the value the client has already changed
def update(_assigns, %{assigns: %{initialized?: true}} = socket), do: {:ok, socket}

def update(assigns, socket) do
  ...
  {:ok, assign(socket, initialized?: true)}
end

Now the select works great even when there is high latency, but it breaks when the server crashes/restarts because, for some reason, the component’s update function is called twice: the first time the FormField has its value field set to nil, and the second time it has the correct value. That means the first call hits my second update clause and sets the value to nil, while the second call, which has the correct value, is ignored because initialized? is already true.

These are just some examples of issues I had while trying to create a LiveView that offers a good user experience to users with high latency.

I would love to know if others have had the same issues and how they solved them. Feel free to give me suggestions specifically for the examples I showed above, but keep in mind that they are just examples; the point of this thread is to discuss latency with LiveViews and the best methods to work around it.

6 Likes

Keep your state in the URL.

3 Likes

What he said. Open your modal instantly with a JS command and add edit=true or something to the query params.
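Something like this, roughly (all names and routes are illustrative; the idea is just JS.show for instant feedback plus a query param so the server can restore the state):

<button phx-click={JS.show(to: "#my-modal") |> JS.patch(~p"/my_page?edit=true")}>
  Open modal
</button>

def handle_params(params, _uri, socket) do
  {:noreply, assign(socket, :edit, params["edit"] == "true")}
end

The modal pops open immediately from JS.show, and if the socket reconnects, the edit=true param lets the server render it open again.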

1 Like

I’ve run into these same issues and there are always ways to work around the problem, as other comments have hinted at. But I do think there is a fundamental dilemma posed by LV: you are reaping the benefits of server-side state management but you are also paying a cost. Most of the workarounds involve moving some portion of the logic/state to the FE, which means, as you say, more complexity and less benefit to using LV. Would love to hear if someone has an idea about how to get the best of both worlds. It definitely seems like there are a lot of people trying to use LV to do everything, but to me it seems only logical that there are going to be times when LV might not be the best choice.

1 Like

No framework can do the job of an engineer, which is, in this case, to determine where state should exist; sometimes that answer is the server, other times it’s the client. There is no silver bullet.

2 Likes

LiveView has been pretty upfront that it is not for every problem. If latency is an issue, i.e., you’re serving people with known slow connections, it’s probably not the best fit. If latency is a problem because you are global, then either it’s not a good fit or it means you’re doing well and have a good problem! In this case it can be solved through distribution (though basically that means putting it on Fly right now if you can’t do all that work yourself).

Otherwise, a lot (and I mean a lot) of businesses only operate within their countries or even a smaller district within their countries, so LV is an especially great fit for these use-cases.

All that said, there are doubtlessly ways to abstract away solutions to common problems! They will come about as the community evolves.

Certainly, and I may be reading into the original question some sense that you must be doing something “wrong,” using LV incorrectly, if you find yourself adding extra complexity to deal with these sorts of situations. But my point was that the more you find yourself determining that the “right” place for some state is on the FE, the less benefit you are going to get from using LV, so it might be worthwhile considering alternatives, like mixing LV and other JS frameworks. I feel like I rarely see that advice here and I have to wonder why. Have people really found it simpler to work around LV than to mix frameworks? Genuine question.

Which is why I find it weird that it seems like a sizeable group of people are trying to use it for every (UX) problem. But maybe that’s a false impression. Just something I’ve sensed from threads on here.

I will say I don’t think latency is only a problem for slow connections, if you are comparing UX to client-side JS.

I said slow connections and global apps that are only running in us-east-1 :sweat_smile: That’s a problem no matter what you use. I think LiveView has a decent story in giving quick optimistic UI feedback with :disable_with. It works well in simple cases. And then the more complex cases aren’t unique to LiveView. Watching people use applications a lot and in my own experience I’m not convinced that things like Optimistic UI for data—like posting to a chat log instantly before the network succeeds—are really all that big a boost to UX. They help but I don’t feel like it deters users in a lot of cases, not as much as some people would have you believe, at least. I don’t have any real data there, though.
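For anyone who hasn’t used it, that’s the phx-disable-with binding on submit buttons, e.g.:

<button type="submit" phx-disable-with="Saving...">Save</button>

The button is disabled and its text swapped immediately on click, before the round-trip finishes.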

I also think that LiveView has long moved past a “you don’t need to write JavaScript” framework. I feel like it did so in its first year. To me it’s more of a “Keep the code that should be on the server, on the server!” framework.

I don’t think you are too far off but people do things they really probably shouldn’t and then continue to double down on it. Remember when some guy put JavaScript on the server and people kept using it? :upside_down_face:

1 Like

Yeah, distribution would certainly help, but that is just half of the issue: if your website is accessed by mobile users, then you should expect high latency to occur from time to time.

How do you set the URL in this case? I don’t recall seeing a function in the JS module to do that, so I guess I would have to create a new event handler in app.js that uses URLSearchParams to set the value?

Personally, the way I see it is that you as a developer need to understand the pros and cons of each stack/tech, but that doesn’t mean you can’t reach for workarounds when there is a need.

As I mentioned above, you should expect some latency when mobile users are accessing your website, so thinking about how to make their experience as good as possible is worth it in my opinion.

Maybe I’m biased since I created the topic, but I believe some of these issues are pretty common, so having some guidelines (or at least a topic in this forum hehe) with workarounds and solutions would help people who are facing the same challenges :slight_smile:

Ya that would be the way—AFAIK at least, this is what I was going to suggest when I read this out on my dog walk, heh. It would be neat if JS commands had a version of patch/navigate that merely updated the URL without calling handle_params. I think you could use JS.dispatch with a custom event, too.
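A rough sketch of the JS.dispatch idea (the event name and the window listener in app.js are assumptions for illustration):

<button phx-click={
  JS.show(to: "#my-modal")
  |> JS.dispatch("my-app:set-query-param", detail: %{key: "edit", value: "true"})
}>
  Open modal
</button>

with a window.addEventListener("my-app:set-query-param", ...) in app.js that reads event.detail and rewrites the query string via URLSearchParams and history.replaceState. The trade-off is that handle_params is not called, so the server only sees the new URL on the next mount/reconnect.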

1 Like

I see nothing wrong with your question. I am probably derailing the discussion, if so I apologize. I’ll say again I endorse the advice you’ve already gotten and I’ve used it myself, definitely not opposing those options.

But to make my response more direct: are you wrapping your entire UX in a LV? Did you consider serving a static page and converting specific components to LVs instead?

I agree wholeheartedly with this, though I think this is largely on the community to solve. We just don’t have the army of people that JS/Ruby/Python has so it’s taking time. We do have some very exceptional people, though.

3 Likes

Yep, right now my system is fully written in LV. I didn’t think about using LV just for some parts of the system; if I were to go that route I would probably consider live_state instead, not sure.

Another alternative would be to use something like live_svelte; one of these days I saw a post or article about someone who used it to even add offline support to their site, pretty neat.

Tbh, for now I’m totally happy with my system being fully LV, even if in some cases I need to do a bit more work or a workaround. If I consider how much more work and time I would have to spend to build the same system using some java-script framework, it is totally worth it.

The utter disdain this little hyphen shows. I love it

2 Likes

For a modal you might be able to do something like this…

<button
  type="button"
  phx-click={
    JS.show(to: "#foo", transition: {...})
    |> JS.patch("...")
  }>
  Open Modal
</button>

<div id="foo" style="display: none;">
  <%!-- shown instantly by JS.show while the patch round-trip is in flight --%>
  <div :if={!@edit}>
    Loading...
  </div>
  <%!-- rendered once handle_params sets @edit; phx-mounted re-shows the
       container after a reconnect or fresh mount --%>
  <div :if={@edit} phx-mounted={JS.show(to: "#foo")}>
    Loaded
  </div>
</div>

So something shows immediately, and if it fades in for 200ms then you should be able to cover up the latency a little bit.

1 Like

This is going to call handle_params, though, which likely isn’t desirable. At best it’s a waste and at worst it’ll have side effects on the UI.

I constantly mix LV with vanilla JS. We’ve come far in the last ten years, and it can pay dividends not to get sucked into a frontend framework that won’t teach you how to work with what the DOM provides you. Most issues today are with CSS, and even those are negligible with modern CSS. Remember: the DOM, JS, and CSS won’t become obsolete, but your framework absolutely will.

Modern CSS and vanilla JS is the future of the front end.

4 Likes

HEAR HEAR

LiveView suits me so well because I actually like writing JavaScript, but only as a DOM manipulation DSL. I do a bit of canvas and have mad respect for the crazy beautiful things people make with it, but for business applications with heavy server-side functionality (which is mostly what I write) JavaScript is a very poor choice compared to all the alternatives.

2 Likes