LiveView append and performance degradation with a large number of appended elements

Would love to hear back if this actually solved your issue

Changing the messages (the p tags) to components gave me a definite improvement on that front. I'm not seeing 500ms times any more, though I'm still getting a lot of 100ms+ layout reflows.

I've removed anything that isn't structural from the following template, but this is what the base LiveView component that takes up the whole page looks like:

<div name="client" class="w-screen h-screen flex flex-row overflow-hidden max-h-screen" phx-window-keyup="hotkey">
  <div class="w-1/5 flex flex-col">
    <div class="flex flex-1 border-4"></div>
    <div class="flex flex-col flex-1 border-4">
      <ul class="flex">
        <li class="flex-1 mr-2">
          <a class="text-center block border border-blue-500 rounded py-2 px-4 bg-blue-500 hover:bg-blue-700 text-white" href="#">Inventory</a>
        </li>
        <li class="flex-1 mr-2">
          <a class="text-center block border border-white rounded hover:border-gray-200 text-blue-500 hover:bg-gray-200 py-2 px-4" href="#">Skills</a>
        </li>
        <li class="text-center flex-1">
          <a class="block py-2 px-4 text-gray-400 cursor-not-allowed" href="#">Settings</a>
        </li>
      </ul>
      <div class="flex flex-1 bg-gray-600 p-2">
        <%= live_component @socket, MudWeb.Live.Component.CharacterInventory, id: :character_inventory, inventory: @client_data.inventory %>
      </div>
    </div>
  </div>
  <div class="w-3/5 flex flex-col">
    <div id="story" phx-hook="Story" class="border-4 border-r-0 min-w-full flex-1 flex flex-col overflow-y-scroll">
      <div phx-update="append" class="flex flex-col flex-1 min-w-full p-2 font-extrabold font-story overflow-y-auto">
        <%= for message <- @messages do %>
          <%= live_component @socket, MudWeb.Live.Component.StoryMessage, message: message %>
        <% end %>
      </div>
      <div id="scroll-to-bottom" phx-hook="ScrollToBottom" class="h-px"></div>
    </div>
    <div class="h-8 flex flex-col">
        <%= live_component @socket, MudWeb.Live.Component.CommandPrompt, input: @input %>
    </div>
  </div>
  <div class="w-1/5 flex flex-col">
    <div class="flex flex-1 border-4"></div>
    <div class="flex flex-1 border-4">
      <ul class="flex flex-1">
        <li class="flex flex-1 mr-2">
          <a class="text-center block border border-blue-500 rounded py-2 px-4 bg-blue-500 hover:bg-blue-700 text-white" href="#">Active Item</a>
        </li>
        <li class="flex flex-1 mr-2">
          <a class="text-center block border border-white rounded hover:border-gray-200 text-blue-500 hover:bg-gray-200 py-2 px-4" href="#">Nav Item</a>
        </li>
        <li class="flex flex-1">
          <a class="block text-center py-2 px-4 text-gray-400 cursor-not-allowed" href="#">Disabled Item</a>
        </li>
      </ul>
    </div>
  </div>
</div>
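For reference, the usual way to wire up the @messages list behind that phx-update="append" container is as a temporary assign, so each update only renders the newly appended messages rather than the whole history. Roughly (simplified; the @input and @client_data assigns are omitted here):

def mount(_params, _session, socket) do
  # @messages resets to [] after every render; only new messages are diffed and sent down
  {:ok, assign(socket, messages: []), temporary_assigns: [messages: []]}
end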

This is the CommandPrompt template:

<%= form_for @input, "#", [phx_submit: :submit_input, class: "flex flex-col flex-1"], fn _f -> %>
    <%= text_input :input, :content, phx_hook: "Input", phx_blur: "stop_typing", placeholder: "Enter Commands Here...", class: "min-w-full rounded-lg resize-y min-h-full", autocomplete: "off" %>
<% end %>

This is the StoryMessage:

<%= raw(@message) %>
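The component module itself is just a thin wrapper around that template, roughly along these lines:

defmodule MudWeb.Live.Component.StoryMessage do
  use Phoenix.LiveComponent
  import Phoenix.HTML, only: [raw: 1]

  # Stateless component: render/1 only injects the pre-rendered message HTML.
  # The message markup is expected to carry its own unique DOM id so that
  # phx-update="append" can track it.
  def render(assigns) do
    ~L"""
    <%= raw(@message) %>
    """
  end
end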

Is this about all I can do, from a basic LiveView standpoint, for client performance while appending to the primary message window? The delay still isn't fantastic, but it's at least tolerable. I'd love to consistently hit sub-100ms interactions, though, so if I need to get fancy I will.

I'm interested in why this approach was chosen. Drab, for comparison, does not traverse the DOM at all; it accesses an element directly by a randomly assigned ID (unless the user assigned one explicitly) and then makes direct calls on it, like adding an element or changing text somewhere. There's no walking of the DOM at all. Basically, it works by seeing what changed in the assigns and sending direct JavaScript calls to perform the edits as necessary. Drab's EEx processor does require valid HTML, though, since it parses the template so it knows the structure and can patch the DOM directly without walking it.

As a follow-up for anyone looking for performance information: my biggest issue after changing to the live components was a continued reflow being forced on the client whenever I appended a new message to the screen. This is 100% due to having scrolling enabled in the browser.

Whatever LiveView does is very quick, but having scrolling while appending to a div forces the browser to do a ton of work and this is just not something you can get around without getting a little fancy.

With the same number of messages (thousands) and no scrollbar, I consistently get DOM updates in 50-70ms. With scrolling enabled, the exact same logic takes up to 200ms due to forced reflow, which can really lessen the experience when multiple messages come in quickly.

Since I need to be able to scroll back through the messages, this does put a damper on a few things. That said, I think I'm going to try controlling scrolling via LiveView and hotkeys/buttons.

In such a case I'd really consider switching to client-side rendering, at least for that performance-critical part. You could stick with LiveView for pushing the data to the client (this is how phoenix_live_dashboard works) or move to a custom channel-based solution. With the data on the client side you can separate data management from the DOM, which allows you to do optimizations on the DOM, whereas server-side handling would eat up the saved latency by introducing additional network latency.
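A rough sketch of what I mean (the event name and payload shape are made up for illustration): the LiveView stops rendering the messages itself and just pushes each one to a JS hook, which owns the message log DOM and appends nodes directly.

# Push the raw message down instead of appending a rendered component.
# A hook on the #story element would pick this up with
# this.handleEvent("new_message", ...) and insert the node itself.
def handle_info({:new_message, html}, socket) do
  {:noreply, push_event(socket, "new_message", %{html: html})}
end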

In all honesty I could probably just handle the scrolling interaction on the client side; there's no reason to go back to the server, since I'm going to have to write the JS hook anyway.

But on initial testing, a div with no scrollbar (overflow-y: hidden) plus JS to scroll up and down within it is a very smooth and easy setup, and it prevents huge reflows from happening whenever LiveView appends an element.

So at least in my case, where I have thousands of messages, need to stay at the bottom of the div to see new messages, and have a stricter latency requirement on the front end, I'm going to need to override the normal way the browser handles scrolling and do something custom for my client.

Scroll-jacking isn’t really considered best practice. The solutions I know from Vue.js use a scrollable container with a single child sized to the combined height of all list items, but only the visible list items are actually rendered as children of that inner wrapper. This way you still get native scrolling with far fewer DOM nodes. That option only works with client-side templating, though.

Scroll-jacking isn’t really considered best practice.

Yeah, I know, and if it were just a normal webpage I wouldn’t consider it. Given that this is the main window of a game client, though, I think I can get away with it since it’s already going to be acting like something other than your typical webpage.

That option only works with client side templating though.

That is the “ideal” solution as far as the technical side of things goes. And in the end, if I have to make a Vue component/app to inject into that one specific area of the screen, I'll do it.

Apparently I don’t need to do anything fancy. Setting a max height for the div did the trick.

Apparently the browser (Chrome, anyway) is smart enough to show a properly sized scroll bar for a div with a max height and to ignore all of the elements that are not in view, which brings the time it takes to append down to a consistent 50-80ms. That is entirely within reason for me.

But toss a div at the browser that doesn't have a max height and is going to need to scroll, and it chokes: for some reason it cannot determine that most of the elements are not visible, so it does a ton more work. TIL.
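Concretely, the change was just giving the scrolling container from the template above an explicit max height, along these lines (the exact class here is illustrative; the point is that the div's height is bounded):

<div id="story" phx-hook="Story" class="border-4 border-r-0 min-w-full flex-1 flex flex-col max-h-full overflow-y-scroll">
  ...
</div>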

Once I include batching of messages from the server side, this will work perfectly well for my needs.
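The batching itself can be as simple as buffering incoming messages in the LiveView and flushing the buffer on a short timer. A sketch (names and interval are only illustrative; it assumes mount/3 sets pending: [] and schedules the first :flush):

# Buffer messages as they arrive rather than re-rendering once per message.
def handle_info({:output, message}, socket) do
  {:noreply, update(socket, :pending, &[message | &1])}
end

# Flush the whole batch into the phx-update="append" container once per tick.
def handle_info(:flush, socket) do
  Process.send_after(self(), :flush, 50)

  {:noreply,
   socket
   |> assign(:messages, Enum.reverse(socket.assigns.pending))
   |> assign(:pending, [])}
end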

Thank you everyone for helping to expand my understanding and for helping me piece together where the issue was.

Even in JavaScript land, DOM libraries are moving away from walking the tree to update DOM nodes. lighterhtml, Svelte, and even the latest implementation of Vue's virtual DOM are optimized so they do not walk the tree and instead perform lots of referential equality checks (lazy nodes). The old virtual DOM style is pure overhead.

At first the .leex technique was exactly like JS tagged template literals, where static and dynamic arrays are joined together. But since LiveComponent was introduced, the diff-to-string process on the client side has gotten heavier (recursively walking component ids (cids) and nested dynamic parts before handing over to morphdom), and LiveComponent diff tracking eats memory (unfortunately I'm too lazy to benchmark 100 concurrent users on a 100-cid page).

At this point I'd give a +1 to Drab's %{"selector" => %{innerHTML: "..."}} approach. That addresses the performance problem for "list/table"- and "tree"-like UI updates (rather than phx-update="prepend/append"); example widgets would be "Excel" and "interactive JSON". This is not only a patching performance problem but also a memory usage one.

After using LiveView for months, I just realized this! Now I do exactly what Drab does with %{"selector" => %{innerHTML: "..."}}, but with push_event/3 as well as pushEventTo/reply and friends. For non-complex UI, though, I keep the current approach, since the performance is relatively good.
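In LiveView terms that ends up looking roughly like this (the event name, selector, and render_tree/1 helper are hypothetical); a small hook or window event listener on the client walks the payload and applies each entry with document.querySelector(sel).innerHTML = html:

def handle_event("refresh_tree", _params, socket) do
  # render_tree/1 is a made-up helper that renders the widget to an HTML string
  html = render_tree(socket.assigns.tree)

  {:noreply, push_event(socket, "patch", %{"#json-tree" => %{innerHTML: html}})}
end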

Would it not make sense to open an issue on GitHub for this?

Or maybe notify Chris McCord (via the forum) about these performance issues?

I think it's conceptual. The current toString(cids) implementation is probably already good for this concept. We are generally fine for the majority of use cases (I think).

Citation needed. If I remember correctly, we had a convo on Slack where you were rendering thousands of components per page for a visualized JSON tree? Server bookkeeping for us is very minimal, so I want to make it clear for other folks that we don't "eat memory" for components. I forget our exact convo, but your use case was highly specialized, and I provided the exact number of bytes that is our overhead; a few bytes * 10,000 per page is obviously going to accumulate. It's not fair to say we eat memory, but it is fair to say that a highly specific use case may not necessarily be served well by LiveView, or be something we've optimized for on the client.

From Phoenix.LiveComponent — Phoenix LiveView v0.20.2

However, be aware that in order to provide change tracking and to send diffs over the wire, all of the components assigns are kept in memory - exactly as it is done in LiveViews themselves.

Therefore it is your responsibility to keep only the assigns necessary in each component

But I'm sorry, my components have already changed and it's very hard to make a demo. I wrote anti-pattern components anyway, as the docs say:

You should also avoid using stateful components to provide abstract DOM components. As a guideline, a good LiveComponent encapsulates application concerns and not DOM functionality

Forget me! (and forgive <3)
