LiveView performance problems - Morphdom taking up to 1500ms to patch updates

Can you point to some code to have a look at? It might make it easier to understand.

1 Like

Not to my exact code, but I’ll post a reproducible example.

Just to add to the discussion: this should probably be even more obvious, but even simple form validations on a nested form with ~15 user-facing inputs (plus a bunch of hidden fields to implement moving and deleting items in the form) send around 10kB of data to the server and receive around 10kB back, and that naturally scales up as you increase the number of items in the form.

Honestly, up until this performance problem (this is my first “big” Phoenix form) I had never looked at the message size, because it’s not too prominent in the server logs and I don’t have a good intuition for converting a number of characters to kB (the logs show the received map but not the message size). I find it a little scary that I’m sending so much data up and down for simple form validations. I think that for these amounts of data I’d have to think carefully about network bandwidth and prices… Of course, this is still all dwarfed by more data-intensive apps such as audio streaming, but here I am trying to get dynamic forms working and then comparing it to audio streaming (!), even if only for short bursts while the user is actually editing the form.

Again, I don’t remember the data transfer problems being highlighted in the Phoenix docs: everyone talks about how “minimal” the diffs are (and they are indeed close to minimal, no question about that), but then everyone shows examples of very small forms with minimal data transmission. So I guess that leaves less experienced users like me surprised by how much data can be transferred in more complex UIs.

So I guess I’ve been surprised on both fronts:

  1. How slow the DOM updates can be for complex UIs
  2. How much data can be transferred during form validations that I think of as pretty simple. The solution, of course, would be to only validate the form on submit (the data must be sent to the server at some point), but that complicates a lot of things in the UI (designing custom events to add sub-items instead of depending on Ecto’s :sort_param; I sketch the :sort_param markup I mean right after this list). And isn’t the whole point of LiveView to get flashy real-time form validations? That’s one of the things most often shown in the demos…
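For context, this is roughly the :sort_param / :drop_param pattern I mean, adapted from the documented inputs_for approach. The schema and field names are invented, and the official docs use buttons with JS.dispatch("change") where I use the hidden-checkbox trick:

def changeset(project, attrs) do
  project
  |> cast(attrs, [:name])
  |> cast_assoc(:items, sort_param: :items_sort, drop_param: :items_drop)
end

def items_inputs(assigns) do
  ~H"""
  <.inputs_for :let={item} field={@form[:items]}>
    <%!-- hidden per-item bookkeeping: this is where the extra fields come from --%>
    <input type="hidden" name="project[items_sort][]" value={item.index} />
    <.input field={item[:name]} type="text" />
    <label>
      <%!-- checking this fires the form's phx-change and drops the item --%>
      <input type="checkbox" name="project[items_drop][]" value={item.index} class="hidden" />
      delete
    </label>
  </.inputs_for>

  <label>
    <%!-- an unknown sort value appends a new child on the next validation --%>
    <input type="checkbox" name="project[items_sort][]" class="hidden" />
    add item
  </label>
  """
end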

I think a couple of things are needed:

  1. A warning about Morphdom’s performance on “large” lists - definitely not something I’d expect on lists of < 100 components (even if the components are quite complex)
  2. A warning on data transfer amounts
  3. Adding the amount of transferred data to the logs for the event handlers. Seeing how many kB of data we’re sending every second (or whenever you want to validate your form) can be very illuminating (I sketch one rough way to approximate this right below)
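Something like the following could approximate point 3 from the server side using LiveView’s telemetry events. This is only my rough idea, not an official API: it re-encodes the already-decoded params as JSON, so it’s a proxy for the inbound wire size and says nothing about the diff sent back (the module name is made up):

defmodule MyAppWeb.PayloadSizeLogger do
  @moduledoc "Rough approximation of inbound event payload sizes; attach once at application start."
  require Logger

  def attach do
    :telemetry.attach(
      "liveview-payload-size-logger",
      [:phoenix, :live_view, :handle_event, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(_name, _measurements, %{event: event, params: params}, _config) do
    # Re-encoding the params as JSON gives a ballpark figure for the inbound payload.
    approx_bytes = params |> Jason.encode!() |> byte_size()
    Logger.info("LiveView event #{inspect(event)} received ~#{approx_bytes} bytes of params")
  end
end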

I’ll post some issues to the LiveView repo to suggest these documentation and log improvements.

Sorry, this is turning into a rant against LiveView, but that’s not how one should interpret this thread. It’s more of a rant about the documentation not being clear about the pitfalls of using LiveView for what I consider “simple” UI.

(I guess I won’t be using LiveView for this application)

I don’t think the list size is the issue but rather its contents. The DOM is a tree and you are more likely being punished by the depth of the tree than by the list size. Morphdom works by comparing node by node, and using IDs may help it avoid discarding nodes it could otherwise reuse.

Keep in mind you can already track a lot of this in the Network tab of the dev console, where you get payload sizes per WebSocket message, but it would be beneficial to make this information more upfront too, so :+1:.

Other than that, a comprehension’s contents are sent as a whole whenever the comprehension changes, because computing the difference between two arbitrary lists can be expensive, so you need to resort to either LiveComponent or streams, as already explored. But these should be better documented. I will submit more docs later today.
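For completeness, a stream-based version looks roughly like this (module, context, and event names are made up for illustration). The list is sent once and then only the inserted or updated item travels over the wire, not the whole comprehension:

defmodule MyAppWeb.ItemsLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    # Items are streamed to the client; the server does not keep them in memory.
    {:ok, stream(socket, :items, MyApp.Catalog.list_items())}
  end

  def render(assigns) do
    ~H"""
    <div id="items" phx-update="stream">
      <div :for={{dom_id, item} <- @streams.items} id={dom_id}>
        {item.name}
      </div>
    </div>
    """
  end

  def handle_event("rename", %{"id" => id, "name" => name}, socket) do
    item = MyApp.Catalog.rename_item!(id, name)
    # Only this one item is re-sent and morphed, not the whole list.
    {:noreply, stream_insert(socket, :items, item)}
  end
end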

8 Likes

Indeed, that is obvious. I should have made myself more clear:

  • In reality I’m not updating a list of 100 components. I’m updating a deep DOM tree with 100 branches, in which each branch has many more children
  • But when planning to use LiveView for my app I was thinking “I’m just rendering a list of 100 things, how bad can it get?”. And during initial testing I was actually building the forms manually, so they never got to “production size”. Only when I decided to populate the system with synthetic data did I notice the horrible performance.

That I did expect, and it might turn out ok for “simple lists” of components. But I have actually implemented components that allow you to delete (using the “checkbox disguised as button” trick) and move up and down (again, using buttons with the same trick) each child in a has_many assoc, which means 5 extra input fields per child. All these inputs are in a sense “dynamic”, and need to be sent to the server for validation and back from the server after validation. And because the lists are somewhat deeply nested (3 levels deep) and due to the way Plug encodes the parameters, each of the deep fields consumes a lot of space in the message. For example (imagine the following is URL-encoded, which only increases the message size): `project[tables][0][columns][0][choices][0][name]=ChoiceName`. And now imagine this repeated for 3 attributes per choice + 5 “hidden” attributes required for the sorting. This is exactly the way a form is sent to the server on HTTP requests, but that only happens once in a while, when the user is already primed to wait for the server “to do its stuff”.
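To make that concrete: the key path alone for that single field is 48 bytes before URL-encoding, and with roughly 8 params per choice (3 attributes plus 5 hidden sorting fields), each repeating a ~42-byte prefix, that’s well over 300 bytes of key names per choice before a single value is transmitted:

iex> byte_size("project[tables][0][columns][0][choices][0][name]")
48
iex> 8 * byte_size("project[tables][0][columns][0][choices][0]")
336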

In my opinion, this has very dangerous performance characteristics because it’s totally manageable for simple things but prohibitively expensive for slightly more complex things. It’s a linear algorithm with a constant big enough that it can’t be noticed for small inputs but quickly gets very noticeable in a part of the UI the user expects to be very fast.

When I have the time, I should publish a screencast (together with the code and the message sizes) as a warning about what can happen when one treats LiveView updates as “free”, like I was naïvely doing.

Yes, I know this and that’s how I’ve been monitoring the message sizes, but the explicit promise of LiveView (which I now believe to be misleading) is that you just write Elixir code, forget about JavaScript, and the right thing will happen (those are not exact words from anywhere, but it’s the general vibe of both the docs and lots of talks I’ve seen). I forgot about JavaScript and the browser so much that I only looked at server logs and forgot about looking at the browser completely, including the network console.

Again, please don’t take this as an attack on LiveView’s design decisions… I certainly wouldn’t know how to do it better, as is obvious from my UndeadViews/Vampyre experiments from 2018. It’s instead an attack on the mismatch between the “promises” made in educational materials regarding performance and what actually happens in the real world as soon as you deviate a bit from the small tech demos in the talks.

We switched our block editor from Vue to LiveView some years ago and had some real pains with super large forms. These were also complex, nested forms. As a last-ditch effort before going back to Vue, we split all our assocs out into their own forms. So when saving the “main” form, we traverse and combine all the other forms on the page and then add them to the main changeset. It was a bit of a challenge to make it all work, but the speedup was enough for us to stay with LV.

2 Likes

So did you have something like this:

<form>
  <input ...>
  <input ...>
</form>
<form id="child1">
  <input ...>
  <input ...>
</form>
<form id="grandchild1">
</form>
<form id="grandchild2">
</form>
<form id="child2">
  <input ...>
  <input ...>
</form>
It seems like a good idea; that would allow you to validate changes in one form at a time.

Maybe it will help only with the message size, but I wrote a guide that explains how to switch to MessagePack encoding in the socket. I use it in production for a mobile first web app that sometimes has bad internet speed on the client side.

This will shave around 25% off the message size at the cost of message legibility in the developer console and a small decompression overhead.
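For anyone wondering where this plugs in: it comes down to swapping the serializer on the socket in the endpoint. This is a sketch only; MyAppWeb.MsgpackSerializer stands in for a module implementing the Phoenix.Socket.Serializer behaviour, and the client-side decoder has to be swapped too, which the guide covers:

# In MyAppWeb.Endpoint (module names are placeholders, see the guide for the real setup)
socket "/live", Phoenix.LiveView.Socket,
  websocket: [
    serializer: [{MyAppWeb.MsgpackSerializer, "~> 2.0.0"}],
    connect_info: [session: @session_options]
  ]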

And to be clear, that’s not necessarily a problem either; the tricky part is that those are dynamic branches. It seems you are updating a list that contains a dynamic list, which contains a dynamic list, which contains yet another dynamic list. This is neither common nor simple.

This should not be a concern payload-wise: enable compression on the socket and you should be good to go.
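That is the compress: true option on the websocket transport in the endpoint (per-message deflate, off by default):

# In MyAppWeb.Endpoint; @session_options as in a freshly generated project
socket "/live", Phoenix.LiveView.Socket,
  websocket: [compress: true, connect_info: [session: @session_options]]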

In any case, looking at the above, it does make me wonder whether a three-level-deep association in a single form is the best design and/or UX. I assume it can get confusing for the user to manage several levels of a hierarchy at once (I think this is the first time I have seen that many levels in a single form!). Someone already suggested breaking the forms apart, but you can also do something like this:

TABLE 
[ Column                  EDIT DESTROY ]
[ Column                  EDIT DESTROY ]
[ Column                  EDIT DESTROY ]
[ Column                  EDIT DESTROY ]
                                 [ NEW ]

Where the first level of management is not really a form but something that allows you to manage the children.

It also made me realize that, because you are validating the whole form and all associations at once as the user fills it in, the validation on the server itself may become expensive too. It would be similar to an e-commerce platform where you have a single form for the categories, then products, then the variants, and then the images, all at once. I wouldn’t be surprised if another performance issue pops up even if the UI one is addressed.

Anything that runs on platform XYZ will, by definition, require you to care about XYZ at some point. At some point, an Elixir dev may need to care about the Erlang VM. At some point, you may need to care about Unix if you deploy to Unix. And, if you are running in the browser, at some point you will need to care about it too.

Btw, after thinking about this problem a bit, I was wondering if part of the issue is that, because the updates are mostly the same, the diffing is expensive: Morphdom ends up traversing everything mostly to find out that almost nothing changed. Similar to how comparing two lists for equality will naturally be more expensive if the two lists are equal. I am speculating here though. If someone has a way to reproduce this, I will gladly take a look. I quite enjoy optimization work!

12 Likes

I’ve been running into similar problems. My biggest problem is that you can’t nest forms according to the spec, so you’re stuck choosing between giving up the nice benefits of the Ecto/Phoenix integration or building custom JavaScript hooks to try to merge multiple forms (and the same thing on the backend side with multiple changesets).
Most of my basic forms usually end up at 20-40KiB for each validation round trip; UUIDs take a lot of space, as do dynamic attributes, lists, etc. I’m defaulting to LiveComponents to alleviate it, but even so the IDs and the dynamic nature of forms cause quite big payloads.

Another thing that causes some headaches is that, even with state on the server, changesets need to be complete, which makes partial validation and transmission very hard (or impossible).

Streams work ok for read-only data, but any time you get into dynamic stuff it becomes quite troublesome, since you end up relying on state on the client.

That said it’s still a very nice way of working, but I think there are a lot of gotchas that aren’t very clear from the docs.

3 Likes

Yeah, I’m looking at GitHub - woutdp/live_svelte: Svelte inside Phoenix LiveView with seamless end-to-end reactivity and GitHub - Valian/live_vue: End-to-end reactivity for Phoenix LiveView and Vue, but I’m a bit afraid of the fact that, to render useful content on initial page load, they seem to depend on server-side rendering, which requires having a NodeJS server running in parallel with the BEAM, which I really, really don’t like. However, that is always a problem with client-side JS frameworks, whether used with LiveView or not…

BTW, even with a “single” list level I’m getting payloads of ~190kB with a list of 60 sub-items (~180 input widgets). And I don’t think I want to have those kinds of diffs…

There has already been a lot of good information in this thread. I want to give my two cents from building DBVisor, a highly complex dashboard, where I’m breaking the old myth about having too many elements on a page or table: javascript - How can I optimise this HTML to render faster in the browser? - Stack Overflow

First of all, there is no free lunch when it comes to Phoenix and LiveView. I’ve done multiple iterations where I would do mostly custom diffing client-side to speed up rendering and avoid morphdom, but in my latest iteration I’m mainly using streams and function components in one LiveView; I never use LiveComponents.

I pay special attention to my HTML construction, since that can have a great impact. An example is this issue: Morphing page with a lot (1000 or so) FORM tags is slow · Issue #228 · patrick-steele-idem/morphdom · GitHub, which shows that having too many form elements slows the page.

You also want to pay attention to CSS, layout shift, painting, etc. Lighthouse will give you a decent report, but there is nothing like production.

As with everything in software, performance comes from reducing the amount of work. So if you can cut down on unnecessary elements, classes, styles, data, chatter, and code, you will see gains.

4 Likes

You probably don’t need your form rendered in the initial page load anyway (and even less so if your app requires authentication), so is the initial page load really a concern if you use live_vue for the forms?

After reading @tmjoen’s suggestion, I was wondering if something like that could be part of LiveView: basically a way to say that a form is embedded in another form, and we could automatically stitch it back together. In pseudo-code:

<form id="parent" phx-embeddable>
  <input name="posts" />
  ...
  <!-- there may be a better tag -->
</form>

<form phx-embed-at="#parent" phx-embed-as="posts[comments][0]">
  ...
</form>

Then LiveView would put everything from the embedded form under “posts[comments][0]” for you, so you don’t have to do anything. Can you please tell us what your solution looks like? What have you tried, what worked, and what didn’t?

Also, please try compression on your websocket if you are concerned with payload sizes. IIRC I had 60-70% reduction when I tried it out.

Oh, what a great find! @tmbb, did you profile the performance across browsers? Is it the same in Chrome and Firefox? Also, is there a way for you to temporarily replace the form tag with a div and see if it changes anything? You can open up deps/phoenix_live_view, change the .form component, run mix deps.compile phoenix_live_view, and try it out.

3 Likes

I do cross-browser testing. Safari is the best, and Chrome and Firefox are about equal.

When it comes to a highly optimized website with a lot of elements, like a database management tool, the choice of browser matters less, since the first thing you are going to be hit by is the browser’s rendering engine.

So special attention to HTML construction and CSS is important. Tools like TailwindCSS and whatnot tend to get in the way, and hand-written CSS and HTML tend to win out, due to the reduction in the amount of code that needs to be written.

I mentioned Google Lighthouse since it’s a fast and easy way to gain some insight, versus having to spend time interpreting a flamegraph.

2 Likes

FWIW, I asked Claude to build this small HTML page from the issue and I changed it to render inputs inside a form. Firefox is about 5-6x slower when dealing with inputs (30ms for 1000 inputs); Chrome handles inputs about the same as everything else. However, Chrome is indeed much, much slower for forms.

<!doctype html>
<html>
  <head>
    <title>Morphdom Form Performance Test</title>
    <script src="https://cdn.jsdelivr.net/npm/morphdom@2.7.4/dist/morphdom-umd.min.js"></script>
    <style>
      #metrics {
        position: fixed;
        top: 0;
        left: 0;
        right: 0;
        background: #f0f0f0;
        padding: 10px;
        font-family: monospace;
        border-bottom: 1px solid #ccc;
        z-index: 100;
      }
      #main {
        margin-top: 60px;
      }
    </style>
  </head>
  <body>
    <div id="metrics">
      Last render time: <span id="renderTime">0</span>ms | Average (last 5):
      <span id="avgTime">0</span>ms | Max time: <span id="maxTime">0</span>ms
    </div>
    <div id="main"></div>

    <script>
      // Performance tracking
      const times = [];
      const maxSamples = 5;
      let maxTime = 0;

      function updateMetrics(renderTime) {
        times.push(renderTime);
        if (times.length > maxSamples) times.shift();

        const avg = times.reduce((a, b) => a + b) / times.length;
        maxTime = Math.max(maxTime, renderTime);

        document.getElementById("renderTime").textContent = renderTime.toFixed(2);
        document.getElementById("avgTime").textContent = avg.toFixed(2);
        document.getElementById("maxTime").textContent = maxTime.toFixed(2);
      }

      window.addEventListener("load", (event) => {
        const el = document.getElementById("main");
        window.counter = 0;

        if (el) {
          window.inc = (e) => {
            if (e) e.preventDefault();
            window.counter += 1;

            const startTime = performance.now();
            morphdom(el, template());
            const endTime = performance.now();

            updateMetrics(endTime - startTime);
          };

          morphdom(el, template());
        }
      });

      function template() {
        const data = Array(2000)
          .fill()
          .map(() => `<input name="input-${window.counter}" value="${window.counter}">`)
          .join("");

        return `
                <div id="main">
                    Counter: ${window.counter}
                    <a href="#" onclick="inc()">Increment</a>
                    <form>
                    ${data}
                    </form>
                </div>
            `;
      }
    </script>
  </body>
</html>

I also asked it to generate a React app and the numbers are pretty much the same (slower for inputs), although I have no idea if the React app is decent or not: react-app-patching.js · GitHub

When it comes to rendering speed, Safari is the only game in town.

My best recommendation is to develop in Safari, test your website on FF and Chrome, and adjust for whatever CSS differences there are.

If you are experiencing performance issues in Safari, it will be terrible in other browsers.

For people wanting to go deeper, I can highly recommend content-visibility - CSS: Cascading Style Sheets | MDN, which can increase the rendering speed of dense pages a lot.

2 Likes

Compression is the first thing I turn on. My only issue with it (not Phoenix-related) is that it’s incredibly hard to get the actual size of the payload without using packet inspectors; web browsers just show the decompressed payloads :confused:

To be honest, every solution I’ve tried has ended up being too complex to be worth it, and I’ve worked around it in terms of the product rather than the technical solution (for example: disabling validation and only doing it on save, partial forms where you only render one piece of the schema, making everything a LiveComponent). I think a lot of it stems from not being able to do partial changesets, and I get why, but a lot of data comes in a list and most of the time you only change one of the list items, yet you get all of it back. That is what causes the confusion, I believe: the granularity of something like React doesn’t exist. Again, I’m not saying it should, just that it’s not all that clear.

There are also some strange things that happen that you would think shouldn’t. I have this super trivial component as an example.

  def fa_icon(assigns) do
    ~H"""
    {App.Components.SVGIconLibrary.icon("fontawesome", @category, @name, class: @class)}
    """
  end

It just takes a name, a category, and some classes, but even when nothing has changed you still get the entire element in the diff (see below). I understand it’s flagged as dynamic, but my initial understanding was that components that don’t change don’t get re-rendered; maybe that’s just me :slight_smile:

"0": " value=\"2\"",
"1": {
    "0": " value=\"2\"",
    "1": {
        "0": "<svg class=\"size-5 fill-red-400 hover:fill-red-700 cursor-pointer\" data-icon xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 512 512\"><path d=\"M256 512A256 256 0 1 0 256 0a256 256 0 1 0 0 512zM175 175c9.4-9.4 24.6-9.4 33.9 0l47 47 47-47c9.4-9.4 24.6-9.4 33.9 0s9.4 24.6 0 33.9l-47 47 47 47c9.4 9.4 9.4 24.6 0 33.9s-24.6 9.4-33.9 0l-47-47-47 47c-9.4 9.4-24.6 9.4-33.9 0s-9.4-24.6 0-33.9l47-47-47-47c-9.4-9.4-9.4-24.6 0-33.9z\"/></svg></svg>",
        "s": 2
    },
    "2": "2",
    "3": " value=\"0.4\"",
    "s": 3
},

Now if you have a list of 100 items you get 100 icons, which are all identical to the ones already in the markup.
Then you also have things like this coming from just form handling in general:

"0": {
    "s": 0,
    "d": [
        [
            " name=\"metric[thresholds][2][_persistent_id]\"",
            " value=\"2\""
        ],
        [
            " name=\"metric[thresholds][2][id]\"",
            " value=\"02312daf-fcb3-4ee4-afeb-6cac4922b1ec\""
        ]
    ]
},

Those will always be dynamic, and that is quite the payload for something extremely basic. I get that they’re variables and considered dirty, but it’s frustrating to not be able to do anything about it. This is a classic example from a schema with has_many and using inputs_for, pretty standard. Now if you have 100 threshold values, even the absolute bare minimum on every validation call gets quite big.

You should rewrite your component and especially have as few dynamic parts as possible. In your example the whole function component is dynamic.

A function component should be mostly HTML, sprinkled with some dynamic elements, like the value of an input.

Do not use dynamic classes, write proper css instead.

An example of function components would be the core components module that is generated in a new Phoenix project.
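To make that concrete for the icon component above, here is one sketch of what “fewer dynamic parts” could look like, assuming the icons can be served from an SVG sprite sheet (the sprite path below is made up). The svg/use skeleton is static template text, so the diff only carries a short class string and an id instead of the full path data:

# Sketch only: "/icons/fontawesome-<category>.svg" is a hypothetical sprite sheet asset.
def fa_icon(assigns) do
  ~H"""
  <svg class={@class} aria-hidden="true">
    <use href={"/icons/fontawesome-#{@category}.svg##{@name}"} />
  </svg>
  """
end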

“Proper CSS” sounds fun, but how do you write proper CSS for customizable components? This is not about simple things like “hidden”, etc.

The core components are extremely dynamic; that’s why we have this entire thread… I don’t think there’s a single one that doesn’t re-render heavily, actually.
This is how the core components icon looks:

<span class={[@name, @class]} />

Literally a list with variables :smiley:

The idea of “few dynamic parts” sounds great in theory, but nobody has shown it to me in reality in a Phoenix project.