Reduce total HTML payload size with LiveView

Hi all
As my app gains more features, it has become heavier and heavier, to the point where this is starting to become an issue.
On certain pages, the initial HTML payload is ~2.5 MB, and the first message on the websocket is ~1.8 MB.

The problematic pages essentially work like this:
I render ca. 100-1000 rich items. Each of them contains a bunch of initially hidden HTML, e.g. modals, complex menus, etc.
I use JS.toggle etc. to show/hide these menus and modals as needed, avoiding a server roundtrip.
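That pattern looks roughly like this (a sketch; element ids and component names are made up):

```elixir
# Sketch of the client-only toggle pattern: the menu is pre-rendered
# (hidden) and JS.toggle shows/hides it without a server roundtrip.
alias Phoenix.LiveView.JS

def item_menu(assigns) do
  ~H"""
  <button phx-click={JS.toggle(to: "#menu-#{@item.id}")}>Menu</button>
  <div id={"menu-#{@item.id}"} class="hidden">
    <!-- pre-rendered menu content, toggled purely on the client -->
  </div>
  """
end
```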

It’s not yet a big problem, but I fear it will only get worse as I cram more features into those pages.

I have identified one somewhat easy optimization which will probably have a big impact: Icons
I’m using lucide_icons, which is great, but each time an icon is used it renders the full SVG. Each menu item and button of my rich items uses an icon, so a single item renders ca. 20-30 icons. My plan is to switch to Iconify for Tailwind; I expect this to reduce the total payload size drastically.

Are there any other ways to reduce overall html size?
Should I start using something like live_vue and define these items as Vue components? Web components?

(I do not want to introduce streams / virtualized lists to render fewer items here; I already use those where it makes sense. Also, I know about gzip and that this repetitive HTML compresses very well, but I still want to reduce it.)

1 Like

I’d argue that prerendering modals for each item in a list is not a great idea. Yes you don’t have the server roundtrip, but you’re rendering 1000 modals where the user probably interacts with a handful.

5 Likes

But what’s my alternative without introducing a server roundtrip? I’m not gonna compromise the UX for these interactions. If I were using a client side UI framework, this wouldn’t be a problem.

I realise that this is probably a fundamental limitation of server side rendering, so I’m looking for techniques to mitigate it.

I’m open to approaches that introduce a client-side library, so I’m also looking for experiences with LiveVue, LiveSvelte, web components or similar. Do they help with this issue?

Loading it on mouse proximity to the button can work on PC, but not on mobile.

There you could load based on distance from the viewport?

1 Like

So, we’re talking about 100 to 1,000 items with hidden menus, icons, and dialog boxes, versus occasionally opening some of them with a server round trip. I’m not sure, but I don’t think a server round trip would be worse. Maybe it would even be better?

7 Likes

It depends on where your users are, of course, but I think people simultaneously care too much and too little about server roundtrips with LiveView. If your users are close and menu interactions are sporadic, then it’s websockets and who cares about round trips (if it’s the simpler solution). If your users are distributed and you have 1000s of menus and popups, it’s possible LV is the wrong solution, or rather LV with a frontend framework (as OP has brought up) would probably be better. Though I do like the idea of conditional loading based on mouse position. At work we currently render all context menus up front, though it hasn’t been a problem because the menus are very small (1-5 items) with a fairly hard limit per page… YET.

1 Like

Can you clarify how that is solved in a client-side UI framework? In that case you would be sending the full HTML (with all the modals, dropdowns, etc.) on the first page load, the same way you are already doing with LiveView right now, right?

Regarding the suggestion to just have the round trip, I agree with you that this is not a solution, simply because every system’s constraints and requirements are different. For some people a round trip is no big deal; for others it is. So a proper solution to the problem is better than a workaround with big implications (again, in some cases) IMO.

Talking about workarounds: if the issue you are worried about is not actually sending ~2 MB over the wire, but having it all arrive in one go (meaning the user needs to wait some time until the page is actually rendered), then maybe you could use LiveComponents with a “fake” async load as a workaround to split the page load into parts.

What I mean is that you can put a logical part of your system inside a LiveComponent and wrap it in an .async_result component that starts in the loading state; then you call assign_async with a load function that immediately returns an {:ok, ...} result (no real async work).

That way, the component first renders as a simpler loading UI, and then, after the websocket is connected, it sends its content to the client.
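A minimal sketch of that idea (component, module, and assign names are made up):

```elixir
defmodule AppWeb.HeavySectionComponent do
  use AppWeb, :live_component

  # "Fake" async: the load function returns immediately, but because it
  # runs asynchronously, the heavy markup is only sent after the initial
  # render, over the already-connected websocket.
  def update(assigns, socket) do
    items = assigns.items

    {:ok,
     socket
     |> assign(assigns)
     |> assign_async(:loaded, fn -> {:ok, %{loaded: items}} end)}
  end

  def render(assigns) do
    ~H"""
    <div>
      <.async_result :let={items} assign={@loaded}>
        <:loading>Loading…</:loading>
        <.item :for={item <- items} item={item} />
      </.async_result>
    </div>
    """
  end
end
```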

3 Likes

Are you sure this is really a problem after gzip? If they really are the same icons, it should compress very well. The same goes for all the duplicated modals and tooltips

3 Likes

Can you have a single modal/dropdown for each type of item and have each item use that?

1 Like
  • For the icons: see how a stock Phoenix app does icons with Tailwind CSS. SVG icons (from Heroicons, but this could be adapted to other sources) are compiled into the CSS bundle, so they can be cached together with the rest of your CSS and don’t pollute the DOM.
  • For things that repeat on the page, LiveComponents are the go-to API to reduce diffs (that’s not the size of the first rendered HTML though). LiveView 1.1 shipped improvements to for loops in templates, so upgrading (and reading the release notes) can also help.
  • For the first payload, it comes down to not sending parts of the page users are unlikely to use. Someone mentioned using assign_async etc. You can also reduce the very first payload by conditionally skipping a bunch of things when connected?(socket) returns false, with the tradeoff of possible layout shifts and a delay until the live socket connects and the missing data comes in. You could use that to delay loading modals and below-the-fold content, though it highly depends on your page layout and usability patterns/requirements.
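That last point could be sketched like this (the loader function is made up):

```elixir
# Sketch: keep the dead render cheap and load the heavy assigns only
# once the websocket is connected (i.e. on the second, live render).
def mount(_params, _session, socket) do
  # load_items/0 is your own data loader
  items = if connected?(socket), do: load_items(), else: []

  {:ok, assign(socket, items: items, loading?: not connected?(socket))}
end
```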
6 Likes

Thanks for all the good suggestions!

First off, @tommasoamici was right to question me regarding gzip, as I have not measured anything properly.
Now that I have, I noticed that gzip was only enabled for static assets!
I’m surprised that Phoenix doesn’t actually enable compression by default; it doesn’t even leave it commented out as a hint on how to enable it. After some searching I found I needed to set
config :app, AppWeb.Endpoint, http: [compress: true, ...].
And for websocket I need to set:
socket("/live", Phoenix.LiveView.Socket, websocket: [compress: true, ...])

With compress: true the payload went from 2.5 MB to 65 kB.
I do not know how to inspect the numbers for the websocket, but I hope it changed similarly.

Is there a reason that these settings are off by default in Phoenix? I think they should be always on, even in dev!

Reducing uncompressed size might still provide some processing overhead reduction, so I’ll still try some of the suggestions.

I think that falls into the category of virtualized lists, which always come with a big complexity cost and potential for tricky bugs, e.g. when a user scrolls fast. So I’d avoid this unless absolutely needed, maybe when approaching 10k items.

I guess it depends a bit on whether you pre-render on the server; if you don’t, you only send the templates for these modals plus the data in a more compact form. But that data is then less compressible, so with proper compression the difference may indeed be very small.

That’s an interesting idea, thanks! I’m actually already doing something very similar for the very heavy modals where I load their content only once opened. I will experiment with this a bit.

How would that work? They are not completely identical; they at least differ in their phx-value etc. My list items look roughly like this:

def item(assigns) do
  ~H"""
    <div>
      Stuff
      <.drop_down_menu id={@item.id} />
      <.modal_1 item={@item} />
      <.modal_2 item={@item} />
    </div>
  """
end

Yup, that’s why I was thinking about using iconify, as it also works as a Tailwind plugin and I can keep using Lucide, since that’s one of the many supported icon sets it provides.

2 Likes

We also have iconify_ex v0.6.1 — Documentation

1 Like

I now switched to Iconify via the Tailwind plugin.
This brought the size down to 1.5 MB (57 kB compressed): a sizable change uncompressed, but quite small once compressed.

Next I’ll probably experiment with the assign_async idea, but this is a more difficult change, so it will take some time until I get results.

Hi, I also wanted to optimise a LiveView application some time ago. My solution included modals with complex logic and other useful things. I didn’t want to have everything in one place, so I searched for a way to split everything into bite-sized pieces.

I looked for inspiration in the Phoenix generators, especially phx.gen.live, which generates its own LiveViews.

I liked the idea from the generator output: include modals in the routes. For example, we have a page that lists products. If we want to show a modal with a product, we can create a new route for it. When a person clicks the button, we navigate to that route, which may show our modal on top of the list of products from the previous page.

The trick is in the way we navigate to the new route. We can either do a full page reload, or fetch only the diff. You can look into more details about live navigation here.
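A sketch of the route-based approach (module, route, and function names are made up):

```elixir
# In the router, one live_action per page state:
#   live "/products", ProductLive.Index, :index
#   live "/products/:id/details", ProductLive.Index, :show_modal

# handle_params runs on every patch navigation, so opening/closing the
# modal is just a URL change plus a small diff:
def handle_params(params, _uri, socket) do
  case socket.assigns.live_action do
    :show_modal -> {:noreply, assign(socket, product: get_product!(params["id"]))}
    :index -> {:noreply, assign(socket, product: nil)}
  end
end

# In the template, the modal renders only for the matching action:
#   <.modal :if={@live_action == :show_modal}> ... </.modal>
#   <.link patch={~p"/products/#{product}/details"}>Details</.link>
```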

I have found an example of this approach with modals.

So, the idea is to have a separate URL for each state of your application. If you open a modal - just navigate to the relevant URL. If you want to close the modal - navigate to the page where it is closed.

I see these pros of such an approach:

  • Simpler application logic - you do the relevant thing in a relevant place.
  • You need to load only the data required by your page. If you want to show a modal on top of the list of products after a page refresh, you will need to load data for the list as well as for the modal. If your page has multiple modals, you will need the data only for the current one.
  • Separate pages may be easier to test. At least it helped me with my test cases.
  • Page refresh works. If you open a modal, it will remain open after the page is refreshed (or after you have changed some code and LiveReloader refreshed the content for you).
  • May be easier to debug (or reproduce bugs). My LiveView code sometimes crashed due to my own bugs. For example, I needed to debug something inside a modal: open the modal, then do some steps to make it crash. LiveView reloaded the page afterwards, and I had to repeat the same steps until I fixed the error. It is much faster to keep the modal open across a page refresh. Besides, if I know the exact URL where my page crashed, bugs are easier to reproduce.
  • Back button works for your browser. A person may navigate back to the previous state of the application.
  • You can share a link to specific feature or to specific modal.
  • The URL may represent the current state of your app, for example pagination data or filters. These will also survive a page refresh.

Here are the cons:

  • You have written that you want to avoid server roundtrips. This approach does require roundtrips, so it possibly will not work for your case.
  • If you want recursive modals (opening a modal inside a modal), it is better to think about how to structure the URL for that case. Usually it is more a bug than a feature.

Another suggestion is to render a limited number of items. The more things you need to show, the longer it takes to fetch them from the database, render everything, and send it to the client. Usually apps use pagination (which can be part of the URL) or infinite scroll (which can be implemented via LiveView streams).
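For completeness, infinite scroll with streams could look roughly like this (the query function is made up):

```elixir
# Sketch: only one page of items is held in LiveView state at a time;
# subsequent pages are appended to the stream on a "load-more" event.
def mount(_params, _session, socket) do
  # list_items/1 is your own paginated query
  {:ok, socket |> assign(page: 1) |> stream(:items, list_items(page: 1))}
end

def handle_event("load-more", _params, socket) do
  next = socket.assigns.page + 1
  {:noreply, socket |> assign(page: next) |> stream(:items, list_items(page: next))}
end
```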

LiveView is nice, but usually it is not intended for offline-first apps and requires an Internet connection. As I remember, there were some libraries to support that, but I’m not sure it is needed here. It is better to test whether roundtrips are actually so bad for your app; for example, you can do it with a latency simulator. If a user has a very poor internet connection, every roundtrip adds a cost, but that does not automatically make it bad. It would be strange to wait for a server response to animate a hover, but it is expected to wait some time until new data is downloaded. For example, you can open a modal on the client side but show a spinner inside it until the data is fetched from the server; the same applies to fetching extra data. By the way, the more data you transfer over a poor internet connection, the longer it takes to receive a response.
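The modal-with-spinner idea could be sketched like this (event, assign, and helper names are made up):

```elixir
# Open the modal instantly on the client, and in the same click ask the
# server for its content; a spinner shows until the data arrives.
alias Phoenix.LiveView.JS

def detail_button(assigns) do
  ~H"""
  <button phx-click={JS.show(to: "#detail-modal") |> JS.push("load-detail", value: %{id: @item.id})}>
    Details
  </button>
  """
end

def handle_event("load-detail", %{"id" => id}, socket) do
  # fetch_detail/1 is your own loader; until it assigns @detail,
  # the modal body renders a spinner instead of content.
  {:noreply, assign(socket, detail: fetch_detail(id))}
end
```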

Sometimes I like to optimise things in my pet or side projects, but in real ones I try to look at this more pragmatically. For example, the requirements are different for a site where I am the only user on my dev machine versus a site where I expect 100-200 requests per second from the whole world. This does not mean it is OK to write slow apps; just measure the exact situation and try to meet the deadlines.

You know your use-case better than me, so can evaluate whether these ideas are helpful.

1 Like

Thanks, those are some great resources!
But for me, a single round trip is a no-go here, so I can’t use that approach. I would love to also update the URL to get the UX benefits, but that is indeed not easy with my render-everything-and-JS.toggle approach.

I would just love it if there was an easy way to get the best of both worlds: instant response times on the client plus the awesome DX of LiveView. Maybe I really want Hologram?
(Although ideally I would want Hologram only on the client and keep using LiveView on the server, as it looks like Hologram does away with per-connection server state, which I think is a step back and will make real-time updates harder. But I could be wrong, I haven’t played around with it yet.)

1 Like