The "costs" of LiveView

I’m very new to Phoenix and LiveView but I’ve been loving it so far. I’m currently using it on a traditional web app that would otherwise need to be structured as a SPA. Speed/productivity/etc. are incredible.

This got me thinking, though, about the costs of LiveView outside the context of an obvious use case.

For example, imagine a high-traffic online newspaper or blog. I love how quickly navigation flows with LiveView and how I can avoid JavaScript. That said, this speed isn’t free.

There is the memory required in keeping open a socket connection and maintaining state for each site visitor.

Also, it’s not clear to me how caching would even work (for example, will a cached page still hit the database after loading, given that mount is presumably always called?).

I guess I’m just curious about how I should think about the trade-offs. Besides memory usage and potentially issues with caching, are there any other obvious things to take into account? I love LiveView but want to be realistic about its limitations before depending on it for every project.

9 Likes

Just return static HTML. Don’t use LV for a case like that; what would be the reason? Why should I need JS (even with LV you have JS, just not JS written by you) to read, well, plain text?

Serving static HTML also answers your other questions about caching and keeping connections open.

3 Likes

This is a very interesting topic and a question that all responsible engineers should ask themselves when deciding whether to go with LV. I’ve certainly asked it myself many times. It’s even more important considering that LV is a highly promoted feature in Phoenix 1.5+, backed by a homepage redesign.

If the answer is “just return static HTML”, then by itself that suggests LV won’t scale well for these cases, which is not something we should state lightly. I certainly wouldn’t treat this question as a silly one, and I wouldn’t underestimate the paradigm shift that websockets and LV statefulness bring for many “oldschool” web devs.

For the “viral blog” case there are many scenarios in which LV would still be applicable and profitable even for displaying static blog entries, including:

  • rendering entire website via LV e.g. to reduce HTTP connection times
  • rendering typical blog entry interactions e.g. like button with live counter
  • rendering website-wide interactions e.g. notification or search popover in navbar

From my perspective it would be great to know up front whether LV scales well out of the box (and how well), what options there are for optimizing it (like caching), and whether there are cases where it obviously won’t work due to scalability limitations. I guess what we really need is a comprehensive benchmark similar to the one about reaching 2 million connections with a single Phoenix instance.

I’m not an expert at doing such benchmarks (esp. concerning websockets), but even without one we could measure the minimal memory footprint of a single active connection without extra assigns and use that to calculate a best-case “users per GB of RAM”. From then on it’d be all about understanding the cost of each assign and keeping that under control with temporary assigns etc., especially in LVs that contribute to high-traffic pages.
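The “users per GB of RAM” estimate above is simple arithmetic once you have a per-connection figure. The ~40 KB used below is purely an assumed placeholder for illustration, not a measured LiveView number; you’d substitute what you actually observe for your own app (e.g. via `:erlang.process_info(pid, :memory)`):

```elixir
# Back-of-envelope estimate of connections per GB of RAM.
# ASSUMPTION: ~40 KB per idle connection; measure your own app instead.
bytes_per_connection = 40 * 1024
gigabyte = 1024 * 1024 * 1024

users_per_gb = div(gigabyte, bytes_per_connection)
IO.puts("~#{users_per_gb} connections per GB of RAM")
```

The real cost per connection grows with your assigns, which is exactly why temporary assigns matter on high-traffic pages.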

In the end, we may always approach LV as an “MVP supercharger”: a solution that allows us to create dynamic websites at an unprecedented pace, but that always ends up converted to client-side/SPA code when the scale grows. But perhaps it doesn’t have to? With regular Phoenix, there’s a great chance that (depending on the case, of course) you’ll hit DB scalability issues ten times before you outgrow the webserver (as discussed e.g. here: Scaling LiveView for low latency in multiple regions), but is that still the case with LV?

10 Likes

There are a few things I would pay particular attention to when thinking about using LiveView:

Do we need offline support?

Most likely the answer is no, but if it’s yes then you can’t use LiveView.

What infrastructure is the application going to be running on in production?

How does it handle websockets?
Are there memory restrictions?
Are you running clustered or single instances?

Yes, LiveView will have limitations but the architectural limitations are just as important.

How will we deal with latency?

This one is extremely important and easy to forget if you only develop locally or on a high-speed connection. Simple dynamic interactions like opening a dialog can suddenly become really jarring because of the latency involved. Alpine.js can help in that specific case, but it means you might not be able to solve everything with LiveView alone if you want a great user experience.

How will we handle connection failures / application updates etc?

Because the state is on the server, how are you going to persist this state and load it back? The application might be updated, the server might crash; anything can happen. What do you then do with the forms that people are filling in at that exact moment, and things like that? This is of course not a LiveView-specific problem, but since it’s different from how most web applications work these days, it’s wise to pay attention to it from the start.
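To make that concrete, here is a hedged sketch of one way to keep in-progress form state outside the LiveView process so it survives a crash or a redeploy. `DraftStore` is a hypothetical helper module (it could be backed by the database, ETS, or Redis), not a LiveView API:

```elixir
# Hypothetical: persist the draft on every validation so nothing is lost
# if the LiveView process dies. DraftStore is an assumed helper module.
def handle_event("validate", %{"post" => params}, socket) do
  DraftStore.save(socket.assigns.user_id, params)
  {:noreply, assign(socket, :draft, params)}
end

# On (re)mount, restore whatever the user had typed before the disconnect.
def mount(_params, %{"user_id" => user_id}, socket) do
  draft = DraftStore.load(user_id) || %{}
  {:ok, assign(socket, user_id: user_id, draft: draft)}
end
```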

Scaling LiveView would actually be the least of my worries. You should be doing load/performance tests specific to your application anyway. Hardly anyone goes from 100 users to 1,000,000 users overnight. And if you didn’t expect that load in the first place, then most likely the rest of your architecture will fail before Elixir/Phoenix/LiveView does.

You mentioned high-traffic online newspapers and blogs. If the content is basically static, I would do as much as possible with static HTML and just use a CDN to host it. Any framework would be overkill in that scenario. Maybe you want a personalized newsfeed; in that case the most likely bottleneck will be retrieving all the personal content on the backend, so you just have to test it.

I don’t know what caching you’re worried about exactly. LiveView is for dynamic pages. This means you normally want to fetch the latest state once you mount again. That’s no different from Vue/React apps and such. If you want to avoid database calls you should use a caching layer in the backend itself.
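As a rough illustration of such a backend caching layer, here is a minimal ETS-based sketch (the module and table names are made up for the example). The loader function, e.g. a DB query, only runs on a cache miss, so remounting a LiveView doesn’t have to hit the database every time:

```elixir
defmodule PostCache do
  # Create a shared, named ETS table to hold cached posts.
  def start, do: :ets.new(:post_cache, [:named_table, :public])

  # Return the cached value, or run the loader (e.g. a DB call)
  # once on a miss and cache its result.
  def get(slug, loader) do
    case :ets.lookup(:post_cache, slug) do
      [{^slug, post}] -> post
      [] ->
        post = loader.(slug)
        :ets.insert(:post_cache, {slug, post})
        post
    end
  end
end

PostCache.start()
# First call runs the loader; subsequent calls are served from ETS.
post = PostCache.get("my-article", fn slug -> %{slug: slug, title: "Hello"} end)
```

In a real app you would also need cache invalidation and expiry, which is why people usually reach for a library rather than raw ETS.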

What your system can handle will depend on the specific application and the complete infrastructure. That’s why I would say that LiveView itself must be dependable when you use it. It shouldn’t break suddenly. If you should use it depends on the situation.

7 Likes

Think of it this way: would it really make sense to build an online newspaper or blog even with a server-side API and React? The answer is no, and that’s why a lot of them use static site generation. A good example is https://www.smashingmagazine.com and you can read about their transition here: https://www.smashingmagazine.com/2017/03/a-little-surprise-is-waiting-for-you-here

You need to choose tech that is best suited to what you are building.

1 Like

Yep, and you can test how jarring it is: LiveView has a latency simulator. A lot of the Phoenix Phrenzy demos really didn’t work well here in Australia because of the latency (mostly the games).

3 Likes

How will we deal with latency?

You might be able to handle this with a globally distributed database like Cosmos DB and separate LiveView apps in different regions.

Minimizing latency is always a good idea, but it’s not about getting the data; it’s about user interactions that normally don’t involve a call to the server.

Opening a dialog with LiveView means sending a click event to the server, the server sends data back, and then the dialog opens on the client. It’s these kinds of roundtrips that are the issue here. In a Vue/React-type app you don’t need to go to the server for something like that. This is why you regularly see Alpine.js mentioned in combination with LiveView: it can handle certain states on the client without having to make those server calls.
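For completeness: newer LiveView releases (0.17+) also ship Phoenix.LiveView.JS commands, which execute entirely in the browser, so a toggle like this involves no roundtrip at all. A hedged sketch (the element ids and function name are made up for the example):

```elixir
alias Phoenix.LiveView.JS

# JS.toggle/1 is executed client-side by LiveView's JS command engine,
# so opening the dialog involves no websocket roundtrip.
def dialog_button(assigns) do
  ~H"""
  <button phx-click={JS.toggle(to: "#my-dialog")}>Open dialog</button>
  <div id="my-dialog" style="display: none">Dialog content here</div>
  """
end
```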

So there are ways to minimize or get rid of that latency but you have to be aware of it from the start. In my experience most developers only take latency into account later in the project (if at all). With LiveView especially this could really bite you.

If you have to do a server call anyway, for instance to retrieve some data, LiveView might even be faster than a normal SPA because the connection to the server is already open.

5 Likes

It makes sense to use something like Alpine.js in situations that need it. I don’t think you can get a really good LiveView experience for users in internet-facing apps by completely abandoning JavaScript, but you should be able to get by writing only a very small amount of it. If you are writing an app for some company’s intranet, you might not need any JavaScript at all.

1 Like

I totally agree. In many cases using just LiveView will be perfectly fine and I wish I could use it on the projects I’m currently involved with. It’s just a good thing to know what trade-offs are being made and what the potential downsides are.

If it can handle a high-traffic website is also a fair question. Let’s leave blogs and such out of it. Maybe you do have a highly dynamic site. We know it runs on the BEAM and we already know how well Elixir apps scale because of that. Handling lots of processes, a single process not hanging the server…the BEAM helps a lot here. Clustering is also supported. The only LiveView specific things I can think of that I might want to know are:

  • what’s the minimum memory usage per connection (just as a base reference, I’d test it with the app anyway)
  • does LiveView have an architectural limitation because of the design or does it scale with the available resources. If it scales with the resources (which I expect it does) then there’s nothing LiveView specific to deal with regarding to scaling.

Let’s say you need more servers than you would with a normal SPA because of the extra memory requirements. Depending on the situation this might actually be more cost-effective.

6 Likes

I also totally agree with that. It would nice to have Elixir as an option at client side well. I hope at some day https://github.com/lumen/lumen gets finished and Elixir gets some awesome client side ui framework developed.

2 Likes

My personal opinion is that the answer to “I don’t want to write JS” (a camp to which I belong) lies in transpiling to JS or WebAssembly (not server pushes like LiveView).

LiveView has use-cases, but I see more potential in interoperable (even crippled) Elixir running in the browser. Although less mature than LiveView at the moment, I feel those solutions have more potential, and will ultimately leap-frog LiveView.

If (when) WebAssembly supports garbage collection, it seems entirely reasonable that lightweight beam (maybe written in Rust?) could run Elixir code; similar to what the Rust community has achieved with wasm-pack and wasm-bindgen. Without GC, the runtime could get quite bloated (see Blazor).

So I could be wrong, but LiveView feels like a stop-gap to the next big evolution of web; which leverages WebAssembly. But hey, even if that’s true, stop-gaps serve a valuable purpose.

2 Likes

I feel like you didn’t check link I provided. Lumen is just what you want “An alternative BEAM implementation, designed for WebAssembly”. Landing page is here https://getlumen.org

1 Like

I’ve found that using modern CSS libraries can almost entirely reduce the need for JavaScript, aside from one or two “toggle css on button” items. I guess that’s what the “alpinejs” is used for. Even better with the Space Toggle trick you could potentially even remove that! Though not sure that trick would work with menus.

For any items that would require an SPA to hit a server API endpoints, I’ve found that LV often feels faster since the the websocket maintains the connection and you generally don’t need to send as much data, only the rendered response.

3 Likes

You know what they say about assumptions…?

I’ve been following lumen for a while (as well as other languages targeting WebAssembly). The problem is the maturity of these solutions, as well as their bloat. Runtimes languages (whether it’s JS, or Elixir, or .NET) require a runtime to be bundled into / downloaded with the app; which can get quite heavy. A lot of that has to do with garbage collection (per above). So languages like Rust are getting more traction in WebAssembly because their “runtime” is so small (essentially non-existent). Once WebAssembly has GC, a lot of the bloat from the runtime can be removed; making GC languages more practical for WebAssembly.

The key is a “lightweight runtime”; not just a WebAssembly version of the Beam.

Elixir ever having really light comparable to Rust WASM runtime is highly unlikely even with WASM GC support. You can put that bundle to CDN. It shouldn’t that big of a deal for most use cases.

This is using Live View for the wrong thing, and this trend of doing so comes for all that demos for games, and menus, etc… Developers wanting to use Live View for real just got it all wrong from the begin, due to this type of examples.

Only use Live View when you need to perform business logic on some data. When is only about user interaction in the browser NEVER use it.

Sometime ago I replied to another thread with a more detailed answer about this problem of using Live View for the wrong things:

An Elixir/Erlang runtime doesn’t need to be the size of the Rust one, but there is a point where size is a deterrent.

I could be mistaken, but I don’t think the runtimes are separate wasm binaries. Rather, they are baked into each wasm file. If that’s correct, then hosting on a CDN doesn’t make much of a difference.

I think the AssemblyScript binaries are getting down to pretty small minimum sizes (~600B). Not really sure how small the Lumen binaries are, and whether they include any kind of tree-shaking.

Yes, people have been creating all kinds of examples to see how far they could push it. There’s only one way to see where things break down and that’s to use it in anger. I consider that a good thing.

Personally I don’t agree with general statements like “never use it for user interaction”. Always and never are words I try to avoid. It’s all about context. Maybe you have an admin ui or very low latency connection where the trade-off is worth it and it’s good enough. Is saying that something is right or wrong without knowing the context any better than someone blindly following the hype?

All we can do is try to educate people so they can make an informed decision.

8 Likes

I tried to find out is it possible you could release runtime as separate WASM file but I don’t think it’s currently possible. Lumen includes or will include (not sure about its current state) dead code elimination according to this talk https://youtu.be/uMgTIlgYB-U?t=797

1 Like