Choosing Phoenix LiveView - The difficulty of deciding between Phoenix LiveView and traditional frontend frameworks

If I need to ship a new product in a week, Phoenix LiveView is my go-to framework.

Phoenix LiveView is incredible, and an alluring choice for software leaders looking to develop applications fast. However, in my recent experience, I’ve seen teams hit some pitfalls.

The trick is to understand what LiveView excels at, and what it doesn’t.

TL;DR: LiveView is perfect for internal tools and simple apps. Skip it for complex UIs, offline-first apps, or if your team doesn’t know Elixir well.

2 Likes

A LiveView developer here (from its earliest days).

Is your team familiar with Elixir?

As opposed to what? JS or Python? It’s not as if every human is born knowing those two, such that learning any other language is some burdensome extra skill yet to be acquired.

IMO, out of all the cons you list there, the only one holding water is the continuous-connection requirement, and even that one is only partially true. LiveView does handle a loss of connection gracefully when idle; the only catch is that you can’t really change the server-side state while the connection is down, but that’s true for every web app.

What you don’t mention in your post is that LiveView has the BEAM VM underneath, meaning you can achieve things with processes WITHIN your app that you can only dream of in any other framework. That’s not to mention the savings on, say, Amazon services that you otherwise MUST pay for, because your Lego-component-friendly framework of choice runs in a single server-side thread, has no in-memory DB of its own, and doesn’t support horizontal scaling, while its vertical scaling is non-existent.
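To make the “in-memory DB” point a bit more concrete, here is a hedged, minimal sketch (module and function names are mine, purely for illustration) of the kind of in-process state the BEAM gives you out of the box, where other stacks often reach for a managed Redis instance:

```elixir
# Illustrative only: a tiny GenServer-backed in-memory key/value cache.
# On the BEAM this is just another lightweight process in your app.
defmodule Cache do
  use GenServer

  def start_link(_opts \\ []),
    do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  def put(key, value), do: GenServer.cast(__MODULE__, {:put, key, value})
  def get(key), do: GenServer.call(__MODULE__, {:get, key})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:put, key, value}, state),
    do: {:noreply, Map.put(state, key, value)}

  @impl true
  def handle_call({:get, key}, _from, state),
    do: {:reply, Map.get(state, key), state}
end
```

Whether this actually replaces an external cache depends on your deployment (state lives in one node unless you distribute it), so treat it as a sketch of the capability, not an architecture recommendation.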

The more appropriate questions you could’ve asked in your post are the following:

  1. Are you building a fat client app (for that’s the only kind of web app LiveView is not made for)?
  2. Are you building an app that’s meant to scale without rewriting it each time it gains another order of magnitude more users (b/c with LiveView you won’t have to)?
  3. Are you planning to hand most of your SaaS income over to Amazon (for with LiveView you won’t need to)?

My 2c

12 Likes

Thank you for writing this article; definitely thought-provoking. In the past I have read similar analyses of LiveView’s limitations regarding client-side interactions. What I always find puzzling is that when LiveView is critiqued for making you write a few JS/TS functions through a hook, the proposed solution is to bring in a full JS framework, complete with npm hell and far more JS to write than if you had just created a hook.

3 Likes

I wouldn’t really refer to terminating the process and destroying all of the state within it as handling it “gracefully”. It would be fair to say that LiveView could handle it gracefully, but it doesn’t.

I do think for certain use-cases having a thin client with state directly next to the BEAM/database is advantageous. However, it’s not like this is black and white; you could have a client-rendered app that interacts with a BEAM backend and still get all of Elixir’s benefits on the server.

I think this premise is a bit absurd in general, but I would also point out that a thin-client approach is more expensive on the server side than a thick client, because you have to do more work on the server.

But really I am just restating the previous point: you can use Elixir without LiveView and get those same benefits.

4 Likes

Have you tried pulling the ethernet cable out and then plugging it back in? Unless LiveView was trying to push or render during that period the process(es) would not restart. You wouldn’t even notice it happened.

Of course, I wasn’t arguing about fat-client (“client-rendered”) apps. One of my points was in fact that that’s the only case LiveView is not made for. And yes, any app that keeps sufficient state on the client to function in a stand-alone mode and only queries/mutates server state on explicit demand is in essence a fat-client app.

It depends on the particular requirements. If the app doesn’t require frequent queries/mutations, then sure, but that’s again the fat client type of app that’s not perfectly suited for LiveView.

On the other hand, if the app is chatty and you have a REST API, you can rest assured that the amount of data transferred over the wire on average (not accounting for full-page HTTP refreshes) will be far less if it’s server-rendered by LiveView than if it’s client-side rendered. It’s easy to understand why that is, and we have also proven it to ourselves in practice.

Btw, I only hope we’re not having this conversation solely because the LiveView process restarts when it detects it’s out of sync.

2 Likes

Like, it’s all datagrams underneath. I understand that TCP can survive a little blip. But if you lose the connection you lose the process, right? I would love to be wrong about this but I don’t think I am. My understanding is that there is no reconnection behavior at all and you simply get a new process.

And that is a much more reasonable and nuanced position than “LiveView is magic that will reduce your AWS bill”.

I do like LiveView, but I am not a fan of meaningless hype. Clearly you are knowledgeable and capable of carrying on a nuanced discussion of the topic, so why not do that instead? This is literally the Elixir Forum; there is no need to evangelize here.

2 Likes

True. In its current form/version, when it detects it’s out of sync, there’s no going back. But, again, it’s not the end of the world, because…

One needs to evaluate what it is that actually happens (to the app itself) in each of the cases (a LiveView app vs a fat client app) when connection is lost, and more importantly how it affects the user.

For the fat-client app, it’s simple (assuming its client-side framework doesn’t flip over too, in which case it’s gonna be far worse than a LiveView restart). But let’s assume it doesn’t crash. The fat-client app stays in its semi-usable mode until the connection is back on.

As for the LiveView app, we have the process restart, but what really bothers people is the loss of client-side state after the page reloads. The perceived pain can be substantially alleviated by keeping track of the important client-side data points on the server. For instance, in our app we track what was clicked last and where in the stream it was, so we can reposition on page reload. And not just on a reload triggered by a lost connection, but also when the user navigates elsewhere within the app and the navigation itself restarts the LiveView. When the user then decides to go back by pressing the browser’s back button, they not only get returned to the correct page, but also to the correct location in the stream (if there is one), and even within a nested stream of that stream, and so on. In practice, the pattern/mechanism used for this pretty much undoes most of the negatives of a LiveView restart.

So, if it’s all about LiveView restarting on a lost connection, in the end it doesn’t turn out all that different for the end user, since we save what’s dear to them.

Agreed.

1 Like

This article has some good bits about the infrastructure complexities and familiarity with the ecosystem, but imo it’s comparing apples to oranges. There isn’t much to compare between a FE framework like React or Vue and Phoenix/LiveView, because Phoenix is a full-stack application framework while React/Vue are UI toolkits. It would be more accurate to compare LiveView to Django, Rails, or Spring, and React/Vue to Qt or GTK.

I want to be clear that I’m not bashing the author. I actually think this article is good because this is an extremely common comparison people make, and I think a lot of FE devs hear about LiveView and think they can use it in place of a FE framework, and this article points out that trying to replace React with LiveView isn’t going to work very well. You need to replace your entire application stack with LiveView, which is not something a non-fullstack dev is going to be able to do in a day.

I also agree that BE-only devs will pick up LiveView faster than FE-only devs, but that’s because all LiveView code is server-rendered HTML. HEEx templates are just like Django, Jinja, Erb, Mustache, etc.; they’re just optimized for partial renders instead of caching the full parse result like traditional server-side rendering does. BE devs are usually familiar with this approach to web UIs.

I also think that whether it’s easier or harder to build a complex UI in LiveView or React comes down to familiarity. In my case, it’s much easier in LiveView because HEEx is just HTML at the end of the day, and I know how to write a complex UI in pure HTML. JSX is a hot mess though. I find it really hard to visualize the final output in JSX because I’m writing HTML in JS instead of writing JS in HTML. HEEx is supposedly HTML in Elixir, but its syntax and structure are much more like Elixir in HTML, which is what I’m used to with other HTML templating languages like Jinja (Python in HTML).
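For anyone who hasn’t seen HEEx, a tiny hedged fragment (the `@users` assign and its fields are hypothetical) shows what “Elixir in HTML” looks like in practice, much like Jinja is Python sprinkled into HTML:

```heex
<%!-- Illustrative only: assigns and field names are made up. --%>
<ul>
  <li :for={user <- @users} class={if user.admin?, do: "admin"}>
    <%= user.name %>
  </li>
</ul>
```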

At the end of the day, there’s no magic bullet that will let you npm install production-app. It feels like engineers just want to pick a single technology or tool and have it magically solve all their problems for them. I appreciate that the article didn’t fall into the trap of saying one tool is always the best choice, and instead tried to make a case for picking one or the other based on the project and team requirements.

As for the best choice for getting a simple interactive and/or realtime app running in a day or two, that has to be Python’s Quart framework with Jinja templating for HTML and pure JS using the browser-native websocket APIs. I know that no one will agree with me on that, but in my experience having used all of the technologies discussed so far, that stack is by far the simplest and easiest to get to production. It still requires familiarity with all the tools involved though. :slight_smile:

2 Likes

That’s interesting, do you have code snippets for that? Where do you store the previous location of a page? I feel like this could almost be a standard behaviour of liveview (or a common lib).

You can shove an entire LiveView app into a single file with 100 lines.

The requirements.txt or pyproject.toml file that you would have to write to make that work in python doesn’t even fit into those 100 lines.

So, sincerely, I have no idea where you are coming from.

Can’t share the source code because it’s not legally mine, but I can try and explain the logic.

So, every navigation link in the app is a JS structure instance.

Then we have something we call a topic_context_changer, which is a function initialized with the actual navigation link (a %JS{} structure) in question. It returns another function taking the params for the desired url replacement that are known at the time of rendering the stream items (such as the item-page and item identifiers, plus whatever else you need packaged into the url for a later resolve). Naturally, this assumes you keep track of the data page identifiers. That’s another story, and we do it on a regular basis anyway, because otherwise we wouldn’t be able to diff them and stream only the actually changed items, given that we fetch them on a page-by-page basis; but that has more to do with how the backend API is organized.

So, each stream item (or non-stream item; it doesn’t really matter for this purpose) receives an assign with the result of the function those params are passed to (in the template). The function returns a new JS structure with a JS.dispatch("replace-state", to: to, detail: %{path: path}) “prepended”, i.e. piped before the original navigation link JS, so the final JS structure used for the phx-click (or whatever else) is constructed on the fly and consists of those two basic parts: the dispatch and the push/navigate. The path itself gets constructed out of the payload params and contains all the data required for a new LiveView instance to restart with the desired “coordinates”.

In the root.html.heex we have something like x-on:replace-state="window.history.replaceState( null, '', $event.detail.path)" doing the url replace trick.
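Since the original source can’t be shared, here is a hedged reconstruction of that flow; the module, helper name, params, and path below are all illustrative, not the poster’s actual code:

```elixir
# Sketch only: composes a "replace-state" dispatch in front of the
# navigation command, as described in the post above.
defmodule MyAppWeb.NavJS do
  alias Phoenix.LiveView.JS

  # Builds the phx-click command for one stream item: first dispatch
  # an event carrying the restore path, then navigate away.
  def item_click(to, %{page: page, item_id: item_id}) do
    path = "/people?page=#{page}&item=#{item_id}"

    %JS{}
    |> JS.dispatch("replace-state", detail: %{path: path})
    |> JS.navigate(to)
  end
end
```

In a template this would be used roughly as `phx-click={MyAppWeb.NavJS.item_click(...)}`, with the root-layout listener from the post (`window.history.replaceState(null, '', $event.detail.path)`) performing the actual URL swap before navigation occurs.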

1 Like

Thanks for your answers. Hm, ok, so from what I understand, you are storing details in the URL (by replacing it) when a navigation event occurs. But can’t you just update the URL as you go? I guess that with your technique you can’t use a normal link anymore, because you need to prepend a custom action?

Do you happen to know if, by default, LiveView restores the form, the focus, and the position on screen?

It’s funny because there is a discussion happening right now on a subject that seems related: Add `:params` opt to JS.patch and JS.navigate, and opt to merge `phx-value-*`

Not sure what you mean by this. The url path is constructed as soon as all the params are known (btw, we don’t pass just the coordinates; there’s some other app-specific stuff, but that’s out of this scope).

You could use a string (the event name) for input, but then you would still need to wrap it in a %JS{} so the dispatch can take place in the same turn, before the push/navigate.

As far as I know (or have paid attention to), LiveView “only” restores the form’s client-side input values.

As far as I can tell from the suggestion, it’s about constructing the target path from the params available in the template so it’s clearer what’s going on, and I support it, but having this “back/refresh” functionality is more complex than that. There’s also the resolution part, where the url params get interpreted and acted upon (like the el.scrollIntoView stuff).

I agree it’d be nice if the whole pattern were generalized and built into LiveView. However, it’s not as straightforward as it may seem, because there are parts that may differ from app to app depending on the requirements.

For instance, in our case we don’t simply append the additional (“coordinate”) params to the url; we encode them first into a single one. It’s not for some security reason (there was no such requirement), but I wanted to make sure it’s either valid as a whole or invalid, without adding yet another checksum param, for there are length constraints there, especially if you account for the default config limitations of Kubernetes and such. Someone else might have opted for storing them completely server-side (in a GenServer or ETS, or even a frontend DB), appending only a single id, if the requirement was to hide the inner doings completely, and so on.
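One possible shape for that “single encoded param” idea (the post’s actual encoding scheme wasn’t shared, so the module name and scheme here are an assumption, not the real implementation): pack the coordinate params into one url-safe token that round-trips as a whole or fails as a whole.

```elixir
# Hypothetical sketch: encode/decode coordinate params as one token.
defmodule Coords do
  # Serialize the params map and make it url-safe.
  def encode(params) when is_map(params) do
    params
    |> :erlang.term_to_binary()
    |> Base.url_encode64(padding: false)
  end

  # Returns {:ok, params} only if the whole token is valid.
  def decode(token) when is_binary(token) do
    with {:ok, bin} <- Base.url_decode64(token, padding: false) do
      # :safe refuses to create new atoms from untrusted input.
      {:ok, :erlang.binary_to_term(bin, [:safe])}
    end
  rescue
    ArgumentError -> :error
  end
end
```

A real app might prefer a signed token (e.g. Phoenix.Token) if tampering matters; here the only goal, as in the post, is all-or-nothing validity.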

1 Like

Is that 100 lines for just one LiveView, or 100 lines for the whole project? Genuinely asking, since I don’t have a lot of experience with LiveView yet (though it’s my favorite stack so far).

My understanding is that LiveView needs a lot of deps, config, and boilerplate code in a bunch of different files. It’s definitely more concise than a barebones framework like Quart, but if we’re talking about project files like pyproject.toml or requirements.txt, then mix.exs + config.exs + runtime.exs would probably be longer in a typical project.

This is an interesting idea for a code golf project though. I threw a few hours at it and was able to get a simple chat server with password auth, CSRF, encrypted sessions, and all the other essentials working in a single file with less than 1000 lines, formatted to 80 columns with documentation and whitespace still included. The Jinja template takes up about 450 lines, and I also have all of the deps and the venv setup in the same script. I’ll probably actually try to golf it this weekend and see how many lines it would really take to run something on that stack. Never actually looked at that super objectively before. :slight_smile:

Sorry, I will try to be clearer: I was referring to the replace-state logic. To me it seems this dispatch could be skipped (since you are replacing with the known current state, so you can obviously go back) if the URL were updated each time there was a change on the current page. But I reckon that could be less flexible (and also bad practice if you have to change the URL every time there is a scroll).

Check here: mix_install_examples/phoenix_live_view.exs at main · wojtekmach/mix_install_examples · GitHub

It’s actually 83 lines.

1 Like

IIUC from the posts so far, the dispatch @DaAnalyst is talking about is used to update the query params of the URL already stored in the browser’s history via the History API, before the URL gets updated due to user navigation (see: History: replaceState() method - Web APIs | MDN). By modifying the current history entry’s URL with the scroll position before the user navigates away or experiences a crash, you can restore the exact state of the UI with the back button.

Nice! No database makes sense. I wonder how much it would take to add a db to that example? The 1000-line Python example I’m working on has a sqlite db with 4 tables in the schema and also a basic pubsub implementation, but I had to write all that stuff myself, since Quart doesn’t provide those things out of the box like Phoenix. I did cheat a bit and used a database library that I wrote, which reduces the boilerplate from SQLAlchemy. I think I’ll try to do the same in LiveView if I find some time for it. I’ve never code golfed before. It’s pretty fun!

Yes, that too is a part of what may differ from app to app. In our application we decided it was “enough” to do it on clicking the link leading to another page/context, like for example, having a list of people and clicking on one to see their page. So, to us the link is dependent on where you click (on which item in the stream) so we can then scroll that item into view when returning back.

We do not track the scroll position or anything of the kind, because the underlying stream resource may change in the meantime, with new items being added or old ones removed, rendering the last saved scroll position obsolete.

2 Likes

If you use long polling, I believe the existing process will be reused if you reconnect fast enough. The window for that is 10 seconds by default, though I’ve never tested it in practice.
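For reference, the long-poll transport is configured on the endpoint socket; a hedged sketch (option names are per recent Phoenix versions and should be checked against your release, `@session_options` being the module attribute `phx.new` generates):

```elixir
# In the endpoint module (sketch, not a drop-in config):
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options]],
  # :window_ms is the long-poll session window being referred to;
  # 10_000 ms is its documented default.
  longpoll: [connect_info: [session: @session_options], window_ms: 10_000]
```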

1 Like