Let's discuss Phoenix LiveView



I evaluate it based on a few things:

  • Will it make my life easier as a developer for the majority of apps I might create?
  • Will it mean I can build rich/cool user experiences without having to write much JS?
  • Will it mean it might take less time to build my app/s?
  • Will it mean I don’t have to keep two (or more!) languages in my head while building my app?

I don’t think I’m the only one who loves the idea of LiveView either. Drab, the Phoenix library that is similar to it, actually has the largest thread on this forum in terms of total number of posts (plus loads more in threads tagged with it)… which tells me that there is a lot of interest in this type of library :003:


There is a lot of interest in this field, judging by the discussions I’ve been having with my friends since the 20th century :wink:

I’ve been exploring this field for ages. The first promising thing I found was Nagare, then Volt. Then I tried to build something similar myself on top of Rails, but what scared me off was concurrency and threading in Ruby. Phoenix + Channels was the answer to those issues, and that is how Drab began.


I remember Volt! And Opal! And Ruby Fire! And all the excitement they generated! :wink: :003:

The interest in Volt waned after Phoenix was announced/started to gain traction - I know for me that’s when I lost interest in it anyway.

Things could well have been very different in the Ruby world if DHH had dropped coffeescript for Opal and RubyMotion had been made OpenSource - but that’s a topic for another thread (or another forum) perhaps :lol:



Yep. I am neck deep in that…


My understanding of LiveView is that it emulates the Elm architecture: the browser is responsible for displaying the view (HTML/CSS) and sending events to the server, while the server mutates the state in response to these events, re-renders the view, and pushes it to the browser to display. The simplicity comes from the fact that state is held and rendered by the server, with the browser displaying the rendered view and emitting events.
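The loop described above can be sketched in a few lines. To be clear, this is purely illustrative: none of the function or event names below come from LiveView’s actual interface; it’s just the shape of the event → update → re-render model, written in plain JavaScript so it’s easy to see:

```javascript
// Illustrative sketch of the "state on the server" model described above.
// None of these names are LiveView's API; this is just the shape of the
// event -> update -> re-render loop.

// Pure state transition, run on the server for each browser event.
function update(state, event) {
  switch (event.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "decrement":
      return { ...state, count: state.count - 1 };
    default:
      return state;
  }
}

// Server-side render: the browser only ever receives this markup.
function view(state) {
  return `<div>Count: ${state.count} <button>+</button></div>`;
}

// One turn of the loop: an event arrives from the browser, the state is
// updated, the view is re-rendered, and the new markup is pushed back down.
function handleEvent(state, event) {
  const next = update(state, event);
  return { state: next, html: view(next) };
}
```

The browser’s only jobs in this model are to display `html` and to send the next event; all state lives on the server side of the loop.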

I can see this approach will fit well for certain types of web applications that are somewhere between mostly static content with some dynamic content and full-featured SPAs. I’m at least looking forward to trying it out to see how it works in reality and exactly where it might be used.


While I can empathize with your points, we as developers can get overly involved with our own short-term comfort and “coolness” (programmers know the benefit of everything and the tradeoffs of nothing) - sometimes to the detriment of our users and our own long-term interests.

which tells me that there is a lot of interest in this type of library.

Yes, but its direct association with Phoenix also muddies the waters more.

My assumption is that @grych had his own valid use case for Drab - because he knew the constraints he was working against. He was generous enough to share his efforts via open source with others in a similar position and to further develop it into a mature package. It is there for those who wanted it and who were willing to go to the additional effort to make it work for their circumstances.

LiveView being part of Phoenix may have quite a different effect.

Let’s remember Brunch, shall we? It has absolutely nothing to do with Phoenix. It’s there just for asset management - something to push neophytes in a more progressive direction, to use modular JavaScript rather than drowning in a swamp of legacy-style jQuery code.

What happened? Brunch was constantly mistaken for an integral part of Phoenix - despite constant assertions to the contrary.

LiveView is a legitimate extension to Phoenix, but it now becomes even more involved on the client side, beyond channels. Anyone in this topic knows it’s optional. But from the outside it could easily be judged as yet another weird (i.e. non-mainstream) way of doing web applications.

“We are a React shop; we’ve heard Phoenix does this LiveView thing for the client so it’s not really relevant to us.”

I will admit that there is a real hole when it comes to basic web applications that were historically created with jQuery - something to go together with the EEx aspect of Phoenix - but I don’t think LiveView is the solution for that space (maybe there will be some minimal JS library from Matt Frisbie for Professional JavaScript for Web Developers, 4th Edition - but that seems too much to hope for).


@peerreynders I suspect you have some experience into how the pendulum shifts back and forth in this industry and might remember some dumb terminals and green screen apps in the past. Those kept all the state on the server and the server told the client what to display. Granted the dumb terminals were typically in a LAN or a private network managed by the company rather than the open internet, but the concept is similar. The real question to me in this case is how much the public internet changes those green screen approaches (now that it’s feasible to have stateful connections).


It’s not part of Phoenix, which I thought was pretty clear in the keynote. Granted, the fact that Chris is the one promoting it has an effect, but I expect it can remain distinct.


This is true, we are taking the react model (or elm architecture) and placing it on the server.

This is the exact model we take with LiveView.

So let me preface by asking: are client apps increasingly necessary as clients become more powerful? Absolutely. That said, do the majority of applications require this complexity for the UX the developers are after? I don’t think so - unless folks are after the SPA use cases I mentioned in my talk. With solutions like LiveView I think a lot of folks can avoid the complexity and still get the UX they want with way less effort, which I imagine is why this is an exciting way of writing applications for many of us.

I don’t know about the ignorance-of-HTML/CSS point, but I think in general a large part of the webdev community has reached prematurely for client frameworks for any kind of rich interaction. I think a lot of these frameworks like react are great to “sprinkle” in for rich features, but once react/ember/elm/et al is in place, it becomes increasingly tempting to adopt your point of view, where the separation of client <-> server is the True Model and the natural path of web development. The issue is that we’ve been running this experiment for years now with client-side apps, and so far, from what I see, the folks best served by this added complexity are consultancies who get to bill for and maintain all this churning complexity. Who here has had the privilege of upgrading or maintaining Angular 1, for example? So you absolutely can produce incredible client applications with the great frameworks we have available today, but the cost we’ve taken on to make it happen has been enormous, and I think folks are more aware of this than ever. It makes sense for some apps, but I know a lot of folks who look back and certainly find it not worth it in retrospect.


I see where you’re coming from, but I view it differently. I’ll break it down in detail when I get some time.


If you aren’t targeting offline support, which the vast majority of us aren’t even if we are building SPAs, then this is what we do already today. Whether we’re flinging JSON over or HTML, it’s the same interaction on the wire. We can debate if patching the DOM from fragments on the server is a valid approach or not, but we’re all flinging bits over the network all the time. I built Phoenix to build real-time apps, so whether it’s phoenix.js with channels or LiveView, we’re pretty great at flinging those bits :slight_smile:


I genuinely believe that you are worrying needlessly. Not only do I think there will be very few people who feel like that (because if they know what React is and know what Elixir and Phoenix are, they will almost certainly know LiveView is optional), but what you describe as “yet another weird way” will, I’m sure, be seen by hordes as fresh and exciting.

That article I wrote about Volt/Fire/Opal was one of the most popular on my blog, not only in terms of number of views but also in terms of being tweeted and retweeted and the conversations that ensued; it generated a massive amount of hype, and the excitement around those technologies was very, very real.

This is one of the reasons why I believe LiveView will be one of Phoenix’s killer features… and I think we’re already seeing a glimpse of this by all the conversation and excitement we’ve witnessed on the forum (and I’m sure IRC/Slack), Twitter, and other places like HN and Reddit etc.


What concerns me is that this time around Amazon/Google/Microsoft are pushing (currently) the other way, increasing complexity on the client-side for integrating data from a variety of network sources while LiveView is going in the opposite direction (without being necessarily “disruptive”). The business case to push state to the client is compelling because they don’t have to pay for the client - they have to pay for the server Phoenix runs on. Again “holistically” there are many use cases where “Phoenix-on-the-server” is a more effective solution - but I’m not sure that it is that easy to make an a priori case for it.

My point is not to criticize your effort… I was just hoping that someone could clue me in on what the excitement is all about. We already have PWAs, SPAs, etc., besides “old school web pages”. We constantly talk about “JS framework fatigue” - what about “101-ways-of-doing-a-web-application fatigue”? Yes, try new things, but can we truly let go of some of the other approaches if we adopt this one? I somehow doubt it.

but I think in general a large part of the webdev community has reached prematurely for client frameworks for any kind of rich interaction.

No doubt - but I think in large part that was because of the perception that adopting some random JS framework would be the path to some kind of front-end nirvana. HTML/CSS is tedious stuff, and throwing a JS framework and Bootstrap at it doesn’t change that.

Whether we’re flinging JSON over or HTML it’s the same interaction on the wire.


Not all interactions on the page that change its appearance need to involve the server. With server-based rendering, however, I imagine everything needs to go through the server, which is a lot more chatty and more prone to stuttering (the network connection is likely the weakest link) than an application implementing a protocol that prefers much coarser-grained data interactions - an approach I wouldn’t equate with targeting “offline support”.

Now LiveView wouldn’t be pushing as many packets as something like a live streaming game server - but still.

We can debate if patching the DOM from fragments on the server is a valid approach or not

Even if you supply a functional equivalent of an “X Server in the browser” - it will still be yet another way of implementing a web application - only this one is locked entirely to the Phoenix ecosystem.

Reducing server cost by, let’s say, 85% by being able to ditch most Node.js Express servers for Phoenix would be a much more straightforward argument for driving adoption.

Anyway, just my perspective …


I agree the approach can be seen as a new way of writing web applications, but I don’t see it that way. One of my goals is to make it a natural extension of the server-rendered HTML approach we have always been taking. It shouldn’t be a massive departure to take the EEx and controller code we are used to writing and put it in a live view, which I tried to highlight in my talk. So yes, it’s a different approach, but it feels like a natural extension of what we already do with SSR, not a huge departure to a different language or programming model, which I think is an important distinction.


This is, in my professional opinion, where web development has derailed over the last decade (in general).

All of that sounds good, in theory.

In reality, you still have to transfer all of the data to the client so that it can work on it. The act of taking data, converting it to JSON so that it can be transferred up, loading it into the client, and then moving it around is actually less efficient than just transferring up what you need after trimming it down on the server (or better yet, in the database). The amount of work in generating HTML isn’t all that different from the amount of work in generating JSON.
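To make the comparison concrete, here is a toy example (the record shapes and field names are invented for illustration): the fat-client route serializes records for the browser to render, while the server-rendered route trims and formats in one step and ships only the finished markup. The trimming step is also where server-only fields are kept from leaking to the client:

```javascript
// Invented example data; "internalCost" stands in for any field the
// client should never see.
const orders = [
  { id: 1, customer: "Ada", total: 120.0, internalCost: 80.0 },
  { id: 2, customer: "Grace", total: 75.5, internalCost: 40.5 },
];

// Fat-client approach: serialize for the browser to render. We must
// remember to strip server-only fields before shipping the JSON up.
function toJsonPayload(rows) {
  return JSON.stringify(
    rows.map(({ id, customer, total }) => ({ id, customer, total }))
  );
}

// Server-rendered approach: trim and render in one step; only the
// finished HTML fragment crosses the wire.
function toHtmlPayload(rows) {
  const items = rows
    .map((o) => `<li>#${o.id} ${o.customer}: $${o.total.toFixed(2)}</li>`)
    .join("");
  return `<ul>${items}</ul>`;
}
```

Both functions do roughly the same amount of server-side work, which is the point being made above: generating the HTML is not meaningfully more expensive than generating the JSON.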

Transferring things up to be processed in the client is only viable when a lot of data has to be constantly reprocessed there, such as with animations or live data streaming into a dashboard that will be updating and recalculating for display on multiple parts of the page.

For the case where you’re periodically updating one centrally viewed segment of the page, there is almost no benefit to the fat-client approach. It’s overhead cruft with marginal benefit, and it usually involves the need for at least one additional hire. This is where LiveView / Drab fits. It allows you to leave all of that on the server, where it has to exist anyway (or you create a security issue), and provide a slightly more polished experience than raw HTML/CSS in the browser would give a user. You still get to take full advantage of the power of your database, and you send up the bare minimum of what you need. You gain a ton back in recovered network overhead. For all of those sites that are “open, do something, check something, close”… that is tremendously more efficient and a better overall experience for the end user.

For the other cases mentioned - the offline case, the Electron case, or the React Native case - by all means use a frontend framework. There are a lot of other people, myself included, who are absolutely craving exactly what LiveView/Drab delivers… because so much of the other stuff is totally unnecessary for probably over 90% of the places where it’s used.

The other aspect of this setup that makes it so much cleaner is Elixir and Phoenix itself. With Rails or other frameworks, you’d have an if statement in the controller checking whether the request was made with AJAX; if it was, you’d handle it and send back JSON, and if it wasn’t, you’d handle it and either re-render the form with errors or redirect with a flash message.

By doing this with channels, you completely separate the interaction experience, so that the interactive aspects are isolated in channels and the non-interactive ones exist in the controller. It also fits perfectly with the “Phoenix is not your application” approach by encouraging business logic to be accessible from both locations rather than contained in either.

I can’t wait. I have every intention of diving into this code the day that @chrismccord releases it, even though it’s not for everybody or every use case.


@chrismccord having a bit of experience in SSR JS (Nuxt.js, specifically), there were a few things that immediately occurred to me when I saw your demos. I actually think this can be taken even closer to SPA/SSR land. For instance, in Nuxt (SSR Vue.js) there is a tag (<nuxt-link />) which renders to an anchor. When “nuxt links” are clicked, they trigger a route change (not a page change), and the URL is updated via pushState. I think this could be replicated in LiveView using a phx_ directive on an anchor tag. So, similarly to Nuxt/Next, initial page requests are served by Phoenix as a traditional request/response, and subsequent navigation is handled through LiveView. You could even have the phx_link (for lack of a better name) check for 401/404/etc. before responding. That would mean faster page transitions, and the ability to animate between pages. For Edgewise, we do a simple fade-to-white which is far less jarring than a traditional page load (IMO).

Does that make sense?

I have some other ideas of patterns from SPA/SSR land that could port over. I actually think you could take LiveView pretty far…


Yes, this makes sense and is a feature we are exploring - transitions to a new view without a refresh. Personally, I think the “let a page be a page” approach is the best default case, as renders are fast and new page transitions are least complex if we just let the browser request a new page, but these kinds of features are useful in a lot of cases.


Yeah, I personally think routing is a superior user experience, but I can understand traditional request/response pages as a default. A couple of use cases for pushState routing off the top of my head:

  1. Transitions. Once you get into FLIP (first/last/invert/play), things really start to feel app-like. Example using Vue: https://css-tricks.com/wp-content/uploads/2018/04/page-transitions-final.mp4

  2. The now-ubiquitous top border loading bar (a la YouTube). You could stream download/upload progress (eg. image uploads), and page loads (eg. pages that require external API calls).

  3. Forced page changes. For instance, if the server session times out, you could push a route change to /login.

  4. Deep linking. For instance, instead of a page dedicated to editing a user, we often have the edit form as a modal on top of the user list. What you really want there is for the modal to be injected without changing the user’s scroll position. The URL changes to ./edit, the user makes the change and closes the modal, which backs out of the route change and removes the modal element, and the user is still at the same scroll position. Combined with transitions (handling the lightbox fades and dialog bounce via CSS), you get a rich UI experience with little (or no) JS. In Nuxt, these situations are called <nuxt-child />. It’s a “page in a page”, but with a unique URL. In short, not everything that has a URL is a “page”.


I agree with Chris on this. I remember when Turbolinks was introduced in Rails and so many people complained about their JS breaking and that it added a level of complexity (to do basic JS/jquery stuff) that wasn’t needed.

I only use page transitions on one of my Rails apps - and even then only on certain pages (where certain pages are grouped). So while it is cool to have, I agree with Chris that it shouldn’t be the default.


I’m not suggesting either-or. Basically, you’d need to opt in to using routing. If you want a fresh page, just use an anchor tag (the default). If you want routing, you’d add the directive. A little JS would look for link clicks that have that directive and use pushState instead.
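That “little JS” could be as small as the sketch below. The data-phx-link attribute name is invented for the example (no such directive exists); the function just decides whether a clicked anchor becomes a pushState route change or falls through to a normal page load:

```javascript
// Hypothetical attribute name: the opt-in marker for routed navigation.
const LINK_ATTR = "data-phx-link";

// Returns true if the click was intercepted (pushState route change),
// false if the anchor should fall through to a normal page load.
// `anchor` needs hasAttribute/getAttribute and `history` needs pushState;
// in the browser these are the real DOM anchor and window.history.
function handleLinkClick(anchor, history) {
  if (!anchor.hasAttribute(LINK_ATTR)) return false;
  history.pushState({}, "", anchor.getAttribute("href"));
  // At this point the client would notify the server (e.g. over the
  // existing channel) so it can render the new view and patch the DOM.
  return true;
}
```

Wired up, a single delegated click listener on document would call this for `e.target.closest("a")` and `preventDefault()` whenever it returns true, leaving plain anchors untouched.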

We’ve come a long way from TurboLinks (replacement vs. vdom). I’m not really sure that’s a fair comparison. Despite the tooling headaches of JS, there are many valid reasons why so many modern sites are using Vue/React/Angular (and it’s not just offline support, which I’d argue is really only a recent priority with PWA support and service workers).