Let's discuss Phoenix LiveView

This is a post to discuss the new Phoenix LiveView functionality.

From Chris’s talk, it appears that they generate all HTML on the server and then diff it on the client with morphdom. That’s an idea that I’ve had but which I’ve never pursued because I’ve always tried to go the route of small reusable widgets which you could embed in the larger HTML page. It seems rendering the whole page is performant enough, then?

Another issue is the question of using channels instead of normal HTTP requests. Using channels is necessary if you want to push notifications from the server, but it requires two orthogonal forms of authentication (and maybe authorization).

So my other question is: is there a simple way of integrating authorization over channels and over HTTP requests? So far, I’ve never seen an accepted idiom for doing exactly this.

4 Likes

Maybe I’m just not seeing the authorisation and authentication problem? Once you’ve authenticated (I do this as per the docs, with a short-lived Phoenix token), it’s a case of authorising on channel join and/or in handle_out. Your auth checks can just call your existing authorisation modules/rules. I have mine set up very similarly to bodyguard (I’d link it but I’m on mobile), so I can pretty much call them anywhere - CLI, API, channel, etc.
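Roughly the shape I mean, as a minimal sketch - `MyApp.Authorization.can?/3` here is a hypothetical stand-in for your own rules module:

```elixir
defmodule MyAppWeb.UserSocket do
  use Phoenix.Socket

  channel "room:*", MyAppWeb.RoomChannel

  # Authenticate once, with a short-lived Phoenix token (as per the docs).
  def connect(%{"token" => token}, socket) do
    case Phoenix.Token.verify(socket, "user socket", token, max_age: 86_400) do
      {:ok, user_id} -> {:ok, assign(socket, :user_id, user_id)}
      {:error, _reason} -> :error
    end
  end

  def connect(_params, _socket), do: :error

  def id(socket), do: "user_socket:#{socket.assigns.user_id}"
end

defmodule MyAppWeb.RoomChannel do
  use Phoenix.Channel

  # Authorise on join, calling the same rules module used everywhere else.
  def join("room:" <> room_id, _params, socket) do
    if MyApp.Authorization.can?(socket.assigns.user_id, :join_room, room_id) do
      {:ok, socket}
    else
      {:error, %{reason: "unauthorised"}}
    end
  end
end
```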

Right now liveview would be super helpful for me as it stands, and I’d love to get my hands on even some alpha thing to see how well it behaves for my use case - I’m actually using stimulusjs controllers and channels to do a small subset of the features, but I have to hand-roll them per page. I will agree it would be nice if it didn’t necessarily render the whole template and layout, but the talk did say it’s early stuff and there are optimisations that could be made.

1 Like

The LiveView demo was awesome. I’m very keen to play with it, even if it’s only in an alpha stage…

Isn’t it similar to Drab?

3 Likes

It is very similar to Drab, and I think it’s strange that the existence of Drab wasn’t even acknowledged in that talk. We can’t claim that LiveView was inspired by Drab, because Chris’s idea dates back as far as 2013, but I believe Drab deserved a mention as the first successful implementation of similar functionality in the Phoenix ecosystem.

I’ve always been in favor of diffing on the client, except for the fact that you’re sending large amounts of data back and forth, so I can see myself using LiveView rather than Drab. As I said in my comment, the main reason I didn’t write something like that myself was my (maybe stupid) reluctance to send the entire page’s HTML down the pipe.

I wonder if LiveView can be “hacked” into reusable “widgets” which one can embed in “normal” pages. I guess we’ll know when it comes out. Something like unpoly + morphdom.

6 Likes

Yes, that’s true, but something you can do with a controller is add a plug that handles authorization for all actions. That keeps the bodies of the actions cleaner. I’m not aware of a similar concept for channels.
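For example, something like this - just a sketch, where `MyApp.Authorization.authorize/3` and the `MyApp.Blog` context are hypothetical stand-ins for whatever you already have:

```elixir
defmodule MyAppWeb.PostController do
  use MyAppWeb, :controller

  # One function plug authorizes every action in this controller,
  # so the action bodies stay free of auth checks.
  plug :authorize_user

  def edit(conn, %{"id" => id}) do
    render(conn, "edit.html", post: MyApp.Blog.get_post!(id))
  end

  defp authorize_user(conn, _opts) do
    case MyApp.Authorization.authorize(conn.assigns.current_user, action_name(conn), :posts) do
      :ok ->
        conn

      {:error, _reason} ->
        conn
        |> put_status(:forbidden)
        |> text("Forbidden")
        |> halt()
    end
  end
end
```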

Maybe Phoenix should try to duplicate the functionality based on HTTP requests on top of the channel infrastructure: you could have a channel router, a channel pipeline, channel plugs, etc.

Or maybe authorization should be handled at the level of the context access functions.
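With a bodyguard-style permit, that could look roughly like this (a sketch - the `Blog` context and its `Policy` module are made up for illustration):

```elixir
defmodule MyApp.Blog do
  # Callers - controller, channel, CLI - never repeat the check;
  # it lives in the context's access function itself.
  def update_post(user, post_id, attrs) do
    post = get_post!(post_id)

    with :ok <- Bodyguard.permit(MyApp.Blog.Policy, :update_post, user, post) do
      post
      |> MyApp.Blog.Post.changeset(attrs)
      |> MyApp.Repo.update()
    end
  end

  def get_post!(id), do: MyApp.Repo.get!(MyApp.Blog.Post, id)
end
```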

2 Likes

I’ve specifically been explicit with my auth checks so far, partially to reduce magic for the juniors and partially because I don’t always have a nice mapping between permission and action.

I used to have the checks in the context’s list, get, etc. functions only, but I found that as I needed to check in more places - like channels, or other places where the schemas were already loaded - I’d need to check them independently. I’m not 100% happy with how I’m dealing with auth checks, but I feel putting them solely in contexts backs you into a corner.

1 Like

I hope a stable version gets released along with Phoenix 1.4. I wanna use it immediately.

Thank you @chrismccord!

1 Like

I basically built a channel router myself because of the number of endpoints I had and the duplication of logic between them. I didn’t have to build pipelines and plugs though, because I put all the common functionality in the routing.
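In case it’s useful, here is the rough shape of what I ended up with - heavily simplified, and all the module and function names are made up:

```elixir
defmodule MyAppWeb.ApiChannel do
  use Phoenix.Channel

  # One catch-all handle_in acts as the "router": every event goes
  # through the same auth check and error shaping before dispatch.
  @routes %{
    "posts:list"   => {MyApp.Blog, :list_posts},
    "posts:create" => {MyApp.Blog, :create_post}
  }

  def join("api:" <> _rest, _params, socket), do: {:ok, socket}

  def handle_in(event, params, socket) do
    with {mod, fun} <- Map.get(@routes, event, :no_route),
         :ok <- authorize(socket, event),
         {:ok, result} <- apply(mod, fun, [socket.assigns.user_id, params]) do
      {:reply, {:ok, %{data: result}}, socket}
    else
      _ -> {:reply, {:error, %{reason: "bad request"}}, socket}
    end
  end

  # Plug your real authorization rules in here.
  defp authorize(_socket, _event), do: :ok
end
```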

3 Likes

OK, can somebody please enlighten me as to why this is going to take the world by storm? Because obviously I’m missing something.

Let me explain:

Even back in 2009, when I developed my first (multipage, servlet-based) web application, I was using jQuery to make ajax calls, copy DOM fragments from invisible sections of the page, splice in the retrieved data, and then replace the relevant section of the page.

To me Drab/LiveView are an evolution of this approach, modernized with shadow DOMs and web sockets which allow data to be pushed from the server. So what am I missing?

If this is so amazing then I have to conclude that the majority of the web development community is suffering from framework blindness because they either don’t know or don’t want to know how HTML/CSS is processed in the browser.

Granted I have wondered why approaches like hyperHTML haven’t been getting more press but what do I know?

Then there are the “engineering concerns”. When it comes to page generation, most templating approaches provide a relatively clear separation between the server code and the (template) markup code. Furthermore, page generation builds the client’s initial state. To keep things simple, once the client has that state/page, it should be the client’s responsibility to manage it.

So generating HTML in other parts of the server code seems counterproductive because you are fragmenting the page (and page management) across the server code base, so it becomes increasingly difficult to get a unified (maintenance) view of the page for layout and styling purposes - there is an unpleasant increase in coupling between the server code and the page.

To me it makes more sense to send plain data (no markup) to the client and leave the responsibility of managing the necessary page changes to the code residing on the client.

6 Likes

It’s not. I agree it’s being overhyped. It doesn’t really allow you to do anything new; it’s just a simpler way of doing what you’ve always done before. And a way to avoid writing Javascript, so that the rendering logic lives only on the server.

Simple from which perspective? Why is it simpler to have the client manage the view changes while the server manages the initial view? To me it’s simpler if the server manages everything. At least it allows you to use only a single programming language (Elixir) and a single build pipeline (mix).

I share some of your caution/skepticism, but I’m trying to keep an open mind. This is the first time I’ve really dealt with a server-side technology that makes stateful connections practical in web apps. It’s a boon to backend-oriented developers who want to minimize use of javascript, which I can sympathize with. It will be great for initial prototypes, but at scale it will also exhibit strange timing issues that will require CRDTs or similar to deal with, and that will take away from its simplicity.

I think it’s all going to come down to where it fits in a correctness/simplicity tradeoff.

6 Likes

The main reasons:

Over a websocket it’s faster.

Rendering on the server to do this was a really inefficient idea in almost any other language, but the shocking speed of Phoenix rendering makes it plausible.

People are really tired of Frontend JS framework Hell when it’s not necessary.

Handling each structure of data that can be sent to the client is how the client-side code grows big enough that people start asking for frameworks. Not doing that avoids it entirely.

Basically, this makes sticking to the server side in a world of overzealous JS entirely feasible, with virtually no negative trade-offs (and a lot of positive ones). Aside from the case where you actually need a fully in-browser application… this could bring balance to the universe. :slight_smile:

10 Likes

This is my main fear too… The demos we’ve seen are running with no network delay, which can make you forget that you’re in a distributed environment (between client and server).

When I tried to implement something like LiveView last year, I ran into this theoretical issue and stopped working on it (I’d have had to implement OT or CRDTs, which was too much). But these demos have made me want to try a quick and dirty solution which ignores the distributed aspect completely.

In my opinion this is just about as misguided as isomorphic JavaScript.

Why is it simpler to have the client manage the view changes while the server manages the initial view?

Boundaries - which are a consequence of cohesion, probably the most important design principle and yet apparently the least understood.

  • This is why DDD introduced bounded contexts
  • This is why Phoenix introduced contexts

I see the server generating representational fragments of the client’s page as a variation of inappropriate intimacy - one of the worst forms of coupling.

To me it’s simpler if the server manages everything.

The server should generate and then deploy the page - after that, the page starts a life of its own. Otherwise there will be tight coupling all over the place. Granted, tightly coupled systems are easier to build at first, as long as they stay reasonably small, but they are hell to maintain and grow.

At least it allows you to use only a single programming language (elixir) and a single build pipeline (mix).

The era of monopoly languages is over and it is not coming back - and ironically, it’s the Web that precipitated that.

Bruce Tate 2014:

Same here, which is why I’m asking. But I’m thinking there should be a more disciplined effort toward exploring how to build effective APIs based on web-socket technology.

2 Likes

Mmmh.

First thing that comes to mind is that you are describing something that is easier, not simpler.

It’s the distributed systems fallacies all over again. Sure, the naive view looks simple.
But as stated in other comments, complexity due to the usual suspects of distributed systems might kick in if you push the envelope.

I am not saying that this is a dead end, I think it’s a good technique to explore and try to characterize in terms of applicable scope, scale, potential optimizations, etc. I am curious myself. But it’s definitely not something that is going to be simple.

Just trying to understand what it would take to reduce the data sent over the wire by doing the DOM diffing server-side, while retaining the advantages of client-side diffing, makes my head hurt. And this feels like one of the first things people using this technique at scale might want to try.

1 Like

I agree that separating frontend and backend leads to lower coupling, but it also leads to complexity and duplication of logic. If the project is small, not likely to change etc., doing everything with one codebase can be an advantage. In general, I think SPAs are overused today, and I say this as a primarily frontend developer.

I’ve actually started avoiding SPAs and heavy use of JS for personal projects. So far the most productive setup I’ve found is Rails with Turbolinks - it gives you a lot of SPA responsiveness with just one language and one framework. I see Phoenix/LiveView as promising in that regard.

Also, regarding DDD, isn’t that more about dividing an application into “well-bounded problems”, i.e. things like the presentation vs. data layer are details inside a domain? (I’m not very into DDD, but that’s my rough impression of it.) Think about how Django organizes a project into apps - a user app, a blog app, etc. The apps themselves each contain a view, a controller and a model. The division is not between layers (technical), but between things that give meaning to non-technical domain users and can be talked about in laypeople’s terms.

5 Likes

“As it grows” is really where the conversation shifts.

I see this on the same level as a jQuery-style graceful-degradation approach. Everything works without JS, but JS provides some usability improvements. This really just makes that style more polished and simpler, without all of the jQuery needing to be added.

But I don’t anticipate it replacing an SPA where an SPA is actually what’s needed. There’s a line that you cross on an application design where you need an SPA and all of the stuff that comes with it. IMO this just helps push back that line.

2 Likes

I share a fair amount of skepticism. I’m curious to see where this leads and I am keeping an open mind. I get the “just because you can doesn’t mean you should” feeling here.

I think it depends on the organizational and team operating requirements.

I understand the benefit of having a front-end team be able to work independently of the back-end in a larger team at a more mature point in the project. The organization may need dedicated front-end developers who can work unblocked by the back-end. LiveView is probably not targeting that use case.

LiveView, as it’s described, would definitely make sense in a smaller organization where developer resources are scarce and everyone is full-stack. Development time to learn the stack is really costly. The product is still taking shape and iteration time is crucial. We can’t afford the time to learn a new Javascript framework when we need to get to market! The fewer moving parts and dependencies, the better. I think in the smaller/startup team we’d want the capability of rebuilding any given piece in a couple of weeks and iterating quickly, more than the benefits of free-moving front-end developers. The longer I can put off the costs of bringing in React/Vue + Apollo/Redux/MobX and what-not, the bigger the win from a “required-knowledge-to-operate” point of view. If we’d be using Phoenix for a JSON/GraphQL API anyway, and our logic is still re-usable, why not just use Phoenix for rendering the front-end too? Building a JSON/GraphQL API after the fact isn’t that costly as long as our business logic is decoupled.

I don’t really see how LiveView changes how we’re already using Phoenix. To me, using Phoenix as a JSON API, an HTML server, an HTML + LiveView server, or even a GraphQL API isn’t really that different in terms of coupling. The logic/model and view separation is what’s important. I would consider Phoenix part of the view layer regardless of the mechanism that gets UI to the client.

We’re just going to have to wait and see what the real world trade-offs are. Speculation is only so useful.

3 Likes

I wasn’t actually thinking about SPAs specifically - any page, once in the browser, is a separate application which uses the browser as a runtime. Sending form data back to the server is an application using an interface.

duplication of logic.

Eliminating that duplication is often pursued because keeping the two sides in sync costs effort and therefore money. But that ignores the fact that the rules serve entirely different purposes on the two sides - on the client, how client state is presented to the user; on the server, whether or not data is allowed to modify server state. So it can be argued that it is necessary duplication - as inconvenient as that may be.

I see Phoenix/LIveView as promising in that regard.

I see nothing wrong with the server providing information to the page via events that allows it to change its state - but I draw the line at sending page fragments, which essentially boils down to the server monkey patching the page; it violates the page’s (application’s) autonomy.

At the core Elm/React/Cycle.js have the right idea:

  • Events change the state of the page
  • The new state is transformed to the visual representation presented to the user

That is simple. Monkey patching all over the place, “mutation heaven” not so much. Now the quality of the frameworks themselves - that is an entirely different discussion.

isn’t that more about dividing an application into “well-bounded problems”,

My point was that when it comes to design, boundaries have an impact everywhere. They are a line in the sand where you have to watch carefully:

  • What shape of data am I exposing to the outside that is going to limit what I can do in the future on the inside?
  • What dependencies am I pulling in from the outside that are going to have a permanent or future impact on the inside?

Nobody can deny that there are separate boundaries around the server and the browser. The same is true for the applications that live on them. Drawing a bigger boundary around both of them and calling it a web application doesn’t eradicate those boundaries - they are still relevant.

I don’t see much room for graceful degradation - it’s still going to require some form of JavaScript to work. Given that “progressive enhancement” (which doesn’t seem to be that commonly practiced) requires a lot more work and planning (and therefore more complexity), I simply don’t see it happening. As it is, service providers seem to have accepted the client being DOA if JavaScript is disabled in the browser.

In my view, tearing the page apart and constantly flinging bits of it over the network increases the moving parts and dependencies. I’m sensing the shorter time-to-initial-success effect here.

1 Like