Hotwire by Basecamp

Just saw that DHH announced https://hotwire.dev/
Is it just me or is this essentially LiveView? :smiley:

Although I like the “iFrame-esque” approach of compartmentalising the application.

Twitter thread: https://twitter.com/dhh/status/1341420143239450624


Edit

Seems like this is the new "hot stuff": React Server Components (the React team's take on LiveView?)

2 Likes

To me at least, it seems a little more streamlined than LiveView right now, with more "moving parts" to combine into different workflows - while LiveView is very opinionated.

I'd say it's more like a combination of Alpine (Stimulus), Unpoly (Turbo) and LiveView (Hotwire).
IMHO, the seamless integration between those workflows is the big deal (apparently).

5 Likes

Here are my first impressions regarding the differences.

What's good about Hotwire is the upcoming Strada, which promises some support for offline mode even though the HTML is generated server-side. I expect this because I can see it in the Hey app on my iPhone.

What’s good about LiveView - super fast synchronous tests.
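For anyone who hasn't seen them: these tests run in-process, with no browser or JS runtime involved. A minimal sketch (the route, module, and event names are made up):

```elixir
defmodule DemoWeb.CounterLiveTest do
  use DemoWeb.ConnCase
  import Phoenix.LiveViewTest

  test "clicking increments the counter synchronously", %{conn: conn} do
    # Mounts the LiveView in the test process - no browser needed.
    {:ok, view, _html} = live(conn, "/counter")

    # render_click/2 sends the event and returns the re-rendered HTML.
    assert render_click(view, "inc") =~ "1"
  end
end
```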

Surely there are many other consequences of using one or another. To me LV feels better from the Developer Experience perspective and I would choose this approach whenever I don’t need any offline-mode capabilities.

1 Like

Although both are "HTML over the wire", LiveView is stateful while Hotwire and React Server Components are stateless.

This is an important distinction and there is a lot we could discuss under this subject but in a nutshell, stateful allows you to decrease latency because each request/event is immediately handled by the connected socket. On stateless, each request has to parse HTTP headers, do authentication, authorization, load data from the DB, etc.

LiveView builds on top of stateful connections to provide a more complete dev experience. Live form validation? Check. Dashboards? Check. File upload? Check. Drag and drop? Coming soon. This is also why people can build games with LiveView, even though it is not its purpose. The downside of stateful though is higher footprint cost on the server but that’s not fundamentally problematic in Elixir because connections are generally very cheap.
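To make the stateful model concrete, here is a minimal sketch - module, route, and event names are made up for illustration. State lives in the socket process, so each event skips HTTP parsing, auth, and data loading:

```elixir
defmodule DemoWeb.CounterLive do
  use Phoenix.LiveView

  # Auth and data loading happen once, on mount - not on every interaction.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  def render(assigns) do
    ~H"""
    <div>
      <span><%= @count %></span>
      <button phx-click="inc">+1</button>
    </div>
    """
  end

  # Handled directly by the connected socket process; only the changed
  # dynamic parts are sent back over the wire.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end
end
```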

Back on the latency side of things, LiveView also does a lot to reduce the payload sent over the wire. When you update a page/frame/component, we only send what changed. If you add new instances of a component and the component is already on the page, we send only the dynamic bits. React Server Components seem to do the same but I believe Hotwire does not.

We will have to wait for a bit to see how these differences will translate to programming models in the long term and how some ideas will cross-pollinate. You can always reproduce stateless with a stateful model though by discarding all state after every event.

Yeah, Strada was the part I was most curious about. My guess is the offline mode is managed by the frame-specific caches kept on the client (and not Strada). I expect Strada to rather allow you to replace some specific components/frames by native components (that would go alongside the webview). But I am guessing. :slight_smile:

EDIT Feb/2024: since this message is still referenced from other places, I should add one important clarification. LiveView and Hotwire are ultimately two different programming models: LiveView is declarative, Hotwire is imperative. What this means is that, in LiveView, you don't say "when someone clicks this, update this frame or render a stream to update this ID". In LiveView, you simply change your LiveView state and LiveView re-renders the page. The declarative model requires less from the developer, and it comes with the major benefit that, by letting LiveView be the one that does the rendering and patching, it can understand your code and apply a lot of optimizations.

36 Likes

What I find most interesting about Hotwire so far is Turbo Drive and Turbo Frames, which let you do body-swap page transitions and partial page updates over HTTP with no state or persistent connection on the server. It's also back-end agnostic, meaning it takes little or no work to get this working with any web framework.

Doing body swap transitions (what Turbolinks 5 did before Turbo Drive existed) makes a huge improvement to perceived page load speeds. The Frames add-on is icing on the cake, letting you make partial updates to just one area of the page. changelog.com uses Turbolinks 5 on their site with Phoenix btw, to give you an idea of what fast body transitions feel like in practice.

I know LV can do this too, but it’s all done over a persistent websocket connection, and the diff of the content is sent over websockets. I know there’s the long polling transport layer but that’s mostly just a worse version of websockets and still requires keeping connections open on the server.

The neat thing about Turbo Streams (also part of Hotwire) is when you do want to do broadcast style events, like having the server push new content to the client or all connected clients you can do that and only pages that use this behavior will keep an active websocket connection open.

I know LV isn’t slow and I’m not worried about keeping connections open on the server with Elixir, but I do not like the user experience this gives when you need to keep a connection open all the time on every page just to do things like fast page transitions.

What’s really interesting in the end is, I think it would be quite possible to build in support for Turbo Streams in other web frameworks too, and Elixir is well equipped to deal with this. The Rails implementation is extracted out in a gem at https://github.com/hotwired/turbo-rails. That includes convenience Rails helpers for Drive and Frames too but the bulk of it is for Streams.
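As a rough sketch of how little it might take (all names below are hypothetical - there is no official Phoenix integration): render a `<turbo-stream>` fragment and broadcast it, and any channel subscribed to the topic can push it to the browser for the Turbo client to apply.

```elixir
defmodule DemoWeb.TweetStreams do
  # Hypothetical helper: broadcast a Turbo Stream fragment to subscribers.
  def broadcast_new_tweet(tweet) do
    html = """
    <turbo-stream action="append" target="tweets">
      <template>
        <div id="tweet-#{tweet.id}">#{Plug.HTML.html_escape(tweet.body)}</div>
      </template>
    </turbo-stream>
    """

    # A channel subscribed to "tweets" would relay this HTML to the
    # browser, where turbo.js applies the append to the #tweets element.
    Phoenix.PubSub.broadcast(Demo.PubSub, "tweets", {:turbo_stream, html})
  end
end
```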

I wonder if we’ll see a community driven LV alternative using Hotwire, or if LV will take inspiration from the good parts from Hotwire.

4 Likes

Agreed on Turbo Drive. I consider it to be completely orthogonal to LiveView though, and people can and have been using Unpoly or Turbolinks to achieve the same behaviour - as you mentioned.

However, I am not convinced about Turbo Frames.

The way Turbo Frames seem to work is that the whole page is rendered and then the frame is plucked out. At the moment I don't know if the frame is plucked on the server or on the client, but because frames are also meant to be functional when accessed directly via the browser, you are still doing all of the work necessary to render the whole page (parsing the request, auth, authz, loading all data, etc.) only to keep a tiny bit of it.

And if you get the same frame multiple times, you are loading the whole frame markup multiple times. I can see how someone could optimize Turbo Streams to introduce some bookkeeping and avoid this repetition, but I am really struggling to visualize how anyone would optimize the frames.

When LV was released, most of the questions were how to reduce work done on the server and the amount of data sent over the wire, so I am curious if hotwire is going to be held to the same level of scrutiny.

EDIT: another area where you probably need a stateful connection is live form validation. Otherwise, if you are doing complete requests over a frame to live validate a form, I don’t believe it will be responsive enough.

4 Likes

Fully agreed. You don't even need Turbolinks if you just want to swap pages or part of a page. Just make views that render HTML snippets and let the client side load them via fetch(). People do that all the time. If you need to use websockets you may as well go one step further and make the whole thing stateful like LV.

2 Likes

To be honest, this sounds a bit like "you don't need Phoenix to serve a website, just serve some HTML with Cowboy". The problem often isn't that things are not possible, but not having a good abstraction to deal with all manner of problems with the same blueprint. Those abstractions allow for greater efficiency and fewer one-off ways of dealing with problems.

1 Like

It is all about the bang for the buck. I for one welcome a Basecamp-backed, standard way to do stateless partial page updates in HTML. The people over there have great taste.

1 Like

I don't know it well enough to answer this question with certainty, but I'm pretty sure DHH mentioned this in the video. If you access a frame for a 2nd time and the content didn't change, the server will respond with a 304 Not Modified and you can use the cached version.

I suppose that’s one benefit of using HTTP for certain things. You get access to 20+ years worth of optimizations and standards.
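For reference, here is a hedged sketch of what digest-based 304 handling could look like in a Phoenix controller (view and template names are made up). Note it still does all the rendering work and only skips sending the body:

```elixir
def show(conn, %{"id" => id}) do
  # All the usual work still happens: auth, DB loads, rendering.
  body = Phoenix.View.render_to_string(DemoWeb.FrameView, "show.html", id: id)
  etag = Base.encode16(:crypto.hash(:md5, body), case: :lower)

  # Simplified: real ETags are quoted, and weak/strong variants exist.
  if etag in get_req_header(conn, "if-none-match") do
    send_resp(conn, 304, "")
  else
    conn
    |> put_resp_header("etag", etag)
    |> html(body)
  end
end
```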

I'm not sure if it will be right away, because most of it is stateless - it's just HTTP requests all the way down. It's stuff your web server was doing anyway, and now with frames you only have to return snippets of HTML (say, the HTML required to render a new tweet card's details) instead of the entire page body, so it's a net win vs the previous implementation. Combine that with caching, and it doesn't seem like the end of the world to be less strict with bytes over the wire.

But of course, fewer bytes over the wire is good. Maybe a future version will do DOM diffing.

Although after looking at some examples, sometimes not diffing ends up being a feature that lets you write less code. For example, imagine a tweet card being updated. With LV you would need to wire up multiple events to edit the tweet, handle likes, etc., because each surgical update is totally isolated.

But with frames, you would wrap the whole card, write one controller endpoint and you're done. Whether it ends up updating one attribute or five, you only ever have to wire up that one thing.

I guess that leads to progressive enhancement too. Since it's just a controller being rendered without a layout and other cruft when Turbo Streams kicks in, it falls back to a full page load when websockets aren't available. It's a bit harder to pull that off in LV without lots of duplication, right? Making both LVs and controllers, etc.
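In Phoenix terms, that fallback might look like a single action that skips the layout (a sketch; the Demo.Tweets context function is made up):

```elixir
def show(conn, %{"id" => id}) do
  tweet = Demo.Tweets.get_tweet!(id) # hypothetical context function

  conn
  # Skip the app layout so only the card's HTML is returned; a direct
  # browser visit could render the full layout instead.
  |> put_layout(false)
  |> render("card.html", tweet: tweet)
end
```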

It’s too soon to say which approach is better overall but I think there’s definitely pros and cons to both Hotwire Turbo and LV.

1 Like

This isn't quite right. On the broadcast side, your system can broadcast a generic "tweet updated" event, and LiveView will surgically send the diffs based on what changed - rt count, likes, body, etc. - without any work from the dev. For the write side, a single endpoint for the tweet update isn't accurate in this case either, because you will likely have an RT endpoint, a like endpoint, a general edit endpoint, and so on. In general, some things can be handled by a generic update of params, but atomic updates such as likes/RTs that involve specific actions like notifications will necessarily require specialized handling. So you are either writing your events in LV or spreading them across multiple controller code paths, the latter of which requires extra routes and handling vs LV.
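On the LiveView side, that broadcast handling is a few lines (a sketch with illustrative names): replace the assign and let change tracking work out which of rt count, likes, or body actually changed.

```elixir
# Subscribed to the topic via Phoenix.PubSub in mount/3.
def handle_info({:tweet_updated, tweet}, socket) do
  # No manual frame targeting: LiveView diffs the re-render and sends
  # only the fields that actually changed.
  {:noreply, assign(socket, :tweet, tweet)}
end
```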

5 Likes

Right, good call on that. I had glanced at something like 9 hours ago thinking there was only 1 endpoint in the Rails example I looked at and now looking back I see there’s multiple ones like you described.

I’ll leave my original comment there so the discussion makes sense.

2 Likes

That's not quite what I am referring to. I am rather referring to the same frame being instantiated multiple times. For example, take each message in a chat app. In an actual app each message has a lot of markup around it. How do you make sure you send only the actual message and not all of the markup multiple times? This is what a SPA would give you - which is the standard LV is typically held to - and also what LV and React Server Components give.

Or building on your tweets example. On Twitter UI, you almost never load tweets one by one. You either: load a page with a bunch of tweets or you click the “Show latest 10 tweets” banner. In both cases, you’d send the markup of each tweet multiple times with Hotwire.
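For reference, this duplication is exactly what LiveView's comprehension optimization targets. In a template like the illustrative one below, the wrapping markup is sent once as the static part, and only the per-tweet values travel for each entry:

```elixir
# Inside a LiveView's render/1 (illustrative):
~H"""
<%= for tweet <- @tweets do %>
  <div class="tweet card">
    <span class="body"><%= tweet.body %></span>
    <span class="likes"><%= tweet.likes %></span>
  </div>
<% end %>
"""
```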

And HTTP caching is not something that will help you here either. That’s because:

  1. The request either has a bunch of tweets (the whole page or the next 10 tweets) - a combination unlikely to repeat across requests

  2. Or you cache each individual tweet. But if you are caching with HTTP, it means you need an individual request to load each tweet, and if clicking "Show latest 10 tweets" translates into 10 HTTP requests, that's going to be way too expensive.

EDIT: to take yet another example: live form validation. You won’t benefit from HTTP caching at all as you receive feedback - so if you don’t want to send the whole form over and over again as the user types in or blurs an input, sending only the dynamic parts is essential. And I think that will generally hold for anything interactive/realtime.
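To make that concrete, here is the canonical LiveView validation sketch, assuming a hypothetical Demo.Accounts.User Ecto schema: each keystroke or blur sends a small event over the open socket, and only the changed error markup comes back.

```elixir
def handle_event("validate", %{"user" => params}, socket) do
  changeset =
    %Demo.Accounts.User{}           # hypothetical schema
    |> Demo.Accounts.User.changeset(params)
    |> Map.put(:action, :validate)

  # Re-rendering diffs only the error messages that changed, not the form.
  {:noreply, assign(socket, changeset: changeset)}
end
```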

6 Likes

With all the HTTP headers and auth per request, it probably will not make much difference anyway. I'd say just send the markup. I agree that in the applications you cited, like chat, Twitter, or live form validation, LV is the better choice. For applications that are not so real-time, such as Hey.com, Basecamp, or this forum, it could make a lot of sense.

2 Likes

From what I gathered, your frame would be the list of messages, and they would be listed out as in a plain HTML response from the server, with nothing extra going on. If you had 5 messages, that would mean 5 messages each with their own HTML tags attached.

It's not as sophisticated as merging dynamic content into the static parts of a template, like LV does. But we're also dealing with a day-one release. Who knows what will happen later.

I’m not too concerned about identical HTML being duplicated because gzip is remarkably efficient to the point where you almost can’t believe how well it can reduce lots of repeated text down into a few characters.

In this twitter example sure, but in a lot of apps I could see cache hits being the norm. Such as a list of lessons in a table of contents, forum threads in a non-super busy forum, the 5 latest blog posts, etc… Basically what @derek-zhou said.

If you haven’t watched it already, the last ~3 minutes of DHH’s demo video gives a glimpse at how these patterns are applied to Hey. It looked like lots of opportunities for cache hits and it shows when they situationally bring in websockets (Turbo Streams) to offer soft real-time broadcasted updates.

1 Like

That's precisely my point. I didn't bother with HTTP caching of rendered content on any Phoenix app I wrote because:

  1. They all felt fast naturally

  2. Rails' default HTTP caching - the one used in the video - is based on a digest of the HTTP response. This means you still do all the work on the server, the database, and in rendering a response. So if a response is slow to render in the first place, the cache won't really help you, as the only part it skips is the sending of the data

  3. Compression, turbolinks, live links, and others reduce the content size anyway

  4. Small bits of interactivity that we add, such as flash messages, make HTTP caching generally difficult

You can implement more efficient HTTP caching by writing custom code but given points 1, 3 and 4, I would rather put the development effort elsewhere. If I have to cache, I would do it within the app as it provides more granularity.

On the other hand, live form validation is something I would have used on every app I have built in my entire life - and without content diffing it isn't really practical. So to me, in practice, we are talking about a feature with marginal benefits vs a must-have.

EDIT: of course, HTTP caching is still great for static assets and places where you can compute digests upfront (without doing the work).

3 Likes

You are right. I did some tests to see how efficient it is and gzip pretty much removes all of the duplicated content. So when you are sending the same content multiple times in the same payload - it covers it. This is great to know because the dead render in LV does not remove duplicated content and I was wondering how to address it. But apparently I don’t have to worry about it at all.
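A quick way to reproduce that kind of test in IEx (the markup and sizes are illustrative):

```elixir
row = ~s(<div class="tweet"><span class="body">hello world</span></div>)
payload = String.duplicate(row, 100)

byte_size(payload)              # 6200 bytes of heavily repeated markup
byte_size(:zlib.gzip(payload))  # a tiny fraction of that - the repetition compresses away
```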

There is still one case left: when you are streaming messages one by one, such as in chat, you would be sending the same markup every time there is a new message. As they are distinct payloads, gzip won’t help. And there is no HTTP caching here either unless you load each individual message with a separate request.

So to sum up my thoughts so far:

  1. gzip takes care of minimizing payloads and duplicate content on the same render
  2. digest based http caching provides minimal benefits for rendered content - to the point it is not something I would worry about optimizing for
  3. a stateful connection + change tracking is a requirement for the interactive bits
  4. static/dynamic splitting reduces payloads on realtime updates (where gzip doesn't apply)

I can be convinced that 4 is a nice-to-have - although it is a requirement if you want to compete with SPAs - but 1 and 3 are must-haves. I am not giving up on my hard-earned live form validations and general interactivity. :smiley:

6 Likes

On a related note, can you even gzip websocket transmissions? I don't think it's supported by browsers yet, and I can't find any docs on nginx's site about it.

Just asking because if you use LV for everything to benefit from Turbolinks-like page transitions, then aren't you technically sending a very large amount of HTML over a websocket connection that won't be gzipped?

Use case:

You have a list of blog posts and everything within the main page content results in having 80kb of HTML.

You click a specific blog post and transition to a blog show page with 100kb of HTML to show the blog post itself.

You do that navigation with live_redirect to benefit from keeping the <head> along with some of the nav bar and maybe footer around in the DOM so only the main page content changes.

I know diff tracking is great for things like validation where only a tiny bit of text changes, in which case gzip isn't super important, but for nearly full page transitions you're dealing with tens or possibly even hundreds of kb of HTML that can't be gzipped.

2 Likes

Yes. Set the compress: true option in your websocket configuration in the endpoint and you are good to go.
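For example, in a freshly generated Phoenix app that would look like this (the session options line comes from the generated endpoint):

```elixir
# lib/demo_web/endpoint.ex
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options], compress: true]
```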

Note that LiveView will also do the static/dynamic splitting and change tracking even on live_redirect, so data inside comprehensions or components is sent only once. You are getting both the LV optimizations and gzip.

8 Likes

Using LiveView for a blog is probably not so smart. On the other hand, LiveView and Turbo can work side by side:

  • using LV over websocket for the chat window, news ticker, live forms, etc.
  • using Turbo over regular HTTP for infinite scroll, user navigation, etc.

All on the same page. That will be sweet.
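As a sketch of what such a page could look like (hypothetical template - the frame is fetched over plain HTTP by turbo.js while the chat widget keeps its own websocket):

```elixir
# A regular (non-live) page template mixing both approaches:
~H"""
<turbo-frame id="posts" src="/posts">
  Loading posts…
</turbo-frame>

<%= live_render(@conn, DemoWeb.ChatLive, id: "chat") %>
"""
```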

2 Likes