Let's discuss Phoenix LiveView

Let’s please keep in mind how that particular example is motivated - it isn’t based on technical merit.

Sony is more interested in harvesting a recurring “service subscription fee” from their customers’ pockets than in providing the best possible experience that technology can offer. To that end they need to lower the barrier to entry for their gaming service - i.e. the cost of purchasing a current gaming console. Ideally they want their service to be consumable with “hardware you already have in your home” (e.g. a smart TV), or at least for the necessary hardware to be so cheap (i.e. disposable) that they can afford to give it away for free with the purchase of a one-year subscription.

The streaming model works for Netflix because most of the information flows one way - from Netflix to the consumer - and most of the information isn’t time sensitive, i.e. many networking rough spots and variabilities can be ironed out by simply buffering a sufficiently large amount of information.

Game streaming targets what is usually referred to as the “casual market” - the games in that space typically don’t have a lot of ‘time sensitive’ interactivity; who cares if it takes half a second between a click and a change in the display for a puzzle game? However, there are genres where input lag is a significant factor in a game’s playability. For example, enthusiasts of shoot ’em ups will often choose CRTs over more modern displays to avoid the display lag introduced by typical image-processing pipelines.

Given the nature of network latency, “game streaming” will likely never serve certain markets.

Sony also acquired OnLive, which was active back in 2011 but was discontinued in 2015.


When Sony purchased Gaikai, they had run into trouble with their then-current console, the PS3. They were selling it at a hefty loss - it was just too ambitious and cost too much (and developing for it wasn’t easy either). It was a very rocky time for them.

So while I think that may have been what led them to look at Gaikai to begin with, the benefits you mention would certainly have been appealing to them and a major factor in why they acquired the firm. Like you said, cost is probably the biggest barrier for console gaming, so being able to make gaming cheaper, while still offering high-end graphics, could only be a good move since it would mean a larger userbase/more subscriptions etc.

Re streaming/input lag, etc., they are getting better and better over time. Sony actually makes some of the best TVs for gaming now, too.

Going back to LiveView, Drab, Texas etc, do you think it could be a great fit for things like Nerves devices? (As per my post above.)

In what way? Personally I find it telling that a shop that already used Elm/PureScript in the past has gone back to basics with raw HTML/CSS/SVG/JS/D3.js in their more recent work.

While it makes sense to complete the initial render on the serving device, I don’t think IoT devices will take on any more serving than necessary.

Serve as much as is reasonable but no more than necessary.


absolutely - the less you’re pushing to the client and the fewer computations you’re asking the client to do, the better

edit: well, that holds true for Texas - I’ve not had time to look into Drab yet; I only heard about it for the first time a couple of weeks before the conference! Texas does all the diffing on the server, so the only thing the client does is apply changes from patches it gets over the socket, or build the DOM from a typical HTML string. With all of that said, a browser is pretty heavy for IoT
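To make that split concrete, here’s a minimal sketch of the idea - note this is not Texas’s actual protocol; the function names and patch format are made up for illustration:

```javascript
// Server side: diff the previous render against the next one (modeled
// here as flat maps of DOM node id -> rendered text) and emit only the
// changed nodes as patches. (Removals are omitted for brevity.)
function diffRender(prev, next) {
  const patches = [];
  for (const [id, text] of Object.entries(next)) {
    if (prev[id] !== text) patches.push({ id, text });
  }
  return patches;
}

// Client side: apply each patch blindly to the local view. In a real
// browser this would be `document.getElementById(id).innerHTML = text`.
function applyPatches(view, patches) {
  for (const { id, text } of patches) view[id] = text;
  return view;
}
```

Diffing `{ count: "1" }` against `{ count: "2" }` ships a single one-node patch over the socket, which is why the client stays so thin.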


GRiSP, as far as I can see, doesn’t actually concern itself with the frontend, and none of Peer Stritzinger’s repos for it have any frontend code in them at all. I don’t think the app in the video is more than an example of what you can have, and I wouldn’t read too much into it.

Edit: Personally I thought Phil Freeman stepping down from maintaining it would make PureScript less attractive, but it really doesn’t seem like it’s slowed down at all and he’s still involved. 0.12 was released successfully with a ton of stuff in it and there’s still a healthy turnout in terms of contributors.


Is it going to be expected that we use turbolinks for speedy GET page transitions and liveview for partial updates on existing pages, or do you plan for liveview to completely eliminate turbolinks at some point?


I’ve been exploring https://unpoly.com/ as a more powerful turbolinks alternative - plus it comes without the bad taste some people have about turbolinks (which really stems from people not reading the documentation and understanding the library before using it)

I don’t know if liveview intends to make new page requests happen over the websocket, but with texas and drab you should be able to use turbolinks or unpoly just fine - and they are very robust solutions for rich user interfaces imho

that’s not to say texas (or drab maybe? can’t speak for the drab folks) won’t ever concern itself with new page requests - just that unpoly seems to have very thoroughly solved that problem, so it’s not a high priority for me anyway


No, zero expectations on turbolinks usage. GET requests for phoenix applications are already speedy :slight_smile:

There may be a similar mechanism later on for transitioning from one view to another, without a URL change, but a turbolinks-like feature in itself is not a current goal of mine.

Thanks, so continue using Turbolinks (if anything just for full page transitions / URL changes).

IMO even with Phoenix being speedy, you can very much feel the difference between Turbolinks vs no Turbolinks, mainly because the browser has to parse so much CSS/JS on every request without Turbolinks. Even if it takes 500 microseconds for Phoenix to respond, all the asset-parsing time is spent client-side.


the problem might be that phoenix is too fast :joy: - if users get used to really snappy interactions, they may start interacting before the websocket is set up - that problem is solved with turbolinks (or unpoly, my current interest)

that is to say, a full page rerender will break and reconnect the websocket every time - unpoly and similar libraries solve that problem by handling the “full page request” behind the scenes with ajax, which keeps your current websocket alive
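The trick these libraries use can be sketched in a few lines of plain JS - illustrative only; `extractFragment` here is a naive string scan, not unpoly’s real implementation:

```javascript
// Pull the contents of one tag (e.g. <main>…</main>) out of a full-page
// HTML response. A real library parses the HTML properly; this naive
// string scan is just enough to show the idea.
function extractFragment(html, tag) {
  const open = html.indexOf(`<${tag}`);
  const close = html.indexOf(`</${tag}>`);
  if (open === -1 || close === -1) return null;
  const start = html.indexOf(">", open) + 1;
  return html.slice(start, close);
}

// In a browser the navigation would then look roughly like:
//   const res  = await fetch(url);                       // ajax, not a full load
//   const main = extractFragment(await res.text(), "main");
//   document.querySelector("main").innerHTML = main;     // swap one fragment
//   history.pushState({}, "", url);                      // URL changes,
//                                                        // websocket stays open
```

Because the page itself is never torn down, the existing websocket (and any other transient state) survives the “navigation”.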

I have looked over unpoly’s docs but I always stuck with turbolinks, mainly because it is pretty much drop-in for getting fast page transitions and the lib itself isn’t too massive.

Kind of hoping liveview will replace a lot of unpoly’s extra features.

My first thought was hybrid mobile apps built with WebViews. I’d be interested to see how those “feel” using something like LiveView.


the main thing I want to take advantage of with texas is unpoly’s prefetch ability… since texas can already keep all the data up to date in real time, it could easily send a full rendering up front so that link clicks would be instant, with fully rendered HTML just being dropped in place - so yeah, that would actually be faster at showing usable pages than basically every front-end framework solution I’ve seen
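The prefetch idea itself is simple enough to sketch - hypothetical names, not unpoly’s API: warm a cache when the user hovers a link, then serve the fully rendered HTML from the cache on click:

```javascript
// makePrefetcher takes a page-fetching function (in a browser, a fetch
// wrapper) and returns hover/click handlers around a simple cache, so a
// clicked link that was hovered first can be swapped in instantly.
function makePrefetcher(fetchPage) {
  const cache = new Map();
  return {
    // call on mouseover / touchstart: warm the cache
    prefetch(url) {
      if (!cache.has(url)) cache.set(url, fetchPage(url));
    },
    // call on click: instant when prefetched, falls back to a fetch otherwise
    get(url) {
      return cache.has(url) ? cache.get(url) : fetchPage(url);
    },
  };
}
```

With a real `fetch` the cached value is a promise the click handler awaits; by the time a human moves from hover to click, it has usually already resolved.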

When it comes to web interfaces I don’t expect the majority of IoT devices to have anything more sophisticated than your run-of-the-mill router administration interface, as could be cobbled together with something like apache-asp. Putting SVG/D3.js into the mix makes sense to boost the information-visualization capabilities a bit.

By and large, I’d expect IoT devices to

  • expose data and diagnostics in two formats: human-readable HTML and machine-readable JSON/XML
  • provide some interactivity for the purpose of device configuration

I would not expect them to

  • burn lots of electrical power, CPU cycles, and bandwidth by providing continuous rendering support for a client app OR by serving some unnecessarily bloated JS SPA.

Anything requiring the “massaging of data” I would expect to be shipped off to some (data centre based) integration service.
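For the two formats mentioned above, a tiny content-negotiation helper is about all a device would need - a sketch with made-up names, not any framework’s API:

```javascript
// Render the same device state as machine-readable JSON or
// human-readable HTML, picked from the request's Accept header.
// On the device this would sit behind a minimal HTTP handler.
function renderState(state, acceptHeader) {
  if ((acceptHeader || "").includes("application/json")) {
    return { type: "application/json", body: JSON.stringify(state) };
  }
  const rows = Object.entries(state)
    .map(([key, value]) => `<tr><td>${key}</td><td>${value}</td></tr>`)
    .join("");
  return { type: "text/html", body: `<table>${rows}</table>` };
}
```

One state map, two representations - a curl script gets JSON, a browser gets a plain table, and nothing heavier than string concatenation runs on the device.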

I think that’s really interesting and I’d love to see what they’d be like too. I’m guessing it could be pretty awesome :003:

Do you mean Nerves device as the server or the client? As the server, I think it would work mostly fine as long as you’re on the right network. For the client, I can think of a lot of cases where it’s not a good idea. Some IoT devices are necessarily mostly/always offline. Those types of devices need the brains to be local.


Phoenix can run on the device itself, and I think a lot of the Nerves folks are using Phoenix + headless Chromium UI to power GUI interfaces, so in this case LiveView would be great even for offline use cases.


I meant, if the nerves device were the client. But, thinking about it, that doesn’t make much sense.


This discussion focuses a lot on Turbolinks’ promise to load pages faster. And indeed with the performance of Elixir and Phoenix this is much less of an issue than with slower languages.

However, server-side apps also have the issue that all transient state (scroll positions, unsaved form values, etc.) is lost with every click.

Unpoly addresses this by only using the changed fragments of a server response. This solves most problems with transient state being destroyed.

We’ve found that in many apps this is enough to not need technology with more moving parts, such as SPA frameworks or web socket connections with persistent server-side UI state.

Here is an example app that illustrates this:


Hi,

My 2 cents: my webapp/website uses pjax-api. It’s like Turbolinks but with the capacity to selectively update certain parts of the page.

I use it to maintain an audio stream and/or a YouTube video during navigation.
I configure pjax-api to replace only the main tag. No need for an SPA.

It’s a good use case for LiveView.
