Off-topic posts: Phoenix LiveView Info

So, for your personal convenience and the ease of using “just a name” in the source code at development time, you think it’s OK to repeatedly send a smidgen of data halfway around the world at runtime, have it processed, and then send it back halfway around the world, rather than just passing a tiny command function locally in the browser that accomplishes exactly the same objective much more efficiently?

Apparently I need to spam this a few more times:

Programmers know the benefit of everything and the tradeoffs of nothing

Assuming you’re talking about LiveView in general: absolutely, it can be a better option to serve your personal convenience (read: productivity, maintainability, time to market, etc.), provided your use case allows for the latency. All kinds of user interaction on the client already require sending data to the server and back, so framing it as “Programmers know the benefit of everything and the tradeoffs of nothing” isn’t accurate. Some use cases, like autocomplete or optimistic UIs, don’t work without the server anyway, so you are no better off in those cases with either choice.

If you’re talking about the pattern that English3000 laid out, they are referring to strictly server-side LiveView messaging, so there is no round trip to the client and back.

3 Likes

I wasn’t talking even remotely about LiveView.

Personally, I see LiveView’s primary use case as internal apps over a corporate network. I’m much less convinced when it comes to targeting mobile devices on the go. But that is just my personal opinion.

I was referring to:

one needs to pass an action function … channel events for actions

if something happens in one component [in the browser], just as it can send itself an event [on the server] … it could send an event to another component’s [in the browser] state process [on the server].

A typical action function passed from an (owner) container component to an (ownee) presentation component handles something as trivial as a click. The landscape being painted is one where there are numerous isolated “components” active in the device’s browser which can only collaborate with one another through their server-based processes - rather than the components simply and trivially communicating locally right in the browser.
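To make that concrete, here is a minimal sketch of the pattern as I understand it (module, message, and state names are all hypothetical): each browser component is backed by a server-side state process, and the components can only collaborate by having those processes message each other.

```elixir
defmodule CounterButton do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def init(state), do: {:ok, state}

  # the click happens in the browser, gets pushed over the socket,
  # and lands here in this component's server-side state process
  def handle_cast({:clicked, payload}, state) do
    # collaborating with the sibling component means yet another
    # server-side message (plus another render diff pushed to the browser)
    GenServer.cast(CounterDisplay, {:increment, payload})
    {:noreply, state}
  end
end

defmodule CounterDisplay do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{count: 0}, name: __MODULE__)
  def init(state), do: {:ok, state}

  def handle_cast({:increment, _payload}, state) do
    # re-rendering and pushing the diff back down to the browser is omitted
    {:noreply, %{state | count: state.count + 1}}
  end
end
```

Every one of those interactions crosses the network, where a local callback in the browser would have done the same job.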

@peerreynders, not really sure what you’re talking about?

What I was saying applies to those use cases where you do need the server (i.e. saving data for persistence across sessions, surviving crashes, etc.).

And even if I were just using the client, I’d still need to pass a function to a React child component to modify the state of its parent.

Using channels/message passing is for when you have a DOM event which interacts with the server and you want multiple parts of your app to do something with the response.
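As a rough server-side sketch (the channel module, topic, and event names here are hypothetical), the channel handler can broadcast the server’s response so that every subscribed part of the app can react to it:

```elixir
defmodule MyAppWeb.ItemChannel do
  use Phoenix.Channel

  def join("items:lobby", _params, socket), do: {:ok, socket}

  # a DOM event arrives from the client and needs the server (e.g. to persist)
  def handle_in("save", %{"item" => item}, socket) do
    # ... persist `item` here ...
    # broadcast the response so that every part of the app subscribed
    # to "items:lobby" can do something with it
    broadcast!(socket, "item_saved", %{"item" => item})
    {:noreply, socket}
  end
end
```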

Can you give a concrete example to clarify what exactly you’re criticizing? :slight_smile:

1 Like

Imagine you have a button with a plus sign, and a box with an integer in it. With LiveView, theoretically, clicking the button would involve a round trip to the server just to increment that integer. At least that’s how I understood it :wink:
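For reference, the canonical counter sketch (callback signatures may differ between LiveView versions): every click is pushed to the server, the assign is incremented there, and a diff comes back to update the box.

```elixir
defmodule DemoWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_session, socket), do: {:ok, assign(socket, count: 0)}

  def render(assigns) do
    ~L"""
    <button phx-click="inc">+</button>
    <span><%= @count %></span>
    """
  end

  # the round trip: the click is sent to this server process, which
  # increments the assign and sends a minimal diff back to the browser
  def handle_event("inc", _value, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end
end
```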

5G on the horizon will make streaming even high-res (4K 60fps) games on mobiles a reality :slight_smile: (Project Stream required just a 25 Mbps connection for 4K 30fps IIRC).

The three big players are all very much into it.

So in some ways, I see LiveView doing something similar for the web - and I’m really excited about it :003:

Your opening statement:

One place where React struggles is when one wants an event in a child component to “bubble up” to its parent. Basically, one needs to pass an action function (with a captured reference to the parent’s state).

React’s primary use case is single page applications. The primary premise of single page applications is to manage complex client state on the client side. The justification for an SPA over simple, dynamically server generated HTML/CSS pages is that client-side state is necessary for improved user experience that server-generated HTML/CSS pages cannot deliver for one reason or another.

Using channels/message passing is for when you have a DOM event which interacts with the server and you want multiple parts of your app to do something with the response.

In essence, rather than “eliminating client side state” you are merely relocating client-side state back to the server. Typically that approach is a lot more sensitive to the 8 fallacies of distributed computing (the first three are the most important). Consider how the various approaches handle server interaction:

  • Full server-side HTML/CSS pages always have to completely reload, but that is usually mitigated by designing each page to be as effective as possible on its own.
  • Server-generated HTML/CSS pages with jQuery/Ajax-style DOM twiddling optimize a bit: they typically don’t require as many page loads because additional data is loaded asynchronously, introducing some client-side state which in turn drives client-side renders besides page loads.
  • SPAs drive this idea to the extreme by committing to a massive (or staggered) page load in the beginning in order to later reduce server interactions to an “only as needed” basis, driving client-side renders primarily from changes in client-side state.
  • Progressive web applications (not to be confused with web pages designed with Progressive Enhancement) also enable the page to cache itself, with associated data and client-side state, in the browser itself so it can offer some reduced functionality while the server cannot be reached.

So the technology trend is actually toward more and more client-side autonomy in the absence of the server once the primary load is complete. This trend is partially driven by acknowledging the first 3 of the 8 fallacies of distributed computing:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.

As a consequence:

  • Network communication should not be chatty.
  • You should transfer more data to minimize the number of network round trips. You should transfer less data to minimize bandwidth usage. You need to balance these two forces and find the right amount of data to send over the wire.

Your proposal of realizing (potentially fragmented) client-side (React-style) component state as server-side processes:

  • Is incredibly chatty over the network, as each little component has to interact with its server-side state to collaborate with another component that also renders itself on the client side.
  • Requires a large number of round trips.
  • Potentially requires “abundant, available bandwidth”.

In comparison to React, a much more likely implementation is a “lifted up” application state (or Redux-style store) on the server. Any part of the client is capable of dispatching an “action” toward the server; the server evolves the current application state (most likely stored in a single process), which leads to a new render on the server, generating a render diff that is dispatched to the client.
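A minimal sketch of that shape (module, action, and state names are hypothetical; a LiveView-style transport is assumed, and callback signatures vary by version): a single process per client holds the whole application state and folds every dispatched action into it, so each interaction produces one server-side render and one diff.

```elixir
defmodule DemoWeb.AppLive do
  use Phoenix.LiveView

  # one "lifted up" application state per client, Redux-store style
  def mount(_session, socket), do: {:ok, assign(socket, state: %{todos: []})}

  def render(assigns) do
    ~L"""
    <form phx-submit="add_todo"><input type="text" name="text"/></form>
    <%= for todo <- @state.todos do %><p><%= todo %></p><% end %>
    """
  end

  # every client interaction is dispatched as an "action"; the reducer
  # evolves the single application state and a render diff goes back down
  def handle_event(action, payload, socket) do
    {:noreply, update(socket, :state, &reduce(&1, action, payload))}
  end

  defp reduce(state, "add_todo", %{"text" => text}),
    do: %{state | todos: [text | state.todos]}

  defp reduce(state, _unknown_action, _payload), do: state
end
```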

There seems to be little benefit to fragmenting application state according to (visual) component boundaries across long-lived server processes. React’s functional components would likely find equivalents in simple render functions that are fed the relevant fragment of application state. During renders, independent parts of the view could be split among short-lived processes and stitched together when completed.

However, there is likely very little reason to keep more than one long-lived process per client (to maintain application state in memory) between state update/render cycles.


Now, in the context of internal apps over a corporate/institutional backbone network, the LiveView trade-offs can be an effective way of reducing development costs, possibly even for the couch-based consumer in well-serviced, high-availability urban areas. But there has been a general trend of web consumption shifting toward mobile devices, and that is what is driving browser-based client technologies.

3 Likes

One thing to keep in mind, though, is that an SPA/PWA only really saves on network trips and request size after it has been fully downloaded and is running. So it makes lots of sense for applications/websites which are used often and tend to already be cached. Where this doesn’t fit, I can certainly see a smaller JS footprint and a reasonable number of network round trips (LiveView is debounced and optimized to send just the data needed) resulting in less data transferred than sending a full client-side app on first visit, especially if any meaningful action within the app needs a connection anyway. I wouldn’t expect a React request sending JSON to the server to be considerably more lightweight than what LiveView sends.
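To illustrate the debouncing point (a minimal sketch; the phx-debounce attribute and its exact behaviour depend on the LiveView version), a burst of keystrokes ends up as a single small event instead of one round trip per keypress:

```elixir
defmodule DemoWeb.SearchLive do
  use Phoenix.LiveView

  def mount(_session, socket), do: {:ok, assign(socket, query: "")}

  def render(assigns) do
    ~L"""
    <form phx-change="search">
      <input type="text" name="q" value="<%= @query %>" phx-debounce="300"/>
    </form>
    """
  end

  # only one "search" event (with the current value) is pushed to the
  # server once the user pauses typing for 300ms
  def handle_event("search", %{"q" => q}, socket) do
    {:noreply, assign(socket, query: q)}
  end
end
```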

That’s what I actually feel as well. In a really optimized scenario, state for purely visual changes could stay fully on the client side, while only actions changing application state would need to go to the server. But that’s really not the use case LiveView is targeting; it would need far more involved client-side logic and templating that works the same on the client as on the server.

1 Like

The Cost of JavaScript in 2018 shows that there is an awareness that the current trend of ever-increasing JavaScript payload sizes is not sustainable for the desired level of UX. So there could very well be an impending shift in how browser-based clients operate. JavaScript-based VDOM rendering created an excuse to do everything in JavaScript all the time.

Now other, less JavaScript-centric approaches may be explored, for example content-template-based partial rendering driven by client application state.

I wouldn’t expect a React request sending JSON to the server to be considerably more lightweight than what LiveView sends.

JSON payloads should only be large for prefetching data, i.e. loading something before it’s needed (of course some prefetched data may never get used, but to a certain extent that is a design issue).

if any meaningful action within the app needs a connection anyway.

The issue with mobile connections is that the connection quality can vary wildly during any one session. Media streaming can compensate by grabbing more content than required when the connection is good so that there is sufficient content buffered to continue operation when the connection quality drops.

Server based interactive applications require a consistently high quality connection to respond to every user interaction in a timely manner. Whenever the connection quality drops the user experience invariably degrades.

2 Likes

Are you serious? The market will be horribly small for many years while the technology spreads, and more importantly the battery will be dead way before you can end your game session. The network is a huge drain on battery already, and it will get worse with 5G, so a game plus a constant 5G connection is mental.

2 Likes

I agree, but you forgot an important point: the battery drain caused by network access is enormous, which is why it’s really better to avoid sending small amounts of data too often (mobile devices shut off the antenna to preserve the battery, but they cannot do so if apps send data all the time; that’s why good apps aggregate their data locally and send it in bulk at longer intervals).

Really, it doesn’t matter that the amount of data is minimised/optimised/compressed: if you want to save the battery, the network should be used sparingly, in bulk.

I was thinking about using LiveView for forms, but as no fallback will be in place when the client has no JS, I don’t see the point of it aside from corporate apps on LANs.

1 Like

But it’s not really a game - it is just a ‘video’ being streamed. No major CPU- or GPU-intensive tasks (compared to running the game on the device).

Not sure about Android phones, but iPhones have excellent battery life, and most will also be moving to USB-C, which allows charging while being connected to a display. So mobile phones could essentially replace consoles: connect to the TV and charge at the same time with one lead, controllers via Bluetooth… and Bob’s your uncle :003:

I have no doubt there are requirements out there for JS-disabled clients, and if you have such requirements then LiveView isn’t a good fit. That said, in my ~10 years of consulting across dozens of projects and domains, I have never, not once, had a requirement to handle clients without JavaScript.

If you have those requirements, LiveView is not for you, but I don’t think we should hang LV’s merits on this aspect.

6 Likes

Please define “excellent battery life” :wink: For me, “excellent” would mean lasting several days. But anyway, iPhone and Android battery life is partially determined by how much they can put their antennae to sleep. I assure you that your iPhone, with an app that prevents its antenna from going to sleep, is going to deplete its battery in a few hours at most.

Oh, they will, I agree. They (the better ones) already have more than enough power to be used as PCs, and only lack proper OSes that would offer adaptable UIs.

But because they have so much power already, they can run the game engines just fine. This would require proper tests, but I’d wager that using the network so heavily just to show a video (30 fps? 60 fps? No, people have started to ask for 120…) would drain the battery more than computing the frames locally.

When I studied gaming tech at university 10 years ago, there were people thinking that it would soon be possible to play on a potato, with powerful servers rendering the frames from afar. Such companies exist now, but because users want better and better resolutions with higher and higher fps, it won’t do. Or more precisely, I think it will only become viable when optical fibre is everywhere.

So, rendering a game on a server to display it via a potato (phone, small PC, …): yes, but only when docked with access to an optical fibre network. 5G will be way too power-hungry for that.

1 Like

Hahaha for that you best get a Kindle :lol:

I would expect similar battery life to streaming YouTube videos :023:

I’m sure other people with different requirements won’t see a problem with that, indeed! It’s a personal choice: I activate JS only when necessary and don’t like it when basic HTML behaviours begin to require JS.

What a world we live in that considers a few days of battery life to be too much :lol:

YouTube videos are optimised for streaming, and that is done ahead of time with algorithms which take advantage of the whole video and plenty of CPU/GPU power, since time is not a problem. So game streaming cannot be as optimised, really. I would be curious to find out how much less optimised it is network-wise, though…

Personal choice is a fair argument for using or not using LiveView, but claiming you “don’t see the point of LV outside of corp LANs” because of how you personally choose to browse the web is a bit of hyperbole.

2 Likes

This is technically possible on my country’s 4G network today. The real issue is the carriers, with the limits and price tags they put in place. So while many awesome things are possible even today, they don’t happen until somebody is satisfied that these awesome things will be a cash cow.

1 Like

Ah, but I do think that outside of LANs it might become a problem (depending on how it is done, of course), as people will prefer competing websites that don’t drain their batteries so much by sending data over the net all the time. It is not easy to see where the happy medium lies, and I might be too pessimistic here, but there is an equilibrium to be reached between developers’ ease of use (and doing everything in one language on the server sure helps) and users’ happiness. Considering the average power of even today’s phones, I think more should be done on the client side to save battery life and speed up interaction.

As an aside, I know LiveView is not the same beast as Drab, but my curiosity led me to the Drab website the other day. There was a very small preview on the page which didn’t do much. I went to bed and was woken during the night by my PC fans. When I went to check several hours later, I could see that the single webpage I had open on the Drab site had been using 100% CPU the whole night. That’s not a good use of resources, and I hope you will make LiveView as lean as possible for both network and CPU!