I’m wondering whether it’s possible to assign a pipeline (in the Phoenix sense) to Hologram routes. More specifically, I would like all my Hologram pages (but not necessarily the pages Phoenix is still routing) to pass through a bespoke MyApp.Plugs.Auth plug.
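To make the question concrete, here is the Phoenix side of what I have in mind. This is a minimal sketch: `MyApp.Plugs.Auth` assumes the standard Plug behaviour and that the session is fetched earlier in the pipeline, and the `:user_id` check and pipeline name are placeholders. Phoenix-routed pages can opt in via `pipe_through/1`; the open question is whether Hologram-routed pages can be pointed at the same pipeline.

```elixir
defmodule MyApp.Plugs.Auth do
  # Placeholder auth check; assumes the session was fetched earlier
  # in the pipeline so get_session/2 works.
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    if get_session(conn, :user_id) do
      conn
    else
      conn
      |> send_resp(401, "unauthorized")
      |> halt()
    end
  end
end

defmodule MyAppWeb.Router do
  use MyAppWeb, :router

  pipeline :hologram_auth do
    plug MyApp.Plugs.Auth
  end

  scope "/", MyAppWeb do
    # Phoenix-routed pages pass through the plug as usual...
    pipe_through :hologram_auth
    # get "/dashboard", DashboardController, :index
  end

  # ...but Hologram pages declare their routes in their own page
  # modules, so it is unclear where pipe_through would apply to them.
end
```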
One of the issues with LiveView is the double render when going from dead render to live render, and the fact that we lose all state on disconnection. This issue proposes that we render a LiveView (a Phoenix.Channel, really) upfront and then have it "adopted" when necessary. In a nutshell:

* On disconnect, we keep the LiveView alive for X seconds. Then, on reconnect, we reestablish the connection to the same LiveView and just send the latest diff (the keep-alive sketch below shows the generic shape of this).
* On dead render, we spawn the LiveView right away and keep it alive until the WebSocket connection arrives and "adopts" it, so we only need to send the latest diff.

However, this has some issues:

* If the new connection happens on the same node, it is perfect. However, if it happens on a separate node, then we can either do cluster round-trips on every payload, copy only some assigns (from `assigns_new`), or build a new LiveView altogether (and discard the old one).
* This solution means we will keep state around on the server for X seconds. This could perhaps be abused for DDoS attacks or similar. It may be safer to enable this only on certain pages (for example, where authentication is required) or to keep the timeout short on public pages (e.g. 5 seconds instead of 30).

On the other hand, this solution should be strictly better than a cache layer for a single tab: there is zero copying, and smaller payloads are sent on both connected render and reconnects. However, keep in mind this is not a cache, so it doesn't share across tabs (and, luckily, it does not introduce any of the caching issues, such as unbounded memory usage, cache key management, etc.).

There are a few challenges to implementing this:

* We need to add functionality for adoption in Phoenix.Channel first.
* We need to make sure that an orphaned LiveView will submit the correct patch once it connects back. It may be that we cannot squash patches on the server; we would need to queue them instead, which can introduce other issues (see the queue sketch below).
* We may need an opt-in API.

While this was extracted from #3482, this solution is completely orthogonal to the one outlined there, as `live_navigation` is about two different LiveViews.
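This is not LiveView internals, just the generic shape of the keep-alive described above: a process that arms a shutdown timer when its client disconnects and disarms it when a new connection "adopts" it in time. The module and function names (`AdoptableState`, `adopt/1`, `orphan/1`) and the 30-second window are made up for illustration.

```elixir
defmodule AdoptableState do
  use GenServer

  # Illustrative orphan window; per the notes above, public pages
  # might want something much shorter (e.g. 5 seconds).
  @orphan_timeout :timer.seconds(30)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  def adopt(pid), do: GenServer.call(pid, :adopt)
  def orphan(pid), do: GenServer.cast(pid, :orphan)

  @impl true
  def init(_opts), do: {:ok, %{timer: nil}}

  @impl true
  def handle_cast(:orphan, state) do
    # The client went away: schedule our own shutdown in X seconds.
    timer = Process.send_after(self(), :expire, @orphan_timeout)
    {:noreply, %{state | timer: timer}}
  end

  @impl true
  def handle_call(:adopt, _from, %{timer: timer} = state) do
    # A new connection arrived in time: cancel the pending shutdown.
    if timer, do: Process.cancel_timer(timer)
    {:reply, :ok, %{state | timer: nil}}
  end

  @impl true
  def handle_info(:expire, state) do
    # No one adopted us within the window; release the state.
    {:stop, :normal, state}
  end
end
```

In the actual proposal this bookkeeping would presumably live in the LiveView/channel process itself rather than in a separate server.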
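And for the patch-squashing challenge: if diffs cannot be squashed on the server, the orphaned process has to queue them in arrival order and replay them once adopted. A sketch of that bookkeeping (the module and the reversed-list representation are illustrative, not from the proposal):

```elixir
defmodule DiffQueue do
  # Diffs accumulate while the view sits orphaned.
  def new, do: []

  # Prepend for O(1) insertion; order is restored on flush.
  def push(queue, diff), do: [diff | queue]

  # Replay oldest-first once the new connection adopts the process.
  def flush(queue), do: Enum.reverse(queue)
end
```

The unbounded growth of this queue while the view is orphaned is exactly the kind of "other issue" that queueing introduces.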




















