Can you hook up a LiveView to a prerendered static page?

Since you are already running on fly.io, why the extra complexity of the HTML CDN dance? Your LiveView is already at the edge, so your initial page render will be super fast. Rather than introduce a complex pre-rendering step, simply cache any expensive operations and let the LV do its normal thing. For example, if you run read replicas on fly, you are already doing cached local reads, and the LV render should be indistinguishable from your static HTML example. That’s one of the many things I love about fly – it removes entire operational layers. You don’t need a CDN for your HTML, CSS, and JS because Plug.Static is your CDN when you’re running on the edge near users :slight_smile:
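
To make that concrete, here is a rough sketch of what “cache any expensive operations” could look like inside a LiveView. The module, table, and query names are made up for illustration, and the ETS table would be created at application start:

```elixir
# Sketch only; :home_page_cache is assumed to be created at application start:
#   :ets.new(:home_page_cache, [:named_table, :public, read_concurrency: true])
defmodule MyAppWeb.HomeLive do
  use MyAppWeb, :live_view

  @cache :home_page_cache
  @ttl_ms 60_000

  def mount(_params, _session, socket) do
    # Serve the expensive read from the cache so the first render stays fast
    {:ok, assign(socket, featured: cached(:featured, &expensive_featured_query/0))}
  end

  def render(assigns) do
    ~H"<div><%= inspect(@featured) %></div>"
  end

  defp cached(key, fun) do
    now = System.monotonic_time(:millisecond)

    case :ets.lookup(@cache, key) do
      [{^key, value, expires_at}] when expires_at > now ->
        value

      _miss ->
        value = fun.()
        :ets.insert(@cache, {key, value, now + @ttl_ms})
        value
    end
  end

  # Stand-in for a slow DB or API call, e.g. MyApp.Music.list_featured_songs()
  defp expensive_featured_query, do: []
end
```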


@chrismccord - Do you suggest putting even file uploads in the static folder rather than on S3?

While this is true, there are still benefits to running a separate CDN: ETag-based caching, and updating static assets without taking down the app. A separate CDN also has economies of scale and is cheaper from an egress-bandwidth point of view.
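
For reference, a minimal sketch of how one might serve Phoenix static assets from a separate CDN host (the hostnames here are made up):

```elixir
# config/prod.exs -- with static_url set, URL helpers such as
# Routes.static_url/2 build asset URLs pointing at the CDN host
# instead of the app host.
config :my_app, MyAppWeb.Endpoint,
  url: [host: "example.com", port: 443, scheme: "https"],
  static_url: [host: "cdn.example.com", port: 443, scheme: "https"],
  cache_static_manifest: "priv/static/cache_manifest.json"
```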

Although I haven’t tried it myself, I think booting LV from a static page served from another domain should be possible. If enough people are interested, we could make this one of the supported deployment options.


Strictly speaking about JS/CSS assets in this case: yes, you can update assets without a deploy when using a CDN, but the trade-off is extra operational plumbing, another vendor, version syncing, etc. The default phx.digest deploy is fantastic as a default for most folks. I don’t think booting from a static page is ever something we’ll support. I’m not convinced it’s needed, and moreover LV was created exactly to avoid these kinds of cached prerender gymnastics.
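
For anyone unfamiliar with that default, it looks roughly like this in a generated Phoenix app (module names are the generator defaults, adjust for your app):

```elixir
# At deploy time, `mix phx.digest` (or `mix assets.deploy`) fingerprints
# everything in priv/static and writes a cache_manifest.json.

# config/prod.exs -- tell the endpoint to serve the digested filenames:
config :my_app, MyAppWeb.Endpoint,
  cache_static_manifest: "priv/static/cache_manifest.json"

# lib/my_app_web/endpoint.ex -- Plug.Static serves them straight from
# priv/static, with ETag and gzip support:
plug Plug.Static,
  at: "/",
  from: :my_app,
  gzip: true,
  only: ~w(assets fonts images favicon.ico robots.txt)
```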


Two reasons: 1) It becomes a bit expensive to run fly.io in every single region (many projects I don’t even make any money on), and often the first load matters a lot when people come from Google. 2) Fly doesn’t exist in every region (yet). For example, there is no fly region in Stockholm (and sure, London/Amsterdam isn’t so far away, but still).

Sure, I don’t “need” to do this, and if it’s not possible it won’t really be a reason not to use Phoenix, but I feel it should be possible. I just think it would be really cool to be able to speed up the site. It was twice as fast when caching with Cloudflare compared to using a far-away fly.io region (and yes, it could be solved by adding every region, but again, sometimes that’s not worth the cost and you still want it to be fast across the globe).


What is your ping time on https://livebeats.fly.dev? A single Amsterdam or Frankfurt region is going to give most folks around Europe fantastic latency and TTFB. Most folks don’t need to run in every region. Usually a handful will do and get you speedy access for your regional users.

Cost-wise, you can run LiveBeats on Fly for about $6.80/mo per region. That’s ~$200/mo for all 22 regions, but again you’d only need six or so to cover most of the planet, unless you are hyper-optimizing. Keep in mind this is also with Postgres replicas running in every region and local disk volumes. I might sound like a shill, but I am actually surprised it’s this cheap :smiley:

All that said, static CDN HTML + LV just doesn’t make sense to me. Since you understand the value of CDN’d content (faster page loads, better TTFB), you understand why running a full-stack app on the edge is the same value-add, but even more so. Under your scenario, your static HTML is fast, but then your WebSocket LV interactions and DB reads are slower because they are talking to servers farther away. In my scenario, you simply run your full-stack app on the edge, get great TTFB and initial page load, and then all interactions thereafter are extremely fast as well. So CDN HTML is more moving pieces, more infrastructure, and more complexity for overall worse UX.


I asked this question because, in the LiveBeats app, the avatar files are uploaded to the local file system. With edge deployment, is that the best approach, or was this aspect ignored due to the educational nature of the application?

It depends on what I’m building. Scalable cloud storage exists for a reason, but I would only reach for it if I knew I had unbounded storage requirements or I was looking for an easy offsite backup of data. For LiveBeats, I wanted to show some Elixir strengths like processing the file on the server before storing it in its final location (i.e. no lambda to farm out to for parsing the mp3). LiveBeats could still very easily throw the files onto S3 instead of disk after we parse them, but I think local disk makes sense as an example application folks can deploy anywhere with a disk. I also already released a screencast showing both disk and S3 LiveView uploads, so that’s covered well already. The neat thing about LiveBeats using local disk is that when I went from single node to multi-node, I only needed to add read replicas and proxy the files across instances by streaming from the controller:

Less than 100 LOC and we have file streaming across servers using Mint. You could also easily cache these files on the local hosts for CDN’d assets, but for the streaming-audio use case, a bit of extra latency is fine for what we’re doing in LiveBeats.
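
Not the actual LiveBeats code, but a rough sketch of the pattern: a controller that sends the file directly when it lives on the local node and otherwise proxies it from the instance that has it, chunking the Mint response straight through to the client (the lookup function, host, and port are placeholders):

```elixir
defmodule MyAppWeb.FileController do
  use MyAppWeb, :controller

  def show(conn, %{"id" => id}) do
    # lookup_file/1 is a placeholder for however your app resolves file locations
    case lookup_file(id) do
      {:local, path} -> send_file(conn, 200, path)
      {:remote, host, path} -> proxy_stream(conn, host, path)
    end
  end

  defp proxy_stream(conn, host, path) do
    # Port 4000 is an assumption; point this at the peer instance serving the file
    {:ok, mint} = Mint.HTTP.connect(:http, host, 4000)
    {:ok, mint, ref} = Mint.HTTP.request(mint, "GET", path, [], nil)

    conn = send_chunked(conn, 200)
    stream_loop(conn, mint, ref)
  end

  # Receive Mint messages and relay each data chunk to the client until done
  defp stream_loop(conn, mint, ref) do
    receive do
      message ->
        case Mint.HTTP.stream(mint, message) do
          {:ok, mint, responses} ->
            case relay(conn, ref, responses) do
              {:done, conn} -> conn
              {:cont, conn} -> stream_loop(conn, mint, ref)
            end

          {:error, _mint, _reason, _responses} ->
            conn

          :unknown ->
            stream_loop(conn, mint, ref)
        end
    end
  end

  defp relay(conn, ref, responses) do
    Enum.reduce(responses, {:cont, conn}, fn
      {:data, ^ref, data}, {:cont, acc} ->
        {:ok, acc} = chunk(acc, data)
        {:cont, acc}

      {:done, ^ref}, {_tag, acc} ->
        {:done, acc}

      _other, acc ->
        acc
    end)
  end

  defp lookup_file(_id), do: {:local, "/tmp/example.mp3"}
end
```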
