Good/best architectures for 'resilient JS sync-with-server' LiveView pages?

This is all rather ‘architecture astronautics’, so keep that in mind and feel free to ignore if that offends you! :upside_down_face: :upside_down_face:

I have a little (private, effectively closed-source) task/reminder app that uses Phoenix and LiveView, but it’s not ‘ready for production’, at least not for all the things I want it to (eventually, someday) do.

I’m using another site/service/app and I’ve noticed that it sends JS requests to the server to ‘sync’ with it (really, to send an update) for every ‘atomic update’, e.g. checking a checkbox to mark a task as completed. They all seem to be made through a single ‘synchronous bottleneck’.

I’m curious what a better (or the best) alternative would/could be. I suspect Phoenix+LiveView might (now) provide this (or something very close) ‘out of the box’. Please let me know if that’s the case!

An outline of what I’ve cobbled together in my head (a rough JS sketch follows the list):

  1. Maintain a queue of requests.
  2. Use a reasonable debounce frequency, but otherwise send a queued request immediately if we’re not still waiting for a response to a previous request.
  3. If multiple requests build up in the queue, send them as a batch (up to some reasonable limit) when ‘the request sender’ is again available.
  4. Automatically retry (some) failed requests, e.g. ones that aren’t ‘the request itself is bad’ failures/errors.
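
A purely illustrative sketch of points 1–4 in plain JS (the /api/sync endpoint, the payload shape, and the limits are all made up; only the queue/debounce/batch/retry logic is the point):

```js
// Purely illustrative: a single client-side queue that debounces, batches and
// retries 'atomic updates'. The /api/sync endpoint and payload shape are made up.
class SyncQueue {
  constructor({ debounceMs = 300, maxBatch = 20, maxRetries = 3 } = {}) {
    this.debounceMs = debounceMs;
    this.maxBatch = maxBatch;
    this.maxRetries = maxRetries;
    this.queue = [];        // 1. queue of pending requests
    this.inFlight = false;  // only one request outstanding at a time
    this.timer = null;
  }

  // Call this for every atomic update, e.g. checking a checkbox.
  enqueue(update) {
    this.queue.push({ ...update, attempts: 0 });
    clearTimeout(this.timer);
    // 2. debounce, then send if we're not already waiting on a response
    this.timer = setTimeout(() => this.flush(), this.debounceMs);
  }

  async flush() {
    if (this.inFlight || this.queue.length === 0) return;
    // 3. send whatever has accumulated as one batch, up to a limit
    const batch = this.queue.splice(0, this.maxBatch);
    this.inFlight = true;
    try {
      const resp = await fetch("/api/sync", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ updates: batch }),
      });
      // 4. retry server/network failures, but not "the request is bad" (4xx) ones
      if (resp.status >= 500) this.requeue(batch);
    } catch (_networkError) {
      this.requeue(batch);
    } finally {
      this.inFlight = false;
      if (this.queue.length > 0) this.flush(); // drain anything queued meanwhile
    }
  }

  requeue(batch) {
    const retryable = batch
      .map((u) => ({ ...u, attempts: u.attempts + 1 }))
      .filter((u) => u.attempts <= this.maxRetries);
    this.queue.unshift(...retryable);
  }
}
```

(Usage would be something like queue.enqueue({ type: "toggle", id: 42, done: true }) on every checkbox click; a real version would probably also add backoff between retries.)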

What am I missing? What do you use? Is this all ‘plug and play’ with the current version(s) of Phoenix and LiveView?

Thanks in advance! :slight_smile:

If you’re diving into the Elixir world, it’s a great fit for what you need. Instead of getting stuck on how to connect it to the frontend (which is pretty much a solved problem and can be done with basic JS, LiveView, or any JS framework), let’s focus on what you really need: a system to handle requests smartly.

Think of it like this: you want a lineup for tasks, where tasks can be grouped together and retried if needed. Now, there are a few ways to do this in Elixir. You could go for something like Broadway with batching or Oban.Pro with batching. These are like ready-made tools that do the heavy lifting for you.

If you’re feeling adventurous, you can even roll your own using a GenServer, Elixir’s generic server process, to manage the queue. But really, there are many ways to tackle this in Elixir. The ones I mentioned are just some that came to mind. The point is, Elixir’s got your back for this kind of thing. Don’t hesitate to explore a bit and find what suits you best!

These ongoing “JS requests” generally become much simpler and lighter websocket events/messages thanks to the persistent connection that LiveView opens up.

For a rough mental model: stateful LiveViews replace stateless controllers, websocket events via LiveView bindings replace HTTP requests via AJAX, and server-side handle_event callbacks in the LiveView replace traditional stateless API endpoints and their controller actions.

Pretty much, LiveView ships with a client-side JS library that handles communication with the server. It includes a fair number of bindings, e.g. <button phx-click="toggle_checkbox" ...>, that minimize the need for writing JavaScript for common tasks. You can even debounce/throttle at the binding level via phx-debounce="1000" and phx-throttle="1000".
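
For reference, that client-side wiring is usually just the app.js a freshly generated Phoenix project gives you, roughly like this (exact details vary a bit between versions):

```js
// Roughly the app.js a Phoenix generator produces; details vary by version.
import { Socket } from "phoenix";
import { LiveSocket } from "phoenix_live_view";

const csrfToken = document
  .querySelector("meta[name='csrf-token']")
  .getAttribute("content");

const liveSocket = new LiveSocket("/live", Socket, {
  params: { _csrf_token: csrfToken },
});

liveSocket.connect();
```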

The really powerful thing is when you have multiple clients accessing and interacting with the same dashboard/list of tasks and reminders since Phoenix + LiveView makes it fairly simple to broadcast updates across clients.

Sorry – I’ve been using Elixir+Phoenix professionally for about 5 years now, and have some experience with LiveView too, across my professional project and several (many) side projects. (One problem I have is that I’m using several different versions of, e.g., Phoenix and LiveView, and thus have trouble keeping track of which features are available in which version for each of the projects.)

Broadway and Oban seem more like backend/server-side solutions – I want a nice system to handle requests on ‘the frontend’, i.e. in the user’s browser. I want the ‘browser system’ (i.e. a bunch of (‘bundled’) JS code) to, e.g., line up requests as tasks. A good example of the kind of thing I’d like to handle gracefully is a ‘flaky cell phone Internet connection’ preventing requests from being sent/received/whatever. I want the requests, from the ‘user client’ to the ‘(web) server’, to be queued up and, e.g., retried/batched/whatever automatically.

You’re absolutely right about the things you mentioned/outlined working great. I’ve got all kinds of really nice little custom GenServer solutions for this kind of thing – on the server-side of my web apps.

These ongoing “JS requests” generally become much simpler and lighter websocket events/messages thanks to the persistent connection that LiveView opens up.

For a rough mental model: stateful LiveViews replace stateless controllers, websocket events via LiveView bindings replace HTTP requests via AJAX, and server-side handle_event callbacks in the LiveView replace traditional stateless API endpoints and their controller actions.

Yes – I know that (some versions of) LiveView have, if not ALL of what I want, a LOT of it.

Pretty much, LiveView ships with a client-side JS library that handles communication with the server. It includes a fair number of bindings, e.g. <button phx-click="toggle_checkbox" ...>, that minimize the need for writing JavaScript for common tasks. You can even debounce/throttle at the binding level via phx-debounce="1000" and phx-throttle="1000".

The really powerful thing is when you have multiple clients accessing and interacting with the same dashboard/list of tasks and reminders since Phoenix + LiveView makes it fairly simple to broadcast updates across clients.

I still suspect that the scope of the thing I am (apparently poorly) gesturing at is NOT something that LiveView handles (entirely) ‘out of the box’.

Is there a standard way, using the current ‘release’ version of LiveView, to not only debounce and throttle individual requests, but to group ALL requests (or some specific subset) so that, e.g., if one request of a given kind is throttled, all of the other requests of that kind, or in that group, are also throttled (and queued up together)?

I’d also like to extend what I tried to outline in the preceding paragraph so that, if enough requests (or maybe even just two of them) get throttled/queued-up, they can be ‘batched’, i.e. combined into a single request. (I’d be very surprised if LiveView handles this specific feature out of the box because, so I’d think, it’s SO dependent on the specific backend/web API and how it would handle ‘batched requests’.)

Tbh it sounds like you don’t want LiveView to begin with. LV is a tool for server-driven UI. It has means of sending messages as a convenience to support certain UI requirements, but it’s not a messaging system.

If you’re sending messages, going with channels is likely the better idea. Wrap channels in some client-side queue to do any batching, queueing when offline, retries and so on.
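
Something roughly along these lines, as a sketch (the sync:tasks topic and the batch_update event are made-up names for illustration; the Socket/Channel calls are the phoenix.js client):

```js
// Sketch: a tiny queue in front of a Phoenix channel. "sync:tasks" and
// "batch_update" are made-up names; adapt to your own topic/events.
import { Socket } from "phoenix";

const socket = new Socket("/socket", { params: { token: window.userToken } }); // token: your own auth scheme
socket.connect();

const channel = socket.channel("sync:tasks", {});
channel.join();

const pending = [];

export function enqueue(update) {
  pending.push(update);
  flush();
}

function flush() {
  if (pending.length === 0 || !socket.isConnected()) return;
  const batch = pending.splice(0, pending.length);
  channel
    .push("batch_update", { updates: batch }, 5000)
    .receive("ok", () => {})                              // acknowledged by the server
    .receive("error", () => {})                           // "bad request": don't retry
    .receive("timeout", () => pending.unshift(...batch)); // keep for the next flush
}

// phoenix.js reconnects/rejoins automatically; flush whatever queued up while offline.
socket.onOpen(() => flush());
```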

No?

Channels would definitely work, but I expected (the current version of) LiveView to handle, or support, a lot of what I want to do too.

Can you elaborate on that expectation? LiveView is marketed first and foremost as a server-driven UI framework, is it not?

That’s a good question!

Yes, I think of LiveView as an honestly marketed “server-driven UI framework”.

And, thinking about it some more, I think that might be, all on its own, sufficient for what I was looking for. It already abstracts the client-server messaging, and uses a web socket for it too, and that probably solves a bunch of the ‘problems’ I identified in my original post.

It is also trivial to, e.g. queue a Task server-side for messages received from clients.

The one aspect where Channels might make sense (though I wasn’t sure whether LiveView had any new-to-me, possibly relevant features) is ‘maintaining state’ client-side.

It probably wouldn’t be that hard to both maintain some minimal state client-side and use the LiveView messaging features, e.g. when the client is offline.
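
For example (just a sketch, with a made-up hook name, event name, and data attribute), a LiveView JS hook could hold that minimal client-side state and flush it once the socket reconnects:

```js
// Hypothetical hook: keeps unsent updates in client-side state and pushes them
// over LiveView's own channel once the connection comes back.
const OfflineQueue = {
  mounted() {
    this.pending = [];
    this.online = true;
    // e.g. queue a checkbox toggle instead of pushing it directly
    this.el.addEventListener("change", (e) => {
      this.enqueue({ id: this.el.dataset.id, done: e.target.checked });
    });
  },
  disconnected() {
    this.online = false;
  },
  reconnected() {
    this.online = true;
    this.flush();
  },
  enqueue(update) {
    this.pending.push(update);
    if (this.online) this.flush();
  },
  flush() {
    while (this.pending.length > 0) {
      // pushEvent ends up in the LiveView's handle_event/3 on the server
      this.pushEvent("sync_update", this.pending.shift());
    }
  },
};

// Registered via: new LiveSocket("/live", Socket, { hooks: { OfflineQueue } })
export default OfflineQueue;
```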

Remember that LV is using channels to communicate.

Okay? :face_with_monocle: :nerd_face:

Some relevant links I found: