Are there elegant ways to sub/unsub when a user navigates between routes in the same LiveView?

Hi, I have a usage scenario to ask about:

Users navigate between `/product/product_a_id`, `/product/product_b_id`, and so on; these routes are handled by the same LiveView.

I have background jobs that send events (and data) to users who are viewing a specific product detail page; this could be done with PubSub or Registry.

A user who leaves `/product/product_a_id` should no longer receive events sent for product_a, but should receive events for product_b once they navigate there.

Because it's all the same LiveView, I have to add logic in handle_params to detect that the URL changed, then unsubscribe and subscribe to the product_id-related topics.

Are there tools that do this more elegantly?

Thanks!

You basically need a piece of code which filters the PubSub events based on the currently viewed product. I don't think there's a much better solution than what you have (using handle_params). Alternatively, you can subscribe to all the PubSub messages from the background jobs and filter in handle_info based on the identifier of the currently viewed product. This avoids the unsubscribe/subscribe dance but puts more messages in the mailbox.
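A minimal sketch of that filtering approach. Note the names here are assumptions for illustration: the `MyApp.PubSub` server, the `"product_events"` topic, and the `{:product_event, product_id, payload}` message shape are all hypothetical, not from the original post.

```elixir
defmodule MyAppWeb.ProductLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    # One broad subscription for the lifetime of the process;
    # no need to re-subscribe when the route params change.
    if connected?(socket) do
      Phoenix.PubSub.subscribe(MyApp.PubSub, "product_events")
    end

    {:ok, socket}
  end

  def handle_params(%{"product_id" => product_id}, _uri, socket) do
    {:noreply, assign(socket, :product_id, product_id)}
  end

  # Keep only events for the currently viewed product; drop the rest.
  def handle_info({:product_event, product_id, payload}, socket) do
    if product_id == socket.assigns.product_id do
      {:noreply, assign(socket, :latest_event, payload)}
    else
      {:noreply, socket}
    end
  end
end
```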

6 Likes

Unfortunately LiveView lacks the primitives needed to perform generalized incremental computation. For example React has useEffect() and useMemo(), but Phoenix lacks even a full implementation of the older (inferior) lifecycle paradigm: it has mount() and update(), but no unmount().

This makes interop between declarative and imperative code rather… difficult.

Of course you can abandon the incremental approach entirely. The root of the LiveView can subscribe to everything and then meter updates out to individual components.

def handle_info({:product_update, message}, socket) do
  send_update Components.ProductA, id: "product-a", message: message
  send_update Components.ProductB, id: "product-b", message: message
  send_update Components.ProductSidebar, id: "product-sidebar", message: message
  # ...
  {:noreply, socket}
end

But now, and this kind-of hints at the problem, the structure of your application has become entirely rigid and static. Yes you can unmount A or B and the messages will be ignored, but you have lost the ability to dynamically subscribe. The ability to dynamically mount/unmount components and incrementalize computation is the fundamental raison d’etre for React-style engines, and LiveView is unfortunately derelict of its duty in this department.

If you’re clever you might think you could transmit subscriptions from dynamically mounted components back up to the root. Believe me, I thought that too. But putting aside for the moment that component ids in Phoenix don’t compose (an unrelated issue), this is still insufficient. You cannot tell if a component has been unmounted, so there is no way for a component to clean up after itself.

Right about now some of you may be getting some ideas about how you might hook into the code that leads to an unmount, upstream, and unsubscribe there. Like at the router level, or in response to an event. Let me stop you right there: you are giving up not only incrementalization, but declarative programming as a whole. You are writing imperative code and will be cursed by the gods. Many stronger than you have ventured down this path, and all have fallen. Abandon hope, all ye who enter here.

To be clear, there is no fundamental or technical reason why LiveView cannot provide this functionality. Instead I think it’s mostly a cultural difference; that is to say, most of the devs using LiveView do not understand or need any of this, and would probably get by just fine with something like HTMX if they didn’t happen to be writing Elixir. I used to think that it would be a good idea to extend LiveView with more incremental functionality, but in reality the depth of the changes needed would likely serve to annoy those who don’t want, need, or understand them in the first place. Plus it’s not like I’m volunteering to do the work.

It will likely be easier to write a new framework from scratch.

(And in general I do think it would be nice to have more app frameworks in Elixir, which is why I’m rooting for Hologram!)

TLDR: No, not really.

4 Likes

I don’t think OP is using live components. They’re probably navigating within the same LiveView, and because it’s in a live session, only the handle_params callback is called.

Minor aside: using send_update to update child live components is such a code smell to me, I can’t even begin to explain. Please use assigns.

2 Likes

The fact that there is any noticeable distinction between LiveViews and LiveComponents is another unfortunate design wart, but what I wrote still applies if you only have one component (the root LV).

I would love to see you try, but I think there is some misunderstanding here. The code snippet in my post is a PubSub router. It’s not holding state, it’s forwarding messages. Really it’s doing what you suggested above, just extended to components (which cannot receive messages). I do think this is a bad idea, but for much deeper reasons.

1 Like

Thank you both;
Of course, I am not talking about live components; my scenario is just a simple case of LiveView routing/navigation.
Now I am considering offloading this logic into an on_mount hook to reduce code repetition.
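One way that on_mount idea could look, as a sketch: `attach_hook/4` with the `:handle_params` stage lets a single module keep the subscription in sync across every LiveView that mounts it. The `MyApp.PubSub` server, the `"product:..."` topic naming, and the `:product_id` param name are assumptions for illustration.

```elixir
defmodule MyAppWeb.ProductSubscription do
  import Phoenix.LiveView
  import Phoenix.Component, only: [assign: 3]

  # on_mount hook: attach a handle_params hook that reconciles the
  # PubSub subscription with the :product_id route param.
  def on_mount(:default, _params, _session, socket) do
    {:cont, attach_hook(socket, :product_sub, :handle_params, &sync_subscription/3)}
  end

  defp sync_subscription(%{"product_id" => new_id}, _uri, socket) do
    old_id = socket.assigns[:product_id]

    if old_id != new_id do
      if old_id, do: Phoenix.PubSub.unsubscribe(MyApp.PubSub, "product:#{old_id}")
      Phoenix.PubSub.subscribe(MyApp.PubSub, "product:#{new_id}")
    end

    {:cont, assign(socket, :product_id, new_id)}
  end
end
```

It could then be wired up per LiveView with `on_mount MyAppWeb.ProductSubscription`, or for a whole group of routes via `live_session ..., on_mount: MyAppWeb.ProductSubscription`.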

1 Like

You really can almost get away with it, though.

Conjuring a reconciler from the void:

def handle_params(%{"product_id" => new_id}, _uri, socket) do
  if (old_id = socket.assigns[:product_id]) != new_id do
    if old_id, do: unsubscribe(old_id)
    subscribe(new_id)
  end

  {:noreply, assign(socket, :product_id, new_id)}
end

It handles mount and update because handle_params is called by the engine for both and the reconciler is idempotent. It does not handle unmount, but because this is the root LiveView we get lucky and the PubSub system will throw away our subscription when the LiveView dies.
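The idempotent sync generalizes from a single id to whole sets of subscriptions, and the diff itself is pure and framework-free. A sketch (the module and shape are my own, not from the thread):

```elixir
defmodule SubscriptionDiff do
  # Given the topics currently held and the topics the new state wants,
  # compute what to unsubscribe from and what to subscribe to.
  # Applying the result is idempotent: diffing equal sets yields no-ops.
  def diff(current, wanted) do
    current = MapSet.new(current)
    wanted = MapSet.new(wanted)

    %{
      unsubscribe: current |> MapSet.difference(wanted) |> MapSet.to_list(),
      subscribe: wanted |> MapSet.difference(current) |> MapSet.to_list()
    }
  end
end
```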

However, we are cheating quite badly. First of all, this trick only works because the root LV always replaces itself. If you were to replace the root LV with another component in the same process (say by navigation), you would be back to square one because there’s no unmount. Same problem if you tried to use components. The fact that you can get away with this at all is sheer luck, and this solution is a house of cards that will quickly buckle as soon as you add real functionality to your app.

Also, notice that we are conveniently running the state through the router. If the product_id was a part of the app’s actual state (which serious apps must be able to do, contrary to what many think), there would be no single place to observe changes. Next thing you know you’re writing imperative code again, cursed by the gods, etc.

If you’re wondering what a real solution to this problem looks like, here it is:

function Product({productId}) {
  useEffect(() => {
    subscribe(productId);
    return () => unsubscribe(productId);
  }, [productId]);
}

This is not perfect either because React has warts of its own: for one, useEffect is not synchronous (and neither is useLayoutEffect; they lied to you).

But it’s certainly a lot better. The function returns a continuation that the runtime can slice properly into the three lifecycle events (mount, update, unmount) while you get to write code that looks like it’s in the proper order (subscribe first, unsubscribe later). And there’s no messing about with the diff because the runtime has a reconciler built in.

Seconding @krasenyp, and I would also ask: why create a topic for each product rather than a single “product_jobs” (or similar) topic, where the event contains the product id along with whatever metadata about the event, which you can then compare to the state of the current view, whether LV or whatever?
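The broadcast side of that suggestion would look something like the following sketch (topic name and message shape are hypothetical):

```elixir
# A background job broadcasts once to a shared topic; the event carries
# the product id so each subscriber can filter on its own end.
Phoenix.PubSub.broadcast(
  MyApp.PubSub,
  "product_jobs",
  {:product_event, product.id, %{status: :price_updated}}
)
```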

1 Like

Because if you take this to its logical conclusion it won’t end well.

Say you want to add product reviews to your app. Now you’re back where you started: do you want to subscribe to product_events or review_events or review_events[product_id] or some combination thereof?

Applying your strategy again, you could then create a unified product_and_review_events topic and the client can again filter as they wish. Your app gets some more users and the client sure is receiving a lot of events now but it’s probably fine.

And then you decide to add “seller updates” to the product page. Okay, product_and_review_and_update_events it is, I guess. Whatever.

This happens a few more times, so you finally give in and just rename the topic to literally_all_events_from_the_entire_database. Every minuscule facet of your app is globally broadcasted to every user. You are essentially streaming the Postgres WAL to every single client in parallel.

Unfortunately this will not scale.

You can, by the way, shard your setup in some way such that you can stream all events for a particular shard. For example, if you had a Slack-style app it might be totally reasonable to send the client all events for a given workspace_id.

But, hey, how do you choose which workspace_id to subscribe to? What happens if the user switches workspaces?

And so we have invented the same problem again.

Not sure about your other hypothetical scenarios; those are not what OP has described, and I think it would be premature optimization to handle them in advance (YAGNI). But given the parameters OP has actually described, there is certainly the tradeoff (already mentioned) that the product view will receive more irrelevant messages. With a high enough number of products actively generating new events, that could become a problem, but personally I’d make that optimization then and not before: unless you’re Amazon or Temu or whatever, OP’s storefront most likely has a fairly constrained number of “active” products at any given time. My opinion is based on having dealt with more pain from prematurely (and often poorly) optimized implementations than from unperformant ones.

2 Likes

Well, many people have faced this problem, and the solution is to have a piece of code which filters the messages. The LiveView can have a sidecar process which is subscribed to a few topics and filters and routes the messages to the LiveView process.

If more data is needed for some messages, the sidecar can enrich them by calling some internal or external API.
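A sketch of such a sidecar as a GenServer. Everything here beyond the core idea is an assumption: the `MyApp.PubSub` server, the filter-as-function design, and the `enrich/1` placeholder standing in for the internal/external API call.

```elixir
defmodule MyApp.EventSidecar do
  use GenServer

  # Started alongside a LiveView; holds the LV pid and forwards only
  # the PubSub messages that pass the given filter, enriching them first.
  def start_link({lv_pid, topics, filter}) do
    GenServer.start_link(__MODULE__, {lv_pid, topics, filter})
  end

  def init({lv_pid, topics, filter}) do
    Enum.each(topics, &Phoenix.PubSub.subscribe(MyApp.PubSub, &1))
    {:ok, %{lv_pid: lv_pid, filter: filter}}
  end

  def handle_info(message, state) do
    if state.filter.(message) do
      send(state.lv_pid, {:sidecar, enrich(message)})
    end

    {:noreply, state}
  end

  # Placeholder: look up extra data for the event from some API.
  defp enrich(message), do: message
end
```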

1 Like

The problem here is not so much optimization of software performance (though of course that matters) but rather optimization of developer performance. That is: not turning your codebase into a spaghetti disaster. We are all programmers on here and we all know what this feels like, so I don’t need to justify this further.

The question the OP actually asked is a question of resource management by components. How can a component allocate a resource (in this case PubSub, but it could be literally anything) and then release it when the component is unmounted. Ideally declaratively and idiomatically within LiveView.

What I have explained (hopefully clearly, but maybe not idk) is that it is not possible to solve this properly in LiveView because components (including the root LV, which is also a component) do not have an unmount callback. It has been proposed several times but so far no dice.

I want to be very clear: there is nothing wrong with your or @krasenyp 's suggestion to filter events on the receiving end. This is a legitimate technique, and that’s what I was getting at with the Slack bit at the end. There are times when you can and should do this.

However, the filtering trick does not solve the problem in the OP. The inability to manage resources will come back to haunt you either way. There is a reason most frameworks have some sort of unmount. React post-Hooks came up with something even better (the useEffect continuation/slicing trick) which I also described, and which is also described in much more detail in one of those articles I linked.

I will try to give one last example. Say you have a LiveView at /products, like the OP, and it receives events. Now say you have another completely unrelated page in your app, like /settings. You make use of the live navigation feature to allow users to navigate between pages over the socket (as the OP did).

A user:

  • Loads /products/1 and mounts a Product LiveView
  • Subscribes to the "product_updates" PubSub channel
  • Clicks on a link to /settings which mounts the Settings LV in the same process

The Settings LiveView now receives product updates.

Put aside all performance concerns here and just trust your experience. Can you honestly say that you think this sounds like a good idea? Can you honestly say that down the road, when your app has 100 pages, a bunch random events bouncing around unpredictably between all of them won’t cause confusion or bugs?

It is not premature optimization to design systems properly from the start, and this is not an edge case. There is no easy way to paper over this with some hack, because it will always come back. It will come back with components, it will come back with other types of resources, and so on.

The OP asked for an elegant solution and I don’t believe there is one. If they had asked about React (or really any other app framework) I could have just handed them the useEffect() example and that would be it.

If anyone else has an actual solution that I haven’t thought of please post it. Filtering, while entirely valid, does not solve this problem.

1 Like

If the problem you’re referring to is not one of software performance, then I don’t think there is one, unless maybe it’s the hard problem of naming. As @krasenyp already said above, the scenario as stated by OP is a common problem and this solution does in fact work. Whether it’s the “best” solution, or even a good one, is a much, well, harder problem. So if your question “Can you honestly say that you think this sounds like a good idea?” is genuine, then yes, I can. If it’s rhetorical, I can just ask you the inverse, which is why we should avoid rhetorical questions.

I quite frankly don’t think this is a meaningful statement. If “premature optimization” has any meaning at all, it is precisely that “from the start” you don’t know what the “proper” design for the system will be in the future, and so you need to attempt to constrain your design to the problems that are known. Like anything else of significance, this is not necessarily clear or easy, and indeed I do think it falls under a broad interpretation of the problem of naming (i.e. that names directly reflect the design and can only be as “good” as the design itself).

edit

From what you’ve written elsewhere I would guess your real gripe is with the design of LiveView itself in view of its ability (or lack of it) to elegantly handle the problems most web apps eventually face more than it is with the “filtering strategy” for handling PubSub messages. And in that regard I’m actually pretty sympathetic to your view. Just think it’s a bit off topic in this thread where LV is not on trial and really is not very related to this problem, since any system with processes needing to ingest data from some set of PubSub channels would face a similar problem.

1 Like

If you are writing genuinely exploratory code then it is indeed much harder to structure everything correctly the first time.

However, if you are working with established patterns you can absolutely lean on past experiences. Both your own, and those of others. In the first section of the useEffect docs React gives a proper example of how to manage a resource. Clearly this is not an obscure problem but an extremely common one with known solutions.

Please re-read the OP. You will see that it very clearly describes this exact problem.

Again: if there is something I have overlooked, I would very much like to hear about it. But as far as I can tell it is not possible to manage a resource in LiveView at all.

“Allocate the resource and then never free it” is not a solution to this problem. Freeing the resource by terminating the LiveView process is also not a solution to this problem, as I have now demonstrated three separate times.

This does not mean I take any issue with either of you suggesting the filtering trick. It’s relevant! It just doesn’t solve the problem.

I have gripes with everything. LiveView is fine.

In this case, the low-hanging-fruit solution would be to add an unmount callback to LiveViews and LiveComponents. Note that I did not advocate for this in my first reply (which was days ago), but I have since thought about it more and I think clearly an unmount is just fundamentally necessary.

I strongly disagree that this is off-topic.

1 Like

Navigating between LiveViews starts a new process. Any subscriptions a previous LV had are gone as part of the BEAM process model.

The only kind of navigation within the same process is live patch (which is probably the wrong design choice for navigating between different product pages).
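For reference, the distinction shows up directly in the link component; a sketch in HEEx (the route and assign names are hypothetical):

```elixir
# patch: stays in the same LiveView process; only handle_params runs,
# so any PubSub subscriptions the process holds survive.
<.link patch={~p"/product/#{@product_b_id}"}>Product B</.link>

# navigate: tears down the current LiveView and mounts a new one in a
# new process, so per-process subscriptions are dropped automatically.
<.link navigate={~p"/product/#{@product_b_id}"}>Product B</.link>
```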

3 Likes

OK, yeah, maybe that’s too strong given LV is in the OP, but I do think a general discussion about what features LV doesn’t have or should have is at best tangential to actually answering the question, which is about how to use the tools it does have.

An added complication is that I think this is an XY problem because with the filtering strategy you don’t care about the lifecycle at all because the subscription no longer needs to be managed. When the product page live view is running, it’s receiving product events. The handler will sort out what to do with them (if anything). Still not sure about your complaints with that suggestion.

1 Like

Oh, does it? My bad, I thought it remounted in the same process, but you’re probably right. Fortunately that example is not load-bearing because this problem is still exhibited everywhere else. I was just trying to find something that was irrefutable even in the simplest possible case.

And yet:

But really, I disagree that this is a bad idea. Actually doing everything within one LiveView is a great idea. There is no reason to reload all of the parts of the page that are shared between products (discarding all state in the process). The whole point of LiveView is that it’s supposed to be richer than a regular HTML page!

1 Like

You’re right, but the problem is that you think it’s an XY problem and I happen to think the OP’s approach is actually correct.

Again, there’s nothing wrong with doing the filtering thing. It’s just that the question from the OP, “how do I manage resources”, is going to come back anyway. Honestly, I tried pretty hard to explain it, but maybe there is no substitute for suffering through this yourself. I have been through that many times, where I thought others were just over-complicating things until I actually felt enough pain to understand the justification.

Except I am not the only one in this thread pointing out that it doesn’t necessarily involve any “suffering” at all. It works fine, except in the hypothetical scenarios you introduced, and I guess I just don’t find them convincing. Whatever approach you take to designing your topics, there are going to be places where they need to be filtered, unless you decide to have a topic for every single client’s use case. So it’s really just about finding the balance between the number of topics that need subscriptions and the number of events that will be pushed to each topic, based on known use cases. The issue of managing resources may or may not come back, but not because of OP’s problem.

But even this is off topic at this point, because it’s since been pointed out that the real XY problem is that a new LV process should be mounting for each product anyway, so I’ll leave it there.

1 Like