LiveView InfiniteScroll misses events during event storms

Firstly, I have refrained from using the stock implementation as well, but I'd not call it avoiding it. It only became relevant again once I noticed that my mostly independent implementation shows the same errant behaviour as the stock version.

The sentiment of "until they add more features/support…" doesn't sit well with me though. Shouldn't we all be prepared, or even eager, to contribute towards such features/support rather than expect some "them" to do all the work before "we" are prepared to use it?

I have several questions regarding your approach, not the least of which concerns your use of the data-scroll attribute to designate a scrolling container, which suggests that you're ready for, or expecting, any number of scrolling containers to be present on the screen at once. Logically that's not a problem, but I've concluded from looking at UA scroll implementations across multiple devices, including touch devices, that the usability of interfaces declines rapidly when there's more than one scrolling area on the screen at once. You end up with the highly frustrating user experience associated with web pages carrying a boatload of advertisements, all fighting for screen space and grabbing even accidental clicks, which requires the user to take extreme care to navigate around without clicking in the wrong places. Add to that the general disconnect that exists, again most apparent on touch devices, between whole-page scroll with pinch-to-zoom and scrolling within a container. Apart from the technical aspects of implementing scroll event handling for the purposes of infinite scroll, would you lean into or away from multiple scroll containers on a page?

Take this forum's software, specifically how code snippets are presented in scrollable multi-line containers, which makes things quite awkward when you're scrolling through the thread. I don't profess to know what a better solution would be, and it could be argued that it doesn't matter if you click on the code segment by mistake, as that has no consequences. But on more predatory pages you have to avoid clicking on ads designed to sweep you away to dark and dangerous places.

As for the code itself, I don't know whether it's justified in a modern context, but I find the notion of adding and removing an event listener as a matter of course to be a good way of making sure that your long-running page will miss at least some events along the way, just because the user triggered them at the worst possible instant. I don't see my way clear to adopting that as a strategy. I prefer adding the event listener once and only once, and handling turning it on or off (to avoid runaway triggering) explicitly, using something akin to a semaphore or critical section.
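A minimal sketch of that once-only-listener idea: the listener is attached for the lifetime of the page and a flag plays the role of the critical section. `loadMore` is a hypothetical stand-in for whatever pushes the load event to the server.

```javascript
// Sketch (assumed names): attach one scroll listener for the lifetime of the
// page and gate re-entry with a flag instead of removing the listener.
function makeGatedLoader(loadMore) {
  let busy = false; // acts like a simple critical-section flag
  return async function onTrigger() {
    if (busy) return false; // drop triggers that arrive mid-load
    busy = true;
    try {
      await loadMore();     // e.g. push the event to the LiveView server
      return true;
    } finally {
      busy = false;         // re-arm only after the load completes
    }
  };
}
// Usage (hypothetical): window.addEventListener("scroll", makeGatedLoader(fetchNextPage));
```

The listener itself is never removed; only the gate decides whether a given trigger does any work.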

So once the user has scrolled to one end of the content, it's all over: no more infinite scroll in the other direction, or away from that end and back there again? That doesn't sound like the kind of user interface I'd like to experience.


I do not, but actually that would be a very good library of igniter templates!

I very recently had to make one that I think would be very useful to share.

Let me know if you plan on making one, otherwise I’ll do it

Opinions on this then? From the main bindings docs:

Rate limiting events with Debounce and Throttle

All events can be rate-limited on the client by using the phx-debounce and phx-throttle bindings, with the exception of the phx-blur binding, which is fired immediately.

phx-viewport is an event-firing phx- binding; based on the docs, it should trigger debounce and throttle and you shouldn't have to unset it.

Why can’t a “lock” be triggered by debounce/throttle?
What if people just want to debounce or throttle?

phx-debounce and phx-throttle are meant to rate-limit events, and phx-viewport triggers events. Whether it's your preferred method or not, they should work with phx-viewport- but currently seem to do nothing.
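For reference, the throttling behaviour under discussion can be sketched in plain JavaScript. This is an assumed stand-in to illustrate the semantics, not LiveView's internal implementation:

```javascript
// Minimal throttle sketch: allow at most one call per `intervalMs`,
// mirroring what phx-throttle is documented to do. The injectable clock
// (`now`) is only there to make the helper easy to test.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return function (...args) {
    const t = now();
    if (t - last < intervalMs) return false; // dropped: still inside the window
    last = t;
    fn(...args);
    return true; // call went through
  };
}
```

With this in place, a burst of 8-9 triggers inside one interval would collapse into a single server event.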

I don’t need to though. The delay is more than long enough to provide smooth scrolling, and at worst if latency occurs and two events fire it’s not going to break anything. For the use case, 500ms is absolutely fine. When I was using phx-viewport it was firing 8-9 events instantly when the page hit the bottom, and that’s only because 8-9 sets of events was the limit.

It depends on whether the features being requested are genuinely necessary or just conveniences for individual workflows. Are you asking the LiveView team to update something simply because you find it tedious, or because there’s a legitimate issue or missing functionality?

Infinite scroll works well for basic use cases, which seems to be its intended scope. For anything more complex, hooks are available and easy to implement. It’s not core to LiveView, so if they choose to expand it, great, I’d be more inclined to use it.

I don’t know where the other questions are, but I didn’t know multiple scroll containers on a page was the type of issue people take sides on o.O. If you need multiple scrollable elements, are you just going to avoid using them? Or just update the JS to handle them? You mentioned this site having them but in what way does this site seem complicated? You can manage which scroller is active based on focus/blur/cursor position.

Are you designing a predatory page? Why is this even being mentioned? Most predatory pages designed to trick-click you have page wide invisible overlay links.

Context? You seem to be talking about users scrolling through a list of items and not getting items that were added whilst they were scrolling? If so, that's a pubsub thing, or an exceptionally marginal edge case that's hardly worth addressing. In what example would you need an ACID-type infinite scroll? Are users just going to sit on the page forever and never reload? I don't understand the context where you would have users scrolling through a list and just sitting in it while other users create non-real-time content. If you want to prepend real-time updates to something like a timeline while a user is scrolling, you would use handle_info + pubsub to insert it.

You make an awful lot of assumptions in your responses about how an entire system works based on a single code block lol.

There are never contexts in which I would want or need an infinite scroll that loads content created while the user is scrolling. I remove events because once you have no more results to load, it stops triggering server requests. Why would I keep the event active once all content is loaded on the page?

I mainly use Scylla as well, so events must be removed or you can crash the page by sending an invalid paging_state.

You sound like whatever system you are designing is strictly ACIDic and all content must be 100% correct at all times, but that's not the same for everyone. "Missing" content due to edge/race conditions is acceptable when using Eventually Consistent models.

I should also add, I have no issues with infinite scroll, whilst you apparently do. I decided to share my thoughts on this topic with example code, and you ended by complaining that my example doesn't fit your personal use case of scrolling up or loading content that has yet to be created.

Try not to end by making a bunch of assumptions followed by a condescending "you suck" message when someone gives a basic example of how they do things and you decide to extrapolate it into something it's not.

There are two things being conflated here. There is the debouncing of the “scroll event” on the client, and then debouncing the “load event” sent to the server. The first part you quoted, which was not directed at you, is referring to the former, though this of course in turn affects the latter downstream.

When I say you need to remove the phx-viewport-top/bottom events, I mean when you reach the end of the virtualized list: when there are no more pages to load. At this point, debounce or not, you will repeatedly fire load events forever unless you pause the hook, because the load events won't load anything, so you remain at the bottom of the page.

If you are referring to the bottom of the “virtualized content” (i.e. the very last page), that’s why you are supposed to remove the attribute once there are no more pages.

If you mean that multiple events are sent in a row during normal scrolling, the internal LiveView hook should be preventing that with the lock, so it’s not clear to me how that could happen. But it’s definitely a bug if it does!

It seems obvious to me that locking out the events until the next page is loaded is more correct than using an arbitrary delay, and is trivial to do (as I showed), but I am not your boss and of course you can do as you please!

You could still debounce the event, the two are not mutually exclusive.

Sounds like at the very least the docs should be updated.

I know there's a difference between the server and client side. The issue is that the docs state that debounce should apply to all client-side events, and phx-viewport- is a client-side event trigger that is not affected by debounce or throttle. You can't throttle events that you should be able to, and as a result, multiple events get sent to the server all at once.

I've set phx-viewport- bindings up in the past to trigger events, and it always takes hours of screwing around with the page, and it's never smooth. Custom JS takes like 5 minutes and gives better results imo.

Why should the docs be updated rather than the debounce/throttle apply to the events though?

I get that you don't like debounce as a rate-limiting method, but it is a viable and more widely used rate-limiting method for infinite scroll than locking, because it creates a smoother user experience that doesn't make it look like the page has frozen.

Neither method is "correct", but delays/debounce are more widely used because in most cases they are all that's needed.

You should not have to throttle the viewport events to prevent this specific problem because they should only send one event at a time before the new elements enter the DOM. When that happens, the scroll is no longer in the “load more pages” zone and there should not be another page load. If that is not happening, there is a bug somewhere.

There is nothing wrong with debouncing, the fact that it isn’t needed here is entirely an implementation detail of how these infinite scroll hooks work. I have no vendetta about debouncing in general, this is not some sort of crusade lol, I did not even notice it was unnecessary until this morning!

When I mention locking I am referring to locking out sending another event until the page load is complete, not locking the scroll position. In your implementation you also lock out the event, you just did it with a delay instead of using the callback which I think is a slightly worse approach.
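The two unlock strategies being compared here could be sketched side by side. `pushLoad` is a hypothetical stand-in for something like LiveView's pushEvent with a reply callback; the names are assumptions:

```javascript
// Sketch contrasting delay-based vs callback-based re-arming of the lock.
function makeLoader(pushLoad, { delayMs = null } = {}) {
  let locked = false;
  return function trigger() {
    if (locked) return false; // another load is already in flight
    locked = true;
    if (delayMs !== null) {
      // delay-based: re-arm after a fixed time, whether or not the reply landed
      setTimeout(() => { locked = false; }, delayMs);
      pushLoad(() => {});
    } else {
      // callback-based: re-arm exactly when the server reply arrives
      pushLoad(() => { locked = false; });
    }
    return true;
  };
}
```

The callback variant unlocks no earlier and no later than the page load itself, which is the distinction drawn above.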

Incidentally I think the LiveView hook does lock the scroll position, which is one of the reasons I don’t use it either. As you say, it’s very annoying.

The phrasing “at the very least” was designed to imply that fixing the behavior would be preferable, as you say.

Nope, but I've come across badly behaving scroll containers trying to prevent people from scrolling down past them, by capturing events that landed inside their boundaries but that the user meant as page scroll events, and treating those as "their events". In other words, the page itself might not be predatory, but the advertisers and their developers are, and they're using scrolling as one of the ways to defeat the page owners' defence mechanisms. It's a side issue at most, but one I was curious to hear your thoughts about, because the structure of your suggested code suggested that you intentionally catered for a multitude of scroll areas, while I have it marked as something to avoid exposing my users to.

The context is simply underlying data that is too large for a user's browser, or the session on the server, to load into memory all at once. I.e. the use case for infinite scroll as opposed to lazy/delayed loading. It's not about content that gets added to what a user has loaded at all. Like you said, there's PubSub for that. It's about scrolling back and forth at will through data that can't all fit in memory at the same time.

You could get to an end of the data, especially if you're talking about linear data like a list - either end would be that - while in hierarchies every branch would eventually lead to its own end at leaf level. When the user has navigated to such an end it does not mean they are now done navigating through the data, merely that scrolling further along that branch would yield no more data. But they can scroll back a little and laterally a little and find many other branches that carry on and on for days.

The point being that they don't end up with all the parts of the tree they've visited staying in memory. Not in the browser's and not in the server's either. As they scroll along one direction and load more data in that direction to scroll into, they shed data on the opposite end to make room in memory for the new data being loaded. The idea is to make the amount of memory the user is willing and able to dedicate to buffering something they control, and to make the implementation take that setting into account when deciding what to load and what to discard. Similar choices exist on the server side, which may also need to limit the memory each active client occupies, and likewise that has to be controlled by runtime parameters.
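The shed-on-the-opposite-end behaviour could be sketched as a simple bounded buffer. The function shape and names here are my own assumptions, not code from the thread:

```javascript
// Sliding-window buffer sketch: keep at most `capacity` items in memory;
// loading in one direction sheds items from the opposite end.
function slideWindow(buffer, incoming, direction, capacity) {
  const merged = direction === "forward"
    ? [...buffer, ...incoming]   // append new items at the tail
    : [...incoming, ...buffer];  // prepend when scrolling backwards
  if (merged.length <= capacity) return merged;
  return direction === "forward"
    ? merged.slice(merged.length - capacity) // shed from the head
    : merged.slice(0, capacity);             // shed from the tail
}
```

Here `capacity` would be the user-tunable memory budget described above; the same idea applies per client on the server side.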

Isn’t that more aligned with basic lazy loading of content (where the point where all the data is loaded could legitimately be reached) rather than with infinite scrolling (where such an all-loaded condition is never reached)?

Once again it seems like a conflation between infinite scroll and lazy/delayed/background loading, which starts when the page first loads but postpones the rest until the user scrolls far enough; the result is the same in that once the user has scrolled far enough, all the data has been loaded. In my book that's not the infinite scroll use case at all. With infinite scroll the scroll handling must stay active once the user has reached the end, because for the user to scroll back the other way, and maybe off to the sides, it needs to reload data that has either been shed before or had never been loaded in other directions. It's even possible to consider the content to be wrapping around, making it so there's no discernible end to the data, only where in this infinite world the users find themselves at any given point in time. That type of infinity is, by my reckoning, the domain of infinite scrolling, and the solution I am chasing renders that for the user with a constant yet adjustable amount of resources being used on the server and on the client, and algorithm complexity and database access times as close to O(1) as the O(log n) or better index access mechanisms allow me to get. Having a gigantic database with all the world's data in it would be pointless if its size got in the way of effortlessly sailing through it.

That's fairly far off base, but I will say this. (I think you made reference to background loading somewhere and it didn't sound like you were in favour of it. Couldn't find it on my phone interface again now.) The ideal I am working towards, a little better each day, is scrolling as seamless as you wish and can afford (the resources for). Scrolling happens natively in HTML, but the scroll events we're talking about fire after the browser has completed scrolling the available content. The idea is simply to a) load more data on all sides (I scroll in x and y, which translates into 9 distinct scroll directions) than there is room to show inside the scroll container (overflow on all sides), and b) as the user scrolls into any one of those areas, the browser already has the right content to display, so that automatically happens straight away. Then c), I suppose in response to the user just having eaten into the previously unseen buffered content, replenish the buffering in that direction (or directions) in the background, so that if browser, network and server resources allow, the browser would once again have the next bit of content the user might choose to scroll into already in memory before it needs to be shown. It doesn't have to be guaranteed to always succeed; in fact there are cases where the user might make big jumps to bookmarks in the content that have nothing in common with what was loaded before the jump, meaning the next update will be a complete reload anyway. It may also be that the networks and servers are overloaded, and the user may end up becoming aware of waiting for the content they are scrolling around in to arrive from the server.
If they have it, they can volunteer more memory (increase the buffer sizes) until, for their server, latency and bandwidth, their experience becomes smooth enough for their tastes; but if it doesn't bother them, or they don't tend to scroll around all that much, or don't have more resources to apply, they can accept the buffering as is and nothing will break as a result. It will just be a little laggy and slow. I'm working off the premise that users would engage with the entirety of this vast database through a single viewport application which may, under ideal circumstances, be updating the same original LiveView mount for hours if not days on end. Actual page reloads, if they are needed at all, should happen seamlessly as well. That's the ideal I am working towards, and whether it's realistic or not at this stage is not as relevant as the direction it provides, which ultimately determines what I can and cannot accept in terms of flaws in the design or implementation of the tools I use along the way.
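If I read the "9 distinct scroll directions" correctly as the combinations of horizontal and vertical movement (plus no movement), classifying a 2D scroll delta is a small pure function. The direction names are my own invention:

```javascript
// Sketch: classify a 2D scroll delta into one of 9 cases
// (8 compass directions plus "none"). Positive dy is downward, per DOM convention.
function scrollDirection(dx, dy) {
  const h = dx > 0 ? "east" : dx < 0 ? "west" : "";
  const v = dy > 0 ? "south" : dy < 0 ? "north" : "";
  return (v + (v && h ? "-" : "") + h) || "none";
}
```

The replenishing step in c) would then only need to top up the buffer zones lying in (and adjacent to) the reported direction.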

Agreed, they should just let you debounce them so there’s no need to throttle.

I forgot to add earlier: the bindings docs you mentioned add 200vh of padding above and below to get the events to work. So you still need to calibrate the DOM based on content size to get the infinite scroll to work effectively with LiveView's setup. With a pure hook, you can set limits wherever and however you want.

I know what you mean by locking. I said page freeze because if you lock the event, no new events fire, which means the page effectively looks frozen to the user if there's a delay in retrieving results. Having a time-based trigger allows you to calibrate the smoothness of the scroll, because even if results don't load immediately, you can add a loading animation and start loading the next set of results. A lot of sites that show image galleries have this kind of scroll. While the images are loading, you can keep scrolling and scrolling. It's up to the user to stop and let the images catch up, rather than making the user pause while everything loads each time.

Imo, locking or pausing events makes the scroll clunky, whereas debounce/delays are smooth.

You're saying I'm conflating terms whilst you are making up your own definitions for them.

Infinite scroll isn’t about endless data or loops, it’s a method of “lazy/delayed/background” loading content triggered by scrolling instead of pagination buttons.

This thread uses infinite scroll. You can scroll down and eventually reach the end, at which point no more content loads.

You have a very specific and strict view of infinite scroll, treating the "infinite" part literally as truly unbounded data that must stay active even after reaching the end. That's fine if your application needs it, but most applications don't. Most infinite scrolls have a beginning and an end, and aren't updated in between.

Whether the content is infinite, near-infinite, or fixed-length, the term covers that loading behavior, not just a strictly endless dataset for your specific use case. Whether you agree with that or not is unfortunately irrelevant as well, because the term is commonly known to describe the action of lazy loading content on scroll, so your definition and description of the term is just wrong.

Sure, you may stick with the one-dimensional simplification of reality in which a bunch of programmers get to agree to reassign a term such as infinite a lesser meaning because lazy loading or background loading doesn’t sound impressive enough and call my use of the term simply wrong all day long. Doesn’t make you right though. If it’s true that infinite scroll has been taken up in the common language contract to mean lazy loading of a finite dataset then what term should apply to what I describe as infinite scroll? At most you borrowed the term to boost the image of limited vision and technology but now that the use case and capability is upon us the honourable thing to do is to stop using the term infinite scroll to name a lesser activity and revert back to lazy/delayed/background load.

This is an interesting point, essentially pipelining the requests. It’s not something I considered.

However, if you were going to do this I think you would want to work off of scroll position/distance rather than time. By using a timeout this way you will just keep loading pages until you get one back, even if the user is scrolling slowly. With high latency I would think you would be at risk of unloading the current (visible) page when the new pages load in if the user has hardly scrolled.
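Going by scroll position/distance rather than time could be as simple as checking how much buffered content remains below the viewport. The threshold is an assumed tuning value:

```javascript
// Sketch: trigger the next page load from remaining scroll distance
// rather than a timer.
function shouldLoadNext(scrollTop, viewportHeight, contentHeight, thresholdPx = 600) {
  const remaining = contentHeight - (scrollTop + viewportHeight);
  return remaining < thresholdPx; // load when the buffered content runs low
}
```

A slow scroller then never over-requests, because the check is driven by how close they actually are to the unloaded region, not by how long a request has been outstanding.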

If the list were properly virtualized, with a proper height set for each “unloaded” page, you could simply request pages as they near the scroll position with no regard for locking and the result would always be correct. Note that this is exactly what your example (an image carousel) does: each image has a placeholder, and the images which should be in view (or near in view) are loaded.

I agree this is the holy grail, but it does not come without difficulty: we don’t always know the true height of our yet-to-be-loaded content, and we don’t always know how many pages exist. But perhaps you could get good results by estimating these things. I’ll have to try that in the future.
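A rough sketch of the estimation idea (all names are assumptions): measured pages keep their real height, unmeasured ones use the running average, and we ask which pages overlap a given viewport range:

```javascript
// Which pages fall within [viewTop, viewBottom)? `measured` maps page index
// to its known rendered height; unknown pages use the average as an estimate.
function pagesInRange(measured, pageCount, viewTop, viewBottom) {
  const known = Object.values(measured);
  const avg = known.length
    ? known.reduce((a, b) => a + b, 0) / known.length
    : 500; // fallback guess when nothing is measured yet
  const wanted = [];
  let top = 0;
  for (let i = 0; i < pageCount; i++) {
    const h = measured[i] ?? avg;
    if (top < viewBottom && top + h > viewTop) wanted.push(i);
    top += h;
  }
  return wanted;
}
```

Requesting exactly the pages this returns (plus perhaps one on each side) is the "load what nears the scroll position" behaviour, with no locking needed.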

You can also tune the page size, perhaps even dynamically.

I brought it up a few times as one of the keys to my approach, to be fair.

We're in agreement about scroll position rather than time being the primary trigger, but ideally time does have a role to play in the form of an idle timeout. We don't want to trigger an update of the loaded data on every scroll, only when the user gets at or close to the edge of what's been loaded. If they scroll some distance, but not enough to trigger an update to the loaded set, and then stop there (presumably consuming or interacting with the visible data) for a considerable time (subject to tuning), doing either nothing or things other than scrolling around, it might be a good idea to use that time to do a background reload to ensure the optimal buffering around the currently visible data. If done correctly it would be transparent from the user's perspective but ensure that when they eventually start scrolling around again it would be as smooth as possible.

If I assume height here refers to the rendered size rather than the number of elements on the page, I’d have to assume you’re right about that even though I don’t understand where locking would come into the equation even then.

If however your height reference is to the number of elements on the page in the scrolling dimension, my confusion about what you're aiming to achieve involving locks becomes pertinent, and I'd love to come to understand your reasoning about that.

Once again, if you refer to element pixel heights my opinion is unimportant, but if by true height you refer to the total element count in the dimension being virtualised, then I agree, but also hold the opinion that it might not be such a big deal. The primary implication I can foresee is that the scroll bar is "wrong", in that its size does not accurately reflect the portion of the data being shown and its position does not reflect where in the overall context of that dimension the user is scrolled to. In the bigger picture, though, I would contend that there are better ways to present the user with accurate context about the content being displayed than what could be reflected in a scrollbar, which on some platforms is only shown while actively scrolling anyway. Hierarchies especially have many effective dimensions in them, which do not translate well into the limited scope of scroll bars. Keeping ancestor nodes visible in the data is of much more direct value to the user than scroll position.

In that regard, the scroll bar size and position only need to be an indication of what scrolling options are available to the user. If it's somewhere in the middle, both scroll directions are available. If it's at either end, no more scrolling is possible in that direction. Simple as that. If the user clicks on the scrollbar handle itself, they can drag the scroll position, and on every load it will reset to somewhere in the middle until it doesn't, meaning they have reached the end in that direction. If the user clicks on the area next to the scroll bar handle, it scrolls in that direction as far as possible, and if even more scrolling can happen the scrollbar will once again reset to somewhere in the middle. Scrolling to the very end is achieved firstly by repeatedly paging until the end is reached. It would be possible, though it would require catching and checking modifier keys, to offer a way or command that scrolls to either end in a single action.

Yes, you can, and as mentioned in my descriptions, it has to be dynamic. Specifically, controlled by whoever's resources are impacted by the choices - the users' in most cases, but also the server admins'.

I’m warming up to using IntersectionObserver which is a new tool to me. It is certainly less prone to generating (spurious) intermediate events like scroll does.

However, as it stands now, I believe going down that path may well result in a (modified) endless loop. Ultimately, in response to a threshold being crossed in the intersection you've set up the observer for, you're going to end up reloading data, which will reset the intersection, which should be seen as another crossing of the threshold by the observer and trigger the callback again. I'm hopeful that it can be avoided by considering the direction of the crossing, so that scrolling "outwards" may be picked up as a threshold crossing in one direction, while reloading data and resetting the scroll context (and therefore the intersection) would appear like "inward" scrolling and therefore a crossing of the threshold in the other direction, which would either not invoke the callback or could be used in the callback to recognise that reloading is in progress and that it is not to be called again.
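The direction-of-crossing idea can be isolated into a pure helper. In this assumed design, an IntersectionObserver callback would feed successive `intersectionRatio` values in, and only an "outward" crossing (the sentinel entering view as the user scrolls toward the edge) triggers a load, while the "inward" crossing caused by the reload resetting the intersection is ignored:

```javascript
// Sketch: classify a threshold crossing by direction. `prevRatio` and `ratio`
// are successive intersection ratios for the same sentinel element.
function crossing(prevRatio, ratio, threshold) {
  if (prevRatio < threshold && ratio >= threshold) return "outward"; // sentinel came into view
  if (prevRatio >= threshold && ratio < threshold) return "inward";  // reset after reload
  return null; // no threshold crossing
}
```

The observer callback would then keep the last ratio per sentinel and only push a load event on `"outward"`, which should break the modified endless loop described above.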

From the description in the infinite scroll Wikipedia entry, we're both technically wrong. I want nothing to do with the type of infinite scroll Aza Raskin invented, for the exact reason he expressed regret at the invention. But what he invented can in no way be mistaken for a commonly used alternative name for lazy or background loading of finite content, as you claimed.

I think a minor adjustment of Aza's definition would counteract his concerns about the concept getting abused to induce scroll addiction and pin users to a page. Such an adjustment would call for strict and repeatable ordering of content, excluding any form of randomisation or repetition to artificially create the illusion of infinity. Users should see the exact same virtual whole however they scroll around it.

We live in a world where our movement around the globe, wrapping around in any direction we go, is exactly the same as (duly constrained) infinite scrolling. Yet this infinite-scroll world with full wrap-around leaves nobody confused about what they call home and where they'd expect to get to when they go in any direction. When the infinite scroll we implement in UX mimics that, it loses all its bad side-effects and becomes something anyone can identify with, except perhaps flat-earthers. The key is to ensure that the content itself provides the user with the context of "where on earth" they're at within the overall content.

I think what you would want to do is “lock” each individual pageload, that is only send one request per page, but allow them to be pipelined by scroll distance. This requires buffering the empty space in case the user keeps scrolling, and you would want the estimated height of the (not yet loaded) page to be somewhat accurate to prevent jumping around too much. This is much more complicated than the basic one-page-at-a-time approach and I’m not sure it’s worth the difficulty for most apps. If your latency is reasonable then loading one page at a time with large enough pages should provide roughly the same experience.
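The per-page lock with pipelining could be sketched as follows; `requestPage` is a hypothetical stand-in for whatever sends the page request over the socket:

```javascript
// Sketch: each page index is requested at most once (the per-page "lock"),
// but several different pages may be in flight at the same time (pipelining).
function makePipeline(requestPage) {
  const requested = new Set();
  return function ensure(pageIndex) {
    if (requested.has(pageIndex)) return false; // this page is locked
    requested.add(pageIndex);
    requestPage(pageIndex); // fire and forget; distinct pages pipeline freely
    return true;
  };
}
```

Combined with placeholder heights for not-yet-loaded pages, the scroll handler can call `ensure` for every page nearing the viewport without worrying about duplicates.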

Another thing to note is that if you’re using keyset pagination (which is ideal) then you can’t load pages out of order on the backend, so while you can pipeline additional pages you have to be very careful to process the requests in order so that you can retrieve the next starting key. The LiveView socket approach should make that much easier, though, since I think it would guarantee total order. If you were making stateless HTTP requests this approach would be more difficult.
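The in-order constraint of keyset pagination can be made concrete with a small sketch: each request's starting key comes from the last row of the previous reply, so replies must be consumed strictly in sequence. `fetchAfter` is an assumed data-access function, not a real API:

```javascript
// Keyset-pagination sketch: load up to `pageCount` pages of `pageSize` rows,
// threading the cursor (last seen key) from each page into the next request.
function loadPages(fetchAfter, pageSize, pageCount) {
  const pages = [];
  let cursor = null; // null = start from the beginning
  for (let i = 0; i < pageCount; i++) {
    const rows = fetchAfter(cursor, pageSize);
    if (rows.length === 0) break;      // no more data
    pages.push(rows);
    cursor = rows[rows.length - 1].id; // next page starts after this key
  }
  return pages;
}
```

Because the cursor for page N+1 only exists once page N has been processed, out-of-order replies cannot be used, which is why the socket's ordering guarantee matters here.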

Like I said you would actually want to “lock” each individual pageload, so what I wrote before was not strictly correct. But you could still pipeline the next pageload if the user scrolls further. Note that we also don’t need any retries, again because of the socket connection.

I meant the rendered height of the loaded page or an estimate thereof, but the “total height” of all pages would also be relevant, if you wanted the scrollbar to never jump at all. Not all applications have a knowable total height, though. For example, a human could probably spend their entire life scrolling Twitter and never hit the end, so at that point the height effectively is infinite.

At the risk of repeating myself, I'll make another attempt at explaining what I consider to be the appropriate approach. It becomes more of a pressing issue firstly when the scrolling happens in multiple dimensions, and secondly even more so when at least one of those dimensions is non-linear, i.e. expanding like a tree does. I'm referring, of course, to the fact that the notion of pages and pagination is entirely open to interpretation and nothing like a printed book. Even though I too call it pagination at the code level for historical reasons, it's really just because I'm too lazy or stupid to have thought of a more appropriate term. Point is, what we're talking about isn't a sequence of pages but a moving / sliding window.

This has many implications.

First of all, the notion of skipping over pages is meaningless, except perhaps for skipping to either end or extreme. Since the order of the content is defined but subjective rather than objective, any navigation, such as scrolling about and zooming, consists of relative operations. Relative to something that is being shown to the user, or known to the user through other means such as bookmarks or breadcrumbs. That's also where the hierarchical structure that complicates matters also saves the day, by constantly being visible, or at least accessible, on the screen for users to maintain context of where in the content they find themselves at any point in time. Presenting a sizeable or infinite flat list of items such as blog posts or the tweets you mentioned would in my view have disastrous effects on usability.

Which boils down to why it makes little to no sense to consider locks or a pipeline of pages. Using the tools at our disposal, all the requests the client could make of the server for additional data will be sent and replied to in sequence, regardless of latency. Sure, if we were in a stateless server setting and had many servers worldwide that could, through the magic of the internet, get to respond to client requests out of sequence, we might have had to add something to catch when that happens and deal with it. Having a well-defined session-serving process on a single instance of however many clusters of servers serve your users, it's fairly safe to assume that sequence to be reliable, and if it breaks, the management system around your servers will reset what it needs to recover to a known state. So I say we should be good without our own locking and sequencing code in play.

That leaves the question of separate pages. My contention is that, especially since there’s nothing physical about page boundaries, it imposes an unnatural and unwanted burden on everything to try to enforce or even honour page boundaries. Our moving or scrolling windows fundamentally overlap each other somewhere between 0% and 100%. If a window is requested that has some overlap with the previous one - regardless of whether that previous window has reached the client yet, or whether we even know - the data is dispatched to the client as a delta / patch update from the previous window’s data. If there’s no overlap, it’s dispatched as a complete update replacing everything there used to be.
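To make the patch-vs-replace decision concrete, here is a minimal sketch of how it might look, with window contents represented as Sets of item keys. All names here are illustrative, not taken from any actual implementation:

```javascript
// Compute which keys were added and which were removed between the
// previous window and the next one.
function diffWindows(prevKeys, nextKeys) {
  const adds = [...nextKeys].filter((k) => !prevKeys.has(k));
  const removes = [...prevKeys].filter((k) => !nextKeys.has(k));
  return { adds, removes };
}

// Decide how to dispatch the next window to the client: as a delta /
// patch when it overlaps the previous window, or as a complete
// replacement when it shares no keys with it.
function buildUpdate(prevKeys, nextKeys) {
  const overlaps = [...nextKeys].some((k) => prevKeys.has(k));
  if (!overlaps) {
    // No shared keys: replace everything the client currently holds.
    return { type: "replace", keys: [...nextKeys] };
  }
  // Some overlap: send only the difference as a patch.
  return { type: "patch", ...diffWindows(prevKeys, nextKeys) };
}
```

The point is that the server never reasons about page numbers at all - only about the key sets of two consecutive windows.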

Realistically, the opportunities for clients to generate requests for disjoint data windows or “pages” before the previous request has delivered would be few and far between. It could happen under extreme conditions for sure, but it wouldn’t be the norm, and in all the cases I can imagine, the user’s expectation of seamless or smooth scrolling to the disjoint dataset would no longer exist - they’d expect to wait a bit for the entirely different dataset to load. That situation cannot realistically arise from scrolling too fast, simply because requesting new data as a result of scrolling requires data to be present in the buffers, meaning that the client code would need to wait for the data it previously requested before it would know how to ask for the next update. If you want to see that as the locking mechanism you believe is required, be my guest, but it’s not really locking - just the natural order of things.

I’m fully aware of the implication that the user cannot scroll to an arbitrarily chosen position in the overall sequence with a single click or action using the scroll bar and its sizes, real or estimated. I contend that if that is required, those “positions” or places the user might want or need to scroll to should be explicitly represented as part of the content itself, not approximated by a scroll bar. To put this in the infinitely scrolling tweet context: the tweets would be listed by date, and at the bottom of the list headings such as yesterday, last week, May 2025, April 2025, Q1 2025, 2024, … would be shown in case the user wants to “jump” down the virtual list rather than scroll through everything. Not that it works like that today on such platforms - it would be against their interests and far too accommodating 😏 - but that’s what I mean by putting the intermediate scroll stops inside the content rather than on the scroll bar.

If I haven’t said it before, let me say it now: the pagination (for lack of a better term) I am pursuing at the moment is exactly what I believe you mean by keyset pagination. I literally compute the delta between two consecutive “page” requests as the difference between the keys they contain, represented as two MapSets.
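For readers unfamiliar with the term, the essence of keyset pagination is that the client asks for “the rows after key K” rather than “rows at offset N”. A toy sketch in JavaScript (standing in for whatever query layer is actually used - the function and field names are hypothetical):

```javascript
// Keyset-style window fetch: instead of an OFFSET, the caller supplies
// the last key it already holds, and gets the next `limit` rows after it.
// `rows` is assumed to be sorted ascending by `key`.
function nextWindow(rows, afterKey, limit) {
  return rows.filter((r) => r.key > afterKey).slice(0, limit);
}
```

Unlike offset pagination, this stays stable when rows are inserted or deleted ahead of the window, which is exactly what makes the delta between two consecutive windows well defined.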

Scratch that. Once I had a better look at IntersectionObserver, it proved to be shaped far too specifically for its original intended use-case, which is to keep track of how much of which ads has been visible to the user for how long. Making it useful for scrolling would require inventing and setting up a whole ecosystem around it specific to scrolling, so it’s far more appropriate to just use the scroll events directly and deal differently with the discomfort of the user agent natively generating a flurry of scroll events for every actual scroll action initiated by the user.

Agreed on the observer API, probably the result of a little too much Google influence on the standards process…

I can see how it would be useful for lazy loading images and such, though that is now supported natively with loading="lazy" anyway.

Presumably the scroll events pre-date modern smooth scrolling and come from a time when mouse wheels scrolled by discrete ticks. More modern APIs like touch or drag/drop have accompanying start/stop events to make things easier.

But if you are e.g. trying to keep something in sync with the scroll position, you do need a large number of events. Maybe nowadays using requestAnimationFrame and then polling the target’s scroll position is a better approach though. I’m not sure.
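One common middle ground is to keep listening to the raw scroll events but coalesce them to at most one update per animation frame. A minimal sketch - the scheduler is injectable (it would be `requestAnimationFrame` in the browser) purely so the logic can be exercised outside a DOM; all names are illustrative:

```javascript
// Coalesce a flurry of scroll events into at most one onFrame call per
// scheduled frame, always using the most recent position.
function makeScrollSync(onFrame, schedule) {
  let pending = false;
  let lastPos = 0;
  return function handleScroll(pos) {
    lastPos = pos;          // remember only the latest position
    if (pending) return;    // a frame callback is already queued
    pending = true;
    schedule(() => {
      pending = false;
      onFrame(lastPos);     // one callback per frame, latest position wins
    });
  };
}
```

In a browser this would be wired up roughly as `const h = makeScrollSync(updateUI, requestAnimationFrame); el.addEventListener("scroll", () => h(el.scrollTop));`, so however many scroll events the user agent fires, the sync work runs at most once per frame.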