I just wanted to clarify that my view has changed over time based on my experiences. The above represents my current viewpoint. Maybe in the future it will change again!
Your use-case is unusual and I understand you've since spent a lot of time working towards a solution that works for your case. I am not commenting on that (though you of course should, as it may be helpful for others).
I will, for sure. I even hold out hope that in some way it might end up being helpful to you, but that would be for you to gauge, not me. I do hope that we'll see you around the project I'm creating: a working space where different use-cases, vantage points, approaches, and feedback about what ended up working well and what made life hell can come together as we each build our own experimental implementations (and/or help each other build them) on the data framework I've spent some time putting together. Objective metrics to compare, as opposed to the subjective stuff we can never seem to agree on, will eventually help us define the smallest possible set of use cases and, for each, a demonstrably best approach with regard to practical LiveView. You in?
I am very avoidant of closed platforms, but if you post examples publicly I will gladly comment on this thread about them
The work I have discussed on here will be public within a couple of months, so I will be able to provide some real examples of what I tried and couldn't make work, if people are interested in that.
Would a public project on GitHub be good enough for you then? I've gone and created a completely fictional project which, while having absolutely nothing in common with my real project (which I cannot speak about yet either), contains enough use-case overlap to be representative of my needs. I suspect that the bulk of the conversations around it will be easier to have on GitHub itself than on this forum, though.
I post on this forum because it is open and independent, and I appreciate the community here. I choose not to use closed social platforms, GH included, though of course many others do and that's their choice.
If what you publish is publicly available I will gladly read it, though
You've actually caught my code in an awkward spot, with a half-implemented change from using a div as the scroll container to having the whole page as the scroll container with padding for the header, because of trouble I've seen with touch devices handling scrolling in the container in a wildly different manner. So what I've done is create the project and make some notes in the README.md in the meantime to start the conversation, and I will try to get the code stable and uploaded over the weekend.
Pardon my ignorance: which alternative collaborative project platform would be open and independent enough? I don't get how you'd want to have several people contributing to one codebase purely via this forum. From what I could see, all the development of the platform happens via GH, so I would assume most if not all of the folks here also have accounts on it. What am I missing? What factors into your decision about what constitutes a closed vs open social platform? Is this related to efforts to either comply with or defy rules and regulations imposed by an employer, government or laws?

I respect your choices; they're yours to deal with. I have my own to deal with as well, so normally we'd both stay out of each other's choices. When someone's choices have an impact on another's life, though, that's when I believe the affected party has cause to either gain a better understanding of that personal choice or take other measures to reduce the impact on them. I'd like you to be able to get actively involved here, but you choose to avoid GH, so either I learn why and adapt in a way that accommodates you, or I make peace with excluding you from proceedings.
It was not my intention to derail the thread with a discussion about platforms, I was just letting you know that the extent of my "contribution" would be to discuss LiveView/architecture on this forum (as we have been doing).
If you are looking for others to contribute code and so on I am not the right person for that, but I hope you find what you need.
it's "one level" in the sense that we track if assign foo changed, but the diff for maps is deeply nested, so if foo is a map we diff the tree all the way down and only send the changed subtree/values
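To make the idea concrete, here is a small sketch of that kind of recursive map diff. This is illustrative only, not LiveView's internal implementation; the module name is hypothetical.

```elixir
defmodule MapDiff do
  @doc "Returns a map containing only the keys whose values changed between old and new."
  def diff(old, new) when is_map(old) and is_map(new) do
    Enum.reduce(new, %{}, fn {key, new_val}, acc ->
      case Map.fetch(old, key) do
        # unchanged value: drop it from the diff entirely
        {:ok, ^new_val} ->
          acc

        # both sides are maps: recurse and keep only the changed subtree
        {:ok, old_val} when is_map(old_val) and is_map(new_val) ->
          Map.put(acc, key, diff(old_val, new_val))

        # changed or newly added value: include it whole
        _ ->
          Map.put(acc, key, new_val)
      end
    end)
  end
end

# MapDiff.diff(%{a: %{b: 1, c: 2}}, %{a: %{b: 1, c: 3}})
# => %{a: %{c: 3}}
```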
Yes, that's true, and I make use of that. However, and this is at the heart of the problem, that applies to assigns and not to streams, because assigns stay in memory so there is a previous value to compare a new one to. With streams the server does not keep the previous value around, so it cannot diff, and the client has the dynamic parts derived from the record in the DOM but not the whole original record. That's what's been catching folks out when trying to use streams. It's not as big an issue for linear data like a list of records (though some would say it is), but when it comes to serving big recursive data to large numbers of clients, the option of keeping a potentially explosive amount of data in session memory for detecting changes becomes too expensive to contemplate in a production setting, and that's where the approach I'm developing comes in handy.
I make full use of LV's ability to diff assigns, but the idea is to not put the actual record data in assigns. Only structure data goes into the assigns, and only after being restricted at the database level to represent one exacting rectangular slice of the data. You may have a different opinion, but from where I'm approaching this, that is small, contained and controllable enough to keep in memory so LV can do its thing. Which "data" records are to be sent from the server to the client then becomes a matter of calculating the difference (in set terms) between the MapSet of keys for which there is HTML on the client already and the MapSet of keys the client should be showing after the update, and sending the HTML generated for those through using the streams update mechanism. Any record from the MapSet that should be on the screen which we know, from the business logic or a PubSub subscription, has changed since it was last sent through is also added to the stream and thus updated on the client screen.
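A minimal sketch of that set-difference step, with hypothetical names (`visible_keys/1`, `fetch_nodes/1`, the `:nodes` stream, the `:shown_keys` assign are all assumptions, not the author's actual code): only keys new to the client get their records loaded and streamed; keys already rendered are left alone.

```elixir
def handle_event("viewport_changed", params, socket) do
  shown  = socket.assigns.shown_keys        # MapSet of keys already rendered client-side
  wanted = visible_keys(params)             # MapSet of keys the client should now show

  # In set terms: the records the client is missing
  to_send = MapSet.difference(wanted, shown)

  socket =
    to_send
    |> fetch_nodes()                        # load just those records from the DB
    |> Enum.reduce(socket, fn node, sock ->
      Phoenix.LiveView.stream_insert(sock, :nodes, node)
    end)

  {:noreply, assign(socket, :shown_keys, wanted)}
end
```

Records known (via business logic or PubSub) to have changed since they were last sent would be `stream_insert`ed the same way, which replaces their HTML on the client.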
I don't know if this is making sense to you (yet). I fell a bit behind over the weekend with the planned publishing of the code I have, but I should have it in the repository soon. That will give us something more concrete to discuss, and therefore hopefully make things somewhat easier. Right now you're having to guess at what I'm actually referring to, so it probably sounds like a load of waffle.
I think this is referring to static map accesses, documented here. I don't think Phoenix can diff dynamic accesses, but maybe I'm wrong about that? It just means that if you access @foo.bar.baz in your template then that dynamic won't be diffed for every change of @foo and @foo.bar.
But anyway, what you need to replace LiveComponents is the ability to diff ordered collections. UIs are generally trees of ordered components, where the order matters very much. You wouldn't want a file tree where the files are re-ordered whenever you scroll!
But diffing arbitrary trees is very expensive (I believe O(n^3)), so you need to do something to bring that down. The problem is that comparing two children of a given node to see if they've been re-ordered requires recursively comparing all the way down the tree.
But generally the changes are small, so what you can do is assign a static key to each child of a given node and then "inform" the algorithm that the same key means the same object, and that keeps you on the fast path. That's why React does it that way.
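An illustrative sketch (not React's or LiveView's actual code) of why a stable key keeps this cheap: matching children by key turns "is this the same child?" into a map lookup instead of a deep subtree comparison.

```elixir
defmodule KeyedDiff do
  @doc """
  old and new are lists of {key, value} children in display order.
  Returns the operations needed to turn old into new, matching
  children by key in roughly O(n). (Explicit move detection for
  re-ordered keys is omitted for brevity.)
  """
  def diff(old, new) do
    old_by_key = Map.new(old)

    Enum.flat_map(new, fn {key, value} ->
      case Map.fetch(old_by_key, key) do
        {:ok, ^value} -> []                      # same key, same value: reuse as-is
        {:ok, _old}   -> [{:update, key, value}] # same key: patch in place, no deep compare
        :error        -> [{:insert, key, value}] # new key: fresh render
      end
    end) ++
      for {key, _} <- old, not List.keymember?(new, key, 0), do: {:delete, key}
  end
end

# KeyedDiff.diff([a: 1, b: 2], [b: 2, a: 9, c: 3])
# => [{:update, :a, 9}, {:insert, :c, 3}]
```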
Of course Elixir maps side-step this entire problem because they are not ordered at all! Which unfortunately means they're useless for a rendering engine like this. There was work in this PR to extend diffing to arbitrary collections, with discussion of eventually using it for this purpose (with a key). But I don't believe that has materialized yet, so you still need LiveComponents for now.
LiveComponents solve this, by the way, because they do have a static key (the id). With the caveat that it must be globally unique instead of unique per-node-child. This is less convenient but does have some theoretical performance benefits over the wire if you move a component to a completely new place in the DOM. Though in practice I'm no longer certain this optimization actually works, as I've seen some weird diffs. More research needed.
We'll discuss this again once you're able to see how I've handled that, including the deterministic order. The order of children forms part of the structure portion of the data, i.e. of how nodes are related and should be represented on screen relative to each other. That is extracted from the recursive data but not referred to during the rendering of the data portion, so it's not subject to diff calculations at that level. Changes to the structure, including the order of children and where in the tree nodes should appear, affect the structure data, of which a relevant portion (rectangular region) is kept in memory per client session.

That part is for getting live changes to currently displayed data to the clients as efficiently as possible, but overall tracking of how the data in the database changes over time is not a matter for the UX but for the business logic that tracks how users change the data. It would be insane to try to detect what changes have occurred in a massive recursive database by comparing different time-based snapshots of the database, which is a vulgarisation of what you're suggesting should be done. If your app's business logic does not keep track of what changes it makes to the data, that's where your trouble starts. If it does track that, then it's not complex at all to follow the associations in the data to bubble up changes or cascade them as required, as long as it is being done on the server with business logic and not attempted in UX code such as LiveView and JS. The key is to not "let go" of the event which initiates a change until all the consequential changes and updates have been dealt with. If you drop the ball, your database and servers will have to work really hard to recover it, but if you keep it in hand, basically all the operations you need to do can be done quickly with key-only access, because you know where to look for things.
It's a bit of a mixture between relational-type thinking and event-driven thinking, but it's well worth the effort to ride that edge.
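A hedged sketch of that "don't let go of the event" shape, as I read it: the context function (server-side business logic, not LiveView) applies the change, follows its knock-on effects, and notifies subscribers before returning. Every name here (`MyApp.Tree`, `do_move/2`, `affected_structure_keys/1`, the topic format) is hypothetical.

```elixir
defmodule MyApp.Tree do
  alias Phoenix.PubSub

  def move_node(node_id, new_parent_id) do
    with {:ok, node} <- do_move(node_id, new_parent_id) do
      # The initiating call handles every consequence before returning:
      # follow the associations by key, then broadcast to subscribers so
      # sessions showing the affected nodes can update their clients.
      for affected_id <- affected_structure_keys(node) do
        PubSub.broadcast(MyApp.PubSub, "node:#{affected_id}",
          {:structure_changed, affected_id})
      end

      {:ok, node}
    end
  end

  # do_move/2 and affected_structure_keys/1 stand in for the app's own
  # key-based persistence and association-following logic.
end
```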
The purpose of my reply was mainly to clarify what LiveView is natively capable of WRT diffing, in response to that tweet.
There are of course other ways you can accomplish these things, like what you're doing.
It is insane to do this against the entire database, yes. But it is not insane to use this approach against a view of a portion of the database (which as I understand is what you're working with anyway).
In general, the purpose of these declarative/reactive frameworks like React or LiveView is to allow people to build interfaces in such a way that it's easy to localize and incrementalize the updates to the interface without going down the "bad path" (diffing the entire thing), but while still maintaining the mental model of diffing the entire thing.
Of course, the result of that careful design is that most people using React and similar frameworks don't actually know how they work. A blessing and a curse.
As an analogy, database isolation often works in a similar way: you are allowed to think of transactions as being isolated and serialized, even though they are actually concurrent over the same storage space!
Yes, but exclusively for a carefully controlled, minimal subset of the structural portion of the data. That part only changes when users reorder children, move parts of the tree around, or add or remove nodes of the tree. As long as every client session ensures that it only subscribes to changes to things that are actually represented on the browsers of the clients they're representing, it's fairly trivial to get those from the business logic layer through to the sessions, and for them to get it through to their clients' screens.
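A small sketch of keeping subscriptions aligned with what is actually on screen: subscribe to topics for keys entering the view and unsubscribe from keys leaving it. The per-node topic naming is my assumption, not the project's convention.

```elixir
# old_keys / new_keys are MapSets of node keys currently and newly visible.
defp sync_subscriptions(old_keys, new_keys) do
  # keys scrolling into view: start listening for their changes
  for key <- MapSet.difference(new_keys, old_keys) do
    Phoenix.PubSub.subscribe(MyApp.PubSub, "node:#{key}")
  end

  # keys scrolling out of view: stop listening
  for key <- MapSet.difference(old_keys, new_keys) do
    Phoenix.PubSub.unsubscribe(MyApp.PubSub, "node:#{key}")
  end

  :ok
end
```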
Which brings a second aspect into play. The business logic should also have a fairly easy time of distinguishing between changes to the structure and changes to the content of the nodes. If only the structure changes, then the HTML rendering for the content can be reused without hassle, on one condition: the nodes need to have been rendered flat in the DOM, i.e. none of the recursive hierarchy should be present in the HTML. I initially thought that wasn't likely to be possible and that I'd have to keep track of the nesting as well in order to control the updates with surgical accuracy. But I have since discovered that there are quite a few ways of keeping the HTML flat, and therefore of ensuring the nodes, even though they represent deeply nested / recursive data, have no overlap with each other.

In fact, though nesting is a very common thing in HTML and the DOM, the bulk of its primitives are fully geared towards big sets of peer elements. Only a handful of the higher-level constructs in HTML and CSS have any support for nesting at all, and it's usually quite a mission to get them to work as you'd want, even if you know in advance all the levels of nesting you need to handle. When you can't know in advance, it tends towards insanity. I was very relieved when I figured out how to map an indefinitely recursive dataset onto a stock-standard flat structure like a table, list or grid, which is what I am using at this point. That's right: I'm showing the hierarchy in a grid. It looks and works brilliantly. The parts that are order-dependent, which you mentioned as important, are isolated into a small structure that calculates quickly and cheaply and takes almost no memory, because it only deals with how the recursive data maps onto the flat substrate.
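One way (among several, and not necessarily the one used here) to flatten an arbitrarily recursive tree into the peer-element list a grid or table wants: a pre-order walk that records depth per node, so the nesting lives in the data rather than in the DOM.

```elixir
defmodule Flatten do
  @doc """
  Takes nodes shaped like %{id: ..., children: [...]} and returns
  [{id, depth, node}, ...] in display order.
  """
  def flatten(nodes, depth \\ 0) do
    Enum.flat_map(nodes, fn node ->
      # emit the node itself, then its subtree, one level deeper
      [{node.id, depth, node} | flatten(node.children, depth + 1)]
    end)
  end
end

# Flatten.flatten([%{id: 1, children: [%{id: 2, children: []}]}])
# => [{1, 0, %{id: 1, ...}}, {2, 1, %{id: 2, ...}}]
```

Each entry can then be rendered as a flat grid row, with depth driving only the visual indentation (e.g. padding or a grid-column offset).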
The nodes themselves, freed from overlapping at the UI level, become very easy to deal with as well. At the moment the prototype doesn't even use any LiveView component for that, because I'm still only showing a single label field, but in time I have the option of using either a function component or a LiveComponent to manage the contents. Either of them would be super simple, because it wouldn't have any burden from the recursive nature of the data to carry or represent. Each node is on its own little island, visually, unaware that to the user they appear in a hierarchy brought about and managed without their involvement.
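To illustrate how simple such a per-node function component can stay (module, component and attribute names are hypothetical): it renders one record and nothing else, with depth driving only the indentation.

```elixir
defmodule MyAppWeb.NodeComponents do
  use Phoenix.Component

  attr :node, :map, required: true
  attr :depth, :integer, default: 0

  # One flat "island" per node: no knowledge of parents, children or order.
  def node_row(assigns) do
    ~H"""
    <div id={"node-#{@node.id}"} style={"padding-left: #{@depth}rem;"}>
      <%= @node.label %>
    </div>
    """
  end
end
```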
I'm not entirely sure I can agree with your sentiment here. By my reckoning, whether it's mental or actual, diffing "the entire thing" is simply the wrong way to think about the problem, and no UX tool like LiveView or React has a legitimate role to play in the solution of that level of problem. UX tools and frameworks should be used to represent data and how people may manipulate it, and that is it. Nothing else. Effecting the changes requested by users via the UX tools, handling the knock-on effects of changes, and getting word out to any subscriber to data that has been touched in the process is at the heart of the business logic, which I strongly believe should live on the server (what Phoenix calls context apps, I think), i.e. not in files that live in the …Web directory.

Sure, while you're still experimenting with different options you'd maybe do stuff in a controller to make it quick and easy to work in a single file that changes a lot and often. But as it becomes apparent what operations you're going to need to make work for the data to be managed properly, those would disappear from the controller logic into context apps, where they have standardised access to the database and where things like PubSub management can be dealt with without repeating yourself. That's my view anyway.
I'll take your word for it. Managing a complex recursive dataset is still, in my view, not a problem any UX toolset will solve for you, even if you understand perfectly how it works. They're just the wrong type of tool for that job.
That's an interesting analogy. I used to be a fully qualified Oracle (Tuning) DBA and was the "owner" of many other databases for big companies over the years as well. The strange thing about transaction isolation is that it's almost never properly implemented. I was once asked by a new employer to recommend which database I'd want to build the new system on, and I went straight for Oracle. Because of the price that raised many eyebrows, and I was of course asked to motivate and defend my recommendation, as there are so many cheaper alternatives. It took me less than 15 minutes, most of which was spent firing up databases, to run the same few statements from different terminal sessions against each of the databases. Oracle was the only one that produced the correct result; all the others failed in the same way, essentially not honouring transaction boundaries and allowing data that had not yet been committed to be seen by other sessions, which could make their own changes using that data, and those changes stuck even when the original transaction was rolled back. It not only made my case but also explained issues they were facing in systems that didn't use Oracle. I've not had license to run Oracle in more than a decade now, but I have respect for it.

Your comment implying that transaction isolation is essentially a myth doesn't really hold true. It just takes a really well-designed and stable database to actually implement it as specified in the standards, but it really isn't as fuzzy and unpredictable as most non-Oracle users tend to experience transaction isolation to be. But I get your point, which I read as referring to using tools in their abstract form while trusting them to turn the abstract into reality in a predictable manner. I'm just saying that, especially when it gets to the bigger concerns, it can be hard to find tools that do their abstract forms justice when it comes to practice.
But yes, to be clear, my point was that database transactions give the illusion of being serialized when they are actually concurrent. They do this while preserving correctness through proper concurrency control.
Of course there are some databases (many…) which are literally broken, but that's not what I meant.