Thank you Chris, and everyone who has worked on making it such an awesome release!
I can’t wait to get stuck in, and I reckon we’ll be running a Programming Phoenix LiveView (PragProg) book club soon… if anyone would like to join, please PM me (we should also have a few copies to give away). For all you @BartOtten fans, you’ll be pleased to learn he has already agreed to be one of our club leaders.
I’m not a web-dev guy; I had to do it every once in a while and hated it.
You guys made me love web dev. I really relax now when I’m spinning up my components and LiveViews. With Tailwind I’m even creating good-looking stuff (at least the cat says so).
While these changes are a big step forward for new applications, it probably makes sense to maintain your current application conventions and skip these steps for established, large 1.6 applications.
Blog.list_posts/0 calls Repo.all/1, which fetches all records, right? Then stream/3 even runs a for comprehension over that collection (to add the generated dom_id and a -1 index into a new tuple). So where do we win here when talking about streams? (The generated mount is sketched below for reference.)
From what I can see, using Repo.stream/2 instead of Repo.all/2 would not give us much, since the for call mentioned above returns a list anyway, right?
I guess that after rendering said (empty) table, the live stream renders one row at a time and sends the DOM to JavaScript, which then appends it to the table element, right?
So let’s say I have 1 million database entries and I want to stream them using Repo.stream/2 only when needed, for infinite scrolling. Can I do so with this new API?
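For context, this is roughly the generated mount being discussed (a sketch based on the default phx.gen.live output; module names depend on your app):

```elixir
# lib/my_app_web/live/post_live/index.ex (as generated by mix phx.gen.live)
@impl true
def mount(_params, _session, socket) do
  # Blog.list_posts/0 is a plain Repo.all/1 under the hood, so the whole
  # list is loaded here; stream/3 then hands the items to the client and
  # no longer keeps them in the server-side assigns.
  {:ok, stream(socket, :posts, Blog.list_posts())}
end
```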
The win for streams will come when pushing updates to LV via PubSub. It is not in the generated code but streams get us closer to that and adding the remaining PubSub bits should be easy.
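Something along these lines (an untested sketch, not generated code; the topic name, message shape, and MyApp modules are just examples):

```elixir
# Context side (not generated): broadcast after a successful write.
def create_post(attrs \\ %{}) do
  case %Post{} |> Post.changeset(attrs) |> Repo.insert() do
    {:ok, post} ->
      Phoenix.PubSub.broadcast(MyApp.PubSub, "posts", {:post_created, post})
      {:ok, post}

    {:error, changeset} ->
      {:error, changeset}
  end
end

# LiveView side: subscribe once connected, patch the stream on each message.
@impl true
def mount(_params, _session, socket) do
  if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "posts")
  {:ok, stream(socket, :posts, Blog.list_posts())}
end

@impl true
def handle_info({:post_created, post}, socket) do
  {:noreply, stream_insert(socket, :posts, post, at: 0)}
end
```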
I can confirm that I was able to modify several items at once, maybe not in the most efficient way, within one handle_info callback, and they seem to arrive as one update to the client. This is what I did. I needed to modify the checkbox selections of several items at once:
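Simplified sketch of the pattern (Blog.list_posts_by_ids/1 and the :selected? virtual field are placeholders, not real code from my app):

```elixir
# Flip the selection flag on several posts and re-insert each one into the
# stream; the resulting diffs are flushed together, so the client receives
# them as a single update.
@impl true
def handle_info({:select_posts, ids}, socket) do
  socket =
    ids
    |> Blog.list_posts_by_ids()
    |> Enum.reduce(socket, fn post, acc ->
      stream_insert(acc, :posts, %{post | selected?: true})
    end)

  {:noreply, socket}
end
```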
Unfortunately, in my case the payload is around 8K-14K per round trip. I’m blaming Heroicons, as they get included as inline SVG. I show them with an if, but I also tried toggling a hidden / block CSS class on a surrounding div and couldn’t achieve any better results yet. Still playing.
Edit: there are also Edit & Delete links with JS coming down for each updated row - another source of heavy bandwidth. Plus Tailwind class names, as they are now also part of each message. I think this will require attention at some point; I just don’t want to prematurely optimize while sketching the app.
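For illustration, the two variants I tried look roughly like this (a sketch inside a component module; the :selected? flag and the <.selected_icon /> component, which stands in for the inline Heroicon SVG, are placeholders):

```elixir
# Variant A: render the icon only when the post is selected. The inline SVG
# ships in the row payload whenever the condition is true.
def selection_cell_a(assigns) do
  ~H"""
  <.selected_icon :if={@post.selected?} class="h-5 w-5" />
  """
end

# Variant B: always render the icon and toggle visibility with Tailwind
# classes. In my testing the payload ended up about the same, since the SVG
# is still part of the row markup.
def selection_cell_b(assigns) do
  ~H"""
  <div class={if @post.selected?, do: "block", else: "hidden"}>
    <.selected_icon class="h-5 w-5" />
  </div>
  """
end
```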
@chrismccord Is there a known showstopper preventing Phoenix from sending the static segments of components only once? Then, over the lifespan of a session, a user would build up “a library of components”, and only the dynamics would have to be sent for each component that was sent earlier in the session. Especially useful with rows.
Asking you directly since you went to great lengths to minimize data over the wire, so maybe you have thought about this before and already know the answer.
---
…and next we’ll request prebuilding the component lib for a CDN, etc. etc. One step at a time…
I tried to cover this in the writeup, but even ignoring the PubSub use case, we get huge wins with streams in phx.gen.live compared to the 1.6 approach. We used to push_navigate back to the PostLive.Index LV when you submitted the modal form, which would require refetching the entire listing every time you created or edited a post. Now with streams, we can push_patch back to the index LV and stream_insert the created or updated post. No extra fetching of the listing required.
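Roughly, the generated index now handles the save like this (sketching from memory of the 1.7 generators; module names and the exact message shape depend on your app):

```elixir
# PostLive.Index: the form component notifies the parent after a successful
# save; the index inserts/updates that single row in the stream instead of
# refetching the whole listing.
@impl true
def handle_info({MyAppWeb.PostLive.FormComponent, {:saved, post}}, socket) do
  {:noreply, stream_insert(socket, :posts, post)}
end
```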
Our generated context functions have nothing to say about how you’d handle pagination, so this is a little tangential to our generated code. But LV streams could absolutely be used to do “realtime feeds”, especially because you can prepend or append (or insert at any place) on demand. Hope that helps!
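For example, the insert position is controlled with the at: option (the stream name here is just an example):

```elixir
# Prepend the newest item to the top of a feed...
socket = stream_insert(socket, :feed_items, new_item, at: 0)

# ...or append at the end (the default), e.g. when the user scrolls down
# and older items are loaded.
socket = stream_insert(socket, :feed_items, older_item, at: -1)
```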
Awesome job Chris & team, I’ve been looking forward to this release for a long time! Exciting stuff! The wait wasn’t too bad though, as the rc releases were already stable enough for me.