def user_list(assigns) do
  ~H"""
  <div class="user-list">
    <div :for={user <- @users} id={user.id}>
      <.user_entry user={user} />
    </div>
  </div>
  """
end
And then you had more than one on the page, with overlapping entries, like:
def mount(socket) do
  socket = assign(socket, %{
    # Imagine these are loaded from the DB...
    new_users: [user1, user2, user3],
    top_users: [user2, user4, user6]
  })

  {:ok, socket}
end
def render(assigns) do
  ~H"""
  <div>
    <.user_list title="Top users" users={@top_users} />
    <.user_list title="New users" users={@new_users} />
  </div>
  """
end
Now you would be in trouble, because the global ids don't compose properly: user2 appears in both lists, so two elements end up with the same DOM id. In React the components are mounted implicitly based on the tree position (children are keyed all the way up), so they don't have this problem. I don't mean to harp on this in particular (this is still a big improvement!), I was mostly just curious. It would be nice if we didn't need global ids, but they are at least forward-compatible with non-global ids, so it's something that can be kicked down the road for now.
If I could offer one piece of feedback, though: I worry the "implicit prefixing" trick you mention could create more confusion, since it will work only some of the time. I think it might be better to be explicit about requiring global ids until the problem can be solved "properly" in the future.
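For illustration, one way to be explicit today is to thread a prefix through the component and fold it into each id. This is just a sketch; the id_prefix assign is my own invention, not part of the proposal:

def user_list(assigns) do
  ~H"""
  <div class="user-list">
    <%!-- @id_prefix disambiguates the two lists, so user2 gets a unique DOM id in each --%>
    <div :for={user <- @users} id={"#{@id_prefix}-#{user.id}"}>
      <.user_entry user={user} />
    </div>
  </div>
  """
end

# At the call sites:
# <.user_list id_prefix="top" title="Top users" users={@top_users} />
# <.user_list id_prefix="new" title="New users" users={@new_users} />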
It's a huuge improvement! Now developers can get these huge diff size gains without much effort.
I was looking into this, and there are two possible improvements:
- when :key is present, the payload still increases linearly with the length of the collection, because a full list of LiveComponent ids is always sent. I think that helps when reordering, but is it necessary when the order doesn't change?
- the memory overhead is much higher than I expected. I might be measuring it the wrong way, but I have a feeling LiveComponents were never really optimized for memory overhead.
Without keys:

| Array length | Heap size (words) | Payload size |
|---|---|---|
| 0 | 987 | 78 (empty) |
| 1 | 987 | 112 |
| 10 | 987 | 238 |
| 100 | 2586 | 1497 |
| 1000 | 17731 | 14098 |
With keys:

| Array length | Heap size (words) | Payload size |
|---|---|---|
| 0 | 987 | 78 (empty) |
| 1 | 987 | 138 |
| 10 | 4185 | 175 |
| 100 | 28690 | 625 |
| 1000 | 121536 | 6029 |
Both of these issues matter less with more complex items, since I assume the memory overhead of a LiveComponent record is constant (I didn't measure that, though). But remember that LiveComponent usage will become much, much easier thanks to the :key syntax, so optimizing it may become more important. Still, I can't shake the feeling this could be done with almost no memory overhead: just compare the previous assign to the current one without using LiveComponents at all. I tried doing it myself and failed; the engine code is not the easiest to grasp.
This is the simple Livebook I've used:
Mix.install([
  {:phoenix_playground, "~> 0.1.6"},
  {:phoenix_live_view, github: "phoenixframework/phoenix_live_view", branch: "main", override: true}
])

defmodule DemoLive do
  use Phoenix.LiveView
  require Logger

  @list_size 1000

  def mount(_params, _session, socket) do
    list = Enum.map(1..@list_size, fn i -> %{index: i, value: random_value()} end)
    report_memory_usage()
    {:ok, assign(socket, list: list)}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="randomize">Randomize</button>
    <div :for={item <- @list} :key={item.index}>
      {item.value} {item.value + 1} {item.value + 2}
    </div>
    """
  end

  def handle_event("randomize", _params, socket) do
    # Replace the value of one random item, then log the heap size.
    index = Enum.random(1..@list_size)
    new_list = put_in(socket.assigns.list, [Access.at(index - 1), :value], random_value())
    report_memory_usage()
    {:noreply, assign(socket, list: new_list)}
  end

  defp random_value() do
    Enum.random([1, 2, 3])
  end

  defp report_memory_usage() do
    # Force a GC first so the reported total_heap_size (in words) is stable.
    :erlang.garbage_collect()
    Logger.info("Heap size: #{Process.info(self())[:total_heap_size]}")
  end
end

PhoenixPlayground.start(live: DemoLive)
It's necessary right now because that's how LiveComponent rendering works: it's really just LiveComponents under the hood, with no further optimizations apart from some change-tracking trickery.
Thank you for experimenting with this! Each LiveComponent comes with some overhead for its state (socket, lifecycle), so yeah, it's definitely not perfect for this use case.
I agree that it would be nice to optimize this further, but that would require a whole new diffing algorithm + diff format for keyed comprehensions, which is not something I have been able to wrap my head around yet.
I am sure this can be optimised in future releases, so I am not very worried about it. Optimising it would mean refactoring the Diff module, which is one of the most complex parts of the codebase, so it should be a win-win.
If this code is going to be refactored heavily at some point, I would like to quickly throw my hat in the ring and argue that LiveComponents (or similar) should be diffed with local keys rather than global keys, because (as I showed a couple of posts up) global keys do not compose properly and cause problems. I have even managed to run into collisions a couple of times in my own apps, where I 100% control the keys, and it can only get worse from there.
I have read some of the LV code, but I don't have nearly the understanding needed to know how hard this would be. IIRC you are the original author of the LiveComponent feature, so maybe you can shed some light there. (Though TBH I don't usually remember how my own code works after even a few months lol)
If there are multiple comprehensions in a component, you can't get away with having one node in the tree per component. The tree structure has to match the DOM node structure, because only a node is guaranteed to have a single list of children.
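As a concrete illustration of that point (a sketch of my own, not taken from the LiveView internals): a single function component can emit several sibling comprehensions, so bookkeeping keyed only on "the component" cannot say which list a given child belongs to; you need a node per comprehension in the tree.

def dashboard(assigns) do
  ~H"""
  <%!-- Two sibling comprehensions inside one component: per-component
        bookkeeping cannot tell children of the first list from the second. --%>
  <ul>
    <li :for={user <- @new_users}>{user.name}</li>
  </ul>
  <ul>
    <li :for={user <- @top_users}>{user.name}</li>
  </ul>
  """
end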
Given that HEEx splits components into statics/dynamics, I have a sneaking suspicion the accounting is all done at the component level, although there are probably structures to deal with comprehensions and conditionals, right? Maybe those could be modified to do the job, I'm not sure.
Conditionals are easy as long as they return the same number of children (nil is a valid child node). Comprehensions are what necessitate the :keys to keep the diff fast. I think React mounts fibers for every node (not just components), because I don't see how else they would deal with this, but strangely I have been unable to find any mention of that fact anywhere. I'll probably have to look at the code to know for sure.
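To illustrate the conditional case (again just a sketch, assuming :if compiles to a branch that renders nil when false): the branch always occupies exactly one child slot, so its siblings never shift position.

def banner(assigns) do
  ~H"""
  <div>
    <%!-- When @show_banner is false this slot renders as nil,
          so the <span> below keeps the same position in the tree. --%>
    <p :if={@show_banner}>Welcome back!</p>
    <span>Always rendered</span>
  </div>
  """
end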
But yeah, I worry this would turn into "rewrite the entire LiveView engine", which would probably be worth it TBH, but that's a lot easier to say when you're not the one doing it…
The new code only sends a position when it changed or moved.
Also, the new branch does not use live components any more, so the keys are all local.
This also adjusts all other comprehensions to be implicitly keyed by their index, so nested change tracking also applies whenever you omit the key, which is useful for things like slots.
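To make the two modes concrete, here is a usage sketch based on the description above, reusing the :key syntax from the Livebook earlier in the thread (the component and assigns are hypothetical):

def item_list(assigns) do
  ~H"""
  <%!-- Explicit key: rows are tracked by item.id even if the list is reordered. --%>
  <div :for={item <- @items} :key={item.id}>{item.name}</div>

  <%!-- No :key: rows are implicitly keyed by their index in the list. --%>
  <div :for={item <- @items}>{item.name}</div>
  """
end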
@garrison I much appreciate your ability to communicate this in a way that let people see the why, along with a clear path for implementing it. It's something I know a lot of us have tried to advocate for but failed to get over the line!
The need was always clear, but we just never knew how. The main insight was indeed to use the position in the tree as the key, but even then this is the third or fourth attempt at making this work in the last 5 days, so we needed a few more breakthroughs!
I don't have much time now to re-run all the tests, but for 100 & 1000 items with keys enabled:
| Array length | Heap size (words) | Payload size |
|---|---|---|
| 100 | 6772 | 138 |
| 1000 | 46422 | 138 |
Payload size is constant unless we move items around or insert in the middle. The worst case is prepending without explicit keys, but even that is much better than before. There seems to be some memory overhead; I assume you keep the previous keys around (I didn't have time to look into the implementation).
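For reference, this is the kind of update that hits that worst case (a hypothetical prepend handler for the Livebook above): with an explicit :key the existing rows keep their identity, while without one every implicit index shifts by one, so far more rows show up in the diff.

def handle_event("prepend", _params, socket) do
  # Hypothetical: push a new item onto the head of the list.
  new_item = %{index: System.unique_integer([:positive]), value: 0}
  {:noreply, update(socket, :list, fn list -> [new_item | list] end)}
end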
Super excited for this! I think it might be one of the biggest performance optimizations in LiveView!