Update attribute if other dependent attributes change like Observables?

I have a slightly different opinion.

Signals as I’d like them are a way to do reactive, declarative programming.

Instead of saying each time

new_x = 123
socket 
|> assign(:x, new_x)
|> assign(:y, compute_new_y(new_x))

I’d like a way to “declare” that y is a derived value and not have to worry about updating it explicitly in every place - forgetting to do so leads to bugs.

To some extent, I can do it in the render function:

def render(assigns) do
  assigns = Map.put(assigns, :y, compute_new_y(assigns.x))

  ~H"""
  ...
  """
end

but it has a drawback - the template thinks that y is always updated, since it’s not present in __changed__. You can do some dark magic to update assigns.__changed__ manually, as I’m doing in live_vue, but it’s often overkill.
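
For reference, a minimal sketch of what that “dark magic” can look like (it relies on LiveView’s internal __changed__ map rather than a public API, and compute_new_y/1 is the hypothetical derivation from above): recompute :y only when :x changed, and record :y in __changed__ so change tracking keeps working.

def render(assigns) do
  # Recompute when tracking is disabled (__changed__ == nil) or when :x changed.
  assigns =
    if assigns.__changed__ == nil or assigns.__changed__[:x] do
      assigns
      |> Map.put(:y, compute_new_y(assigns.x))
      |> Map.update!(:__changed__, fn
        nil -> nil
        changed -> Map.put(changed, :y, true)
      end)
    else
      assigns
    end

  ~H"""
  <div>{ @y }</div>
  """
end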

Btw, Phoenix already has the logic we want. If we define a computed value inline in the template, everything works just fine - it’s recomputed automatically whenever one of its ingredients is updated:

def render(assigns) do
  ~H"""
  <div>{ @x * 2 }</div>
  """
end

I think we can even use a function there:

def render(assigns) do
  ~H"""
  <div>{ compute_new_y(@x) }</div>
  """
end

but there are problems:

  1. we need to put that function call explicitly in each template that wants to use it
  2. it would be computed multiple times, once for each invocation
  3. it gets more and more complex if we want to use the result of one computation inside another one
  4. we cannot do it outside of the HEEx template

I’m experimenting a bit with a macro for doing it - calculating the dependencies of a function at compile time. It seems to work, but it might be tricky to make it “fully” work. In JS they use proxies to detect property access; here we don’t have that - we’d need to either provide our own way of accessing these values so we could track accesses, or statically determine all the dependencies. Either option is tricky and not ideal…

    test "logs accessed keys and returns computed value (dot access)" do
      assigns =
        assign_computed(%{val1: 5, val2: 10}, :result, fn data -> data.val1 + data.val2 * 2 end)

      # 5 + 10 * 2 = 25
      assert assigns.result == 25
      assert assigns.__deps == %{val1: [:result], val2: [:result]}
    end
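
To give an idea of the static route, here’s a rough sketch of one way the macro could work (assign_computed/3 and __deps are just the names from the test above, not anything in Phoenix, and it only handles dot access on the function’s single argument): walk the quoted function body at compile time, collect data.key accesses, and record them as the dependency map.

defmodule ComputedAssigns do
  # Hypothetical sketch: extract `data.key` accesses from the quoted function
  # at compile time, compute the value once, and record the dependencies
  # under :__deps in the assigns map.
  defmacro assign_computed(assigns, key, fun) do
    deps = extract_deps(fun)

    quote do
      assigns = unquote(assigns)
      key = unquote(key)

      assigns
      |> Map.put(key, unquote(fun).(assigns))
      |> Map.update(:__deps, Map.new(unquote(deps), &{&1, [key]}), fn tracked ->
        Enum.reduce(unquote(deps), tracked, fn dep, acc ->
          Map.update(acc, dep, [key], &[key | &1])
        end)
      end)
    end
  end

  # Only the simple single-clause `fn data -> ... end` form is supported here.
  defp extract_deps({:fn, _, [{:->, _, [[{arg, _, _}], body]}]}) when is_atom(arg) do
    {_, deps} =
      Macro.prewalk(body, [], fn
        {{:., _, [{^arg, _, _}, field]}, _, []} = node, acc when is_atom(field) ->
          {node, [field | acc]}

        node, acc ->
          {node, acc}
      end)

    deps |> Enum.reverse() |> Enum.uniq()
  end
end

With import ComputedAssigns in the test module, the call expands at compile time, so no proxies are needed - but, as said, anything more dynamic than a literal fn with plain dot access would slip through.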

I feel there are lots of specific ways reactivity could be implemented depending on the specific UI needs, and that it wouldn’t need to be done in-framework or in-LiveView, but rather in plain Elixir…

A few samples from the last few weeks (I’ve been digging into this topic and have put a few notes online):

  • Explicit recomputation of derived properties, where the derived properties are known and the author (you) wraps the logic in a recomputation helper (see the sketch after this list). A very “caveman” approach, and limited, but it’s often enough in my UIs.

  • Pointers to in-memory maps - don’t mind the benchmarks, they’re not very useful, but in this specific project, which has no database, I pass anonymous functions around. The idea is to stay close to “send the code to the data”. Yes, this also means some global state somewhere, but it’s useful (and safe) in this specific project, where state is just some parameters for calculations that a single source of truth sets and others use.

  • Declarative DSL for derived properties, where you can declare “computers” that define their inputs and outputs and automatically track dependencies. This one exists because I often need to build little “calculators” that are very generic UI-wise and can be described as a collection of typed inputs and typed outputs.
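
To make the first bullet concrete, a tiny made-up example (the assigns and the recompute/1 helper are hypothetical, not taken from the linked notes): every handler that touches a base assign pipes the socket through one helper that rebuilds all derived values.

# All derived values are rebuilt in one place, so handlers can't forget one.
defp recompute(socket) do
  subtotal = Enum.sum(socket.assigns.prices)
  total = subtotal * (1 + socket.assigns.tax_rate)
  assign(socket, subtotal: subtotal, total: total)
end

def handle_event("set_tax_rate", %{"rate" => rate}, socket) do
  {:noreply, socket |> assign(:tax_rate, String.to_float(rate)) |> recompute()}
end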

Sorry for the roughness of those samples; I use my blog as a “public thinking” space, but it’s mainly for myself.


Between this thread and the other one I think I have actually made every single point you listed here as well, so I think we are very much on the same page. What I was getting at in my reply is that I’m not sure Signals as they are currently proliferating in JS-land are the solution I want for this problem.

I will try to go into a bit more detail here.

What we want, of course, is a declarative (“reactive”) interface where things update arbitrarily based on their dependencies. We do not want to write imperative code (element.appendChild(...)) because that approach does not scale.

The simplest way to accomplish this is to re-render the entire document from scratch on every state change. This is prohibitively expensive (and also breaks some HTML elements), so the second simplest is to render the entire document and then compare every element in the DOM to “diff” them. (This can be made a bit cheaper using a virtual DOM to avoid excessive DOM reads, but this is an abstraction you don’t have to think about.)

React works this way. React components, post-hooks, follow an idempotent rendering model. Props and state come in, and rendered content comes out. The first time the component is called with the given props/state, it is allowed one round of memoized computations (useMemo) and side-effects (useEffect). On each subsequent call those hooks will be idempotent, hence the component will be idempotent. All other computations (those which are not memoized) are simply recomputed on each render.

This design is enormously clever. It means that the component is declarative (reactive) and idempotent, but not pure (meaning it can still load state from a DB, etc). Crucially (in my view, thus far) the mental model of a component is scoped to that component. Each time it is rendered everything happens top-down, and the abstraction of memoization is easy to wrap your head around.

Signals, I think, are the opposite. There is no longer a “component model” to comprehend, components exist simply to register an endless series of unaccountable callbacks into the engine. I find this rather terrifying.

Indeed, and this is what I was getting at when I argued that all we really need is memoization, somewhere. Ideally everything would happen in something like update/2, though I’m beginning to think that the update/mount cycle should really be replaced with render/1. I’m not sure if that is something which can be retrofitted into Phoenix without massive changes, though.

In the meantime some sort of “computed assigns” thing, as Zach and I had discussed above, could probably tide things over. I am also interested in long-term solutions, though I don’t yet know what they would look like.

Doing these things at compile time is a dangerous road. Svelte tried to do this, and made a big deal about how it’s the better approach and it would replace React. And now? They’re doing it at runtime. Funny how that works!

I would not worry too much about getting fancy with dependency computation. Manually passing in dependencies is a good enough starting point. There are also cases where you might want to use a dependency without recomputation, which violates the model but is nevertheless sometimes useful for advanced cases.

I think an assign_memo(assigns, key, func, deps) API would be a good start, implemented similarly to what Zach showed earlier. Such memos would be scoped to a single component, avoiding the worst of signals (global state), and as long as you initialize them up-front (e.g. in update/2 or mount/1) they should preserve an idempotent mental model.

def update(assigns, socket) do
  socket
  |> assign_memo(:greeting, fn assigns ->
    "Hello, #{assigns[:name] || "user"}"
  end, [:name])
  |> assign(assigns)
  |> then(&{:ok, &1})
end

Note that this inverts React’s model. If we wanted to go down that road we would have to instead call update/2 (or perhaps just render/1) in response to assign() and set up everything there, which would be a substantial change to LiveView. Of course this is exactly the transition React went through with hooks :slight_smile:
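
For what it’s worth, a minimal sketch of how such an assign_memo/4 could sit on top of regular assigns (the :__memo_deps__ assign and the module name are assumptions, not an existing API): cache the dependency values alongside the result and recompute only when they differ.

defmodule AssignMemo do
  import Phoenix.Component, only: [assign: 3]

  # Recompute `key` only when one of the assigns listed in `deps` changed
  # since the last computation; otherwise keep the existing value.
  def assign_memo(socket, key, fun, deps) do
    current = Map.take(socket.assigns, deps)
    cache = socket.assigns[:__memo_deps__] || %{}

    if Map.get(cache, key) == current and Map.has_key?(socket.assigns, key) do
      socket
    else
      socket
      |> assign(key, fun.(socket.assigns))
      |> assign(:__memo_deps__, Map.put(cache, key, current))
    end
  end
end

Since the cache lives in the component’s own assigns, it stays scoped to that component and survives re-renders without any global registry.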


Can we use Ash.Flow or Ash.Reactor to do this, or are they only for compile time?