Update attribute if other dependent attributes change like Observables?

Is there anything in Ash or Elixir to update a value in a nested map when a value in another map changes, based on conditions from dynamic, user-supplied templates?

map0 = %{
  data: %{true_sum: 0}
}

map1 = %{
  data: %{true_count: 0, field: false}
}

map2 = %{
  data: %{true_count: 0, field: false, visible: false}
}

# user toggles field
map1 = put_in(map1.data.field, true)

# system updates
map2 = put_in(map2.data.visible, true)
map1 = put_in(map1.data.true_count, 1)
map0 = put_in(map0.data.true_sum, 1)

# user toggles field
map2 = put_in(map2.data.field, true)

# system updates
map2 = put_in(map2.data.true_count, 1)
map0 = put_in(map0.data.true_sum, 2)

# user toggles field
map1 = put_in(map1.data.field, false)

# system updates
map2 = put_in(map2.data.visible, false)
map2 = put_in(map2.data.field, false)
map2 = put_in(map2.data.true_count, 0)
map1 = put_in(map1.data.true_count, 0)
map0 = put_in(map0.data.true_sum, 0)

This is what ChatGPT-4o suggests. Is there a library or anything that already does this like an ECS?

To implement this in Ash or Elixir with dynamic forms, maps, fields, and user-supplied conditions, you can create a template that defines the structure and logic for the dynamic updates. Here’s how you can approach it:

1. Define a Template for User-Supplied Logic

The template can be a JSON or Elixir map that specifies the fields, conditions, and actions to be performed when a field is toggled.

%{
  maps: %{
    map0: %{data: %{true_sum: 0}},
    map1: %{data: %{true_count: 0, field: false}},
    map2: %{data: %{true_count: 0, field: false, visible: false}}
  },
  rules: [
    %{
      condition: %{map: :map1, field: :field, value: true},
      actions: [
        %{map: :map2, field: :visible, value: true},
        %{map: :map1, field: :true_count, value: 1},
        %{map: :map0, field: :true_sum, value: 1, operation: :increment}
      ]
    },
    %{
      condition: %{map: :map2, field: :field, value: true},
      actions: [
        %{map: :map2, field: :true_count, value: 1},
        %{map: :map0, field: :true_sum, value: 1, operation: :increment}
      ]
    },
    %{
      condition: %{map: :map1, field: :field, value: false},
      actions: [
        %{map: :map2, field: :visible, value: false},
        %{map: :map2, field: :field, value: false},
        %{map: :map2, field: :true_count, value: 0},
        %{map: :map1, field: :true_count, value: 0},
        %{map: :map0, field: :true_sum, value: 0, operation: :reset}
      ]
    }
  ]
}

2. Implement the Logic Processor

Create a module to process the template and apply the rules dynamically.

defmodule FormApp.DynamicProcessor do
  def process_template(template, map_name, field, value) do
    # Record the user's toggle itself before applying dependent updates
    maps = put_in(template[:maps], [map_name, :data, field], value)
    rules = template[:rules]

    # Find rules whose condition matches the toggled map, field, and value
    matching_rules =
      Enum.filter(rules, fn rule ->
        rule[:condition][:map] == map_name &&
          rule[:condition][:field] == field &&
          rule[:condition][:value] == value
      end)

    # Apply the actions of every matching rule
    updated_maps =
      Enum.reduce(matching_rules, maps, fn rule, updated_maps ->
        Enum.reduce(rule[:actions], updated_maps, fn action, acc_maps ->
          apply_action(acc_maps, action)
        end)
      end)

    # Return the whole template so that calls can be chained
    %{template | maps: updated_maps}
  end

  defp apply_action(maps, %{map: map, field: field, value: value, operation: :increment}) do
    update_in(maps[map][:data][field], &(&1 + value))
  end

  defp apply_action(maps, %{map: map, field: field, value: value, operation: :reset}) do
    put_in(maps[map][:data][field], value)
  end

  defp apply_action(maps, %{map: map, field: field, value: value}) do
    put_in(maps[map][:data][field], value)
  end
end

3. Example Usage

You can use the DynamicProcessor module to process user actions and update the maps dynamically.

defmodule FormApp.ExampleUsage do
  alias FormApp.DynamicProcessor

  def run_example do
    template = %{
      maps: %{
        map0: %{data: %{true_sum: 0}},
        map1: %{data: %{true_count: 0, field: false}},
        map2: %{data: %{true_count: 0, field: false, visible: false}}
      },
      rules: [
        %{
          condition: %{map: :map1, field: :field, value: true},
          actions: [
            %{map: :map2, field: :visible, value: true},
            %{map: :map1, field: :true_count, value: 1},
            %{map: :map0, field: :true_sum, value: 1, operation: :increment}
          ]
        },
        %{
          condition: %{map: :map2, field: :field, value: true},
          actions: [
            %{map: :map2, field: :true_count, value: 1},
            %{map: :map0, field: :true_sum, value: 1, operation: :increment}
          ]
        },
        %{
          condition: %{map: :map1, field: :field, value: false},
          actions: [
            %{map: :map2, field: :visible, value: false},
            %{map: :map2, field: :field, value: false},
            %{map: :map2, field: :true_count, value: 0},
            %{map: :map1, field: :true_count, value: 0},
            %{map: :map0, field: :true_sum, value: 0, operation: :reset}
          ]
        }
      ]
    }

    # User toggles map1.data.field to true
    template = DynamicProcessor.process_template(template, :map1, :field, true)
    IO.inspect(template.maps, label: "After map1.field = true")

    # User toggles map2.data.field to true
    template = DynamicProcessor.process_template(template, :map2, :field, true)
    IO.inspect(template.maps, label: "After map2.field = true")

    # User toggles map1.data.field to false
    template = DynamicProcessor.process_template(template, :map1, :field, false)
    IO.inspect(template.maps, label: "After map1.field = false")
  end
end

4. Output

Running the FormApp.ExampleUsage.run_example/0 function will simulate the user actions and print the updated maps after each step.
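For reference, with the processor adjusted as above, the three inspects should line up with the sequence in the original question, roughly:

# After map1.field = true:  map0.true_sum == 1, map1.true_count == 1, map2.visible == true
# After map2.field = true:  map0.true_sum == 2, map2.true_count == 1
# After map1.field = false: all counts back to 0, map2.visible and map2.field back to false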

Explanation

  1. Template: The template defines the initial state of the maps and the rules for updating them.
  2. DynamicProcessor: This module processes the template and applies the rules based on user actions.
  3. Example Usage: Demonstrates how to use the processor to handle dynamic updates.

This approach allows you to define user-supplied templates and dynamically process updates based on conditions and actions.

Yeah, you’re pretty much on your own here. You can detect changes within maps etc. and apply updates to maps with Ash etc., but there is nothing built-in that will do that automagically.

2 Likes

You should try to do this “declaratively” by computing the output as a pure function of the input maps. Every time any value in the input(s) changes you run it all through the function to get the new output.

How and where you actually implement this depends a lot on what you’re doing, but that’s the general idea. We might be able to provide better advice if you share more details about what you’re trying to do.
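As a rough sketch of that idea using the maps from the question (the module and field names here are made up): store only the raw toggles and derive everything else, so there is nothing to keep in sync by hand.

defmodule DerivedState do
  # inputs: the two user-controlled toggles; everything else is computed from them
  def compute(%{field1: f1, field2: f2}) do
    # map2 only "counts" while map1's field is on, mirroring the example above
    effective_f2 = f1 and f2

    %{
      map0: %{data: %{true_sum: count_true([f1, effective_f2])}},
      map1: %{data: %{field: f1, true_count: count_true([f1])}},
      map2: %{data: %{field: effective_f2, visible: f1, true_count: count_true([effective_f2])}}
    }
  end

  defp count_true(values), do: Enum.count(values, & &1)
end

DerivedState.compute(%{field1: true, field2: false})
#=> %{map0: %{data: %{true_sum: 1}}, map1: %{data: %{field: true, true_count: 1}}, ...}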

Agreed, in Ash we have calculations for this purpose :smiley:
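Roughly, a calculation is declared on the resource and computed when it is loaded, rather than being stored and patched whenever a dependency changes. A quick sketch (the attribute names here are hypothetical):

attributes do
  attribute :field, :boolean, default: false
  attribute :parent_field, :boolean, default: false
end

calculations do
  # recomputed whenever it is loaded, based on the other attributes
  calculate :visible, :boolean, expr(field and parent_field)
end

# loaded on demand, e.g. Ash.load!(record, :visible)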

I’m trying to have users create dynamic forms with conditions to show/hide fields and update values on other fields. Fields can depend on multiple other fields that also depend on other fields.

I thought there was a dependency-solving library for Elixir or Ash, but those may be for static structs and forms. If it’s possible, I would take user-supplied forms and compile them to Elixir modules or generate Ash projects.

There is a lot of context here that makes it kind of a vague question TBH. Ash can solve for dynamic calculations (i.e. not known at compile time) in some ways, but it’s not necessarily a sort of “arbitrary dynamic data dependency solver”; it’s targeted at a relatively specific use case. I don’t think you’d find anything that directly solves this problem, unless you reach for some very generic graph-based tools, but even then you’ll be building this kind of logic on top of them.

EDIT: not so vague w/ the description of creating dynamic forms w/ hide/show conditions. I think I’d likely do this with relatively straightforward hand-written code as a processor of some kind, not necessarily a dependency solver. i.e. when building the data that renders the form, or when rendering, you check for its conditions elsewhere in the data structure etc.

1 Like

Yeah, this is not trivial. I’m sure you are going to have to essentially “whiteboard” out the exact implementation here, e.g. in what ways fields can depend on each other.

But once you have a good idea of that, what you want to do is write a function compute_form(...) which takes in the current form state (and schema, or whatever) and outputs a “materialized” form structure. The output will be a pure function of the inputs (easy in Elixir).

Then you can render the output form structure into actual Phoenix forms using components. If this is updated in real-time I assume you are using LiveView, so you would essentially assign(:form_structure, ...) and then LiveView will re-render the whole form.

Then when an event comes in you call your compute_form(...) and re-assign it to trigger the re-render.
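A minimal sketch of that loop (compute_form/2 and the assign names are placeholders):

def handle_event("field-changed", %{"field" => field, "value" => value}, socket) do
  # update the raw form state, then derive the materialized structure from it
  form_state = Map.put(socket.assigns.form_state, field, value)
  form_structure = compute_form(socket.assigns.form_schema, form_state)

  {:noreply, assign(socket, form_state: form_state, form_structure: form_structure)}
end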

Not that I know what I’m talking about WRT Ash, but these are in the “model” layer, right? Like, they exist on Ecto schemas or similar?

What would be more relevant for this problem is a similar feature but in the “UI” layer, like LiveView. I was actually advocating for something exactly like this recently in another thread replying to Jose. Something like a React useMemo() for LiveView which can re-compute an assign based on some dependencies (other assigns). This is a good example of where functionality like that is useful, as the implementation is similar but it allows you to centralize the “update” logic instead of fragmenting the calls into a dozen event handlers (and possibly forgetting one).

Sort of. They can be used for rendering etc, but maybe not best in this case.

This is a really complex topic. I think that patterns like that can easily turn into a sort of rat’s nest of conflated UI tooling and business logic, where you get into this wacky land of having to try to game a rendering engine to implement business rules etc. I agree that there may be room for a certain kind of higher-level tooling over assigns, but I would much rather (personally) have a pure function of state → rendered page for this kind of thing.

i.e.

def form_data(form_config, form_data) do
  Enum.flat_map(form_config.fields, fn field ->
    if conditions_met?(field, form_data) do
      [field_representation(field, form_data)]
    else
      []
    end
  end)
end

This is something I can test and reason about without needing to tie my application logic to any given rendering engine. Things like useState, useMemo and useEffect in React are simultaneously really useful and also make life really complicated IMO.

1 Like

It’s funny because I saw a similar response from Jose and some others the last time I posted about this too. I feel like there might be a “React PTSD” factor here where people see me cite React and instantly recoil in horror. And, I mean, I get it.

But what I am actually proposing here is almost exactly the same as your example (which you might note is in turn the same as my example two posts up!), which is that you have a pure function of form_data(some_assigns) -> more_assigns.

The problem is that, when you already have said function, you have to actually call it when some_assigns changes. This could be one assign, or several (in your example it’s form_config and form_data).

So think about what this might actually look like in a complex app. These inputs could be modified by:

  • A PubSub event which comes in from the DB/Context layer
  • An event handler in your LiveView
  • A message sent by another component

And so on. So the form_config might be updated in the database, and you want to refresh. Or the form_data could be updated by one of several events, and is probably split across several forms, etc.

What this means is that you have to remember to call form_data(...) |> assign(...) every single time you update any of its dependencies. I have built apps with on the order of a dozen of these “update functions” with many dependencies. At scale it quickly becomes messy. What I am advocating for is this:

socket
|> memo(:form_output, [:form_data, :form_config], &form_data/2)

And then you can assign to form_data and form_config at will, without remembering to re-compute the assigns that depend on them and scattering those calls everywhere (where they promptly get lost and cause bugs).

But importantly, form_data is still pure! In fact, the only difference between doing it here and in your template is that the latter would obliterate your performance. Hence the name: memo(ize).

1 Like

Ultimately we can factor React out of the equation. Better not to worry about psychological reasons others disagree with us as it’s ultimately reductive in the debate and could easily be flipped around :smile:

I agree with your problem statement, but I disagree with the proposed solution. I’m open to it, but I think you could get exactly the same stuff without having to add complexity to the internal model of LV.

# assumes :recompute and :callbacks start out assigned as %{} (e.g. in mount),
# since update/3 requires the key to already exist
defp magic_assign(socket, key, dependencies, function) do
  socket
  |> assign(key, function.(Map.take(socket.assigns, dependencies)))
  |> update(:recompute, fn recompute ->
    # map each dependency to the list of keys that must be recomputed when it changes
    Enum.reduce(dependencies, recompute, fn dependency, recompute ->
      Map.update(recompute, dependency, [key], &Enum.uniq([key | &1]))
    end)
  end)
  |> update(:callbacks, fn callbacks ->
    Map.put(callbacks, key, fn assigns ->
      function.(Map.take(assigns, dependencies))
    end)
  end)
  |> recompute(key)
end

defp recompute(socket, key) do
  socket.assigns[:recompute][key]
  |> List.wrap()
  |> Enum.reduce(socket, fn dependent, socket ->
    socket
    |> assign(dependent, socket.assigns.callbacks[dependent].(socket.assigns))
    |> recompute(dependent)
  end)
end

and now I can

|> magic_assign(:form_output, [:form_data, :form_config], &form_data/1)

I just threw the above together (like literally in this forum text box); it’s untested and bug-riddled I’m sure. It also has two major flaws:

  1. it will not prevent infinite recomputation cycles (i.e. cycles in the dependency graph)
  2. if multiple dependencies cause one thing to recompute, it will recompute the targets multiple times.

Both can be solved by using something like LibGraph.
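For example, a rough sketch assuming libgraph’s Graph.new/0, Graph.add_edge/3, and Graph.topsort/1 (which returns false for a cyclic graph):

deps = %{form_output: [:form_data, :form_config], summary: [:form_output]}

graph =
  Enum.reduce(deps, Graph.new(), fn {key, dependencies}, graph ->
    # edge from each dependency to its dependent
    Enum.reduce(dependencies, graph, &Graph.add_edge(&2, &1, key))
  end)

case Graph.topsort(graph) do
  false ->
    # a cycle in the dependency graph, so refuse to register the reactive assigns
    {:error, :cyclic_dependencies}

  order ->
    # recompute each derived assign exactly once, in dependency order
    {:ok, Enum.filter(order, &Map.has_key?(deps, &1))}
end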

My point isn’t that functional reactive programming, or declaratively modeling data dependencies, is bad. My point is that I don’t want my renderer to be in charge of that, primarily because I just don’t see a compelling reason that it should be. It can all be done in a way that is fully unrelated to LV itself, could be tested with basic maps, etc. And it also doesn’t stop me from a whole host of custom optimizations (like bypassing magic_assign when stuff gets weird, because stuff always gets weird).

3 Likes

Mm, what I meant is that I worry my use of React as an example of the API I am looking for is a mistake because it could mislead the reader into thinking React is an example of the implementation I’m looking for. I say this because…

No, your proposal here is almost exactly what I had in mind! I never mentioned any changes to the runtime, and in past comments (which obviously I do not expect you to have seen) I mentioned I was interested in implementing this as a library for myself. Which I will get around to, eventually.

But there is one thing: I would really prefer to maintain the assign() syntax for updating the dependencies of the assigns. I’m not sure the level of degenerate macro-hacking I will have to engage in to override Phoenix’s assign(), though.

As for the flaws you mention, I had them both in mind. Cycle detection wouldn’t be too hard I think. The extra re-computes are unfortunate for performance but I don’t think it would matter very much in practice unless you are updating many dependencies within the same update cycle (which is unlikely in practice, probably?).

Either way, I think we are in near total agreement :slight_smile:

P.S. I cannot imagine how much experience you need to hold that many reduce/update invocations in your head, I must have read that at least 10 times lol

I’m just a freak of nature :stuck_out_tongue_closed_eyes:

Honestly I wouldn’t personally worry too much about this. I think it’s much better to have something like assign_reactive and assign as separate concepts entirely. It also helps with context switching when you have to work on something where it isn’t available.

The graph solver would solve the duplicate recomputations in addition to the cycle detection, because you would process nodes in topological order: start at the nodes with no remaining dependencies and walk your way through to the nodes nothing else depends on. So both of those flaws are easily addressed.

Noice :smile:

1 Like

I could live with it, but I would want to use assign_reactive globally in that case because any usage of the normal assign could end up causing bugs. For example, if I add a normal assign, and then down the road add a dependency to that assign, I would have to go back and “fix” all assign() invocations, which would be no good (very easy to miss one).

Overriding assign() here is the cognitive equivalent of a “zero-cost abstraction” - if you’re not using the magic_assign/memo functionality, the behavior is identical to normal assign().

If there’s no good way to override with a macro (I haven’t thought about it enough yet) then I’ll probably end up with a s/assign/rassign or something.

Just add a credo rule to never use assign and you’re good to go :laughing:

1 Like

@Jskalc mentioned your post in a thread about signals.

@MarthinL would observables or signals be helpful for dynamic recursive trees?

At the outset, that would depend heavily on the use case, i.e. the problem you’re trying to solve with dynamic recursive trees.

From the little I was able to surmise about what observables or signals refer to in the context of the thread you linked to, the domain in which they are discussed is solidly UX related. In general terms I have always favoured approaches that allow relevant code to respond to actual things that happen, as they happen, without dropping the ball in the hope that the change gets picked up correctly off the database or message queue.

Assuming signals and observables do what the names suggest, I’d say they shouldn’t be new tools even for the UX domain, and they would be very helpful all around if they are only now coming into that domain.

That said, the functionality we seem to be talking about isn’t any more or less applicable to managing dynamic recursive trees than it is to a UX related problem.

To make my main point I need to differentiate between “dynamic recursive trees” as a tool in the UX developer’s toolbox and “an indefinitely recursive dataset” as a primary or significant central concern of an app or solution. The point being that the former offers little to no help in implementing the latter and could even create more problems than it addresses.

As I ventured deeper into simplifying the problem by (finding and) using separation of concerns, I got the best results from keeping as much of the recursion and dynamic nesting that appears in the data well out of reach of children. That is to say, do not map parent-child relationships in the data onto parent-child relationships in HTML. There are some HTML primitives like div and span for which nesting semantics are well defined, sure, but if you nest them indefinitely it becomes genuinely complicated to manage regardless of the toolset you use. There are additional clues, in the semantic HTML tags, that the direction of development in HTML leans away from indefinite nesting. Don’t go there, is my advice. Instead, find a way to flatten your dynamic recursive artefacts at the UX level, and handle the essence of those long flat lists of elements being related to each other in some hierarchical manner as a separate concern altogether.

By my reckoning, based on my own use-cases and what I’ve seen, this split is best done deep within the server itself, not at the level of the UX code or framework at all. In the Elixir/Phoenix/LiveView ecosystem it means that all the observables and signals you need are already available to the server in the form of the PubSub mechanism. By the time a user experiences the recursive data and can manipulate it in any way, the bulk of what goes to the browser maps directly onto stock-standard non-recursive HTML artefacts. Those artefacts relay all the actions taken on them which may be relevant to how they interrelate with and impact other elements, on the same or different levels of recursion, back to the server, which can deal with that natively at the business logic and database level.
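Concretely, that wiring can be as small as the stock Phoenix.PubSub calls (the module, topic, and message shape below are made up for illustration):

# in the LiveView that renders the flattened tree
def mount(%{"form_id" => form_id}, _session, socket) do
  if connected?(socket) do
    Phoenix.PubSub.subscribe(MyApp.PubSub, "form:" <> form_id)
  end

  {:ok, assign(socket, form_id: form_id)}
end

# whatever persists a change broadcasts it, and every subscribed view updates itself
def handle_info({:form_updated, form}, socket) do
  {:noreply, assign(socket, :form, form)}
end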

It’s never been wise to conflate UX and core logic, a.k.a. putting business rules in user interface code. The same holds true with today’s UX frameworks and approaches and exponentially so when indefinitely recursive datasets are involved.

In summary, unless your dynamic recursive trees are trivial and are manifested purely for UX purposes, I would recommend you steer clear of trying to use UX tools (anything related to or running directly in the UA/browser including LiveView, React and VueJS/X) to address the core logic problems associated with large indefinite recursive datasets.

The beauty of the Elixir/Phoenix/LiveView ecosystem is that it has allowed us to write almost all the code we need in one language and in the same development environment, which obscures the boundaries between the different types of code and concerns present in the solution. LiveView is just one of those, and perhaps React is a rich toolset as well, but choosing that doesn’t help with the non-UX parts of the solution.

In trying to get a grip on what this thread is really about, I find it most pertinent to figure out what class of problems you’d want to apply something like that to. “When a value in another map changes…” suggests that you consider at least some of the maps in question to be primary sources of truth, with the further implication that the master copy of this map resides in server memory. I’d find that a highly unusual scenario I wouldn’t expect in real life. Normally maps are in-memory representations of persisted data, and any changes to the data would need to be made to the persisted master copy first and foremost. Usually that would be the point where the change is “detected” from the user actions and/or input-stream processing that results in the change being made. The database / persistent storage update mechanism would be responsible for cascading the dependent changes, potentially even using PubSub, and PubSub would also take care of propagating changes to data already in memory.

Could it be that in the context of this line of questioning the notion of a map or nested maps is being used as analogous to objects in OOP? It would in my opinion be a mistake to conflate those two concepts, but since this is on the Ash forum it is possible that one of the underlying concepts of Ash’s approach is to treat a persisted schema and a map in memory transparently as the same thing declared in the model. I keep saying I don’t know enough about Ash to even have an opinion on it, and it’s a giant hill to climb to gain that understanding. From a pure Elixir and Phoenix perspective, though, maps contain transient copies of underlying data, loaded as and when required and forgotten as soon as possible. In that world it makes no sense to me when you’d have the need to trigger on changes somewhere in a nested collection of maps.

I’m always happy to learn and correct my deficient understanding. In this case it means figuring out where you see the capability you ask about coming into play.

This thread precipitated my deciding that I did not know nearly enough about reactive frontend programming, and I spent a lot of time afterwards studying the architecture of a number of frontend frameworks to get a better grasp. I don’t think I will have solid opinions on this until I spend enough time writing my own framework(s), but my views have changed a bit since I last commented here, so I will share some “WIP opinions” if you like.

I am very skeptical of signals as a paradigm thus far. React’s mental model is that of an idempotent component which is re-computed based on the props (assigns for us) passed into it, and optionally some component-scoped state (also assigns, for us). When either of those things changes, the component (and therefore its children) are re-rendered and diffed.

It is easy to forget this if you started by writing JSX, but in React there is no template. Components are code which return a DOM representation, not HTML which contains code. There is a substantial difference between these things, and the former is vastly superior to the latter.

LiveView components, by the way, are actually the same thing. Here, look:

def foo(assigns) do
  if assigns[:hello], do: ~H"<div>Hello</div>", else: ~H"<div>Goodbye</div>"
end

In a templating language you can’t do that. For simple cases you might think you can do that, but it always breaks down.

Anyway, back to signals. Signals are saying that, instead of an idempotent component which is re-rendered, we can create a great big “soup” of dynamic “parts” and render them incrementally in response to changes. The idea being that we will get better performance by computing an explicit dependency graph of the entire application and then being very careful about what we update.

I am skeptical of this because it violates the clean mental model of a component as a function. All of a sudden you have to think of a component as many pieces which can be updated independently. React components are not pure functions (idempotence is not purity), but the mental model is close enough to be helpful.

Put another way, signals give me global state vibes, and I don’t like it.

BTW, LiveComponents are not properly idempotent because of the mount() function. Looking back at my own code I’ve noticed that I tend towards avoiding it and performing idempotent checks in update/2 using changed?/2 and similar. I did not understand at the time why I was doing this, but now it is becoming clear.
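That pattern looks roughly like this (summarize/1 and the assign names are placeholders, not from the thread):

defmodule MyAppWeb.ItemComponent do
  use Phoenix.LiveComponent

  # no mount/1: all work happens in update/2, guarded so it stays idempotent
  def update(assigns, socket) do
    socket = assign(socket, assigns)

    socket =
      if changed?(socket, :item) do
        # only recompute the derived assign when :item actually changed
        assign(socket, :summary, summarize(socket.assigns.item))
      else
        socket
      end

    {:ok, socket}
  end

  def render(assigns) do
    ~H"<div><%= @summary %></div>"
  end

  defp summarize(item), do: to_string(item)
end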