This discussion started in the Does Hologram support two way data binding? thread and evolved into a deeper conversation about reactivity patterns in Hologram. We’re exploring questions like: Should Hologram support computed properties? How do signals fit with functional programming? What’s the right abstraction for derived values?
An example of a signals-based framework is Datastar. In Datastar, if I write:

```html
<input type="text" data-bind="name" />
```

a signal named `name` is created. Typing in the input updates the value of `name`, and updating the value of `name` from outside changes the value of the input element.
More on this here - Reactive Signals Guide
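To make the pattern concrete, here is a toy sketch of what a signal is underneath — this is not Datastar's actual implementation, just the general shape of the primitive (a value holder that notifies subscribers on change):

```javascript
// Toy signal: holds a value and notifies subscribers when it changes.
// (Illustrative only - real frameworks add batching, effects, etc.)
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get: () => value,
    set: (next) => {
      value = next;
      subscribers.forEach((fn) => fn(value)); // push the change out
    },
    subscribe: (fn) => subscribers.add(fn),
  };
}

// Two-way binding in miniature: the input writes to the signal,
// and the signal writes back to any bound view.
const name = createSignal("");
let view = "";
name.subscribe((v) => { view = v; }); // signal -> view
name.set("Ada");                      // e.g. triggered by typing
console.log(view); // "Ada"
```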
Interesting - not to go too far off track here, but I found the Solid.js reactivity model to be very intuitive and appealing at first glance. @LostKobrakai probably has lots of knowledge here :)
How relevant is this to Hologram in terms of data binding and other client-side aspects? Could something like it be used?
Signals are compelling, but they’re fundamentally a JavaScript reactivity primitive. Hologram transpiles Elixir to JS, preserving Elixir’s functional semantics - immutable data structures, explicit state updates via put_state, etc. Adopting signals would mean either reimplementing them in Elixir semantics (complex) or breaking from Elixir patterns (defeats the purpose). The current explicit approach aligns better with Elixir’s philosophy.
That said, a middle ground with similar purpose is planned - computed values/properties. This would give you automatic recalculation of derived state when dependencies change, similar to Vue’s computed properties, while maintaining the explicit, declarative, functional style that fits naturally with Elixir.
I would only suggest that you be wary of half-measures. If you will recall I also once suggested adding computed assigns to LiveView, but having since educated myself further I now realize that this approach does not go nearly far enough.
What is interesting about post-hooks React is that they finally came up with a way to structure entire components declaratively. Even pre-hooks React did not live up to this goal, just as LiveView fails to live up to it today.
The difference between a signals approach and React’s approach is, philosophically, somewhat similar to the template debate. They are both declarative, but I think that signals take power away from the programmer rather than grant it. The difference here is much more subtle and involved than the template case, though.
Computed properties are, in isolation, the same thing as signals. You are forced to model each little piece of the code in terms of how it responds to changes in other parts. There is no forest; you are only concerned with the trees.
React’s model is much more interesting: you write the components declaratively as a whole. You are concerned with how a component responds to changes in its state, not with each individual fragment. This matters because the code within a component is meant to be deeply interconnected and cannot and should not be pulled apart.
I worry the “just add computed properties” approach will fundamentally lead to an imperative style with little bits of declarative code thrown in there at random, which is no solution at all.
At least signals frameworks commit to using them for everything, even if I think that solution is ugly and inexpressive.
When I said “middle ground” I didn’t mean “half-measure” - that was unfortunate wording on my part. I’m talking about reaching similar goals - automatic recalculation, performance optimization - but in a way that aligns with functional programming principles. I think computed properties actually fit naturally into the functional paradigm: they’re pure functions that transform immutable state into derived values. That’s fundamentally functional.
I know you’re not a fan of templates, but computed properties actually strengthen the template-as-presentation-layer approach by letting you move business logic out of templates into small, focused, testable functions.
To be clear: when I talk about computed/derived properties, I mean values like full_name, valid?, or formatted_price that derive from existing state - not template fragments. (Partials like <my_partial(param_1, @param2) /> would be a separate feature for generating template fragments.) These are tools in your toolbelt, not fundamental architectural commitments.
I think it’s worth considering the concrete benefits they provide - they solve real problems:
- Performance: Automatic memoization of expensive calculations
- Consistency: Derived state that can’t get out of sync when multiple values depend on each other
- Readability: Clear separation between data and derived values
- Testability: Small, pure functions that can be tested in isolation
Regarding “there is no forest; you are only concerned with the trees” - I understand your concern about losing the holistic view. But I think the key difference is that computed properties in Hologram would be values derived from state, not a reactive programming model that forces you to think in terms of dependencies and updates. You still write your component declaratively as a whole - the state updates explicitly through actions, and computed properties are just transformations of that state. The component’s logic remains interconnected and whole. Computed properties are simply a way to extract and name derived values rather than calculating them inline in the template.
Additionally, when components are small and properly isolated with focused responsibilities, it becomes easier to keep the entire component in your head at once - the “trees” are manageable and the “forest” emerges from how components compose together.
The key point: computed properties in Hologram would be optional and complementary to the existing explicit approach, not a replacement. They’re a tool for specific use cases where the benefits are clear. I want to be pragmatic about giving developers useful tools while maintaining a coherent architecture.
Hi @bartblast !
I agree with @garrison
The React (and Elm) paradigm of ui = fn(state) IMHO fits most naturally with a functional paradigm and is easier to reason about. Efficiently keeping the ui in sync with the state is an implementation detail.
I have not tried the more recent frontend frameworks that use Signals, but I did use KnockoutJS and EmberJS many years ago which used computed properties–something similar to signals. I found them harder to reason about because you have all these values with interdependencies scattered everywhere. It becomes much more convoluted.
I would raise the question: what is the benefit of having computed properties over a ui = fn(state) functional paradigm?
IMO, computed properties are really nice and I use them liberally in svelte with $derived. It’s nice to keep logic out of the template, assign it to a variable and insert that variable in the template {@something}. I think they also memoize automatically.
Hi @venkatd, thanks for the feedback!
I completely understand your wariness based on KnockoutJS/EmberJS - those frameworks created convoluted webs of interdependent computations that were hard to trace.
However, I think those issues stemmed from their mutation-based reactive model: mutable observables, dependency graphs spanning multiple files/components, scattered observers with side effects, and hard to trace execution flow when changes cascade through dependencies.
Hologram’s approach is fundamentally different:
- Immutable state with explicit updates
- Dependency graphs scoped to component state and props only - local and bounded
- Pure functions, no scattered side effects
- Clear data flow: action → state update → re-render
Yes, there would be a dependency graph for computed properties, but it’s deterministic, local to the component, and traceable by reading the code. No hidden global reactivity. This explicit nature would even enable a Time Travel Debugger to track all actions, commands, and the computed properties graph in real-time.
Computed properties would be named, memoized pure functions that transform state. They’re still part of ui = fn(state) - the function is just composed of smaller, named, cached transformations.
Why memoization matters in ui = fn(state):
The key is that some form of memoization is needed anyway. Without it, you face: repeated function calls with identical inputs, expensive recalculations on every render, composed calculations where all functions in the chain recalculate even when inputs haven’t changed. Your options are: recalculate everything (expensive), manually cache (boilerplate), or automatic memoization. Computed properties provide the latter while keeping the functional paradigm intact, plus they help move business logic out of templates into named, testable functions.
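The three options above can be sketched in plain JavaScript. This is a hand-rolled helper, not any framework's API - it shows what "manual caching" boilerplate looks like, and what automatic memoization would do for you:

```javascript
// Memoize a pure function by its dependency values: recompute only
// when an input actually changes (shallow Object.is check on deps).
function memoizeByDeps(fn) {
  let lastDeps = null;
  let lastResult;
  return (...deps) => {
    if (
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]))
    ) {
      return lastResult; // cache hit: skip the computation
    }
    lastDeps = deps;
    lastResult = fn(...deps);
    return lastResult;
  };
}

let calls = 0;
const fullName = memoizeByDeps((first, last) => {
  calls += 1; // count how often the real computation runs
  return `${first} ${last}`;
});

fullName("Bruce", "Wayne"); // computes
fullName("Bruce", "Wayne"); // identical deps: cache hit
console.log(calls); // 1
```

Writing this wrapper by hand for every derived value is exactly the boilerplate a `derived`-style construct would eliminate.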
Computed properties would be completely optional - just a tool in your toolbelt for cases where they provide clear value.
Does this address your concern? I know the term “computed properties” carries baggage from various JS frameworks, but I think the fundamental paradigm difference (immutable, functional, local) makes them a different beast entirely. Would love to hear your thoughts on solving these memoization challenges, or if different terminology would help clarify the distinction.
I haven’t replied again out of concern for this specifically. I want to make sure I’ve seen the API you have in mind before I give further thoughts. Plus it will probably be easier to explain what I have in mind if I can demonstrate it in terms of what you already have.
BTW, your dedication to responding to feedback on here is amazing. Don’t think it goes unnoticed!
Hi @bartblast thanks for the clarification
Perhaps because the term “computed properties” means many things to many people which is what is causing the confusion.
If I understand right, you are still following the ui = fn(state) pattern, and “computed properties” really just means memoizing functions so re-renders are efficient? If that is correct, I feel a lot better about computed properties.
And btw, I think “memoization” might be a clearer term for the mental model of what is going on here.
As @garrison said, I think some pseudocode would give us a good idea of what you mean.
Thanks!
Honestly, responding to feedback like this is a great way for me to crystallize what I have in my head and discover new ideas I hadn’t considered. These discussions really help shape Hologram’s direction.
Alright, here are some ideas to show what I have in mind! @garrison @venkatd
The core idea: a simple, component-scoped DSL that defines how derived/computed/memoized values (terminology is up for discussion) are calculated. Hologram automatically updates these values when any dependencies change, and the values are injected into template vars accessible with the @ syntax, e.g. {@my_derived_value}.
The dependencies for derived values can be state, props, and other derived values, so a dependency graph is built underneath and topological sort is applied to manage the recalculation order.
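The graph-plus-topological-sort idea can be sketched like this. This is assumed semantics, not Hologram's implementation - each derived value declares its deps, and a post-order traversal guarantees every value is computed after the values it reads:

```javascript
// Toy derived-value graph: full_name depends on state, greeting
// depends on another derived value.
const derived = {
  full_name: {
    deps: ["first_name", "last_name"],
    fn: (v) => `${v.first_name} ${v.last_name}`,
  },
  greeting: {
    deps: ["full_name"],
    fn: (v) => `Hello, ${v.full_name}!`,
  },
};

// Topological order via post-order DFS: dependencies are pushed
// before the values that depend on them. State/props are leaves.
function topoOrder(graph) {
  const order = [];
  const seen = new Set();
  const visit = (name) => {
    if (seen.has(name) || !graph[name]) return; // leaf or already done
    seen.add(name);
    graph[name].deps.forEach(visit);
    order.push(name);
  };
  Object.keys(graph).forEach(visit);
  return order;
}

// Evaluate all derived values on top of the current state.
function compute(state) {
  const vars = { ...state };
  for (const name of topoOrder(derived)) {
    vars[name] = derived[name].fn(vars);
  }
  return vars;
}

const vars = compute({ first_name: "Bruce", last_name: "Wayne" });
console.log(vars.greeting); // "Hello, Bruce Wayne!"
```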
The DSL can be implemented in different ways (full_name probably isn’t the best real-world use case, but it’s easy to understand):
Option 1: Macro with explicit deps
```elixir
derived :full_name, [:first_name, :last_name] do
  "#{first_name} #{last_name}"
end
```
Option 2: Attribute annotation
```elixir
@derived deps: [:first_name, :last_name]
def full_name(vars) do
  "#{vars.first_name} #{vars.last_name}"
end
```
Option 3: Function-like syntax
```elixir
defderived full_name(first_name, last_name) do
  "#{first_name} #{last_name}"
end
```
…and other permutations and hybrids of these approaches.
Key characteristics:
- Pure functions transforming immutable state (no side effects)
- Component-scoped dependencies (local and bounded)
- Automatically memoized (recalculated only when dependencies change)
- Deterministic dependency graph (fully traceable)
- Still fits the `ui = fn(state)` paradigm
This is fundamentally different from the mutable observer patterns in KnockoutJS/EmberJS that created convoluted webs of interdependencies. There’s no global reactivity, no scattered side effects, no mutation-based cascading updates.
Important clarification: This is purely for deriving/computing values without any component struct modifications. For reactive state updates, effects/watchers would be a separate concept – but that’s a different discussion.
Would love to hear which syntax option resonates with you, and what other ideas you might have!
I am going to try to briefly describe where modern React came from. To do this, we have to go back in time to when React had class components.
Early React indeed had this idea that UI should be a pure function of state. The render() method was supposed to take the state/props and produce a pure HTML representation from them. This is an enormous improvement over trying to imperatively update the DOM by writing delta code for each state transition (like people used to do with jquery). React was better.
However, over time people began to build more and more complex apps with React, and as a result React’s engine slowly became a UI runtime. That is, it grew past the “V(iew) in MVC” and became the source of truth for the execution trace of the application itself.
The problem is that, while the render() function was declarative, the rest of the component was not. Declarative programming is an extraordinarily powerful tool for preventing bugs, but class components failed to take advantage of this. When you break apart the init/lifecycle/render codepaths you violate the declarative principle of tearing the intermediate state down and rebuilding it.
The hard-to-swallow-truth here is that pure functions were actually the problem, because by making the render function pure you are taking away its ability to initialize state. The path forward, ironically, is to pare the render function back to being idempotent. This is what React’s hooks did.
A function component executes from top to bottom on each render, declaratively. React then gives you tools (hooks) to keep track of and memoize state as you choose so that you can optimize your program. React keeps the tree in sync so that local state “lines up” on each execution, and the hooks are designed to be composed so that they line up even if factored out into libraries.
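The "lines up on each execution" mechanism can be shown with a toy model - this is not React's real internals, just the core trick: hook state lives in slots, and a cursor pairs each call with its slot purely by call position, which is why call order must never change:

```javascript
// Toy hook storage: one slot array per component, a cursor that
// advances on each hook call, rewound before every render.
const slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;                         // this call's slot index
  if (slots.length <= i) slots.push(initial); // first render: init slot
  const setState = (next) => { slots[i] = next; };
  return [slots[i], setState];
}

function render(component) {
  cursor = 0; // rewind so calls line up with the same slots
  return component();
}

function Counter() {
  const [count, setCount] = useState(0);
  const [label] = useState("clicks");
  if (count === 0) setCount(1); // simulate an update for the demo
  return `${count} ${label}`;
}

render(Counter);              // first render initializes both slots
console.log(render(Counter)); // "1 clicks"
```

If `Counter` skipped or reordered a `useState` call between renders, the cursor would hand it the wrong slot - which is exactly what the Rules of Hooks exist to prevent.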
This solution is very good, because it keeps the programmer in control the whole way through.
There are two things that I have fundamentally come to accept (and I did not start out wanting to accept these things):
- The React team of the time was full of incredibly smart people who were extremely dedicated to solving this problem properly
- It took them over half a decade, operating well-funded and at enormous scale, to come up with a workable solution
Given this, if you are going to deviate from React’s model you should do so with enormous caution. You must be aware of the endless graveyard of half-baked and failed solutions to these problems.
I will now give you my opinion: I think you are trying to impose structure where there needn’t be any. It is my decision whether to memoize or recompute a property, not yours. It is my decision whether that property should exist within a given render at all.
The purpose of a UI runtime is to allow me to write Turing-complete code for my application. No matter how fancy your DSL, you are forcing me to statically declare computed properties at compile time and taking away my control over the execution of the program. This is fundamentally inexpressive.
fwiw, I like the svelte approach of making dependency tracking opt-out (using untrack) rather than opt-in.
I think this would be nice:
```elixir
derived :full_name do
  "#{first_name} #{last_name}"
end
```
The derived stuff is starting to look suspiciously like Ash’s calculations. Which is to say, I like it.
I wholeheartedly agree with embracing React’s programming model.
The sense I got, though, is that @bartblast is really talking about memoization of function calls and not computed properties as signals are. What is your stance on the ability to memoize function calls?
It seems like there could be an ergonomic way to handle this with module attributes or a bit of macro syntax sugar.
If there is no memoization available, what would you propose?
Yeah, absolutely, we should be cautious about deviating from proven solutions.
That said, the React team operated under specific constraints: backwards compatibility, evolving from class components, and JavaScript’s dynamic runtime. Hologram leverages Elixir’s purely functional foundation with immutable data and compile-time metaprogramming - different foundations allow for different approaches.
However, I’d like to challenge your argument on two fronts:
Challenge 1: Hooks don’t actually give you complete control and freedom.
Hooks themselves have implicit rules and limitations that constrain how you write code. There are quite a few footguns:
Only Call Hooks at the Top Level
- Never call hooks inside loops, conditions, or nested functions
- Hooks must be called in the same order on every render
- This allows React to correctly preserve hook state between multiple `useState` and `useEffect` calls

Only Call Hooks from React Functions
- Call hooks from React function components
- Call hooks from custom hooks (functions starting with “use”)
- Don’t call hooks from regular JavaScript functions

Hooks Must Be Called Unconditionally
- You cannot conditionally execute a hook based on runtime logic
- Bad: `if (condition) { useMemo(() => {}, []) }`
- Good: `useMemo(() => { if (condition) { /* logic */ } }, [])`
In practice, you kind of have two separate parts in the render() function - the hooks part at the top (with strict structure) and the “template” code at the bottom. I’d say that’s imposing structure already. It’s not that you can do whatever you want. You need to know what you’re doing and follow the rules.
Challenge 2: The compile-time approach is essentially equivalent to useMemo through static analysis.
I’d argue that a derived macro would be essentially equivalent to useMemo in terms of expressiveness. The useMemo dependency arrays are built from props and state. Hologram could determine these dependencies at compile-time through static analysis. For example:
```elixir
derived :my_value do
  {my_prop, my_state.x.y}
end
```
Here Hologram could determine through compile-time dependency analysis that my_value depends on my_prop and my_state.x.y (so if only my_state.x.z changed, this wouldn’t need to recalculate).
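What compile-time path extraction buys at runtime can be sketched like this. The paths (`my_prop`, `my_state.x.y`) are the hypothetical output of static analysis; the runtime check then only compares those exact paths to decide whether a recompute is needed:

```javascript
// Read a nested value by path, e.g. ["my_state", "x", "y"].
const getPath = (obj, path) => path.reduce((acc, key) => acc?.[key], obj);

// Recompute only if one of the statically extracted dependency
// paths actually changed between the old and new vars.
function needsRecompute(prevVars, nextVars, depPaths) {
  return depPaths.some(
    (path) => !Object.is(getPath(prevVars, path), getPath(nextVars, path))
  );
}

// Hypothetical compiler output for the derived value above.
const deps = [["my_prop"], ["my_state", "x", "y"]];
const prev = { my_prop: 1, my_state: { x: { y: 2, z: 3 } } };

// Only my_state.x.z changed -> no recomputation needed.
const next1 = { my_prop: 1, my_state: { x: { y: 2, z: 99 } } };
console.log(needsRecompute(prev, next1, deps)); // false

// my_state.x.y changed -> recompute.
const next2 = { my_prop: 1, my_state: { x: { y: 42, z: 3 } } };
console.log(needsRecompute(prev, next2, deps)); // true
```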
Not having to manually specify dependencies is actually more declarative, in my opinion. It removes boilerplate and prevents bugs through compile-time analysis (like missing dependencies in the dependency array, which is a common React bug).
Regarding expressiveness: You mention that this DSL approach is “fundamentally inexpressive”. I agree that DSLs are generally less expressive than raw functions - but it depends on how close they stay to pure functions. The less constraint a DSL imposes, the less expressiveness you lose. In this case, the derived macro is essentially generating pure functions underneath - you’re writing the same functional code you would in useMemo. The difference is that you gain compile-time static analysis and eliminate the need to execute hook calls on each render. You’re not losing expressiveness; you’re trading runtime flexibility for compile-time guarantees and performance.
I’m confident this compile-time approach handles typical scenarios (probably 99.9%) through appropriate dependency modeling. If there are common patterns you think require runtime flexibility that this wouldn’t cover, I’d be genuinely interested to understand them - but I suspect most real-world use cases map cleanly to this model.
Additional benefit: Compile-time performance
Moving derived values outside the render cycle and determining dependencies at compile-time eliminates runtime overhead. In React, every hook call (useMemo, useState, etc.) executes on each render - even if the memoized computation doesn’t re-run, there’s still overhead from calling the hook and checking dependencies. With compile-time analysis, this overhead disappears entirely, making the code both simpler and more efficient.
I want to jump in here to clarify, since I think there’s some confusion about what I’m proposing.
You mentioned “memoization of function calls” versus “computed properties as signals are” - but I think this creates a false dichotomy. What I’m proposing is actually both, unified into a single pattern.
Here’s the key insight: signals as typically implemented in JavaScript frameworks don’t align with purely functional programming. Traditional signals use mutable observables - you mutate a signal’s value and side effects cascade through the dependency graph. That approach is incompatible with immutable data and pure functions.
However, signals/computed properties solve real problems:
- Automatic dependency tracking - knowing what data a computation depends on
- Memoization with reactive updates - caching results and recalculating only when dependencies change
What I’m proposing achieves these same goals but within functional programming constraints:
- Immutable data and pure functions (not mutable observables)
- Compile-time dependency analysis (not runtime tracking)
- Component-scoped (not potentially global)
- Explicit state updates via actions (not implicit reactivity everywhere)
So yes, I’m proposing reactive computed properties that solve the same problems as signals, but implemented with functional programming principles and compile-time analysis. It’s the same category of solution - just architecturally different because it needs to work within a purely functional paradigm.
An added benefit: this pattern lets you extract business logic from templates into small, focused, pure functions that are independently testable, while keeping templates clean and focused on presentation.
Hope that helps clarify!
I like it - it could provide great DX.
The key question: how much “magic” is acceptable to the Elixir community? Automatic dependency tracking through compile-time analysis removes boilerplate and prevents bugs (like missing deps in arrays). But is this too “revolutionary” for Elixir’s explicit culture?
I personally favor a holistic view and lean toward pragmatism over dogmatism. The philosophical question is: should we sacrifice practical benefits to avoid 0.1% of edge cases - cases that could still be handled through alternative approaches?
Important: these are still pure Elixir functions underneath. The macro simply extracts business logic and enables compile-time analysis. It would generate testable functions like derived_full_name/2 for isolated unit tests, plus perhaps MyComponent.derived(state, props) to test the entire computation graph, returning something like %{full_name: "Bruce Wayne", ...}.
I think this strikes a good balance - you get the convenience of automatic tracking while maintaining testability and the functional foundation Elixir developers expect.