I’d like to introduce a conceptual idea I’ve been developing, which I call the NKTg Law. It’s a physics-inspired framework for modeling how an entity’s motion tendency shifts when its mass varies over time, something that could map well onto Elixir/Erlang architectures, especially distributed systems that need dynamic behaviour (e.g., actor systems with changing load, battle mechanics in games, scalable simulations).
NKTg Law – Conceptual Overview
I define two core values:
NKTg₁ = x × p, where:
x = position or displacement metric,
p = m × v = linear momentum (mass × velocity).
NKTg₂ = (dm/dt) × p, where:
dm/dt = the rate at which the mass changes over time.
Interpretation:
NKTg₁ > 0 → object tends to move away from equilibrium (amplifying motion).
NKTg₁ < 0 → object tends to return toward equilibrium (damping motion).
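To make these concrete, here is a minimal sketch of both quantities as plain Elixir functions (the module and argument names are just illustrative):

```elixir
defmodule NKTg do
  # NKTg₁ = x × p, where p = m × v
  def nktg1(x, m, v), do: x * m * v

  # NKTg₂ = (dm/dt) × p
  def nktg2(dm_dt, m, v), do: dm_dt * m * v
end
```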
The BEAM excels at modeling concurrent, stateful processes: each could track its own x, v, and m, adjusted dynamically in real time.
This could enable some novel features:
Adaptive rate limiting: processes slow down as their “mass” (load) increases (see the sketch after this list).
Self-throttling workflows: nodes in a cluster reduce throughput under heavy resource pressure.
Game mechanics: actors that gain momentum as they “power up” (mass ↑) or slow down when “hurt” (mass ↓).
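As a rough illustration of the rate-limiting idea, a process could treat its own message-queue length as its “mass” and stretch the pause between units of work proportionally. A hypothetical sketch; the linear scaling and do_work/0 are placeholders:

```elixir
defmodule AdaptiveWorker do
  @base_delay_ms 10

  # A heavier "mass" (longer message queue) means a longer pause,
  # so the process naturally throttles itself under load.
  def work_loop do
    {:message_queue_len, mass} = Process.info(self(), :message_queue_len)
    Process.sleep(@base_delay_ms * (1 + mass))
    do_work()
    work_loop()
  end

  defp do_work, do: :ok
end
```

You would run it with something like spawn(&AdaptiveWorker.work_loop/0).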
Questions for the Community
Has anyone used Elixir/OTP to model systems with dynamic “mass” or weight affecting behavior? What patterns did you use (GenServer, GenStage, Flow, etc.)?
Would it make sense to encapsulate dm/dt and momentum logic into a reusable module or behaviour, rather than peppering logic across individual processes?
Do you have ideas for visualizing or monitoring these dynamics—perhaps via :telemetry, custom dashboards, or observability tools?
Would you be interested in sample code implementing a simple Elixir GenServer that updates NKTg₁ and NKTg₂ on each handle_info(:tick, state)?
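To show what I mean, here is a rough sketch of such a GenServer (it assumes discrete ticks and a fixed mass change per tick; all field names and constants are illustrative):

```elixir
defmodule NKTgProcess do
  use GenServer

  @tick_ms 100

  def start_link(opts \\ []) do
    initial = %{x: 0.0, v: 1.0, m: 1.0, dm: 0.01, nktg1: 0.0, nktg2: 0.0}
    GenServer.start_link(__MODULE__, initial, opts)
  end

  @impl true
  def init(state) do
    Process.send_after(self(), :tick, @tick_ms)
    {:ok, state}
  end

  @impl true
  def handle_info(:tick, %{x: x, v: v, m: m, dm: dm} = state) do
    dt = @tick_ms / 1000
    m2 = m + dm        # mass varies over time
    x2 = x + v * dt    # advance position
    p = m2 * v         # p = m × v

    # nktg1/nktg2 could feed :telemetry events or throttling decisions here
    new_state = %{state | x: x2, m: m2, nktg1: x2 * p, nktg2: dm / dt * p}

    Process.send_after(self(), :tick, @tick_ms)
    {:noreply, new_state}
  end
end
```

Starting it with {:ok, _pid} = NKTgProcess.start_link() would then re-derive both values every 100 ms.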
I’d love to hear your feedback or pointers to similar mechanics—especially any distributed systems concepts using BEAM that factor in changing resource usage or adaptive behavior.
You’re not the first person to identify the BEAM as a potentially good platform for a simulation. The issue, however, when doing things like physics simulations is at least threefold:
it’s critical that every entity gets the same number of ticks; if not, some entities are essentially moving faster in time than others.
it’s critical to deterministically handle interactions between entities. This often reduces the practical concurrency, as entities have to wait to talk to each other.
most critically, physics simulations are wildly CPU bound, and thus benefit most from languages that can produce ideal low-level code.
In all three aspects you’re running against the grain of the BEAM. It is generally fair to all of its concurrent processes, but not at the “over 1 million ticks, every GenServer will get exactly the same number of ticks” level of fairness. And from a CPU-performance standpoint you’re going to get obliterated by languages that model this problem as essentially zipping through arrays of values.
Concurrency in the BEAM is tuned toward IO-related use cases. It does quite well on the CPU tasks that happen alongside those, but for problems where the entire problem is a number-crunching exercise, it just doesn’t play to the BEAM’s strengths.
The only caveat is that if you can model this problem in, say, Nx, and it’s actually compiling to GPU code, then that’s a whole other thing. I have next to no experience with that, though.
EDIT: rereading your post, it’s possible I misunderstood your goal here. Is it less about modeling physics and more about using physics ideas to regulate ordinary Elixir processes in some way?
I see your point about the BEAM not being ideal for raw number-crunching physics simulations, especially when fairness in ticks and CPU-bound performance are critical. That makes sense if the goal is to replicate a high-fidelity physical world.
But the intention behind the NKTg law is slightly different. It is not about simulating physics per se, but about using a physics-inspired principle (variable inertia under force) as a metaphor and a mechanism for regulating process dynamics in Elixir systems.
In NKTg, inertia is not fixed: processes can “gain or lose effective mass” depending on their interactions, which makes them accelerate or decelerate relative to the applied force. Translated to the BEAM, this means we’re not demanding strict synchronization of millions of ticks, but rather a proportional adjustment of scheduling and concurrency load based on this varying-inertia idea.
So instead of fighting the BEAM’s grain, the law tries to align with it:
Concurrency fairness becomes “mass distribution” — some processes naturally move slower/faster, and that is modeled intentionally.
Determinism is not enforced globally, but replaced by relative predictability of how processes evolve under varying loads.
CPU-bound heaviness is reframed: the law doesn’t assume continuous crunching, but rather adapts inertia as a regulating factor for real-world Elixir workloads.
In short, the NKTg law is not asking the BEAM to be a physics simulator. It is offering a physics-inspired abstraction to reason about process behavior, variability, and scaling. From that perspective, the BEAM’s strengths in fairness and concurrency are actually a good fit.
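Concretely, that proportional adjustment could be as small as scaling a process’s tick interval by its current effective mass. A minimal sketch, building on the hypothetical NKTgProcess above (the linear scaling and the floor value are assumptions, not part of the law itself):

```elixir
defmodule InertialScheduler do
  @base_tick_ms 100

  # Heavier processes tick less often, lighter ones more often;
  # the 10 ms floor just keeps the interval sane for tiny masses.
  def next_tick_ms(mass), do: max(10, round(@base_tick_ms * mass))
end
```

A process like NKTgProcess would then schedule each tick with Process.send_after(self(), :tick, InertialScheduler.next_tick_ms(state.m)) instead of a fixed interval, so its pace degrades gracefully as its “mass” grows.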