TLX - TLA+ formal verification for Elixir via a Spark DSL

TLX lets you write TLA+/PlusCal formal specifications in Elixir syntax and verify them with TLC, the TLA+ model checker. It’s built on Spark and targets Elixir developers who want to verify their state machine designs without learning TLA+ syntax.

What it does

You define specifications using an Elixir DSL — variables, actions with guards and transitions, invariants, temporal properties, concurrent processes — and TLX emits valid TLA+ or PlusCal that TLC can model-check. TLC exhaustively explores every reachable state, finding bugs that testing and code review miss: race conditions, deadlocks, invariant violations, liveness failures.

import TLX

defspec TrafficLight do

  action :to_green do
    guard(e(color == :red))
    next :color, :green
  end

  action :to_yellow do
    guard(e(color == :green))
    next :color, :yellow
  end

  action :to_red do
    guard(e(color == :yellow))
    next :color, :red
  end

  invariant :valid_color, e(color == :red or color == :green or color == :yellow)
end

mix tlx.emit TrafficLight          # emit TLA+
mix tlx.check TrafficLight         # run TLC model checker
mix tlx.simulate TrafficLight      # Elixir random walk (no Java needed)
mix tlx.watch TrafficLight         # auto-simulate on file changes
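
For orientation, the TLA+ that TLC ends up checking for a spec like the one above would look roughly like the following. This is a hand-written sketch, not TLX's actual output; in particular, how TLX maps Elixir atoms (model values, strings, or constants) and how it names the generated operators are assumptions here:

```tla
---- MODULE TrafficLight ----
VARIABLE color

\* Initial state assumed; the DSL example above does not show it
Init == color = "red"

\* One action per DSL `action` block: guard /\ primed transition
ToGreen  == color = "red"    /\ color' = "green"
ToYellow == color = "green"  /\ color' = "yellow"
ToRed    == color = "yellow" /\ color' = "red"

Next == ToGreen \/ ToYellow \/ ToRed

\* The :valid_color invariant
ValidColor == color \in {"red", "green", "yellow"}

Spec == Init /\ [][Next]_color
====
```

TLC then exhaustively explores the three reachable states and checks `ValidColor` in each.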

Features

  • Elixir DSL — variables, constants, actions, guards, branches, processes, invariants, temporal properties, refinement checking
  • Multiple outputs — TLA+, PlusCal (C and P syntax), Elixir round-trip
  • Expression system — e() macro captures Elixir expressions as TLA+ AST. Supports if/else, quantifiers (forall, exists), set operations, function application (at, except), CHOOSE, CASE, LET/IN, sequences, records, implication, and more
  • Elixir simulator — mix tlx.simulate gives fast feedback without Java. Good for rapid iteration during design
  • TLC integration — mix tlx.check runs the real model checker. Parses results, formats counterexample traces
  • Refinement checking — verify that a concrete spec (from code) satisfies an abstract spec (from design) via refines blocks
  • Spec discovery — mix tlx.list finds all spec modules in your project
  • File watcher — mix tlx.watch auto-recompiles and re-simulates on changes
  • Import — mix tlx.import parses TLA+ or PlusCal back into TLX DSL
  • Skeleton generator — mix tlx.gen.from_state_machine generates a spec from a GenStateMachine module
  • Agent skill — a formal-spec workflow skill for AI-assisted specification, shipped via usage_rules
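
As a sketch of the expression system: the construct names `forall` and `at` come from the feature list above, but the exact call shapes below are assumptions for illustration, not documented TLX API.

```elixir
import TLX

defspec Accounts do
  # Hypothetical invariant: every account balance stays non-negative.
  # `forall(x <- set, pred)` and `at(fun, key)` shapes are guesses based
  # on the feature list; consult the TLX docs for the real syntax.
  invariant :no_overdraft,
    e(forall(a <- accounts, at(balance, a) >= 0))
end
```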

Background

I’m not a TLA+ expert. I’m an Elixir developer who wanted to prove a large project’s design was correct. I’ve known about TLA+ for years — used at Amazon, Microsoft, MongoDB to find bugs in distributed systems — but the syntax was a barrier. TLX is the tool I wished existed: write specs in Elixir, let the DSL handle the TLA+ translation.

The project was built using AI-Assisted Project Orchestration patterns. It has 192 unit tests and 87 integration tests (validated against SANY and pcal.trans), plus full Diátaxis documentation.

If you’re a TLA+, formal methods, or Spark/Ash expert and find this useful, I’d welcome collaborators.


Hey, I’m in the intersection of people who use Elixir professionally and have also used TLA+ professionally (probably not a large set). It’s interesting timing, as I was thinking about the overlap between the two languages recently. Unfortunately due to the lack of support for arm64 from the TLA+ toolset (I only really use Macs nowadays), I haven’t followed through on any ideas.

I had some vague notion of using Elixir syntax that is closer to ‘normal’ Elixir and then walking the syntax tree to build the TLA+ equivalent. I was also interested in whether you could build a model checker in Elixir (similar to a property checker) and then even potentially use distribution to explore large state spaces.

It looks like you have a pragmatic approach and the breadth of your solution is impressive. The only downside I see is that if you already know TLA+, this Elixir syntax is quite far away from that. If you don’t know it, though, it’s not a good way to be introduced to it. I think the best way to learn the semantics of TLA+ is to write the actual math. The math in TLA+ should be well within the reach of anyone professing to be doing any sort of ‘engineering’, and most of the semantics are already known to anyone who’s done elementary set theory and predicate logic, with TLA+ adding a few things.

Good luck with the project!


Thanks :slight_smile:

I was also interested in whether you could build a model checker in Elixir (similar to a property checker) and then even potentially use distribution to explore large state spaces.

I specifically chose to emit TLA+ and use TLC, and not reinvent them, as I’m NOT an expert in formal specs. That said, as TLX performs a DSL → IR → TLA+ translation, you could use the IR (the Elixir struct representation) to explore those ideas.

The only downside I see is that if you already know TLA+, this Elixir syntax is quite far away from that.

Yes… but:

  • I’d say the same about PlusCal.
  • TLX constructs are based on TLA+ constructs. I should document that more explicitly.
  • as mentioned in the FAQ, I tried a more mathematical syntax but, unfortunately, Elixir doesn’t recognise mathematical symbols as function identifiers. Try the “--emit symbols” option, though :wink:

What bugs me most is that your implementation is strictly tied to “common Elixir syntax”, whatever that means. Do you really need/want these endless action do: guard() spec() blocks?

When I’ve been starting to work on finitomata, I gave a thought to what my users really want to use as an FSM declaration. And instead of rolling out my own DSL, as literally all other FSM libs do, I ended up with mermaid/PlantUML syntax.

It’s relatively easy to construct your DSL under the hood using the nimble_parsec library, as I do here. That approach still allows me to report syntax errors in FSM descriptions gracefully. HEEx would be another example.

Just saying.

Hello :slight_smile:

Specs, variables/constants, actions, guards/awaits, next, invariants,… are all constructs from TLA+.

As I mentioned, my own objectives were to prove that the design and implementation of a large Elixir project were correct.

I knew TLA+ was the right tool for the job, but its syntax is meant for mathematicians, not for mere mortals like me :slight_smile: And I’m still trying to understand PlusCal, with its particular syntax…

Thus, I used Spark to create an easy-to-read DSL, based on TLA+, meant to be emitted as TLA+ and processed by the TLC tool.

Another observation about TLA+ tooling was the disconnection between the formal specs and the implementation languages.

In TLX, your DSL specs translate to an Elixir IR that can be used by different emitters or tools. You can also read TLA+ specs and get the Elixir IR back (only a subset of TLA+ is supported, though).

You can also extract state machines and OTP patterns from Elixir and Erlang code to generate spec skeletons.

That means you can have abstract specs to test your design before writing code, then extract concrete specs from your implementation and test them. Then you can check refinement between the two to prove that your implementation respects your design. And that, I think, is not common in the TLA+ ecosystem.

Moreover, you keep TLX specs with your code. The same way you keep Architecture Decision Records and C4/Structurizr architecture models with your code, if you use architecture-as-code.

I only discovered your project a couple of days ago, btw. I find ASCII diagram parsing fascinating :slight_smile: Our objectives and approaches are different, though.

Thanks for the feedback :slight_smile:
