Access optimizations from Hissssst's compiler

Funny thing is that both map.key and map[:key] are examples of bad decisions in the language design:

The main problem is that the difference between these two ways of accessing a value in a map is not just syntactic:

  1. map.key works even when map is an atom !!!
    This is introduced by the language’s ability to call functions without parentheses. And this ability actually introduces a runtime penalty! So, if we take a look at the compiled Erlang code, we’ll see that map.key is actually compiled to roughly
case map do
  map when is_atom(map) ->
    apply(map, :key, [])
  %{key: value} ->
    value
  _ ->
    raise KeyError
end
Which handles this extremely rare situation where we actually make a call without parentheses (by my analysis, in all of Ecto, Phoenix, Plug and EctoSQL it happens only once!!!) in the most common construct in the whole language (well, after the function call)! And it actually introduces performance penalties

  2. While map.key works with structs, map[:key] does not. And that is because Access is a behaviour instead of a protocol (which was a good decision in the beginning, but just needed some polishing in the compiler).

  3. map[:key] is polymorphic, since it also works with keywords and everything Access-able, while map.key works with structs and maps.

  4. map.key is the same as map.key() and vice versa; the parentheses syntax means nothing here. Live with it
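A quick sketch of these differences, runnable in iex (the Point module name is made up purely for illustration):

```elixir
map = %{key: 1}
map.key    # => 1, dot access on a plain map
map[:key]  # => 1, Access-based bracket access

# Access also works on keyword lists; dot access does not:
kw = [key: 1]
kw[:key]   # => 1

# Structs support dot access but not bracket access by default,
# because structs do not implement the Access behaviour:
defmodule Point do
  defstruct x: 0
end

%Point{x: 5}.x  # => 5
# %Point{x: 5}[:x] would raise UndefinedFunctionError
```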

The right way to do this

Just use pattern-matching. It is the most powerful and efficient construct in the language. For nested data use Pathex. And if you already have a codebase where everything is dotted, just use optimizing compiler called Tria.
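For readers who want the pattern-matching route in plain Elixir, a minimal sketch (the Greeter module and greet function are invented for illustration):

```elixir
defmodule Greeter do
  # Matching in the function head fails fast with a clear
  # FunctionClauseError when the key is missing
  def greet(%{name: name}), do: "Hello, " <> name
end

# Matching in an expression works the same way:
%{name: name} = %{name: "Ada", age: 36}
```

With this sketch, Greeter.greet(%{name: "Ada"}) returns "Hello, Ada", and the match binds name to "Ada".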


Come on mate. You can’t tell people to “just” use your experimental compiler instead of vanilla Elixir. That is not the “right way” to deal with a code base where the dots are used.

I think we’re safer just to pattern match ourselves or borrow some code from Membrane.


Why do you think so?

Risk. Given the choice between a) a minor performance penalty; b) writing code in a pattern-matching style; or c) replacing the language compiler with one written by one bloke, whose claims about the Elixir compiler and his own replacement are at times disproven, and which is not battle-hardened, you’d be hard pressed to find many people advising option c. The time investment and risk are unlikely to be worth the cost.

Which makes key access 2-3 times slower

That’s cool, but it would require refactoring a lot of existing code, which sometimes is not that obvious.

I’d like to hear some real examples of my claims about compilation which were disproven.

That’s true, but it is true for every new technology.

Why is that? I mean, the compiler is not 100% ready yet, but it is in a state which shows that my ideas around performance optimization are working. It just requires some polishing. As soon as it is ready, there will be no risks; my aim is to provide as much compatibility with the existing compiler as possible.

And I think that it is totally fine to advertise a not-yet-ready solution, because it will provide some battle-testing by early adopters (one of whom is a member of the already-mentioned Membrane project), some useful feedback, and might even help me get some money for my work.

I guess since it’s a compiler, so long as it compiles there is no risk. Or does it produce code that could hit runtime exceptions?

That’s an interesting point, I never thought about that! I do feel you would almost have to be trying to write code that would ever hit this scenario, but it sure would be a subtle bug: you happened to pass in an atom instead of a map, that atom also happened to be an Erlang module, and that module also happens to have a zero-arity function with the same name as the key you’re trying to access.
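To make that scenario concrete, a hypothetical sketch (the Config module and timeout key are invented for illustration):

```elixir
defmodule Config do
  def timeout, do: 5000
end

get_timeout = fn source -> source.timeout end

get_timeout.(%{timeout: 100})  # => 100, ordinary map access
# Passing the atom instead silently becomes the remote call Config.timeout():
get_timeout.(Config)           # => 5000
```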

And here I am, never using map[:key] or map.key at all… :003:

Every compiler does.

Sorry, terribly worded. I guess I’m asking how different it is from Elixir aside from the optimizations listed. I can always just look at the code.

There are several types of things that can be different

  1. Unexpected. I am human, I make mistakes, so I tried to invest a lot of time into making debugging of the compiler as simple as possible.

    • I have made different verbosity levels
    • I made compiler tracing, per-function or per-feature
    • I made automatic GitHub issue submission
    • I’ve made a way for a regular user to peek at what code is generated, but in regular Elixir
  2. Expected. And I’ve divided those into four parts:
    2.1 Compatible. I am trying to make most of the optimizations as compatible as possible: compatibility in terms of side-effect ordering, in terms of exceptions, and of course in terms of results returned by functions
    2.2 Incompatible, but invisible. This means that my compiler generates different code from what the vanilla compiler does, but nobody actually relies on this behaviour, or this behaviour changes in Elixir from time to time. For example, my compiler slightly changes module compilation order, but it doesn’t affect anything, unless the user is relying on unspecified compiler behaviour (like reading .beam files during compilation to affect the result).
    2.3 Incompatible, but with workarounds. This is runtime recompilation with appup or relup (recompile in iex is compatible). I will provide a different interface to these problems, in case anybody ever uses runtime recompilation in prod with Elixir.
    2.4 Incompatible, requiring an Elixir patch. There is only one of these: exception formatting. Tria regroups functions into modules, but not in the way the developer originally wrote them. So the function Module.function/2 can actually become TriaGenerated.Module__function/2, and the stacktrace would have this ugly signature instead of the original one. I am going to submit a patch to Elixir and make sure that it is accepted upstream, which will allow proper exception formatting.

Plus, there is also this tiny case of Incompatible, but useful, when my compiler detects errors that the regular Elixir compiler wouldn’t. This is extremely rare and can be found only in cases like

x = %{1 => 2}
y = 1 # binding added so the snippet compiles
case x do
  %{^y => ^y} -> :ok # This clause will never match
  _ -> :error
end

Hey folks I moved these posts out of the other thread because we’re now squarely into a discussion about the readiness or not of @hissssst’s compiler.



It’s obvious you’re quite a bright person and have invested significantly in developing tria.

I think diversity with compiler implementations is a healthy thing, there are always lessons to be learned and different perspectives that can yield improvement.

But the statement “just use tria” was a bit naive in my view.

Just look at the long road to implementing Erlang’s JIT to where we finally have something viable.

Maybe tria will discover some fertile ground and lead to improved performance; we can only hope.

However “just” jumping to another compiler is not going to be viable for most until tria reaches battle-tested status.

Until then, maybe you can identify some selective use cases where tria may prove useful, eg as a set of processing nodes in a cluster where the benefit of tria performance can be realised for some safe subset of the code base.

e.g. does tria work with Nx? would tria be compelling for a numerical compute node? If not, then what is the compelling use case for tria?

Getting actual adoption for some concrete use cases would provide important lessons and feedback until fully semantic Elixir/Erlang compatibility is achieved in tria.

I think clearly communicating and being honest about where the rough edges are is essential.

My sense is you have rubbed the community the wrong way, claiming tria is a drop-in replacement for elixir which solves everything when it actually doesn’t RIGHT NOW.

Also think about how one could integrate with, say, one or more tria nodes in their cluster. This might help with both getting real-world use and ultimately adoption until fully battle-tested elixir compatibility is realised.

The benefit of even considering tria needs to warrant the additional risk and complexity and that’s always the challenge.

I don’t think there is a single person in the Elixir community who is convinced, other than you, the author.

You need to change that by being real.

I currently don’t see a compelling reason for tria other than my own intellectual curiosity, but ultimately already have it relegated as an “incomplete elixir implementation by a wannabe who agitates the community.”

I’m saying what most are thinking, but beyond that I actually do think you’re very smart and capable so hopefully you take my suggestion and feedback the right way so that your contributions are constructive and lead to improving the state of elixir in ways we can’t predict.

If you do expect the community to take you seriously and to have any chance of competing against the status quo of Elixir on the BEAM, you need to be cooperative rather than antagonistic and identify the compelling use case that tria fills.

Could any of it be incorporated into the BEAM or elixir proper?

Can others even work with you?

Elixir thrives because José both leads and works with others towards outcomes that benefit the community. I am sure if collaboration was possible it would have already happened.

However if tria is just an academic exercise in compiler writing to be able to say “FIGJAM” then please ignore my post while the rest of the elixir community continues to ignore you and tria.


Yes, it does

That’s true, and that’s why I am talking about the project whenever possible: to find someone who wants to try it out locally, not in production, but just in a personal setup.

Where did I say that Tria is suitable for usage right now? I didn’t say that. But Tria will be a drop-in replacement. It is almost one already. There is no unsolved problem except stacktrace formatting, but there are a lot of bugs, which can be caught only during testing. When I wrote “Just use Tria”, it was merely advertising, and I explained what I meant in the next post.

Compiler is completely irrelevant to these kinds of problems. Using Tria or Erlang, or Elixir or Gleam won’t change anything in the BEAM cluster.

That’s true, and that’s what’s written on the top of the README of the project.

Performance. The main goal of Tria is performance. Write some Enum pipes, use some dots, run some benchmarks and see the impact yourself.
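A rough way to eyeball this yourself with just the standard library (for serious numbers, a dedicated benchmarking tool is a better choice than :timer.tc):

```elixir
map = %{key: 42}

# Time a million dot accesses
{dot_us, _} = :timer.tc(fn ->
  Enum.each(1..1_000_000, fn _ -> map.key end)
end)

# Time a million pattern matches on the same map
{match_us, _} = :timer.tc(fn ->
  Enum.each(1..1_000_000, fn _ -> %{key: _v} = map end)
end)

IO.puts("dot access: #{dot_us}us, pattern match: #{match_us}us")
```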

Where did I say anything antagonistic? I am always cooperative, I am open to constructive feedback about the compiler’s features and I am open to contributions. To be honest, I value your attention and feedback, but I don’t see anything constructive in it. Perhaps you could give the compiler a try, perhaps you could read its code, analyze its approaches and then improve it with some useful criticism about the technical implementation.

Yes, absolutely. I’ve even found some bugs in Elixir during development of the compiler

Yes, I’ve even invited several people personally. I sent emails to people who were doing projects in the same area; we’ve shared ideas and tried to collaborate on some features. You can find a topic with a collaboration invite somewhere on this forum.

I am not from academia, nor is this project academic. Tria is aiming to be production-ready, with an engineering set of goals. Tria is about bringing existing optimizing compilers’ features to the language; it is not a project about researching new ones.

Why is that? And why are you sure that collaboration is impossible?

My overall impression of your post is that you don’t know anything about the project and that you made several wrong assumptions while reading my posts. If you’re really interested in writing performant code in Elixir, you should try reading the Tria source, and you should try running Tria with the TRIA_DEBUG=all env variable to take a look at the generated code inside the created tria_generated_context.ex file. It is really interesting.

You can try to take a look at constant folding, at map.field optimizations, at Enum pipes, at inlining, at exception recreation, and so on.


That’s inaccurate. We actually compile to:

case map do
  %{^key => value} -> value
  # ... the atom-call and KeyError clauses follow
end

So the most common case comes first, and the fact that map.key is equivalent to remote.function() does not slow down its lookup (except when the key is missing, which raises an exception anyway).

Also inaccurate. We do emit warnings on several occasions, for example:

warning: incompatible types:

    map() !~ atom()

in expression:

    # iex:1

where "atom" was given the type atom() in:

    # iex:1

where "atom" was given the type map() (due to calling var.field) in:

    # iex:1

hint: "var.field" (without parentheses) implies "var" is a map() while "var.field()" (with parentheses) implies "var" is an atom()

Conflict found at
└─ iex:1: Foo.remote/1

And we have been slowly moving towards emitting warnings at runtime, but we obviously have to be careful about that.

In any case, I think this is beside the point. Those are micro-optimizations and you should write code that is clear. If pattern matching is clearer, then certainly do that. For example, I doubt that the change from map.field to pattern matching will yield any measurable improvement in your application’s response time.


Thanks for following up to my post with some good information. Much appreciated.

I have not reviewed the full implementation of your compiler, but I have found your posts interesting to read, particularly your observations on performance and how we might go about improving things.

I have seen a few posts that seem to provoke some negative responses from the community, however that may just be my (potentially incorrect) interpretation and reading of how people responded.


I might be misreading the tone here, but I sense some adversarial personality traits that may make progress less than optimal, but hopefully not impossible.

Of course I could be completely misreading things yet again.


I am on the same side as the things said above about shoving the optimizer down people’s throats.

It would be nice to see some articles about actually applying it in practice and the real impact on a real project. Don’t get me wrong, benchmarks are a good way to measure performance; however, for me these are just numbers that mean absolutely nothing, as a lot of the time measurements are not made correctly or get influenced by some hardware-specific things.

Folks, let’s go easy. No one is shoving a compiler down other people’s throats. Perhaps the initial suggestion could have used more disclaimers, but all of the necessary disclaimers are also in the project’s README. :slight_smile:


In this case, clause ordering doesn’t improve the performance. If we take a look at the BEAM, there is a type-tag lookup and a jump based on its value.

That’s true; I meant that both constructs execute the same code.

I agree, but I would also mention that there is no point in losing performance when it can be improved automatically, without having to rewrite anything.

Sure, if we can optimize, we should. My point is that we can’t fully optimize this yet, as we need to worry about backwards compatibility. It is much easier to optimize if the assumption is that we can break code.
