I respectfully disagree. I think TE is asking the wrong question entirely.
In my opinion, the question is never which tool is the fastest. Instead, one of the questions should be: is it fast enough? Other important questions might be: can it scale, is it fault-tolerant, how healthy is the ecosystem, what operational support exists, and so on.
None of these seem to be the focus of TE. They run some synthetic benchmarks, give us flashy graphs, and this is supposed to tell us which tool we should choose. In my experience, the performance differences in such synthetic benchmarks often either don't matter in the grand scheme of things, or can be significantly reduced with the right algorithmic and technical interventions.
In some cases, squeezing out every nanosecond might matter. I think such cases are special, though, and a team facing such a challenge will get a much more informative answer by conducting its own tests against a simulation or approximation of its own use case, rather than by looking at TE or any other generic, synthetic benchmark.
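To make that concrete, here is a minimal sketch of what "test against an approximation of your own use case" can look like. The workload and the 100 µs latency budget are hypothetical stand-ins, not anything from TE; the point is to time a unit of work shaped like your real traffic and ask "fast enough?", not "fastest?":

```python
import json
import timeit

def workload():
    # Hypothetical stand-in for one unit of real work: serialize and
    # parse a record shaped roughly like your production data.
    record = {"id": 42, "tags": ["a", "b"], "payload": "x" * 256}
    return json.loads(json.dumps(record))

# Take the best of several repeats to reduce measurement noise.
runs = timeit.repeat(workload, number=10_000, repeat=5)
per_call_us = min(runs) / 10_000 * 1e6
print(f"~{per_call_us:.1f} µs per call")

# The question is not "is this the fastest", but "does this fit our
# latency budget" (a hypothetical 100 µs here).
budget_us = 100.0
print("fast enough" if per_call_us <= budget_us else "too slow")
```

Swap the body of `workload` for something resembling your own request path and the answer you get, however rough, is far more relevant to your decision than any generic leaderboard.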
Therefore, I personally think TE is fundamentally flawed in its premise. I also don't have a high opinion of its implementation, but that's beside the point.
Maturity, approachability, community, ease of use, flexibility, runtime guarantees, ecosystem, scalability, and fault-tolerance support are some things that come to mind. And yes, performance is also relevant and worth looking into, perhaps by running your own tests and seeing whether the tech can deliver.