My best guess is that the map lookup is the factor that slows down the ‘reduce’ approach. Although you need to traverse the list only once, you have to perform a key lookup for every item. The group_by approach, on the other hand, might have the group lookup better optimized (although off-hand I cannot think of a much better optimization than using a map to look up the group…)
On my system, however, the difference is also not that significant: the ‘reduce’ approach is about 1.2 times slower…
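For reference, the benchmark I ran is roughly the sketch below. The grouping key (`rem(&1, 10)`) and the input range are my own placeholders, not necessarily what the original code used; substitute your actual grouping function and data.

```elixir
# Benchmark sketch comparing a hand-written Enum.reduce grouping
# against Enum.group_by, using Benchee's `inputs` option.
Benchee.run(
  %{
    "reduce" => fn input ->
      Enum.reduce(input, %{}, fn x, acc ->
        # Prepend to the group's list. Note this reverses the
        # per-group order compared to Enum.group_by/2.
        Map.update(acc, rem(x, 10), [x], &[x | &1])
      end)
    end,
    "group" => fn input ->
      Enum.group_by(input, &rem(&1, 10))
    end
  },
  inputs: %{"Range" => 1..1_000_000}
)
```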
```
##### With input Range #####
Name             ips        average  deviation         median         99th %
reduce          4.65      215.08 ms    ±12.47%      209.29 ms      338.53 ms
group           2.47      405.21 ms    ±18.65%      411.79 ms      552.56 ms

Comparison:
reduce          4.65
group           2.47 - 1.88x slower +190.13 ms
```
So your original hunch might be correct after all.
Indeed, it appears that when running in iex, the ‘group_by’ version gets an unfair advantage, probably because it gets to use a compiled reduce loop behind the scenes, while the hand-written ‘reduce’ version runs as an interpreted loop.
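One way to rule out that interpretation effect is to put the benchmark in a script file and run it through Mix instead of pasting it into iex; code in an `.exs` script run this way is compiled before it executes, so both versions get a compiled loop. (The path `bench/compare.exs` is just an example name.)

```shell
# Run the benchmark script compiled, from the root of a Mix
# project that has :benchee as a dependency:
mix run bench/compare.exs
```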