After Matrax, I realized that concurrent mutable matrices of 64-bit integers with atomic updates might be a rather special use case.
So while I was still in that mindset, I created a similar library, this time on top of Erlang's :array, to get a different set of trade-offs. For example, any term can be used as an element.
```
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 0 ns
parallel: 1
inputs: none specified
Estimated total run time: 14 s

Name                     ips        average  deviation         median         99th %
max_get              39.84 K      0.0251 ms    ±14.15%      0.0249 ms      0.0294 ms
m_reloaded_get       0.0945 K     10.59 ms     ±1.56%      10.55 ms       11.38 ms

Comparison:
max_get              39.84 K
m_reloaded_get       0.0945 K - 421.84x slower +10.56 ms

Name                     ips        average  deviation         median         99th %
max_set              23.58 K      0.0424 ms     ±2.95%      0.0422 ms      0.0451 ms
m_reloaded_update    0.0861 K     11.61 ms     ±1.53%      11.56 ms       12.39 ms

Comparison:
max_set              23.58 K
m_reloaded_update    0.0861 K - 273.77x slower +11.57 ms

Name                          ips        average  deviation      median       99th %
m_reloaded_transpose       498.99        2.00 ms     ±4.49%     1.98 ms      2.55 ms
max_transpose              104.20        9.60 ms     ±4.23%     9.41 ms     10.79 ms

Comparison:
m_reloaded_transpose       498.99
max_transpose              104.20 - 4.79x slower +7.59 ms
```
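Output like the above comes from a Benchee run with the configuration shown (warmup: 2 s, time: 5 s). A sketch of such a run follows; the workload closures are stand-ins, since the actual benchmark code is not shown in this post:

```elixir
# Sketch of a Benchee configuration matching the output above.
# The two workloads are placeholders, not the real library calls.
list = Enum.to_list(1..1_000)

Benchee.run(
  %{
    "get" => fn -> Enum.at(list, 500) end,
    "set" => fn -> List.replace_at(list, 500, :ok) end
  },
  warmup: 2,
  time: 5
)
```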
Matrax's set/3 is much faster, while its transpose/1 is actually slower (a pattern similar to the Matrax benchmarks).
(Please benchmark your own use case when deciding between the libraries. Also check out Matrex, which uses NIFs.)
Feedback is welcome.
(I’m also looking for employment; if you are hiring, please PM me.)