I have written a simple comparison between Elixir’s Float.round and a custom solution built on :io_lib:
defmodule Bench.Float do
  # Precision 0: round to the nearest integer and convert back to a float.
  def round(float, 0) when is_float(float), do: float |> :erlang.round() |> :erlang.float()

  # For positive precisions, format the float with exactly `precision`
  # decimal places ('~.*f' reads the precision from the argument list) and
  # parse the resulting charlist back into a float.
  def round(float, precision) when is_float(float) and precision in 1..15 do
    str = :io_lib.format('~.*f', [precision, float])
    :erlang.list_to_float(str)
  end
end
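As a side note, the same idea can be expressed without going through :io_lib at all: :erlang.float_to_binary/2 accepts a :decimals option that performs the same fixed-point formatting. A minimal sketch of that variant (Bench.FloatBinary is just my name for it here; I have not benchmarked it):

defmodule Bench.FloatBinary do
  # Same approach as Bench.Float, but formats to a binary via the
  # :decimals option instead of a charlist via :io_lib.
  def round(float, 0) when is_float(float), do: float |> :erlang.round() |> :erlang.float()

  def round(float, precision) when is_float(float) and precision in 1..15 do
    float
    |> :erlang.float_to_binary(decimals: precision)
    |> :erlang.binary_to_float()
  end
end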
and the benchmarking code:
# Inputs cover three magnitudes at three precisions; med and large are
# generated randomly, so they differ between runs.
small = 5.55
med = :rand.uniform() |> Float.round(10)
large = :rand.uniform() * 10000

inputs = %{
  "Trivial | small precision" => {small, 1},
  "Medium | small precision" => {med, 1},
  "Large | small precision" => {large, 1},
  "Trivial | med precision" => {small, 5},
  "Medium | med precision" => {med, 5},
  "Large | med precision" => {large, 5},
  "Trivial | large precision" => {small, 12},
  "Medium | large precision" => {med, 12},
  "Large | large precision" => {large, 12}
}
Benchee.run(
  %{
    "built in" => fn {f, p} -> Float.round(f, p) end,
    "erlang" => fn {f, p} -> Bench.Float.round(f, p) end
  },
  inputs: inputs
)
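Since med and large come from :rand, the exact inputs differ between runs. If reproducibility matters, the generator can be seeded before building the inputs (a minimal sketch; the algorithm and seed triple are arbitrary choices of mine):

# Seed the default :rand generator so med and large are stable across runs.
:rand.seed(:exsplus, {101, 102, 103})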
And I get a fairly consistent speedup over the current algorithm across all inputs:
Operating System: macOS
CPU Information: Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz
Number of Available Cores: 4
Available memory: 8 GB
Elixir 1.7.3
Erlang 21.1
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 0 μs
parallel: 1
inputs: Large | large precision, Large | med precision, Large | small precision, Medium | large precision, Medium | med precision, Medium | small precision, Trivial | large precision, Trivial | med precision, Trivial | small precision
Estimated total run time: 2.10 min
Benchmarking built in with input Large | large precision...
Benchmarking built in with input Large | med precision...
Benchmarking built in with input Large | small precision...
Benchmarking built in with input Medium | large precision...
Benchmarking built in with input Medium | med precision...
Benchmarking built in with input Medium | small precision...
Benchmarking built in with input Trivial | large precision...
Benchmarking built in with input Trivial | med precision...
Benchmarking built in with input Trivial | small precision...
Benchmarking erlang with input Large | large precision...
Benchmarking erlang with input Large | med precision...
Benchmarking erlang with input Large | small precision...
Benchmarking erlang with input Medium | large precision...
Benchmarking erlang with input Medium | med precision...
Benchmarking erlang with input Medium | small precision...
Benchmarking erlang with input Trivial | large precision...
Benchmarking erlang with input Trivial | med precision...
Benchmarking erlang with input Trivial | small precision...
##### With input Large | large precision #####
Name           ips        average  deviation    median    99th %
erlang      203.10 K      4.92 μs   ±379.86%      4 μs     13 μs
built in    136.72 K      7.31 μs   ±621.54%      6 μs     17 μs

Comparison:
erlang      203.10 K
built in    136.72 K - 1.49x slower

##### With input Large | med precision #####
Name           ips        average  deviation    median    99th %
erlang      238.65 K      4.19 μs   ±508.17%      4 μs      9 μs
built in    176.05 K      5.68 μs   ±315.60%      5 μs     11 μs

Comparison:
erlang      238.65 K
built in    176.05 K - 1.36x slower

##### With input Large | small precision #####
Name           ips        average  deviation    median    99th %
erlang      259.83 K      3.85 μs   ±884.75%      3 μs     11 μs
built in    213.63 K      4.68 μs   ±557.26%      4 μs      9 μs

Comparison:
erlang      259.83 K
built in    213.63 K - 1.22x slower

##### With input Medium | large precision #####
Name           ips        average  deviation    median    99th %
erlang      141.67 K      7.06 μs  ±1693.57%      4 μs     30 μs
built in    135.84 K      7.36 μs   ±381.06%      6 μs     22 μs

Comparison:
erlang      141.67 K
built in    135.84 K - 1.04x slower

##### With input Medium | med precision #####
Name           ips        average  deviation    median    99th %
erlang      229.73 K      4.35 μs   ±895.45%      4 μs     14 μs
built in    162.83 K      6.14 μs   ±297.60%      6 μs     13 μs

Comparison:
erlang      229.73 K
built in    162.83 K - 1.41x slower

##### With input Medium | small precision #####
Name           ips        average  deviation    median    99th %
erlang      247.05 K      4.05 μs   ±883.74%      3 μs     12 μs
built in    201.51 K      4.96 μs   ±372.99%      5 μs     10 μs

Comparison:
erlang      247.05 K
built in    201.51 K - 1.23x slower

##### With input Trivial | large precision #####
Name           ips        average  deviation    median    99th %
erlang      203.42 K      4.92 μs   ±602.69%      4 μs     15 μs
built in    166.37 K      6.01 μs   ±372.15%      6 μs     12 μs

Comparison:
erlang      203.42 K
built in    166.37 K - 1.22x slower

##### With input Trivial | med precision #####
Name           ips        average  deviation    median    99th %
erlang      244.21 K      4.09 μs   ±765.31%      3 μs     12 μs
built in    164.72 K      6.07 μs   ±427.71%      5 μs     16 μs

Comparison:
erlang      244.21 K
built in    164.72 K - 1.48x slower

##### With input Trivial | small precision #####
Name           ips        average  deviation    median    99th %
erlang      241.01 K      4.15 μs   ±987.70%      3 μs     14 μs
built in    140.80 K      7.10 μs  ±1761.24%      4 μs     17 μs

Comparison:
erlang      241.01 K
built in    140.80 K - 1.71x slower
I remember that when I last brought up this topic, @josevalim mentioned some drawbacks of this method, but now I cannot recall them (and IIRC it was on IRC, which is currently unlogged).
I have also written a stream_data test with some manual “edge cases” I could think of, but it has shown no difference between the two implementations. Have I missed something?
defmodule Bench.FloatTest do
  use ExUnit.Case
  use ExUnitProperties

  # Hand-picked values around the .5 rounding boundary at several precisions.
  cases = [
    {0.1, 0},
    {0.4, 0},
    {0.5, 0},
    {0.9, 0},
    {0.49, 0},
    {0.0, 0},
    {0.01, 1},
    {0.04, 1},
    {0.05, 1},
    {0.09, 1},
    {0.049, 1},
    {0.00, 1},
    {0.0000000000000001, 15},
    {0.0000000000000004, 15},
    {0.00000000000000049, 15},
    {0.0000000000000005, 15},
    {0.0000000000000009, 15},
    {0.00000000000000001, 15},
    {0.00000000000000004, 15},
    {0.000000000000000049, 15},
    {0.00000000000000005, 15},
    {0.00000000000000009, 15}
  ]

  for {f, p} <- cases do
    test "returns the same as built in for #{f} and -#{f} with precision #{p}" do
      f = unquote(f)
      p = unquote(p)

      assert Float.round(f, p) == Bench.Float.round(f, p)
      assert Float.round(-f, p) == Bench.Float.round(-f, p)
    end
  end

  property "Float.round and Bench.Float.round work in the same way" do
    check all f <- float(),
              p <- integer(0..15) do
      assert Float.round(f, p) == Bench.Float.round(f, p)
    end
  end
end
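One more family of cases that might be worth adding to the list above: decimal literals that sit just below a rounding boundary once stored as binary floats. These extra tuples are my suggestion, not part of the original suite:

# The nearest double to each literal is slightly below the written value,
# e.g. 2.675 is stored as 2.67499999999999982..., so rounding the stored
# value to 2 places yields 2.67, not 2.68.
extra_cases = [
  {2.675, 2},
  {1.005, 2},
  {0.285, 2}
]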