Comparison of Decimals not logical

All right. This is a very important piece of knowledge, and in my opinion it should be part of Computer Science 101.

Try the following calculation in any programming language (virtually all modern languages follow the IEEE 754 floating-point standard): 2.3 - 0.3. Is the answer what you would expect?
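In Elixir's IEx this looks as follows (the digits are dictated by IEEE 754 doubles, so any conforming language produces the same value):

```elixir
iex> 2.3 - 0.3
1.9999999999999998
```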

The problem with floating-point numbers is that we are trying to store an arbitrary number with a radix point in a fixed-size binary representation. If you type the float 7.1, what is stored is not 7.1. It actually is 7.099999999999999644728632119949907064437866210937500000000000. If you want to see it for yourself, try: :erlang.float_to_binary(7.1, decimals: 60). These inaccuracies become even more pronounced when dealing with large numbers, because the number of representable binary values between two consecutive whole numbers shrinks as the magnitude grows. There even comes a point where whole numbers themselves are rounded, because the gap between adjacent representable values grows past 1.
Compare :erlang.float_to_binary(12345678987654321.0, decimals: 1) with :erlang.float_to_binary(12345678987654321.1, decimals: 1). (Actually, the rounding error is so pronounced here that you can just check the IEx default output instead.)
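On a typical IEEE 754 double-precision system the comparison comes out like this (the exact neighbouring integers follow from round-to-nearest-even, so treat them as illustrative):

```elixir
iex> :erlang.float_to_binary(12345678987654321.0, decimals: 1)
"12345678987654320.0"
iex> :erlang.float_to_binary(12345678987654321.1, decimals: 1)
"12345678987654322.0"
```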

Another example: (0..10) |> Enum.reduce(fn _x, acc -> acc + 0.1 end). The first element (0) seeds the accumulator and 0.1 is then added ten times, yet the result is 0.9999999999999999 instead of 1.0.

When we do arithmetic operations with floats, these inaccuracies compound. This is fine for calculations on external measurements that are imprecise by definition, or for difficult mathematical equations that we can only approximate. It is a problem, however, when we are working with a known, exact decimal amount, such as when we are counting something. A good example: money.

These rounding issues have made money disappear in the past. Please use decimals when dealing with money.
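A minimal sketch with the Decimal library: arithmetic on string-constructed decimals stays exact where the float version above drifts.

```elixir
iex> Decimal.sub(Decimal.new("2.3"), Decimal.new("0.3"))
#Decimal<2.0>
```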


If you want to read more about this, there is a great explanation in the Python documentation (while the syntax is Python, the examples hold for all other programming languages). There is also What Every Computer Scientist Should Know About Floating-Point Arithmetic, a very juicy article with a lot of details.


And if you can’t, you can use integers and work in pennies/cents.
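For instance, with made-up amounts:

```elixir
# $10.50 and $0.25 held as integer cents -- integer addition is exact:
a = 1050
b = 25
a + b   # => 1075 cents, i.e. $10.75
```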

I understand the issues with Float and rounding. What I want to understand is how Decimal solves those issues, since changing the precision only delays the onset of rounding errors.

What exactly about Decimal makes it appropriate for use as money? Is it the fact that the significand is base 10 rather than binary? Is it that you set the precision appropriate to the scale of money you are dealing with? Why do you trust Decimal, but not Float used within its reasonable range?

I can make Decimal work correctly with >, but if that breaks the entire purpose of Decimal, then there is no point in submitting the patch.

Actually you should work in mills and round to the nearest penny.
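That is, keep intermediate amounts in mills (a tenth of a cent) and round once at the boundary. A sketch with made-up numbers:

```elixir
# 6% tax on $19.99, computed in integer mills (1 mill = $0.001):
price = 19_990              # $19.99
tax = div(price * 6, 100)   # => 1_199 mills (truncated)
total = price + tax         # => 21_189 mills, i.e. $21.189

# Round half up to the nearest cent (10 mills = 1 cent):
cents = div(total + 5, 10)  # => 2_119 cents, i.e. $21.19
```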


As far as I can tell, Decimal gives you fixed-point numbers as opposed to floating-point numbers.

A fixed-point number will always have the same precision, regardless of how often you do something with it. Let's say we have a fixed and a float, both equal to 1, where the fixed has a precision of 3 digits after the point. You can multiply the fixed by a quadrillion and add 0.1, and you will have a quadrillion and a tenth. If you do the same with the float, you will have only a quadrillion (or even something slightly different, due to how floats handle values they cannot represent exactly).
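You can see the float half of this at 10^16, where the gap between adjacent doubles is already 2:

```elixir
iex> 1.0e16 + 0.1 == 1.0e16
true   # 0.1 is less than half the gap, so it is absorbed entirely
```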

Also, for fixed-point numbers (a + b) + c == a + (b + c) holds, while it does not always hold for floats.
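The classic float counterexample:

```elixir
iex> (0.1 + 0.2) + 0.3
0.6000000000000001
iex> 0.1 + (0.2 + 0.3)
0.6
```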


Decimals (in most implementations, including the one in the Elixir Decimal library) are built from two bignums. A bignum is an integer of arbitrary size; internal logic ensures that when the number grows larger than a fixed-size machine word (a ‘fixnum’) can hold, it is spread across multiple memory locations. Of course, using bignums is not always straightforward. In JavaScript, for instance, one needs a special library for them. Erlang/Elixir, however, has built-in support: all integers you create are transparently promoted to and from bignums whenever necessary.
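For example:

```elixir
# Integers in Erlang/Elixir are arbitrary precision -- no overflow:
iex> Integer.pow(10, 30) + 1
1000000000000000000000000000001
```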

A Decimal is just (mantissa * (10 ^ exponent)) where mantissa and exponent are both bignums.
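You can see this directly in the struct the Decimal library builds (coef is the mantissa):

```elixir
iex> Decimal.new("7.1") |> Map.from_struct()
%{coef: 71, exp: -1, sign: 1}   # 71 * 10^-1 -- stored exactly
```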

A side note: using an arbitrary integer subdivision for money works fine, until you want to convert from one currency to the next.
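For example, converting $10.00 at a made-up rate of 0.9137 EUR per USD:

```elixir
iex> Decimal.mult(Decimal.new("10.00"), Decimal.new("0.9137"))
#Decimal<9.137000>
# 913.7 euro cents -- a whole-cents representation must round here too
```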


I think you’re correct; rounding only occurs when a Decimal is converted to another type. My mistake was thinking that it was just an extended floating-point representation.

I mostly looked at the README rather than the source code, and got sidetracked by all the references to rounding errors and floating point.
