All right. This is a very important piece of knowledge, and in my opinion it should be part of Computer Science 101.

Try the following calculation in any programming language (all non-archaic languages follow the IEEE 754 floating-point standard): `2.3 - 0.3`. Is the answer what you would expect?
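In Elixir (as in any language with IEEE 754 doubles), the result lands slightly below 2.0:

```elixir
# The subtraction is performed on 64-bit IEEE 754 doubles; neither 2.3 nor 0.3
# is exactly representable in binary, so the result is slightly less than 2.0.
result = 2.3 - 0.3
IO.puts(result)          # 1.9999999999999998
IO.puts(result == 2.0)   # false
```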

The problem with floating-point numbers is that we are trying to store an arbitrary number with a radix point in a fixed-size binary representation. If you type the float `7.1`, what is stored is not `7.1`. It actually is approximately `7.099999999999999644728632119949907064437866210937500000000000`. If you want to see it for yourself, try: `:erlang.float_to_binary(7.1, decimals: 60)`.

These inaccuracies become even more pronounced when dealing with large numbers, because the number of representable binary values between two whole numbers becomes much smaller. There is even a point where whole numbers themselves are rounded, because the gap between representable values grows larger than 1.
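You can watch the gaps widen at a known boundary: from 2^53 upward, the spacing between consecutive doubles is already 2, so adding 1.0 silently does nothing.

```elixir
# 2^53 is the first point where consecutive doubles are 2.0 apart.
big = 9007199254740992.0

IO.puts(big + 1.0 == big)   # true  — 1.0 is smaller than the gap and is rounded away
IO.puts(big + 2.0 == big)   # false — 2.0 is representable at this magnitude
```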

Compare `:erlang.float_to_binary(12345678987654321.0, decimals: 1)` with `:erlang.float_to_binary(12345678987654321.1, decimals: 1)`. (Actually, the rounding error is so prevalent here that you can just check IEx's default output instead.)

Another example: `0..10 |> Enum.reduce(fn _x, acc -> acc + 0.1 end)`.
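Running that reduce, the accumulator starts at 0 (the range's first element) and 0.1 is added ten times, yet the result never lands on 1.0:

```elixir
# Reduce without an initial accumulator: acc starts at 0, the range's first
# element, and the function runs for the remaining ten elements.
sum = 0..10 |> Enum.reduce(fn _x, acc -> acc + 0.1 end)

IO.puts(sum)          # 0.9999999999999999
IO.puts(sum == 1.0)   # false — the ten small errors have accumulated
```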

When we do arithmetic operations on floats, these inaccuracies amplify. This is fine for calculations on external measurements, which are imprecise by definition, or for difficult mathematical equations that we can only approximate. It is, however, a problem when we are working with a known, exact, decimal amount, such as when we are counting something. A good example: **Money**.

These rounding issues **have** made money disappear in the past. Please use decimals when dealing with money.
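As a minimal, dependency-free sketch of the idea (in a real project you would reach for a proper decimal library, such as the `Decimal` hex package): keep monetary amounts as integer cents, so every operation stays exact. The prices below are purely illustrative.

```elixir
# Float-based money drifts:
IO.puts(2.30 - 0.30)                    # 1.9999999999999998 "dollars"

# Integer cents stay exact:
price_cents = 230                       # $2.30
discount_cents = 30                     # $0.30
IO.puts(price_cents - discount_cents)   # 200, i.e. exactly $2.00
```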

If you want to read more about this, there is a great explanation in the Python documentation (while the syntax is Python, the examples hold for every other programming language). There is also *What Every Computer Scientist Should Know About Floating-Point Arithmetic*, a very juicy article with a lot of detail.