0.1 + 0.2 !== 0.3

Adding decimal numbers together, i.e. numbers like 0.1, 3.14, and so on, can be one of the most confusing aspects of programming.

As I was teaching students in my introduction to computer science class about the addition operator, I mentioned the fact that when we add decimal numbers together, we don't really check for equality. We typically check whether the absolute difference between the two values is less than some small \epsilon, which denotes that the difference is small enough for us to consider the values more or less equal.

A small code snippet would look something like this in JavaScript:

const EPSILON = 1e-9

function isDecimalEqual(a, b) {
  return Math.abs(a - b) < EPSILON
}

// Expected: true
console.log(isDecimalEqual(0.1 + 0.2, 0.3))
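As an aside, JavaScript also ships a built-in constant, Number.EPSILON, which is the gap between 1 and the next representable number (2^-52). A sketch of a relative comparison built on it might look like this; scaling by the larger magnitude (with a floor of 1) is my own choice here, not something the language mandates:

// A relative-tolerance variant using the built-in Number.EPSILON.
// The scaling factor is an assumption; pick a tolerance that suits your data.
function isApproximatelyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON * Math.max(Math.abs(a), Math.abs(b), 1)
}

// Expected: true
console.log(isApproximatelyEqual(0.1 + 0.2, 0.3))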

So now the bigger question to answer is: why is this the case?

Computers need to translate the programs we write into instructions and data they can actually work with. This means that all our values and operations end up with corresponding binary representations.

Decimal numbers are interesting in that they can have either finite or infinite binary representations.

We say that a decimal number has a finite binary representation if and only if it can be expressed as a reduced fraction of the form

\frac{\alpha}{2^n}

for some integers \alpha \neq 0 and n \ge 0.
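You can poke at this distinction directly with Number.prototype.toString(2), which prints a number's binary expansion. Values that fit the form above terminate; a quick sketch:

// 1/2, 3/4, and 5/8 all have power-of-two denominators,
// so their binary expansions terminate.
console.log((0.5).toString(2))   // "0.1"
console.log((0.75).toString(2))  // "0.11"
console.log((0.625).toString(2)) // "0.101"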

For example, if we take 0.1, we can see that as a fraction it has the form

\frac{1}{10}

where 10 is not a power of 2. So we know that 0.1 has an infinite binary representation.

Similarly, 0.2 when reduced has the form

\frac{1}{5}

where again 5 is not a power of 2, so 0.2 also has an infinite binary representation. The same goes for their exact sum, 0.3 = \frac{3}{10}, since 10 is not a power of 2 either.
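Worked out by hand (repeatedly multiply the fractional part by 2 and read off the whole-number digits), the exact expansions all end up repeating the block 0011 forever:

0.1 = 0.0\overline{0011}_2 \qquad 0.2 = 0.\overline{0011}_2 \qquad 0.3 = 0.01\overline{0011}_2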

Now where does the rounding error come in? The programming language can only keep a fixed number of bits per value; in JavaScript's case, every number is a 64-bit IEEE 754 double. So 0.1 and 0.2 are each rounded to their nearest representable 64-bit values before the addition even happens, the sum is rounded again, and 0.3 is rounded to its own nearest 64-bit value. Those rounded values don't quite line up.

This difference in bits between 0.1 + 0.2 and 0.3 is enough to cause an error when comparing the two values!
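You can see the symptom without any bit-level inspection; this is what JavaScript itself prints:

// The sum lands on a slightly different 64-bit value than the literal 0.3.
console.log(0.1 + 0.2)          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3)  // false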

Here is some code that inspects the actual bits to prove this:

function toBinary64(num) {
  // Back an 8-byte (64-bit) buffer with a DataView so we can write the
  // number as an IEEE 754 double and read it back byte by byte.
  const buffer = new ArrayBuffer(8)
  const view = new DataView(buffer)

  // false = big-endian, so the sign and exponent bits come first.
  view.setFloat64(0, num, false)

  // Render each byte as 8 binary digits.
  const binary = []
  for (let i = 0; i < 8; i++) {
    binary.push(view.getUint8(i).toString(2).padStart(8, '0'))
  }

  return binary.join(' ')
}

console.log('0.1 + 0.2:', toBinary64(0.1 + 0.2))
console.log('0.3:', toBinary64(0.3))

Note that the first statement will log: 0.1 + 0.2: 00111111 11010011 00110011 00110011 00110011 00110011 00110011 00110100

The second one will log: 0.3: 00111111 11010011 00110011 00110011 00110011 00110011 00110011 00110011

Notice that the two 64-bit representations above are identical except for the last 3 bits.

This is enough for the strict equality comparison to say that the values are not equal!
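Numerically, those 3 bits amount to a gap of one step between adjacent doubles near 0.3, which works out to 2^-54. That is comfortably below the 1e-9 tolerance from the snippet at the top, which is why the closeness check succeeds:

// The actual gap between the two stored values: 2^-54 ≈ 5.551115123125783e-17
console.log(Math.abs((0.1 + 0.2) - 0.3))

// Far smaller than the EPSILON we used earlier, so isDecimalEqual returns true.
console.log(Math.abs((0.1 + 0.2) - 0.3) < 1e-9) // true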

So to recap, the reason why operations between decimal values sometimes give unintuitive results is primarily because of how they are represented in memory. Some values have infinite representations in binary, but they have to be truncated or rounded to fit in a fixed number of bits.

This is why we generally use closeness instead of true equality when comparing decimal values.