### As cartoon-me mentioned above, floating-point numbers get their name from the way that the decimal point can “float” anywhere within the number.

Floating-point numbers are stored in computer memory as a sign, a mantissa (aka significand) [1], and an exponent applied to a fixed base, like so (except a computer would use base 2 instead of base 10, since computers operate in binary): [2]
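Swift will even let you pull those pieces out of a number yourself — a quick sketch (the property names come from the standard library’s `FloatingPoint` protocol; the variable is mine):

```swift
let x = 6.022e23   // written with a base-10 significand (6.022) and exponent (23)

// Under the hood, Swift stores a sign, a base-2 significand, and a base-2 exponent:
print(x.sign)          // plus
print(x.significand)   // a value between 1 and 2
print(x.exponent)      // the power of 2 it gets multiplied by

// Putting the pieces back together reproduces the original value exactly:
let rebuilt = Double(sign: x.sign, exponent: x.exponent, significand: x.significand)
print(rebuilt == x)    // true
```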

Floating-point numbers come in two basic flavors in Swift: **Float** and **Double**. If you declare a floating-point variable without explicitly specifying type, the Swift compiler will assume that your variable is a **Double**.

You must explicitly declare **Float**s, and you have the option to explicitly declare **Double**s, although it’s the “Swifty” way to let the compiler infer type whenever it can.
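A quick sketch of how that plays out (variable names are mine):

```swift
let inferred = 3.14159             // no type written, so the compiler infers Double
let explicit: Double = 3.14159     // explicitly a Double — allowed, just not required
let small: Float = 3.14159         // Floats must be asked for by name

print(type(of: inferred))   // Double
print(type(of: small))      // Float
```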

**Double** ~~Trouble~~ Precision

You now know why floating-point numbers are called **Float**s, but what are **Double**s?

**Double**s are just **Float**s with double the storage and more than double the precision. *Precision* here means the number of bits used to represent a number in computer memory (in particular, the bits available for its significand). Since everything is represented in terms of 0s and 1s, precision is super important: rounding errors creep in whenever a value can’t be represented exactly in the bits available. A **Float** “has 32-bit precision” — it takes up 32 bits — and a **Double** has 64-bit precision.

**Double**s are preferred because they are more accurate representations of floating-point numbers, which is why the Swift compiler automatically assumes that any implicitly declared floating-point number is a **Double**.
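You can see both the storage difference and the precision difference from code — a small sketch (the exact printed digits may vary by platform, but the shapes won’t):

```swift
// Storage: 32 bits (4 bytes) vs. 64 bits (8 bytes).
print(MemoryLayout<Float>.size)    // 4
print(MemoryLayout<Double>.size)   // 8

// Precision: most of the extra bits go to the significand.
print(Float.significandBitCount)   // 23
print(Double.significandBitCount)  // 52

// Which is why a Double hangs on to many more digits of 1/3:
let roughThird: Float = 1.0 / 3.0
let preciseThird: Double = 1.0 / 3.0
print(roughThird)      // 0.33333334
print(preciseThird)    // 0.3333333333333333
```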

You can do the usual operations on floating-point numbers:
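Something along these lines (the values are mine):

```swift
let a = 7.5
let b = 2.5

print(a + b)   // 10.0
print(a - b)   // 5.0
print(a * b)   // 18.75
print(a / b)   // 3.0
```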

And you still can’t do operations on mixed types:
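For example (hypothetical names), mixing an **Int** with a **Double** won’t compile until you convert one of them explicitly:

```swift
let scoops: Int = 2
let pricePerScoop: Double = 1.5

// let bill = scoops * pricePerScoop        // error: can't multiply Int and Double
let bill = Double(scoops) * pricePerScoop   // convert first, then multiply
print(bill)                                 // 3.0
```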

**“I’m Not Who You Think I Am”**

*Credit for this part goes to **Swift Programming: The Big Nerd Ranch Guide**. I consult a lot of references to make this series, and **Swift Programming** was the only one that mentioned this particular pitfall. The example below has been adapted from page 30 of the book.*

No matter how good their precision, floating-point numbers are still inherently imprecise — which means that two values that should be the same sometimes aren’t, simply because of the way they’re stored in binary on the computer.

Let’s say you have two ice-cream floats, one with no cherry and the other with one cherry on top. Assume that each scoop has a value of 2.2, and each cherry has a value of 0.1. In code, they would look like this:
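The original snippet isn’t shown here, so this is a reconstruction using the names from the text — with one caveat: for the surprise below to actually reproduce, the per-scoop value has to be a number that binary can’t represent exactly (2.2 works; a scoop worth exactly 2 would make the later comparison succeed):

```swift
let cherry = 0.1
let iceCreamFloat1 = 2.2    // one scoop, no cherry
let iceCreamFloat2 = 2.3    // one scoop plus one cherry, typed in as a literal
```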

Printing the values would give you the following output:
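A sketch of that check — the output looks perfectly innocent:

```swift
let iceCreamFloat1 = 2.2
let iceCreamFloat2 = 2.3

print(iceCreamFloat1)   // 2.2
print(iceCreamFloat2)   // 2.3
```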

What would happen if you added a cherry to the first ice-cream float?

You would assume that **iceCreamFloat1** + **cherry** has the value of **iceCreamFloat2**, and you would be right …

… Sort of.

Since you’ve got good engineering principles, you decide to write a simple test rather than assuming this to be true:
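A sketch of such a test (the message wording is mine):

```swift
let cherry = 0.1
let iceCreamFloat1 = 2.2
let iceCreamFloat2 = 2.3

if iceCreamFloat1 + cherry == iceCreamFloat2 {
    print("The two floats are the same!")
} else {
    print("The two floats are different?!")
}
```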

However, the console gives you a totally unexpected result:
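Assuming a test that prints one message when the values compare equal and another when they don’t, the console takes the branch you didn’t expect:

```
The two floats are different?!
```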

WTF? After testing it a few more times and getting the same result, you write another test to make sure that you aren’t losing your ability to do basic math:
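For instance (a sketch — whole numbers are stored exactly in binary, so this one behaves):

```swift
if 2.0 + 1.0 == 3.0 {
    print("Basic math still works.")   // this prints: whole numbers have no rounding error
}
```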

Okay, now you’re really suspicious. There must be something wrong with your compiler. Maybe your Xcode is bugging out. Maybe your Mac is defective?

As it turns out, this isn’t a mistake so much as it is a limitation that comes from the way your computer stores floating-point numbers. It doesn’t store the value of **iceCreamFloat1 + cherry** exactly; it stores the closest value that fits in binary, and the rounding error in that sum can differ from the rounding error in the value of **iceCreamFloat2**, the literal that you typed in. The two stored values can disagree in their very last bits even though, on paper, they’re the same number. Printing can hide this: early versions of Swift rounded both values to the same short decimal, while newer versions print the shortest decimal string that uniquely identifies each stored value, so the sum shows a stray tail of digits. Either way, since the two values aren’t “equal” under the hood, the equality operator says they’re different.

To make this even more confusing, sometimes the two values *are* stored as the same, and the equality operator will work:
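For example, on a typical IEEE 754 machine this pair happens to round to the very same stored value, so the comparison succeeds:

```swift
let scoop = 2.0    // a whole number, stored exactly
let cherry = 0.1   // not exact — but the sum rounds to the same bits as the literal 2.1
print(scoop + cherry == 2.1)   // true
```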

Yeaaaaaahhhhh. This is really annoying (and you can test this out for yourself: the code is available here). Floating-point precision is pretty … imprecise, so you shouldn’t use floats or doubles for values that really need to be precise (leave them out of your finance-calculating apps!).

**Exercises**

- Come up with a few cases of your own to test the equality operator with floating-point numbers. Which numbers work? Which don’t?

**Notes**

[1] “Mantissa” is such a funny word, isn’t it? At first I thought it was the game with the wooden board and glass pebbles that my sister used to make me play all the time, but that’s *Mancala*, not *Mantissa*. The word “mantissa” is of Latin origin and means … “of unknown origin.” Yes, really.

[2] Exact implementation details are hand-waved in this article, but if you’re wondering exactly how floating-point numbers are stored in memory (and how to convert them to binary yourself!), check out this video.

**Sources**

Decimal to Floating-Point Conversions
