Wolever @wolever

Excited to be here today talking to you about floating point numbers.
- What brought you here today? What are you hoping to learn? Tweet at me, I’d love to hear!
- Folks in the front: can you take a picture?
- Say this up front: I’ve taken some artistic license with the title; 754 can be exactly represented in floating point, please accept my apology.
Floats aren’t the worst. But we are definitely stuck with them. Over the course of this talk, I’d like to tell you a bit more about what floats are, how they work, and why they do the weird things they do. And hopefully by the end … well, I don’t expect you to like them. But you’ll at least understand them.
Two’s complement is used to handle negative numbers. It’s a thing of beauty, and I’d really suggest looking it up. But alas, it’s outside the scope of this talk.
Distance to Pluto: 7.5e12 m (7.5 billion kilometres)
Water molecule: 2.8e-10 m (0.28 nanometers)

>>> distance_to_pluto = number(7.5, scale=12)
>>> size_of_water = number(2.8, scale=-10)

In the real world, it’s not uncommon to deal with numbers outside this range. But this naive system, where we’re fixing the position of the decimal place - a fixed point number, if you will - begins to run into problems, because it can’t represent the range of numbers we’ll encounter. … to move that decimal place around - to float it. Almost like you’ve implemented a floating point.
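To make the idea concrete, here is a minimal sketch (the number(...) call above is hypothetical, so the significand/scale names below are just my illustration): store a few significant digits plus a separate power-of-ten scale, and the same handful of digits can describe both enormous and tiny quantities.

>>> from collections import namedtuple
>>> number = namedtuple("number", ["significand", "scale"])
>>> def value(n):
...     return n.significand * 10 ** n.scale
>>> distance_to_pluto = number(7.5, scale=12)
>>> size_of_water = number(2.8, scale=-10)
>>> value(distance_to_pluto)
7500000000000.0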
[bit layout diagram: a sign bit S, exponent bits E, and fraction bits F]

Sign (+ or -)
Exponent
Fraction (also called the mantissa, if you’re trying to sound fancy)

value = sign × fraction × 2^exponent

… if you’ve ever seen scientific notation, this is exactly the same thing; this is base 2 scientific notation.
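To make those three fields concrete, here is a small sketch of my own (not from the slides) that unpacks a Python float, i.e. a 64-bit IEEE 754 double with 1 sign bit, 11 exponent bits, and 52 fraction bits, and then rebuilds its value from the formula above (for normal numbers the stored exponent is biased by 1023 and the fraction has an implicit leading 1):

>>> import struct
>>> def float_fields(x):
...     # reinterpret the 8 bytes of the double as a 64-bit unsigned integer
...     bits, = struct.unpack(">Q", struct.pack(">d", x))
...     # split off the sign (1 bit), exponent (11 bits), and fraction (52 bits)
...     return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)
>>> sign, exponent, fraction = float_fields(0.1)
>>> sign, exponent, fraction
(0, 1019, 2702159776422298)
>>> (-1) ** sign * (1 + fraction / 2 ** 52) * 2 ** (exponent - 1023)
0.1

The exponent 1019 is 1023 - 4, so 0.1 is stored as roughly 1.6 × 2^-4; since 0.6 has no exact binary representation, the fraction field holds the closest 52-bit approximation.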
How small can we get? How big can we get?
- We can measure the distance to Pluto (but it won’t be reliable down to the meter)
- We can measure the size of a water molecule (but not a billion of them at the same time)
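The slide doesn’t say which float width it has in mind here, but a quick sketch with 32-bit floats (my assumption, purely for illustration) shows the “not reliable down to the meter” point: near 7.5e12 the gap between adjacent single-precision values is roughly half a million meters, so adding one meter changes nothing.

>>> import numpy as np
>>> pluto = np.float32(7.5e12)               # distance to Pluto, in meters, as a 32-bit float
>>> bool(pluto + np.float32(1.0) == pluto)   # one extra meter is lost entirely
True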
1. Doubles have 15 significant digits
2. Precision is lost when adding or subtracting numbers with different magnitudes:

>>> int(12345 + 1e15)
1000000000012345
>>> int(12345 + 1e16)
10000000000012344
>>> int(12345 + 1e17)
100000000000012352

(multiplication and division are fine, though!)
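One extra illustration of the same point (mine, not from the slides): once the small number has been folded into the big one, part of it is gone for good, so subtracting the big number back out doesn’t recover the original value.

>>> big = 1e15
>>> (big + 12345) - big
12345.0
>>> big = 1e16
>>> (big + 12345) - big
12344.0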
>>> print(f"{0.2:.20f}")
0.20000000000000001110
>>> print(f"{0.3:.20f}")
0.29999999999999998890
>>> print(f"{0.1 + 0.2:.20f}")
0.30000000000000004441
>>> print(f"{sum([0.1] * 10):.20f}")
0.99999999999999988898
>>> print(f"{0.1 * 10:.20f}")
1.00000000000000000000

One of the problems with this roundoff error is that it compounds.
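To show the compounding more directly, here is a quick example of my own (not from the slides): ten additions of the not-quite-0.1 value are off by about one part in 10^16, while a million additions accumulate a much larger error.

>>> abs(sum([0.1] * 10) - 1.0)
1.1102230246251565e-16
>>> abs(sum([0.1] * 1_000_000) - 100_000.0) > 1e-9
True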
1. Every operation introduces some error (nothing you can do about this)
2. Be careful when comparing floats (especially to 0.0):

>>> import numpy as np
>>> np.isclose(0.1 + 0.2 - 0.3, 0.0)
True
>>> def isclose(a, b, epsilon=1e-8):
...     return abs(a - b) < epsilon
>>> isclose(0.1 + 0.2, 0.3)
True

(for example, the dot product of two perpendicular vectors should be zero, but often won’t come out as exactly 0.0)
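Here is a small sketch of that perpendicular-vectors case (my own numbers, chosen for illustration): the two vectors below are mathematically perpendicular, but the computed dot product is a tiny nonzero value, so it should be compared against a tolerance rather than against 0.0.

>>> a = (0.1, 0.3)
>>> b = (3.0, -1.0)   # perpendicular to a: 0.1*3.0 + 0.3*(-1.0) == 0 on paper
>>> dot = a[0] * b[0] + a[1] * b[1]
>>> dot
5.551115123125783e-17
>>> dot == 0.0
False
>>> isclose(dot, 0.0)   # the isclose defined above
True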
Result of dividing by zero (sometimes):

>>> import numpy as np
>>> np.array([1.0]) / np.array([0.0])
RuntimeWarning: divide by zero encountered in divide
array([inf])
>>> 1.0 / 0.0
…
ZeroDivisionError: float division by zero
We’ve talked about how precision can be lost when numbers of different magnitudes are added or subtracted. And we’ve talked about the problems that come up when trying to convert between binary and decimal fractions.
• "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://docs.sun.com/source/806-3568/ncg_goldberg.html (note: very math and theory heavy; not especially useful)
• "Points on Floats": https://matthew-brett.github.io/teaching/floating_point.html#floating-point (much more approachable)
• "Float Precision–From Zero to 100+ Digits": https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/ (a good series of blog posts on floats and precision)
• John von Neumann’s thoughts on floats: https://library.ias.edu/files/Prelim_Disc_Logical_Design.pdf (section 5.3; page 18)