Floats are Friends: Making the Most of IEEE 754.00000000002

David Wolever

May 04, 2019

Transcript

  1. Floats are Friends: Making the Most of IEEE 754.000000002 - David Wolever @wolever. Excited to be here today talking to you about floating point numbers. - What brought you here today? What are you hoping to learn? Tweet at me, I’d love to hear! - Folks in the front: can you take a picture? - Say this up front: I’ve taken some artistic license with the title; 754 can be exactly represented in floating point, please accept my apology.
  2. @wolever Floats are Friends. They aren’t the best. They also aren’t the worst. But we are definitely stuck with them. Over this talk, I’d like to tell you a bit more about what floats are, how they work, and why they do the weird things they do. And hopefully by the end … well, I don’t expect you to like them. But at least understand them.
  3. @wolever Why do Floats Exist? To answer that, we need to take a step further back and look at how numbers are represented in computers.
  4–8. @wolever Whole Numbers (Integers): Pretty easy. There’s a one-to-one mapping between binary numbers and decimal integers (you might notice negative numbers are missing here …):
 0 = 0 0 0 0 0 0 0
 1 = 0 0 0 0 0 0 1
 2 = 0 0 0 0 0 1 0
 3 = 0 0 0 0 0 1 1
 ⋮
42 = 0 1 0 1 0 1 0
  9. @wolever Whole Numbers (Integers): Two’s Complement. … a representation called two’s complement is used to handle negative numbers. It’s a thing of beauty, and I’d really suggest looking it up. But alas, it’s outside the scope of this talk.
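(As an aside - a minimal sketch, not from the deck, with an arbitrary 8-bit width and a function name of my choosing - two’s complement is easy to play with in Python:)
>>> def twos_complement(n, bits=8):
...     # Negative numbers wrap around modulo 2**bits
...     return n & ((1 << bits) - 1)
>>> bin(twos_complement(-1))
'0b11111111'
>>> bin(twos_complement(-42))
'0b11010110'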
  10–11. @wolever Whole Numbers (Integers) Work Pretty Well:
INT_MIN (32 bit): −2,147,483,648
INT_MAX (32 bit): +2,147,483,647 (about 2 billion)
LONG_MIN (64 bit): −9,223,372,036,854,775,808
LONG_MAX (64 bit): +9,223,372,036,854,775,807 (about 9 quintillion)
  12–13. @wolever Fractional Numbers (Reals): A Bit More Difficult. One option: fix the position of the (binary) point:
0     = 0 . 0 0 0
0.125 = 0 . 0 0 1
0.25  = 0 . 0 1 0
0.375 = 0 . 0 1 1
0.5   = 0 . 1 0 0
0.875 = 0 . 1 1 1
⋮
  14–16. @wolever Fractional Numbers (Reals): A Bit More Difficult.
FIXED(16, 16) smallest: 1.5 × 10^-5 ≈ 2^-16
FIXED(16, 16) largest: 65,535.999985 ≈ 2^16 − 2^-16
FIXED(32, 32) smallest: 2.3 × 10^-10 ≈ 2^-32
FIXED(32, 32) largest: 4,294,967,296 ≈ 2^32 − 2^-32
(ignoring negative numbers)
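(A minimal sketch of such a fixed-point scheme, for illustration - the names here are mine, not the deck’s:)
>>> SCALE = 16  # FIXED(16, 16): 16 integer bits, 16 fractional bits
>>> def to_fixed(x):
...     # Store the value as an integer count of 1/2**SCALE steps
...     return round(x * (1 << SCALE))
>>> def from_fixed(n):
...     return n / (1 << SCALE)
>>> from_fixed(to_fixed(3.14159))
3.1415863037109375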
  17–19. @wolever Fractional Numbers (Reals): A Bit More Difficult.
Pluto: 7.5e12 m (7.5 billion kilometres)
Water molecule: 2.8e-10 m (0.28 nanometers)
>>> distance_to_pluto = number(7.5, scale=12)
>>> size_of_water = number(2.8, scale=-10)
In the real world, it’s not uncommon to deal with numbers outside this range. But this naive system, where we’re fixing the position of the decimal place - a fixed-point number, if you will - begins to run into problems, because it can’t represent the range of numbers we’ll encounter. What we want is to move that decimal place around - to float it. Almost like you’ve implemented a floating point.
  20–27. @wolever Floating Point Numbers
± E E E E F F F F F F F
Sign (+ or -)
Exponent
Fraction (also called the mantissa, if you’re trying to sound fancy)
value = sign × frac × 2^exp
… if you’ve ever seen scientific notation, this is exactly the same thing; this is base-2 scientific notation.
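(For a concrete look at those three fields - a sketch of mine, not a slide - Python’s struct module can expose the bits of a real 64-bit double:)
>>> import struct
>>> bits = struct.unpack('>Q', struct.pack('>d', 0.5))[0]
>>> sign = bits >> 63
>>> exponent = (bits >> 52) & 0x7FF  # 11 exponent bits
>>> fraction = bits & ((1 << 52) - 1)  # 52 fraction bits
>>> sign, exponent - 1023, fraction  # doubles use an exponent bias of 1023
(0, -1, 0)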
  28–32. @wolever Floating Point Numbers
0 1 0 0 0 0 0 1 = 0.5 (1 × 2^(3−4))
0 1 0 1 1 1 0 1 = 3.25 (13 × 2^(3−5))
1 0 0 0 1 0 1 1 = −88 (11 × 2^(3−0))
1 1 1 0 0 0 0 1 = −0.125 (1 × 2^(3−6))
Exponent bias: half the exponent’s maximum value.
Caveat: FPs are stored normalized, so the leading "1" is omitted.
  33. @wolever Neat! We’ve got a number system that can scale to represent very small numbers or very large numbers.
  34. @wolever Floating Point Numbers
                 exponent   fraction   smallest    largest
 32 bit (float)    8 bits    23 bits   1.18e-38    3.4e+38
 64 bit (double)  11 bits    52 bits   2.2e-308    1.8e+308
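(If you want to check these limits yourself - an aside, not in the deck - sys.float_info reports them for the 64-bit doubles Python uses:)
>>> import sys
>>> sys.float_info.max
1.7976931348623157e+308
>>> sys.float_info.min  # smallest *normalized* positive double
2.2250738585072014e-308
>>> sys.float_info.dig  # decimal digits that round-trip reliably
15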
  35–36. @wolever Floating Point Numbers: A Tradeoff. Precision (how small can we get?) vs Magnitude (how big can we get?). We can measure the distance to Pluto (but it won’t be reliable down to the meter). We can measure the size of a water molecule (but not a billion of them at the same time).
  37–40. @wolever Floating Point Numbers: wat
>>> 1.0
1.0
>>> 1e20
1e+20
>>> 1e20 + 1
1e+20
>>> 1e20 + 1 == 1e20
True
  41–46. @wolever Floating Point Numbers: wat do?
1. Rule of thumb: doubles have 15 significant digits.
2. Precision is lost when adding or subtracting numbers with different magnitudes:
>>> 12345 + 1e15
1000000000012345
>>> 12345 + 1e16
10000000000012344
>>> 12345 + 1e17
100000000000012352
(multiplication and division are fine, though!)
  47–50. @wolever Floating Point Numbers: wat do?
3. Use a library to sum floats:
>>> sum([-1e20, 1, 1e20])
0.00000000000000000000
>>> math.fsum([-1e20, 1, 1e20])
1.00000000000000000000
>>> np.sum([-1e20, 1, 1e20])
0.00000000000000000000
See also: accupy
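(To give a feel for how such libraries work - a simplified sketch, not the Shewchuk algorithm that math.fsum actually uses - here is Kahan’s compensated summation:)
>>> def kahan_sum(values):
...     total = 0.0
...     compensation = 0.0  # running estimate of the lost low-order bits
...     for x in values:
...         y = x - compensation
...         t = total + y  # big + small: y's low bits are rounded away ...
...         compensation = (t - total) - y  # ... but can be recovered here
...         total = t
...     return total
>>> sum([0.1] * 10)
0.9999999999999999
>>> kahan_sum([0.1] * 10)
1.0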
  51. @wolever Floating Point Numbers: A Tradeoff. Not every real number can be represented: some have infinitely many digits (π, e, etc), and some simply can’t be expressed in binary (0.1).
  52. @wolever Floating Point Numbers: wat. Note: floating point values will be shown to 20 decimal places:
>>> 0.1
0.10000000000000000555
>>> "%0.20f" %(0.1, )
'0.10000000000000000555'
  53. @wolever Floating Point Numbers: A Tradeoff. Each real number is rounded to the nearest value that can be represented:
0.1 → 0.100000005
3.1415926… → 3.1416
0.5 → 0.5
1.0 → 1.0
(the difference between a real number and the nearest number that can be represented is called "relative error")
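(You can compute that error exactly with the fractions module - a sketch of mine, not from the slides:)
>>> from fractions import Fraction
>>> rel_err = abs(Fraction(0.1) - Fraction(1, 10)) / Fraction(1, 10)
>>> float(rel_err)  # exactly 2**-54 for 0.1
5.551115123125783e-17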
  54–59. @wolever Floating Point Numbers: wat
>>> 0.1
0.10000000000000000555
>>> 0.2
0.20000000000000001110
>>> 0.3
0.29999999999999998890
>>> 0.1 + 0.2
0.30000000000000004441
>>> sum([0.1] * 10)
0.99999999999999988898
>>> 0.1 * 10
1.00000000000000000000
One of the problems with this roundoff error is that it compounds.
  60–62. @wolever Floating Point Numbers: wat do?
1. Remember that every operation introduces some error (nothing you can do about this).
2. Be careful when comparing floats, especially to 0.0 - think of the dot product of two perpendicular vectors, which should be exactly zero but almost never is:
>>> np.isclose(0.1 + 0.2 - 0.3, 0.0)
True
>>> def isclose(a, b, epsilon=1e-8):
...     return abs(a - b) < epsilon
>>> isclose(0.1 + 0.2, 0.3)
True
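(An aside: the standard library’s math.isclose does this comparison with a relative tolerance by default, which holds up better across magnitudes than a fixed epsilon:)
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True
>>> math.isclose(1e-20, 0.0)  # relative tolerance never matches zero; pass abs_tol for that
False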
  63. @wolever Floating Point Numbers: wat do?
3. Round floats to the precision you need before displaying them:
>>> "%0.2f" %(0.1, )
'0.10'
>>> "%0.2f" %(0.1 + 0.2, )
'0.30'
>>> "%0.2f" %(sum([0.1] * 10), )
'1.00'
  64. @wolever the weird parts: inf / -inf
± 1 1 1 0 0 0 0 (exponent all ones, fraction all zeros)
>>> inf = float('inf')
>>> inf > 1e308
True
>>> inf > inf
False
  65. @wolever the weird parts: inf / -inf. Result of overflowing a large number:
>>> 1e308 + 1e308
inf
>>> -1e308 - 1e308
-inf
  66–67. @wolever the weird parts: inf / -inf. Result of dividing by zero (sometimes):
>>> np.array([1.0]) / np.array([0.0])
RuntimeWarning: divide by zero encountered in divide
array([inf])
>>> 1.0 / 0.0
…
ZeroDivisionError: float division by zero
  68. @wolever the weird parts: -0
± 0 0 0 0 0 0 0 (all bits zero except, possibly, the sign)
Result of underflowing a small number:
>>> float('-0')
-0.0
>>> -1e-323 / 10
-0.0
  69. @wolever the weird parts: -0. "Useful" to know the sign of inf when dividing by 0:
>>> np.array([1.0, 1.0]) / np.array([float('0'), float('-0')])
array([ inf, -inf])
  70. @wolever the weird parts: -0. Otherwise behaves like 0:
>>> float('-0') == float('0')
True
>>> float('-0') / 42.0
-0.0
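(Since -0.0 compares equal to 0.0, telling them apart takes something like math.copysign - an aside, not in the deck:)
>>> import math
>>> math.copysign(1, float('-0'))
-1.0
>>> math.copysign(1, float('0'))
1.0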
  71. @wolever the weird parts: nan. Not A Number. NaN is a number that - like it says right there on the box - is Not A Number.
  72–73. @wolever the weird parts: nan
± 1 1 1 0 0 0 1 (exponent all ones, fraction non-zero)
Result of mathematically undefined operations:
>>> float('inf') / float('inf')
nan
Although Python is more helpful:
>>> math.sqrt(-1)
ValueError: math domain error
  74–77. @wolever the weird parts: nan. Wild, breaks everything:
>>> nan = float('nan')
>>> nan == nan
False
>>> 1 > nan
False
>>> 1 < nan
False
>>> 1 + nan
nan
  78. @wolever the weird parts: nan. Useful if you want to ignore invalid values:
>>> a = np.array([1.0, 0.0, 3.0])
>>> b = np.array([5.0, 0.0, 7.0])
>>> np.nanmean(a / b)
0.3142857142857143
  79. @wolever the weird parts: nan. Check for nan with isnan or x != x:
>>> math.isnan(nan)
True
>>> nan != nan
True
  80. @wolever the weird parts: nan. Pop quiz: how many nans are there? 2^52:
± 1 1 1 X X X X
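(Any non-zero fraction pattern decodes as nan - a quick check of mine, not a slide, packing two different bit patterns into doubles:)
>>> import math, struct
>>> a = struct.unpack('>d', struct.pack('>Q', 0x7FF0000000000001))[0]
>>> b = struct.unpack('>d', struct.pack('>Q', 0x7FF8DEADBEEF0042))[0]
>>> math.isnan(a), math.isnan(b)
(True, True)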
  81. @wolever the weird parts: nan. Why not use all those nans as pointers?
* The top 16-bits denote the type of the encoded JSValue:
*
*     Pointer {  0000:PPPP:PPPP:PPPP
*              / 0001:****:****:****
*     Double  {  ...
*              \ FFFE:****:****:****
*     Integer {  FFFF:0000:IIII:IIII
(from WebKit’s JSCJSValue.h)
  82–84. @wolever the weird parts: nan
JsObj JsObj_add(JsObj a, JsObj b)
{
    if (JS_IS_DOUBLE(a) && JS_IS_DOUBLE(b))
        return a + b;
    if (JS_IS_STRING_REF(a) && JS_IS_STRING_REF(b))
        return JsString_concat(a, b);
    ...
}
  85. @wolever I want to offer you some hope. We’ve talked about how precision can be lost when numbers of different magnitudes are added or subtracted, and we’ve talked about the problems that come up when trying to convert between binary and decimal fractions.
  86–88. @wolever decimal. Exact representations of decimal numbers. The "nearest number" rounding will still happen, but it will be more sensible. Precision still needs to be specified … but the default is 28 significant digits.
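(That default lives on the decimal context, which you can inspect or override locally - an aside, not from the deck:)
>>> from decimal import Decimal, getcontext, localcontext
>>> getcontext().prec
28
>>> with localcontext() as ctx:
...     ctx.prec = 5
...     Decimal(1) / Decimal(7)
...
Decimal('0.14286')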
  89–92. @wolever decimal
>>> from decimal import Decimal
>>> d = Decimal('0.1')
>>> d + d + d + d + d + d + d + d + d + d
Decimal('1.0')
>>> pi = Decimal(math.pi)
>>> pi
Decimal('3.141592653589793115997963…')
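(Note what the Decimal(math.pi) example above shows: constructing a Decimal from a float faithfully copies the float’s binary rounding error, so prefer string literals when you mean an exact decimal - an aside:)
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal('0.1')
Decimal('0.1')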
  93–95. @wolever decimal
In [1]: d = Decimal('42')
In [2]: %timeit d * d
100,000 loops, best of 3: 7.28 µs per loop
In [3]: f = 42.0
In [4]: %timeit f * f
10,000,000 loops, best of 3: 44.6 ns per loop
  96–99. @wolever decimal
>>> from pympler.asizeof import asizeof
>>> asizeof(42.0)
24
>>> asizeof(1e308)
24
>>> asizeof(Decimal('42'))
168
>>> asizeof(Decimal('1e308'))
192
  100. Selected References
• "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://docs.sun.com/source/806-3568/ncg_goldberg.html (note: very math and theory heavy; not especially useful)
• "Points on Floats": https://matthew-brett.github.io/teaching/floating_point.html#floating-point (much more approachable)
• "Float Precision - From Zero to 100+ Digits": https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/ (a good series of blog posts on floats and precision)
• John von Neumann’s thoughts on floats: https://library.ias.edu/files/Prelim_Disc_Logical_Design.pdf (section 5.3; page 18)