Floating Point 101
kida
February 06, 2013
Programming
A very very basic introduction to FP.
With some inaccuracies.
Transcript
FLOATING POINT 101
FLOATING POINT 100.999998
we are engineers
we are researchers
3.141592653589793238462643383279502884197169399375105820974944592307816406286208
NUMBERS WE PLAY WITH ALL DAY LONG
well, sometimes even at night. (yawn).
So, what is a floating point?
A floating point is ±D1.D2D3···Dn × B^e, where ± is the sign, D1.D2D3···Dn is the significand, B is the base and e is the exponent.
A floating point represents ±(D1 + D2·B^-1 + D3·B^-2 + … + Dn·B^-(n-1)) × B^e
For example: +3.14 × 10^0 = (3 + 1·0.1 + 4·0.01) · 1 = 3.14
The point can float! +3.14 × 10^-1 = 0.314
The point can float! +3.14 × 10^+1 = 31.4
What if B = 2? +1.00 × 2^+2 = 4.0
Like machines do. http://grouper.ieee.org/groups/754/
Normalization of floating point
Multiple representations: +0.01 × 2^2 = 1.0, +0.10 × 2^1 = 1.0, +1.00 × 2^0 = 1.0
Normalized representation: +1.00 × 2^0 = 1.0
Normalized representation: +(1.)000 × 2^0, the leading 1 is omitted: there's room for an extra digit!
Excess-127 representation: -127 → 0, -126 → +1, …, -1 → +126, 0 → +127
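As a sketch, assuming IEEE-754 binary32, the sign, the excess-127 exponent and the significand fields can be pulled out of a float like this (the example value 4.0f, the memcpy trick and the printed layout are only illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = 4.0f;                          /* +1.00 x 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);          /* reinterpret the 32 bits */

        uint32_t sign     = bits >> 31;          /* 1 bit */
        uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, excess-127 */
        uint32_t fraction = bits & 0x7FFFFF;     /* 23 bits, leading 1 omitted */

        printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
               sign, exponent, (int)exponent - 127, fraction);
        return 0;
    }

For 4.0f the stored exponent is 129, i.e. 2 after removing the 127 bias, and the fraction bits are all zero because the leading 1 is implicit.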
#include <float.h>  FLT_MIN, FLT_MAX, ...
#include <math.h>   M_PI, M_E, NAN, INFINITY, ...
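A small sketch of what those headers give you (M_PI and M_E are POSIX extensions rather than strict ISO C, so some compilers need a feature-test macro to expose them):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        printf("FLT_MIN     = %e\n", FLT_MIN);      /* smallest positive normalized float */
        printf("FLT_MAX     = %e\n", FLT_MAX);      /* largest finite float */
        printf("FLT_EPSILON = %e\n", FLT_EPSILON);  /* gap between 1.0f and the next float */
        printf("M_PI        = %.15f, M_E = %.15f\n", M_PI, M_E);
        printf("NAN = %f, INFINITY = %f\n", (double)NAN, (double)INFINITY);
        return 0;
    }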
Why no exact representation for 0.1?
FLOATING POINT is used to represent REAL NUMBERS
FLOATING POINT denotes a (finite) subset of RATIONAL NUMBERS
0.1 cannot be expressed as a finite sum of powers of 2: +??? × 2^??
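You can see it directly: print a float that was assigned 0.1 with plenty of digits (a minimal sketch; the exact digits depend on the platform's binary32 rounding):

    #include <stdio.h>

    int main(void) {
        float x = 0.1f;
        printf("%.20f\n", x);   /* prints something like 0.10000000149011611938 */
        return 0;
    }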
It's also a matter of precision. With a two-bit significand (plus the implicit leading 1) and exponent 0, the representable values are:
+(1.)00 × 2^0 = 1, +(1.)01 × 2^0 = 1.25, +(1.)10 × 2^0 = 1.5, +(1.)11 × 2^0 = 1.75
π/2 falls between two representable values (between 1.5 and 1.75).
Not just a matter of precision or basis... With exponent 1 the values are:
+(1.)00 × 2^1 = 2.0, +(1.)01 × 2^1 = 2.5, +(1.)10 × 2^1 = 3.0
The gap between consecutive representable values doubles as the exponent grows.
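A sketch that makes the growing gaps visible, using nextafterf() from math.h (the sample values 1.0f and 1000000.0f are just illustrative):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* distance from a value to the next representable float above it */
        printf("gap after 1.0f       : %g\n", nextafterf(1.0f, 2.0f) - 1.0f);
        printf("gap after 1000000.0f : %g\n", nextafterf(1000000.0f, 2000000.0f) - 1000000.0f);
        return 0;
    }

Near 1 the gap is about 1.19e-07; near a million it is already 0.0625.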
Like death and taxes, rounding errors are a fact of life. http://wiki.octave.org/FAQ
Operands that differ greatly: +110 × 2^1 + 100 × 2^-2
Aligning the exponents: +110.000 × 2^1 + 000.100 × 2^1 = 110.100 × 2^1
With only three significand digits the sum rounds back to 110 × 2^1: the smaller operand is completely lost.
Operands that are really close: +111 × 2^1 - 110 × 2^1 = 001 × 2^1, which normalizes to 100 × 2^-1
Only one digit of the result is actually significant: any error already present in the operands is hugely magnified relative to the small result (cancellation).
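Both effects are easy to reproduce in a few lines of C (a sketch; the sample constants are chosen only to trigger the effects):

    #include <stdio.h>

    int main(void) {
        /* absorption: the small operand vanishes next to the big one */
        float big = 1.0e8f, small = 1.0f;
        printf("(big + small) - big = %f\n", (big + small) - big);   /* 0.000000 */

        /* cancellation: subtracting nearly equal values wipes out the leading digits */
        float a = 1.000001f, b = 1.000000f;
        printf("a - b = %.10f\n", a - b);   /* not exactly 0.000001 */
        return 0;
    }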
Fixed point representation: +100.001010 = 2^2 + 2^-3 + 2^-5 = 4.15625
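The same value in a do-it-yourself fixed-point format, as a sketch: store an integer and agree that its last 6 bits are the fraction.

    #include <stdio.h>

    int main(void) {
        int raw = 0x10A;              /* 100001010 in binary, i.e. 100.001010 with 6 fraction bits */
        double value = raw / 64.0;    /* divide by 2^6 to recover the value */
        printf("%f\n", value);        /* 4.156250 */
        return 0;
    }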
WHAT'S THE POINT WITH FLOATING POINT?
FP ARITHMETIC IS FAST: embedded in HW.
FP REPRESENTS A WIDE RANGE: single precision goes up to ~10^+38.
HE APPROVES FP
Anyway, errors are still there.
Okay, what about: increasing the number of digits, using decimal representations, estimating errors, thinking before you type?
More digits, please!
double: 52 significant bits
long double: 112 significant bits
arbitrary precision* (*language support needed)
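A sketch of what the extra digits buy you (note that the actual width of long double varies by platform: often 80-bit extended on x86 toolchains, 128-bit only on some):

    #include <stdio.h>

    int main(void) {
        float       f  = 0.1f;
        double      d  = 0.1;
        long double ld = 0.1L;

        printf("float       %.25f\n",  f);
        printf("double      %.25f\n",  d);
        printf("long double %.25Lf\n", ld);   /* closer and closer, never exactly 0.1 */
        return 0;
    }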
Use decimal representations!
decimal (C# only)
BigDecimal (Java only)
std::decimal (C++, coming soon*) (*after IEEE-754 2008)
Estimate the error of your algorithm: rel_err = fabs(f - fp) / f
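As a sketch, with f the reference value and fp the computed one (the helper name rel_err comes from the slide; guarding against f == 0 is left to the caller):

    #include <math.h>

    double rel_err(double f, double fp) {
        return fabs(f - fp) / fabs(f);   /* relative error; assumes f != 0 */
    }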
Use float to represent time:
    float time;
    while (true) time += 0.20;
This is BAD. And you should feel BAD.
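A sketch of why: 0.2 has no exact binary representation, and as time grows each increment is rounded, so the accumulated total drifts (the loop count below is only an example):

    #include <stdio.h>

    int main(void) {
        float time = 0.0f;
        for (int i = 0; i < 1000000; ++i)
            time += 0.20f;                /* each addition rounds to the nearest float */
        printf("time = %f (expected 200000.0)\n", time);
        return 0;
    }

An integer tick counter, converted to seconds only when needed, avoids the problem entirely.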
Compare float numbers:
    (a == b)
    fabs(a - b) <= FLT_EPSILON
    fabs(a - b) <= max(fabs(a), fabs(b)) * pc
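A sketch of the last form as a helper (nearly_equal is just an illustrative name; pc is the relative precision you pick for your application):

    #include <math.h>
    #include <stdbool.h>

    bool nearly_equal(float a, float b, float pc) {
        float diff   = fabsf(a - b);
        float larger = fmaxf(fabsf(a), fabsf(b));
        return diff <= larger * pc;       /* relative tolerance */
    }

An absolute tolerance like FLT_EPSILON only makes sense near 1.0; the relative form scales with the magnitude of the operands.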
There is no silver bullet.
Use libraries (when available).
Vector addition (naive):
    float t[SIZE];
    float result = 0.0f;
    for (int i = 0; i < SIZE; ++i)
        result += t[i];
GNU GSL TO THE RESCUE
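For the record, a sketch of the idea behind such library routines: compensated (Kahan) summation keeps a running correction term so the naive loop's error does not grow with SIZE. This is not the GSL code itself, just the technique.

    float kahan_sum(const float *t, int n) {
        float sum = 0.0f;
        float c   = 0.0f;                 /* running compensation for lost low-order bits */
        for (int i = 0; i < n; ++i) {
            float y = t[i] - c;           /* subtract the error carried over from last step */
            float s = sum + y;            /* big + small: low-order bits of y may be lost... */
            c = (s - sum) - y;            /* ...recover them here */
            sum = s;
        }
        return sum;
    }

Note that aggressive optimizations such as -ffast-math may reassociate the arithmetic and optimize the compensation away.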
That's all folks! @lorisfichera - https://kid-a.github.com
References and source code available at https://github.com/kid-a/floating-point-seminar
Credits: Font: Yanone Kaffeesatz (http://www.yanone.de/typedesign/kaffeesatz/)