Floating Point 101
A very very basic introduction to FP.
With some inaccuracies.
kida
February 06, 2013
Transcript
FLOATING POINT 101
FLOATING POINT 100.999998
we are engineers
we are researchers
WE PLAY WITH NUMBERS ALL DAY LONG
3.141592653589793238462643383279502884197169399375105820974944592307816406286208
well, sometimes even at night. (yawn).
So, what is a floating point?
A floating point is ± D1.D2D3···Dn × B^e
A floating point is ± D1.D2D3···Dn × B^e (± is the sign)
A floating point is ± D1.D2D3···Dn × B^e (D1.D2D3···Dn is the significand)
A floating point is ± D1.D2D3···Dn × B^e (B is the base)
A floating point is ± D1.D2D3···Dn × B^e (e is the exponent)
A floating point represents ± (D1 + D2 × B^-1 + D3 × B^-2 + … + Dn × B^-(n-1)) × B^e
For example: + 3.14 × 10^0 = (3 + 1×0.1 + 4×0.01) × 1 = 3.14
The point can float! + 3.14 × 10^-1 = 0.314
The point can float! + 3.14 × 10^+1 = 31.4
What if B = 2? + 1.00 × 2^+2 = 4.0
Like machines do. http://grouper.ieee.org/groups/754/
Normalization of floating point
Multiple representations: + 0.01 × 2^2 = 1.0, + 0.10 × 2^1 = 1.0, + 1.00 × 2^0 = 1.0
Normalized representation: + 1.00 × 2^0 = 1.0
Normalized representation: + (1.)000 × 2^0, the leading 1 is omitted
Normalized representation: + (1.)000 × 2^0, there's room for an extra digit!
Excess-127 representation: -127 → 0, -126 → +1, …, -1 → +126, 0 → +127
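To see the excess-127 encoding in action, here is a small C sketch (my own illustration, not a slide from the deck) that pulls the stored fields out of the 4.0 example above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float x = 4.0f;                          /* + 1.00 x 2^+2 from the earlier slide */
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);          /* reinterpret the 32 bits of the float */

        unsigned sign     = bits >> 31;
        unsigned stored_e = (bits >> 23) & 0xFF; /* excess-127 exponent field */
        unsigned fraction = bits & 0x7FFFFF;     /* 23 fraction bits, leading 1 omitted */

        printf("sign=%u  stored exponent=%u  (real exponent %d)  fraction=0x%06X\n",
               sign, stored_e, (int)stored_e - 127, fraction);
        /* prints: sign=0  stored exponent=129  (real exponent 2)  fraction=0x000000 */
        return 0;
    }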
#include <float.h>: FLT_MIN, FLT_MAX, ...
#include <math.h>: M_PI, M_E, NAN, INFINITY, ...
Why no exact representation for 0.1?
FLOATING POINT is used to represent REAL NUMBERS
FLOATING POINT denotes a (finite) subset of RATIONAL NUMBERS
0.1 cannot be expressed as a finite sum of powers of 2: + ??? × 2^??
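A quick check of what the machine actually stores for 0.1 (my own illustration, not part of the deck):

    #include <stdio.h>

    int main(void) {
        printf("%.20f\n", 0.1);              /* 0.10000000000000000555... */
        printf("%d\n", 0.1 + 0.2 == 0.3);    /* 0: the two rounded values differ */
        return 0;
    }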
It's also a matter of precision: with two fraction digits (plus the omitted leading 1), + 00 × 2^0 = 1, + 01 × 2^0 = 1.25, + 10 × 2^0 = 1.5, + 11 × 2^0 = 1.75
It's also a matter of precision: π/2 ≈ 1.57 falls between 1.5 and 1.75, so it has no exact representation
Not just a matter of precision or basis: + 00 × 2^1 = 2.0, + 01 × 2^1 = 2.5, + 10 × 2^1 = 3.0, the gap between representable values doubles as the exponent grows
Like death and taxes, rounding errors are a fact of life. http://wiki.octave.org/FAQ
Operands that differ greatly: + 110 × 2^1 and + 101 × 2^-2
Operands that differ greatly: aligning the exponents gives + 110000 × 2^1 + 000101 × 2^1
Operands that differ greatly: + 110000 × 2^1 + 000101 × 2^1 = 110101 × 2^1, which rounds back to 110 × 2^1: the smaller operand is lost
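The same absorption effect in a few lines of C (the values are my own illustration): 1.0 cannot be aligned within the 24-bit single-precision significand of 1.0e8, so it simply disappears:

    #include <stdio.h>

    int main(void) {
        float big = 1.0e8f, small = 1.0f;
        printf("%d\n", big + small == big);  /* 1: the small operand is absorbed */
        return 0;
    }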
Operands that are really close: + 111 × 2^1 - 110 × 2^1 = 001 × 2^1
Operands that are really close: + 111 × 2^1 - 110 × 2^1 = 001 × 2^1, renormalized to + 100 × 2^-1: the zeros shifted in carry no real information
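And the matching loss-of-significance demo for nearly equal operands (values chosen by me): what survives the subtraction is mostly rounding noise from the inputs:

    #include <stdio.h>

    int main(void) {
        float a = 1.000001f, b = 1.000000f;
        printf("%.10f\n", a - b);  /* 0.0000009537 instead of the intended 0.0000010000 */
        return 0;
    }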
Fixed point representation: + 100.001010 = 2^2 + 2^-3 + 2^-5 = 4.15625
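For contrast, the same value stored in fixed point as an integer count of 1/64ths (the scale factor is my choice for illustration):

    #include <stdio.h>

    int main(void) {
        int raw = 266;                /* 100.001010 in binary = 266 sixty-fourths */
        printf("%f\n", raw / 64.0);   /* 4.156250 */
        return 0;
    }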
WHAT'S THE POINT WITH FLOATING POINT
FP ARITHMETIC IS FAST: embedded in HW.
FP REPRESENTS A WIDE RANGE: single precision up to ~10^+38.
HE APPROVES FP
Anyway, errors are still there.
Okay, what about: increasing the number of digits, using decimal representations, estimating errors, thinking before you type
More digits, please! double (52 significand bits), long double (112 significand bits), arbitrary precision* (* language support needed)
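A rough feel for how many digits each type buys, sketched in C (printing 1/3 is my own choice of example):

    #include <stdio.h>

    int main(void) {
        float f        = 1.0f / 3.0f;
        double d       = 1.0 / 3.0;
        long double ld = 1.0L / 3.0L;
        printf("float:       %.20f\n", f);
        printf("double:      %.20f\n", d);
        printf("long double: %.20Lf\n", ld);
        return 0;
    }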
Use decimal representations! decimal (C# only), BigDecimal (Java only), std::decimal (C++, coming soon*) (* after IEEE 754-2008)
Estimate the error of your algorithm: rel_err = fabs(f - fp) / f
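As a concrete sketch (using M_PI as the reference f and its float rounding as fp is my own choice of example):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double f = M_PI;                       /* reference value */
        float fp = (float)M_PI;                /* single-precision approximation */
        double rel_err = fabs(f - fp) / f;
        printf("rel_err = %e\n", rel_err);     /* on the order of 1e-8 */
        return 0;
    }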
Use float to represent time:
    float time = 0.0f;
    while (1)
        time += 0.20;
Use float to represent time? This is BAD. And you should feel BAD.
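A minimal sketch of why it is bad (the iteration count and names are mine): the rounding error of 0.20f compounds on every addition, while counting integer ticks and multiplying once does not:

    #include <stdio.h>

    int main(void) {
        float accumulated = 0.0f;
        long ticks = 0;
        for (int i = 0; i < 10000; ++i) {
            accumulated += 0.20f;   /* error added on every iteration */
            ticks += 1;             /* exact integer bookkeeping */
        }
        printf("accumulated: %f\n", accumulated);   /* noticeably off from 2000 */
        printf("from ticks:  %f\n", ticks * 0.20);  /* 2000.000000 */
        return 0;
    }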
Compare float numbers: (a == b)
Compare float numbers: (a == b) becomes fabs(a - b) <= FLT_EPSILON
Compare float numbers: (a == b) becomes fabs(a - b) <= max(fabs(a), fabs(b)) * pc
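A sketch of that last comparison as a helper function (the name nearly_equal and the tolerance in the usage line are mine, not the deck's):

    #include <math.h>
    #include <float.h>
    #include <stdbool.h>

    static bool nearly_equal(float a, float b, float rel_tol)
    {
        float diff = fabsf(a - b);
        float largest = fmaxf(fabsf(a), fabsf(b));
        return diff <= largest * rel_tol;    /* tolerance scales with the operands */
    }

    /* usage: nearly_equal(0.1f + 0.2f, 0.3f, 16 * FLT_EPSILON) */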
There is no silver bullet.
Use libraries (when available).
Vector addition (naive):
    float t[SIZE];
    float result = 0.0f;
    for (int i = 0; i < SIZE; ++i)
        result += t[i];
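One well-known fix for that accumulation error is compensated (Kahan) summation; this sketch is my addition, not something the deck prescribes:

    #include <stddef.h>

    float kahan_sum(const float *t, size_t n)
    {
        float sum = 0.0f;
        float c = 0.0f;                /* running compensation for lost low-order bits */
        for (size_t i = 0; i < n; ++i) {
            float y = t[i] - c;
            float s = sum + y;         /* low-order bits of y may be lost here... */
            c = (s - sum) - y;         /* ...and recovered into c for the next step */
            sum = s;
        }
        return sum;
    }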
GNU GSL TO THE RESCUE
that's all folks!
@lorisfichera – https://kid-a.github.com
References and source code available at https://github.com/kid-a/floating-point-seminar
Credits: Font: Yanone Kaffeesatz (http://www.yanone.de/typedesign/kaffeesatz/)