Slide 1

Slide 1 text

WELCOME

Slide 2

Slide 2 text

Bayes is BAE

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

Introducing our Protagonist

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures

Slide 12

Slide 12 text

&

Slide 13

Slide 13 text

An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of The Analyst

Slide 14

Slide 14 text

Harry Potter & the Sorcerer’s Stone

Slide 15

Slide 15 text

Why do we care?

Slide 16

Slide 16 text

1720

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

1720s

Slide 19

Slide 19 text

1720

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

No content

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

Machine learning

Slide 29

Slide 29 text

Artificial Intelligence

Slide 30

Slide 30 text

They Call me @Schneems

Slide 31

Slide 31 text

Maintain Sprockets

Slide 32

Slide 32 text

Georgia Tech Online Masters

Slide 33

Slide 33 text

Georgia Tech Online Masters

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

Automatic Certificate Management

Slide 37

Slide 37 text

SSL

Slide 38

Slide 38 text

Heroku CI

Slide 39

Slide 39 text

Review Apps

Slide 40

Slide 40 text

Self Promotion

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

No content

Slide 48

Slide 48 text

“But wait Schneems, what can we do?”

Slide 49

Slide 49 text

Call your state representatives

Slide 50

Slide 50 text

“But wait Schneems, what more can we do?”

Slide 51

Slide 51 text

degerrymandertexas.org

Slide 52

Slide 52 text

Un-Patriotic Un-Texan

Slide 53

Slide 53 text

Back to Bayes

Slide 54

Slide 54 text

Artificial Intelligence

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

No content

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

No content

Slide 62

Slide 62 text

Low Information state

Slide 63

Slide 63 text

No content

Slide 64

Slide 64 text

Predict

Slide 65

Slide 65 text

Measure

Slide 66

Slide 66 text

Measure + Predict

Slide 67

Slide 67 text

Convolution

Slide 68

Slide 68 text

Kalman Filter

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

Do you like money?

Slide 73

Slide 73 text

No content

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 78

Slide 78 text

Probability: P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 79

Slide 79 text

Probability of $3.7 mil given Heads: P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 80

Slide 80 text

Probability of $3.7 mil given Heads: P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 81

Slide 81 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B), where P(B) = probability of heads

Slide 82

Slide 82 text

P(B) = probability of heads.  Coins: H H, H T

Slide 83

Slide 83 text

Coins: H H, H T.  P(B) = probability of heads = 0.5 * 0.5 + 0.5 * 1 = 0.75

Slide 84

Slide 84 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B), with P(B) = 0.75

Slide 85

Slide 85 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B), with P(B) = 0.75, where P(A) = probability of $3.7 million

Slide 86

Slide 86 text

P(A) = probability of $3.7 million ($$$ or Nope)

Slide 87

Slide 87 text

P(A) = probability of $3.7 million ($$$ or Nope) = 0.5

Slide 88

Slide 88 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B), with P(A) = 0.50 and P(B) = 0.75

Slide 89

Slide 89 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B), with P(A) = 0.50 and P(B) = 0.75, where P(B ∣ A) = probability of heads given $3.7 mil

Slide 90

Slide 90 text

P(B ∣ A) = probability of heads given $3.7 mil (the fair coin: H T)

Slide 91

Slide 91 text

P(B ∣ A) = 0.5, so P(A ∣ B) = (0.5 * 0.5) / 0.75

Slide 92

Slide 92 text

$3.7 mil given Heads: P(A ∣ B) = (0.5 * 0.5) / 0.75 = 1/3 ≈ 0.3333
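
The arithmetic from the last few slides can be checked in a few lines of Python (a sketch; the variable names are mine, not the talk's):

```python
# Two coins: a fair one that pays $3.7 mil, and a two-headed dud.
p_money = 0.5                 # P(A): prior probability we hold the $3.7 mil coin
p_heads_given_money = 0.5     # P(B|A): the fair coin lands heads half the time
p_heads_given_dud = 1.0       # the dud is heads on both sides

# Total probability of seeing heads: P(B) = 0.5 * 0.5 + 0.5 * 1 = 0.75
p_heads = p_heads_given_money * p_money + p_heads_given_dud * (1 - p_money)

# Bayes rule: P(A|B) = P(B|A) P(A) / P(B)
posterior = p_heads_given_money * p_money / p_heads
print(posterior)  # 0.3333...
```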

Slide 93

Slide 93 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 94

Slide 94 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 95

Slide 95 text

YouTube Channel: Art of the Problem

Slide 96

Slide 96 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 97

Slide 97 text

I lied about Bayes Rule

Slide 98

Slide 98 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 99

Slide 99 text

P(Ai ∣ B) = P(B ∣ Ai) P(Ai) / ∑j P(B ∣ Aj) P(Aj)

Slide 100

Slide 100 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B);  P(Ai ∣ B) = P(B ∣ Ai) P(Ai) / ∑j P(B ∣ Aj) P(Aj)

Slide 101

Slide 101 text

Total Probability

Slide 102

Slide 102 text

$3.7 mil $0

Slide 103

Slide 103 text

$3.7 mil $0 Heads

Slide 104

Slide 104 text

$3.7 mil $0 Tails Heads

Slide 105

Slide 105 text

$3.7 mil $0 Heads Tails

Slide 106

Slide 106 text

$3.7 mil $0 Heads Tails

Slide 107

Slide 107 text

$3.7 mil $0 Heads Tails

Slide 108

Slide 108 text

P(Heads) = P(Heads ∣ $$$) P($$$) + P(Heads ∣ $0) P($0)   (diagram: $3.7 mil / $0, Heads / Tails)

Slide 109

Slide 109 text

P(Heads) = P(Heads ∣ $$$) P($$$) + P(Heads ∣ $0) P($0)   (diagram: $3.7 mil / $0, Heads / Tails)

Slide 110

Slide 110 text

Total Probability: P(B) = ∑j P(B ∣ Aj) P(Aj)

Slide 111

Slide 111 text

Coins: H H, H T.  P(B) = probability of heads = 0.5 * 0.5 + 0.5 * 1 = 0.75

Slide 112

Slide 112 text

Total Probability: P(B) = ∑j P(B ∣ Aj) P(Aj);  P(Heads) = P(Heads ∣ $$$) P($$$) + P(Heads ∣ $0) P($0)
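
The total-probability sum maps directly onto code (a sketch using the slides' numbers; the dict keys are my labels for the two hypotheses):

```python
# Total probability: P(B) = sum over j of P(B | Aj) * P(Aj)
likelihood = {"$3.7 mil": 0.5, "$0": 1.0}   # P(Heads | coin)
prior = {"$3.7 mil": 0.5, "$0": 0.5}        # P(coin)

p_heads = sum(likelihood[coin] * prior[coin] for coin in prior)
print(p_heads)  # 0.75
```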

Slide 113

Slide 113 text

Let’s make it tougher

Slide 114

Slide 114 text

P(Ai ∣ B) = P(B ∣ Ai) P(Ai) / ∑j P(B ∣ Aj) P(Aj)

Slide 115

Slide 115 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj)

Slide 116

Slide 116 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj), with P(HH ∣ Coini) = 0.5 * 0.5

Slide 117

Slide 117 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj), with P(Coini) = 0.5

Slide 118

Slide 118 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj), where ∑j P(HH ∣ Coinj) P(Coinj) = P(HH ∣ $$$) P($$$) + P(HH ∣ $0) P($0) = 0.25(0.5) + 1.0(0.5)

Slide 119

Slide 119 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj), where ∑j P(HH ∣ Coinj) P(Coinj) = P(HH ∣ $$$) P($$$) + P(HH ∣ $0) P($0) = 0.25(0.5) + 1.0(0.5)

Slide 120

Slide 120 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj)
P(Coin$$$ ∣ HH) = 0.25(0.5) / 0.625 = 1/5 = 0.2

Slide 121

Slide 121 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj)
P(Coin$$$ ∣ HH) = 0.25(0.5) / 0.625 = 1/5 = 0.2

Slide 122

Slide 122 text

P(Coini ∣ HH) = P(HH ∣ Coini) P(Coini) / ∑j P(HH ∣ Coinj) P(Coinj)
P(Coini ∣ HH) = 0.25(0.5) / 0.625 = 1/5 = 0.2
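
The two-heads-in-a-row version can be checked the same way (a sketch; labels are mine):

```python
# Two flips, both heads: P(HH | coin) for the fair coin and the two-headed coin.
likelihood = {"$3.7 mil": 0.5 * 0.5, "$0": 1.0 * 1.0}  # P(HH | coin)
prior = {"$3.7 mil": 0.5, "$0": 0.5}                    # P(coin)

# Denominator: total probability of seeing HH
total = sum(likelihood[coin] * prior[coin] for coin in prior)  # 0.625

# Bayes rule for the winning coin given HH
posterior = likelihood["$3.7 mil"] * prior["$3.7 mil"] / total
print(posterior)  # 0.2
```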

Slide 123

Slide 123 text

Who is ready for a break?

Slide 124

Slide 124 text

Let's take a break from math

Slide 125

Slide 125 text

With more math

Slide 126

Slide 126 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B)

Slide 127

Slide 127 text

P(A ∣ B) = P(B ∣ A) P(A) / P(B);  P(Ai ∣ B) = [P(B ∣ Ai) / P(B)] P(Ai)

Slide 128

Slide 128 text

P(Ai ∣ B) = [P(B ∣ Ai) / P(B)] P(Ai), where P(Ai) is the Prior

Slide 129

Slide 129 text

P(Ai ∣ B) = [P(B ∣ Ai) / P(B)] P(Ai), where P(Ai ∣ B) is the Posterior

Slide 130

Slide 130 text

The Kalman filter is recursive Bayesian estimation

Slide 131

Slide 131 text

Prediction/ Prior

Slide 132

Slide 132 text

Measure/ Posterior

Slide 133

Slide 133 text

No content

Slide 134

Slide 134 text

Simon D. Levy

Slide 135

Slide 135 text

altitude(current time) = 0.75 * altitude(previous time)

Slide 136

Slide 136 text

altitude(current time) = 0.75 * altitude(previous time)

Slide 137

Slide 137 text

a = rate_of_descent = 0.75
x = initial_position = 1000
r = measure_error = x * 0.20

Slide 138

Slide 138 text

x_guess = measure_array[0]
p = estimate_error = 1
x_guess_array = []

Slide 139

Slide 139 text

for k in range(10):
    measure = measure_array[k]

Slide 140

Slide 140 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess

Slide 141

Slide 141 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a

Slide 142

Slide 142 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + gain * (measure - x_guess)

Slide 143

Slide 143 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + gain * (measure - x_guess)

Low Predict Error, low gain

Slide 144

Slide 144 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + 0 * (measure - x_guess)

Low Predict Error, low gain

Slide 145

Slide 145 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + 0 * (measure - x_guess)

Low Predict Error, low gain

Slide 146

Slide 146 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + 1 * (measure - x_guess)

High Predict Error, High gain

Slide 147

Slide 147 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + 1 * (measure - x_guess)

High Predict Error, High gain

Slide 148

Slide 148 text

Prediction less certain

Slide 149

Slide 149 text

Prediction more certain

Slide 150

Slide 150 text

for k in range(10):
    measure = measure_array[k]
    # Predict
    x_guess = a * x_guess
    p = a * p * a
    # Update
    gain = p / (p + r)
    x_guess = x_guess + gain * (measure - x_guess)
    p = (1 - gain) * p
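
The snippets above assemble into one runnable script. This is a sketch: the slides take `measure_array` as given, so here it is simulated with made-up Gaussian noise around the true decaying altitude.

```python
import random

random.seed(0)

a = rate_of_descent = 0.75   # state transition: altitude decays 25% per step
x = initial_position = 1000
r = measure_error = x * 0.20

# Simulate noisy measurements of the true decaying altitude (an assumption;
# the talk reads these from data).
measure_array = []
altitude = x
for _ in range(10):
    altitude = a * altitude
    measure_array.append(altitude + random.gauss(0, 10))

x_guess = measure_array[0]
p = estimate_error = 1
x_guess_array = []

for k in range(10):
    measure = measure_array[k]
    # Predict: propagate the estimate and its error through the model
    x_guess = a * x_guess
    p = a * p * a
    # Update: blend prediction and measurement, weighted by the gain
    gain = p / (p + r)
    x_guess = x_guess + gain * (measure - x_guess)
    p = (1 - gain) * p
    x_guess_array.append(x_guess)

print(x_guess_array)
```

When the estimate error `p` is small relative to the measurement error `r`, the gain shrinks toward 0 and the filter trusts its prediction; when `p` is large, the gain grows toward 1 and the filter trusts the measurement, matching the two extremes shown on the previous slides.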

Slide 151

Slide 151 text

No content

Slide 152

Slide 152 text

No content

Slide 153

Slide 153 text

No content

Slide 154

Slide 154 text

No content

Slide 155

Slide 155 text

No content

Slide 156

Slide 156 text

No content

Slide 157

Slide 157 text

No content

Slide 158

Slide 158 text

That’s it for Kalman Filters

Slide 159

Slide 159 text

Bayes Rule

Slide 160

Slide 160 text

Two most important parts

Slide 161

Slide 161 text

No content

Slide 162

Slide 162 text

No content

Slide 163

Slide 163 text

No content

Slide 164

Slide 164 text

No content

Slide 165

Slide 165 text

No content

Slide 166

Slide 166 text

No content

Slide 167

Slide 167 text

No content

Slide 168

Slide 168 text

No content

Slide 169

Slide 169 text

Algorithms to Live By

Slide 170

Slide 170 text

The Signal and the Noise

Slide 171

Slide 171 text

Audio: Mozart Requiem in D minor https://www.youtube.com/watch?v=sPlhKP0nZII

Slide 172

Slide 172 text

http://bit.ly/kalman-tutorial

Slide 173

Slide 173 text

http://bit.ly/kalman-notebook

Slide 174

Slide 174 text

Udacity & Georgia Tech

Slide 175

Slide 175 text

BAE

Slide 176

Slide 176 text

BAE

Slide 177

Slide 177 text

BAE

Slide 178

Slide 178 text

BAE

Slide 179

Slide 179 text

Questions?

Slide 180

Slide 180 text

Questions?

Slide 181

Slide 181 text

Test Audio

Slide 182

Slide 182 text

Test Audio 2

Slide 183

Slide 183 text

Simon D. Levy

Slide 184

Slide 184 text

No content

Slide 185

Slide 185 text

No content

Slide 186

Slide 186 text

No content

Slide 187

Slide 187 text

No content

Slide 188

Slide 188 text

No content

Slide 189

Slide 189 text

No content

Slide 190

Slide 190 text

No content

Slide 191

Slide 191 text

No content

Slide 192

Slide 192 text

No content

Slide 193

Slide 193 text

No content

Slide 194

Slide 194 text

What is g?

Slide 195

Slide 195 text

No content

Slide 196

Slide 196 text

Prediction

Slide 197

Slide 197 text

Measurement

Slide 198

Slide 198 text

Convolution

Slide 199

Slide 199 text

Prediction less certain

Slide 200

Slide 200 text

Prediction more certain

Slide 201

Slide 201 text

Prediction error is not constant

Slide 202

Slide 202 text

What is g?

Slide 203

Slide 203 text

No content

Slide 204

Slide 204 text

No content

Slide 205

Slide 205 text

What is g?

Slide 206

Slide 206 text

No content

Slide 207

Slide 207 text

Introducing r

Slide 208

Slide 208 text

No content

Slide 209

Slide 209 text

No content

Slide 210

Slide 210 text

No content

Slide 211

Slide 211 text

No content

Slide 212

Slide 212 text

Prediction + Measurement

Slide 213

Slide 213 text

i.e. Prediction + Update

Slide 214

Slide 214 text

Prediction Update

Slide 215

Slide 215 text

Prediction Update ✅

Slide 216

Slide 216 text

Prediction

Slide 217

Slide 217 text

Prediction

Slide 218

Slide 218 text

Prediction Update ✅ ✅

Slide 219

Slide 219 text

$3.7 mil $0

Slide 220

Slide 220 text

$3.7 mil $0