[CS Foundation] AIML - 2 - Regression
x-village
August 14, 2018
Transcript
AI/ML - Regression
Lo Pang-Yun Ting, X-Village
Outline
• Introduction of regression
• Linear regression
• Gradient descent
• Ordinary least square
Machine Learning
• Supervised Learning: classification, regression
• Unsupervised Learning: clustering, dimensionality reduction
Machine Learning
• Classification vs. Regression (an Ultraman example)
  Q1: Can an Ultraman defeat a Lv. 1 monster? (classification: yes/no)
  Q2: How many Ultramen does it take to defeat a Lv. 1 monster? (regression: a number)
Regression
• What is 'regression' analysis?
  A statistical method for analyzing data. Its goal is to determine whether two or more variables are related, and the direction and strength of that relationship, and to build a mathematical model so that selected variables can be used to predict the variable the researcher is interested in. (from Wikipedia)
Regression
• What is 'regression' analysis?
  Monster level:    1  2  3  4  5  6  7   8   9   10
  Ultramen needed:  1  1  2  3  6  7  11  13  13  15
Regression
• What is 'regression' analysis?
  [Plot: Ultramen needed vs. monster level. Find a curve/line that fits the data.]
Regression
• Features: x^(i) = [x_1, ..., x_d]
• Outputs: y^(i)
• Two common forms: linear regression and polynomial regression
Linear Regression
• Model representation
• Hypothesis: a function h_θ that maps from X to Y, here h_θ(x) = θ_0 + θ_1·x with weights θ.
  Choose θ so that h_θ(x) is close to y for the training examples.
Linear Regression
• How to choose θ? Find lines/hyperplanes with small error.
Linear Regression
• Definition of the cost function
  The error is the gap between the prediction h_θ(x) and the true value y.
  Mean square error (MSE) cost function:
    J(θ_0, θ_1) = 1/(2m) · Σ_{i=1..m} (h_θ(x^(i)) - y^(i))²
  Goal: minimize J(θ_0, θ_1).
Linear Regression
• Look into the cost function (simplified: fix θ_0 = 0, so h_θ(x) = θ_1·x and the cost is J(θ_1))
  Data: x = [1, 2, 3], y = [1, 2, 3]
  J(1)   = 1/(2·3) · (0² + 0² + 0²) = 0
  J(0.5) = 1/(2·3) · ((0.5 - 1)² + (1 - 2)² + (1.5 - 3)²) ≈ 0.58
  J(1.5) = 1/(2·3) · ((1.5 - 1)² + (3 - 2)² + (4.5 - 3)²) ≈ 0.58
  [Plot: the hypothesis lines for θ_1 = 0.5, 1, 1.5 against the data, and J(θ_1) against θ_1, a parabola with its minimum at θ_1 = 1.]
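The three cost values above can be checked with a short Python sketch (the code is illustrative, not from the slides):

```python
# Simplified setting from the slides: theta_0 = 0, h(x) = theta_1 * x,
# data x = [1, 2, 3], y = [1, 2, 3].
x = [1, 2, 3]
y = [1, 2, 3]
m = len(x)

def cost(theta_1):
    # J(theta_1) = (1 / 2m) * sum over i of (h(x_i) - y_i)^2
    return sum((theta_1 * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)

for t in (1.0, 0.5, 1.5):
    print(f"J({t}) = {cost(t):.2f}")  # J(1.0) = 0.00, J(0.5) = 0.58, J(1.5) = 0.58
```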
Linear Regression
• Look into the cost function
  [Surface plot: J(θ_0, θ_1) over the (θ_0, θ_1) plane, a bowl-shaped surface whose lowest point is the optimum.]
Minimize The Cost Function
Linear Regression
• Optimize linear regression
  • Gradient descent
  • Ordinary least square
Gradient Descent
• Gradient descent
  Cost function: J(θ_0, θ_1)    Goal: minimize J(θ_0, θ_1)
  Outline:
  • Start with some θ_0, θ_1
  • Keep changing θ_0, θ_1 to reduce J(θ_0, θ_1), until we hopefully end up at a minimum
Gradient Descent
• Gradient descent algorithm
  repeat until convergence {
      θ_j := θ_j - α · ∂J(θ_0, θ_1)/∂θ_j    (simultaneously for j = 0 and j = 1)
  }
  α is the learning rate, and ":=" assigns the value computed on the right side to the left side.
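As a concrete illustration of the update rule, here is a minimal Python sketch of batch gradient descent for h_θ(x) = θ_0 + θ_1·x; the learning rate, iteration count, and function name are assumptions, not taken from the slides:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, n_iters=1000):
    m = len(x)
    theta_0, theta_1 = 0.0, 0.0                # start with some theta_0, theta_1
    for _ in range(n_iters):
        error = theta_0 + theta_1 * x - y      # h(x_i) - y_i for every example
        # compute both gradients first, then update simultaneously
        grad_0 = error.sum() / m               # dJ/dtheta_0
        grad_1 = (error * x).sum() / m         # dJ/dtheta_1
        theta_0 -= alpha * grad_0
        theta_1 -= alpha * grad_1
    return theta_0, theta_1

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(gradient_descent(x, y))                  # approaches (0, 1) for this data
```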
Gradient Descent
• Gradient descent intuition: positive slope
  The derivative is the slope of the tangent line at the current θ_1. If that slope is positive, θ_1 - α·(positive value) makes θ_1 smaller, and the cost becomes smaller.
Gradient Descent
• Gradient descent intuition: negative slope
  If the tangent slope at the current θ_1 is negative, θ_1 - α·(negative value) makes θ_1 bigger, and the cost becomes smaller.
Gradient Descent
• Gradient descent intuition: learning rate
  If the learning rate α is too big, gradient descent may fail to converge, or even diverge.
Gradient Descent
• Gradient descent intuition: learning rate
  If the learning rate α is too small, gradient descent can be slow.
Exercise - (1)
• TASK: Implement linear regression
• Sample code
Exercise - (1)
• Requirements
  1. Complete the hypothesis function and the cost function (a possible sketch follows below).
  2. Test (θ_0, θ_1) = (0, 0), (1, 1), and (10, -1) in turn, and print the computed cost for each.
  3. Observe how the regression lines produced by different θ values relate to their costs.
Exercise - (1)
• Output
  [Sample output shown in the slides.]
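The exercise's sample code is not reproduced in the transcript; the following is one possible sketch of the two required functions, with placeholder data standing in for the exercise's dataset:

```python
import numpy as np

def hypothesis(theta_0, theta_1, x):
    # h_theta(x) = theta_0 + theta_1 * x
    return theta_0 + theta_1 * x

def cost(theta_0, theta_1, x, y):
    # J(theta_0, theta_1) = (1 / 2m) * sum of squared errors
    m = len(x)
    return np.sum((hypothesis(theta_0, theta_1, x) - y) ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])  # placeholder; the exercise supplies its own data
y = np.array([1.0, 2.0, 3.0])
for t0, t1 in [(0, 0), (1, 1), (10, -1)]:
    print(f"theta = ({t0}, {t1}), cost = {cost(t0, t1, x, y):.2f}")
```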
Ordinary Least Square
• Ordinary least square (OLS)
  Gradient descent minimizes the cost function by repeating updates until convergence; OLS instead solves directly for the θ that minimizes the cost function.
Ordinary Least Square
• OLS vs. gradient descent
  • Gradient descent: starts from an initial θ and iterates toward the optimal solution.
  • OLS: computes the optimal solution directly.
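As an illustration of "computing the optimal solution directly", here is a minimal NumPy sketch of OLS for the single-feature case; the variable names and toy data are assumptions. Using a least-squares solver instead of an explicit matrix inverse is a deliberate choice for numerical stability:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])      # prepend a column of 1s for theta_0
theta, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves min ||X @ theta - y||^2
print(theta)                                   # [theta_0, theta_1], here ~ [0, 1]
```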
Example
• sklearn - LinearRegression
  Uses OLS to optimize linear regression.
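A minimal usage sketch (the slide's actual example is not in the transcript; the toy data is an assumption):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])   # sklearn expects a 2-D feature matrix
y = np.array([1.0, 2.0, 3.0])

model = LinearRegression()
model.fit(X, y)                       # fits by a least-squares solve (OLS)
print(model.intercept_, model.coef_)  # theta_0 and theta_1
print(model.predict([[4.0]]))         # predicted y for a new x
```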
Example
• sklearn - SGDRegressor
  Uses gradient descent to optimize linear regression.
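A minimal usage sketch; the hyperparameters shown are assumptions (the defaults also work):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.0, 2.0, 3.0])

# eta0 is the initial learning rate; max_iter caps the number of passes
model = SGDRegressor(max_iter=1000, eta0=0.01)
model.fit(X, y)
print(model.intercept_, model.coef_)  # close to, but noisier than, the OLS weights
```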
Evaluation
• Framework
  [Diagram: a trained model is evaluated to produce evaluation results.]
Evaluation
• Evaluation metrics for regression
  These metrics measure the gap between predicted and true values; smaller is better.
  • Mean square error (MSE): the average of the squared prediction errors
  • Root mean square error (RMSE): the square root of the MSE
  • Mean absolute error (MAE): the average of the absolute prediction errors
Evaluation
• Evaluation metrics for regression
  • R-squared score (R² score): measures how well the predictions fit the true data; the best possible value is 1.
Example
• sklearn - mean_squared_error, mean_absolute_error, r2_score
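A minimal usage sketch of the three metric functions; the toy values are assumptions:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])

mse = mean_squared_error(y_true, y_pred)
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))  # RMSE is just the square root of MSE
print("MAE :", mean_absolute_error(y_true, y_pred))
print("R2  :", r2_score(y_true, y_pred))
```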
Exercise - (2)
• TASK: Use sklearn to implement linear regression
• Sample code
• Requirements
  • Use the data from Exercise - (1) to train LinearRegression() and SGDRegressor().
  • Print the weight values (θ) each method obtains after training.
  • Compare the results of the two methods (a possible sketch follows below).
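One possible solution sketch for Exercise - (2); the data here is a placeholder for the data from Exercise - (1):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

X = np.array([[1.0], [2.0], [3.0]])  # placeholder for the Exercise - (1) data
y = np.array([1.0, 2.0, 3.0])

ols = LinearRegression().fit(X, y)
sgd = SGDRegressor(max_iter=1000).fit(X, y)

# print the trained weights (theta) of each model
print("LinearRegression:", ols.intercept_, ols.coef_)
print("SGDRegressor    :", sgd.intercept_, sgd.coef_)
```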
Exercise - (3)
• TASK: Use sklearn.metrics to evaluate models
• Requirements
  • Print the RMSE of the two models from Exercise - (2) (for now, use the training data in place of test data).
Exercise - (3)
• Output
  [Sample output shown in the slides; a possible sketch follows below.]
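One possible solution sketch for Exercise - (3); as the requirement notes, the training data stands in for a test set here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.metrics import mean_squared_error

X = np.array([[1.0], [2.0], [3.0]])  # placeholder for the Exercise - (1) data
y = np.array([1.0, 2.0, 3.0])

models = {"LinearRegression": LinearRegression(),
          "SGDRegressor": SGDRegressor(max_iter=1000)}
for name, model in models.items():
    model.fit(X, y)
    rmse = np.sqrt(mean_squared_error(y, model.predict(X)))  # RMSE = sqrt(MSE)
    print(f"{name} RMSE: {rmse:.4f}")
```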