[CS Foundation] AIML - 2 - Regression
x-village
August 14, 2018
Transcript
AI/ML - Regression (Lo Pang-Yun Ting, X-Village)
Outline
• Introduction of regression
• Linear regression
• Gradient descent
• Ordinary least squares
Machine Learning
• Supervised Learning (監督式學習): classification (分類), regression (迴歸)
• Unsupervised Learning (非監督式學習): clustering (聚類), dimensionality reduction (降維)
Machine Learning
• Classification (分類) vs. Regression (迴歸)
  • Q1 (classification): Can a superhero defeat a Lv. 1 monster? (yes/no)
  • Q2 (regression): How many superheroes (Man! Seven! Neos! Taro! Tiga!) does it take to defeat a Lv. 1 monster? (a count)
Regression
• What is ‘regression’ analysis (迴歸分析)?
  A statistical method for analyzing data. Its goal is to understand whether two or more variables are related, along with the direction and strength of that relationship, and to build a mathematical model so that specific variables can be observed to predict the variable the researcher is interested in. (from Wikipedia)
Regression
• What is ‘regression’ analysis? Example data:
  Monster level (怪物等級):           1  2  3  4  5  6  7   8   9   10
  Superheroes needed (所需超人數量):  1  1  2  3  6  7  11  13  13  15
Regression
• What is ‘regression’ analysis?
  (Plot: superheroes needed vs. monster level; find a curve or line that fits the data.)
Regression
• Features: x^(i) = [x_1, ..., x_d]
• Outputs: y^(i)
• Model choices: linear regression (線性迴歸) vs. polynomial regression (多項式迴歸)
Linear Regression
• Model representation
• Hypothesis (假說): h_θ(x) maps from X to Y, with weights θ:
  h_θ(x) = θ_0 + θ_1 x
• Choose θ so that h_θ(x) is close to y for the training examples.
Linear Regression
• How to choose θ? Find lines/hyperplanes with small error.
Linear Regression
• Definition of the cost function
  error (誤差) = prediction (預測) - actual value (真實)
  Using the mean squared error (MSE), the cost function is
  J(θ_0, θ_1) = (1/(2m)) Σ_{i=1}^{m} (h_θ(x^(i)) - y^(i))²
  Goal: minimize J(θ_0, θ_1).
Linear Regression
• Look into the cost function: a simplified case with θ_0 = 0, so the hypothesis is h_θ(x) = θ_1 x, the weight is θ_1, and the goal is to minimize the cost function J(θ_1).
  Training data: (1, 1), (2, 2), (3, 3).
Linear Regression
• Look into the cost function: for each θ_1, compare the hypothesis against the data and compute the cost J(θ_1).
  J(1)   = (1/(2×3)) × (0² + 0² + 0²) = 0
  J(0.5) = (1/(2×3)) × ((0.5 - 1)² + (1 - 2)² + (1.5 - 3)²) ≈ 0.58
  J(1.5) = (1/(2×3)) × ((1.5 - 1)² + (3 - 2)² + (4.5 - 3)²) ≈ 0.58
  (Plot: J(θ_1) traces a parabola over θ_1, with its minimum at θ_1 = 1. These values are checked in code below.)
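A minimal sketch of the hypothesis and MSE cost function on the toy data (1, 1), (2, 2), (3, 3); the function names are illustrative rather than taken from the deck's sample code:

    import numpy as np

    def hypothesis(x, theta0, theta1):
        # h_theta(x) = theta0 + theta1 * x
        return theta0 + theta1 * x

    def cost(x, y, theta0, theta1):
        # MSE cost: J(theta0, theta1) = (1/(2m)) * sum((h(x) - y)^2)
        m = len(x)
        errors = hypothesis(x, theta0, theta1) - y
        return np.sum(errors ** 2) / (2 * m)

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([1.0, 2.0, 3.0])

    print(cost(x, y, 0.0, 1.0))  # 0.0
    print(cost(x, y, 0.0, 0.5))  # 0.5833...
    print(cost(x, y, 0.0, 1.5))  # 0.5833...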
Linear Regression
• Look into the cost function with both parameters: J(θ_0, θ_1) is a bowl-shaped surface over the (θ_0, θ_1) plane. (Plots: 3-D surface and contour views.)
Minimize The Cost Function
Linear Regression
• Two ways to optimize linear regression:
  • Gradient descent
  • Ordinary least squares
Gradient Descent
• Gradient descent (梯度下降法)
  Cost function: J(θ_0, θ_1). Goal: minimize J(θ_0, θ_1).
  Outline:
  • Start with some θ_0, θ_1.
  • Keep changing θ_0, θ_1 to reduce J(θ_0, θ_1) until we hopefully end up at a minimum.
Gradient Descent
• Gradient descent algorithm
  repeat until convergence {
      θ_j := θ_j - α · ∂J(θ_0, θ_1)/∂θ_j    (simultaneously for j = 0 and j = 1)
  }
  α is the learning rate, and ":=" assigns the value on the right side to the left side.
Gradient Descent
• Gradient descent intuition: each update subtracts α times the slope of the tangent line at the current θ_1.
  • Positive slope: θ_1 := θ_1 - α · (positive value), so θ_1 becomes smaller and the cost becomes smaller.
  • Negative slope: θ_1 := θ_1 - α · (negative value), so θ_1 becomes bigger and the cost becomes smaller.
Gradient Descent
• Learning rate intuition (a gradient-descent sketch in code follows this list):
  • If the learning rate is too big, gradient descent may fail to converge, or even diverge.
  • If the learning rate is too small, gradient descent can be slow.
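Putting the update rule and the learning-rate discussion together, here is a minimal gradient-descent sketch for h_θ(x) = θ_0 + θ_1 x on the same toy data; the learning rate and iteration count are illustrative choices, not values from the deck:

    import numpy as np

    def gradient_descent(x, y, alpha=0.1, n_iters=1000):
        # Start with some theta0, theta1
        theta0, theta1 = 0.0, 0.0
        m = len(x)
        for _ in range(n_iters):
            errors = theta0 + theta1 * x - y   # h_theta(x) - y
            grad0 = np.sum(errors) / m         # dJ/dtheta0
            grad1 = np.sum(errors * x) / m     # dJ/dtheta1
            # Simultaneous update: theta_j := theta_j - alpha * gradient
            theta0 -= alpha * grad0
            theta1 -= alpha * grad1
        return theta0, theta1

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([1.0, 2.0, 3.0])
    print(gradient_descent(x, y))  # converges toward (0.0, 1.0)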
Exercise - (1)
• TASK: Implement linear regression
• Sample code
Exercise - (1)
• Requirements
  1. Complete the hypothesis function and the cost function.
  2. Test (θ_0, θ_1) = (0, 0), (1, 1), and (10, -1), and print the computed cost for each (a sketch follows below).
  3. Observe how the regression lines produced by different θ values relate to their costs.
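A sketch of requirement 2, reusing the cost function and toy data from the earlier snippet (the deck's actual sample code and dataset are not included in the transcript):

    for theta0, theta1 in [(0.0, 0.0), (1.0, 1.0), (10.0, -1.0)]:
        c = cost(x, y, theta0, theta1)
        print(f"theta = ({theta0}, {theta1}), cost = {c:.4f}")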
Exercise - (1)
• Output
Linear Regression
• Two ways to optimize linear regression: gradient descent (covered above), and ordinary least squares (next).
Ordinary Least Squares
• Ordinary least squares (最小平方法/最小二乘法)
  Both methods minimize the same cost function:
  • Gradient descent: repeat until convergence { update θ }.
  • OLS: solve for θ directly.
Ordinary Least Squares
• OLS vs. gradient descent
  • Gradient descent: starts from an initial value of θ_1 and iterates toward the optimal solution.
  • OLS: computes the optimal solution directly (a normal-equation sketch follows below).
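The "solve directly" route is usually written as the normal equation θ = (XᵀX)⁻¹Xᵀy, the standard closed-form OLS solution. The slide image is not in the transcript, so this NumPy sketch is a standard reconstruction rather than the deck's own code:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([1.0, 2.0, 3.0])

    # Design matrix with a column of ones for the intercept theta0
    X = np.column_stack([np.ones_like(x), x])

    # Normal equation: solve (X^T X) theta = X^T y
    theta = np.linalg.solve(X.T @ X, X.T @ y)
    print(theta)  # [0. 1.] -> theta0 = 0, theta1 = 1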
Example
• sklearn - LinearRegression: uses OLS to optimize linear regression (a usage sketch follows below).
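A minimal usage sketch on the toy data; the exact code shown on the slide is not in the transcript:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0]])  # sklearn expects a 2-D feature matrix
    y = np.array([1.0, 2.0, 3.0])

    model = LinearRegression()
    model.fit(X, y)
    print(model.intercept_, model.coef_)  # learned theta0 and theta1
    print(model.predict([[4.0]]))         # prediction for x = 4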
Example
• sklearn - SGDRegressor: uses gradient descent to optimize linear regression (a usage sketch follows below).
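A matching sketch for SGDRegressor; the loss name follows recent scikit-learn versions, and the iteration count is an illustrative choice:

    import numpy as np
    from sklearn.linear_model import SGDRegressor

    X = np.array([[1.0], [2.0], [3.0]])
    y = np.array([1.0, 2.0, 3.0])

    # Gradient-descent-style (stochastic) optimization of the same linear model
    sgd = SGDRegressor(loss="squared_error", max_iter=1000)
    sgd.fit(X, y)
    print(sgd.intercept_, sgd.coef_)  # close to, but not exactly, (0, 1)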
Evaluation
• Framework: the trained model's predictions are compared against the ground truth to produce evaluation results.
Evaluation
• Evaluation metrics for regression: each measures the gap between predicted (預測) and actual (真實) values; smaller is better.
  • Mean squared error (MSE): (1/m) Σ_{i=1}^{m} (ŷ^(i) - y^(i))²
  • Root mean squared error (RMSE): √MSE
  • Mean absolute error (MAE): (1/m) Σ_{i=1}^{m} |ŷ^(i) - y^(i)|
Evaluation
• R-squared score (R² score)
  • Measures how well the predictions fit the actual data.
  • The best possible value is 1.
Example
• sklearn - mean_squared_error, mean_absolute_error, r2_score (a usage sketch follows below).
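A minimal sketch of the three metric calls on illustrative toy values; RMSE is taken as the square root of MSE, since sklearn's dedicated RMSE options vary across versions:

    import numpy as np
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

    y_true = np.array([1.0, 2.0, 3.0])
    y_pred = np.array([1.1, 1.9, 3.2])

    mse = mean_squared_error(y_true, y_pred)
    print("MSE :", mse)
    print("RMSE:", np.sqrt(mse))  # RMSE is the square root of MSE
    print("MAE :", mean_absolute_error(y_true, y_pred))
    print("R2  :", r2_score(y_true, y_pred))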
Exercise - (2)
• TASK: Use sklearn to implement linear regression
• Sample code
• Requirements
  • Use the data from Exercise - (1) to train LinearRegression() and SGDRegressor().
  • Print the weight values (θ) obtained by each method after training.
  • Compare the results of the two methods (a sketch follows below).
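A sketch of the exercise; the stand-in data below replaces the Exercise - (1) dataset, which is not included in the transcript:

    import numpy as np
    from sklearn.linear_model import LinearRegression, SGDRegressor

    # Stand-in data; substitute the Exercise - (1) dataset here
    X = np.array([[1.0], [2.0], [3.0]])
    y = np.array([1.0, 2.0, 3.0])

    lr = LinearRegression().fit(X, y)                                  # OLS
    sgd = SGDRegressor(loss="squared_error", max_iter=1000).fit(X, y)  # gradient descent

    # Compare the learned weights (theta) of the two methods
    print("LinearRegression:", lr.intercept_, lr.coef_)
    print("SGDRegressor    :", sgd.intercept_, sgd.coef_)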
Exercise - (3)
• TASK: Use sklearn.metrics to evaluate models
• Requirements
  • Print the RMSE of the two models from Exercise - (2), using the training data in place of test data for now (a sketch follows below).
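Continuing the Exercise - (2) sketch above (it reuses lr, sgd, X, and y), the RMSE requirement might look like this:

    import numpy as np
    from sklearn.metrics import mean_squared_error

    # Training data stands in for test data, as the requirement notes
    for name, model in [("LinearRegression", lr), ("SGDRegressor", sgd)]:
        rmse = np.sqrt(mean_squared_error(y, model.predict(X)))
        print(f"{name} RMSE: {rmse:.4f}")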
Exercise - (3)
• Output