Deep Learning, Chapter 5 (Second Half)
Takahiro Kawashima
May 29, 2018
Seminar reading-group slides. Goodfellow book, Sections 5.5–5.11.
Transcript
Chapter 5: Machine Learning Basics
Takahiro Kawashima, May 29, 2018
The University of Electro-Communications, Shouno Lab, B4
Contents
1. Maximum likelihood estimation
2. Bayesian statistics
3. Supervised learning
4. Unsupervised learning
5. Stochastic gradient descent (SGD)
6. Motivation for deep learning
Maximum Likelihood Estimation

Maximum likelihood estimation: done on the blackboard.
Bayesian Statistics

Bayesian statistics: done on the blackboard.
Supervised Learning

Probabilistic supervised learning
Review: the general linear model
    y = θ^T x + ϵ,  ϵ ∼ N(ϵ | 0, σ²)  ⇒  p(y | x; θ) = N(y; θ^T x, σ²)   (5.80)
The normal distribution has support (−∞, ∞), so it cannot be used directly for binary classification over {0, 1}.
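The Gaussian likelihood in Eq. (5.80) can be written out directly (a minimal illustration, not part of the slides; function names are mine):

```python
import math

def p_y_given_x(y, x, theta, sigma2=1.0):
    """Gaussian likelihood p(y | x; theta) = N(y; theta^T x, sigma^2), as in Eq. (5.80)."""
    mu = sum(t * xi for t, xi in zip(theta, x))  # mean theta^T x
    return math.exp(-(y - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
```

At y = θ^T x the density attains its maximum 1/√(2πσ²), and it decays symmetrically on both sides, which is exactly why it cannot model a {0, 1}-valued target.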
Probabilistic supervised learning
For the reason above, we want a function whose range is (0, 1) → the sigmoid function
    f(x) = 1 / (1 + e^{−x})
Logistic regression
    p(y = 1 | x; θ) = 1 / (1 + e^{−θ^T x}) = 1 / (1 + e^{−(θ0 + θ1 x1 + θ2 x2 + ···)})
→ usable for binary classification over {0, 1}
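A minimal NumPy sketch of the sigmoid and the resulting class probability (names are mine, for illustration):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid f(z) = 1 / (1 + e^{-z}), with range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta0, theta, x):
    """p(y = 1 | x; theta) for logistic regression: sigmoid of the linear score."""
    return sigmoid(theta0 + theta @ x)
```

A score of 0 gives probability 0.5; large positive or negative scores saturate toward 1 or 0, which is what makes the output usable as a class probability.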
Support vector machines (SVM)
Consider binary classification that is linearly separable in feature space → there are many ways to draw the separating line (hyperplane).
[Figure: maximize the margin; labels: supporting hyperplanes, separating hyperplane, support vectors]
The separating hyperplane is learned so as to maximize the "margin" to the supporting hyperplanes.
Since a plane has equation ax + by + c = 0, generalizing, the separating hyperplane can be written in terms of the set of support vectors x* as
    w0 + w^T x* = 0,
and it is this w that is learned. The two supporting hyperplanes are the separating hyperplane shifted by ±k:
    w0 + w^T x* = k,  w0 + w^T x* = −k  ⇒  |w0 + w^T x*| = k
Scaling the equation of a plane by a constant describes the same plane, so the learned solution is not unique → impose a constraint to make it unique.
Constraint: |w0 + w^T x*| = 1
Normalizing by the weight vector,
    |w0 + w^T x*| / ‖w‖ = 1 / ‖w‖,
and we learn so as to maximize this margin (equivalently, minimize ‖w‖).
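The geometry is easy to check numerically: under the constraint |w0 + w^T x*| = 1, a support vector's distance to the separating hyperplane is exactly 1/‖w‖ (a sketch with made-up numbers):

```python
import numpy as np

def distance_to_hyperplane(w0, w, x):
    """Distance from point x to the hyperplane w0 + w^T x = 0."""
    return abs(w0 + w @ x) / np.linalg.norm(w)

w = np.array([3.0, 4.0])       # ||w|| = 5
w0 = -1.0
x_star = np.array([0.0, 0.5])  # chosen so that w0 + w^T x_star = 1
# distance_to_hyperplane(w0, w, x_star) -> 1 / ||w|| = 0.2
```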
So far, the linearly separable case → we would like to handle data that is not linearly separable.
Solution: apply a nonlinear transformation to the features, projecting them into a different feature space.
Example: the law of universal gravitation.
Features: masses m1, m2 and distance r.
    f(m1, m2, r) = G m1 m2 / r²
We want this to be linear in each feature → take the logarithm:
    log f(m1, m2, r) = log G + log m1 + log m2 − log r²
It is now linear (in the log-features).
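The log-linearization can be verified numerically (a quick check, using −log r² = −2 log r):

```python
import math

G = 6.674e-11  # gravitational constant

def f(m1, m2, r):
    """Newtonian gravity f = G m1 m2 / r^2."""
    return G * m1 * m2 / r**2

def log_f_linear(m1, m2, r):
    """log f as a linear function of the log-features log m1, log m2, log r."""
    return math.log(G) + math.log(m1) + math.log(m2) - 2 * math.log(r)
```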
In general we project into a space of higher dimension than the original features → with a lot of training data the computation becomes enormous → with the magic called the kernel trick, projections into high-dimensional (even infinite-dimensional) spaces can be evaluated at small cost.
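The trick can be seen concretely with a degree-2 polynomial kernel: k(x, z) = (x·z)² is evaluated in O(d), yet it equals the inner product after an explicit projection φ into a higher-dimensional space (a small sketch; this particular kernel and φ are my choice of example, not from the slides):

```python
import numpy as np

def kernel(x, z):
    """Degree-2 polynomial kernel k(x, z) = (x . z)^2, evaluated without any projection."""
    return (x @ z) ** 2

def phi(x):
    """Explicit feature map for the same kernel (maps 2-D inputs to 3-D features)."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
# kernel(x, z) and phi(x) @ phi(z) agree, but the kernel never builds phi explicitly
```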
Unsupervised Learning

Principal component analysis: already covered, so skipped.

Clustering
Clustering groups similar data points together.
The k-means algorithm
1. As initial cluster centers, choose k centroids at random from the data (k is given).
2. Assign each sample to its nearest centroid.
3. Move each centroid to the mean of the data assigned to it.
4. Repeat steps 2 and 3.
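The four steps above can be sketched in NumPy (a minimal implementation, assuming no cluster ever becomes empty):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means, following the four steps above."""
    rng = np.random.default_rng(seed)
    # 1. choose k centroids at random from the data
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. assign each sample to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. move each centroid to the mean of its assigned samples
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. repeat steps 2 and 3 until the centroids stop moving
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels
```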
See the demo: http://tech.nitoyon.com/ja/blog/2013/11/07/k-means/
Supplement: k-means++
Plain k-means is highly sensitive to the initial centroids → improved by scattering the initial centroids far apart from each other (k-means++).
See the demo: https://wasyro.github.io/k-meansppVisualizer/
Stochastic Gradient Descent (SGD)

The cost function can often be decomposed into a sum of per-example losses.
In linear regression, with the negative log-likelihood loss L(x, y, θ),
    J(θ) = E_{x,y∼p̂_data}[L(x, y, θ)] = (1/m) Σ_{i=1}^m L(x^(i), y^(i), θ)   (5.96)
    L(x^(i), y^(i), θ) = −log p(y | x; θ)
A gradient method is applied to this cost function with respect to the parameters θ.
    ∇θ J(θ) = ∇θ [ (1/m) Σ_{i=1}^m L(x^(i), y^(i), θ) ] = (1/m) Σ_{i=1}^m ∇θ L(x^(i), y^(i), θ)   (5.97)
This costs O(m) per step, which becomes quite painful as the data grows → stochastic gradient descent (SGD).
SGD
Regard the gradient as an expectation, and assume it can be estimated approximately by a gradient step on a small subset of samples (a minibatch).
Draw a minibatch B = {x^(1), ..., x^(m′)} uniformly at random from the training set.
m′ is typically around 100–300, and is kept the same even when m is large.
Gradient estimator g:
    g = (1/m′) ∇θ Σ_{i=1}^{m′} L(x^(i), y^(i), θ)   (5.98)
Parameter update: θ ← θ − ϵ g
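A sketch of minibatch SGD for linear least squares (the setup and function names are mine; lr plays the role of the step size ϵ):

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.1, batch_size=2, n_steps=500, seed=0):
    """Minibatch SGD for linear regression with squared loss."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        # draw a minibatch B uniformly at random from the training set
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        # gradient estimate g = (1/m') sum_i grad_theta L(x_i, y_i, theta)
        g = Xb.T @ (Xb @ theta - yb) / batch_size
        # update theta <- theta - lr * g
        theta -= lr * g
    return theta
```

Each step touches only batch_size examples rather than all m, which is the whole point of the method.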
Motivation for Deep Learning

The curse of dimensionality
As the dimensionality of the features grows, the number of configurations they can take grows exponentially.
[Figure: conceptual diagram for the case where each feature can take 10 distinct values]
The curse of dimensionality
As an example, consider the learning algorithm called k-nearest neighbors (k-NN).
k-nearest neighbors: given a test input, find the k nearest training points in feature space, and assign the test input the class chosen by majority vote among those training points.
[Figure: for k = 3, the input ● is labeled ■]
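The voting rule can be sketched in a few lines (a minimal illustration; the data and names are made up):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Assign x the majority class among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distances in feature space
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]
```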
If the data is sparse in feature space, k-NN seems unlikely to work well → likewise, many classical machine learning methods can no longer cope.