Deep Learning 5章後半 (Deep Learning, Chapter 5, Second Half)
Takahiro Kawashima
May 29, 2018
Seminar reading-group slides covering Sections 5.5–5.11 of the Goodfellow et al. Deep Learning book.
Transcript
Chapter 5: Machine Learning Basics — Takahiro Kawashima, May 29, 2018. The University of Electro-Communications, 庄野研究室 (Shouno Lab), B4.
Contents: 1. Maximum likelihood estimation 2. Bayesian statistics 3. Supervised learning 4. Unsupervised learning 5. Stochastic gradient descent (SGD) 6. Motivation for deep learning
Maximum Likelihood Estimation
Maximum likelihood estimation: covered on the whiteboard.
Bayesian Statistics
Bayesian statistics: covered on the whiteboard.
Supervised Learning
Supervised Learning — Probabilistic Supervised Learning. Review: the general linear model
y = θ^T x + ε,  ε ∼ N(ε | 0, σ²)  ⇒  p(y | x; θ) = N(y; θ^T x, σ²)   (5.80)
The normal distribution has support (−∞, ∞) → it cannot be used for binary {0, 1} classification.
Supervised Learning — Probabilistic Supervised Learning. For the reason above, we want a function whose range is (0, 1) → the sigmoid function f(x) = 1 / (1 + e^(−x)).
Supervised Learning — Probabilistic Supervised Learning. Logistic regression:
p(y = 1 | x; θ) = 1 / (1 + e^(−θ^T x)) = 1 / (1 + e^(−(θ₀ + θ₁x₁ + θ₂x₂ + ···)))
→ this can be used for binary {0, 1} classification.
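A minimal sketch (not from the slides) of the sigmoid and a logistic-regression prediction, assuming a weight vector theta whose first component is the bias θ₀ and an input x with a leading 1 appended:

import numpy as np

def sigmoid(z):
    # logistic sigmoid f(z) = 1 / (1 + exp(-z)), mapping R onto (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, x):
    # p(y = 1 | x; theta) for logistic regression
    return sigmoid(theta @ x)

# hypothetical example: bias plus two features
theta = np.array([-0.5, 2.0, 1.0])
x = np.array([1.0, 0.3, -0.2])      # [1, x1, x2]
print(predict_proba(theta, x))      # probability that y = 1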
Supervised Learning — Support Vector Machines (SVM). Consider a binary classification problem that is linearly separable in feature space → there are many ways to draw the separating line (hyperplane).
Supervised Learning — Support Vector Machines (SVM). The classification hyperplane is learned so as to maximize the "margin" of the supporting hyperplanes. [Figure: maximize the margin; supporting hyperplanes, classification hyperplane, support vectors.]
Supervised Learning — Support Vector Machines (SVM). The equation of a plane is ax + by + c = 0; generalizing, the classification hyperplane can be written as w₀ + w^T x = 0, and it is this w that we learn. Shifting the classification hyperplane by ±k gives the two supporting hyperplanes, on which the support vectors x* lie:
w₀ + w^T x* = k,  w₀ + w^T x* = −k  ⇒  |w₀ + w^T x*| = k
Supervised Learning — Support Vector Machines (SVM). Multiplying the hyperplane equation by a constant describes the same hyperplane, so the learned solution is not uniquely determined → impose a constraint to make it unique.
Constraint: |w₀ + w^T x*| = 1. Normalizing by the weight vector,
|w₀ + w^T x*| / ‖w‖ = 1 / ‖w‖
and we train so as to maximize this (the margin).
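As a compact restatement (not on the slides): under the standard hard-margin formulation with labels tₙ ∈ {−1, +1}, maximizing 1/‖w‖ is equivalent to the constrained problem

% Hard-margin SVM in the usual primal form:
% minimizing ||w||^2 / 2 maximizes the margin 1 / ||w||,
% while the constraints keep every sample outside the margin.
\min_{w,\, w_0} \ \frac{1}{2}\lVert w \rVert^2
\quad \text{s.t.} \quad
t_n \left( w_0 + w^\top x_n \right) \ge 1, \qquad n = 1, \dots, N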
Supervised Learning — Support Vector Machines (SVM). So far the problem has been linearly separable → we also want to handle problems that are not linearly separable. Solution: apply a nonlinear transformation to the features and project them into another feature space.
Supervised Learning — Support Vector Machines (SVM). Example: the law of universal gravitation, with features masses m₁, m₂ and distance r:
f(m₁, m₂, r) = G m₁ m₂ / r²
We want this to be linear in each feature → take the logarithm:
log f(m₁, m₂, r) = log G + log m₁ + log m₂ − log r²
It is now linear.
Supervised Learning — Support Vector Machines (SVM). In general we project into a space of higher dimension than the original features → with a lot of training data the computation becomes enormous → with the magic of the kernel trick, such high-dimensional (even infinite-dimensional) projections can be evaluated at low computational cost.
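A minimal sketch (not from the slides) of the idea: the Gaussian (RBF) kernel returns the inner product of two points in an implicit infinite-dimensional feature space without ever constructing that space; gamma is an assumed hyperparameter.

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2) = <phi(x), phi(z)>
    # for an implicit infinite-dimensional feature map phi
    diff = x - z
    return np.exp(-gamma * np.dot(diff, diff))

def gram_matrix(X, gamma=1.0):
    # pairwise kernel values for data X of shape (n, d);
    # this (n x n) matrix is all a kernel SVM needs
    n = X.shape[0]
    return np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)]
                     for i in range(n)])

X = np.random.randn(5, 3)       # hypothetical data: 5 samples, 3 features
print(gram_matrix(X).shape)     # (5, 5)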
Unsupervised Learning
Unsupervised Learning — Principal component analysis: already covered, so skipped.
Unsupervised Learning — Clustering: group similar data points together.
Unsupervised Learning — The k-means algorithm (see the sketch after this list):
1. As initial cluster centers, pick k centroids at random from the data (k is given).
2. Assign each sample to its nearest centroid.
3. Move each centroid to the center of the data assigned to it.
4. Repeat steps 2 and 3.
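A minimal NumPy sketch of these steps (an illustration, not the presenter's code), assuming a data matrix X of shape (n, d):

import numpy as np

def kmeans(X, k, n_iter=100, seed=None):
    rng = np.random.default_rng(seed)
    # step 1: pick k data points at random as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # step 2: assign each sample to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: move each centroid to the mean of its assigned samples
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break                                  # step 4: repeat until converged
        centroids = new_centroids
    return centroids, labels

X = np.random.randn(200, 2)                        # hypothetical 2-D data
centroids, labels = kmeans(X, k=3, seed=0)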
Unsupervised Learning — k-means: see the demo at http://tech.nitoyon.com/ja/blog/2013/11/07/k-means/
Unsupervised Learning — Supplement: the k-means++ method. Plain k-means is highly sensitive to the initial centroids → this is improved by spreading the initial centroids far apart from one another (k-means++). See the demo at https://wasyro.github.io/k-meansppVisualizer/
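A sketch of the k-means++ initialization under the standard D² weighting (an illustration, not the presenter's code): each new centroid is sampled with probability proportional to its squared distance to the nearest centroid chosen so far.

import numpy as np

def kmeans_pp_init(X, k, seed=None):
    rng = np.random.default_rng(seed)
    # first centroid: a data point chosen uniformly at random
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance from each point to its nearest chosen centroid
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        # sample the next centroid with probability proportional to d2
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)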
Stochastic Gradient Descent (SGD)
Stochastic Gradient Descent (SGD). The cost function can often be decomposed into a sum of per-example losses over the training data. For linear regression, using the negative log-likelihood L(x, y, θ):
J(θ) = E_{x,y ∼ p̂_data}[L(x, y, θ)] = (1/m) Σ_{i=1}^{m} L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.96)
L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ) = −log p(y | x; θ)
Gradient descent is applied to this cost function with respect to the parameters θ.
Stochastic Gradient Descent (SGD).
∇_θ J(θ) = ∇_θ [ (1/m) Σ_{i=1}^{m} L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ) ] = (1/m) Σ_{i=1}^{m} ∇_θ L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.97)
This computation costs O(m), which becomes quite painful as the dataset grows → stochastic gradient descent (SGD).
Stochastic Gradient Descent (SGD). Since the gradient is an expectation, it can be estimated approximately from a small subset of the samples (a minibatch). A minibatch B = {x⁽¹⁾, ..., x⁽ᵐ′⁾} is drawn uniformly at random from the training set; m′ is typically around 100–300 and can stay fixed even when m is large.
Gradient estimator g:
g = (1/m′) ∇_θ Σ_{i=1}^{m′} L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.98)
Parameter update: θ ← θ − εg
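A minimal sketch of this update loop for a generic differentiable loss (an illustration, not the presenter's code); grad_L is an assumed user-supplied function returning ∇_θ L(x, y, θ) for one example.

import numpy as np

def sgd(theta, X, Y, grad_L, lr=0.01, batch_size=128, n_steps=1000, seed=None):
    rng = np.random.default_rng(seed)
    m = len(X)
    for _ in range(n_steps):
        # draw a minibatch uniformly at random from the training set
        idx = rng.choice(m, size=min(batch_size, m), replace=False)
        # g = (1/m') * sum_i grad_theta L(x_i, y_i, theta)   (cf. eq. 5.98)
        g = np.mean([grad_L(X[i], Y[i], theta) for i in idx], axis=0)
        # parameter update: theta <- theta - eps * g
        theta = theta - lr * g
    return theta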
Motivation for Deep Learning
Motivation for Deep Learning — The curse of dimensionality: as the number of feature dimensions grows, the number of configurations the features can take grows exponentially. [Figure: conceptual diagram for the case where each feature can take 10 distinct values.]
Motivation for Deep Learning — The curse of dimensionality. As an example, consider the k-nearest-neighbour (k-NN) learning algorithm. k-NN: for a test input, find the k nearest training points in feature space and assign the test input the class chosen by majority vote among those training points' classes. [Figure: the k = 3 case; the input ● is labelled ■.]
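A minimal NumPy sketch of k-NN classification (an illustration, not the presenter's code):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    # distances from the test input to every training point in feature space
    dists = np.linalg.norm(X_train - x_test, axis=1)
    # indices of the k nearest training points
    nearest = np.argsort(dists)[:k]
    # majority vote over their class labels
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]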
Motivation for Deep Learning — The curse of dimensionality. When data are spread thinly in feature space, k-NN is unlikely to work well → likewise, many classical machine-learning methods can no longer cope.
References
[1] 須山敦志, ベイズ推論による機械学習入門. 講談社, 2017.
[2] 小野田崇, サポートベクターマシン. オーム社, 2007.
[3] Sebastian Raschka, 達人データサイエンティストによる理論と実践 Python機械学習プログラミング (translated by 株式会社クイープ). インプレス, 2016.