Slide 1

Chapter 5: Machine Learning Basics
川島貴大, May 29, 2018
The University of Electro-Communications, Shouno Laboratory, B4

Slide 2

Contents
1. Maximum Likelihood Estimation
2. Bayesian Statistics
3. Supervised Learning
4. Unsupervised Learning
5. Stochastic Gradient Descent (SGD)
6. Motivation for Deep Learning

Slide 3

Maximum Likelihood Estimation

Slide 4

Maximum Likelihood Estimation: covered on the blackboard.

Slide 5

Bayesian Statistics

Slide 6

Bayesian Statistics: covered on the blackboard.

Slide 7

Supervised Learning

Slide 8

Supervised Learning: Probabilistic Supervised Learning

Review: the general linear model

    y = θᵀx + ε,  ε ∼ N(ε | 0, σ²)  ⇒  p(y | x; θ) = N(y; θᵀx, σ²)   (5.80)

The normal distribution has support (−∞, ∞), so this model cannot be used for binary classification over {0, 1}.

Slide 9

Supervised Learning: Probabilistic Supervised Learning

For the reason above, we want a function whose range is (0, 1) → the sigmoid function

    f(x) = 1 / (1 + e^(−x))
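As a quick illustration (not from the slides), the sigmoid takes only a few lines of Python; the branching below is a common trick to avoid floating-point overflow in the exponential:

```python
import math

def sigmoid(x):
    # Logistic sigmoid f(x) = 1 / (1 + e^(-x)), written so that
    # math.exp is only ever called on a non-positive argument.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(sigmoid(0.0))    # 0.5
print(sigmoid(50.0))   # very close to 1
print(sigmoid(-50.0))  # very close to 0
```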

Slide 10

Supervised Learning: Probabilistic Supervised Learning

Logistic regression:

    p(y = 1 | x; θ) = 1 / (1 + e^(−θᵀx)) = 1 / (1 + e^(−(θ₀ + θ₁x₁ + θ₂x₂ + ···)))

→ usable for binary classification over {0, 1}.
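A minimal sketch of logistic regression on a hypothetical 1-D toy problem (the data, learning rate, and iteration count are made up for illustration), trained by batch gradient ascent on the log-likelihood:

```python
import math

def predict_proba(theta, x):
    # p(y = 1 | x; theta), theta = (theta0, theta1, ...), x = (x1, ...).
    z = theta[0] + sum(t * xi for t, xi in zip(theta[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D data: label is 1 iff x > 0.
data = [((x,), 1 if x > 0 else 0)
        for x in [-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0]]

theta = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    # Gradient of the average log-likelihood: (y - p) per example.
    grad = [0.0, 0.0]
    for x, y in data:
        err = y - predict_proba(theta, x)
        grad[0] += err
        grad[1] += err * x[0]
    theta = [t + lr * g / len(data) for t, g in zip(theta, grad)]

print(predict_proba(theta, (2.0,)))   # near 1
print(predict_proba(theta, (-2.0,)))  # near 0
```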

Slide 11

Supervised Learning: Support Vector Machines (SVM)

Consider a binary classification problem that is linearly separable in feature space → there are many ways to draw the separating line (hyperplane).

Slide 12

Supervised Learning: Support Vector Machines (SVM)

[Figure: maximize the margin; supporting hyperplanes, classification hyperplane, support vectors]

Learn the classification hyperplane so that the "margin" to the supporting hyperplanes is maximized.

Slide 13

Supervised Learning: Support Vector Machines (SVM)

The equation of a plane is ax + by + c = 0; generalizing, the classification hyperplane can be written in terms of the set of support vectors x* as

    w₀ + wᵀx* = 0

What we learn are the coefficients w. The two supporting hyperplanes are the classification hyperplane shifted by ±k:

    w₀ + wᵀx* = k,  w₀ + wᵀx* = −k  ⇒  |w₀ + wᵀx*| = k

Slide 14

Supervised Learning: Support Vector Machines (SVM)

Scaling a hyperplane equation by a constant describes the same hyperplane, so the learned solution is not unique → impose a constraint to make it unique.

Constraint: |w₀ + wᵀx*| = 1

Normalizing by the weight vector,

    |w₀ + wᵀx*| / ‖w‖ = 1 / ‖w‖

Learn w so as to maximize this quantity.
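To make the geometry concrete, here is a small check (with a hypothetical weight vector, not one learned by an SVM) that a point satisfying |w₀ + wᵀx| = 1 lies at distance 1/‖w‖ from the classification hyperplane:

```python
import math

def distance_to_hyperplane(w0, w, x):
    # Distance from point x to the hyperplane w0 + w.x = 0:
    # |w0 + w.x| / ||w||.
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(w0 + dot) / norm

# Hypothetical classifier: hyperplane x1 + x2 - 1 = 0, i.e. w0 = -1, w = (1, 1).
w0, w = -1.0, (1.0, 1.0)

# A point on the supporting hyperplane w0 + w.x = 1:
sv = (1.0, 1.0)  # w0 + w.sv = -1 + 2 = 1
print(distance_to_hyperplane(w0, w, sv))  # 1 / ||w|| = 1 / sqrt(2)
```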

Slide 15

Supervised Learning: Support Vector Machines (SVM)

So far we have assumed linearly separable problems → we want to handle linearly inseparable ones.

Solution: apply a nonlinear transformation to the features, projecting them into a different feature space.

Slide 16

Supervised Learning: Support Vector Machines (SVM)

Example: the law of universal gravitation. Features: masses m₁, m₂ and distance r.

    f(m₁, m₂, r) = G m₁m₂ / r²

We want this to be linear in each feature → take logarithms:

    log f(m₁, m₂, r) = log G + log m₁ + log m₂ − log r²

Now it is linear (in the log-features).
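A quick numerical check of this log transform (the mass and distance values are arbitrary example numbers):

```python
import math

G = 6.674e-11  # gravitational constant

def f(m1, m2, r):
    return G * m1 * m2 / r ** 2

# log f = log G + log m1 + log m2 - log r^2: linear in the log-features.
m1, m2, r = 5.0, 3.0, 2.0
lhs = math.log(f(m1, m2, r))
rhs = math.log(G) + math.log(m1) + math.log(m2) - math.log(r ** 2)
print(abs(lhs - rhs))  # ~0 (up to floating-point error)
```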

Slide 17

Supervised Learning: Support Vector Machines (SVM)

In general we project into a space of higher dimension than the original features → with a lot of training data, the computation becomes enormous.

→ Using a piece of magic called the kernel trick, the projection into a high-dimensional (even infinite-dimensional) space can be evaluated at small computational cost.
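For instance, the Gaussian (RBF) kernel corresponds to an inner product in an infinite-dimensional feature space, yet evaluating it costs only O(d) in the input dimension d (a sketch; γ is a hyperparameter chosen here for illustration):

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2): the inner product of the
    # infinite-dimensional feature maps of x and z, computed directly.
    sq = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq)

print(rbf_kernel((1.0, 2.0), (1.0, 2.0)))  # 1.0 for identical points
print(rbf_kernel((0.0, 0.0), (3.0, 4.0)))  # exp(-25), nearly 0
```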

Slide 18

Unsupervised Learning

Slide 19

Unsupervised Learning: Principal Component Analysis

Already covered, so skipped.

Slide 20

Unsupervised Learning: The Clustering Problem

Clustering: grouping similar data points together.

Slide 21

Unsupervised Learning: The k-means Method

Algorithm:
1. As initial cluster centers, pick k centroids at random from the data points (k is given).
2. Assign each sample to its nearest centroid.
3. Move each centroid to the mean of the data assigned to it.
4. Repeat steps 2 and 3.
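The four steps above can be sketched directly in Python (the toy data and seed are made up for illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # step 1: random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):             # step 4: repeat steps 2 and 3
        clusters = [[] for _ in range(k)]
        for p in points:               # step 2: assign to nearest centroid
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        for j, members in enumerate(clusters):  # step 3: recenter
            if members:
                centroids[j] = tuple(sum(coord) / len(members)
                                     for coord in zip(*members))
    return centroids, clusters

# Two well-separated blobs; k = 2 should recover them.
points = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1),
          (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]
centroids, clusters = kmeans(points, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```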

Slide 22

Unsupervised Learning: The k-means Method

Demo: http://tech.nitoyon.com/ja/blog/2013/11/07/k-means/

Slide 23

Unsupervised Learning: Supplement on the k-means++ Method

The k-means method is highly sensitive to the initial centroids → improve it by scattering the initial centroids far apart from each other (the k-means++ method).

Demo: https://wasyro.github.io/k-meansppVisualizer/
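A sketch of the k-means++ seeding idea (hypothetical toy data): the first centroid is uniform at random, and each subsequent centroid is drawn with probability proportional to its squared distance from the nearest centroid chosen so far, which pushes the initial centroids apart.

```python
import random

def kmeans_pp_init(points, k, seed=0):
    rng = random.Random(seed)
    centroids = [rng.choice(points)]  # first centroid: uniform at random
    while len(centroids) < k:
        # Squared distance of each point to its nearest chosen centroid.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
              for p in points]
        # Sample the next centroid with probability proportional to d2.
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for p, weight in zip(points, d2):
            acc += weight
            if acc >= r:
                centroids.append(p)
                break
        else:  # guard against floating-point rounding
            centroids.append(points[-1])
    return centroids

points = [(0.0, 0.0), (0.1, 0.1), (10.0, 10.0), (10.1, 9.9)]
print(kmeans_pp_init(points, 2))  # almost surely one centroid from each blob
```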

Slide 24

Stochastic Gradient Descent (SGD)

Slide 25

Stochastic Gradient Descent (SGD)

The cost function can often be decomposed into a sum of per-example losses over the training data. In linear regression, using the negative log-likelihood L(x, y, θ),

    J(θ) = E_{x,y∼p̂_data}[L(x, y, θ)] = (1/m) Σᵢ₌₁ᵐ L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.96)

    L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ) = −log p(y | x, θ)

Apply gradient descent to this cost function with respect to the parameters θ.

Slide 26

Stochastic Gradient Descent (SGD)

    ∇_θ J(θ) = ∇_θ [(1/m) Σᵢ₌₁ᵐ L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)] = (1/m) Σᵢ₌₁ᵐ ∇_θ L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.97)

This computation is O(m), which becomes quite painful as the data grow → stochastic gradient descent (SGD).

Slide 27

Stochastic Gradient Descent (SGD)

SGD views the gradient as an expectation and approximates it by applying gradient descent to a small subset of the samples (a minibatch). A minibatch B = {x⁽¹⁾, . . . , x⁽ᵐ′⁾} is drawn uniformly at random from the training set. m′ is typically around 100 to 300, regardless of how large m is.

The gradient estimate g is

    g = (1/m′) ∇_θ Σᵢ₌₁ᵐ′ L(x⁽ⁱ⁾, y⁽ⁱ⁾, θ)   (5.98)

and the parameter update is θ ← θ − εg.
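The minibatch update above can be sketched for a toy linear-regression problem with squared-error loss (the data, learning rate, batch size, and step count are made up for illustration):

```python
import random

def sgd_linear_regression(data, lr=0.05, batch_size=4, steps=500, seed=0):
    # Minibatch SGD for y = theta0 + theta1 * x with squared-error loss.
    rng = random.Random(seed)
    theta0, theta1 = 0.0, 0.0
    for _ in range(steps):
        batch = rng.sample(data, batch_size)  # uniform minibatch B
        g0 = g1 = 0.0
        for x, y in batch:
            err = (theta0 + theta1 * x) - y   # per-example loss gradient
            g0 += err
            g1 += err * x
        theta0 -= lr * g0 / batch_size        # theta <- theta - eps * g
        theta1 -= lr * g1 / batch_size
    return theta0, theta1

# Noise-free data from y = 1 + 2x; SGD should approximately recover (1, 2).
data = [(x / 10.0, 1.0 + 2.0 * (x / 10.0)) for x in range(-20, 21)]
theta0, theta1 = sgd_linear_regression(data)
print(theta0, theta1)  # approximately (1, 2)
```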

Slide 28

Motivation for Deep Learning

Slide 29

Motivation for Deep Learning: The Curse of Dimensionality

As the number of feature dimensions grows, the number of possible feature combinations grows exponentially.

[Figure: conceptual diagram for the case where each feature can take 10 distinct values]

Slide 30

Motivation for Deep Learning: The Curse of Dimensionality

As an example, consider the k-nearest-neighbour (k-NN) learning algorithm.

k-nearest neighbours: for a test input, find the k closest training points in feature space and assign the test input the majority class among those k neighbours.

[Figure: with k = 3, the input ● is labeled ■]
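The definition above translates almost directly into code (the 2-D toy data are made up for illustration):

```python
from collections import Counter

def knn_predict(train, x, k=3):
    # Find the k nearest training points to x (squared Euclidean
    # distance) and take a majority vote over their labels.
    neighbours = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], x))
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-D data: class "A" around the origin, class "B" around (5, 5).
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((0.1, 0.3), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.8), "B"), ((4.9, 5.2), "B")]

print(knn_predict(train, (0.5, 0.5)))  # "A"
print(knn_predict(train, (4.0, 4.0)))  # "B"
```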

Slide 31

Motivation for Deep Learning: The Curse of Dimensionality

In high dimensions the data become sparse in feature space, so k-NN is unlikely to work well → likewise, many classical machine learning methods can no longer cope.

Slide 32

References

[1] 須山敦志, ベイズ推論による機械学習入門 (Introduction to Machine Learning by Bayesian Inference). Kodansha, 2017.
[2] 小野田崇, サポートベクターマシン (Support Vector Machines). Ohmsha, 2007.
[3] Sebastian Raschka, Python Machine Learning (Japanese edition, translated by Quipu Inc.). Impress, 2016.