Slide 1

Yusuke Miyake (GMO PEPABO inc.) / Math Study Group for Programmers @ Fukuoka
Gradient Descent in Go: Theory and Practice

Slide 2

Principal Engineer, Yusuke Miyake (@monochromegane), minne Division
http://blog.monochromegane.com

Slide 3

Agenda
• What is gradient descent?
• Steepest descent (gradient descent, GD)
• Stochastic gradient descent
• Optimizing gradient descent
• Summary

Slide 4

What is gradient descent?

Slide 5

What is gradient descent?
• One of the techniques for training a model in machine learning.
• The model's parameters are updated so that the error between the model and the training data becomes as small as possible.
• The parameters are updated by repeatedly differentiating a function that defines the error and moving it toward its minimum.

Slide 6

I see...??

Slide 7

For example, suppose the error is defined by the function

    f(x) = (x − 1)²

We consider the error to be minimal once we find the value of x that minimizes this function.

Slide 8

In other words: just keep differentiating and look for the point where the slope becomes 0.
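As a quick worked check (not on the original slide), the minimum of the error function above can also be found directly:

    f(x) = (x − 1)²
    f′(x) = 2(x − 1)
    f′(x) = 0  ⟹  x = 1

so the error is minimal at x = 1. Gradient descent reaches the same point numerically, as the next slides show.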

Slide 9

Guess at random? That would never finish, so instead we use the slope we computed to repeatedly increase (or decrease) x:

    x := x − (d/dx) f(x)

If the sign of the derivative is negative, increase x; if the sign of the derivative is positive, decrease x.

Slide 10

Learning rate
The learning rate η controls how strongly x is updated:

    x := x − η (d/dx) f(x)

If η is too large, x moves too far each step and the process may fail to converge or may even diverge. If it is too small, x moves very little and the number of iterations may grow.
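The two formulas above are enough to implement the whole idea. A minimal sketch in Go, assuming f(x) = (x − 1)² as on the previous slides; the starting point, learning rate and iteration count are illustrative choices, not values from the talk:

package main

import "fmt"

func main() {
    // f(x) = (x-1)^2, so its derivative is df/dx = 2*(x-1)
    derivative := func(x float64) float64 { return 2 * (x - 1) }

    x := 5.0   // arbitrary starting point
    eta := 0.1 // learning rate η

    for i := 0; i < 100; i++ {
        // move against the slope, scaled by the learning rate
        x = x - eta*derivative(x)
    }
    fmt.Println(x) // approaches the minimum at x = 1
}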

Slide 11

Objective function
• Defines the error between the model and the training data. The parameters to be learned are denoted θ.

    E(θ) = (1/2) Σ_{i=1}^{n} (y_i − f_θ(x_i))²

This is the difference (error) between each training value y_i and the prediction produced by the model with the current parameters θ, summed as squared errors over all the training data.

Slide 12

Objective function
• From here, all we have to do is differentiate the objective function (the function that defines the error) with respect to the parameters and drive the error toward its minimum.
→ Steepest descent

Slide 13

Steepest descent - gradient descent, GD -

Slide 14

A concrete example

Slide 15

Polynomial regression
Training set: generated from a sine function, with random noise of a fixed standard deviation added.
Model: predictions use the third-degree polynomial

    f_θ(x) = θ₀ + θ₁x + θ₂x² + θ₃x³
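A hedged sketch of how such a training set could be generated in Go. The Data/DataSet types mirror the ones used in the code slides later on; the sine period, the noise standard deviation (0.1 here) and the sample count are assumptions for illustration, not values taken from the talk's repository:

package data

import (
    "math"
    "math/rand"
)

// Data is a single training example; DataSet matches the slices used later.
type Data struct{ X, Y float64 }
type DataSet []Data

// makeTrainingSet samples y = sin(2πx) on [0, 1) and adds Gaussian noise.
func makeTrainingSet(n int, stddev float64) DataSet {
    dataset := make(DataSet, n)
    for i := 0; i < n; i++ {
        x := float64(i) / float64(n)
        y := math.Sin(2*math.Pi*x) + rand.NormFloat64()*stddev
        dataset[i] = Data{X: x, Y: y}
    }
    return dataset
}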

Slide 16

Polynomial regression
Objective function:

    E(θ) = (1/2) Σ_{i=1}^{n} (y_i − f_θ(x_i))²
    f_θ(x) = θ₀ + θ₁x + θ₂x² + θ₃x³

The parameters are updated using the derivatives obtained by taking the partial derivative of E with respect to each of the model's parameters θ₀, θ₁, θ₂, θ₃.
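Filling in the differentiation step that leads to the update rules on the next slide: since ∂f_θ(x_i)/∂θ_j = x_i^j, the partial derivative of the objective function with respect to the j-th coefficient (j = 0, 1, 2, 3) is

    ∂E/∂θ_j = Σ_{i=1}^{n} (f_θ(x_i) − y_i) · x_i^j

and plugging this gradient into θ_j := θ_j − η ∂E/∂θ_j yields exactly the four update rules shown next.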

Slide 17

Polynomial regression
Using the same objective function and model as above, the parameter update rules (each obtained by the partial derivative with respect to θ₀, θ₁, θ₂, θ₃ respectively) are:

    θ₀ := θ₀ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i)
    θ₁ := θ₁ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i
    θ₂ := θ₂ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i²
    θ₃ := θ₃ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i³

Slide 18

Polynomial regression with steepest descent (Golang)

// fθ(x): the model
func PredictionFunction(x float64, thetas []float64) float64 {
    result := 0.0
    for i, theta := range thetas {
        result += theta * math.Pow(x, float64(i))
    }
    return result
}

// E(θ): the objective function
func ObjectiveFunction(trainings DataSet, thetas []float64) float64 {
    result := 0.0
    for _, training := range trainings {
        result += math.Pow((training.Y - PredictionFunction(training.X, thetas)), 2)
    }
    return result / 2.0
}

Slide 19

Polynomial regression with steepest descent (Golang)

// gradient computes the gradient for one parameter (index selects which θ)
func gradient(dataset DataSet, thetas []float64, index int, batchSize int) float64 {
    result := 0.0
    for _, data := range dataset[0:batchSize] {
        result += ((PredictionFunction(data.X, thetas) - data.Y) * math.Pow(data.X, float64(index)))
    }
    return result
}

    θ₀ := θ₀ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i)
    θ₁ := θ₁ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i
    θ₂ := θ₂ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i²
    θ₃ := θ₃ − η Σ_{i=1}^{n} (f_θ(x_i) − y_i) x_i³

Slide 20

Polynomial regression with steepest descent (Golang)

// learning (update parameters)
for i := 0; i < opt.Epoch; i++ {
    // update parameters by gradient descent
    org_thetas := make([]float64, cap(thetas))
    copy(org_thetas, thetas)
    shuffled := dataset.Shuffle()
    for j, _ := range thetas {
        // compute gradient
        gradient := gradient(shuffled, org_thetas, j, batchSize)
        // update parameter
        thetas[j] = org_thetas[j] - (opt.LearingRate * gradient)
    }
}
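The training loop above calls dataset.Shuffle(), which is not shown on the slides. A minimal sketch of what such a method could look like (an assumption, not code copied from the talk's repository), reusing the Data/DataSet types sketched earlier:

import "math/rand"

// Shuffle returns a randomly reordered copy of the data set, so that the
// head of the slice can serve as the (mini-)batch for the next update.
func (d DataSet) Shuffle() DataSet {
    shuffled := make(DataSet, len(d))
    copy(shuffled, d)
    rand.Shuffle(len(shuffled), func(i, j int) {
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    })
    return shuffled
}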

Slide 21

Polynomial regression with steepest descent

Slide 22

Stochastic gradient descent - stochastic gradient descent, SGD -

Slide 23

Problems with steepest descent

Slide 24

Problems with steepest descent
• Every parameter update requires summing the error over the entire training set.
• → When the training set is very large, the computational cost becomes enormous.

    E(θ) = (1/2) Σ_{i=1}^{n} (y_i − f_θ(x_i))²

• Because the whole training set is used, every step reliably follows the gradient downhill.
• → The chance of getting trapped in a local minimum is high.

Slide 25

Stochastic gradient descent - stochastic gradient descent, SGD -

Slide 26

Polynomial regression with SGD
Parameter update rules: instead of the sum of squared errors over all data, each update uses a single randomly selected example.

    θ₀ := θ₀ − η Σ_{i=1}^{1} (f_θ(x_i) − y_i)
    θ₁ := θ₁ − η Σ_{i=1}^{1} (f_θ(x_i) − y_i) x_i
    θ₂ := θ₂ − η Σ_{i=1}^{1} (f_θ(x_i) − y_i) x_i²
    θ₃ := θ₃ − η Σ_{i=1}^{1} (f_θ(x_i) − y_i) x_i³

The sum now runs from i = 1 to 1, i.e. over a single example. Rather than always training on a fixed first example, the training set is shuffled every time and the first element after shuffling is used for the update.

Slide 27

Polynomial regression with stochastic gradient descent (Golang)

// GD: batchSize=len(dataset), SGD: batchSize=1
batchSize := len(dataset)
if opt.Algorithm == "sgd" {
    if opt.BatchSize == -1 {
        batchSize = 1
    }
}

Slide 28

Stochastic gradient descent: convergence by learning rate

Slide 29

Mini-batch gradient descent - mini-batch gradient descent, mini-batch SGD -

Slide 30

mini-batch SGD
Choose a batch size B with 1 ≦ B ≦ len(training set) and update the parameters per batch, aiming for the best of both steepest descent and stochastic gradient descent. Stochastic gradient descent can be seen as the special case B = 1.

    θ₀ := θ₀ − η Σ_{i=1}^{B} (f_θ(x_i) − y_i)
    θ₁ := θ₁ − η Σ_{i=1}^{B} (f_θ(x_i) − y_i) x_i
    θ₂ := θ₂ − η Σ_{i=1}^{B} (f_θ(x_i) − y_i) x_i²
    θ₃ := θ₃ − η Σ_{i=1}^{B} (f_θ(x_i) − y_i) x_i³

The sum runs from i = 1 up to the mini-batch size: on every update the data is shuffled and the first B examples are used to update the parameters.
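Continuing the batch-size selection from the earlier Golang slide, a hedged sketch of how a mini-batch size B could be wired in; the clamping logic and the meaning of a positive opt.BatchSize are assumptions, not behavior taken from the talk's repository:

// GD: batchSize = len(dataset), SGD: batchSize = 1,
// mini-batch SGD: 1 < batchSize < len(dataset)
batchSize := len(dataset)
if opt.Algorithm == "sgd" {
    batchSize = 1
    if opt.BatchSize > 0 && opt.BatchSize <= len(dataset) {
        // use the requested mini-batch size B
        batchSize = opt.BatchSize
    }
}
// Each epoch then shuffles the data and gradient() sums over the
// first batchSize examples, exactly as in the update loop shown earlier.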

Slide 31

Optimizing gradient descent - optimization -

Slide 32

Momentum

Slide 33

Speeding up the convergence of gradient descent

Slide 34

Momentum
Incorporating the idea of momentum (inertia) into the parameter update speeds up convergence. (The slide compares SGD without momentum and SGD with momentum.)

    v_k = γ v_{k-1} + η ∇E(θ)
    θ_k = θ_{k-1} − v_k

Here γ is the momentum coefficient, η ∇E(θ) is the gradient term, and v_{k-1} carries the accumulated momentum. The gradient steps taken so far are accumulated as inertia: moves in the same direction build the momentum up, while moves that change direction make it decay.

Momentum and Learning Rate Adaptation: https://www.willamette.edu/~gorr/classes/cs449/momrate.html

Slide 35

Optimization with Momentum (Golang)

for j, _ := range thetas {
    // compute gradient
    gradient := gradient(shuffled, org_thetas, j, batchSize)
    // Use momentum if momentum option is passed
    velocities[j] = opt.Momentum*velocities[j] - (opt.LearingRate * gradient)
    // update parameter
    thetas[j] = org_thetas[j] + velocities[j]
}

    v_k = γ v_{k-1} + η ∇E(θ)
    θ_k = θ_{k-1} − v_k

(In the code, velocities[j] carries the opposite sign of v_k in the formula, which is why the parameter update adds it.)

Slide 36

Optimization with Momentum: convergence by learning rate

Slide 37

AdaGrad

Slide 38

Adjusting the learning rate automatically

Slide 39

AdaGrad
One of the techniques that adjust the learning rate automatically while the model is trained.

    G_k = G_{k-1} + (∇E(θ_{k-1}))²
    θ_k = θ_{k-1} − (η / √(G_{k-1} + ε)) ∇E(θ_{k-1})

The effective learning rate is the initial learning rate η divided by the accumulated magnitude of the past gradients.

Merits:
• The learning rate is adjusted per parameter.
• Parameters that have changed little are updated in large steps, while parameters that have changed a lot are updated in small steps.

Demerit:
• Because the accumulated gradients sit in the denominator, the learning rate becomes extremely small as training progresses.
• → Set the initial learning rate on the large side.

Slide 40

Optimization with AdaGrad (Golang)

for j, _ := range thetas {
    ~~~~
    // optimize by AdaGrad
    gradients[j] += math.Pow(gradient, 2)
    learningRate := opt.LearingRate / (math.Sqrt(gradients[j] + opt.Epsilon))
    update = -(learningRate * gradient)
    ~~~~
}

    G_k = G_{k-1} + (∇E(θ_{k-1}))²
    θ_k = θ_{k-1} − (η / √(G_{k-1} + ε)) ∇E(θ_{k-1})

Slide 41

Optimization with AdaGrad: convergence by learning rate

Slide 42

AdaDelta

Slide 43

Adjusting the learning rate automatically

Slide 44

AdaDelta
Another technique that adjusts the learning rate automatically while the model is trained.

Avoids the monotonic decay of the learning rate: instead of simply summing all past gradients, the gradients are averaged with exponential decay, so the learning rate is derived from the recent gradients.

No initial learning rate needs to be set: the initial learning rate is replaced by a decaying average of the past parameter updates.

    E[g²]_t = ρ E[g²]_{t-1} + (1 − ρ) g_t²
    Δθ_t = −(√(E[Δθ²]_{t-1} + ε) / √(E[g²]_t + ε)) g_t
    E[Δθ²]_t = ρ E[Δθ²]_{t-1} + (1 − ρ) Δθ_t²
    θ_{t+1} = θ_t + Δθ_t

(ρ denotes the decay rate.)

Slide 45

AdaDelta
Another technique that adjusts the learning rate automatically while the model is trained.

    E[g²]_t = ρ E[g²]_{t-1} + (1 − ρ) g_t²
        (accumulate a decaying average of the gradients)
    Δθ_t = −(√(E[Δθ²]_{t-1} + ε) / √(E[g²]_t + ε)) g_t
        (derive the learning rate from the recent gradients and past updates to obtain the new update value)
    E[Δθ²]_t = ρ E[Δθ²]_{t-1} + (1 − ρ) Δθ_t²
        (accumulate a decaying average of the parameter updates, using the value just computed)
    θ_{t+1} = θ_t + Δθ_t
        (update the parameter)

Slide 46

Optimization with AdaDelta (Golang)

for j, _ := range thetas {
    ~~~~
    // optimize by AdaDelta
    gradients[j] = (opt.DecayRate * gradients[j]) + (1.0 - opt.DecayRate)*math.Pow(gradient, 2)
    update = -(math.Sqrt(updates[j]+opt.Epsilon) / math.Sqrt(gradients[j]+opt.Epsilon)) * gradient
    updates[j] = (opt.DecayRate * updates[j]) + (1.0 - opt.DecayRate)*math.Pow(update, 2)
    ~~~~
}

    E[g²]_t = ρ E[g²]_{t-1} + (1 − ρ) g_t²
    Δθ_t = −(√(E[Δθ²]_{t-1} + ε) / √(E[g²]_t + ε)) g_t
    E[Δθ²]_t = ρ E[Δθ²]_{t-1} + (1 − ρ) Δθ_t²
    θ_{t+1} = θ_t + Δθ_t

Slide 47

Optimization with AdaDelta: convergence by decay rate

Slide 48

Learning-rate trajectories of AdaGrad and AdaDelta

Slide 49

Comparison

Slide 50

Convergence comparison of the gradient descent variants and optimizers

Slide 51

Summary

Slide 52

Summary
• In machine learning, a model is trained by minimizing the error via gradient descent.
• There are many gradient descent variants and optimizers; choosing one suited to your training set still requires trial and error, such as selecting an algorithm and tuning hyperparameters.
• The newest method is not always the best...
• Hyperparameters are not going away...
• Implementing it yourself deepens your understanding. Highly recommended!

Slide 53

Code

Slide 54

Code
• A sample implementation of gradient descent in Go is available here:
• https://github.com/monochromegane/gradient_descent

Usage:

$ go run cmd/gradient_descent/main.go \
    -eta 0.075 \
    -m 3 \
    -epoch 40000 \
    -algorithm sgd \
    -momentum 0.9

Slide 55

The end

Slide 56

Why not come work at Pepabo too? Check out the latest recruitment info → @pb_recruit