Slide 1

Papers on the Convergence of GANs
Yohei KIKUTA @yohei_kikuta, 20180127
NIPS2017 paper reading meetup @cookpad

Slide 2

Self-introduction
・Name: Yohei Kikuta @yohei_kikuta
・Affiliation: Cookpad Inc., Research & Development Department
・Title: Research Engineer, Ph.D. (Science)
・Specialty: image analysis
・Favorites: grilled gyoza, sushi, Dr Pepper

Slide 3

I attended NIPS2017

Slide 4

I attended NIPS2017

Slide 5

I attended NIPS2017

Slide 6

I attended NIPS2017

Slide 7

I will talk about the convergence properties of GANs.
When referring to concrete models, I will only deal with the original GAN and WGAN.

Slide 8

References

Slide 9

[Papers]
• Gradient descent GAN optimization is locally stable: https://arxiv.org/abs/1706.04156
• The Numerics of GANs: https://arxiv.org/abs/1705.10461
• Approximation and Convergence Properties of Generative Adversarial Learning: https://arxiv.org/abs/1705.08991
• Generative Adversarial Networks: https://arxiv.org/abs/1406.2661
• Wasserstein GAN: https://arxiv.org/abs/1701.07875
• Hilbert space embeddings and metrics on probability measures: https://arxiv.org/abs/0907.5309
• Equilibrium points in n-person games: http://www.pnas.org/content/36/1/48.full

Slide 10

Introduction

Slide 11

The GAN boom
A new paradigm that trains a generator and a discriminator through an adversarial structure.
Ref: https://scholar.google.co.jp/ on 20180125
Follow-up papers are also gaining momentum:
Unsupervised representation learning with deep convolutional generative adversarial networks https://arxiv.org/abs/1511.06434 [991 citations]
Improved techniques for training GANs https://arxiv.org/abs/1606.03498 [477 citations]
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks https://arxiv.org/abs/1506.05751 [429 citations]
Wasserstein GAN https://arxiv.org/abs/1701.07875 [403 citations]

Slide 12

Directions of GAN research
・Searching for new models that give better generators (and discriminators)
  DCGAN (https://arxiv.org/abs/1511.06434), LAPGAN (https://arxiv.org/abs/1506.05751), …
・GANs as representation learning
  cGAN (https://arxiv.org/abs/1411.1784), InfoGAN (https://arxiv.org/abs/1606.03657), …
・GANs in applications
  cycleGAN (https://arxiv.org/abs/1703.10593), SRGAN (https://arxiv.org/abs/1609.04802), …
・Theoretical analysis, functional analysis in particular
  WGAN (https://arxiv.org/abs/1701.07875), neural net distance (https://arxiv.org/abs/1703.00573), …
・etc.

Slide 13

Why GANs are hard
・Training them well is difficult (sensitivity to hyperparameters, the mode collapse problem, …)
・The existence of equilibrium points and convergence are still largely unexplained theoretically
・Model evaluation (a mathematical expression of human perception)
・How to use them in applications (something controllable, beyond "interesting" or "looks good at a glance")
・etc.
⇒ Today: recent analyses of the existence of equilibrium points and of the behavior around them

Slide 14

GAN as a two-player zero-sum game

Slide 15

The GAN objective, original form
Generalize it by focusing on the following points:
・−log(x) and −log(1 − x) are convex
・reinterpret sampling noise and passing it through the generator as sampling from the generator's distribution
・reinterpret the minimization as over θ and the maximization as over a function space
Here g1 and g2 are convex functions.
Absorbing g1 and g2 into f and generalizing the minimization to probability distributions gives the form below.
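For reference, the original minimax objective from the GAN paper cited in the references (the generalized form on the slide is an image and is not reproduced here):

```latex
\min_{G}\,\max_{D}\; V(D,G)
  = \mathbb{E}_{x\sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z\sim p_{z}}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr]
```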

Slide 16

The zero-sum game of a GAN
In game-theoretic terms, a GAN is a two-player game with continuous strategies.
"Zero-sum" means that the players' costs always sum to zero.
The definition is that the utility functions of the generator and the discriminator satisfy the relation below.
The discriminator's utility is the original objective with a negative sign (so that it is minimized),
and the generator's utility is therefore its negative.
Note that here we only look at the terms involving the generator; it suffices to solve this.
※ Not every GAN variant fits into the zero-sum framework
(in fact, even the original paper uses a sign-flipped generator loss for the sake of convergence).
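Schematically, in my own notation (not taken from the slide): writing V(φ, θ) for the original objective and treating utilities as costs to be minimized,

```latex
u_{D}(\phi,\theta) = -\,V(\phi,\theta), \qquad
u_{G}(\phi,\theta) = V(\phi,\theta), \qquad
u_{D}(\phi,\theta) + u_{G}(\phi,\theta) = 0 .
```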

Slide 17

Equilibrium points in game theory
Let us try to understand the Nash equilibrium.
Ref: http://www.pnas.org/content/36/1/48.full

Slide 18

Equilibrium points in game theory
Let us try to understand the Nash equilibrium.
Ref: http://www.pnas.org/content/36/1/48.full
(Annotations from the figure: convex, closed, compact strategy sets; Kakutani's fixed-point theorem; a player moves to a different strategy when its expected utility increases.)
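For reference, the definition behind these annotations, in standard notation (not shown as text on the slide): a strategy profile s* is an equilibrium point when no player can increase its expected utility by deviating unilaterally,

```latex
u_i\bigl(s^{*}\bigr) \;\ge\; u_i\bigl(s_i,\, s^{*}_{-i}\bigr)
\qquad \text{for every player } i \text{ and every strategy } s_i .
```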

Slide 19

In GAN terms?
player ↔ the two players: generator and discriminator
strategy ↔ choosing one set of generator / discriminator parameters
payoff ↔ the objective function (for the generator and the discriminator respectively)
mapping function ↔ updating the pair of parameters through training
equilibrium point ↔ a point at which training no longer updates the parameters
※ Note that for GANs the existence of an equilibrium point is not guaranteed in general.
※ Even if one exists, there is no guarantee that stochastic gradient methods can reach it.
⇒ We want to know whether GAN equilibrium points can exist and whether we can converge to a solution.

Slide 20

Existence of equilibrium points and convergence
(all proofs are omitted)

Slide 21

A general definition covering the GAN variants
Start from a general expression.
Viewed differently, interpret it as how far the target µ is from the current ν.
From this, define the adversarial divergence.
This τ is the basic object of the analysis.
By defining f and the discriminator class appropriately, the various GANs are recovered.
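The definition referred to here, as given in the adversarial-divergence paper in the references (https://arxiv.org/abs/1705.08991), to the best of my reading:

```latex
\tau(\mu \,\|\, \nu) \;=\; \sup_{f \in \mathcal{F}}\;
\mathbb{E}_{x \sim \mu,\; y \sim \nu}\bigl[\, f(x, y) \,\bigr],
```

where the function class F encodes the choice of discriminator.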

Slide 22

Recovering the GAN variants
(The table on this slide lists, for each variant, the choice of f and the set of discriminators.)
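As an illustration, my reconstruction for the two models this talk covers, under the definition above (the exact table on the slide is an image):

```latex
\text{original GAN: } f(x,y) = \log u(x) + \log\bigl(1-u(y)\bigr), \quad u:\mathcal{X}\to(0,1)
\qquad
\text{WGAN: } f(x,y) = u(x) - u(y), \quad u \text{ 1-Lipschitz}
```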

Slide 23

Generalized moment matching
Consider the distributions that get as close as possible to the target distribution µ*.
Ideally this is µ* itself (strict adversarial divergence).
The GANs we actually use have a restricted discriminator class, and we want to know the effect of that.
→ Naively one expects OPT to contain elements other than µ*. What actually happens?
Relaxing the condition, generalized moment matching pins down the µ that are close to the target:
m is the part used for moment matching and r is the remainder.
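Schematically, matching the moments given by m means the following (a generic statement of moment matching, not the paper's exact condition, which works with the decomposition into m and r):

```latex
\mathbb{E}_{x\sim\mu}\bigl[m(x)\bigr] \;=\; \mathbb{E}_{x\sim\mu^{*}}\bigl[m(x)\bigr]
\qquad \text{for every moment function } m \text{ in the chosen family.}
```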

Slide 24

A theorem on the inclusion relations between the solution sets
Mind the assumptions (they constrain the realizable function class):
・the expectation of r has an infimum
・there exists a parameter that attains this infimum of the expectation of r, and at which m is non-negative

Slide 25

A theorem on the consistency of the solutions

Slide 26

A theorem on convergence
It states that OPT is not the empty set,
and furthermore that the sequence µ_n converges weakly to a solution in OPT.
Weak convergence is the notion of convergence a GAN needs.
※ Note that this does not guarantee convergence of the training procedure.
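The standard definition of the weak convergence mentioned above (shown on the slide only as a formula image):

```latex
\mu_n \xrightarrow{\,w\,} \mu
\;\iff\;
\int f \, d\mu_n \;\longrightarrow\; \int f \, d\mu
\quad \text{for every bounded continuous function } f .
```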

Slide 27

Relative strength of the notions of convergence
The further to the left, the "stronger"; notions in the same box are equivalent.
In this sense WGAN corresponds to the smallest (weakest) class of convergence.

Slide 28

Convergence near the equilibrium point

Slide 29

Similarities and differences between the two papers
Both assume the existence of an equilibrium point and discuss the behavior around it:
  1. Gradient descent GAN optimization is locally stable
  2. The Numerics of GANs
The two make quite similar claims:
  ・formulate the updates near the equilibrium as an ordinary differential equation (whose discretization is a gradient method)
  ・focus on the gradient flow and relate it to the eigenvalues of the Jacobian
  ・introduce a double backprop term as a regularizer to prevent eigenvalues with zero real part
There are differences, so be careful when reading them seriously:
  ・the range of the discriminator is (-∞, +∞) in 1. and [0, +∞) in 2.
  ・the main characters are the parameters of D and G in 1., and the zero-sum utility functions in 2.
⇒ I personally prefer 2., so I will mainly present that one.

Slide 30

The cast of characters
As utilities in the two-player zero-sum game, consider f(φ, θ) and g(φ, θ),
where φ and θ are the discriminator and generator parameters respectively.
A Nash equilibrium is a point around which the conditions below hold in a neighborhood.
We consider updating the solution with the Euler method, and define the gradient vector field and its Jacobian as below;
note that in v' the zero-sum condition f = -g has been used.
Training is carried out with Simultaneous Gradient Ascent (SimGA).
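The objects referred to above, reconstructed in standard notation (the slide shows them as images; this follows The Numerics of GANs up to sign conventions):

```latex
v(\phi,\theta) =
\begin{pmatrix} \nabla_{\phi} f(\phi,\theta) \\ \nabla_{\theta} g(\phi,\theta) \end{pmatrix}
\overset{g=-f}{=}
\begin{pmatrix} \nabla_{\phi} f(\phi,\theta) \\ -\,\nabla_{\theta} f(\phi,\theta) \end{pmatrix},
\qquad
v'(\phi,\theta) =
\begin{pmatrix}
\nabla^{2}_{\phi\phi} f & \nabla^{2}_{\phi\theta} f \\
-\,\nabla^{2}_{\theta\phi} f & -\,\nabla^{2}_{\theta\theta} f
\end{pmatrix},
```

and the SimGA update with step size h is

```latex
\begin{pmatrix}\phi_{k+1}\\ \theta_{k+1}\end{pmatrix}
=
\begin{pmatrix}\phi_{k}\\ \theta_{k}\end{pmatrix}
+ h\, v(\phi_{k},\theta_{k}) .
```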

Slide 31

A basic lemma on the eigenvalues
Lemma 1 requires f to be concave in θ and convex in φ; the proof is easy.
Since we carry out an eigenvalue analysis later, the important consequence is negative (semi-)definiteness.
Note that Corollary 2 holds only for zero-sum games;
GANs are not necessarily described within the zero-sum framework, so in that sense the result is limited.

Slide 32

Behavior near the equilibrium point
Convergence of the iterates near the equilibrium.
Property 2. is not intuitively obvious, but it can be understood by considering F(x) = x + h G(x) with h > 0:
since F'(x) = I + h G'(x), the eigenvalues of this Jacobian at the equilibrium are 1 + h λ, with λ the eigenvalues of G'.
Matching this to the gradient vector field above, x → (φ, θ) and G(x) → v(φ, θ).
Note also that this h is exactly the SimGA step size, and push forward the discussion of {Jacobian eigenvalues, h, convergence}.
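The standard fixed-point criterion behind this (not spelled out as text on the slide): a sufficient condition for local convergence is that all eigenvalues of F'(x̄) lie strictly inside the unit circle,

```latex
x_{k+1} = x_k + h\, v(x_k) \text{ converges locally to } \bar{x}
\quad \text{if} \quad
|1 + h\lambda| < 1 \;\text{ for every eigenvalue } \lambda \text{ of } v'(\bar{x}) .
```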

Slide 33

Behavior near the equilibrium point
A lemma and a corollary about {Jacobian eigenvalues, h, convergence}.
By Eq. (10), h has to become small when the eigenvalues are large in magnitude or when the imaginary part is larger than the real part.
The step size of the parameter updates then becomes small, so training takes a long time.
(Figure: the eigenvalue space of v'; the equilibrium corresponds to the point (1, 0); the marked points demand a small h → a long time is needed to reach the equilibrium.)
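Concretely, for an eigenvalue λ = -a + bi with a > 0, the requirement |1 + hλ| < 1 works out to (my computation, which matches the qualitative statement above):

```latex
|1 + h\lambda| < 1
\;\Longleftrightarrow\;
h \;<\; \frac{2a}{a^{2}+b^{2}} \;=\; \frac{2\,|\mathrm{Re}\,\lambda|}{|\lambda|^{2}} ,
```

so eigenvalues with a large modulus, or with an imaginary part that dominates the real part, force a small step size h.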

Slide 34

Behavior near the equilibrium point
To improve convergence, we want to keep h from becoming too small:
  ・we do not want to change the behavior at the equilibrium itself
  ・on top of that, we want to move the real parts of the Jacobian eigenvalues in the negative direction
Modify the utility functions as shown on the slide.
Optimization under this modified utility is called consensus optimization.
A straightforward calculation gives the quantity that determines the admissible size of h, and it can be tuned with γ; see the numerical sketch below.
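A minimal numerical sketch (my own toy example, not the paper's code): on the bilinear zero-sum game f(φ, θ) = φ·θ the Jacobian of v has purely imaginary eigenvalues, so plain SimGA spirals outward, while adding the consensus correction -γ∇(½‖v‖²) pushes the eigenvalues into the left half-plane and restores convergence:

```python
# Toy comparison of SimGA and consensus optimization on f(phi, theta) = phi * theta.
# The gradient vector field is v = (df/dphi, -df/dtheta) = (theta, -phi),
# whose Jacobian has eigenvalues +-i: the hard case discussed above.
import numpy as np

def v(x):
    phi, theta = x
    return np.array([theta, -phi])            # SimGA vector field

def v_consensus(x, gamma=0.5):
    # Consensus optimization: w = v - gamma * grad(0.5 * ||v||^2).
    # For this toy game 0.5 * ||v||^2 = 0.5 * (phi^2 + theta^2),
    # so the correction is simply -gamma * (phi, theta).
    phi, theta = x
    return v(x) - gamma * np.array([phi, theta])

def run(field, steps=500, h=0.1):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        x = x + h * field(x)                  # explicit Euler / SimGA step
    return np.linalg.norm(x)                  # distance from the equilibrium (0, 0)

print("SimGA    :", run(v))                   # spirals away from the equilibrium
print("consensus:", run(v_consensus))         # contracts toward the equilibrium
```

In the general (non-toy) case the correction term ∇(½‖v‖²) is exactly the double backprop term mentioned earlier, since computing it requires differentiating through the gradients.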

Slide 35

Experiments
An experiment on mode collapse.

Slide 36

Experiments
Training loss and inception score during training.
There is stability throughout training, which may hint at some kind of global structure.

Slide 37

Summary

Slide 38

Summary
• GANs are hugely popular, but our understanding of the existence of solutions and of convergence is still insufficient
  One of the big problems is that training is difficult and unstable
  Recently, however, various systematic analyses (under assumptions) have become possible
• The theory is being put in order using functional analysis
  The adversarial divergence describes the various proposed methods in a unified way
  Existence and convergence of solutions that agree up to a moment matching effect have been proved
• Convergence near the equilibrium point has been shown
  In particular, convergence is proved for setups that can be viewed as a two-player zero-sum game
  The flow of the gradient vector field (and the Jacobian eigenvalues that govern it) turns out to be the key
  A double backprop term is effective as a regularizer that corrects the gradient vector field

Slide 39

Future directions

Slide 40

Topics that look interesting
• Analysis of global solutions
  More advanced results on whether solutions can exist
  Could we do more, e.g. geometric analysis? (the assumptions would probably become strong)
• Is what a Nash equilibrium gives us actually a good "solution"?
  In the game-theoretic sense it is not necessarily socially desirable
  Perhaps extend from non-cooperative to cooperative games
  I hope our understanding of which objective functions are preferable keeps deepening
• Refinement of practical methodology
  In the end, what is the best thing to do? (the theoretical analyses rely on strong assumptions and are a bit weak for practice)
  I want training to be simpler and more stable (we are not there yet)

Slide 41

[Advertisement] We will hold an NIPS report session
https://www.ai-gakkai.or.jp/no74_jsai_seminar/