Slide 1

Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, Greg Corrado and Jeffrey Dean, ICLR 2013
* All figures and tables in these slides are quoted from the paper.
Mamoru Komachi [email protected]
Deep Learning study group @ Tokyo Metropolitan University, 2014/12/01

Slide 2

Word N-gram models have been successful on many tasks, but...
- They have no notion of similarity between word forms: a word form is just an index.
- They are simple and robust: a simple model trained on a large amount of data outperforms a complex model trained on a small amount of data.
- → What should we do when large data is not available (e.g., speech recognition, machine translation)? In such cases, don't we need a more complex model?

Slide 3

Learning word vectors efficiently from large data with a million-word vocabulary
- The method released as word2vec.
- Vector representations that capture multiple degrees of similarity:
  - similarity in meaning (semantic relations)
  - similarity in word endings (syntactic relations)
- Vector operations that preserve linear regularities between words:
  - vector("King") - vector("Man") + vector("Woman") = vector("Queen")
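To make the analogy operation above concrete, here is a minimal sketch using the gensim library (an assumption; the paper itself ships a C tool). The model file name is the commonly distributed Google News vectors and is also an assumption:

```python
# A minimal sketch of the King - Man + Woman analogy, assuming gensim is
# installed and the pretrained Google News vectors have been downloaded.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# most_similar ranks words by cosine similarity to the combined vector
result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # e.g. [('queen', 0.71...)]
```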

Slide 4

Prior work representing words as continuous vectors: NNLM
- Neural network language model (NNLM) (Bengio et al., JMLR 2003): a feedforward NN combining a linear projection layer and a nonlinear hidden layer, learning word vector representations and a statistical language model jointly.
- An NNLM variant (Mikolov et al., ICASSP 2009): first learn word vectors with a single-hidden-layer NN, then train the NNLM on the learned word vectors (the two are trained separately, not jointly).
→ This work follows the latter approach and proposes simple methods for learning word vectors.

Slide 5

Comparison with NNLM methods that learn distributed word representations
- Besides NNLMs, LSA and LDA can also learn continuous word vectors, but prior work has shown NNLMs outperform LSA/LDA, so this work compares only against NNLMs.
  - LDA cannot (naively) be applied to large-scale data.
- Training complexity: O = E × T × Q
  - E: number of training epochs (3-50)
  - T: number of tokens in the training data (up to one billion)
  - Q: per-example cost, defined separately for each model
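As a quick sanity check of the cost formula, a tiny worked example with illustrative numbers drawn from the ranges above (Q = 500 is just a placeholder until the per-model definitions on the next slides):

```python
# Worked example of the training-cost formula O = E * T * Q.
E = 3              # training epochs (range: 3-50)
T = 1_000_000_000  # tokens in the training data (up to one billion)
Q = 500            # per-example cost; model-dependent, defined later

O = E * T * Q
print(f"total training cost ~ {O:.2e} operations")  # ~ 1.50e+12
```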

Slide 6

Parameters and computational cost of the feedforward NNLM
- Bengio et al. (JMLR 2003)
  - Input layer: the N previous words (e.g., N = 10), in 1-of-V coding (V = vocabulary size)
  - Projection layer P: N × D dimensions (500-2000), with a shared projection matrix
  - Hidden layer: H (500-1000 units)
  - Output layer: V dimensions
- Cost per training example: Q = N × D + N × D × H + H × V
  → Representing V as a binary tree reduces the H × V term to H × log2(V)
  → The bottleneck is then the N × D × H term
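A small sketch, with sizes assumed from the ranges on this slide, comparing the three terms of Q and showing why N × D × H dominates once the output layer uses a binary tree:

```python
# Compare the terms of Q = N*D + N*D*H + H*V with illustrative sizes.
import math

N, D, H, V = 10, 100, 500, 1_000_000  # so N*D = 1000, within 500-2000

projection = N * D              # 1,000
hidden = N * D * H              # 500,000  <- the bottleneck
output_full = H * V             # 500,000,000 with a full softmax
output_tree = H * math.log2(V)  # ~9,966 with a binary-tree output

print(projection, hidden, output_full, round(output_tree))
```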

Slide 7

Reducing the computational cost of the feedforward NNLM
- Cost per training example: Q = N × D + N × D × H + H × V
  → Representing V as a binary tree reduces the H × V term to H × log2(V)
  → The bottleneck is then the N × D × H term
- Speed-up techniques:
  - hierarchical softmax
  - avoiding normalized models
- Building a Huffman tree reduces the log2(V) factor to log2(Unigram_perplexity(V))
  → roughly a 2× speedup for a one-million-word vocabulary
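A minimal sketch of the Huffman idea: expected code length under a Huffman tree tracks the unigram entropy, i.e. log2(unigram perplexity), rather than the log2(V) of a balanced tree. The toy frequencies below are made up:

```python
# Build Huffman code lengths and compare the expected depth against the
# unigram entropy and against a balanced binary tree over the vocabulary.
import heapq, math

freqs = {"the": 60, "of": 30, "and": 20, "cat": 5, "zygote": 1}
total = sum(freqs.values())

# Heap entries: (probability, unique id, {word: code length so far}).
heap = [(f / total, i, {w: 0}) for i, (w, f) in enumerate(freqs.items())]
heapq.heapify(heap)
next_id = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)  # merge the two lightest subtrees;
    p2, _, c2 = heapq.heappop(heap)  # every word inside gets one bit deeper
    merged = {w: d + 1 for w, d in {**c1, **c2}.items()}
    heapq.heappush(heap, (p1 + p2, next_id, merged))
    next_id += 1
code_len = heap[0][2]

expected = sum(freqs[w] / total * code_len[w] for w in freqs)
entropy = -sum((f / total) * math.log2(f / total) for f in freqs.values())
print(f"expected Huffman depth:  {expected:.2f} bits")
print(f"log2(unigram perplexity): {entropy:.2f} bits")
print(f"balanced tree depth:      {math.log2(len(freqs)):.2f} bits")
```

Frequent words sit near the root, so the average number of output-layer decisions per training token drops below log2(V).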

Slide 8

RNNLM: a language model that can use past history
- A language model based on a recurrent neural network (recurrent neural net language model)
  - Input layer
  - Projection layer: none
  - Hidden layer: has a recurrent matrix with time-delayed connections
    → short-term memory: the current state is updated from the previous state
  - Output layer
- Cost of the RNN model: Q = H × H + H × V
  → The H × V term can be sped up with a binary tree, so the bottleneck is H × H
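A minimal numpy sketch of one RNNLM step (illustrative sizes, not the paper's implementation), showing where the H × H recurrent term and the H × V output term come from:

```python
# One forward step of a simple recurrent language model.
import numpy as np

V, H = 10_000, 100
rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (H, V))   # input word weights
W_rec = rng.normal(0, 0.1, (H, H))  # recurrent weights: the H*H term
W_out = rng.normal(0, 0.1, (V, H))  # output weights: the H*V term

def step(word_id, h_prev):
    # 1-of-V input reduces to a column lookup; the recurrent matrix
    # folds the previous state into the current one (short-term memory).
    h = np.tanh(W_in[:, word_id] + W_rec @ h_prev)
    y = W_out @ h
    p = np.exp(y - y.max())
    return h, p / p.sum()           # softmax over the vocabulary

h = np.zeros(H)
h, p = step(42, h)
print(p.shape, float(p.sum()))      # (10000,) 1.0
```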

Slide 9

Neural networks can be trained with distributed parallel processing
- DistBelief (Dean et al., NIPS 2012)
  - a large-scale distributed framework
  - runs replicas of the same model in parallel; parameter updates are synchronized through a centralized server
  - mini-batch asynchronous gradient descent with AdaGrad
  - typically 100 or more replicas
- The NNLM and word2vec models here were trained with this framework.
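A single-machine analogy of this training scheme, assuming nothing about Google's actual code: several threads apply AdaGrad-scaled gradient updates to shared parameters without waiting for each other, playing the roles of replicas and parameter server in one process:

```python
# Illustrative sketch of asynchronous mini-batch SGD with AdaGrad on a
# toy objective: each worker pulls its own data shard and updates shared
# parameters without locks, in the spirit of DistBelief replicas.
import threading
import numpy as np

dim, lr, eps = 10, 0.1, 1e-8
theta = np.zeros(dim)   # shared parameters (the "parameter server")
accum = np.zeros(dim)   # shared AdaGrad accumulator
data = np.random.default_rng(0).normal(1.0, 1.0, (2000, dim))

def worker(shard):
    for x in shard:
        grad = theta - x                  # gradient of 0.5 * ||theta - x||^2
        accum[:] += grad ** 2             # AdaGrad: per-dimension step sizes
        theta[:] -= lr * grad / (np.sqrt(accum) + eps)  # applied asynchronously

threads = [threading.Thread(target=worker, args=(data[i::4],))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(theta.round(2))   # drifts toward the data mean, ~1.0 per dimension
```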

Slide 10

Intuition behind the new log-linear models
- The computational bottleneck is the nonlinear hidden layer (H).
- Propose models that are less expressive than a full neural network but can be trained far more efficiently:
  1. Learn continuous word vectors with a simple model → some loss in accuracy, but much cheaper computation
  2. Train an N-gram NNLM on the distributed representations learned in step 1 → exploits the nonlinear expressive power of neural networks

Slide 11

Learning a neural language model from a raw corpus
- The computational bottleneck is the nonlinear hidden layer (H); the proposed models are less expressive than a neural network but can be trained far more efficiently.

Slide 12

Continuous Bag-of-Words (CBOW) model
- Predicts the current word from the surrounding words (including future context).
- Similar to the NNLM, but there is no hidden layer and the projection layer is shared across all words (word order is not taken into account). → not "Deep"
- Unlike ordinary BoW, it uses distributed representations.
- Cost: Q = N × D + D × log2(V)
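A minimal numpy sketch of the CBOW forward pass: context vectors are averaged (so order is lost, as the slide says) and scored against the vocabulary. All sizes and word ids are illustrative:

```python
# CBOW forward pass: average the context word vectors, score all words.
import numpy as np

V, D = 10_000, 300
rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # input word vectors (shared projection)
W_out = rng.normal(0, 0.1, (V, D))  # output word vectors

def cbow_scores(context_ids):
    h = W_in[context_ids].mean(axis=0)  # averaging discards word order
    return W_out @ h                    # V scores; log2(V) with a tree output

context = [17, 256, 893, 4021]          # ids of the words around the target
print(int(cbow_scores(context).argmax()))  # the model's guess for the target
```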

Slide 13

Continuous Skip-gram model
- Similar to CBOW, but instead of predicting the current word from its context, it predicts the surrounding words from the current word (maximizing classification accuracy). → also not "Deep"
- Widening the context length C improves the quality of the word vectors but increases the cost, and the farther a word is, the less related it is to the current word, so context words are downsampled according to distance (see the sketch below).
  → perhaps this captures a mix of semantic and syntactic information?
- Cost: Q = C × (D + D × log2(V))
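A minimal sketch of the distance-based downsampling: an effective window R is drawn uniformly from 1..C at each position, so nearby words become training targets more often than distant ones. The sample sentence is made up:

```python
# Generate skip-gram (input, target) pairs with a random effective window.
import random

def skipgram_pairs(tokens, C=5, seed=0):
    rng = random.Random(seed)
    for i, center in enumerate(tokens):
        R = rng.randint(1, C)  # words at distance d are kept with prob ~ (C-d+1)/C
        for j in range(max(0, i - R), min(len(tokens), i + R + 1)):
            if j != i:
                yield center, tokens[j]  # predict each context word from center

sentence = "the quick brown fox jumps over the lazy dog".split()
print(list(skipgram_pairs(sentence))[:6])
```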

Slide 14

Skip-gram is effective in experiments on semantic and syntactic word relations

Slide 15

Output of the 300-dimensional Skip-gram model trained on 783M words
(the full Google News corpus contains about 6B tokens)

Slide 16

Summary: new ways to build word vectors → CBOW and Skip-gram
- Proposed two neural network language models, CBOW and Skip-gram:
  - simpler architectures than existing neural network language models
  - trainable on very large corpora via distributed parallel computation (no need to restrict the vocabulary size)
- On SemEval-2012 Task 2 (predicting semantic and syntactic word relations), they substantially outperform other publicly available word vectors.