LSTMを用いた自然言語処理について (Natural Language Processing with LSTMs)
tkng
January 27, 2016
Technology

Slides on LSTMs presented at the 3rd TokyoCL study group (第3回TokyoCL勉強会).
Transcript
Natural Language Processing with LSTMs — Tokunaga (@tkng), 3rd TokyoCL study group @ Google Japan (グーグル株式会社)
About me • Twitter: @tkng • I like curry
Today's agenda • Recurrent Neural Networks & LSTMs • Recent research using LSTMs • My own experiments with LSTMs
Recurrent Neural Network • The umbrella term for networks that take the elements of a sequence one at a time and update an internal state as they go • An LSTM can be regarded as one kind of RNN (see colah.github.io, "Understanding LSTMs")
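The update rule just described — consume one element at a time while carrying a state forward — can be sketched in a few lines of plain Python. The function and weight names (`rnn_step`, `W_xh`, `W_hh`, `b`) are illustrative, not from the slides:

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One step of a vanilla RNN: h' = tanh(W_xh * x + W_hh * h + b).

    x and h are plain lists of floats; the weight matrices are lists of rows.
    """
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
            + sum(W_hh[i][j] * h[j] for j in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]

# Feed a toy sequence one element at a time, updating the state each step.
W_xh = [[0.5], [-0.3]]          # input-to-hidden weights (2 x 1)
W_hh = [[0.1, 0.0], [0.0, 0.1]] # hidden-to-hidden weights (2 x 2)
b = [0.0, 0.0]
h = [0.0, 0.0]                  # initial state
for x in ([1.0], [0.5], [-1.0]):
    h = rnn_step(x, h, W_xh, W_hh, b)
```

An LSTM replaces this single `tanh` update with gated updates to a separate cell state, but the outer loop — one state update per sequence element — is the same.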
What is the seq2seq model? • A variable-length input is encoded into a fixed-length vector with a recurrent neural network, and the translated output is then decoded from that vector
Translation with the seq2seq model [diagram: the encoder reads "This is a pen EOS" and the decoder emits "これ は ペン です EOS"] • The entire sentence "This is a pen" is encoded into a fixed-length vector!
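The encode-then-decode control flow in the diagram can be sketched as a toy, not a real network: `encode()` folds a variable-length input into one summary, and `decode()` emits tokens until EOS. `TABLE`, `toy_enc_step`, `toy_dec_step`, and `toy_emit` are illustrative stand-ins for a trained model:

```python
def encode(tokens, step, h0):
    """Fold a variable-length token sequence into a single state."""
    h = h0
    for t in tokens:
        h = step(t, h)
    return h

def decode(h, step, emit, eos="EOS", max_len=20):
    """Emit tokens from the state until the EOS symbol is produced."""
    out = []
    while len(out) < max_len:
        tok = emit(h)
        if tok == eos:
            break
        out.append(tok)
        h = step(tok, h)
    return out

# Stand-in "model": the encoder state is just the tuple of tokens seen,
# and the decoder looks the translation up one token at a time.
TABLE = {("This", "is", "a", "pen"): ["これ", "は", "ペン", "です"]}

def toy_enc_step(tok, h):
    return h + (tok,)

def toy_dec_step(tok, h):
    src, i = h
    return (src, i + 1)

def toy_emit(h):
    src, i = h
    out = TABLE[src]
    return out[i] if i < len(out) else "EOS"

summary = encode(["This", "is", "a", "pen"], toy_enc_step, ())
tokens = decode((summary, 0), toy_dec_step, toy_emit)
```

In a real seq2seq model the state is a fixed-length vector and `emit` is a softmax over the vocabulary, but the loop structure is exactly this.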
Effective Approaches to Attention-based Neural Machine Translation (Luong+, 2015) • Machine translation with seq2seq • Proposes a new technique called local attention • Reaches state of the art on several language pairs
A Neural Conversational Model (Vinyals+, 2015) • Built a dialogue system with an LSTM, and it behaved surprisingly plausibly
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing (Kumar+, 2015) • Proposes Dynamic Memory Networks • The model amounts to a combination of recurrent neural networks • State of the art on question answering, part-of-speech tagging, coreference resolution, and sentiment analysis • The striking claim is that almost the same model can solve all of these tasks
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (Xu+, 2015) • Generates descriptive captions for images • CNN + LSTM + attention (http://kelvinxu.github.io/projects/capgen.html)
Semi-supervised Sequence Learning (Dai+, 2015) • Experiments with LSTMs on classification tasks such as sentiment analysis • Uses two pretraining methods, a language model and a sequence autoencoder, and sets a new state of the art on those tasks • A simple method, and quietly impressive
An Empirical Exploration of Recurrent Network Architectures (Jozefowicz+, 2015) • Used Google's computing power to evaluate a huge number of LSTM and GRU variants • Strongly recommends initializing the forget-gate bias of an LSTM to 1
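The forget-gate recommendation from that slide is easy to apply at initialization time. A minimal sketch (the function name and bias layout are illustrative, not any particular library's API): with the forget-gate bias at 1, the gate's sigmoid starts near sigmoid(1) ≈ 0.73, so the cell begins by mostly remembering, which makes long-range dependencies easier to learn.

```python
def init_lstm_biases(hidden_size, forget_bias=1.0):
    """Return per-gate bias vectors for an LSTM layer.

    All gates start at 0 except the forget gate, which starts at
    forget_bias (1.0 per Jozefowicz+, 2015) so the cell initially
    retains its contents rather than forgetting them.
    """
    zeros = lambda: [0.0] * hidden_size
    return {
        "input": zeros(),
        "forget": [forget_bias] * hidden_size,  # the key trick: start at 1
        "cell": zeros(),
        "output": zeros(),
    }
```

Most frameworks expose an equivalent knob (e.g. a forget-bias argument or direct access to the bias vector), so this is usually a one-line change in practice.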
In short, LSTMs are hot right now! • State of the art on classification tasks • Sentence generation with seq2seq • …though why they work so well is still not well understood
From here on: my own experiments
What did I experiment with? • A sentence auto-encoder built on LSTMs • The experiments are admittedly rough in places [diagram: これ は ペン です EOS is encoded and then decoded back to これ は ペン です EOS]
Why run this experiment? • Seq2seq is a fascinating technique, but what goes on inside it is unclear
What I want to know • What kinds of sentences does it tend to get wrong? • Why can it close a bracket it opened earlier? • Why does reversing the input improve results? • Do similar sentences produce similar states? • Why can variable-length output be produced from fixed-length data?
Data • Japanese blog data I had collected personally • Training: 600,000 sentences (33MB) • Test: 35,000 sentences (2MB)
Experimental setup (1) • Vocabulary of 80,000 words or 10,000 characters • Inputs are mapped to 100-dimensional embeddings • One LSTM layer with a 200-dimensional state • Adam is used for optimization • The code at https://github.com/odashi/chainer_examples was used
Experimental setup (2) • Word-level vs. character-level input • Reversed vs. forward input order • Four combinations in total were tested • Word level: 50 epochs, 80,000-word vocabulary • Character level: 100 epochs, 10,000-character vocabulary
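The four conditions above can all be derived from one sentence. This is a sketch of the data preparation only (it is not the actual code from odashi/chainer_examples; `make_pairs` is a hypothetical helper): word-unit vs. character-unit tokenization, each fed forward or reversed, while the auto-encoder target always keeps the original order.

```python
def make_pairs(sentence, words):
    """Build (source, target) pairs for the 4 auto-encoder conditions.

    sentence: the raw string, split per character for the char unit.
    words:    the same sentence pre-segmented into words.
    """
    pairs = {}
    for unit, toks in (("word", words), ("char", list(sentence))):
        for order, src in (("forward", toks), ("reversed", toks[::-1])):
            # Auto-encoder: the target is always the original token order.
            pairs[(unit, order)] = (src, toks)
    return pairs

pairs = make_pairs("これはペンです", ["これ", "は", "ペン", "です"])
```

Only the source side changes between conditions; reversing the target instead (tried later in the deck) would swap which side gets `toks[::-1]`.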
What does learning look like? • trg = しゃくしゃく と ナシ の よう な 食感 に 上品 な 味 。 • [A dozen hypotheses sampled over training: early ones are unrelated word salad (カラー と 出 の よう な 事態 に…), later ones lock in the sentence frame (… と ナシ の よう な 食感 に … な 味 。), and the final hypothesis reproduces the target exactly]
How learning progresses (exact-sentence accuracy) [chart]
How learning progresses (exact-sentence accuracy) [chart]
Accuracy by sentence length [chart]
Forward vs. reversed input order [chart]
What happens if the output is reversed instead…? [chart]
Observations so far • Longer sentences are clearly more error-prone • Feeding the input in reverse order performs better • Reversing the output instead works just as well • Lowering the learning rate partway through training is effective
What if the input is perturbed slightly? Original sentence: 「会社でインフルエンザが大流行している。」 ("Influenza is going around at the office.") • Input: 会社でンフルエンザが大流行している。 (イ deleted) → Output: the sentence is copied but a few characters around 大流行している break • Input: 会社でイフルエンザが大流行している。 (ン deleted) → Output: again a small corruption at the same spot • Input: インフルエンザ → Output: インフルエント
A mysterious collapse on another example • Input: 会社でインフルンエザが大流行している。 (two characters transposed) → Output: 会社でインフル…, after which the rest collapses into gibberish (…ンエレ(麺わ開流だ増嬉。)
A mistake that makes sense • Input: 私は『週刊少年サンデー』を毎週購読しています。 • Output: 私は『週刊少年サンデーをを毎買読しています。 • The closing 』 is dropped, を is doubled, and 購読 comes out as 買読
Observations from perturbing the input • Long sentences sometimes fail • Sometimes only a single character breaks, but sometimes the output collapses wholesale partway through • There are cases where it confuses 週 with another character, or mixes up characters with similar usage (購 → 買)
Future work • More rigorous experiments • A broader hyperparameter search • Two- and three-layer LSTMs • Using dropout • Visualizing the state
Summary • Had LSTMs reconstruct sentences • They can in fact reconstruct them reasonably well • Found that reversing the output works as an alternative to reversing the input
Recent related work • http://arxiv.org/abs/1506.02078 — visualizing LSTMs • http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.3412&rep=rep1&type=pdf — RNNs can parse context-free grammars • http://www.aclweb.org/anthology/P/P15/P15-1107.pdf — experiments with sentence auto-encoders