LSTMを用いた自然言語処理について (Natural Language Processing Using LSTMs)
tkng
January 27, 2016
Technology
Slides on LSTMs presented at the 3rd TokyoCL study group.
Transcript
Natural Language Processing Using LSTMs — @tkng — TokyoCL Study Group #3, held at Google
About me • Twitter: @tkng • I like curry
Today's agenda • Recurrent Neural Networks & LSTMs • Recent research using LSTMs • My own experiments with LSTMs
Recurrent Neural Network • The general term for networks that read the elements of a sequence one at a time, updating an internal state as they go • An LSTM can be viewed as one kind of RNN • http://colah.github.io/posts/2015-08-Understanding-LSTMs/
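The state-update idea on this slide can be sketched in a few lines; the sizes, random weights, and tanh nonlinearity below are illustrative assumptions, not the model from the talk:

```python
import numpy as np

# Minimal vanilla-RNN sketch: consume a sequence one element at a time,
# folding each input into a single hidden state. Sizes are hypothetical.
rng = np.random.default_rng(0)
input_dim, state_dim = 4, 3
W_x = rng.normal(size=(state_dim, input_dim)) * 0.1  # input-to-state weights
W_h = rng.normal(size=(state_dim, state_dim)) * 0.1  # state-to-state weights
b = np.zeros(state_dim)

def rnn_step(h, x):
    """One RNN step: new state from previous state and current input."""
    return np.tanh(W_h @ h + W_x @ x + b)

h = np.zeros(state_dim)
for x in rng.normal(size=(5, input_dim)):  # toy sequence of 5 vectors
    h = rnn_step(h, x)

print(h.shape)  # the final state summarizes the whole sequence
```

An LSTM replaces `rnn_step` with a gated update but keeps this same read-one-element, update-one-state loop.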
What is a seq2seq model? • Variable-length input is encoded into a fixed-length vector by a Recurrent Neural Network, and the translated output is then decoded from that vector
Translation with a seq2seq model
[Diagram, shown twice: the encoder reads "This is a pen" EOS, the decoder emits これ は ペン です EOS — the whole of "This is a pen" is encoded into a single fixed-length vector!]
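The point of the diagram — every input sentence, whatever its length, ends up as one vector of the same size — can be sketched with a toy numpy encoder (all weights and dimensions here are assumptions):

```python
import numpy as np

# Toy seq2seq encoder: variable-length input in, fixed-length vector out.
rng = np.random.default_rng(0)
emb, state = 8, 6
W_h = rng.normal(size=(state, state)) * 0.1
W_x = rng.normal(size=(state, emb)) * 0.1

def encode(xs):
    """Fold a sequence of embedding vectors into one state vector."""
    h = np.zeros(state)
    for x in xs:                 # variable-length input...
        h = np.tanh(W_h @ h + W_x @ x)
    return h                     # ...fixed-length summary vector

short = rng.normal(size=(3, emb))  # a 3-token sentence
long = rng.normal(size=(9, emb))   # a 9-token sentence
v1, v2 = encode(short), encode(long)
print(v1.shape, v2.shape)  # both are (6,) regardless of sentence length
```

A decoder then unrolls that single vector token by token until it emits EOS, which is how a fixed-size code can yield variable-length output.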
Effective Approaches to Attention-based Neural Machine Translation (Luong+, 2015)
• Machine translation with seq2seq • Proposes a new technique called local attention • Reaches state of the art on several language pairs
A Neural Conversational Model (Vinyals+, 2015) • Built a dialogue system with LSTMs, and it produced surprisingly convincing conversations
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
(Kumar+, 2015) • Proposes Dynamic Memory Networks • The model amounts to a combination of Recurrent Neural Networks • State of the art on question answering, part-of-speech tagging, coreference resolution, and sentiment analysis • The striking claim: essentially the same model can solve all of these tasks
Show, Attend and Tell: Neural Image Caption Generation with Visual
Attention (Xu+, 2015) • Generates captions for images • CNN + LSTM + Attention • http://kelvinxu.github.io/projects/capgen.html
Semi-supervised Sequence Learning (Dai+, 2015) • Runs LSTMs on classification tasks such as sentiment analysis • Uses two pretraining methods, a Language Model and a Sequence Autoencoder, and sets new state of the art on those tasks • Simple methods, yet remarkably effective
An Empirical Exploration of Recurrent Network Architectures (Jozefowicz+, 2015)
• Threw Google's compute power at the problem, evaluating a huge number of automatically generated LSTM/GRU variants • Strongly recommends setting the forget gate bias to 1 when initializing an LSTM
In short: LSTMs are hot right now! • They reach state of the art on classification tasks • With seq2seq they can generate sentences • …though why any of this works is still not well understood
From here on: my own experiments
What did I experiment with? • A sentence auto-encoder built from LSTMs • The experiments are fairly rough
[Diagram: the model reads これ は ペン です EOS and is trained to output これ は ペン です EOS, i.e. to reproduce its own input]
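The training signal of such an auto-encoder is simply "reproduce the input", so no labels are needed. A sketch of how source/target pairs might be built (the helper name, EOS handling, and reversal option are assumptions, not the talk's actual code):

```python
# Hypothetical sketch: a sentence auto-encoder reuses the seq2seq setup
# with the decoder target equal to the source sentence itself.
def autoencoder_pair(tokens, reverse_input=False):
    """Build one (encoder input, decoder target) pair from a token list."""
    src = list(reversed(tokens)) if reverse_input else list(tokens)
    tgt = list(tokens) + ["EOS"]  # decoder must reproduce the sentence, then stop
    return src, tgt

src, tgt = autoencoder_pair(["これ", "は", "ペン", "です"], reverse_input=True)
print(src)  # ['です', 'ペン', 'は', 'これ']
print(tgt)  # ['これ', 'は', 'ペン', 'です', 'EOS']
```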
Why run this experiment? • Seq2seq is a fascinating technique, but what goes on inside it is poorly understood
What I want to know • What kinds of sentences does it get wrong? • Why can it close a bracket that was opened earlier? • Why does reversing the input improve results? • Do similar sentences produce similar internal states? • Why can a fixed-length vector produce variable-length output?
Data • Japanese blog data I had collected personally • Training: 600k sentences (33MB) • Test: 35k sentences (2MB)
Experimental setup (1) • Vocabulary (words or characters): 80,000 or 10,000 • Input mapped to 100-dimensional embeddings • 1 LSTM layer with a 200-dimensional state • Adam for optimization • Code: https://github.com/odashi/chainer_examples
Experimental setup (2) • Word-level vs character-level input • Input order reversed vs as-is • 4 patterns in total • Word-level: 50 epochs, vocabulary of 80,000 words • Character-level: 100 epochs, vocabulary of 10,000 characters
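The four input variants can be sketched as a preprocessing step; the whitespace tokenizer below is a stand-in for whatever Japanese segmentation was actually used:

```python
# Sketch of the 2x2 experimental grid: (word vs character) x (forward vs reversed).
def variants(sentence):
    words = sentence.split()                  # word-level tokens (toy tokenizer)
    chars = list(sentence.replace(" ", ""))   # character-level tokens
    return {
        "word_forward": words,
        "word_reversed": words[::-1],
        "char_forward": chars,
        "char_reversed": chars[::-1],
    }

v = variants("this is a pen")
print(v["word_reversed"])  # ['pen', 'a', 'is', 'this']
```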
What does learning look like? • trg = しゃく しゃく と ナシ の よう な 食感 に 上品 な 味 。 • Hypotheses from successive checkpoints start out as unrelated word salad, then converge step by step until the final checkpoints reproduce the target sentence exactly
[Two chart slides: how learning progresses (sentence-level reconstruction accuracy over training)]
[Chart: reconstruction accuracy as a function of sentence length]
[Chart: forward vs reversed input order]
What if we reverse the output instead…? [Chart]
Observations so far • Long sentences are clearly error-prone • Reversed input performs better than forward input • Reversing the output instead works just as well • Lowering the learning rate partway through training is effective
What if we perturb the input slightly? Starting from 「会社でインフルエンザが大流行している。」 ("Influenza is going around at the office."):
• input: 会社でンフルエンザが大流行している。 (イ dropped) → the output breaks down around the dropped character
• input: 会社でイフルエンザが大流行している。 (ン dropped) → the output again breaks near the perturbation
• input: インフルエンザ → output: インフルエント
A mysterious collapse on another example • input: 会社でインフルンエザが大流行している。 (ン and エ swapped) → the output follows the input up to the swap (会社でインフルンエ…) and then degenerates into gibberish for the rest of the sentence
A mistake that makes sense • input: 私は『週刊少年サンデー』を毎週購読しています。 ("I subscribe to Weekly Shōnen Sunday every week.") → output: 私は『週刊少年サンデーをを毎買読しています。 (the closing bracket is lost, and 購 is replaced by the similar-use 買)
Observations from perturbing the input • Long sentences sometimes fail • Sometimes only a single character breaks, but sometimes a wholesale collapse begins partway through • The model swaps in characters with similar usage patterns (e.g. 購読 → 買読)
Future work • More careful, systematic experiments • A wider hyperparameter search • Stacking 2-3 LSTM layers • Using dropout • Visualizing the internal state
Summary • Had LSTMs reconstruct sentences from their own encodings • In practice they reconstruct reasonably well • Found that reversing the output works as an alternative to reversing the input
Recent related work
• http://arxiv.org/abs/1506.02078 — visualizing LSTMs
• http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.3412&rep=rep1&type=pdf — RNNs can parse context-free grammars
• http://www.aclweb.org/anthology/P/P15/P15-1107.pdf — sentence auto-encoder experiments