EMNLP2015読み会:Effective Approaches to Attention-based Neural Machine Translation
tkng
October 24, 2015
Transcript
Effective Approaches to Attention-based Neural Machine Translation
Authors: Minh-Thang Luong, Hieu Pham, Christopher D. Manning
Presenter: Tokunaga
All figures are quoted from the paper. EMNLP reading group.
Self-introduction: Tokunaga
• Twitter ID: @tkng
• Doing NLP at SmartNews, Inc.
Today's paper
• Effective Approaches to Attention-based Neural Machine Translation
• An extension of the seq2seq family of methods that started becoming popular around last year
What is a seq2seq model?
• Also called an encoder/decoder model
• Encodes the source sentence into a fixed-length vector, then decodes the target sentence from it
• Handling variable-length data is hard, so the idea is to deliberately force it into a fixed length
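To make the "variable length in, fixed length out" idea concrete, here is a minimal numpy sketch, not the paper's model: a plain RNN folds a sentence of any length into one fixed-size vector. All names and sizes (`H`, `V`, the weight matrices) are illustrative.

```python
import numpy as np

# Illustrative sketch of fixed-length encoding with a simple RNN.
# Sizes and weights are made up; this is not the paper's architecture.
rng = np.random.default_rng(0)
H, V = 8, 10                          # hidden size, vocabulary size
W_xh = rng.normal(size=(H, V)) * 0.1  # input-to-hidden weights
W_hh = rng.normal(size=(H, H)) * 0.1  # hidden-to-hidden weights

def encode(token_ids):
    """Fold a token sequence of any length into one fixed-size vector."""
    h = np.zeros(H)
    for t in token_ids:
        x = np.eye(V)[t]              # one-hot input vector
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

short = encode([1, 2])                # 2-word "sentence"
long_ = encode([1, 2, 3, 4, 5, 6])    # 6-word "sentence"
# Both map to the same fixed-size representation.
assert short.shape == long_.shape == (H,)
```

The decoder would then run the same kind of recurrence in reverse, conditioned on this single vector, which is exactly why long sentences are hard: everything must squeeze through one fixed-size bottleneck.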
How do we encode into a fixed length?
• Use a recurrent neural network
• http://colah.github.io/posts/2015-08-Understanding-LSTMs/
• http://kaishengtai.github.io/static/slides/treelstm-acl2015.pdf
• LSTM = a kind of recurrent neural network
Translation with a seq2seq model
The road to seq2seq (1)
• Recurrent Continuous Translation Models (EMNLP 2013)
• Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation (EMNLP 2014)
The road to seq2seq (2)
• Sequence to Sequence Learning with Neural Networks (NIPS 2014)
• Experiments showed that a comparatively simple stacked LSTM achieves good performance
• Incorporates three tricks: beam search, reversed input order, and ensembling
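Of the three tricks, input reversal is the easiest to show. A hedged sketch of the idea from the NIPS 2014 paper: feeding the source words in reverse shortens the distance between the first source word and the first target word, which was reported to help optimization.

```python
# Input-reversal trick (sketch): the encoder consumes the source
# sentence back to front; the target sentence is left in normal order.
src = ["I", "saw", "a", "cat"]
reversed_src = list(reversed(src))    # what the encoder actually sees
assert reversed_src == ["cat", "a", "saw", "I"]
```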
Weaknesses of the seq2seq model
• Weak on long sentences
• Proper nouns get swapped
Improvement with attention [Bahdanau+ 2015]
• As the context at decoding time, use a weighted sum of the encoder's hidden states at every time step
• The weights themselves are computed with an RNN
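The weighted-sum context above can be sketched in a few lines of numpy. This is a hedged illustration: the shapes are made up, and the dot-product score used here is a simplification (the alignment model in [Bahdanau+ 2015] is learned), but the softmax-then-weighted-sum structure is the core of attention.

```python
import numpy as np

# Attention context as a weighted sum of encoder hidden states.
# Shapes and the dot-product scoring are illustrative only.
rng = np.random.default_rng(0)
S, H = 5, 8                              # source length, hidden size
enc_states = rng.normal(size=(S, H))     # h_1 .. h_S from the encoder
dec_state = rng.normal(size=H)           # current decoder hidden state

scores = enc_states @ dec_state                   # one score per source step
weights = np.exp(scores) / np.exp(scores).sum()   # softmax -> attention weights
context = weights @ enc_states                    # weighted sum of states

assert np.isclose(weights.sum(), 1.0)    # weights form a distribution
assert context.shape == (H,)             # context has hidden-state size
```

The context vector is recomputed at every decoding step, so the decoder can look back at different source positions for each output word instead of relying on a single fixed bottleneck vector.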
Contributions of this paper
• Proposed a new attention mechanism (local attention)
• Takes a weighted sum over the hidden states of the D words before and after position p_t in the source sentence
• The weight computation is the same as in global attention
• Two variants are tested: advancing p_t one position at a time (local-m), and making p_t itself the output of an RNN (local-p)
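The windowing in local attention can be sketched as below. This is a hedged illustration with made-up values of `S`, `D`, and `p_t`; note the paper additionally multiplies the scores inside the window by a Gaussian centered at p_t, which is omitted here.

```python
# Local attention window (sketch): only source positions within D of
# the center p_t are attended over, instead of the whole sentence.
S, D = 10, 2          # source length, window half-width (illustrative)
p_t = 6               # predicted center position (as in local-p)
lo, hi = max(0, p_t - D), min(S - 1, p_t + D)
window = list(range(lo, hi + 1))   # positions whose states get weights
assert window == [4, 5, 6, 7, 8]
```

Restricting attention to this window keeps the cost per decoding step constant in the source length, at the price of having to predict p_t well.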
local attention
Impressions of local attention
• For translation between languages with similar word order, this approach seems clearly better
• When word order differs greatly, as in English→Japanese, estimating the position p_t itself looks like it would become a hard task…
Experimental results: WMT'14
Experimental results: WMT'14
• Looking closely, the performance gain from local attention is +0.9 points
• Many of the points are earned by the other techniques
Experimental results: WMT'15
Some translation samples
Summary
• Proposed local attention as an extension of the seq2seq model
• The proposed method achieved state-of-the-art performance in several experiments
Impressions
• Local attention feels somewhat marginal
• The concrete differences from similar methods are written up clearly, which made the paper easy to read
• Glad I could finally understand attention (a very basic takeaway, I know)