Speaker Deck
自然言語処理と深層学習の最先端 (The Cutting Edge of Natural Language Processing and Deep Learning)
tkng
January 15, 2016
Technology
16 stars · 7.7k views
Presentation slides for the 4th JustTechTalk
Transcript
The Cutting Edge of Natural Language Processing and Deep Learning — Tokunaga, JustTechTalk
I'll introduce a part of the cutting edge of natural language processing and deep learning. Tokunaga (@tkng), JustTechTalk
Self-introduction: Tokunaga • Twitter ID: @tkng • I work on natural language processing and image processing at SmartNews, Inc.
What is natural language processing? • It deals with natural language (≠ programming languages) • Machine translation • Question answering • Document classification • Syntactic parsing / dependency parsing • Morphological analysis / word segmentation
An example of machine translation • The Word Lens feature of Google Translate http://googletranslate.blogspot.jp/…/hallo-hola-ola-to-new-more-powerful_….html
An example of question answering • IBM Watson • Beat humans at Jeopardy! http://www.nytimes.com/…/science/…jeopardy-watson.html
What is deep learning? • ≒ neural networks • The recent boom is due to the following: • improved computing performance • growth in training data • progress in research on optimization methods and the like
The Cutting Edge of Natural Language Processing and Deep Learning
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (Xu+, 2015) • Generates captions describing an image http://kelvinxu.github.io/projects/capgen.html
Show, Attend and Tell: what kind of method is it? • A combination of the following three: • Convolutional Neural Network • Long Short Term Memory • Attention
Generating Images from Captions with Attention (Mansimov+, 2015) • Generates an image from a caption • If you don't look too closely, it can pass for an airplane
Effective Approaches to Attention-based Neural Machine Translation (Luong+, 2015) • Machine translation using deep learning • Proposes a new method called local attention • Achieved state of the art (the best results to date) on several language pairs
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing (Kumar+, 2015) • Proposes a new model (Dynamic Memory Networks) • The model looks like a combination of recurrent neural networks • State of the art on question answering, part-of-speech tagging, coreference resolution, and sentiment analysis
The current state of deep learning in NLP • In terms of accuracy, it has not opened a large gap over other methods • unlike in image processing and speech recognition • Machine translation and question answering can now be solved with simple methods • Sentence generation has become possible
What happens next? • Honestly, hard to say… • Research combining text with images and video will likely increase
To keep up with the cutting edge
I'll narrow the explanation down to three topics • The basics of neural networks • Recurrent neural networks • especially the Gated Recurrent Unit • Attention
A neural network = a function • A neural network can be viewed as a kind of function • Its inputs and outputs are vectors • It is differentiable
Let's start from a simple example: y = f(x) = Wx
Normalize the output to the range 0–1 • y = softmax(f(x))
Let's make it multi-layer • y = softmax(g(f(x)))
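The two slides above can be sketched in numpy; the layer sizes and random weights below are illustrative, not anything from the talk:

```python
import numpy as np

def softmax(z):
    # subtract the max before exponentiating, for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 4))   # f(x) = W1 @ x  (4-dim input -> 5-dim hidden)
W2 = rng.normal(size=(3, 5))   # g(h) = W2 @ h  (5-dim hidden -> 3 outputs)

x = rng.normal(size=4)
y = softmax(W2 @ (W1 @ x))     # y = softmax(g(f(x)))
print(y)                       # three non-negative entries summing to 1
```

Note that without a nonlinearity between the two matrices the composition collapses to a single linear map; real networks insert one (e.g. tanh or ReLU) between f and g.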
Which part is the "layer"?
A bit of trivia • Be careful with the word "layer" • It's ambiguous which part it refers to (you can tell if you pay attention while reading, though…) • In the major OSS frameworks it usually refers to the function (Caffe, Torch, Chainer, TensorFlow)
Recurrent Neural Network • A general term for networks that receive the elements of a sequence one at a time and update an internal state • Very popular recently http://colah.github.io/posts/…/Understanding-LSTMs/
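That state update can be sketched as a plain tanh RNN; the sizes and random weights are made up for illustration:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # new state from the previous state and the current sequence element
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(0)
H, D = 6, 4                          # hidden size, input size (illustrative)
W_h = 0.1 * rng.normal(size=(H, H))
W_x = 0.1 * rng.normal(size=(H, D))
b = np.zeros(H)

h = np.zeros(H)                      # initial state
for x in rng.normal(size=(5, D)):    # a length-5 sequence of 4-dim inputs
    h = rnn_step(h, x, W_h, W_x, b)  # one update per element
print(h.shape)                       # final state: (6,)
```

LSTMs and GRUs replace this single tanh update with gated updates, but the receive-one-element, update-the-state loop is the same.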
Why are RNNs so popular? • Variable-length data is hard to handle • It has turned out that seq2seq models built on RNNs (also called encoder/decoder models) can handle variable-length data well
What is a seq2seq model? • Encode the variable-length input into a fixed-length vector, then decode the output (e.g. the translation) from that vector • In recent years research has advanced on tasks where input and output lengths differ, such as machine translation and automatic summarization
Translation with a seq2seq model (diagram: the encoder reads "This is a pen" followed by EOS; the decoder then emits "これ は ペン です" followed by EOS)
Translation with a seq2seq model (same diagram, annotated: "This is a pen" is encoded into a fixed-length vector!)
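The encode-to-a-fixed-vector, then decode-until-EOS flow in the diagrams can be sketched as below. Everything here (the toy vocabulary, the random untrained weights, greedy argmax decoding, the length cap) is an illustrative stand-in, not a trained translator:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8                          # toy vocabulary size and hidden size
EOS = 0                              # token id 0 plays the role of EOS
E = 0.1 * rng.normal(size=(V, H))    # token embeddings
W = 0.1 * rng.normal(size=(H, H))    # encoder recurrent weights
U = 0.1 * rng.normal(size=(H, H))    # decoder recurrent weights
Wo = rng.normal(size=(V, H))         # output projection

def encode(tokens):
    # compress a variable-length sequence into one fixed-length vector
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(W @ h + E[t])
    return h

def decode(h, max_len=10):
    # greedily emit tokens until EOS (or a length cap, since the
    # untrained weights may never produce EOS)
    out, t = [], EOS
    for _ in range(max_len):
        h = np.tanh(U @ h + E[t])
        t = int(np.argmax(Wo @ h))
        if t == EOS:
            break
        out.append(t)
    return out

h = encode([3, 1, 4, 1])   # a "This is a pen"-style toy input
print(decode(h))
```

A real seq2seq model would learn E, W, U, and Wo by backpropagation through exactly this loop.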
What if we use a seq2seq model for translation? • It is known to work quite well • But it has weaknesses like the following: • weak on long sentences • proper nouns get swapped • Attention, explained next, is what solves this
What is attention? • A mechanism for referring back to a little of the encoder-side information at decoding time • "a little" = looking only at a selected part • There are global attention and local attention
Global attention • Uses a weighted sum of the candidate (encoder) states as the attention • Historically, this variant is slightly older (diagram: "This is a pen" EOS → これ …)
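That weighted sum can be sketched as follows, assuming dot-product scores (one of several scoring functions discussed by Luong+):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def global_attention(dec_h, enc_states):
    # score every encoder state against the current decoder state,
    # normalize the scores into weights, and return the weighted sum
    scores = enc_states @ dec_h          # one score per source position
    weights = softmax(scores)
    context = weights @ enc_states       # weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(4, 6))     # 4 source positions, 6-dim states
dec_h = rng.normal(size=6)               # current decoder state

context, weights = global_attention(dec_h, enc_states)
print(weights)                           # 4 weights summing to 1
```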
Local attention • Selects and uses a few of the encoder-side states (diagram: "This is a pen" EOS → これ …)
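A deliberately simplified sketch of that idea: attend only within a small window of encoder states. In Luong+'s local attention the window position is predicted by the model and the scores are additionally weighted by a Gaussian centred on it; both refinements are omitted here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def local_attention(dec_h, enc_states, center, width=1):
    # look only at encoder states within `width` of `center`
    lo, hi = max(0, center - width), min(len(enc_states), center + width + 1)
    window = enc_states[lo:hi]
    weights = softmax(window @ dec_h)
    return weights @ window              # weighted sum over the window only

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 4))     # 6 source positions, 4-dim states
dec_h = rng.normal(size=4)

context = local_attention(dec_h, enc_states, center=2)
print(context.shape)                     # (4,)
```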
The order of attention • The order in which attention is applied differs between global attention and local attention • The order can be learned, e.g., with an RNN • Even just applying attention in order from the front improves performance
Experimental results: WMT'14
Experimental results: WMT'15
Examples of actual translations
Summary so far • An explanation of basic neural networks • Recurrent neural networks • Attention
What I didn't cover today • Training (back propagation, minibatches) • Loss functions (log loss, cross-entropy loss) • Regularization techniques • dropout, batch normalization • Optimization techniques • RMSProp, AdaGrad, Adam • Various activation functions • (Very) Leaky ReLU, Maxout
Recommendations from here • Try experimenting with something yourself • A simple example is fine; get it running first • Once it runs, try modifying it yourself • Above all, what matters is getting your hands moving • Don't start by tackling something too difficult
Antennas for the latest information (1) • Follow people on Twitter who post about machine learning and the like • @hillbig, for a start • There are many other good people too • but some are dubious, so be careful
Antennas for the latest information (2) • Read papers • Beware: some papers are a waste of time to read • At first, it's best to limit yourself to papers accepted at well-known conferences (ACL, EMNLP, ICML, NIPS, KDD, etc.)
Antennas for the latest information (3) • Pay attention to the authors of papers • As you read, a few authors whose papers you find interesting will emerge • Find a way to keep up with those people's new papers
Take home messages • While your motivation is high, run various experiments with your own hands • For beginners, Chainer is recommended • The latest information can be obtained online • Watch out for people whose zeal points in strange directions