Literature review: Confidence Modeling for Neural Semantic Parsing
Yumeto Inaoka
October 24, 2018
Research
Presented at the literature review session on 2018/10/24.
Transcript
Confidence Modeling for Neural Semantic Parsing
Literature review — Natural Language Processing Laboratory, Nagaoka University of Technology
Yumeto Inaoka
Literature
Confidence Modeling for Neural Semantic Parsing
Li Dong† and Chris Quirk‡ and Mirella Lapata†
†School of Informatics, University of Edinburgh; ‡Microsoft Research, Redmond
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753, 2018.
Abstract
• Confidence modeling for neural semantic parsing (seq2seq)
• Identifies which parts of the input are causes of uncertainty
• Outperforms methods that rely on the posterior probability or on attention
Introduction
• Neural semantic parsing produces promising results, but operates as a black box whose outputs are hard to interpret
• Estimating the model's confidence in its predictions could enable meaningful feedback
• The posterior probability p(y|x) is commonly used to score confidence → effective for linear models, but works poorly for neural models
Neural Semantic Parsing
• Input: natural language; Output: logical form
• Seq2seq with LSTM
• Attention mechanism
• Trained to maximize the likelihood
• Decoding with beam search
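The beam-search decoding mentioned above can be sketched minimally. The toy vocabulary and the transition log-probabilities below are invented for illustration; a real parser would score next tokens with the LSTM decoder:

```python
import math

# Toy next-token log-probabilities: LOG_PROBS[prev_token][next_token].
# Vocabulary and probabilities are made up for illustration only.
LOG_PROBS = {
    "<s>":    {"flight": math.log(0.6), "time": math.log(0.4)},
    "flight": {"time": math.log(0.9), "</s>": math.log(0.1)},
    "time":   {"</s>": math.log(1.0)},
}

def beam_search(beam_size=2, max_len=4):
    """Keep the `beam_size` highest-scoring partial sequences at each step."""
    beams = [(["<s>"], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == "</s>":          # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for tok, lp in LOG_PROBS[seq[-1]].items():
                candidates.append((seq + [tok], score + lp))
        # Prune to the top `beam_size` hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

best_seq, best_score = beam_search()[0]
print(best_seq)  # → ['<s>', 'flight', 'time', '</s>']
```

The greedy choice ("flight") happens to lie on the best path here; with other transition tables, beam search can recover sequences that greedy decoding misses.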
Confidence Estimation
• Predict a confidence score s(q, a) ∈ (0, 1) from the input q and the predicted meaning representation a
• Judging confidence requires estimating what the model does not know
• Metrics built from model uncertainty, data uncertainty, and input uncertainty are fed to a regression model that produces the confidence score
Model Uncertainty
• Uncertainty in the model's parameters or structure lowers confidence ← caused by, e.g., noise in the training data or the stochastic learning algorithm
• Metrics are built from dropout perturbation, Gaussian noise, and the posterior probability to estimate this uncertainty
Dropout Perturbation
• Apply dropout at test time (at positions i, ii, iii, iv in the figure)
• A sentence-level metric and a token-level metric are defined (equations on the slide)
• Perturb the parameters, collect the resulting outputs, and compute their variance
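A minimal sketch of test-time dropout. The scorer below is a stand-in for the real parser (the weight matrix, hidden state, and dimensions are invented), but the mechanics match the slide: run several stochastic passes and use the variance of the outputs as an uncertainty signal:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))  # stand-in "parameters": hidden dim 8, 5 output tokens

def forward(h, dropout_rate=0.1):
    """One stochastic pass: drop hidden units at *test* time, then softmax."""
    mask = rng.random(W.shape[0]) >= dropout_rate
    logits = (h * mask / (1.0 - dropout_rate)) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

h = rng.normal(size=8)                            # stand-in encoder state
runs = np.stack([forward(h) for _ in range(30)])  # 30 perturbed passes

token_uncertainty = runs.var(axis=0)             # variance per output token
sentence_uncertainty = token_uncertainty.mean()  # crude sentence-level aggregate
print(sentence_uncertainty)
```

In the actual paper the variance is taken over the probabilities the parser assigns to its own prediction; the aggregation above is a simplification.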
Gaussian Noise
• Add Gaussian noise to the vectors and compute the variance in the same way as with dropout ← dropout corresponds to Bernoulli noise; here the noise follows a Gaussian
• The noise is injected in the following two ways (v is the original vector, g is the Gaussian noise; equations on the slide)
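The two injection formulas did not survive extraction; the sketch below assumes an additive variant (v + g) and a multiplicative variant (v ⊙ (1 + g)) as the two ways of combining v and g, which is my reading of the slide, not a quotation of it:

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(v, sigma=0.05, multiplicative=False):
    """Inject Gaussian noise into a vector (two hypothetical variants)."""
    g = rng.normal(scale=sigma, size=v.shape)
    return v * (1.0 + g) if multiplicative else v + g

v = rng.normal(size=8)                     # stand-in hidden vector
samples = np.stack([perturb(v) for _ in range(30)])
print(samples.var(axis=0).mean())          # variance across passes, as with dropout
```

With additive noise the per-dimension variance concentrates around sigma² (0.0025 here), so the resulting metric directly reflects the chosen noise scale.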
Posterior Probability
• The posterior probability p(a | q) is used as the sentence-level metric
• Two token-level metrics are used (equations on the slide):
  one focuses on the most uncertain token,
  the other is the per-token perplexity
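The two token-level metrics can be sketched as follows; the per-token probabilities are invented, standing in for the probabilities the parser assigns to its own output tokens:

```python
import math

# Hypothetical per-token probabilities of the predicted logical form.
token_probs = [0.95, 0.80, 0.30, 0.99]

# Metric 1: focus on the most uncertain token.
min_prob = min(token_probs)

# Metric 2: per-token perplexity of the prediction.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)

print(min_prob, perplexity)
```

The minimum flags a single weak spot (0.30 here) even when the sequence-level probability looks healthy, while the perplexity averages the uncertainty over all tokens.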
Data Uncertainty
• Coverage of the training data affects uncertainty
• Train a language model on the training data and use the language-model probability of the input as a metric
• The number of unknown tokens in the input is also used as a metric
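A toy version of the two data-uncertainty metrics. The "training data" is a single invented sentence and the smoothed unigram model is a stand-in for the KenLM n-gram model used in the experiments:

```python
import math
from collections import Counter

# Toy "training data"; a real setup would train KenLM on the actual corpus.
train_tokens = "turn on the light when i get home".split()
counts = Counter(train_tokens)
total = sum(counts.values())

def lm_log_prob(tokens, alpha=1.0):
    """Add-one-smoothed unigram log-probability of the input (KenLM stand-in)."""
    vocab = len(counts) + 1  # +1 for the unknown token
    return sum(math.log((counts[t] + alpha) / (total + alpha * vocab)) for t in tokens)

def num_unk(tokens):
    """Number of input tokens never seen in the training data."""
    return sum(1 for t in tokens if t not in counts)

query = "turn off the frobnicator".split()
print(lm_log_prob(query), num_unk(query))  # "off" and "frobnicator" are unseen
```

A low language-model score or a high unknown-token count signals that the input lies outside what the training data covers.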
Input Uncertainty
• Even with a perfect model, uncertainty arises when the input is ambiguous (e.g., "9 o'clock" → flight_time(9am) or flight_time(9pm))
• Use the variance of the probabilities of the top candidates
• Use the entropy ← approximated by sampling a′
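The variance and entropy metrics over the candidate list can be sketched directly; the 10-best probabilities below are invented:

```python
import numpy as np

# Hypothetical probabilities of the 10-best candidate parses for one input.
cand_probs = np.array([0.30, 0.28, 0.12, 0.08, 0.07, 0.05, 0.04, 0.03, 0.02, 0.01])

variance = cand_probs.var()            # spread of the top candidates
p = cand_probs / cand_probs.sum()      # renormalise the truncated list
entropy = -(p * np.log(p)).sum()       # entropy approximated on the samples

print(variance, entropy)
```

An ambiguous input such as the "9 o'clock" example tends to produce two near-equal top candidates, which pushes the entropy up; a confidently parsed input concentrates mass on one candidate and drives it toward zero.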
Confidence Scoring
• Confidence is scored using these various metrics
• The metrics are fed to a gradient boosting model for training; the output is wrapped in a logistic function so that it lies between 0 and 1
• For gradient boosting, this explanatory article is easy to follow ("Gradient Boosting と XGBoost": https://zaburo-ch.github.io/post/xgboost/)
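A minimal gradient-boosting sketch with squared-error loss, decision stumps as weak learners, and the logistic wrap from the slide. The uncertainty feature values and regression targets are invented; a real setup would use a library such as XGBoost on the metrics described above:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimising squared error on residuals r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def boost(X, y, rounds=20, lr=0.3):
    """Gradient boosting with L2 loss: each stump fits the current residuals."""
    pred = np.zeros(len(y))
    stumps = []
    for _ in range(rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        stumps.append((j, t, lv, rv))
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return stumps

def score(stumps, x, lr=0.3):
    raw = sum(lr * (lv if x[j] <= t else rv) for j, t, lv, rv in stumps)
    return 1.0 / (1.0 + np.exp(-raw))   # logistic wrap: confidence in (0, 1)

# Invented uncertainty features: [dropout variance, perplexity, #UNK]
X = np.array([[0.01, 1.1, 0], [0.02, 1.3, 0], [0.30, 4.0, 2], [0.25, 3.5, 1]])
y = np.array([3.0, 2.5, -3.0, -2.5])    # raw targets: high = likely correct parse
model = boost(X, y)
print(score(model, X[0]), score(model, X[2]))
```

The low-uncertainty example scores above 0.5 and the high-uncertainty one below it; the logistic is applied only at scoring time, so the boosting itself remains a plain regression.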
Uncertainty Interpretation
• Identify which parts of the input contribute to the uncertainty → those inputs can then be treated as special cases
• Backpropagate from the prediction back to the input tokens → this reveals each token's contribution to the uncertainty
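The token-level attribution can be illustrated with a toy differentiable scorer: the gradient of an uncertainty score with respect to each token embedding measures that token's contribution. The model, embeddings, and tokens below are all invented; only the gradient-magnitude idea carries over:

```python
import numpy as np

rng = np.random.default_rng(2)
tokens = ["remind", "me", "at", "9", "o'clock"]
emb = rng.normal(size=(len(tokens), 4))   # stand-in token embeddings
w = rng.normal(size=4)                    # stand-in readout weights

# Toy scorer: uncertainty u = sum_i (w . e_i)^2, so du/de_i = 2 (w . e_i) w.
grads = np.stack([2.0 * (w @ e) * w for e in emb])

# Each token's contribution = gradient magnitude at its embedding.
contrib = np.linalg.norm(grads, axis=1)
ranked = sorted(zip(tokens, contrib), key=lambda p: -p[1])
print(ranked)
```

In the paper the backward pass runs through the full seq2seq model rather than a closed-form scorer, but the ranking step over per-token gradient magnitudes is the same.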
Experiments (Datasets)
• IFTTT dataset (train-dev-test: 77,495 / 5,171 / 4,294)
• DJANGO dataset (train-dev-test: 16,000 / 1,000 / 1,805)
Experiments (Settings)
• Dropout perturbation: dropout rate 0.1; run 30 times and compute the variance
• Gaussian noise: standard deviation set to 0.05
• Probability of input: KenLM used as the language model
• Input uncertainty: variance computed from the 10-best candidates
Experiments (Results)
• Model uncertainty is the most effective
• Data uncertainty has little effect → because the test data is in-domain
Experiments (Results)
Experiments (Results)
• The model uncertainty metrics are the most important
• For IFTTT in particular, #UNK and Var are important
Experiments (Results)
Experiments (Results)
• Evaluated by the overlap between the tokens identified by adding noise and the tokens obtained by backpropagation
• The overlap is higher than with attention
• At K = 4, 80% of the tokens agree
Experiments (Results)
Conclusions
• Presented a confidence estimation model for neural semantic parsing
• Presented a method for interpreting uncertainty at the input-token level
• Confirmed the model's effectiveness on the IFTTT and DJANGO datasets
• The proposed model is applicable to the many tasks that use seq2seq
• It can also be used for active learning in neural semantic parsing