Literature Review: Confidence Modeling for Neural Semantic Parsing
Presented at the literature review meeting on 2018/10/24.
Yumeto Inaoka
October 24, 2018
Transcript
Confidence Modeling for Neural Semantic Parsing
Literature review, Natural Language Processing Laboratory, Nagaoka University of Technology
Yumeto Inaoka
Literature: Confidence Modeling for Neural Semantic Parsing
Li Dong† and Chris Quirk‡ and Mirella Lapata†
†School of Informatics, University of Edinburgh  ‡Microsoft Research, Redmond
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753, 2018.
Abstract
• Confidence modeling for neural semantic parsing (seq2seq)
• Identifies which parts of the input are the source of uncertainty
• Outperforms methods that rely on the posterior probability or on attention scores
Introduction
• Neural semantic parsing produces strong results, but operates as a black box whose outputs are hard to interpret
• Estimating the model's confidence in its predictions could enable meaningful feedback
• The posterior probability p(y|x) is commonly used for confidence scoring
→ effective for linear models, but works poorly for neural models
Neural Semantic Parsing
• In: natural language  Out: logical form
• Seq2seq with LSTM
• Attention mechanism
• Trained to maximize the likelihood
• Beam search for decoding
Confidence Estimation
• From the input q and the predicted meaning representation a, predict a confidence score s(q, a) ∈ (0, 1)
• Judging confidence requires estimating "what the model does not know"
• A regression model computes the confidence from metrics built on model uncertainty, data uncertainty, and input uncertainty
Model Uncertainty
• Uncertainty about the model's parameters or structure lowers confidence
← e.g. noise contained in the training data, or the stochastic learning algorithm
• Metrics built from dropout perturbation, Gaussian noise, and the posterior probability are used to estimate this uncertainty
Dropout Perturbation
• Apply dropout at test time (at positions i, ii, iii, iv in the figure)
• Sentence-level metric: var{ p(a | q, M̂) } across dropout-perturbed models M̂
• Token-level metric: var{ p(a_t | q, a_<t, M̂) } across dropout-perturbed models
• Perturb the parameters, collect the resulting probabilities, and compute the variance
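The dropout metric above can be sketched as follows. This is a minimal illustration with a toy stochastic scorer standing in for the parser; `predict_prob`, the simulated noise, and the run count are assumptions, not the paper's implementation:

```python
import random
import statistics

def predict_prob(q, a, seed):
    """Toy stand-in for p(a | q) under one dropout-perturbed model M-hat.
    A real implementation would run the seq2seq parser with dropout kept
    active at test time; here we just simulate run-to-run variation."""
    rng = random.Random(hash((q, a)) ^ seed)
    return min(1.0, max(0.0, 0.8 + rng.gauss(0, 0.05)))

def dropout_uncertainty(q, a, num_runs=30):
    """Sentence-level metric: variance of p(a | q) across perturbed runs."""
    probs = [predict_prob(q, a, seed) for seed in range(num_runs)]
    return statistics.variance(probs)

print(dropout_uncertainty("flip coin", "if_then(coin, flip)"))  # small positive variance
```

The token-level version applies the same variance, but to each per-token probability p(a_t | q, a_<t) instead of the whole-sequence score.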
Gaussian Noise
• Add Gaussian noise to the vectors and compute the variance in the same way as with dropout
← dropout corresponds to Bernoulli noise, whereas here the noise follows a Gaussian
• The noise is injected in one of the following two ways (v is the original vector, g is the Gaussian noise): v̂ = v + g, or v̂ = v + v ⊙ g
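The two injection forms can be written directly. A minimal sketch; the standard deviation 0.05 matches the experimental setting quoted later, and everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(v, sigma=0.05, multiplicative=False):
    """Inject Gaussian noise into a vector in one of the two listed forms:
    additive v_hat = v + g, or scaled v_hat = v + v * g (elementwise)."""
    g = rng.normal(0.0, sigma, size=v.shape)
    return v + v * g if multiplicative else v + g

v = np.ones(4)
print(perturb(v))                        # additive noise
print(perturb(v, multiplicative=True))   # noise scaled by the vector itself
```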
Posterior Probability
• The posterior probability p(a | q) is used as the sentence-level metric
• Two token-level metrics are used:
• min_t p(a_t | q, a_<t): focuses on the most uncertain token
• the per-token perplexity of the output sequence
Data Uncertainty
• The coverage of the training data affects the uncertainty
• Train a language model on the training data and use the language-model probability of the input as a metric
• Use the number of unknown tokens in the input as a metric
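Both data-uncertainty features can be sketched with a smoothed unigram model standing in for the KenLM model used in the experiments; the training sentences and smoothing constant are illustrative assumptions:

```python
import math
from collections import Counter

def fit_unigram(training_sentences):
    """Unigram LM over the training data; a toy stand-in for KenLM."""
    counts = Counter(tok for s in training_sentences for tok in s.split())
    return counts, sum(counts.values())

def data_uncertainty(sentence, counts, total, alpha=1.0):
    """Returns (#unknown tokens, smoothed log-probability of the input)."""
    toks = sentence.split()
    unknown = sum(1 for t in toks if t not in counts)
    vocab = len(counts) + 1  # +1 for the unknown type (add-alpha smoothing)
    logp = sum(math.log((counts[t] + alpha) / (total + alpha * vocab))
               for t in toks)
    return unknown, logp

counts, total = fit_unigram(["turn on the light", "turn off the fan"])
print(data_uncertainty("turn off the heater", counts, total))  # 1 unknown token
```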
Input Uncertainty
• Even with a perfect model, an ambiguous input produces uncertainty
(e.g. "9 o'clock" → flight_time(9am) or flight_time(9pm))
• Use the variance of the probabilities of the top candidates
• Use the entropy ← approximated by sampling candidates a′
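Both features can be computed from the probabilities of the top candidate parses; a minimal sketch with made-up candidate scores, where the entropy over the renormalized candidate list approximates the entropy over all parses a′:

```python
import math
import statistics

def input_uncertainty(candidate_probs):
    """Input-uncertainty features from the top candidates' probabilities:
    their variance, and the entropy of the renormalized candidate
    distribution."""
    var = statistics.pvariance(candidate_probs)
    z = sum(candidate_probs)
    entropy = -sum((p / z) * math.log(p / z) for p in candidate_probs)
    return var, entropy

# Scores for an ambiguous vs. an unambiguous input (made-up numbers)
print(input_uncertainty([0.30, 0.28, 0.22]))  # flat list: low variance, high entropy
print(input_uncertainty([0.90, 0.05, 0.03]))  # peaked list: high variance, low entropy
```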
Confidence Scoring
• The confidence score is computed from all of these metrics
• The metrics are fed to a gradient boosting model, which is trained, and whose output is wrapped in a logistic function so that it falls in [0, 1]
• For gradient boosting models, the following explanatory article is easy to follow
(「Gradient Boosting と XGBoost」: https://zaburo-ch.github.io/post/xgboost/ )
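The scoring step can be sketched with scikit-learn's GradientBoostingRegressor standing in for the paper's boosting model; the synthetic features and targets are illustrative assumptions, and only the structure (metrics in, logistic-wrapped score out) follows the slide:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row is a vector of uncertainty metrics (dropout variance, perplexity,
# #UNK, candidate variance, ...); targets mark whether the parse was correct.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

model = GradientBoostingRegressor().fit(X, y)

def confidence(features):
    """Wrap the raw regression output with a logistic function so the
    confidence score lands in (0, 1)."""
    raw = model.predict(np.asarray(features).reshape(1, -1))[0]
    return 1.0 / (1.0 + np.exp(-raw))

print(confidence([1.0, 0.2, -0.3, 0.1]))
```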
Uncertainty Interpretation
• Identify which parts of the input contribute to the uncertainty
→ those inputs can then be treated as special cases
• Backpropagate from the prediction down to the input tokens
→ reveals each token's contribution to the uncertainty
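The idea of pushing a score back to the input tokens can be illustrated with a gradient×input attribution over a toy differentiable scorer. The paper defines its own backpropagation rules for uncertainty scores through the seq2seq network; this numpy sketch is a simplified stand-in, and every name in it is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
embed = rng.normal(size=(5, 3))   # toy embeddings for a 5-word vocabulary
w = rng.normal(size=3)            # toy scorer parameters

def token_contributions(token_ids):
    """Score = sum over tokens of w . embed[t]; the gradient of the score
    w.r.t. each token's embedding is w, so gradient-times-input reduced to
    a norm serves as that token's share of the (un)certainty."""
    grads = np.stack([w for _ in token_ids])           # d(score)/d(embed[t])
    contrib = np.linalg.norm(grads * embed[token_ids], axis=1)
    return contrib / contrib.sum()                     # normalize across tokens

print(token_contributions([0, 2, 4]))  # one contribution per input token
```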
Experiments (Datasets)
• IFTTT dataset (train-dev-test: 77,495 - 5,171 - 4,294)
• DJANGO dataset (train-dev-test: 16,000 - 1,000 - 1,805)
Experiments (Settings)
• Dropout Perturbation: dropout rate 0.1, 30 runs to compute the variance
• Gaussian Noise: standard deviation set to 0.05
• Probability of Input: KenLM is used as the language model
• Input Uncertainty: variance computed over the 10-best candidates
Experiments (Results)
• Model uncertainty is the most effective
• Data uncertainty has little impact → because the data is in-domain
Experiments (Results)
Experiments (Results)
• The model-uncertainty metrics are important
• For IFTTT, #UNK and Var are especially important
Experiments (Results)
Experiments (Results)
• Evaluated by the overlap between the token sequence found by adding noise and the token sequence obtained by backpropagation
• Higher agreement than the attention baseline
• At K = 4, 80% of the tokens agree
Experiments (Results)
Conclusions
• Presented a confidence-estimation model for neural semantic parsing
• Presented a method for interpreting uncertainty at the input-token level
• Confirmed its effectiveness on the IFTTT and DJANGO datasets
• The proposed model is applicable to a variety of tasks that use seq2seq
• It can also be used for active learning in neural semantic parsing