Literature Review: Confidence Modeling for Neural Semantic Parsing
Presented at the literature review session on October 24, 2018.
Yumeto Inaoka
October 24, 2018
Transcript
Confidence Modeling for Neural Semantic Parsing
Literature Review
Natural Language Processing Laboratory, Nagaoka University of Technology
Yumeto Inaoka
Literature
Confidence Modeling for Neural Semantic Parsing
Li Dong†, Chris Quirk‡ and Mirella Lapata†
†School of Informatics, University of Edinburgh  ‡Microsoft Research, Redmond
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753, 2018.
Abstract
• Confidence modeling for neural semantic parsing (seq2seq)
• Identifies which parts of the input are responsible for the uncertainty
• Outperforms methods that rely on the posterior probability or on attention
Introduction
• Neural semantic parsing produces promising results, but it behaves as a black box whose outputs are hard to interpret
• Estimating the model's confidence in its predictions could enable meaningful feedback
• The posterior probability p(y|x) is commonly used for confidence scoring → effective for linear models, but not a good fit for neural models
Neural Semantic Parsing
• In: natural language  Out: logical form
• Seq2seq with LSTM
• Attention mechanism
• Maximize the likelihood
• Beam search
Confidence Estimation
• Predict a confidence score s(q, a) ∈ (0, 1) from the input q and the predicted meaning representation a
• Judging confidence requires estimating "what the model does not know"
• The confidence is computed by a regression model from metrics built on model uncertainty, data uncertainty, and input uncertainty
Model Uncertainty
• Uncertainty arising from the model's parameters and structure lowers confidence ← e.g., noise in the training data or the stochastic learning algorithm
• Metrics are built from dropout perturbation, Gaussian noise, and the posterior probability to predict this uncertainty
Dropout Perturbation
• Dropout is also applied at test time (at positions i, ii, iii, iv in the figure)
• Sentence-level metric: the variance of p(a | q) across the perturbed models
• Token-level metric: the variance of p(a_t | a_<t, q) across the perturbed models, averaged over tokens
• The perturbed parameters yield multiple results, which are collected to compute the variance (see the sketch below)
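The following is a minimal sketch of the dropout-perturbation metric, not the authors' code: dropout is left active at test time, the same (input, prediction) pair is scored several times, and the variance of the resulting sequence probabilities is used as the uncertainty signal. The seq2seq model and its sequence_log_prob helper are hypothetical placeholders.

```python
import torch

def dropout_uncertainty(model, q_tokens, a_tokens, num_runs=30):
    # Keep dropout layers active at test time (the i-iv positions on the slide).
    model.train()
    probs = []
    with torch.no_grad():
        for _ in range(num_runs):
            # Hypothetical helper returning log p(a | q) under the current dropout masks.
            log_p = model.sequence_log_prob(q_tokens, a_tokens)
            probs.append(log_p.exp())
    model.eval()
    # Sentence-level metric: variance of p(a | q) across the perturbed runs.
    return torch.stack(probs).var(unbiased=False).item()
```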
Gaussian Noise
• Gaussian noise is added to the vectors and the variance is computed as with dropout ← dropout corresponds to Bernoulli noise, whereas this noise follows a Gaussian
• The noise is added in one of the following two ways (v is the original vector, g is the Gaussian noise): v + g, or v + v ⊙ g (see the sketch below)
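A minimal sketch of the two noise variants listed above, assuming PyTorch tensors and the σ = 0.05 used in the experimental settings; the variance is then computed over repeated runs exactly as in the dropout case.

```python
import torch

def add_gaussian_noise(v, sigma=0.05, multiplicative=False):
    # g ~ N(0, sigma^2), drawn independently for every element of v.
    g = torch.randn_like(v) * sigma
    # Variant 1: v + g.  Variant 2: v + v * g (noise scaled by the vector itself).
    return v + v * g if multiplicative else v + g
```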
Posterior Probability
• The posterior probability p(a | q) is used as the sentence-level metric
• At the token level, the following two metrics are used:
• The probability of the least certain token: focuses on the most uncertain word
• The per-token perplexity (see the sketch below)
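A small sketch of the two token-level metrics, assuming the decoder's per-token log-probabilities log p(a_t | a_<t, q) are available as an array; the function name is illustrative.

```python
import numpy as np

def token_level_posterior_metrics(token_log_probs):
    log_p = np.asarray(token_log_probs, dtype=float)
    # Metric 1: probability of the least certain token.
    min_prob = float(np.exp(log_p.min()))
    # Metric 2: per-token perplexity, exp of the average negative log-probability.
    perplexity = float(np.exp(-log_p.mean()))
    return min_prob, perplexity
```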
Data Uncertainty
• The coverage of the training data also affects the uncertainty
• A language model is trained on the training data, and the language-model probability of the input is used as a metric
• The number of unknown tokens in the input is used as a metric (see the sketch below)
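A sketch of the two data-uncertainty metrics, assuming a KenLM model has been trained on the training questions (KenLM is the language model named in the experimental settings) and the training vocabulary is available as a set; the path and names are placeholders.

```python
import kenlm  # KenLM is the language model used in the experiments

def data_uncertainty_metrics(question_tokens, lm_path, train_vocab):
    lm = kenlm.Model(lm_path)
    # Metric 1: language-model (log10) probability of the input question.
    lm_log_prob = lm.score(" ".join(question_tokens))
    # Metric 2: number of input tokens never seen in the training data.
    num_unknown = sum(tok not in train_vocab for tok in question_tokens)
    return lm_log_prob, num_unknown
```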
Input Uncertainty
• Even with a perfect model, uncertainty arises when the input is ambiguous (e.g. 9 o'clock → flight_time(9am) or flight_time(9pm))
• The variance of the probabilities of the top candidates is used
• The entropy is used ← approximated by sampling a′ (see the sketch below)
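A sketch of the input-uncertainty metrics, assuming the log-probabilities of the K-best candidates from beam search are given (10-best in the experiments). The entropy here is computed over the renormalized beam candidates as a simple stand-in for the sampling approximation mentioned on the slide.

```python
import numpy as np

def input_uncertainty_metrics(candidate_log_probs):
    p = np.exp(np.asarray(candidate_log_probs, dtype=float))
    # Metric 1: variance of the top candidates' probabilities.
    variance = float(p.var())
    # Metric 2: entropy over the candidates, renormalized to sum to one.
    p_norm = p / p.sum()
    entropy = float(-(p_norm * np.log(p_norm)).sum())
    return variance, entropy
```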
Confidence Scoring
• Confidence scoring is performed using these various metrics
• Each metric is fed as a feature to a gradient boosting model, which is trained with its output wrapped by a logistic function so that the score falls in (0, 1) (see the sketch below)
• For gradient boosting models, the following explanation is easy to follow ("Gradient Boosting と XGBoost": https://zaburo-ch.github.io/post/xgboost/)
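A minimal sketch of the scoring step using scikit-learn's GradientBoostingRegressor (the slide does not prescribe this particular library): the uncertainty metrics above form the feature vector, the regression target is a placeholder measure of prediction quality, and the output is squashed into (0, 1) with a logistic function.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def logistic(x):
    # Wrap raw regressor outputs so the confidence score lies in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder data: each row collects the uncertainty metrics for one
# (input, prediction) pair; y is a placeholder quality target.
X = np.random.rand(200, 6)
y = np.random.rand(200)

scorer = GradientBoostingRegressor(n_estimators=100, max_depth=3)
scorer.fit(X, y)
confidence = logistic(scorer.predict(X[:5]))  # confidence scores in (0, 1)
```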
Uncertainty Interpretation
• Identify which inputs contribute to the uncertainty → those inputs can then be handled as special cases
• Backpropagate from the prediction down to the input tokens → reveals each token's contribution to the uncertainty (see the sketch below)
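The paper propagates uncertainty scores back through the network's own computation graph; the sketch below is only a simplified proxy for that idea, using the gradient norm of an uncertainty score (here the negative sequence log-probability) with respect to the input embeddings as each token's contribution. The model and its sequence_log_prob helper are hypothetical.

```python
import torch

def token_uncertainty_contributions(model, q_embeddings, a_tokens):
    # Track gradients on the input embeddings (one row per input token).
    q_embeddings.requires_grad_(True)
    # Use the negative sequence log-probability as a scalar uncertainty score.
    uncertainty = -model.sequence_log_prob(q_embeddings, a_tokens)
    uncertainty.backward()
    # Gradient norm per token = its (proxy) contribution to the uncertainty.
    return q_embeddings.grad.norm(dim=-1)
```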
Experiments (Datasets)
• IFTTT dataset (train-dev-test: 77,495 - 5,171 - 4,294)
• DJANGO dataset (train-dev-test: 16,000 - 1,000 - 1,805)
Experiments (Settings)
• Dropout Perturbation: dropout rate 0.1, run 30 times to compute the variance
• Gaussian Noise: standard deviation set to 0.05
• Probability of Input: KenLM is used as the language model
• Input Uncertainty: variance computed over the 10-best candidates
Experiments (Results)
• Model uncertainty is the most effective
• Data uncertainty has little impact → because the data is in-domain
Experiments (Results)
Experiments (Results)
• The model-uncertainty metrics are important
• For IFTTT, #UNK and Var are particularly important
Experiments (Results)
Experiments (Results)
• Evaluated by the overlap between the tokens identified by injecting noise and the tokens obtained by backpropagation (see the sketch below)
• Higher overlap than with attention
• At K=4, 80% of the tokens match
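A sketch of how such an overlap score could be computed, assuming each method produces a ranking of input token positions by their estimated contribution; this illustrates the evaluation idea rather than reproducing the authors' script.

```python
def overlap_at_k(noise_ranking, backprop_ranking, k=4):
    # Fraction of the top-K tokens flagged by noise injection that are also
    # among the top-K tokens identified by backpropagation (K=4 on the slide).
    top_noise = set(noise_ranking[:k])
    top_backprop = set(backprop_ranking[:k])
    return len(top_noise & top_backprop) / k
```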
Experiments (Results)
Conclusions
• Presents a confidence estimation model for neural semantic parsing
• Presents a method for interpreting the uncertainty at the input-token level
• Effectiveness is confirmed on the IFTTT and DJANGO datasets
• The proposed model is applicable to various tasks that use seq2seq
• It can be used for active learning in neural semantic parsing