Literature review: Confidence Modeling for Neural Semantic Parsing
Yumeto Inaoka
October 24, 2018
Presented at the literature-review meeting on 2018/10/24.
Transcript
Confidence Modeling for Neural Semantic Parsing
Literature review — Natural Language Processing Laboratory, Nagaoka University of Technology
Yumeto Inaoka
Paper: Confidence Modeling for Neural Semantic Parsing
Li Dong† and Chris Quirk‡ and Mirella Lapata†
†School of Informatics, University of Edinburgh; ‡Microsoft Research, Redmond
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 743–753, 2018.
Abstract
• Confidence modeling for neural semantic parsing (seq2seq)
• Identifies which parts of the input are responsible for the uncertainty
• Outperforms methods that rely on the posterior probability or on attention
Introduction
• Neural semantic parsers produce the expected results, yet operate as black boxes whose outputs are hard to interpret
• Estimating the model's confidence in its predictions could enable meaningful feedback
• The posterior probability p(y|x) is commonly used for confidence scoring → effective for linear models, but works poorly for neural models
Neural Semantic Parsing
• In: natural language; Out: logical form
• Seq2seq with LSTM
• Attention mechanism
• Trained to maximize the likelihood
• Decoding with beam search
Confidence Estimation
• Predict a confidence score s(q, a) ∈ (0, 1) from the input q and the predicted meaning representation a
• Judging confidence requires estimating "what the model does not know"
• The confidence is computed by a regression model over metrics derived from model uncertainty, data uncertainty, and input uncertainty
Model Uncertainty
• Uncertainty arising from the model's parameters and structure lowers confidence ← e.g. noise contained in the training data, stochastic learning algorithms
• Metrics built from dropout perturbation, Gaussian noise, and the posterior probability are used to predict this uncertainty
Dropout Perturbation
• Apply dropout at test time as well (at points i, ii, iii, iv in the figure)
• Sentence-level metric and token-level metric (equations shown in the slide)
• Perturb the parameters, collect the resulting probabilities, and compute their variance
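As a rough illustration of the dropout-perturbation idea, the sketch below (all names hypothetical, not the authors' code) leaves dropout-style noise on at test time, decodes several times, and takes the variance of each token's probability across runs:

```python
import random
import statistics

def dropout_uncertainty(stochastic_decode, num_runs=30):
    """Run the stochastic decoder several times (dropout left on at
    test time) and return the per-token probability variances plus
    their mean as a crude sentence-level score."""
    runs = [stochastic_decode() for _ in range(num_runs)]  # each run: per-token probs
    per_token = [statistics.pvariance(column) for column in zip(*runs)]
    return per_token, sum(per_token) / len(per_token)

# Toy stand-in for a seq2seq decoder with dropout: probabilities jitter
# from run to run because different units are dropped each time.
random.seed(0)
def toy_decoder():
    base = [0.9, 0.7, 0.4]  # hypothetical per-token probabilities
    return [min(1.0, max(0.0, p + random.gauss(0.0, 0.05))) for p in base]

per_token_var, sentence_score = dropout_uncertainty(toy_decoder)
```

A high variance means the prediction flips around under perturbation, i.e. the model is unsure of it.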
Gaussian Noise
• Add Gaussian noise to the vectors and compute the variance in the same way as for dropout ← dropout corresponds to Bernoulli noise, whereas this noise follows a Gaussian
• Two ways of injecting the noise (v is the original vector, g the Gaussian noise)
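The slide's two injection schemes did not survive extraction; a minimal sketch, assuming one additive variant (v + g) and one scaled variant (v + v⊙g), might look like:

```python
import random

def inject_gaussian_noise(v, sigma=0.05, scaled=False):
    """Perturb a vector v with Gaussian noise g ~ N(0, sigma^2).
    scaled=False: v + g;  scaled=True: v + v*g (elementwise)."""
    g = [random.gauss(0.0, sigma) for _ in v]
    if scaled:
        return [vi + vi * gi for vi, gi in zip(v, g)]
    return [vi + gi for vi, gi in zip(v, g)]

random.seed(0)
noisy = inject_gaussian_noise([1.0, -2.0, 0.5])
```

As with dropout, one decodes repeatedly with perturbed vectors and takes the variance of the resulting probabilities as the metric.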
Posterior Probability
• Use the posterior probability p(a | q) as the sentence-level metric
• Two token-level metrics (equations shown in the slide):
  • one focuses on the least certain token
  • one is the per-token perplexity
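The two token-level formulas were lost in extraction; assuming the standard definitions (minimum token probability, and perplexity over the predicted tokens), they reduce to:

```python
import math

def min_token_prob(token_probs):
    # Focus on the least confident predicted token.
    return min(token_probs)

def perplexity(token_probs):
    # Per-token perplexity: exp of the average negative log-probability.
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

probs = [0.9, 0.8, 0.5]  # hypothetical per-token probabilities
```

Both are cheap to compute from the decoder's output distribution, which is why posterior-based scores are the usual baseline.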
Data Uncertainty
• The coverage of the training data also affects the uncertainty
• Train a language model on the training data and use the language-model probability of the input as a metric
• Use the number of unknown tokens in the input as a metric
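A minimal sketch of the two data-uncertainty signals; the unigram "language model" here is a placeholder for a real model such as KenLM (which the experiments use), and all names are hypothetical:

```python
import math

def unk_count(tokens, train_vocab):
    # Number of input tokens never seen in the training data.
    return sum(1 for t in tokens if t not in train_vocab)

def lm_log_prob(tokens, unigram_probs, floor=1e-6):
    # Placeholder unigram LM score: sum of log unigram probabilities,
    # with a probability floor for unseen tokens.
    return sum(math.log(unigram_probs.get(t, floor)) for t in tokens)

vocab = {"turn", "on", "the", "light"}
n_unk = unk_count(["turn", "off", "the", "light"], vocab)
```

Inputs that the training distribution covers poorly get a low LM score and a high unknown-token count, both of which should push the confidence down.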
Input Uncertainty
• Even with a perfect model, ambiguous input gives rise to uncertainty (e.g. 9 o'clock → flight_time(9am) or flight_time(9pm))
• Use the variance of the probabilities of the top candidates
• Use the entropy ← approximated by sampling predictions a′
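The variance and entropy signals over the K-best candidates can be sketched as follows (renormalizing the beam scores into a distribution is our assumption, not a detail stated on the slide):

```python
import math

def candidate_variance(scores):
    # Variance of the top candidates' probabilities.
    m = sum(scores) / len(scores)
    return sum((s - m) ** 2 for s in scores) / len(scores)

def candidate_entropy(scores):
    # Renormalize the K-best scores into a distribution and take the
    # Shannon entropy; an ambiguous input spreads probability mass over
    # several parses and therefore scores a higher entropy.
    z = sum(scores)
    return -sum((s / z) * math.log(s / z) for s in scores if s > 0)
```

For example, a peaked beam like [0.9, 0.05, 0.05] yields a much lower entropy than an ambiguous one like [0.34, 0.33, 0.33].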
Confidence Scoring
• All of these metrics are used together to score the confidence
• The metrics are fed to a gradient boosting model for training; the output is wrapped with a logistic function so that it lies between 0 and 1
• For gradient boosting, this explanatory article is easy to follow ("Gradient Boosting と XGBoost": https://zaburo-ch.github.io/post/xgboost/)
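Putting it together: the paper feeds all the metrics to a gradient-boosted regressor; the sketch below substitutes a toy linear regressor (hypothetical weights) just to show the logistic wrapping into (0, 1):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def confidence(features, regressor):
    """Score a (question, parse) pair: run the trained regressor over
    the uncertainty metrics and squash the result into (0, 1)."""
    return logistic(regressor(features))

# Toy stand-in regressor (the paper trains gradient-boosted trees).
def toy_regressor(f):
    return 2.0 - 3.0 * f["dropout_var"] - 0.5 * f["perplexity"]

score = confidence({"dropout_var": 0.02, "perplexity": 1.4}, toy_regressor)
```

In practice the regressor would be a library model such as XGBoost trained against a gold confidence signal; the logistic wrapper only guarantees the (0, 1) range.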
Uncertainty Interpretation
• Identify which parts of the input contribute to the uncertainty → those inputs can then be handled as special cases
• Backpropagate from the prediction down to the input tokens → reveals each token's contribution to the uncertainty
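If the backward pass yields an uncertainty magnitude for each input token, turning those magnitudes into per-token contributions is just a normalization (a sketch of the idea, not the paper's exact formulation):

```python
def token_blame(grad_magnitudes):
    # Normalize the backpropagated uncertainty magnitudes so each input
    # token receives a share of the blame, summing to 1.
    z = sum(grad_magnitudes)
    return [g / z for g in grad_magnitudes]
```

Tokens with a large share are the ones most responsible for the uncertain prediction and can be flagged or handled specially.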
Experiments (Datasets)
• IFTTT dataset (train-dev-test: 77,495 - 5,171 - 4,294)
• DJANGO dataset (train-dev-test: 16,000 - 1,000 - 1,805)
Experiments (Settings)
• Dropout perturbation: dropout rate 0.1, 30 runs to compute the variance
• Gaussian noise: standard deviation set to 0.05
• Probability of input: KenLM used as the language model
• Input uncertainty: variance computed from the 10-best candidates
Experiments (Results)
• Model uncertainty is the most effective
• The effect of data uncertainty is small → because the data is in-domain
Experiments (Results)
• The model-uncertainty metrics are the most important
• In particular, #UNK and Var are important for IFTTT
Experiments (Results)
• Evaluated by the overlap between the token set identified by adding noise and the token set obtained by backpropagation
• Higher than the attention-based comparison
• 80% of the tokens agree at K=4
Conclusions
• Presented a confidence-estimation model for neural semantic parsing
• Presented a method for interpreting the uncertainty at the input-token level
• Confirmed its effectiveness on the IFTTT and DJANGO datasets
• The proposed model is applicable to the many other tasks that adopt seq2seq
• It can also be used for active learning in neural semantic parsing