Open-Retrieval Conversational Question Answering
Scatter Lab Inc.
July 24, 2020
Transcript
Open-Retrieval Conversational Question Answering (presenter: Research Scientist, Pingpong, Scatter Lab)
Overview Open-Retrieval Conversational Question Answering
Overview
• SIGIR 2020
• Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, Mohit Iyyer
• University of Massachusetts Amherst, Ant Financial, Alibaba Group
• The core contribution is extending ConvQA to an open-retrieval setting for conversational search
Overview
• Conversational search is one of the ultimate goals of information retrieval
• Recent work tackles conversational search as response ranking or as conversational question answering
• These approaches simply pick an answer from a given candidate set or extract a span from a given passage
• They therefore ignore the fundamental role that retrieval plays in conversational search
• This paper proposes the open-retrieval conversational question answering (ORConvQA) setting to address this problem
Overview
• To study ORConvQA, the authors build the OR-QuAC dataset and an end-to-end ORConvQA system that includes a transformer-based retriever, reranker, and reader
• Experiments on OR-QuAC demonstrate the importance of a learnable retriever
• They also show that adding history modeling to every system component (retriever, reranker, and reader) substantially improves the system
Dataset Open-Retrieval Conversational Question Answering
ORConvQA? Dataset
• As an extra step toward full conversational search systems, evidence must be retrieved from a large collection before the answer is picked
• OR-QuAC is assembled from three resources:
1. the QuAC dataset, which provides information-seeking dialogs between an information seeker and an information provider
2. the CANARD dataset, which rewrites QuAC questions to be context-independent
3. Wikipedia passages
Dataset
CANARD? Dataset
• QuAC dialogs are not self-contained, which stems from underspecified initial questions
• For example, the seeker was told to learn about Zhang Heng, a Chinese polymathic scientist, and opened with "What was his relationship with science and technology?"
• Such vague, ambiguous initial questions make the dialog hard to interpret, which becomes a problem in an open-retrieval setting where passages are searched from a public collection
• The authors fix this with the context-independent rewrites provided by the CANARD dataset, e.g. rewriting the question as "What was Zhang Heng's relationship with science and technology?"
CANARD? Dataset
• Only the first question of each dialog is replaced with its rewrite; the history dependencies inside the dialog are kept, so the dialog itself remains self-contained (a sketch of this step follows below)
• Since the QuAC test set is not public, the QuAC dev set is used to build the CANARD test set
• 10% of the QuAC train set is additionally held out as a dev set
• QuAC questions that do not appear in CANARD are discarded; the statistics of the resulting OR-QuAC dataset are as follows
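A minimal sketch of the rewriting step described above, assuming hypothetical dictionary schemas for the QuAC dialogs and the CANARD rewrites (the released OR-QuAC data uses its own format):

```python
def build_or_quac_dialog(quac_dialog, canard_rewrites):
    """Replace the underspecified first QuAC question with its
    context-independent CANARD rewrite; later turns stay unchanged."""
    turns = list(quac_dialog["turns"])
    first_id = turns[0]["question_id"]
    if first_id not in canard_rewrites:
        return None  # questions missing from CANARD are discarded
    turns[0] = dict(turns[0], question=canard_rewrites[first_id])
    return dict(quac_dialog, turns=turns)
```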
Model Open-Retrieval Conversational Question Answering
Model
• The system consists of three components: a Retriever, a Reranker, and a Reader
Passage Retriever
• Passage Encoder
• Question Encoder
• Retrieval Score
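A dual-encoder sketch of the retriever on this slide: separate question and passage encoders produce fixed-size vectors, and the retrieval score is their dot product. The projection dimension, the use of the first ([CLS]) token, and bert-base as the backbone are illustration assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModel

class DualEncoderRetriever(torch.nn.Module):
    """Question/passage encoders plus a dot-product retrieval score (sketch)."""

    def __init__(self, backbone="bert-base-uncased", dim=128):
        super().__init__()
        self.question_encoder = AutoModel.from_pretrained(backbone)
        self.passage_encoder = AutoModel.from_pretrained(backbone)
        hidden = self.question_encoder.config.hidden_size
        self.q_proj = torch.nn.Linear(hidden, dim)  # project [CLS] to a small dense vector
        self.p_proj = torch.nn.Linear(hidden, dim)

    def encode_question(self, **inputs):
        cls = self.question_encoder(**inputs).last_hidden_state[:, 0]
        return self.q_proj(cls)

    def encode_passage(self, **inputs):
        cls = self.passage_encoder(**inputs).last_hidden_state[:, 0]
        return self.p_proj(cls)

    @staticmethod
    def retrieval_score(q_vec, p_vec):
        # dot product between question and passage embeddings
        return (q_vec * p_vec).sum(dim=-1)
```

Because the two encoders are independent, the passage vectors can be precomputed and indexed once, which is what makes open retrieval over a large collection feasible.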
Model
• The top-K passages ranked by retrieval score are passed on to the reranker and the reader
Model
• The system consists of three components: a Retriever, a Reranker, and a Reader
Reranker & Reader Encoding
• Input
• Contextualized Representations
• Sequence representation
Reranker & Reader
• Sequence Representation
• Reranker (W_rr is a vector)
• Reader (span prediction)
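A sketch of the two heads that sit on the shared contextual encoder: the reranker applies the vector W_rr to the sequence representation to get one score per question-passage pair, and the reader predicts start/end logits over tokens for span extraction. The pooling choice and layer shapes here are assumptions.

```python
import torch

class RerankerReaderHeads(torch.nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.w_rr = torch.nn.Linear(hidden_size, 1, bias=False)  # reranker weight vector
        self.span_head = torch.nn.Linear(hidden_size, 2)          # reader start/end logits

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size) contextualized representations
        seq_rep = hidden_states[:, 0]                   # sequence representation (first token)
        rerank_score = self.w_rr(seq_rep).squeeze(-1)   # one scalar per question-passage pair
        start_logits, end_logits = self.span_head(hidden_states).split(1, dim=-1)
        return rerank_score, start_logits.squeeze(-1), end_logits.squeeze(-1)
```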
Training Open-Retrieval Conversational Question Answering
Retriever Pretraining
• Retrieval scores are computed for all passages in the batch
• The objective maximizes the probability of the gold passage for each question
• Pretraining loss
• After pretraining, the passage encoder is frozen and passage embeddings are computed offline; Faiss is used to fetch the retrieval results (see the sketch below)
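A sketch of the in-batch pretraining objective and the offline Faiss index, assuming the other gold passages in the batch act as negatives; q_vecs and p_vecs come from the dual encoder above, and passage_matrix / question_matrix are hypothetical precomputed numpy arrays.

```python
import faiss
import numpy as np
import torch
import torch.nn.functional as F

def retriever_pretraining_loss(q_vecs, p_vecs):
    """q_vecs, p_vecs: (batch, dim) question and gold-passage embeddings.
    Maximizes the probability of each question's own gold passage against
    the other gold passages in the batch."""
    scores = q_vecs @ p_vecs.T                                     # (batch, batch) retrieval scores
    labels = torch.arange(scores.size(0), device=scores.device)   # gold passage is on the diagonal
    return F.cross_entropy(scores, labels)

# After pretraining, passage embeddings are computed once offline and indexed
# with Faiss; top-K passages are then fetched from the index.
def build_index(passage_matrix, dim=128):
    index = faiss.IndexFlatIP(dim)                    # exact inner-product (dot-product) search
    index.add(passage_matrix.astype(np.float32))      # (num_passages, dim)
    return index

def retrieve_top_k(index, question_matrix, k=5):
    scores, passage_ids = index.search(question_matrix.astype(np.float32), k)
    return scores, passage_ids
```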
Concurrent Learning
• Retriever loss
• Reranker loss
• Reader loss
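A sketch of the joint objective during concurrent learning, assuming each retrieval-side loss is a cross-entropy over the top-K retrieved passages and the reader uses the usual start/end span loss; summing the three with equal weights is a simplification.

```python
import torch.nn.functional as F

def concurrent_loss(retriever_scores, reranker_scores, gold_passage_idx,
                    start_logits, end_logits, start_positions, end_positions):
    # retriever_scores / reranker_scores: (batch, K) scores over the top-K passages
    retriever_loss = F.cross_entropy(retriever_scores, gold_passage_idx)
    reranker_loss = F.cross_entropy(reranker_scores, gold_passage_idx)
    # Reader: standard extractive-QA span loss on the gold passage.
    reader_loss = (F.cross_entropy(start_logits, start_positions)
                   + F.cross_entropy(end_logits, end_positions)) / 2
    return retriever_loss + reranker_loss + reader_loss
```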
Inference
• All top-K retrieved passages are run through inference, and a span is predicted for each passage
• The span whose combined retriever, reranker, and reader score is highest is returned as the final answer
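A sketch of the inference-time combination: the reader predicts a span for every top-K passage, and the candidate with the largest sum of retriever, reranker, and reader scores becomes the final answer (the candidate dictionary layout is hypothetical).

```python
def pick_final_answer(candidates):
    """candidates: one entry per top-K passage, each holding the predicted
    span and the three component scores."""
    best = max(
        candidates,
        key=lambda c: c["retriever_score"] + c["reranker_score"] + c["reader_score"],
    )
    return best["span"]

# Example with made-up numbers:
# pick_final_answer([
#     {"span": "in 1887", "retriever_score": 1.2, "reranker_score": 0.4, "reader_score": 2.1},
#     {"span": "Zhang Heng", "retriever_score": 0.9, "reranker_score": 1.5, "reader_score": 2.3},
# ])  # -> "Zhang Heng"
```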
RESULTS Open-Retrieval Conversational Question Answering
Competing Methods
• DrQA: TF-IDF retriever + RNN-based reader
• BERTserini: BM25 retriever + BERT reader
• ORConvQA without history: our method with history window size 0
• ORConvQA: our method
• Evaluation metrics: word-level F1, human equivalence score (HEQ), Mean Reciprocal Rank (MRR), Recall
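For reference, a minimal sketch of the two retrieval-side metrics listed above, MRR and Recall@K, where ranked_ids holds each question's ranked passage ids and gold_ids the corresponding gold passage id (hypothetical inputs):

```python
def mean_reciprocal_rank(ranked_ids, gold_ids):
    total = 0.0
    for ranked, gold in zip(ranked_ids, gold_ids):
        # rank of the gold passage in this question's ranked list, if present
        rank = next((i + 1 for i, pid in enumerate(ranked) if pid == gold), None)
        total += 1.0 / rank if rank is not None else 0.0
    return total / len(gold_ids)

def recall_at_k(ranked_ids, gold_ids, k=5):
    hits = sum(gold in ranked[:k] for ranked, gold in zip(ranked_ids, gold_ids))
    return hits / len(gold_ids)
```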
DrQA < BERTserini < Ours w/o hist < Ours RESULTS
Ablation study RESULTS
History window size adjustment RESULTS
Thank you ✌ If you have further questions or anything you are curious about, please feel free to reach out through the contacts below! (Research Scientist, Pingpong, Scatter Lab)
[email protected]
LinkedIn: @pingpong