et al. 2014)
  ◦ Transformer (Vaswani et al. 2017)
• Datasets
  ◦ Twitter
  ◦ OpenSubtitles
[Figure: encoder-decoder diagram. The encoder reads the input "Where are you going?" and the decoder predicts the reply "I have to walk the dog." (labels: input, prediction, machine learning)]
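The encoder-decoder structure in the diagram can be illustrated with a minimal untrained forward pass. This is a sketch only: the toy vocabulary, hidden size, and random weights below are invented, and a real Seq2Seq model would use learned LSTM/Transformer parameters trained with a softmax loss.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<bos>", "<eos>", "where", "are", "you", "going", "i", "have",
         "to", "walk", "the", "dog", "?", "."]
IDX = {w: i for i, w in enumerate(VOCAB)}
V, H = len(VOCAB), 8  # vocabulary size, hidden size

# Randomly initialised parameters (a trained model would learn these).
E = rng.normal(0, 0.1, (V, H))      # token embeddings
W_enc = rng.normal(0, 0.1, (H, H))  # encoder recurrence
W_dec = rng.normal(0, 0.1, (H, H))  # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))  # hidden state -> vocabulary logits

def encode(tokens):
    """Fold the whole input into one context vector (vanilla RNN)."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(E[IDX[t]] + W_enc @ h)
    return h

def decode(h, max_len=6):
    """Greedily emit tokens from the context until <eos> or max_len."""
    out, tok = [], "<bos>"
    for _ in range(max_len):
        h = np.tanh(E[IDX[tok]] + W_dec @ h)
        tok = VOCAB[int(np.argmax(h @ W_out))]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

reply = decode(encode(["where", "are", "you", "going", "?"]))
print(reply)  # weights are untrained, so the tokens are arbitrary
```

The point is the data flow: the input sentence is compressed into a single vector, and the prediction is generated token by token conditioned on that vector.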
Application to dialogue: Vinyals and Le 2015
• Memory Network
  ◦ End-to-End: Sukhbaatar et al. 2015
  ◦ Application to dialogue: Ghazvininejad et al. 2017
• Transformer
  ◦ Attention Is All You Need: Vaswani et al. 2017
  ◦ Example application to dialogue: Diwan et al. 2018
  ▪ Context and consistency
• An open issue is the weak correlation between automatic and human evaluation (Liu et al. 2016)

Example from Figure 2 of Liu et al. 2016, where the automatic score is low but the human rating is high:
A: dearest! question. how many thousands of people can panaad[*1] occupy?
B: @user[*2] panaad has <number> k seat capacity while rizal[*1] has <number> k thats why they choose rizal i think.
[Ground Truth] A: now i know about the siting capasity. thanks for the info @user great evening.
[Proposed] A: @user makes sense. thanks!
(*1: stadium names, *2: presumably an individual's account)
Correlation between BLEU and human evaluation
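The weak-correlation point can be made concrete with a hand-rolled sentence-level BLEU (a simplified BLEU-2 with brevity penalty, not the exact script Liu et al. used): the proposed reply above is perfectly adequate, yet scores near zero against the single ground-truth reference because it shares almost no n-grams with it.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, ref, max_n=2):
    """Simplified sentence-level BLEU with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        overlap = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        precisions.append(overlap / max(sum(h.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = ("now i know about the siting capasity . thanks for the info "
       "@user great evening .").split()
hyp = "makes sense . thanks !".split()

score = bleu(hyp, ref)
print(f"BLEU-2 = {score:.3f}")  # ≈ 0.035: near zero despite an adequate reply
```

Word-overlap metrics like this cannot tell a bad response from a good one that simply phrases things differently, which is exactly the failure mode Liu et al. document.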
Networks’, L. Dong et al., ACL 2015
‘Question Answering on Freebase via Relation Extraction and Textual Evidence’, K. Xu et al., ACL 2016
‘Bidirectional Attention Flow for Machine Comprehension’, M. Seo et al., ICLR 2017
‘Hybrid Question Answering over Knowledge Base and Free Text’, K. Xu et al., COLING 2016
‘Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering’, S. Wang et al., ICLR 2018
‘CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge’, A. Talmor et al., NAACL 2019
you have pets too?’, S. Zhang et al., ACL 2018
‘Wizard of Wikipedia: Knowledge-Powered Conversational Agents’, E. Dinan et al., ICLR 2019
‘A Persona-Based Neural Conversation Model’, J. Li et al., ACL 2016
‘Flexible End-to-End Dialogue System for Knowledge Grounded Conversation’, W. Zhu et al., arXiv:1709.04264
‘A Knowledge-Grounded Neural Conversation Model’, M. Ghazvininejad et al., AAAI 2018
‘Commonsense Knowledge Aware Conversation Generation with Graph Attention’, H. Zhou et al., IJCAI 2018
‘Knowledge Aware Conversation Generation with Explainable Reasoning on Augmented Graphs’, Z. Liu et al., arXiv:1903.10245
‘Learning to Select Knowledge for Response Generation in Dialog Systems’, R. Lian et al., arXiv:1902.04911
‘AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine’, M. Qiu et al., ACL 2017
‘Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems’, A. Madotto et al., ACL 2018
‘Disentangling Language and Knowledge in Task-Oriented Dialogs’, D. Raghu et al., NAACL 2019
Decompose the question so that each part expresses a single, simple relation.
• On WebQuestions, improves F1 by 0.5 points over QA on Freebase (53.8).
Kun Xu et al., COLING 2016 (presenter: Sakata)
[Figure: pipeline diagram. Split the question into triplet-expressible parts; disambiguate entities (Coffee & TV as a song title); map relations ("wrote" → "associatedBand" in the KB, "is the front man of"); paraphrase and search within the text.]
Relation Extraction: extract the relations in the question from DBpedia.
  ◦ Textual Relation Extraction: extract the relations in the question from Wikipedia.
• The two are integrated by solving a linear program that maximises the validity of each extraction.
Kun Xu et al., COLING 2016 (presenter: Sakata)
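A toy sketch of that integration step, under stated assumptions: the candidate entities, relations, scores, and the compatibility rule below are all invented for illustration, and exhaustive enumeration stands in for the paper's linear-program solver (which handles far richer features and constraints).

```python
from itertools import product

# Hypothetical candidates for "Who wrote the song Coffee & TV?"
# (illustrative confidence scores, not the paper's).
entity_cands = {                 # entity-linking candidates
    "Coffee_&_TV_(song)": 0.9,
    "Coffee_&_TV_(film)": 0.4,
}
relation_cands = {               # (relation, source) from KB and text extraction
    ("associatedBand", "kb"): 0.7,
    ("is the front man of", "text"): 0.6,
    ("director", "kb"): 0.3,
}

def compatible(entity, relation):
    """Toy joint constraint: a song entity cannot take 'director'."""
    return not (entity.endswith("(song)") and relation == "director")

# Pick the 0/1 assignment (one entity, one relation) maximising total
# validity; for two tiny candidate sets, brute force finds the LP optimum.
best = max(
    ((e, r) for e, r in product(entity_cands, relation_cands)
     if compatible(e, r[0])),
    key=lambda er: entity_cands[er[0]] + relation_cands[er[1]],
)
print(best)  # ('Coffee_&_TV_(song)', ('associatedBand', 'kb'))
```

The point of doing this jointly rather than greedily is that the compatibility constraints let a confident relation choice rescue an ambiguous entity choice, and vice versa.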
Suppose speaker A, when asked "Where do you live?", answered "In the UK." Then for speaker B, who also lives in the UK, the model can answer "In the UK." to "Where do you live?" even without any response data from B.
• To avoid bland responses such as "I don't know", p(M (input) | R (response)) is also taken into account.
Jiwei Li et al., ACL 2016 (presenter: Sakata)
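The objective alluded to above is the maximum mutual information (MMI) criterion from Li et al.: instead of picking the response that only maximises $p(R \mid M)$, the reverse likelihood $p(M \mid R)$ is weighed in, which penalises generic replies like "I don't know" that are plausible after almost any input:

```latex
\hat{R} = \arg\max_{R}\,\bigl\{\, (1-\lambda)\,\log p(R \mid M) \;+\; \lambda\,\log p(M \mid R) \,\bigr\}
```

A generic response has high $p(R \mid M)$ for every $M$ but tells you almost nothing about which input produced it, so its $p(M \mid R)$ term is low and it gets ranked down.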
California Academy of Sciences. Make sure you catch the show at the Planetarium. Tickets are usually limited.
・Example Twitter conversations
・Bold italics mark utterances that introduce new information
Learning from dialogue alone makes it hard to generate meaningful replies
→ we want to inject external knowledge into response generation!
User: Going to Kusakabe tonight.
> Neural: Have a great time!
> Human: You'll love it! Try omakase, the best in town.
The model manages to work "bracelets", related to "jewelry", into the generated reply! (admittedly a case where it worked well)
A: Obsessed with [jewelry company] :-*
B: oh my gosh obsessed with their bracelets and the meaning behind them!
Example generated by the model in this paper
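The knowledge-injection idea can be illustrated with a minimal retrieval step: score external facts by word overlap with the input utterance and keep the most relevant ones for the generator to condition on. This is a sketch only: the facts below are invented, and the actual model retrieves facts keyed on entities in the input and attends over them with a memory network.

```python
# Rank candidate external facts by lexical overlap with the input utterance.
FACTS = [
    "Kusakabe is a sushi restaurant in San Francisco known for omakase.",
    "The California Academy of Sciences has a planetarium show.",
    "Amsterdam is the capital of the Netherlands.",
]

def retrieve(utterance, facts, k=1):
    """Return the k facts sharing the most words with the utterance."""
    words = set(utterance.lower().strip(".!?").split())
    return sorted(
        facts,
        key=lambda f: len(words & set(f.lower().rstrip(".").split())),
        reverse=True,
    )[:k]

print(retrieve("Going to Kusakabe tonight.", FACTS))
# -> the Kusakabe fact, which grounds a reply like "Try omakase"
```

Conditioning generation on the retrieved fact is what lets the model mention "omakase" or "bracelets" instead of a safe but empty "Have a great time!".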
... is added.
• Experiments on the Persona-Chat and Wizard-of-Wikipedia tasks show better results across the board than Seq2Seq and MemNet (Ghazvininejad et al.).
R. Lian et al. (Baidu), arXiv:1902.04911 (presenter: Nakanishi)