Sentence Simplification with Deep Reinforcement Learning
Xingxing Zhang and Mirella Lapata
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pp. 584–594
Presented by Takumi Maruyama, Nagaoka University of Technology
Abstract
• Sentence simplification aims to make sentences easier to read and understand
• This paper proposes an encoder-decoder model coupled with a deep reinforcement learning framework for text simplification
• In experiments, the proposed model outperforms competitive simplification systems
Reinforcement Learning for Sentence Simplification
• This paper proposes the following two models:
  • Deep reinforcement learning sentence simplification model (DRESS)
  • DRESS combined with a lexical simplification model (DRESS-LS)
DRESS-LS
• Lexical simplification is the task of replacing complex words with simpler alternatives
• This paper uses a pre-trained encoder-decoder model for lexical simplification
• P(y_t | y_{1:t−1}, X) = (1 − η) · P_RL(y_t | y_{1:t−1}, X) + η · P_LS(y_t | X, h_t), where η ∈ [0, 1]
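The interpolation above can be sketched in a few lines: the next-token distribution is a weighted mix of the RL-trained decoder's distribution (P_RL) and the pre-trained lexical-simplification distribution (P_LS), with mixing weight η. This is a minimal illustration, not the authors' code; the function name and the toy vocabulary are hypothetical.

```python
def dress_ls_prob(p_rl, p_ls, eta):
    """Mix two next-token distributions over the same vocabulary.

    p_rl: distribution from the RL-trained decoder (DRESS)
    p_ls: distribution from the pre-trained lexical simplification model
    eta:  mixing weight in [0, 1]; eta = 0 recovers DRESS alone
    """
    assert 0.0 <= eta <= 1.0
    assert len(p_rl) == len(p_ls)
    return [(1.0 - eta) * a + eta * b for a, b in zip(p_rl, p_ls)]


# Toy 3-word vocabulary: the LS model shifts mass toward a simpler word.
p_rl = [0.7, 0.2, 0.1]
p_ls = [0.1, 0.8, 0.1]
p = dress_ls_prob(p_rl, p_ls, eta=0.5)
# A convex combination of two distributions is still a distribution.
assert abs(sum(p) - 1.0) < 1e-9
```

Because the mix is convex, the result is always a valid probability distribution, and η can be tuned on development data to trade off fluency against lexical simplicity.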
Experimental Setup
• Comparison systems
  • PBMT-R: a monolingual phrase-based machine translation model with a reranking post-processing step
  • Hybrid: a hybrid semantic-based model that combines a simplification model with a monolingual machine translation model
  • SBMT-SARI: a syntax-based translation model trained on PPDB and tuned with SARI