
Paper Introduction: Recurrent Neural Network based Language Model

Van Hai
May 23, 2017


  1. Information: Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In 11th Annual Conference of the International Speech Communication Association, pp. 1045–1048, 2010.
  2. 1. Introduction • Statistical language modeling: predict the next word in textual data • Language-specific assumptions: sentences should be described by parse trees; morphology of words, syntax, and semantics matter • There has been significant progress in language modeling • Progress is measured by how well models predict sequential data (a perplexity sketch follows below)
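The usual way to quantify "how well a model predicts sequential data" is per-word perplexity on held-out text. Below is a minimal sketch, assuming the model is exposed as a conditional probability function p_next (a hypothetical name, not from the paper):

```python
import math

# p_next(history, word) -> probability of `word` given the preceding words.
# This is a placeholder for any language model (n-gram, RNN, ...).
def perplexity(words, p_next):
    log_prob = 0.0
    for i, w in enumerate(words):
        log_prob += math.log(p_next(words[:i], w))
    return math.exp(-log_prob / len(words))

# A uniform model over a 10-word vocabulary gives perplexity 10.
print(perplexity(["a"] * 5, lambda history, word: 1.0 / 10))
```

Lower perplexity means the model assigns higher probability to the held-out sequence.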
  3. 2.1 Simple Recurrent Neural Network • The network is trained over several epochs • Weights are initialized to small values • The network is trained with the standard backpropagation algorithm and stochastic gradient descent • Error vector at the output layer: error(t) = desired(t) - y(t), where desired(t) is the 1-of-V encoding of the correct next word (see the sketch below)
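As a rough illustration of this training loop, here is a minimal NumPy sketch of one stochastic-gradient step for a simple (Elman-style) recurrent network with sigmoid hidden units and a softmax output, as in the paper. The vocabulary size, learning rate, and variable names (U, W, alpha) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 1000, 90          # vocabulary size (illustrative), hidden layer size
alpha = 0.1              # SGD learning rate (illustrative)

U = rng.uniform(-0.1, 0.1, (H, V + H))   # input + recurrent weights, small initial values
W = rng.uniform(-0.1, 0.1, (V, H))       # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(cur_idx, next_idx, s_prev):
    """One SGD step on a single (current word, next word) pair."""
    global U, W
    w = np.zeros(V); w[cur_idx] = 1.0       # 1-of-V encoding of the current word
    x = np.concatenate([w, s_prev])         # input = current word + previous hidden state
    s = sigmoid(U @ x)                      # hidden (context) layer
    y = softmax(W @ s)                      # distribution over the next word

    desired = np.zeros(V); desired[next_idx] = 1.0
    err_out = desired - y                   # error(t) = desired(t) - y(t)
    err_hid = (W.T @ err_out) * s * (1.0 - s)   # backpropagate through the sigmoid

    W += alpha * np.outer(err_out, s)       # standard backpropagation updates
    U += alpha * np.outer(err_hid, x)
    return s                                # s(t) becomes s(t-1) for the next step

# Usage: iterate over consecutive word pairs, carrying the hidden state forward.
state = np.zeros(H)
for cur, nxt in [(3, 7), (7, 42)]:
    state = train_step(cur, nxt, state)
```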
  4. 3. Experiments • Wall Street Journal (WSJ) experiments • NIST Rich Transcription Evaluation 2005 (RT05) experiments
  5. 3.1 WSJ Experiments • Training corpus: 37M words from the NYT section of English Gigaword • Models trained on 6.4M words (300K sentences) • Perplexity evaluated on 230K words • Baseline: Kneser-Ney smoothed 5-gram, denoted KN5 • RNN 90/2: hidden layer size 90, threshold of 2 for merging infrequent words into a rare token (see the sketch below)
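The "threshold of 2 for merging infrequent words" means that words seen fewer than 2 times in the training data are collapsed into a single special token before training. A small sketch of that preprocessing, assuming a hypothetical token name "<rare>":

```python
from collections import Counter

def build_vocab(tokens, threshold=2, rare="<rare>"):
    # Keep only words that occur at least `threshold` times in the training
    # data; everything else is mapped onto the single rare token.
    counts = Counter(tokens)
    vocab = {w for w, c in counts.items() if c >= threshold}
    vocab.add(rare)
    return vocab

def map_rare(tokens, vocab, rare="<rare>"):
    return [w if w in vocab else rare for w in tokens]

train = "the cat sat on the mat the cat slept".split()
vocab = build_vocab(train, threshold=2)
print(map_rare(train, vocab))
# ['the', 'cat', '<rare>', '<rare>', 'the', '<rare>', 'the', 'cat', '<rare>']
```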
  6. Conclusion and future work • On WSJ, the WER reduction is around 18% when the models are trained on the same data • Around 12% even when the backoff model is trained on 5 times more data than the RNN model • On NIST RT05, the RNN model can outperform big backoff models