Slide 1

[Title slide, dated 2016/08/20; remaining text garbled in extraction.]

Slide 2

[Outline: three bullet points, each beginning with "RNN"; remaining bullet text garbled.]

Slide 3


Slide 4

We can get an idea of the quality of the learned feature vectors by displaying them in a 2-D map.
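One common way to display feature vectors in a 2-D map is to project them onto their top two principal components (t-SNE is another popular choice). This PCA sketch is a generic illustration, not necessarily the method used in the slides; the data and names are made up.

```python
import numpy as np

def pca_2d(vectors):
    """Project row vectors onto their top two principal components."""
    X = vectors - vectors.mean(axis=0)         # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                        # coordinates in the 2-D map

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 50))              # 10 illustrative 50-d feature vectors
coords = pca_2d(feats)
print(coords.shape)                            # each vector gets an (x, y) position
```

Each row of `coords` can then be scattered on a plot, with nearby points indicating similar feature vectors.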

Slide 5

[Mentions Bag of Words and N-gram representations; continues "We can get an idea of the quality…"; remaining text garbled.]

Slide 6

[Outline slide repeated; bullet text garbled.]

Slide 7

[Outline slide repeated; bullet text garbled.]

Slide 8

RNN [section title slide]

Slide 9

RNN: [figure showing input x_1 and initial state z_0.]

Slide 10

RNN: [figure showing state z_1 producing output y_1.]

Slide 11

RNN: [figure showing input x_2 and state z_1.]

Slide 12

RNN: [figure showing state z_2 producing output y_2.]

Slide 13

[Outline slide repeated; bullet text garbled.]

Slide 14

RNN: [figure of the general step: input x_t and previous state z_{t-1} produce output y_t.]

Slide 15

RNN: [figure of the general step: input x_t and previous state z_{t-1} produce output y_t.]
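The unrolled computation on the preceding slides (x_t and z_{t-1} combining into a new state that emits y_t) can be sketched as a minimal Elman-style RNN. The recurrence z_t = tanh(W_in x_t + W_rec z_{t-1}), y_t = softmax(W_out z_t) and the weight names are standard conventions assumed here, not taken from the slides.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rnn_forward(xs, W_in, W_rec, W_out, z0):
    """Run the RNN over a sequence xs = [x_1, ..., x_T]; return states and outputs."""
    zs, ys = [z0], []
    for x in xs:
        z = np.tanh(W_in @ x + W_rec @ zs[-1])  # new state z_t from x_t and z_{t-1}
        y = softmax(W_out @ z)                  # output y_t from z_t
        zs.append(z)
        ys.append(y)
    return zs, ys

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 4, 2, 5
xs = [rng.normal(size=n_in) for _ in range(T)]
zs, ys = rnn_forward(xs,
                     rng.normal(size=(n_hid, n_in)),
                     rng.normal(size=(n_hid, n_hid)),
                     rng.normal(size=(n_out, n_hid)),
                     np.zeros(n_hid))
print(len(ys), ys[0].sum())  # one output per step, each a probability distribution
```

Note that the same three weight matrices are reused at every time step; only the state z_t changes as the sequence is consumed.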

Slide 16

[Outline slide repeated; bullet text garbled.]

Slide 17

RNN: Backpropagation Through Time (BPTT)

Slide 18

BPTT: [setup: for inputs x_t with targets d_t and outputs y_1, …, y_t, define output-layer deltas δ^k_{out,t} and hidden-layer deltas δ^j_t; remaining text garbled.]

Slide 19

BPTT: [figure of the output deltas δ^k_{out,1}, δ^k_{out,2}, δ^k_{out,3}, …, δ^k_{out,t}, one per time step.]

Slide 20

BPTT: [figure of the hidden delta δ^j_t propagated backward from step t to step t−1.]
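The backward delta recursion sketched on these slides can be made concrete for a scalar RNN z_t = tanh(u·x_t + w·z_{t-1}) with squared-error loss L = Σ_t (z_t − d_t)²/2. This reduction and the names u, w are illustrative assumptions, not the slides' notation; the analytic BPTT gradient is checked against a numerical one.

```python
import numpy as np

def forward(xs, u, w, z0=0.0):
    zs = [z0]
    for x in xs:
        zs.append(np.tanh(u * x + w * zs[-1]))
    return zs  # zs[t] is the state after input t (zs[0] = z0)

def bptt_grads(xs, ds, u, w):
    """BPTT: delta_t = ((z_t - d_t) + w * delta_{t+1}) * (1 - z_t^2), run backward."""
    zs = forward(xs, u, w)
    T = len(xs)
    dL_du = dL_dw = 0.0
    delta_next = 0.0
    for t in range(T, 0, -1):  # backward through time
        delta = ((zs[t] - ds[t - 1]) + w * delta_next) * (1 - zs[t] ** 2)
        dL_du += delta * xs[t - 1]   # accumulate gradient for the input weight
        dL_dw += delta * zs[t - 1]   # accumulate gradient for the recurrent weight
        delta_next = delta
    return dL_du, dL_dw

# Check the analytic gradient against a central finite difference in w.
xs, ds, u, w = [0.5, -1.0, 0.3], [0.1, 0.2, -0.1], 0.7, 0.4
def loss(u_, w_):
    zs = forward(xs, u_, w_)
    return sum(0.5 * (z - d) ** 2 for z, d in zip(zs[1:], ds))
gu, gw = bptt_grads(xs, ds, u, w)
eps = 1e-6
num_gw = (loss(u, w + eps) - loss(u, w - eps)) / (2 * eps)
print(abs(gw - num_gw) < 1e-6)  # BPTT gradient matches numerical gradient
```

The key point is that δ at step t collects both the local error (z_t − d_t) and the error flowing back through the recurrent weight from step t+1, which is exactly the backward arrows in the figures.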

Slide 21

BPTT: [figure slide; no recoverable text.]

Slide 22

[Outline slide repeated; bullet text garbled.]

Slide 23

[Outline slide repeated; bullet text garbled.]

Slide 24

RNN: [text garbled beyond recovery.]

Slide 25

LSTM (Long Short-Term Memory): [an extension of the RNN; remaining text garbled.]
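A single step of a standard LSTM cell can be sketched as below. This is the textbook formulation (input gate i, forget gate f, output gate o, candidate g; c' = f·c + i·g, h' = o·tanh(c')), assumed here since the slide text is unrecoverable; the weight packing into one matrix W is an implementation choice of this sketch.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W: (4*n_hid, n_in + n_hid), b: (4*n_hid,)."""
    n = h.shape[0]
    a = W @ np.concatenate([x, h]) + b
    i, f, o = sigmoid(a[:n]), sigmoid(a[n:2*n]), sigmoid(a[2*n:3*n])  # gates in (0, 1)
    g = np.tanh(a[3*n:])           # candidate values for the cell state
    c_new = f * c + i * g          # cell state: gated mix of old memory and new input
    h_new = o * np.tanh(c_new)     # hidden state exposed to the next layer
    return h_new, c_new

# Run a few steps with small random weights.
rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in [rng.normal(size=n_in) for _ in range(5)]:
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)
```

Because the cell state c is updated additively through the forget gate rather than squashed through a nonlinearity at every step, gradients survive over far longer spans than in a plain RNN.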

Slide 26

LSTM: [figure slide; no recoverable text.]

Slide 27

LSTM: [figure slide; no recoverable text.]

Slide 28

LSTM: [figure slide; no recoverable text.]

Slide 29

LSTM: [figure slide; no recoverable text.]

Slide 30

[Outline slide repeated; bullet text garbled.]

Slide 31

RNN: [text garbled beyond recovery.]

Slide 32

HMM (Hidden Markov Model): [remaining text garbled.]

Slide 33

CTC (Connectionist Temporal Classification): [relates the HMM approach to RNNs; remaining text garbled.]

Slide 34

CTC: input sequence X = x_1, …, x_t and label sequence l = l_1, …, l_|l|; the model outputs p(l | X).

Slide 35

CTC example: for l = 'ab' and t = 6, frame-level paths that collapse to 'ab' include (a, b, _, _, _, _), (a, _, _, b, _, _), (_, _, _, a, _, b), … ('_' denotes the CTC blank symbol, lost in extraction).

Slide 36

CTC: p(l | X) sums over all frame-level paths that collapse to l, e.g. for l1 = (a, b, _, _, _, _), l2 = (a, a, _, b, _, _), l3 = (_, _, _, a, _, b):
p(l1 | X) = p(a)·p(b)·p(_)·p(_)·p(_)·p(_)
p(l2 | X) = p(a)·p(a)·p(_)·p(b)·p(_)·p(_)
p(l3 | X) = p(_)·p(_)·p(_)·p(a)·p(_)·p(b)
('_' denotes the CTC blank symbol, lost in extraction.)
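The sum over paths can be checked by brute-force enumeration for tiny sequence lengths (real CTC implementations use the forward-backward algorithm; the uniform per-frame distribution below is an illustrative assumption, not data from the slides).

```python
import itertools

# p(l | X) sums the probabilities of every frame-level path that collapses to l:
# first merge repeated symbols, then drop blanks ('_').

def collapse(path):
    merged = [s for s, _ in itertools.groupby(path)]  # merge repeats: a,a,b -> a,b
    return ''.join(s for s in merged if s != '_')     # then drop blanks

def ctc_prob(label, frame_probs, alphabet):
    """Sum p(path) over all paths of length T that collapse to `label`."""
    total = 0.0
    for path in itertools.product(alphabet, repeat=len(frame_probs)):
        p = 1.0
        for t, sym in enumerate(path):
            p *= frame_probs[t][sym]                  # independent per-frame model
        if collapse(path) == label:
            total += p
    return total

# T = 4 frames, uniform per-frame distribution over {a, b, _} for simplicity.
alphabet = ['a', 'b', '_']
frame_probs = [{s: 1 / 3 for s in alphabet} for _ in range(4)]
p_ab = ctc_prob('ab', frame_probs, alphabet)
print(p_ab)  # total probability mass of all paths collapsing to 'ab'
```

Note that collapsing merges repeats before removing blanks, so (a, _, a, b) yields 'aab', not 'ab'; the blank is what allows repeated output symbols to be expressed.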

Slide 37

References (entries whose titles were garbled in extraction are marked as unrecoverable):
• [Author and title unrecoverable] (2015)
• [Title unrecoverable] (2015): http://www.slideshare.net/shotarosano5/chapter7-50542830, accessed 2016-08-12
• Recurrent Neural Networks (2014): http://www.slideshare.net/beam2d/pfi-seminar-20141030rnn?qid=9e5894c7-f162-4da3-b082-a1e4963689e8&v=&b=&from_search=17, accessed 2016-08-12
• [Author and title unrecoverable] (2013)
• [LSTM article, title unrecoverable] (2016): http://qiita.com/t_Signull/items/21b82be280b46f467d1b, accessed 2016-08-12
• A. Graves (2008): Supervised Sequence Labelling with Recurrent Neural Networks, PhD thesis, Technische Universität München, https://www.cs.toronto.edu/~graves/preprint.pdf