Slide 1

Slide 1 text

Hyena Hierarchy: Towards Larger Convolutional Language Models
Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré (ICML 2023)
Presenter: Hayato Tsukagoshi, D1, Graduate School of Informatics, Nagoya University, Japan

Slide 2

Slide 2 text

Overview
• Proposes Hyena, an architecture based on state space models (SSMs)
• Lower computational cost than Attention: O(N log N)
• The first Attention-free model to reach performance on par with or better than Attention
• An unholy fusion of state space models and the Linear Transformer

Slide 3

Slide 3 text

Outline / Disclaimer
• State space models
  • Concept
  • Representation as a convolution
• Hyena
  • Prior work
  • Intuition
  • Experiments
Disclaimer
• Figures in these slides are quoted from the papers referenced on each slide
• The notation may differ from the equations in the original papers

Slide 4

Slide 4 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 5

Slide 5 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1})
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 6

Slide 6 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1}, output y_{i-1})
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 7

Slide 7 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1}, output y_{i-1}, state s_i)
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 8

Slide 8 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1}, output y_{i-1}, input x_i, state s_i)
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 9

Slide 9 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1}, output y_{i-1}, input x_i, state s_i, output y_i)
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i

Slide 10

Slide 10 text

State space models (SSMs)
• A model that produces the output and the next state from the input and the current state
• An RNN-like model
(Diagram: input x_{i-1}, state s_{i-1}, output y_{i-1}, input x_i, state s_i, output y_i, state s_{i+1})
s_{i+1} = A s_i + B x_i
y_i = C s_i + D x_i
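To make the recurrence concrete, here is a minimal sketch in Python/NumPy. The scalar values of A, B, C, D and the input sequence are arbitrary toy choices for illustration, not values from the paper.

```python
import numpy as np

# Toy scalar SSM parameters (assumed values, for illustration only).
A, B, C, D = 0.9, 1.0, 0.5, 0.2
x = np.array([1.0, 2.0, 0.5, -1.0])   # input sequence x_0 .. x_3

s = 0.0                               # initial state s_0
ys = []
for x_i in x:
    ys.append(C * s + D * x_i)        # y_i = C s_i + D x_i
    s = A * s + B * x_i               # s_{i+1} = A s_i + B x_i
print(ys)
```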

Slide 11

Slide 11 text

State space models: unrolling the computation
y_i = C s_i + D x_i
s_{i+1} = A s_i + B x_i

Slide 12

Slide 12 text

State space models: unrolling the computation
y_i = C s_i + D x_i
s_{i+1} = A s_i + B x_i
y_i = C (A s_{i-1} + B x_{i-1}) + D x_i

Slide 13

Slide 13 text

State space models: unrolling the computation
y_i = C s_i + D x_i
s_{i+1} = A s_i + B x_i
y_i = C (A s_{i-1} + B x_{i-1}) + D x_i
y_i = C (A (A s_{i-2} + B x_{i-2}) + B x_{i-1}) + D x_i

Slide 14

Slide 14 text

State space models: unrolling the computation
y_i = C s_i + D x_i
s_{i+1} = A s_i + B x_i
y_i = C (A s_{i-1} + B x_{i-1}) + D x_i
y_i = C (A (A s_{i-2} + B x_{i-2}) + B x_{i-1}) + D x_i
y_i = C (A (A (A s_{i-3} + B x_{i-3}) + B x_{i-2}) + B x_{i-1}) + D x_i

Slide 15

Slide 15 text

State space models: a concrete example
y_0 = D x_0

Slide 16

Slide 16 text

State space models: a concrete example
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1

Slide 17

Slide 17 text

State space models: a concrete example
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1
y_2 = C A^1 B x_0 + C A^0 B x_1 + D x_2

Slide 18

Slide 18 text

State space models: a concrete example
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1
y_2 = C A^1 B x_0 + C A^0 B x_1 + D x_2
y_3 = C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2 + D x_3
…
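As a sanity check, a short sketch comparing the recurrence with the unrolled closed form y_i = Σ_{j<i} C A^{i-1-j} B x_j + D x_i, using the same toy scalar parameters as before (assumed values).

```python
import numpy as np

A, B, C, D = 0.9, 1.0, 0.5, 0.2        # same toy parameters as before
x = np.array([1.0, 2.0, 0.5, -1.0])

# Run the recurrence step by step.
s, y_rec = 0.0, []
for x_i in x:
    y_rec.append(C * s + D * x_i)
    s = A * s + B * x_i

# Evaluate the unrolled form y_i = sum_{j<i} C A^(i-1-j) B x_j + D x_i.
y_unrolled = [sum(C * A**(i - 1 - j) * B * x[j] for j in range(i)) + D * x[i]
              for i in range(len(x))]

print(np.allclose(y_rec, y_unrolled))  # True: both give the same outputs
```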

Slide 19

Slide 19 text

State space models: t = 0
(Diagram: inputs x_0…x_3, outputs y_0…y_3; at t = 0 only the direct weight D contributes.)

Slide 20

Slide 20 text

State space models: t = 1
(Diagram: inputs x_0…x_3, outputs y_0…y_3; the weights D and C A^0 B contribute.)

Slide 21

Slide 21 text

State space models: t = 2
(Diagram: inputs x_0…x_3, outputs y_0…y_3; the weights D, C A^0 B, and C A^1 B contribute.)

Slide 22

Slide 22 text

State space models: t = 3
(Diagram: inputs x_0…x_3, outputs y_0…y_3; the weights D, C A^0 B, C A^1 B, and C A^2 B contribute.)

Slide 23

Slide 23 text

State space models: the full picture
(Diagram: every input x_0…x_3 connected to every output y_0…y_3.)

Slide 24

Slide 24 text

State space models: the full picture
(Diagram: every input x_0…x_3 connected to every output y_0…y_3.)
There are no dependencies between the outputs → they can be computed in parallel
The same property as Attention

Slide 25

Slide 25 text

Representation as a convolution
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1
y_2 = C A^1 B x_0 + C A^0 B x_1 + D x_2
y_3 = C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2 + D x_3

Slide 26

Slide 26 text

Representation as a convolution
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1
y_2 = C A^1 B x_0 + C A^0 B x_1 + D x_2
y_3 = C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2 + D x_3
Many of these computations look the same

Slide 27

Slide 27 text

Representation as a convolution
y_0 = D x_0
y_1 = C A^0 B x_0 + D x_1
y_2 = C A^1 B x_0 + C A^0 B x_1 + D x_2
y_3 = C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2 + D x_3
Many of these computations look the same
Can we somehow speed this up?

Slide 28

Slide 28 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
Line up the C A^k B terms

Slide 29

Slide 29 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]

Slide 30

Slide 30 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [

Slide 31

Slide 31 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,

Slide 32

Slide 32 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,
              C A^1 B x_0 + C A^0 B x_1,

Slide 33

Slide 33 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,
              C A^1 B x_0 + C A^0 B x_1,
              C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2,
              … ]

Slide 34

Slide 34 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,                                 → y_1
              C A^1 B x_0 + C A^0 B x_1,                   → y_2
              C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2,     → y_3
              … ]
An output sequence of the same length as the input

Slide 35

Slide 35 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,                                 → y_1
              C A^1 B x_0 + C A^0 B x_1,                   → y_2
              C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2,     → y_3
              … ]
y_N = ( f ∗ x )_{N-1} + D x_N

Slide 36

Slide 36 text

Representation as a convolution
f = [ C A^0 B, C A^1 B, C A^2 B, …, C A^{N-1} B ]
x = [ x_0, x_1, x_2, …, x_{N-1} ]
( f ∗ x ) = [ C A^0 B x_0,                                 → y_1
              C A^1 B x_0 + C A^0 B x_1,                   → y_2
              C A^2 B x_0 + C A^1 B x_1 + C A^0 B x_2,     → y_3
              … ]
y_N = ( f ∗ x )_{N-1} + D x_N
Pick the needed entry out of the convolution result

Slide 37

Slide 37 text

Speeding up convolution with the fast Fourier transform
• A convolution can be expressed as an elementwise product of the Fourier transforms of the two sequences
Ordinary convolution
• Number of operations: N (N + 1) / 2 → O(N²)

Slide 38

Slide 38 text

Speeding up convolution with the fast Fourier transform
• A convolution can be expressed as an elementwise product of the Fourier transforms of the two sequences
Ordinary convolution
• Number of operations: N (N + 1) / 2 → O(N²)
Convolution via the fast Fourier transform
• FFT of f and x: O(N log N)
• Elementwise product of FFT(f) and FFT(x): O(N)
• Inverse FFT of the product: O(N log N)

Slide 39

Slide 39 text

Speeding up convolution with the fast Fourier transform
• A convolution can be expressed as an elementwise product of the Fourier transforms of the two sequences
Ordinary convolution
• Number of operations: N (N + 1) / 2 → O(N²)
Convolution via the fast Fourier transform
• FFT of f and x: O(N log N)
• Elementwise product of FFT(f) and FFT(x): O(N)
• Inverse FFT of the product: O(N log N)
All N outputs can be computed in O(N log N)!
In practice, applying this to state space models requires several additional assumptions.
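A sketch of the FFT route for the same toy filter and input (assumed values): zero-pad, multiply the spectra elementwise, and invert. The result matches the direct O(N²) convolution.

```python
import numpy as np

A, B, C = 0.9, 1.0, 0.5
x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
f = np.array([C * A**k * B for k in range(N)])

L = 2 * N                                        # pad to avoid circular wrap-around
conv_fft = np.fft.irfft(np.fft.rfft(f, L) * np.fft.rfft(x, L), L)[:N]   # O(N log N)
conv_direct = np.convolve(f, x)[:N]                                     # O(N^2) reference
print(np.allclose(conv_fft, conv_direct))        # True
```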

Slide 40

Slide 40 text

Summary: state space models (SSMs) and deep learning
• A model that produces the output and the next state from the input and the current state
  • In some cases the computation can be made cheaper
• A mechanism that mixes a sequence of vectors and outputs a sequence of vectors
  • Can play a role similar to the Transformer

Slide 41

Slide 41 text

Hyena

Slide 42

Slide 42 text

Properties that underpin Attention: the Hyena paper's claims
Data-controlled linear operator
• The operator can depend on the input sequence itself (context dependency)
Sublinear parameter scaling
• The number of parameters does not depend on the input sequence length
Unrestricted context
• Relations between arbitrary pairs of tokens can be captured
  • Ideally the context width is unbounded
Local Attention: https://github.com/lucidrains/local-attention

Slide 43

Slide 43 text

Properties that underpin Attention: the Hyena paper's claims
Data-controlled linear operator
• The operator can depend on the input sequence itself (context dependency)
Sublinear parameter scaling
• The number of parameters does not depend on the input sequence length
Unrestricted context
• Relations between arbitrary pairs of tokens can be captured
  • Ideally the context width is unbounded
S4 falls short
Local Attention: https://github.com/lucidrains/local-attention

Slide 44

Slide 44 text

Properties that underpin Attention: the Hyena paper's claims
Data-controlled linear operator
• The operator can depend on the input sequence itself (context dependency)
Sublinear parameter scaling
• The number of parameters does not depend on the input sequence length
Unrestricted context
• Relations between arbitrary pairs of tokens can be captured
  • Ideally the context width is unbounded
S4 falls short / MLP-Mixer falls short
Local Attention: https://github.com/lucidrains/local-attention

Slide 45

Slide 45 text

Properties that underpin Attention: the Hyena paper's claims
Data-controlled linear operator
• The operator can depend on the input sequence itself (context dependency)
Sublinear parameter scaling
• The number of parameters does not depend on the input sequence length
Unrestricted context
• Relations between arbitrary pairs of tokens can be captured
  • Ideally the context width is unbounded
S4 falls short / MLP-Mixer falls short / CNN and Local Attention fall short
Local Attention: https://github.com/lucidrains/local-attention

Slide 46

Slide 46 text

Prior work: Structured State Space Sequence model (S4)
• A pioneering deep-learning model built on state space models
  • Strong on long-sequence / long-range-dependency tasks such as image (bit-sequence) classification
• Has no linear operation that depends on the input sequence
  • No QKV-like mechanism as in Attention, so its expressive power is comparatively weak
Gu+: Efficiently Modeling Long Sequences with Structured State Spaces. ICLR 2022 outstanding paper.

Slide 47

Slide 47 text

Prior work: Structured State Space Sequence model (S4)
• A pioneering deep-learning model built on state space models
  • Strong on long-sequence / long-range-dependency tasks such as image (bit-sequence) classification
• Has no linear operation that depends on the input sequence
  • No QKV-like mechanism as in Attention, so its expressive power is comparatively weak
Gu+: Efficiently Modeling Long Sequences with Structured State Spaces. ICLR 2022 outstanding paper.

Slide 48

Slide 48 text

Prior work: Hungry Hungry Hippos (H3)
• SSMs are weak at language tasks, so let's improve them
• Problem: SSMs, including S4, are poor at recalling and comparing tokens
• Imitates Attention's QKV with state space models
  • Mixes with SSMs, then mixes again in a Linear-Attention-like way
  • A combination of Linear Attention and SSMs
• On its own it does not surpass the Transformer
  • Only a hybrid model with some Attention layers matches or exceeds it
  • Inference of the hybrid model is dragged down by the Attention layers
Fu+: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. ICLR 2023 spotlight.

Slide 49

Slide 49 text

Prior work: Hungry Hungry Hippos (H3)
• SSMs are weak at language tasks, so let's improve them
• Problem: SSMs, including S4, are poor at recalling and comparing tokens
• Imitates Attention's QKV with state space models
  • Mixes with SSMs, then mixes again in a Linear-Attention-like way
  • A combination of Linear Attention and SSMs
• On its own it does not surpass the Transformer
  • Only a hybrid model with some Attention layers matches or exceeds it
  • Inference of the hybrid model is dragged down by the Attention layers
Fu+: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. ICLR 2023 spotlight.

Slide 50

Slide 50 text

Prior work: Hungry Hungry Hippos (H3)
• SSMs are weak at language tasks, so let's improve them
• Problem: SSMs, including S4, are poor at recalling and comparing tokens
• Imitates Attention's QKV with state space models
  • Mixes with SSMs, then mixes again in a Linear-Attention-like way
  • A combination of Linear Attention and SSMs
• On its own it does not surpass the Transformer
  • Only a hybrid model with some Attention layers matches or exceeds it
  • Inference of the hybrid model is dragged down by the Attention layers
Fu+: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. ICLR 2023 spotlight.

Slide 51

Slide 51 text

Prior work: Hungry Hungry Hippos (H3)
• SSMs are weak at language tasks, so let's improve them
• Problem: SSMs, including S4, are poor at recalling and comparing tokens
• Imitates Attention's QKV with state space models
  • Mixes with SSMs, then mixes again in a Linear-Attention-like way
  • A combination of Linear Attention and SSMs
• On its own it does not surpass the Transformer
  • Only a hybrid model with some Attention layers matches or exceeds it
  • Inference of the hybrid model is dragged down by the Attention layers
Fu+: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. ICLR 2023 spotlight.

Slide 52

Slide 52 text

A detour: the QKV computation in Linear Attention
• Compute Q and (KV) instead of (QK) and V
(Diagram: Attention vs. Linear Attention.)
Shen+: Efficient Attention: Attention with Linear Complexities. WACV 2021. https://github.com/lucidrains/linear-attention-transformer
Katharopoulos+: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020.

Slide 53

Slide 53 text

A detour: the QKV computation in Linear Attention
• Compute Q and (KV) instead of (QK) and V
(Diagram: Attention vs. Linear Attention.)
Shen+: Efficient Attention: Attention with Linear Complexities. WACV 2021. https://github.com/lucidrains/linear-attention-transformer
Katharopoulos+: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020.

Slide 54

Slide 54 text

A detour: the QKV computation in Linear Attention
• Compute Q and (KV) instead of (QK) and V
(Diagram: Attention computes (QK)V via an N×N matrix, O(N²d); Linear Attention computes Q(KV) via a d×d matrix, O(Nd²).)
Shen+: Efficient Attention: Attention with Linear Complexities. WACV 2021. https://github.com/lucidrains/linear-attention-transformer
Katharopoulos+: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020.

Slide 55

Slide 55 text

A detour: the QKV computation in Linear Attention
• Compute Q and (KV) instead of (QK) and V
(Diagram: Attention computes (QK)V via an N×N matrix, O(N²d); Linear Attention computes Q(KV) via a d×d matrix, O(Nd²).)
The computation is much cheaper!
Shen+: Efficient Attention: Attention with Linear Complexities. WACV 2021. https://github.com/lucidrains/linear-attention-transformer
Katharopoulos+: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020.

Slide 56

Slide 56 text

A detour: the QKV computation in Linear Attention
• Compute Q and (KV) instead of (QK) and V
(Diagram: Attention computes (QK)V via an N×N matrix, O(N²d); Linear Attention computes Q(KV) via a d×d matrix, O(Nd²).)
The computation is much cheaper!
Guaranteeing causality is not trivial, so several variants exist.
Shen+: Efficient Attention: Attention with Linear Complexities. WACV 2021. https://github.com/lucidrains/linear-attention-transformer
Katharopoulos+: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020.
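A minimal sketch of the reordering behind Linear Attention: with a plain (non-causal, un-normalized) similarity, (QKᵀ)V equals Q(KᵀV) by associativity, but the second route never materializes the N×N matrix. The shapes below are arbitrary.

```python
import numpy as np

N, d = 6, 3
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d))

out_qk_first = (Q @ K.T) @ V     # materializes an N x N matrix: O(N^2 d)
out_kv_first = Q @ (K.T @ V)     # only a d x d matrix:          O(N d^2)
print(np.allclose(out_qk_first, out_kv_first))   # True
```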

Slide 57

Slide 57 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: input X.)

Slide 58

Slide 58 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d.)

Slide 59

Slide 59 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d; an SSM mixes K, and the elementwise product with V gives KV.)

Slide 60

Slide 60 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d; an SSM mixes K, and the elementwise product with V gives KV.)
Because it is an elementwise product, causality is preserved

Slide 61

Slide 61 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d; an SSM mixes K, the elementwise product with V gives KV, a second SSM mixes KV, and the elementwise product with Q gives Y.)

Slide 62

Slide 62 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d; an SSM mixes K, the elementwise product with V gives KV, a second SSM mixes KV, and the elementwise product with Q gives Y.)
Plays the role of the kernel in Linear Attention

Slide 63

Slide 63 text

H3: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: X projected to Q, K, V, each N × d; an SSM mixes K, the elementwise product with V gives KV, a second SSM mixes KV, and the elementwise product with Q gives Y.)
A devilish architecture: Linear Attention + SSMs
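A rough sketch of the data flow drawn on these slides, not the exact H3 equations: K is mixed by an SSM-style causal convolution, gated elementwise by V, mixed again, and gated by Q. The `causal_filter` helper reuses the toy SSM-as-convolution from earlier and is purely illustrative.

```python
import numpy as np

def causal_filter(x, A=0.9, B=1.0, C=0.5):
    """Toy SSM-as-causal-convolution, applied independently to each channel."""
    N = x.shape[0]
    f = np.array([C * A**k * B for k in range(N)])
    return np.stack([np.convolve(f, x[:, j])[:N] for j in range(x.shape[1])], axis=1)

N, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))   # stand-ins for projections of X

KV = causal_filter(K) * V        # elementwise product, so causality is preserved
Y = Q * causal_filter(KV)        # second SSM mixing, then gate with Q
print(Y.shape)                   # (8, 4)
```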

Slide 64

Slide 64 text

H3: the actual computation
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: Q, K, V are N × d; a per-position outer product of K and V gives KV of shape N × d × d; after an SSM, an elementwise product with Q produces Y.)

Slide 65

Slide 65 text

H3: the actual computation
• Elementwise product of V with K mixed by an SSM → elementwise product of Q with that KV mixed by an SSM
(Diagram: Q, K, V are N × d; a per-position outer product of K and V gives KV of shape N × d × d; after an SSM, an elementwise product with Q produces Y.)
Q_1 ∈ R^{1×d}, KV_1 ∈ R^{d×d}

Slide 66

Slide 66 text

Hyena
• A genuinely Attention-free model built on SSMs
  • Prepares multiple convolution filters, in the spirit of Multi-Head Attention (MHA)
• Repeatedly applies SSM-based token mixing
  • Can be viewed as a generalization of H3 in the number of KV operations

Slide 67

Slide 67 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: input X.)

Slide 68

Slide 68 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1.)

Slide 69

Slide 69 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1, gated by V_1.)

Slide 70

Slide 70 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1, gated by V_1; a second SSM mixes K_2.)

Slide 71

Slide 71 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1, gated by V_1; a second SSM mixes K_2, gated by V_2.)

Slide 72

Slide 72 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1, gated by V_1; a second SSM mixes K_2, gated by V_2; repeated m times.)

Slide 73

Slide 73 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; an SSM mixes K_1, gated by V_1; a second SSM mixes K_2, gated by V_2; repeated m times.)
Feels a bit like a brute-force move

Slide 74

Slide 74 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; SSM-mixing and gating repeated m times through K_1/V_1, K_2/V_2, …, K_m/V_m.)

Slide 75

Slide 75 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; SSM-mixing and gating repeated m times through K_1/V_1, K_2/V_2, …, K_m/V_m, producing Y.)

Slide 76

Slide 76 text

Hyena: the idea
• Elementwise product of V with K mixed by an SSM → elementwise product of V′ with that KV (= K′) mixed by an SSM → …
• Mix with SSMs m times
• Coincides with H3 when m = 2
(Diagram: X projected; SSM-mixing and gating repeated m times through K_1/V_1, K_2/V_2, …, K_m/V_m, producing Y. The final projection corresponds to Q in H3.)
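A sketch of the recurrence described on these slides, using the same toy `causal_filter` as above (the real Hyena uses learned implicit filters rather than a fixed SSM): start from one projection and alternate "mix with a long convolution" and "gate elementwise with the next projection" m times; m = 2 recovers the H3 pattern.

```python
import numpy as np

def causal_filter(x, A=0.9, B=1.0, C=0.5):
    # Toy SSM-as-causal-convolution from the earlier slides, per channel.
    N = x.shape[0]
    f = np.array([C * A**k * B for k in range(N)])
    return np.stack([np.convolve(f, x[:, j])[:N] for j in range(x.shape[1])], axis=1)

N, d, m = 8, 4, 3
rng = np.random.default_rng(0)
v = rng.normal(size=(N, d))                          # first projection of X
gates = [rng.normal(size=(N, d)) for _ in range(m)]  # remaining projections (the gates)

z = v
for g in gates:                  # m rounds of mix-then-gate; m = 2 matches H3
    z = g * causal_filter(z)
print(z.shape)                   # (8, 4)
```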

Slide 77

Slide 77 text

Hyena: the whereabouts of Q
• What Q does in H3 is not very different from what V does
  • If H3's Q is viewed as V_2, H3 coincides with Hyena at m = 2 (Hyena-2)
• H3: QKV
• Hyena-3: QKVV
(Diagram: H3 vs. Hyena block structure.)

Slide 78

Slide 78 text

Hyena: the convolution filter
• The convolution filter f is represented by a positional embedding + an FFN + an exponentially decaying window
  • Generated on the fly to match each input sequence
f = [h_0, h_1, h_2, …, h_N]
h_t = FFN(PositionalEncoding(t)) · Window(t)
Multi-scale Retention — Sun+: Retentive Network: A Successor to Transformer for Large Language Models. arXiv 2023.
RoPE — Su+: RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv 2021.
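A sketch of this implicit parameterization: each filter entry h_t comes from a tiny FFN applied to a positional encoding of t, multiplied by an exponentially decaying window. The FFN weights, encoding, and decay rate below are arbitrary placeholders, not the paper's actual parameterization.

```python
import numpy as np

def positional_encoding(t, dim=8):
    # Simple sinusoidal encoding of the position t (assumed form, for illustration).
    freqs = np.arange(1, dim // 2 + 1)
    return np.concatenate([np.sin(freqs * t), np.cos(freqs * t)])

def ffn(z, W1, W2):
    return W2 @ np.tanh(W1 @ z)                    # tiny two-layer network

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(1, 16))
decay = 0.05                                       # exponential window rate (assumed)

N = 32                                             # any desired filter length
h = np.array([ffn(positional_encoding(t), W1, W2).item() * np.exp(-decay * t)
              for t in range(N)])                  # f = [h_0, ..., h_{N-1}]
print(h.shape)                                     # generated on the fly for length N
```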

Slide 79

Slide 79 text

Hyena: the convolution filter
• The convolution filter f is represented by a positional embedding + an FFN + an exponentially decaying window
  • Generated on the fly to match each input sequence
f = [h_0, h_1, h_2, …, h_N]
h_t = FFN(PositionalEncoding(t)) · Window(t)
Similar in spirit to Multi-scale Retention and RoPE
Multi-scale Retention — Sun+: Retentive Network: A Successor to Transformer for Large Language Models. arXiv 2023.
RoPE — Su+: RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv 2021.

Slide 80

Slide 80 text

Hyena: the convolution filter
• The convolution filter f is represented by a positional embedding + an FFN + an exponentially decaying window
  • Generated on the fly to match each input sequence
f = [h_0, h_1, h_2, …, h_N]
h_t = FFN(PositionalEncoding(t)) · Window(t)
Similar in spirit to Multi-scale Retention and RoPE
Multi-scale Retention — Sun+: Retentive Network: A Successor to Transformer for Large Language Models. arXiv 2023.
RoPE — Su+: RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv 2021.

Slide 81

Slide 81 text

Experiments
• Language modeling
• Downstream tasks (SuperGLUE)
• Image classification
• Runtime comparison

Slide 82

Slide 82 text

Experiments: language modeling
• Hyena with m = 3 (QKVV) matches the Transformer in perplexity
  • Training cost is also lower than a GPT-style model of the same size
(Tables: perplexity on WikiText-103 and The Pile.)

Slide 83

Slide 83 text

Experiments: SuperGLUE / image classification
• On par with a Transformer-based model of the same size
• The evaluation feels somewhat rushed
(Tables: SuperGLUE 4-shot learning and image classification; RWKV is v4.)

Slide 84

Slide 84 text

Experiments: runtime comparison
• Lower inference cost, especially for long sequences
(Plot: inference time vs. sequence length.)

Slide 85

Slide 85 text

Experiments: runtime comparison
• Lower inference cost, especially for long sequences
(Plot: inference time vs. sequence length.)
Even though Hyena is built on state space models, its inference ends up costing O(N) per step, apparently because of the implicit filter.

Slide 86

Slide 86 text

Summary
• Proposes Hyena, a new architecture built on state space models
• Lower computational cost than Attention, with performance on par with or better than the Transformer
  • The multiple convolution kernels are reminiscent of MHA and Multi-scale Retention
• Note that inference cost is worse than S4 / H3
Impressions
• The evaluated models are small (mostly ≤ 355M parameters)
  • A 1.3B model was apparently trained in preliminary experiments (cf. Appendix A.2)
  • Would like to see evaluation on SuperGLUE and the like at larger scale, along with scaling laws
  • A comparison with S4 and H3 on Long Range Arena (LRA) would also have been welcome
LRA — Tay+: Long Range Arena: A Benchmark for Efficient Transformers. ICLR 2021.

Slide 87

Slide 87 text

Related materials
• Is Attention All You Need? Part 3 — Hyena: a new machine-learning model that goes beyond the Transformer, toward next-generation LLMs
• Is Attention All You Need? Part 1 — S4: a new model that surpasses(?) the Transformer
• HyenaDNA: a new application of LLMs that reads the language of DNA
• [Journal club] Hyena Hierarchy: Towards Larger Convolutional Language Models
• The Annotated S4
• Hungry Hungry Hippos: Towards Language Modeling with State Space Models