Slide 1

Similarity-Based Reconstruction Loss for Meaning Representation

Slide 2

Literature

Slide 3

Abstract

Slide 4

Introduction

Slide 5

Related Work

Slide 6

Related Work

Slide 7

Auto-Encoder
• Reconstruction loss $\mathcal{L}$
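The loss $\mathcal{L}$ here is presumably the standard cross-entropy reconstruction objective that the following slides modify. Below is a minimal NumPy sketch of that per-step loss under this assumption; the function name, vocabulary size, and probability values are illustrative and not taken from the slides.

```python
import numpy as np

def cross_entropy_loss(p, target_idx):
    """Standard reconstruction loss at one decoding step: the negative
    log-probability the decoder assigns to the true target word."""
    return -np.log(p[target_idx] + 1e-12)

# toy example: 5-word vocabulary, decoder softmax output at one step
p = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
print(cross_entropy_loss(p, target_idx=1))   # -log(0.6) ~= 0.51
```

With a one-hot target, only the true word's probability enters the loss; the weighted losses on the next slides relax exactly this.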

Slide 8

Weighted similarity loss
• $\mathcal{L} = -\sum_{i=1}^{|V|} \mathrm{sim}(w_i, w_t)\, p_i$
• where $p_i$ is the probability the decoder assigns to vocabulary word $w_i$, $w_t$ is the target word, and $\mathrm{sim}(\cdot,\cdot)$ is the similarity between the corresponding word embeddings
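A minimal NumPy sketch of this weighted similarity loss for a single decoding step, assuming $\mathrm{sim}$ is the cosine similarity of pretrained word embeddings (an assumption; the slide only shows $\mathrm{sim}(\cdot,\cdot)$). Function names and toy values are illustrative.

```python
import numpy as np

def weighted_similarity_loss(p, emb, target_idx):
    """L = -sum_i sim(w_i, w_t) * p_i: every vocabulary word contributes its
    predicted probability, weighted by the similarity of its embedding to the
    target word's embedding (cosine similarity assumed here)."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize embeddings
    sims = e @ e[target_idx]                               # similarity of every word to the target w_t
    return -np.sum(sims * p)

# toy example (illustrative values only)
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))              # stand-in for pretrained word embeddings
p = np.array([0.1, 0.6, 0.1, 0.1, 0.1])    # decoder softmax output at one step
print(weighted_similarity_loss(p, emb, target_idx=1))
```

Unlike plain cross-entropy, probability mass placed on words similar to the target also reduces the loss.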

Slide 9

Weighted cross-entropy loss
• $\mathcal{L} = -\sum_{i=1}^{|V|} \mathrm{sim}(w_i, w_t)\, \log(p_i)$
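The same setup with the logarithm kept, i.e. a sketch of the weighted cross-entropy loss; the cosine-similarity choice and all names are again assumptions for illustration.

```python
import numpy as np

def weighted_cross_entropy_loss(p, emb, target_idx):
    """L = -sum_i sim(w_i, w_t) * log(p_i): the usual log term, but weighted by
    embedding similarity to the target word instead of a one-hot indicator, so
    the decoder is encouraged to place probability on words close to the target."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = e @ e[target_idx]
    return -np.sum(sims * np.log(p + 1e-12))

# toy example (illustrative values only)
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
p = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
print(weighted_cross_entropy_loss(p, emb, target_idx=1))
```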

Slide 10

Soft label loss
• $\mathcal{L} = -\sum_{i=1}^{|V|} y_i^{*} \log(p_i)$
• $y_i^{*} = \begin{cases} \dfrac{\mathrm{sim}(w_i, w_t)}{\sum_{j=1}^{N} \mathrm{sim}(w_j, w_t)}, & w_i \in \text{top } N \\ 0, & w_i \notin \text{top } N \end{cases}$
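A sketch of the soft label loss under the same assumptions (cosine similarity of pretrained embeddings; hypothetical names and toy values): the target label is spread over the $N$ most similar vocabulary words, the similarities are renormalized, and the result is used as a soft target in a cross-entropy.

```python
import numpy as np

def soft_label_loss(p, emb, target_idx, n=3):
    """L = -sum_i y*_i * log(p_i): cross-entropy against soft labels y* that
    spread the target over the N vocabulary words most similar to it,
    with the similarities renormalized to sum to 1."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = e @ e[target_idx]              # similarity of each vocabulary word to the target
    top_n = np.argsort(sims)[-n:]         # indices of the N most similar words (includes the target itself)
    y_star = np.zeros_like(sims)
    y_star[top_n] = sims[top_n] / sims[top_n].sum()
    return -np.sum(y_star * np.log(p + 1e-12))

# toy example (illustrative values only)
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
p = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
print(soft_label_loss(p, emb, target_idx=1, n=3))
```

With $N = 1$ this reduces to the standard one-hot (true-label) cross-entropy.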

Slide 11

True-label encoding

Slide 12

Tasks & Datasets

Slide 13

Results

Slide 14

Results

Slide 15

Additional Experiments

Slide 16

Results

Slide 17

Results

Slide 18

Results

Slide 19

Results

Slide 20

Discussion

Slide 21

Conclusion