Evaluate the matching rate of n-grams between r (reference) and e (translated text). ☆N-gram position is ignored.

$$\mathrm{BLEU}(R, E) = \mathrm{BP}(R, E) \cdot \left( \prod_{n=1}^{4} \frac{\sum_{i} m_n(\{r_1^{(i)}, \ldots, r_M^{(i)}\},\ e^{(i)})}{\sum_{i} c_n(e^{(i)})} \right)^{1/4} \quad (2.9)$$

・$m_n(r, e) = |g_n(r) \cap g_n(e)|$ : the number of n-gram matches between reference and translated text, where $g_n(x)$ is the multiset of n-grams of $x$.
・$c_n(e) = |g_n(e)|$ : the number of n-grams of $e$.
・A geometric mean is calculated from 1-gram to 4-gram.

If you have M references per e, choose the maximum $m_n(r_j, e)$:

$$m_n(\{r_1, \ldots, r_M\}, e) = \max\left(m_n(r_1, e),\ m_n(r_2, e),\ \ldots,\ m_n(r_M, e)\right)$$
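The clipped n-gram counting behind (2.9) can be sketched in a few lines of Python. This is a minimal illustration of the definitions above, not a reference BLEU implementation; the function names are my own.

```python
from collections import Counter

def ngrams(tokens, n):
    """g_n(x): the multiset of n-grams of a token list, as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def m_n(ref, hyp, n):
    """|g_n(r) & g_n(e)|: n-gram matches, clipped by the reference counts."""
    return sum((ngrams(ref, n) & ngrams(hyp, n)).values())

def c_n(hyp, n):
    """|g_n(e)|: the number of n-grams of the translation."""
    return max(len(hyp) - n + 1, 0)

def m_n_multi(refs, hyp, n):
    """With M references, take the maximum match count, as on the slide."""
    return max(m_n(r, hyp, n) for r in refs)
```

Counter intersection (`&`) takes the minimum count per n-gram, which is exactly the clipping that stops a translation from repeating one matching word to inflate its score.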
2.3.3 BLEU example

r1 : I 'd like to stay there for five nights , from June sixth .
r2 : I want to stay for five nights , from June sixth .
e : Well , I 'd like to stay five nights beginning June sixth .

The n-gram matches $m_n = m_n(\{r_1, r_2\}, e)$ and counts $c_n = c_n(e)$:

n   : 1   2   3   4
m_n : 11  7   4   2
c_n : 13  12  11  10

The closest reference to e is r_2 ($|r_2| = 12 < |e| = 13 < |r_1| = 14$), so $\mathrm{BP}(R, E) = 1$ is accepted (e is not too short).

$$\mathrm{BLEU}(R, E) = \left( \frac{11}{13} \cdot \frac{7}{12} \cdot \frac{4}{11} \cdot \frac{2}{10} \right)^{1/4} \cdot 1 \cong 0.4353$$
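The example score can be reproduced with a self-contained version of the counting above (a sanity check under the slide's tokenization, not a reference implementation):

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

r1 = "I 'd like to stay there for five nights , from June sixth .".split()
r2 = "I want to stay for five nights , from June sixth .".split()
e  = "Well , I 'd like to stay five nights beginning June sixth .".split()

prod = 1.0
for n in range(1, 5):
    m = max(sum((ngrams(r, n) & ngrams(e, n)).values()) for r in (r1, r2))
    prod *= m / (len(e) - n + 1)          # 11/13, 7/12, 4/11, 2/10
print(round(prod ** 0.25, 4))             # BP = 1 here -> 0.4353
```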
2.3.3 brevity penalty

$$\mathrm{BP}(R, E) = \min\left(1,\ \exp\left(1 - \frac{\sum_{i=1}^{N} |\tilde{r}^{(i)}|}{\sum_{i=1}^{N} |e^{(i)}|}\right)\right) \quad (2.10)$$

where $\tilde{r}^{(i)}$ is the reference whose length is closest to that of $e^{(i)}$ (and short, i.e. the shorter one when tied).

・$\sum_{i} |e^{(i)}| \ll \sum_{i} |\tilde{r}^{(i)}|$ → $\mathrm{BP}(R, E) \cong 0$
・$\sum_{i} |e^{(i)}| > \sum_{i} |\tilde{r}^{(i)}|$ → $\mathrm{BP}(R, E) = 1$
・$\sum_{i} |e^{(i)}| \cong \sum_{i} |\tilde{r}^{(i)}|$ → $\mathrm{BP}(R, E) \cong 1$

BP penalizes translated text that is too short against the reference.
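A direct transcription of (2.10), assuming tokenized sentences; the closest-length selection of r̃ follows the definition above:

```python
import math

def closest_ref_len(refs, hyp):
    """|r~|: length of the reference closest to |hyp| (shorter one on ties)."""
    return min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]

def brevity_penalty(ref_lists, hyps):
    """BP(R, E) over a corpus; ref_lists[i] holds the references for hyps[i]."""
    r = sum(closest_ref_len(refs, e) for refs, e in zip(ref_lists, hyps))
    c = sum(len(e) for e in hyps)
    return min(1.0, math.exp(1.0 - r / c))
```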
METEOR (Metric for Evaluation of Translation with Explicit Ordering)

There are problems with using BLEU naively (※ ref → 134):
・The brevity penalty does not adequately compensate for the lack of recall; explicit word-matching is required. [Lavie 2004]
・Geometric averaging results in a score of zero whenever one of the component n-gram scores is zero.

METEOR was designed to address these problems.
r : I 'd like to stay there for five nights , from June sixth . (14 words)
e : Well , I 'd like to stay five nights beginning June sixth . (13 words)

For explicit word-matching, METEOR takes an alignment between r and e. Ex) In the sentence pair above there are 11 alignments.

$$P = \frac{\text{number of words aligned}}{\text{number of words in } e} = \frac{11}{13}, \qquad R = \frac{\text{number of words aligned}}{\text{number of words in } r} = \frac{11}{14}$$

$$F = \frac{P \cdot R}{\alpha P + (1 - \alpha) R} \quad (2.11)$$

(if α = 0.5, F is the harmonic mean of P and R: F ≅ 0.8148)
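Given the aligned-word count and the two sentence lengths, P, R, and the F of (2.11) are one line each; a small sketch with hypothetical names:

```python
def meteor_f(num_aligned, len_e, len_r, alpha=0.5):
    """Parameterized F-measure of (2.11): F = P*R / (alpha*P + (1-alpha)*R)."""
    p = num_aligned / len_e   # precision: aligned words / words in e
    r = num_aligned / len_r   # recall:    aligned words / words in r
    return p * r / (alpha * p + (1 - alpha) * r)

print(round(meteor_f(11, 13, 14), 4))   # the slide's example -> 0.8148
```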
A chunk is a group of sequential words that is contiguous in both r and e under the alignment.

r : I 'd like to stay there for five nights , from June sixth .
e : Well , I 'd like to stay five nights beginning June sixth .

Ex) The 11 aligned words form 4 chunks (in the order they appear in e): (1) "," (2) "I 'd like to stay" (3) "five nights" (4) "June sixth ."

The fragmentation penalty (FP) grows with the number of chunks, so it favors alignments that divide the text into few, long chunks; a sketch of the computation follows the summary below.

Summary of METEOR
・High precision and high recall are desirable.
・FP rewards alignments that divide the text into long chunks.
・Necessary to tune the hyperparameters α, β, γ.
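Chunk counting, plus the usual METEOR fragmentation penalty Pen = γ(ch/m)^β with final score F · (1 − Pen). This is the formulation of Lavie and Denkowski [134]; the slide leaves the formula implicit, and β = 3, γ = 0.5 below are that paper's classic defaults, not values from the slide.

```python
def count_chunks(alignment):
    """alignment: (pos_in_r, pos_in_e) pairs. A new chunk starts whenever a
    pair is not adjacent to the previous one in BOTH sentences."""
    chunks, prev = 0, None
    for pr, pe in sorted(alignment, key=lambda a: a[1]):
        if prev is None or pr != prev[0] + 1 or pe != prev[1] + 1:
            chunks += 1
        prev = (pr, pe)
    return chunks

def meteor_score(f_mean, num_chunks, num_aligned, beta=3.0, gamma=0.5):
    """METEOR = F * (1 - Pen), with Pen = gamma * (chunks / aligned)**beta."""
    return f_mean * (1.0 - gamma * (num_chunks / num_aligned) ** beta)
```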
RIBES

r : I 'd like to stay there for five nights , from June sixth .
e : Well , I 'd like to stay five nights beginning June sixth .

Ex) Number the words of r by position, (1)–(14). Reading the aligned words of e from left to right, their positions in r are:

(10) (1) (2) (3) (4) (5) (8) (9) (12) (13) (14)

Rank vector = (8, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11)

Scoring uses a rank correlation coefficient on this vector: evaluating translation between language pairs that require extreme reordering demands a direct measure of word order.

Rank vector → rank correlation coefficients: Spearman's ρ and Kendall's τ. The coefficient is used as the score.
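Both coefficients can be computed directly from the rank vector (scipy.stats.spearmanr and kendalltau would give the same numbers); a self-contained sketch:

```python
from itertools import combinations

def spearman_rho(ranks):
    """Spearman's rho against the monotone ordering 1..n (no ties)."""
    n = len(ranks)
    d2 = sum((r - i) ** 2 for i, r in enumerate(ranks, start=1))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendall_tau(ranks):
    """Kendall's tau: (concordant - discordant) pairs / all pairs."""
    pairs = list(combinations(ranks, 2))
    concordant = sum(a < b for a, b in pairs)
    return (2 * concordant - len(pairs)) / len(pairs)

ranks = [8, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11]
print(spearman_rho(ranks), kendall_tau(ranks))   # both ~0.745 here
```

For this rank vector the two coefficients agree to several digits, which illustrates the summary point on the next slide.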
RIBES also includes a unigram precision term and a brevity penalty: BP(r, e) ≅ 1 is better, and RIBES(r, e) = 1 is desirable (∵ ρ, τ ≤ 1).

$$\mathrm{RIBES}(r, e) = \mathrm{NKT}(r, e) \cdot P(r, e)^{\alpha} \cdot \mathrm{BP}(r, e)^{\beta} \quad (2.15)$$

where NKT = (τ + 1)/2 is the normalized Kendall's τ and P(r, e) is the unigram precision [111].

Summary of RIBES
・A rank correlation coefficient is useful for evaluating translations between language pairs that require heavy reordering.
・The Spearman score is almost equal to the Kendall score.
・Necessary to tune the hyperparameters α, β.
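Putting the pieces together as in (2.15); the defaults α = 0.25, β = 0.10 come from Isozaki et al. [111], not from the slide:

```python
import math
from itertools import combinations

def ribes(ranks, len_e, len_r, alpha=0.25, beta=0.10):
    """RIBES sketch per (2.15): NKT * P**alpha * BP**beta."""
    pairs = list(combinations(ranks, 2))
    tau = (2 * sum(a < b for a, b in pairs) - len(pairs)) / len(pairs)
    nkt = (tau + 1) / 2                           # normalized Kendall's tau
    p = len(ranks) / len_e                        # unigram precision
    bp = min(1.0, math.exp(1 - len_r / len_e))    # brevity penalty
    return nkt * p ** alpha * bp ** beta

print(round(ribes([8, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11], 13, 14), 4))  # ~0.8306
```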
Statistical testing

Which system is the best? The score may come out differently under another evaluation system or with other evaluators, and our test resources (data, human judges) are limited.

→ Statistical testing.

Problem: calculating a confidence interval.
"With probability p, you get a score that lies outside the confidence interval."
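One common way to obtain such an interval is a percentile bootstrap over per-sentence scores; a minimal sketch (the scores list and the resampling counts are placeholders, and this method is my choice, not one named on the slide):

```python
import random

def bootstrap_ci(scores, num_resamples=1000, p=0.05, seed=0):
    """Percentile bootstrap CI for the mean score: resample the test set with
    replacement and take the (p/2, 1 - p/2) quantiles of the resampled means."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(num_resamples)
    )
    lo = means[int(num_resamples * p / 2)]
    hi = means[int(num_resamples * (1 - p / 2)) - 1]
    return lo, hi
```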
Ex) Bootstrap resampling: draw N sets of 100 texts each from the test data.

1st: 100 texts   2nd: 100 texts   ⋯   Nth: 100 texts

Run the two statistical machine translation systems a and b on every set and get the scores:

s_1(a), s_2(a), ⋯, s_N(a)
s_1(b), s_2(b), ⋯, s_N(b)

Compute the win rate of system a, i.e. the fraction of sets with s_i(a) > s_i(b). If the win rate is over 95%, system a is better than b with p = 0.05.
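The comparison above is a paired bootstrap test; a sketch assuming a hypothetical score_fn that maps a list of (hypothesis, reference) pairs to a corpus-level score:

```python
import random

def win_rate(pairs_a, pairs_b, score_fn, num_sets=1000, set_size=100, seed=0):
    """Paired bootstrap: resample the same sentence indices for both systems,
    score each resampled set, and count how often system a wins."""
    rng = random.Random(seed)
    n, wins = len(pairs_a), 0
    for _ in range(num_sets):
        idx = [rng.randrange(n) for _ in range(set_size)]
        if score_fn([pairs_a[i] for i in idx]) > score_fn([pairs_b[i] for i in idx]):
            wins += 1
    return wins / num_sets

# win_rate(...) > 0.95  ->  system a is better than b with p = 0.05
```

Resampling the same indices for both systems is what makes the test paired: each resampled set poses the identical subtask to a and b.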
"The significance of recall in automatic metrics for MT evaluation." Machine Translation: From Real Users to Research. Springer Berlin Heidelberg, 2004. 134-143. 111) Isozaki, Hideki, et al. "Automatic evaluation of translation quality for distant language pairs." Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2010. 134) Lavie, Alon, and Michael J. Denkowski. "The METEOR metric for automatic evaluation of machine translation." Machine translation 23.2-3 (2009): 105-115. 17