Slide 29
Slide 29 text
English datasets
•STS12, 13, 14, 15, 16 [16, 17, 18, 19, 20]
•STS Benchmark (test set) [21]
•SICK-R [22]
Japanese datasets
•JSICK [23]
•JSTS [24]
[16] Agirre+: SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity, *SEM ’12
[17] Agirre+: *SEM 2013 shared task: Semantic Textual Similarity, *SEM ’13
[18] Agirre+: SemEval-2014 Task 10: Multilingual Semantic Textual Similarity, SemEval ’14
[19] Agirre+: SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability, SemEval ’15
[20] Agirre+: SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation, SemEval ’16
[21] Cer+: SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, SemEval ’17
[22] Marelli+: A SICK cure for the evaluation of compositional distributional semantic models, LREC ’14
[23] Yanaka+: JSICK: Construction of a Japanese Compositional Inference and Similarity Dataset, 35th Annual Conference of JSAI (2021)
[24] Kurihara+: JGLUE: Japanese Language Understanding Benchmark, 28th Annual Meeting of the Association for Natural Language Processing (2022)
Introduction: Evaluation datasets for STS
Each of STS12–16 is itself a collection of small sub-datasets.
Typically, these "sub"-datasets are concatenated before computing the correlation.
The final evaluation is often reported as the average of the scores
on STS12–16, STS Benchmark, and SICK-R.
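The evaluation procedure described above can be sketched as follows. This is a minimal illustration with synthetic model/gold scores (the task names match the slide, but the sub-dataset sizes and the NumPy-only Spearman helper are assumptions; real evaluations typically use `scipy.stats.spearmanr`, which also handles ties):

```python
import numpy as np

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks.
    # Simple argsort-based ranking; fine here because the synthetic
    # scores are continuous, so ties are essentially impossible.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def eval_sts_task(sub_datasets):
    # "all" setting: concatenate every sub-dataset of a task, then
    # compute a single correlation between model scores and gold scores.
    preds = np.concatenate([p for p, _ in sub_datasets])
    golds = np.concatenate([g for _, g in sub_datasets])
    return spearman(preds, golds)

rng = np.random.default_rng(0)
tasks = {}
for name in ["STS12", "STS13", "STS14", "STS15", "STS16",
             "STS Benchmark", "SICK-R"]:
    subs = []
    for _ in range(3):  # hypothetical: 3 sub-datasets of 50 pairs each
        gold = rng.uniform(0.0, 5.0, size=50)           # human ratings
        pred = gold + rng.normal(0.0, 1.0, size=50)     # noisy model scores
        subs.append((pred, gold))
    tasks[name] = subs

per_task = {name: eval_sts_task(subs) for name, subs in tasks.items()}
final_score = np.mean(list(per_task.values()))  # average over the 7 tasks
print(f"final avg Spearman: {final_score:.3f}")
```

The per-task concatenation matters: averaging correlations over sub-datasets instead of pooling them first can give noticeably different numbers, which is why papers state which convention they use.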