Slide 23
Slide 23 text
BERT-flow: learns a mapping from BERT's anisotropic sentence-embedding space to an isotropic latent space
BERT-whitening: linearly transforms sentence embeddings so that their mean is 0 and their covariance matrix becomes the identity (+ dimensionality reduction); see the whitening sketch after this list
IS-BERT: trained to maximize the mutual information between a sentence embedding and the embeddings of the n-grams in that sentence
BERT-CT: trained so that the dot product between the embeddings that two separate copies of the same model produce for the same sentence becomes large; see the sketch after this list
SimCSE: contrastive learning with the same sentence under different dropout masks as positives, or with entailment sentence pairs as positives; see the loss sketch after this list
MixCSE: unsupervised contrastive learning that mixes different sentences to construct hard negatives
ArcCSE: fuses margin-based minimization of the angle between entailment-pair sentence embeddings with a triplet loss that treats data-augmented sentences as negatives
DCLR: Unsup-SimCSE with Gaussian-noise-based negatives added and per-instance weighting of the negatives
MoCoSE: contrastive learning with FGSM-based data augmentation plus analysis of the optimal negative samples from a momentum encoder
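
A minimal sketch of the BERT-whitening transform mentioned above, assuming a matrix X of precomputed sentence embeddings (one row per sentence); the function name, the use of NumPy, and the optional k argument for dimensionality reduction are illustrative choices, not details taken from the slide.

import numpy as np

def whitening(X, k=None):
    """Whiten sentence embeddings X (n_sentences x dim): shift to zero mean
    and linearly map the covariance matrix to the identity.
    Optionally keep only the first k components (dimensionality reduction)."""
    mu = X.mean(axis=0, keepdims=True)        # mean embedding
    cov = np.cov((X - mu).T)                  # covariance matrix (dim x dim)
    U, S, _ = np.linalg.svd(cov)              # cov = U diag(S) U^T
    W = U @ np.diag(1.0 / np.sqrt(S))         # whitening matrix
    if k is not None:
        W = W[:, :k]                          # keep the top-k directions
    return (X - mu) @ W, mu, W

# usage (hypothetical): X_white, mu, W = whitening(X, k=256)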
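
A hedged sketch of the BERT-CT (Contrastive Tension) idea: two independently initialized copies of the same encoder score sentence pairs by the dot product of their embeddings, and a binary objective pushes the score up for identical sentences and down for different ones. The model1/model2 callables are placeholders assumed to return one embedding per sentence.

import torch
import torch.nn.functional as F

def contrastive_tension_loss(model1, model2, batch):
    """Contrastive Tension: identical sentences should get a high dot-product
    score across the two models, different sentences a low one."""
    h1 = model1(batch)                                   # (batch_size, dim)
    h2 = model2(batch)                                   # (batch_size, dim)
    logits = h1 @ h2.T                                   # pairwise dot products
    targets = torch.eye(logits.size(0), device=logits.device)  # 1 on the diagonal
    return F.binary_cross_entropy_with_logits(logits, targets)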
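
A minimal PyTorch-style sketch of the unsupervised SimCSE loss described above: the same batch is encoded twice with dropout active, so the two passes give two views of each sentence, and an InfoNCE objective treats the matching view as the positive and the rest of the batch as negatives. The encoder callable and the temperature value are assumptions for illustration.

import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, batch, temperature=0.05):
    """Unsupervised SimCSE: dropout provides two views of each sentence;
    other sentences in the batch act as in-batch negatives."""
    encoder.train()                       # keep dropout enabled
    z1 = F.normalize(encoder(batch), dim=-1)   # first dropout mask
    z2 = F.normalize(encoder(batch), dim=-1)   # second dropout mask
    sim = z1 @ z2.T / temperature              # cosine similarities, scaled
    labels = torch.arange(sim.size(0), device=sim.device)  # positives on the diagonal
    return F.cross_entropy(sim, labels)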
Li+: On the Sentence Embeddings from Pre-trained Language Models, EMNLP ’20
Su+: Whitening Sentence Representations for Better Semantics and Faster Retrieval, arXiv ’21
Zhang+: An Unsupervised Sentence Embedding Method by Mutual Information Maximization, EMNLP ’20
Carlsson+: Semantic Re-tuning with Contrastive Tension, ICLR ’21
Gao+: SimCSE: Simple Contrastive Learning of Sentence Embeddings, EMNLP ’21
Zhang+: Unsupervised Sentence Representation via Contrastive Learning with Mixing Negatives, AAAI ’22
Zhang+: A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space, ACL ’22
Zhou+: Debiased Contrastive Learning of Unsupervised Sentence Representations, ACL ’22
Cao+: Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding, ACL Findings ’22
Compared methods