LoRA: Low-Rank Adaptation of Large Language Models
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. ICLR 2022. https://arxiv.org/abs/2106.09685
Presenter: Hayato Tsukagoshi, Graduate School of Informatics, Nagoya University, Japan
BERT: Bidirectional Encoder Representations from Transformers
• The pre-training → fine-tuning paradigm has become widespread
Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019.
Natural language processing with BERT
• Training cost (dataset collection and training time) drops dramatically; a code sketch follows below
Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019.
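A minimal sketch of the pre-training → fine-tuning paradigm using Hugging Face Transformers: a pre-trained BERT checkpoint is loaded and all of its weights are updated on a downstream classification task. The model name, label count, learning rate, and the single toy batch are illustrative assumptions, not details from the slides.

```python
# Sketch: full fine-tuning of a pre-trained BERT (illustrative hyperparameters).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Every pre-trained weight is trainable during fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["an example sentence"], return_tensors="pt")
labels = torch.tensor([1])  # toy label for the single example

model.train()
outputs = model(**batch, labels=labels)  # forward pass with task loss
outputs.loss.backward()                  # gradients flow through every layer
optimizer.step()
```

Because only a small labeled dataset and a few epochs of training are needed on top of the pre-trained weights, both data collection and compute cost fall sharply compared with training from scratch.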
Prefix-Tuning / Prompt Tuning
• Prompt Tuning updates only the soft prompts (see the sketch below)
• In contrast to the "discrete" prompts used with LLMs
[Figure: Prompt Tuning vs. Prefix-Tuning]
Li et al., Prefix-Tuning: Optimizing Continuous Prompts for Generation, ACL-IJCNLP 2021.
Lester et al., The Power of Scale for Parameter-Efficient Prompt Tuning, EMNLP 2021.
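A minimal sketch of Prompt Tuning under the assumptions that the backbone is frozen and only a small matrix of continuous "soft prompt" embeddings, prepended to the input embeddings, is trained. The GPT-2 backbone, prompt length, and learning rate are illustrative choices, not the exact setup in Lester et al.

```python
# Sketch: Prompt Tuning -- freeze the language model, train only soft prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # freeze every pre-trained weight

n_prompt, d_model = 20, model.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)  # only the prompt is updated

ids = tokenizer("an example sentence", return_tensors="pt").input_ids
tok_emb = model.get_input_embeddings()(ids)             # (1, seq, d_model)
inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)

# Ignore the soft-prompt positions in the loss; predict only the real tokens.
labels = torch.cat(
    [torch.full((1, n_prompt), -100, dtype=torch.long), ids], dim=1
)
loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()   # gradients reach only the soft prompt
optimizer.step()
```

Unlike the discrete (hand-written) prompts used with LLMs, the soft prompt is a continuous tensor optimized by gradient descent; Prefix-Tuning differs mainly in injecting trained vectors at every layer rather than only at the input.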