Slide 13
• A model architecture built solely from attention mechanisms
  • Eliminates the RNNs, LSTMs, and CNNs that had been widely used in NLP until then
• A mechanism that takes a sequence of vectors as input and outputs a sequence of vectors
  • Accounts for interactions among the input vectors
• Two building blocks exist: the Encoder and the Decoder
  • Encoder-only: BERT, LUKE, …
  • Decoder-only: GPT, GPT-2, GPT-3, …
  • Encoder-Decoder: BART, T5, UL2, …
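The sequence-in, sequence-out interaction described above can be sketched as scaled dot-product self-attention (the core operation in Vaswani et al.). This is a minimal NumPy illustration, not the full multi-head Transformer; all variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.
    X: (seq_len, d_model) sequence of input vectors.
    Returns a (seq_len, d_k) sequence of output vectors,
    where each output mixes information from all positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # scores[i, j] measures the interaction between positions i and j
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

# Toy example: 5 input vectors of dimension 8, projected to dimension 4
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq = rng.normal(size=(d_model, d_k))
Wk = rng.normal(size=(d_model, d_k))
Wv = rng.normal(size=(d_model, d_k))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4): one output vector per input vector
```

An Encoder layer applies this over the whole input sequence, while a Decoder additionally masks future positions; both stack this operation with feed-forward layers.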
Transformer
Vaswani et al., Attention Is All You Need, NeurIPS 2017.
Accessible explanation (Japanese video): 【深層学習】Transformer - Multi-Head Attentionを理解してやろうじゃないの【ディープラーニングの世界 vol.28】
[Overview diagram: Encoder and Decoder blocks]