Slide 1

Vision Transformer
Python Machine Learning Study Group in Niigata #12, 2021-02-20, @kasacchiful

Slide 2

Hiroshi Kasahara (@kasacchiful)
Software Developer
Communities: • JAWS-UG Niigata • Python ML in Niigata (New!!) • JaSST Niigata • ASTER • SWANII • etc.

Slide 3

JAWS-UG Niigata #9
https://jawsug-niigata.connpass.com/event/…

Slide 4

Agenda
1. What is Vision Transformer?
2. A quick review of Transformer
3. Advantages and disadvantages of Vision Transformer
4. My observations at this point
5. Transformer applications beyond image classification

Slide 5

Vision Transformer https://github.com/google-research/vision_transformer

Slide 6

What is Vision Transformer?
• Applies the "Transformer", the foundation of modern natural language processing, to image classification
• Does not use CNNs, the standard architecture for image classification
• Achieves performance on par with or better than various SoTA models
• Requires less compute at training time (though it does need a large amount of data)
• Currently under review at ICLR 2021

Slide 7

How Vision Transformer works
1. Split the image into N patches
2. Flatten each patch and apply a linear projection
• The projection parameters are learned during training
3. Create position information for the original patches
4. Feed the projected patches and the position information into the Transformer Encoder
5. Classify the Transformer Encoder output with an MLP (see the sketch below)
https://github.com/lucidrains/vit-pytorch
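
A minimal sketch of these five steps in PyTorch. This is my own illustration, not the code from the linked repo: the class name TinyViT and all hyperparameters are placeholders, and nn.TransformerEncoder differs in detail from the paper's pre-norm encoder.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2        # step 1: N patches
        patch_dim = 3 * patch_size * patch_size              # flattened patch length
        self.patch_size = patch_size
        self.proj = nn.Linear(patch_dim, dim)                # step 2: learned linear projection
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_emb = nn.Parameter(torch.zeros(1, num_patches + 1, dim))  # step 3: learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        self.encoder = nn.TransformerEncoder(layer, depth)   # step 4: Transformer Encoder
        self.head = nn.Linear(dim, num_classes)              # step 5: classification head

    def forward(self, img):                                  # img: (B, 3, H, W)
        p = self.patch_size
        B, C, H, W = img.shape
        x = img.unfold(2, p, p).unfold(3, p, p)              # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)  # flatten each patch
        x = self.proj(x)                                     # (B, N, dim)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_emb        # prepend [class] token, add positions
        x = self.encoder(x.transpose(0, 1)).transpose(0, 1)  # encoder expects (seq, batch, dim)
        return self.head(x[:, 0])                            # classify from the [class] token

logits = TinyViT()(torch.randn(1, 3, 224, 224))              # -> (1, 1000)
```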

Slide 8

Vision Transformer performance
[Figure: benchmark results, cited from "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"]

Slide 9

A quick review of Transformer

Slide 10

Transformer
• A machine translation model built from Attention
• Attention can be interpreted as a dictionary object (Query, Key, Value); a minimal sketch follows below
• Given a Query, you obtain the places to look at (Keys) and the values stored there (Values)
• Since the Keys and Values come from prior knowledge, they act as a kind of Memory
• Self-Attention: captures relationships between words within a sentence; Query/Key/Value are all generated from the same words
• Source-Target Attention: captures correspondences between two sequences; the Query comes from the decoder side, the Key/Value from the encoder side
• Vision Transformer uses a modified version of the Transformer's Encoder
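
As a concrete illustration, here is a minimal scaled dot-product attention sketch in PyTorch, following the formula from "Attention Is All You Need" (the function name and tensor shapes are mine):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q: (B, Lq, d), k: (B, Lk, d), v: (B, Lk, dv)
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)  # compare the Query against every Key
    weights = F.softmax(scores, dim=-1)                     # where to look
    return weights @ v                                      # weighted sum of the Values

x = torch.randn(2, 5, 64)
self_attn = attention(x, x, x)    # Self-Attention: Q/K/V all derive from the same sequence
dec = torch.randn(2, 3, 64)
src_tgt = attention(dec, x, x)    # Source-Target Attention: Q from decoder, K/V from encoder
```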

Slide 11

The Transformer model
[Figure: model architecture, cited from "Attention Is All You Need"]

Slide 12

Attention examples
https://colab.research.google.com/github/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/hello_t2t.ipynb
[Figure: attention visualizations for Self-Attention and Source-Target Attention]

Slide 13

Encoder comparison: Vision Transformer vs. Transformer
[Figure: side-by-side encoder diagrams of Vision Transformer and the original Transformer]

Slide 14

Advantages and disadvantages of Vision Transformer

Slide 15

Advantages of Vision Transformer (source: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
• High performance
• Matches or exceeds various SoTA models
• Requires less compute
• Pre-training BiT or Noisy Student takes roughly 10,000 TPU-core-days, while ViT-Huge takes about 2,500 TPU-core-days, roughly a quarter

Slide 16

Disadvantages of Vision Transformer
• Needs a huge amount of data
• The reported results fine-tune a model pre-trained on the giant "JFT-300M" dataset
• Trained only on the ImageNet dataset, it does not outperform existing SoTA models
➡ It does not work well with small datasets
➡ It shows its true strength on huge datasets

Slide 17

My observations at this point

Slide 18

How to make good use of Vision Transformer
• How do you prepare that much data?
• If a model pre-trained on a large-scale dataset is publicly available, fine-tune it for your task (see the sketch below)
• Prepare and generate the data yourself
• Research on self-supervised learning is advancing, so delegating part of the labeling to self-supervised learning is another option
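
A minimal fine-tuning sketch, assuming the timm library (which distributes pre-trained ViT weights); the model name, class count, and learning rate here are placeholders:

```python
import timm
import torch

# Load a ViT pre-trained on a large-scale dataset and swap in a new 10-class head.
model = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=10)

# A common first step: freeze the backbone and train only the classifier head.
for name, param in model.named_parameters():
    if not name.startswith('head'):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
# ...then run a standard training loop on your own dataset.
```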

Slide 19

How to make good use of Vision Transformer
• What if you cannot prepare a large amount of data?
• If an existing model is accurate enough for the job, use that instead
• For image classification, EfficientNet is now available in tf.keras.applications, so it is easy to use (see the sketch below)
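
For example, a minimal transfer-learning sketch with the bundled EfficientNet (available since TensorFlow 2.3; the input size and 10-class head are placeholders):

```python
import tensorflow as tf

# EfficientNetB0 backbone with ImageNet weights, without the classification top.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights='imagenet', input_shape=(224, 224, 3))
base.trainable = False                                   # train only the new head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),     # e.g. 10 target classes
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```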

Slide 20

Where Vision Transformer may go next
• Even before Vision Transformer, there were examples of applying the Transformer outside natural language processing
• Improved versions of Vision Transformer that reach good accuracy on smaller datasets will likely appear
➡ Transformer-based models for all kinds of tasks are worth watching
➡ It may also be worth keeping up with self-supervised learning methods

Slide 21

Transformer applications beyond image classification

Slide 22

Transformer applications beyond image classification
Some examples of applying the Transformer to various tasks:
• DETR: Transformer for object detection
• Axial-Attention: Transformer for segmentation
• Image Transformer: Transformer for image generation
• VideoBERT: Transformer for video understanding
• Set Transformer: Transformer for clustering

Slide 23

Summary

Slide 24

Summary
• "Vision Transformer", which applies the Transformer to image classification, has arrived
• It performs well, and training requires relatively little compute
• However, it needs a huge dataset
• To use Vision Transformer, either fine-tune a model pre-trained on a large-scale dataset, or prepare the data yourself
• If you prepare the data yourself, it may be worth considering self-supervised learning to help with labeling
• The Transformer is spreading beyond vision as well, so keep an eye on it as a coming trend

Slide 25

The end

Slide 26

References
• An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  • https://arxiv.org/abs/2010.11929
• google-research/vision_transformer
  • https://github.com/google-research/vision_transformer
• emla2805/vision-transformer: Tensorflow implementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)
  • https://github.com/emla2805/vision-transformer
• lucidrains/vit-pytorch: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
  • https://github.com/lucidrains/vit-pytorch
• A revolution in image recognition: explaining the much-talked-about "Vision Transformer" (in Japanese) - Qiita
  • https://qiita.com/omiita/items/0049ade809c4817670d7
• Trying image recognition with a Transformer: Vision Transformer (in Japanese) | GMO Internet Next-Generation Systems Research Lab
  • https://recruit.gmo.jp/engineer/jisedai/blog/vision_transformer/

Slide 27

References
• Attention Is All You Need
  • https://arxiv.org/abs/1706.03762
• End-to-End Object Detection with Transformers
  • https://arxiv.org/abs/2005.12872
• Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
  • https://arxiv.org/abs/2003.07853
• Image Transformer
  • https://arxiv.org/abs/1802.05751
• VideoBERT: A Joint Model for Video and Language Representation Learning
  • https://arxiv.org/abs/1904.01766
• Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
  • https://arxiv.org/abs/1810.00825