
AI最新論文読み会2021年まとめ (AI Latest-Papers Reading Group: 2021 Year-in-Review)

ai.labo.ocu
December 01, 2021


Transcript

  1. Osaka City University — 植田 大樹 — AI Latest-Papers Reading Group: One-Year Summary

  2. Top Recent in Last Year
     1. Transformers in Vision: A Survey
     2. The Modern Mathematics of Deep Learning
     3. High-Performance Large-Scale Image Recognition Without Normalization
     4. Cross-validation: what does it estimate and how well does it do it?
     5. How to avoid machine learning pitfalls: a guide for academic researchers
     6. How to represent part-whole hierarchies in a neural network
     7. Point Transformer
     8. Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
     9. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
     10. A Survey on Vision Transformer

  3. Top Recent in Last Year (same list as slide 2) — the papers covered in this study group
  4. Top Recent in Last Year (same list as slide 2) — ① A review of Transformer applications to vision
  5. Top Recent in Last Year (same list as slide 2) — ② A state-of-the-art model that beats EfficientNet without BN (batch normalization)
  6. Top Recent in Last Year (same list as slide 2) — ② EfficientNetV2 (EfficientNetV2: Smaller Models and Faster Training), which goes even beyond that model
  7. Top Recent in Last Year (same list as slide 2) — ③ Basics: for beginners / researchers
  8. Top Recent in Last Year (same list as slide 2) — ・ A paper in which Prof. Hinton proposes a new model design
  9. Top Recent in Last Year (same list as slide 2) — ・ Feature extraction by gradient descent resembles kernel methods
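     The claim in paper 8 (Domingos, 2020) can be stated compactly. The LaTeX below is a
     from-memory paraphrase of the main theorem in the gradient-flow limit, so the notation
     is approximate rather than the paper's exact statement:

       % "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine"
       % (paraphrased; continuous-time gradient descent assumed)
       \[
         f_{w}(x) \;\approx\; \sum_{i=1}^{m} a_i \, K^{p}(x, x_i) + b ,
         \qquad
         K^{p}(x, x') = \int_{c(t)} \nabla_{w} f_{w(t)}(x) \cdot \nabla_{w} f_{w(t)}(x') \, dt ,
       \]
       % c(t): the path traced by the weights during training
       % a_i : a path-weighted derivative of the loss at training example x_i
       % b   : the output of the model at initialization
     In other words, the trained predictor behaves like a kernel machine whose kernel measures
     how similarly the model's gradients respond to x and to each training point x_i.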
  10. Today's agenda: ① Review of Transformer applications to vision; ② EfficientNetV2; ③ Basics: Pitfalls, for beginners / researchers

  11. Today's agenda: ① Review of Transformer applications to vision; ② EfficientNetV2; ③ Basics: Pitfalls, for beginners / researchers

  12. Review of Transformers for Vision

  13. (Last year's closing slide, shown again.) 2020 AI year-in-review, after running the AI paper
      reading group through the whole year — Osaka City University, 植田 大樹. Prediction for the
      coming year: Transformers reach true practical use! → Hoping for faster, lighter-weight models;
      also applications to segmentation tasks, and fusion with GANs / contrastive learning...
  14. Mobile-Former: Bridging MobileNet and Transformer (Microsoft);
      MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer (Apple)
  15. Mobile-Former: Bridging MobileNet and Transformer Microsoft

  16. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer Apple

  17. Review of Transformers for Vision
      Classification: CCNet (Criss-Cross Attention), Stand-Alone Self-Attention, Local Relation Networks,
      Attention Augmented Convolutional Networks, Vectorized Self-Attention, ViT (Vision Transformer),
      DeiT (Data-efficient image Transformer)
      Detection: DETR (Detection Transformer), D-DETR (Deformable DETR)
      Segmentation: Axial-Attention for Panoptic Segmentation, CMSA (Cross-Modal Self-Attention)
      Image generation: iGPT (image GPT), Image Transformer, High-Resolution Image Synthesis, SceneFormer
      Super resolution: TTSR
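     All of the models in this taxonomy are built around the same core operation. As a
     reference point, the sketch below is a minimal single-head scaled dot-product
     self-attention in PyTorch; it is illustrative only, not code taken from the survey
     or from any of the listed models.

       import torch

       def self_attention(x, w_q, w_k, w_v):
           """Minimal single-head scaled dot-product self-attention.
           x: (batch, tokens, dim); w_q/w_k/w_v: (dim, dim) projection matrices."""
           q, k, v = x @ w_q, x @ w_k, x @ w_v                       # project tokens to queries/keys/values
           scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # pairwise token similarities
           attn = scores.softmax(dim=-1)                             # attention weights over all tokens
           return attn @ v                                           # weighted sum of value vectors

       # Example: a ViT-Base-like setting, 196 patch tokens of dimension 768.
       x = torch.randn(1, 196, 768)
       w_q, w_k, w_v = (torch.randn(768, 768) * 0.02 for _ in range(3))
       out = self_attention(x, w_q, w_k, w_v)   # shape: (1, 196, 768)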
  18. Review of Transformers for Vision

  19. Today's agenda: ① Review of Transformer applications to vision; ② EfficientNetV2; ③ Basics: Pitfalls, for beginners / researchers

  20. EfficientNetV2 — Progressive Training
      → train on small images first and progressively increase the size, with augmentation and
      regularization adjusted to match the image size
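     A minimal sketch of the progressive-learning idea above: each training stage uses larger
     images and stronger regularization than the previous one. The stage count and endpoint
     values here are illustrative, not the paper's exact settings.

       # Progressive learning schedule (sketch): image size and regularization are
       # interpolated linearly from "easy" to "hard" settings across training stages.
       def progressive_schedule(num_stages, size_range=(128, 300),
                                dropout_range=(0.1, 0.3), randaug_range=(5, 15)):
           stages = []
           for s in range(num_stages):
               t = s / max(num_stages - 1, 1)        # 0.0 at the first stage, 1.0 at the last
               interp = lambda lo, hi: lo + t * (hi - lo)
               stages.append({
                   "image_size": int(interp(*size_range)),        # small images first, large later
                   "dropout": round(interp(*dropout_range), 3),   # weak regularization first
                   "randaug_magnitude": round(interp(*randaug_range), 1),
               })
           return stages

       for stage in progressive_schedule(num_stages=4):
           print(stage)   # each stage would drive the dataloader resize/augmentation and model dropout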
  21. EfficientNetV2 — MBConv → Fused-MBConv, only in the early stages of the network:
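     To make the MBConv → Fused-MBConv swap concrete, here is a stripped-down PyTorch sketch of
     both blocks (squeeze-and-excitation, stochastic depth, and stride handling are omitted, and
     the layer sizes are illustrative). EfficientNetV2 applies the fused variant only in the early
     stages of the network, where depthwise convolutions are comparatively slow on accelerators,
     and keeps the regular MBConv in later stages.

       import torch
       from torch import nn

       class MBConv(nn.Module):
           """Stripped-down MBConv: 1x1 expand -> 3x3 depthwise -> 1x1 project, with residual."""
           def __init__(self, ch, expand=4):
               super().__init__()
               mid = ch * expand
               self.block = nn.Sequential(
                   nn.Conv2d(ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(),
                   nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depthwise conv
                   nn.BatchNorm2d(mid), nn.SiLU(),
                   nn.Conv2d(mid, ch, 1, bias=False), nn.BatchNorm2d(ch),
               )
           def forward(self, x):
               return x + self.block(x)

       class FusedMBConv(nn.Module):
           """Fused variant: the 1x1 expansion and depthwise 3x3 become one regular 3x3 conv."""
           def __init__(self, ch, expand=4):
               super().__init__()
               mid = ch * expand
               self.block = nn.Sequential(
                   nn.Conv2d(ch, mid, 3, padding=1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(),
                   nn.Conv2d(mid, ch, 1, bias=False), nn.BatchNorm2d(ch),
               )
           def forward(self, x):
               return x + self.block(x)

       x = torch.randn(1, 32, 56, 56)
       print(MBConv(32)(x).shape, FusedMBConv(32)(x).shape)   # both preserve (1, 32, 56, 56)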

  22. EfficientNetV2

  23. Today's agenda: ① Review of Transformer applications to vision; ② EfficientNetV2; ③ Basics: Pitfalls, for beginners / researchers

  24. Pitfalls — for beginners / researchers

  25. (Also from last year's deck.) Goals for the coming year: ・ keep the AI paper study group going
      ・ put Transformers to use ・ hold regular study sessions on applied AI — Osaka City University,
      植田 大樹, 2020 AI year-in-review, after running the AI paper reading group through the whole year

  26. Goals for the coming year

  27. None
  28. Have a happy New Year!