AI最新論文読み会2021年まとめ (AI Latest-Papers Reading Group: 2021 Summary)


Transcript

  1. Daiju Ueda, Osaka City University
     AI Latest-Papers Reading Group
     One-Year Summary

  2. Top Recent in Last Year
     1. Transformers in Vision: A Survey
     2. The Modern Mathematics of Deep Learning
     3. High-Performance Large-Scale Image Recognition Without Normalization
     4. Cross-validation: what does it estimate and how well does it do it?
     5. How to avoid machine learning pitfalls: a guide for academic researchers
     6. How to represent part-whole hierarchies in a neural network
     7. Point Transformer
     8. Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
     9. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
     10. A Survey on Vision Transformer

  3. Top Recent in Last Year (same list as slide 2)
     Papers covered in this reading group

  4. Top Recent in Last Year (same list)
     ① A review of Transformer applications to the image domain

  5. Top Recent in Last Year (same list)
     ② A state-of-the-art model that surpassed EfficientNet without BN (batch normalization) — see the sketch below
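    For context on that BN-free model: its replacement for batch-norm stability is adaptive gradient clipping (AGC). A minimal sketch follows (our illustration, not code from the paper; simplified to per-tensor clipping, whereas the paper clips unit-wise, and clip_ratio here is an illustrative value):

      import torch

      def adaptive_grad_clip(params, clip_ratio=0.01, eps=1e-3):
          # Clip each parameter's gradient when its norm grows too large
          # relative to the norm of the parameter itself.
          for p in params:
              if p.grad is None:
                  continue
              p_norm = p.detach().norm().clamp_min(eps)   # avoid clipping to zero
              g_norm = p.grad.detach().norm()
              max_norm = clip_ratio * p_norm
              if g_norm > max_norm:
                  p.grad.mul_(max_norm / g_norm)          # rescale in place

    The ratio test makes the clipping scale-aware: a large gradient is only "too large" relative to the weight it updates, which is what lets training stay stable without normalization layers.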

  6. Top Recent in Last Year (same list)
     ② ...and then EfficientNetV2 surpassed even that
     (EfficientNetV2: Smaller Models and Faster Training)

  7. Top Recent in Last Year (same list)
     ③ Fundamentals, for beginners and researchers

  8. Top Recent in Last Year (same list)
     ・Prof. Hinton's paper proposing a new model design

  9. Top Recent in Last Year (same list)
     ・Feature extraction by gradient descent resembles kernel methods
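    For context on that last bullet, the kernel-machine paper's headline result, sketched roughly here (the precise conditions, e.g. training by gradient flow, are as stated in the paper), writes a gradient-descent-trained model as a kernel machine over the training points:

      f(x) \;\approx\; \sum_{i=1}^{m} a_i \, K_{\mathrm{path}}(x, x_i) + b,
      \qquad
      K_{\mathrm{path}}(x, x') = \int_{c} \nabla_w f_w(x) \cdot \nabla_w f_w(x') \, dt

    where c is the path the weights trace during training, the a_i are path-averaged loss derivatives for each training example, and b comes from the initial model. This is the sense in which the learned features behave like a (path-dependent) kernel method.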

  10. Today's Agenda
      ① Review of Transformers applied to the image domain
      ② EfficientNetV2
      ③ Fundamentals: pitfalls, for beginners/researchers

  11. Today's Agenda
      ① Review of Transformers applied to the image domain
      ② EfficientNetV2
      ③ Fundamentals: pitfalls, for beginners/researchers

  12. Review of Transformers for Vision


  13. Last Year's Predictions
      Transformer: true practical adoption!
      → Hopes for lighter-weight models and quantization.
      Beyond that: application to segmentation tasks,
      fusion with GANs and contrastive learning, ...
      (thumbnail of last year's deck, "2020年AIまとめ": a year of AI paper reading sessions — Daiju Ueda, Osaka City University)

  14. Mobile-Former: Bridging MobileNet and Transformer (Microsoft)
      MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer (Apple)

  15. Mobile-Former: Bridging MobileNet and Transformer
    Microsoft


  16. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
    Apple


  17. Review of Transformers for Vision
      Classification: CCNet (Criss-Cross Attention), Stand-Alone Self-Attention,
      Local Relation Networks, Attention Augmented Convolutional Networks,
      Vectorized Self-Attention, ViT (Vision Transformer), DeiT (Data-efficient image Transformer)
      Detection: DETR (Detection Transformer), D-DETR (Deformable DETR)
      Segmentation: Axial-attention for Panoptic Segmentation, CMSA (Cross-modal Self-Attention)
      Image generation: iGPT (image GPT), Image Transformer, High-resolution Image Synthesis, SceneFormer
      Super-resolution: TTSR
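    As a reference point for the classification branch above, here is a minimal, self-contained sketch (PyTorch; our illustration, not code from any of the listed papers) of the patch-embedding plus global self-attention step that the ViT family builds on:

      import torch
      import torch.nn as nn

      class MiniViTBlock(nn.Module):
          def __init__(self, patch=16, dim=192, heads=3):
              super().__init__()
              # Non-overlapping patch embedding: one conv with stride = kernel = patch size
              self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
              self.norm = nn.LayerNorm(dim)
              self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

          def forward(self, x):                                  # x: (B, 3, H, W)
              tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, num_patches, dim)
              h = self.norm(tokens)
              out, _ = self.attn(h, h, h)                        # every patch attends to every patch
              return tokens + out                                # residual connection

      x = torch.randn(1, 3, 224, 224)
      print(MiniViTBlock()(x).shape)                             # torch.Size([1, 196, 192])

    The global receptive field — every patch attending to every other patch from the first layer — is what distinguishes these models from convolutional backbones.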

  18. Review of Transformers for Vision


  19. Today's Agenda
      ① Review of Transformers applied to the image domain
      ② EfficientNetV2
      ③ Fundamentals: pitfalls, for beginners/researchers

  20. EfficientNetV2
      Progressive Training
      → Train on progressively larger images, with augmentation and regularization adjusted to match the current image size
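    A minimal sketch of that progressive-learning idea (plain Python; the stage count, image sizes, and regularization ranges are illustrative, not the paper's exact schedule):

      def progressive_schedule(num_stages=4, sizes=(128, 300),
                               dropout=(0.1, 0.3), randaug=(5, 15)):
          # Image size and regularization strength ramp up together, stage by stage.
          for stage in range(num_stages):
              t = stage / (num_stages - 1)          # 0.0 at the first stage, 1.0 at the last
              lerp = lambda lo, hi: lo + t * (hi - lo)
              yield {
                  "image_size": int(lerp(*sizes)),           # small images first
                  "dropout": round(lerp(*dropout), 3),       # weak regularization first
                  "randaug_magnitude": int(lerp(*randaug)),  # stronger augmentation later
              }

      for stage_cfg in progressive_schedule():
          print(stage_cfg)   # train for a few epochs with this size + regularization strength

    The paper's key observation is that the two knobs must move together: small images with strong regularization (or large images with weak regularization) hurt accuracy.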

  21. EfficientNetV2
      MBConv → Fused-MBConv (in the early stages only)
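    For reference, a simplified sketch of the two blocks (PyTorch; squeeze-and-excitation, batch norm, and residual/stride logic omitted). Fused-MBConv replaces the 1x1 expansion + 3x3 depthwise pair with a single regular 3x3 convolution, which runs faster on accelerators at the large spatial resolutions of the early stages:

      import torch.nn as nn

      def mbconv(cin, cout, expand=4):
          # MBConv: 1x1 expansion -> 3x3 depthwise -> 1x1 projection
          mid = cin * expand
          return nn.Sequential(
              nn.Conv2d(cin, mid, 1), nn.SiLU(),
              nn.Conv2d(mid, mid, 3, padding=1, groups=mid), nn.SiLU(),  # depthwise
              nn.Conv2d(mid, cout, 1),
          )

      def fused_mbconv(cin, cout, expand=4):
          # Fused-MBConv: one regular 3x3 conv replaces expansion + depthwise
          mid = cin * expand
          return nn.Sequential(
              nn.Conv2d(cin, mid, 3, padding=1), nn.SiLU(),
              nn.Conv2d(mid, cout, 1),
          )

    Using the fused block only early is the trade-off the slide refers to: regular convolutions cost more FLOPs per channel, so depthwise MBConv still wins in the deeper, higher-channel stages.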

  22. EfficientNetV2

  23. Today's Agenda
      ① Review of Transformers applied to the image domain
      ② EfficientNetV2
      ③ Fundamentals: pitfalls, for beginners/researchers

  24. Pitfalls, for Beginners and Researchers

  25. Last Year's Resolutions
      ・Keep the AI paper study group going
      ・Put Transformers to use
      ・Hold regular sessions of a study group devoted to medical AI applications
      (thumbnail of last year's deck, "2020年AIまとめ" — Daiju Ueda, Osaka City University)

  26. Last Year's Resolutions

  27. (image-only slide)

  28. Have a happy New Year!