Papers covered in this study group:

1. Transformers
2. A Simple Framework for Contrastive Learning of Visual Representations
3. A Primer in BERTology: What we know about how BERT works
4. Reformer: The Efficient Transformer
5. Language Models are Few-Shot Learners
6. Supervised Contrastive Learning
7. AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
8. Explainable Deep Learning: A Field Guide for the Uninitiated
9. YOLOv4: Optimal Speed and Accuracy of Object Detection
10. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
End-to-End Object Detection with Transformers: June study session
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: November study session → Deformable DETR