Slide 138
• [Chandra+, NeurIPSʼ22] : Gradient Descent: The Ultimate Optimizer
• [Snoek+, NeurIPSʼ12] : Practical Bayesian Optimization of Machine Learning Algorithms
• [Touvron+, ECCVʼ22] : DeiT III: Revenge of the ViT
• [Liu+, CVPRʼ22] : A ConvNet for the 2020s
• [Zoph+, ICLRʼ17] : Neural Architecture Search with Reinforcement Learning
• [Real+, ICMLʼ17] : Large-Scale Evolution of Image Classifiers
• [Pham+, ICMLʼ18] : Efficient Neural Architecture Search via Parameters Sharing
• [Liu+, ICLRʼ19] : DARTS: Differentiable Architecture Search
• [Cai+, ICLRʼ19] : ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
• [Xu+, ICLRʼ20] : PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search
• [Chen+, ICCVʼ21] : AutoFormer: Searching Transformers for Visual Recognition
• [Li+, UAIʼ19] : Random Search and Reproducibility for Neural Architecture Search
• [So+, NeurIPSʼ21] : Searching for Efficient Transformers for Language Modeling
• [Yang+, ICLRʼ20] : NAS evaluation is frustratingly hard
• [So+, ICMLʼ19] : The Evolved Transformer
• [Mellor+, ICMLʼ21] : Neural Architecture Search without Training
• [Lin+, ICCVʼ21] : Zen-NAS: A Zero-Shot NAS for High-Performance Image Recognition
References