Slide 52
References
● Chitta, Kashyap, Aditya Prakash, and Andreas Geiger. 2021. “NEAT: Neural Attention Fields for End-to-End Autonomous Driving.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/2109.04456.
● Can, Yigit Baran, Alexander Liniger, Danda Pani Paudel, and Luc Van Gool. 2021. “Structured Bird’s-Eye-View Traffic Scene Understanding from Onboard Images.” arXiv [cs.CV]. http://arxiv.org/abs/2110.01997.
● Wang, Yue, Vitor Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, and Justin Solomon. 2021. “DETR3D: 3D Object Detection from Multi-View Images via 3D-to-2D Queries.” arXiv [cs.CV]. http://arxiv.org/abs/2110.06922.
● Zhou, Brady, and Philipp Krähenbühl. n.d. “Cross-View Transformers for Real-Time Map-View Semantic Segmentation.” UT Austin. Accessed July 30, 2022. https://github.com/bradyz.
● Peng, Lang, Zhirong Chen, Zhangjie Fu, Pengpeng Liang, and Erkang Cheng. 2022. “BEVSegFormer: Bird’s Eye View Semantic Segmentation From Arbitrary Camera Rigs.” arXiv [cs.CV]. http://arxiv.org/abs/2203.04050.
● Li, Zhiqi. n.d. “BEVFormer: A Camera-Only Framework for Autonomous Driving Perception (e.g., 3D Object Detection and Semantic Map Segmentation).” GitHub. Accessed May 25, 2022. https://github.com/zhiqi-li/BEVFormer.
● Harley, Adam W., Zhaoyuan Fang, Jie Li, Rares Ambrus, and Katerina Fragkiadaki. 2022. “A Simple Baseline for BEV Perception Without LiDAR.” arXiv [cs.CV]. http://arxiv.org/abs/2206.07959.
● Elluswamy, Ashok. 2022. Keynote, CVPR’22 Workshop on Autonomous Driving (WAD). Tesla. (link)