360° Object Detection Robot Car

This slide deck was used in the AMD Pervasive AI Developer Contest.
Main Project:
https://www.hackster.io/iotengineer22/360-object-detection-robot-car-bdb1bd

misoji engineer

January 29, 2025

Transcript

  1. 360° Object Detection Robot Car

    @misoji_engineer
    AMD Pervasive AI Developer Contest, September 19, 2024
    Main Project: https://www.hackster.io/iotengineer22/360-object-detection-robot-car-bdb1bd
  2. Subprojects

    Created 14 subprojects to make it easier for beginners to recreate the build. The subprojects are organized into the following chapters:
    1. PYNQ + GPIO (LED Blinking)
    2. PYNQ + PWM (DC-Motor Control)
    3. Object Detection (Yolo) with DPU-PYNQ
    4. Implementation DPU, GPIO, and PWM
    5. Remote Control 360° Camera
    6. GStreamer + OpenCV with 360° Camera
    7. 360 Live Streaming + Object Detect (DPU)
    8. ROS2 3D Marker from 360 Live Streaming
    9. Control 360° Object Detection Robot Car
    10. Improve Object Detection Speed with YOLOX
    11. Benchmark Architectures of the DPU
    12. Power Consumption of 360° Object Detection Robot Car
    13. Application to Vitis AI ONNX Runtime Engine (VOE)
    14. Appendix: Object Detection Using YOLOX with a Webcam
    The long project is divided into these 14 subprojects.
  3. Videos

    Provided 35 demo videos to make the content clearer. We made 35 detailed videos and published them on Hackster.io and YouTube.
    https://www.youtube.com/playlist?list=PLVu7mPgGBMbaku3Ff9iG18OYoNWfO1pT7
  4. Open Software Source

    The whole software environment (Jupyter Notebooks, Python) is open. The detailed software sources are saved and linked on the main project page:
    https://github.com/iotengineer22/AMD-Pervasive-AI-Developer-Contest
    .
    ├── bom                # BOM file (BOM list)
    ├── imgs               # Image files (electrical diagrams)
    ├── jupyter_notebooks  # Jupyter Notebooks (pre-test programs)
    ├── pcb                # PCB link files
    ├── src                # Python files (test programs)
    ├── LICENSE
    └── README.md
  5. Prepared all pre-test environments

    Anyone can run the pre-test programs (.ipynb) easily. Jupyter Notebook (pre-test program) examples:
    https://github.com/iotengineer22/AMD-Pervasive-AI-Developer-Contest/tree/main/jupyter_notebooks
    All the necessary files are provided.
  6. Prepared all main environments

    Similarly, anyone can easily follow the main programs (.py). Python (main program):
    https://github.com/iotengineer22/AMD-Pervasive-AI-Developer-Contest/tree/main/src
    All the necessary files are provided.
  7. Links for the projects

    Provided code to run all the demos introduced in the projects.
    ・Controller test with Jupyter Notebooks
      Subproject_9. Control 360° Object Detection Robot Car
      /jupyter_notebooks/pynq-original-dpu-model/controller-pwm-gpio-test.ipynb
    ・YOLOX object detection with Python
      Subproject_10. Improve Object Detection Speed with YOLOX
      src/yolox-test/app_gst-yolox-real-360-2divide.py
  8. Code Overview (1)

    from pynq_dpu import DpuOverlay
    from pynq import Overlay
    from pynq.lib import AxiGPIO
    import cv2
    import threading

    overlay = DpuOverlay(dpu_model)
    overlay.load_model(cnn_xmodel)

    # Define and create the GStreamer pipeline for the 360° camera
    pipeline = "thetauvcsrc mode=2K ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink"

    # Initialize the VideoCapture object
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

    # Initialize and start the motor-controller thread
    controller_thread = threading.Thread(target=continuous_controller)
    controller_thread.start()

    360-degree AI vision
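    The continuous_controller function started above is not shown on the slide. As a rough illustration only, a thread body like the hypothetical sketch below could keep applying the latest requested motor command; the shared state and the set_motor_duty helper are placeholders, not the project's actual code.

    # Hypothetical sketch of a continuous_controller-style thread body.
    # `running`, `target_duty`, and set_motor_duty() are assumed placeholders.
    import time

    running = True
    target_duty = {"left": 0, "right": 0}     # requested duty cycle in percent

    def continuous_controller():
        while running:
            # set_motor_duty stands in for the project's PWM/GPIO calls
            set_motor_duty("left", target_duty["left"])
            set_motor_duty("right", target_duty["right"])
            time.sleep(0.02)                  # ~50 Hz update loop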
  9. Code Overview (2)

    try:
        while True:
            ret, frame = cap.read()
            width = frame.shape[1]  # frame width (defined earlier in the full script)

            # Split the 360° image into 2 quadrants
            quadrants = {
                'q1': frame[:, :width // 2],  # front
                'q2': frame[:, width // 2:],  # back
            }

            # Apply YOLOX object detection and ROS2 output to each quadrant
            for key, img in quadrants.items():
                d_boxes, d_scores, d_classes = yolox_run(img, key)

                # Publish each quadrant as a separate ROS message
                img_publishers[key].publish(bridge.cv2_to_imgmsg(img, encoding="bgr8"))
                publish_markers(box_publishers, node_box, d_boxes, d_classes, key)
    finally:
        cap.release()  # release the camera when the loop exits

    360-degree AI vision
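    The helpers yolox_run and publish_markers are only referenced on the slide. The hedged sketch below shows one way a publish_markers-style helper could turn detection boxes into RViz markers with rclpy; the signature, fixed frame, pixel-to-meter scaling, and message layout are assumptions, not the project's actual implementation.

    # Hypothetical publish_markers-style helper (not the project's code).
    # Assumes `publishers` maps each quadrant key to a MarkerArray publisher created
    # on `node`, and that boxes are pixel-space (x0, y0, x1, y1) tuples.
    from rclpy.node import Node
    from visualization_msgs.msg import Marker, MarkerArray

    def publish_markers(publishers, node: Node, boxes, classes, key):
        markers = MarkerArray()
        for i, (box, cls) in enumerate(zip(boxes, classes)):
            x0, y0, x1, y1 = box
            m = Marker()
            m.header.frame_id = "map"                     # assumed fixed frame
            m.header.stamp = node.get_clock().now().to_msg()
            m.ns = f"{key}/{int(cls)}"                    # group markers by quadrant/class
            m.id = i
            m.type = Marker.CUBE
            m.action = Marker.ADD
            # Place a cube at the box center; pixels are scaled down arbitrarily here.
            m.pose.position.x = float(x0 + x1) / 2.0 / 100.0
            m.pose.position.y = float(y0 + y1) / 2.0 / 100.0
            m.pose.orientation.w = 1.0
            m.scale.x = max(float(x1 - x0) / 100.0, 0.05)
            m.scale.y = max(float(y1 - y0) / 100.0, 0.05)
            m.scale.z = 0.05
            m.color.r, m.color.g, m.color.b, m.color.a = 0.0, 1.0, 0.0, 0.8
            markers.markers.append(m)
        publishers[key].publish(markers)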
  10. Open Hardware Source

    All hardware data (diagrams and PCBs) are open. The detailed hardware sources are also saved:
    https://www.hackster.io/iotengineer22/360-object-detection-robot-car-bdb1bd#schematics
    ・Electrical diagrams
    ・Debug PCB data
    ・Motor-driver PCB data
  11. Simple and Low-cost BOM

    This low-cost robot costs about $550, including the 360° camera. The detailed BOM list is also saved:
    https://www.hackster.io/iotengineer22/360-object-detection-robot-car-bdb1bd#toc-2--bom-1
    ・360° camera       … $370
    ・Robot structure   … $130
    ・Original PCBs     … $30
    ・Wires and spacers … $20
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    ・Total             … $550
  12. 360-degree AI vision

    Provided detailed explanations of the key points (AI vision).
    ・Implement DPU, GPIO, and PWM with PYNQ (see the sketch after this list)
      Subproject_4. Implementation DPU, GPIO, and PWM
    ・ROS2 output from 360 live streaming
      Subproject_8. ROS2 3D Marker from 360 Live Streaming
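    As a rough illustration of the PYNQ side, the hypothetical sketch below blinks an LED over AXI GPIO and writes a motor duty cycle over MMIO. The bitstream name, IP names, and the PWM register offset are assumptions for illustration, not the project's actual design.

    # Hypothetical sketch: LED blinking via AxiGPIO and a duty-cycle write via MMIO.
    # Bitstream name, IP names, and the register offset (0x0) are assumed placeholders.
    import time
    from pynq import Overlay, MMIO
    from pynq.lib import AxiGPIO

    overlay = Overlay("example_design.bit")          # assumed bitstream name

    # LED blinking over AXI GPIO (assumes an IP called "axi_gpio_0")
    leds = AxiGPIO(overlay.ip_dict["axi_gpio_0"]).channel1
    for _ in range(5):
        leds.write(0x1, 0xF)                         # drive LED0 high
        time.sleep(0.5)
        leds.write(0x0, 0xF)                         # drive LED0 low
        time.sleep(0.5)

    # Motor duty cycle over MMIO (assumes a custom PWM IP in the overlay)
    pwm_info = overlay.ip_dict["pwm_motor_0"]        # assumed IP name
    pwm = MMIO(pwm_info["phys_addr"], pwm_info["addr_range"])
    pwm.write(0x0, 50)                               # assumed register: duty in percent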
  13. Update Object Detection

    Implemented YOLOX for object detection with DPU-PYNQ.
    ・Updated from YOLOv3 (the DPU-PYNQ sample program) to YOLOX for a speedup (a DPU inference sketch follows below)
      Subproject_10. Improve Object Detection Speed with YOLOX
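    For reference, the usual DPU-PYNQ/VART calling pattern looks roughly like the sketch below. The .xmodel file name, the input frame, and the preprocess/yolox_postprocess helpers are assumed placeholders; the real project also handles YOLOX-specific pre- and post-processing.

    # Rough sketch of DPU inference with the DPU-PYNQ runner API.
    # Model file name and the preprocess()/yolox_postprocess() helpers are placeholders.
    import numpy as np
    from pynq_dpu import DpuOverlay

    overlay = DpuOverlay("dpu.bit")
    overlay.load_model("yolox_nano.xmodel")          # assumed model file name
    dpu = overlay.runner

    in_tensor = dpu.get_input_tensors()[0]
    out_tensors = dpu.get_output_tensors()

    # Allocate input/output buffers with the shapes the DPU expects
    input_data = [np.empty(tuple(in_tensor.dims), dtype=np.float32, order="C")]
    output_data = [np.empty(tuple(t.dims), dtype=np.float32, order="C") for t in out_tensors]

    input_data[0][...] = preprocess(frame)           # hypothetical preprocessing helper

    job_id = dpu.execute_async(input_data, output_data)
    dpu.wait(job_id)

    boxes, scores, classes = yolox_postprocess(output_data)  # hypothetical helper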
  14. Benchmark

    Conducted benchmarks for processing speed and power consumption (a minimal timing sketch follows this list).
    ・DPU inference benchmark
      Subproject_11. Benchmark Architectures of the DPU
    ・Power consumption benchmark
      Subproject_12. Power Consumption of 360° Object Detection Robot Car
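    One simple way to time DPU inference, assuming the dpu runner and buffers from the previous sketch, is to average wall-clock time over repeated runs. This is only an illustrative measurement loop, not the project's benchmark code.

    # Illustrative timing loop (assumes `dpu`, `input_data`, `output_data` exist).
    import time

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        job_id = dpu.execute_async(input_data, output_data)
        dpu.wait(job_id)
    elapsed = time.perf_counter() - start

    print(f"average latency: {1000 * elapsed / runs:.2f} ms")
    print(f"throughput:      {runs / elapsed:.1f} fps")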
  15. Vitis AI 3.5 New Function

    Introduced the use of the Vitis AI ONNX Runtime Engine (VOE).
    ・The new ONNX Runtime flow has made benchmarking significantly easier (a minimal session sketch follows below).
      Subproject_13. Application to Vitis AI ONNX Runtime Engine (VOE)
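    With VOE, inference goes through ONNX Runtime's Vitis AI execution provider. The sketch below shows the general calling pattern; the model path, provider configuration file, and input shape are assumptions rather than the project's actual files.

    # Rough sketch of running an ONNX model through the Vitis AI execution provider.
    # Model path, config file name, and input shape are assumed placeholders.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "yolox_nano.onnx",                                    # assumed model file
        providers=["VitisAIExecutionProvider"],
        provider_options=[{"config_file": "vaip_config.json"}],  # assumed config name
    )

    input_name = session.get_inputs()[0].name
    dummy = np.zeros((1, 3, 416, 416), dtype=np.float32)      # assumed input shape
    outputs = session.run(None, {input_name: dummy})
    print([o.shape for o in outputs])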
  16. Thank you

    This contest has been a great, fun challenge! Thanks to AMD and Hackster for the opportunity and the hardware.