Algorithm-SoC Co-Design for Mobile Continuous Vision

ISCA 2018 Main Talk

Yuhao Zhu

June 06, 2018

Transcript

  1. Algorithm-SoC Co-Design for Mobile Continuous Vision. Yuhao Zhu, Department of
    Computer Science, University of Rochester; with Anand Samajdar (Georgia Tech), Matthew Mattina (ARM Research), and Paul Whatmough (ARM Research).
  2. Mobile Continuous Vision: Excessive Energy Consumption. At 720p and 30 FPS, the energy
    budget under a 3 W TDP is 109 nJ/pixel, while object detection consumes 1400 nJ/pixel.
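
As a sanity check on these figures, the 3 W budget divided by the 720p/30 FPS pixel rate reproduces the per-pixel budget quoted above (a back-of-the-envelope calculation, not from the talk):

        tdp_w = 3.0                      # mobile thermal budget (W)
        pixels_per_s = 1280 * 720 * 30   # 720p at 30 FPS
        budget = tdp_w / pixels_per_s * 1e9
        print(budget)                    # ~108.5 nJ/pixel, i.e. the ~109 nJ/pixel budget
        # Object detection at 1400 nJ/pixel overshoots this budget by roughly 13x.
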
  3-9. Expanding the Scope. Conventional approaches focus on the vision kernels that turn RGB
    frames into semantic results; our scope also includes the imaging stage that turns photons into RGB frames and, as a by-product, produces motion metadata. Motion-based synthesis exploits that metadata: f(x_t) = f(x_1, …, x_t-1) ⊕ (x_t ⊖ x_t-1), where the diff term (x_t ⊖ x_t-1) captures motion between consecutive frames and the synthesis operator ⊕ extrapolates the previous result, which is cheap.
  10-16. Getting Motion Data. Motion information is already computed inside the imaging stage:
    the ISP pipeline runs conversion stages (demosaic, …), Bayer-domain stages (dead pixel correction, …), and YUV-domain stages (temporal denoising, …). Temporal denoising block-matches consecutive frames: a block at <u, v> in frame k is matched to its location <x, y> in frame k-1, giving the motion vector <x - u, y - v>. This motion info can be exported alongside the RGB frames to the vision kernels.
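
A minimal sketch of the block matching behind these motion vectors (illustrative only: the block size, search range, and SAD cost are assumptions, not the ISP's actual temporal-denoising implementation):

        import numpy as np

        def motion_vector(prev, curr, u, v, block=16, search=8):
            """Match the block at (u, v) in frame k (curr) against frame k-1 (prev)
            and return the motion vector <x - u, y - v>."""
            tile = curr[v:v+block, u:u+block].astype(np.int32)
            best_cost, best_xy = None, (u, v)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    x, y = u + dx, v + dy
                    if x < 0 or y < 0 or x + block > prev.shape[1] or y + block > prev.shape[0]:
                        continue
                    cand = prev[y:y+block, x:x+block].astype(np.int32)
                    cost = np.abs(tile - cand).sum()   # sum of absolute differences
                    if best_cost is None or cost < best_cost:
                        best_cost, best_xy = cost, (x, y)
            x, y = best_xy
            return (x - u, y - v)
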
  17-30. Synthesis Operation. Motion-based synthesis: f(x_t) = f(x_1, …, x_t-1) ⊕ (x_t ⊖ x_t-1),
    with a diff term (motion) and a synthesis step. ▸ Synthesis operation: extrapolate the previous result based on motion vectors. ▸ Frames are scheduled as Inference frames (I-Frames), on which the full CNN runs, followed by Extrapolation frames (E-Frames), whose results are synthesized; the Extrapolation Window is the number of frames covered by one inference, e.g. a window of 2 gives I, E, I, E (t0-t3) and a window of 3 gives I, E, E. ▸ Address three challenges: ▹ handle deformable parts, ▹ filter motion noise, ▹ decide when to run inference vs. extrapolation; ▹ see the paper for details. ▸ Computationally efficient: extrapolation takes ~10K operations/frame vs. ~50B operations/frame for CNN inference (a scheduling sketch follows).
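
A minimal sketch of the I-Frame/E-Frame schedule and its first-order compute amortization (run_cnn, extrapolate, and motion_vectors below are hypothetical stand-ins; only the per-frame operation counts come from the slide):

        CNN_OPS, EXTRAP_OPS = 50e9, 10e3          # per-frame ops quoted above

        # Placeholder stubs for illustration only, not the real kernels.
        run_cnn = lambda frame: {"boxes": []}     # full CNN inference
        extrapolate = lambda prev, mvs: prev      # motion-based synthesis
        motion_vectors = lambda frame: []         # MVs exported by the ISP

        def process_stream(frames, ew=4):
            """One I-Frame at the start of each extrapolation window, E-Frames after."""
            results, ops = [], 0
            for t, frame in enumerate(frames):
                if t % ew == 0:                   # I-Frame: run the full CNN
                    results.append(run_cnn(frame))
                    ops += CNN_OPS
                else:                             # E-Frame: extrapolate the last result
                    results.append(extrapolate(results[-1], motion_vectors(frame)))
                    ops += EXTRAP_OPS
            return results, ops

        # With ew = 4, average compute is about CNN_OPS/4 + EXTRAP_OPS per frame,
        # so extrapolation cost is negligible next to inference.
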
  31-34. Euphrates: an algorithm-SoC co-designed system for energy-efficient mobile continuous
    vision. Algorithm: motion-based tracking and detection synthesis. SoC: exploits synergies across IP blocks and enables task autonomy. Results: 66% energy saving with ~1% accuracy loss, validated with RTL models and board measurement.
  35-47. SoC Architecture. The baseline SoC connects the Camera Sensor (through the Sensor
    Interface), the Image Signal Processor, the CNN Accelerator, the host CPU, the DMA Engine, and the Memory Controller over the on-chip interconnect, with the frame buffer held in DRAM and a display attached; the ISP implements the imaging stage and the CNN accelerator runs the vision kernels. Euphrates makes two additions: (1) the ISP exports motion metadata along with the frames, and (2) a new Motion Controller IP consumes that metadata.
  48-57. ISP Augmentation. ▸ Expose motion vectors (MVs) to the rest of the SoC. ▸ Design
    decision: transfer MVs through DRAM. ▹ One 1080p frame: 8 KB of MV traffic vs. ~6 MB of pixel data. ▹ Easy to piggyback on the existing SoC communication scheme. ▸ Light-weight modification to the ISP Sequencer: the temporal-denoising stage already contains motion estimation and motion compensation hardware (with its own SRAM and DMA) operating on the current noisy/denoised frames and the previous ones; the augmented sequencer simply writes the resulting MVs over the ISP internal interconnect and the SoC interconnect to the frame buffer in DRAM, alongside the other pipeline stages (demosaic, color balance, …). A back-of-the-envelope check on the traffic numbers follows.
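
A back-of-the-envelope reconstruction of the per-frame traffic comparison (the 16x16 macroblock size and one byte per motion vector are assumptions used for illustration, not figures stated in the talk):

        width, height = 1920, 1080                 # one 1080p frame
        mb = 16                                    # assumed macroblock size (pixels)
        bytes_per_mv = 1                           # assumed MV encoding (e.g. 4 bits per component)

        num_blocks = (width // mb) * (height // mb)
        mv_bytes = num_blocks * bytes_per_mv       # ~8 KB of MV traffic
        pixel_bytes = width * height * 3           # ~6 MB of RGB pixel data
        print(mv_bytes / 1024, pixel_bytes / 2**20)   # ~7.9 KB vs ~5.9 MB
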
  58-65. Motion Controller IP. The new IP sits on the SoC interconnect next to the ISP and the
    CNN accelerator. It consists of a DMA engine, a motion vector buffer, a sequencer (FSM), and an extrapolation unit (ROI selection, a 4-way SIMD unit, and a scalar unit) that consumes MVs and produces a new ROI; it is programmed through memory-mapped registers (ROI, window size, base addresses, configuration). ▸ Why a new IP rather than directly augmenting the CNN accelerator? ▹ It stays independent of the vision algorithm/architecture implementation. ▸ Why a new IP rather than synthesizing on the CPU? ▹ The CPU can be switched off, enabling “always-on” vision. A sketch of the extrapolation step follows.
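
A minimal sketch of what the extrapolation unit might compute, under assumptions: the new ROI is obtained by shifting the previous one by the average of the motion vectors that fall inside it (averaging as a simple form of motion-noise filtering); Euphrates' actual filtering and deformation handling are described in the paper.

        def extrapolate_roi(roi, mv_grid, mb=16):
            """roi = (x0, y0, x1, y1) in pixels from the previous frame's result;
            mv_grid[r][c] = (dx, dy) motion vector of the mb x mb block at row r, col c."""
            x0, y0, x1, y1 = roi
            dxs, dys = [], []
            for r in range(y0 // mb, -(-y1 // mb)):        # rows of blocks under the ROI
                for c in range(x0 // mb, -(-x1 // mb)):    # columns of blocks under the ROI
                    dx, dy = mv_grid[r][c]
                    dxs.append(dx)
                    dys.append(dy)
            # Averaging per-block MVs inside the ROI damps block-level motion noise.
            sx = sum(dxs) / len(dxs)
            sy = sum(dys) / len(dys)
            return (x0 + sx, y0 + sy, x1 + sx, y1 + sy)
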
  66. Recap. Euphrates: algorithm (motion-based tracking and detection synthesis), SoC (synergies
    across IP blocks, task autonomy), results (66% energy saving and ~1% accuracy loss with RTL/measurement).
  67-71. Experimental Setup. ▸ In-house simulator modeling a commercial mobile SoC, the Nvidia
    Tegra X2. ▹ Real board measurement. ▸ RTL models developed for IPs unavailable on the TX2: ▹ CNN Accelerator (651 mW, 1.58 mm²), ▹ Motion Controller (2.2 mW, 0.035 mm²). ▸ Evaluated on object tracking and object detection: ▹ important domains that are building blocks for many vision applications; ▹ IP vendors have started shipping standalone tracking/detection IPs. ▸ Object detection baseline CNN: YOLOv2 (state-of-the-art detection results). ▸ SCALE-Sim: a systolic-array-based, cycle-accurate CNN accelerator simulator. https://github.com/ARM-software/SCALE-Sim. A quick comparison of the two RTL blocks' power numbers follows.
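
For intuition about the headroom between the two RTL blocks, a quick calculation from the power numbers above (the per-frame energies assume 30 FPS and that each block is active for the whole frame time, an upper bound rather than a measured figure):

        fps = 30
        frame_time_s = 1.0 / fps                           # ~33 ms per frame

        cnn_power_w = 0.651                                # CNN Accelerator RTL model
        mc_power_w = 0.0022                                # Motion Controller RTL model

        cnn_energy_mj = cnn_power_w * frame_time_s * 1e3   # ~21.7 mJ per frame if always active
        mc_energy_mj = mc_power_w * frame_time_s * 1e3     # ~0.07 mJ per frame if always active
        print(round(cnn_power_w / mc_power_w))             # ~296x power difference
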
  72-77. Evaluation Results. [Chart: accuracy and normalized energy for the YOLOv2 baseline,
    Euphrates with extrapolation windows EW-2, EW-4, EW-8, EW-16, and EW-32, and a scaled-down CNN (TinyYOLO); EW = Extrapolation Window.] 66% system energy saving with ~1% accuracy loss. More efficient than simply scaling down the CNN.
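
A first-order model of why normalized energy falls with the extrapolation window (illustrative only: it considers just the inference vs. extrapolation compute, with the relative extrapolation cost assumed from the power ratio of the two RTL blocks, so these are not the measured numbers in the chart):

        def norm_energy(ew, e_infer=1.0, e_extrap=0.003):
            """Average per-frame energy relative to running inference on every frame."""
            return (e_infer + (ew - 1) * e_extrap) / ew

        for ew in (2, 4, 8, 16, 32):
            print(ew, round(norm_energy(ew), 3))   # 0.502, 0.252, 0.128, 0.065, 0.034
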
  78-81. Conclusions. ▸ We must expand our focus from isolated accelerators to holistic SoC
    architecture. ▸ Euphrates co-designs the SoC with a motion-based synthesis algorithm. ▸ 66% SoC energy savings with ~1% accuracy loss; more efficient than scaling down CNNs.