
[Maker Fireside Chat #1: Donkey Car Learns to Walk - A Machine Learning Guide to Self-Driving RC Cars]

Star Rocket
December 19, 2018


[ Presentation Outline ]

1. How I got started with Donkey Car
2. The Donkey Car system
3. Data and models - the donkey car's carrot and stick
---

[ Slides shared by the presenter under a CC license ]
Li Yi | DonkeyCar.Taipei cart driver

---
[ DonkeyCar.Taipei Facebook group ]
https://www.facebook.com/groups/donkeycar.taipei/

[ Star Rocket Facebook page ]
https://www.facebook.com/starrocket.io/


Transcript

  1. Agenda
     1. How I got started with Donkey Car
     2. The Donkey Car system
     3. Data and models - the donkey car's carrot and stick
     4. Demo
  2. What is Donkey Car?
     • An open-source DIY self-driving platform for small-scale RC cars
       ◦ Built on (the bottom half of) an RC car chassis
       ◦ Controlled by a Raspberry Pi
       ◦ Programmed in Python
       ◦ Driven autonomously by a neural network
     source: www.donkeycar.com
  3. How to play with a Donkey Car
     1. Lay out a track on the floor with tape
     2. Assemble and configure the Donkey Car
     3. Drive the Donkey Car around the track by remote control to collect training data
     4. Transfer the training data to a PC and train a model
     5. Copy the model back onto the Donkey Car
     6. Let the Donkey Car drive the track autonomously (inference)
     7. Observe and record the results, then repeat steps 3 to 6 to keep improving the model and raise the success rate
  4. Software architecture
     [Diagram: a pipeline of stages - Take Picture → Get User Input / Get Model Prediction → Update Servo / Update Motor → Record & Save Data - grouped into Perception, Planning, Control, and Data Collecting]
  5. [Diagram: the same pipeline, highlighting Get User Input - the PS3 joystick connects over Bluetooth]
  6. [Diagram: the same pipeline, highlighting Get Model Prediction - the neural network outputs throttle & steering predictions]
  7. [Diagram: the same pipeline, highlighting Update Servo / Update Motor - PWM values are sent over I2C; the PCA9685 board then generates the PWM signals for the steering servo and for the ESC driving the motor]
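Steering and throttle are normalized to [-1, 1] in software and must be mapped to PWM on-counts before being sent to the PCA9685 over I2C. A sketch of that mapping; the pulse range used here (290 to 490 counts, roughly 1.2 to 2.0 ms at 60 Hz) is an illustrative assumption, and a real car needs per-servo calibration.

```python
# Sketch: converting a normalized control value in [-1, 1] into a
# PCA9685 12-bit PWM on-count. The end-point counts are illustrative
# defaults, not calibrated values.

LEFT_PULSE, RIGHT_PULSE = 290, 490   # example steering end points

def to_pwm(value, low=LEFT_PULSE, high=RIGHT_PULSE):
    """Linearly map value in [-1, 1] to an integer PWM count."""
    value = max(-1.0, min(1.0, value))            # clamp out-of-range input
    return round(low + (value + 1.0) * (high - low) / 2.0)

print(to_pwm(-1.0))  # 290: full left
print(to_pwm(0.0))   # 390: centered
print(to_pwm(1.0))   # 490: full right
```

The throttle channel works the same way, except the ESC interprets the pulse width as forward/brake/reverse rather than a steering angle.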
  8. [Diagram: the same pipeline, highlighting Record & Save Data - each record is saved as JSON:]
     {
       "user/mode": "pilot",
       "user/angle": 1.0,
       "user/throttle": 0.33919034394360176,
       "cam/image_array": "1683_cam-image_array_.jpg"
     }
  9. Hardware list
     1. RC car (I used a 1/18 WLTOYS A979B) (don't buy this one!)
        a. Its ESC is proprietary and must be replaced with a standard PWM one (e.g. QuicRun 1060)
        b. Its servo is proprietary and must be replaced with a standard PWM one (e.g. EMAX ES 3154 servo)
     2. Raspberry Pi 3B (running Raspbian Stretch)
     3. Pi Camera (the wide-angle version is recommended)
     4. PCA9685 PWM driver board
     5. PS3 joystick
     6. Power bank ([email protected])
     7. Micro SD card (16 GB or larger)
     8. DuPont jumper wires
     9. Acrylic plate (no larger than the car's length x width)
     10. Popsicle sticks
     11. 3M Dual Lock fasteners
     12. MPU6050 or MPU9250 gyroscope/IMU (optional)
  10. Donkey Car setup and tuning - camera placement
      • Downward tilt
        ◦ Must see the track directly ahead of the car
      • Camera height
        ◦ High enough to see the track ahead
        ◦ Not so high that distant background enters the frame
      • Mount the camera near the front of the car, but within the car's footprint
        ◦ Too far back, and the car's nose enters the frame
        ◦ Too far forward, and it is easily destroyed in a crash
  11. Label
      {
        "user/mode": "user",
        "user/angle": 1.0,
        "user/throttle": 0.33919034394360176,
        "cam/image_array": "1683_cam-image_array_.jpg",
        "imu/accel_x": -0.0576171875,
        "imu/accel_y": -0.105224609375,
        "imu/accel_z": 1.135498046875,
        "imu/gyro_x": 0.07548455893993378,
        "imu/gyro_y": 0.02495477721095085,
        "imu/gyro_z": 0.1672498732805252,
        "imu/compass_x": -210.21829223632812,
        "imu/compass_y": 257.4739685058594,
        "imu/compass_z": -155.9217987060547,
        "imu/pose_roll": 0.014761204831302166,
        "imu/pose_pitch": 0.008595574647188187,
        "imu/pose_yaw": -2.2539045810699463,
        "timestamp": "2018-07-12T04:41:16.661911"
      }
      • input: cam/image_array
      • output: user/angle and user/throttle
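Records like the one above can be turned into training pairs with nothing but the standard library: read each JSON record, take the image file as the input, and the angle/throttle as the targets. This sketch assumes records are stored as `record_*.json` files next to their images, which is an assumption about layout rather than Donkey Car's exact tub format.

```python
import json
import os
import tempfile

def load_pairs(tub_dir):
    """Return (image_path, [angle, throttle]) pairs from JSON records."""
    pairs = []
    for name in sorted(os.listdir(tub_dir)):
        if not (name.startswith("record_") and name.endswith(".json")):
            continue
        with open(os.path.join(tub_dir, name)) as f:
            rec = json.load(f)
        img = os.path.join(tub_dir, rec["cam/image_array"])
        pairs.append((img, [rec["user/angle"], rec["user/throttle"]]))
    return pairs

# Demo with one synthetic record in a temporary "tub" directory
tub = tempfile.mkdtemp()
record = {"user/mode": "user", "user/angle": 1.0,
          "user/throttle": 0.339, "cam/image_array": "1683_cam.jpg"}
with open(os.path.join(tub, "record_1683.json"), "w") as f:
    json.dump(record, f)

pairs = load_pairs(tub)
print(pairs[0][1])  # [1.0, 0.339]
```

From here, the image arrays become the network input batch and the `[angle, throttle]` lists become the two regression targets.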
  12. Training data categories
      1. Precise driving: the car follows the center line, staying balanced.
      2. Small oscillation: small side-to-side swings give the neural network views from different angles, so the car learns to correct its heading back toward the center of the lane.
      3. Large oscillation: the same side-to-side swings as before, but with a larger swing angle.
      4. Lane boundary: drive the car back and forth between the boundaries, turning back the moment it reaches one, so the neural network learns the extreme values of the lane boundary.
      5. Obstacle avoidance: besides the lane itself, the car also learns that obstacles must not be hit.
  13. The datasets we collected together (2)

      0816 dataset (44000 images):
        Category   Precise        Small osc.     Large osc.     Lane boundary
        Images     4400           8800           8800           22000
        Laps       2 fwd / 2 rev  4 fwd / 4 rev  4 fwd / 4 rev  10 fwd / 10 rev

      0817 dataset (15000 images):
        Category   Obstacle avoidance  Precise
        Images     7500                7500
        Laps       5 fwd / 5 rev       5 fwd / 5 rev
  14. The model is a hand-crafted CNN

      from keras.layers import Input, Convolution2D, Flatten, Dense, Dropout
      from keras.models import Model

      def default_linear():
          img_in = Input(shape=(120, 160, 3), name='img_in')
          x = img_in
          x = Convolution2D(24, (5, 5), strides=(2, 2), activation='relu')(x)
          x = Convolution2D(32, (5, 5), strides=(2, 2), activation='relu')(x)
          x = Convolution2D(64, (5, 5), strides=(2, 2), activation='relu')(x)
          x = Convolution2D(64, (3, 3), strides=(2, 2), activation='relu')(x)
          x = Convolution2D(64, (3, 3), strides=(1, 1), activation='relu')(x)
          x = Flatten(name='flattened')(x)
          x = Dense(100, activation='linear')(x)
          x = Dropout(.1)(x)
          x = Dense(50, activation='linear')(x)
          x = Dropout(.1)(x)
          # continuous output of the steering angle
          angle_out = Dense(1, activation='linear', name='angle_out')(x)
          # continuous output of throttle
          throttle_out = Dense(1, activation='linear', name='throttle_out')(x)
          model = Model(inputs=[img_in], outputs=[angle_out, throttle_out])
          model.compile(optimizer='adam',
                        loss={'angle_out': 'mean_squared_error',
                              'throttle_out': 'mean_squared_error'},
                        loss_weights={'angle_out': 0.5, 'throttle_out': .5})
          return model
  15. The hand-crafted CNN (cont.)

      Layer (type)            Output Shape          Param #   Connected to
      img_in (InputLayer)     (None, 120, 160, 3)   0
      conv2d_11 (Conv2D)      (None, 58, 78, 24)    1824      img_in[0][0]
      conv2d_12 (Conv2D)      (None, 27, 37, 32)    19232     conv2d_11[0][0]
      conv2d_13 (Conv2D)      (None, 12, 17, 64)    51264     conv2d_12[0][0]
      conv2d_14 (Conv2D)      (None, 5, 8, 64)      36928     conv2d_13[0][0]
      conv2d_15 (Conv2D)      (None, 3, 6, 64)      36928     conv2d_14[0][0]
      flattened (Flatten)     (None, 1152)          0         conv2d_15[0][0]
      dense_5 (Dense)         (None, 100)           115300    flattened[0][0]
      dropout_5 (Dropout)     (None, 100)           0         dense_5[0][0]
      dense_6 (Dense)         (None, 50)            5050      dropout_5[0][0]
      dropout_6 (Dropout)     (None, 50)            0         dense_6[0][0]
      angle_out (Dense)       (None, 1)             51        dropout_6[0][0]
      throttle_out (Dense)    (None, 1)             51        dropout_6[0][0]

      Total params: 266,628
      Trainable params: 266,628
      Non-trainable params: 0
  16. Approach #1
      • Maxpooling
        ◦ Add MaxPooling after the convolution layers to keep only the largest value in each window; this suppresses noise and greatly reduces the parameter count.
      • Batch Normalization
        ◦ Steering and throttle values lie between -1 and 1; normalizing makes it easier for the model to learn the underlying pattern and improves prediction accuracy.
      • 42% fewer parameters
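Why does pooling cut parameters so sharply? Almost half of the base model's weights sit in the first Dense layer fed by the flattened feature map, so shrinking that map shrinks the model. The slide does not show the exact pooled architecture, so the calculation below is illustrative only: it inserts a single hypothetical 2x2 max-pool after the last conv layer, which gives a roughly 36% cut; the slide's 42% figure presumably comes from a different pooling placement.

```python
# Back-of-the-envelope effect of one 2x2 max-pool on the parameter
# count. The pooled architecture here is a hypothetical example, not
# the one on the slide.

BASE_TOTAL = 266_628                      # from the model summary slide
DENSE1_BASE = 3 * 6 * 64 * 100 + 100      # flatten 1152 -> 100: 115,300

# After a 2x2 max-pool the 3x6x64 feature map becomes 1x3x64
DENSE1_POOLED = 1 * 3 * 64 * 100 + 100    # flatten 192 -> 100: 19,300

pooled_total = BASE_TOTAL - DENSE1_BASE + DENSE1_POOLED
saving = 1 - pooled_total / BASE_TOTAL
print(pooled_total, f"{saving:.0%}")      # 170628 36%
```

BatchNormalization, by contrast, adds a small number of parameters per layer; its contribution here is training stability, not size reduction.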
  17. Approach #2 (Cropping)
      • Crop 20 pixels off the image height: 120 x 160 → 100 x 160
      • Total params: 229k; Trainable params: 229k; Non-trainable params: 0
      • 14% fewer parameters than the base model
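The savings from cropping can be checked by hand: with Keras 'valid' convolutions, each output dimension is floor((in - k) / s) + 1, so a shorter input height propagates through the conv stack and shrinks the flattened map feeding the first Dense layer. A quick pure-Python check, no Keras needed, using the conv stack from the default_linear model:

```python
# Recompute the model's parameter count for a given input size.
# Conv output size under 'valid' padding: floor((in - k) / s) + 1.

CONVS = [(24, 5, 2), (32, 5, 2), (64, 5, 2), (64, 3, 2), (64, 3, 1)]

def total_params(h, w, c=3):
    params = 0
    for filters, k, s in CONVS:
        params += k * k * c * filters + filters   # weights + biases
        h, w, c = (h - k) // s + 1, (w - k) // s + 1, filters
    flat = h * w * c
    params += flat * 100 + 100    # Dense(100)
    params += 100 * 50 + 50       # Dense(50)
    params += 2 * (50 * 1 + 1)    # angle_out and throttle_out heads
    return params

base = total_params(120, 160)     # 266628, matching the summary slide
cropped = total_params(100, 160)  # 228228, the "229k" on this slide
print(base, cropped, f"{1 - cropped / base:.0%} fewer")  # 14% fewer
```

All of the reduction comes from the first Dense layer: the conv weights are unchanged, but the flattened map drops from 1152 to 768 values.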
  18. Approach #3 (Image Preprocessing)
      • Goal
        ◦ Draw guide lines on the input image to emphasize features and suppress environmental noise
      • Method
        ◦ Preprocess the input image with OpenCV
        ◦ Edge detection - Canny + HoughLines
        ◦ Color space - inRange (HSV thresholding)
      • Limitations
        ◦ Lighting and distractors vary, so accurate segmentation is hard
        ◦ The same preprocessing must also run at inference time, which costs performance
  19. Heat Map (Base Model vs Image Pre-processing)
      [Heat-map comparison: Base Model vs Image Pre-processing]
      Could image pre-processing let us remove one convolution layer?
  20. Future Works
      • Directional traffic signs
      • Stop at a zebra crossing, then go
      • On the necessity of image preprocessing
      • Two-car interaction
        ◦ Two cars chasing, racing, and overtaking
        ◦ Two cars yielding when approaching head-on
      • Can two cars of the same build share one model?
      • Multiple sensors
  21. References
      1. [NVIDIA paper] https://devblogs.nvidia.com/deep-learning-self-driving-cars/
      2. [Donkey Car website] http://www.donkeycar.com/
      3. [Donkey Car source code] https://github.com/wroscoe/donkey
      4. [Build guide and notes] https://medium.com/ljlstyle/tagged/autonomous-cars
      5. [Facebook group] https://www.facebook.com/groups/donkeycar.taipei/