Access Jetson GPU in Kubernetes

1. Download the Nvidia JetPack SDK (>= v4.2.1)
2. Set the Nvidia runtime as the default runtime in Docker
3. Deploy k3s with the Docker engine
4. Run a GPU-enabled pod
5. Get GPU info from the pod log (the deployment commands are sketched after the log below)

Docker daemon configuration (/etc/docker/daemon.json), making "nvidia" the default runtime:

    {
      "default-runtime": "nvidia",
      "runtimes": {
        "nvidia": {
          "path": "nvidia-container-runtime",
          "runtimeArgs": []
        }
      }
    }

Install k3s using the Docker engine instead of the bundled containerd:

    curl -sfL https://get.k3s.io | sh -s - --docker

Pod manifest for the deviceQuery test container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: devicequery
    spec:
      containers:
        - name: nvidia
          image: jitteam/devicequery:latest
          command: [ "./deviceQuery" ]

Pod log showing the GPU detected from inside the container:

    ./deviceQuery Starting...

    CUDA Device Query (Runtime API) version (CUDART static linking)

    Detected 1 CUDA Capable device(s)

    Device 0: "NVIDIA Tegra X1"
      CUDA Driver Version / Runtime Version          10.2 / 10.2
      CUDA Capability Major/Minor version number:    5.3
      Total amount of global memory:                 3964 MBytes (4156780544 bytes)
      ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
      GPU Max Clock rate:                            922 MHz (0.92 GHz)
    …
    Result = PASS
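The slide shows the manifest and the resulting log but not the commands in between. Below is a minimal shell sketch of that glue, assuming the daemon.json edit above has been made and that the pod manifest is saved locally as devicequery.yaml (a filename chosen here, not taken from the source):

    # Restart Docker so the "nvidia" default runtime in /etc/docker/daemon.json takes effect
    sudo systemctl restart docker

    # (then install k3s with the --docker flag as shown above)

    # Deploy the deviceQuery pod; devicequery.yaml is an assumed name for the manifest above
    sudo k3s kubectl apply -f devicequery.yaml

    # Check that the pod has run to completion, then read the GPU info from its log
    sudo k3s kubectl get pod devicequery
    sudo k3s kubectl logs devicequery

k3s embeds kubectl as a subcommand (k3s kubectl), so no separate kubectl install is needed on the Jetson; sudo is used because the default k3s kubeconfig is root-owned.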
System configuration (diagram): three Jetson nodes, Jetson0, Jetson1, and Jetson2, each with its own GPU, camera, and disk, plus an additional disk; the nodes are reachable at http://20.0.0.20:3xxxx, http://20.0.0.21:3xxxx, and http://20.0.0.22:3xxxx.
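The diagram does not show how these endpoints map onto the cluster. As a hedged illustration only, assuming the three Jetsons are joined to k3s as nodes with internal IPs 20.0.0.20–22 and that the 3xxxx ports are Kubernetes NodePorts (both assumptions, not stated in the source), the mapping can be inspected like this:

    # List the cluster nodes with their internal IPs (expected: 20.0.0.20, 20.0.0.21, 20.0.0.22)
    sudo k3s kubectl get nodes -o wide

    # If the 3xxxx URLs are NodePort services, this shows which port each service was assigned
    sudo k3s kubectl get services -o wide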