
Do Linux KVM Hypervisor dream of GPU-VDI computing?


GPU-VDI on Linux KVM is my dream.

Naoto Gohko

June 16, 2017

Transcript

  1. Do Linux KVM Hypervisor dream of GPU-VDI computing? @naoto_gohko (郷古

    直仁) (Japan OpenStack Users Group / GMO Internet, Inc.) GPU-Accelerated VDI International Conference 2017 Asia Community LT 2017/06/16, Okinawa
  2. LT presenter (itʼs me) #1 • Naoto Gohko / 郷古 直仁

    (@naoto_gohko) • Cloud Service development division, GMO Internet, Inc. • Japan OpenStack Users Group (JOSUG) member. @MikumoConoHa
  3. LT presenter (itʼs me) #2 • My trend “To live a

    beautiful life” • Until last December I kept a night-type schedule: arrive at the office at 11:00, leave at 20:00 • This year I start work at 9 oʼclock and leave the office at 18:00 • I was at a loss as to whether to apply for the OpenStack jobs at OIST on LinkedIn. : )
  4. GMO Internet, Inc.: VPS and Cloud services

    • Onamae.com VPS (http://www.onamae-server.com): focus on global IPs provided by a simple nova-network
    • tenten VPS (http://www.tenten.vn): shared codes; share of OSS by group companies in Vietnam
    • ConoHa VPS (http://www.conoha.jp): focus on Quantum/Neutron overlay tenant networks
    • GMO Apps Cloud (http://cloud.gmo.jp): OpenStack Havana based, 1st region; enterprise-grade IaaS with block storage, object storage, LBaaS and baremetal compute
    • Onamae.com Cloud (http://www.onamae-cloud.com): focus on low-price VM instances, baremetal compute and object storage
    • ConoHa Cloud (http://www.conoha.jp): focus on ML2/VXLAN overlay networks, LBaaS, block storage, DNSaaS (Designate) and original services built on keystone auth
    • GMO Apps Cloud, 2nd region (http://cloud.gmo.jp): OpenStack Juno based; enterprise-grade IaaS with high-IOPS Ironic compute and Neutron LBaaS, fronted by GSLB
    The clusters behind these services moved from OpenStack Diablo on CentOS 6.x (Nova, Keystone, Glance, nova-network) through Grizzly on Ubuntu 12.04 (with Quantum) and Havana on CentOS 6.x (Cinder, Ceilometer, Neutron LBaaS with ovs + GRE tunnel overlay, Designate, baremetal compute, a shared Swift cluster) up to Juno on CentOS 7.x (Nova, Keystone, Glance, Cinder, Ceilometer, Neutron LBaaS, Ironic).
  5. GPU-VDI with Linux KVM hypervisor Pros • Linux KVM is

    open source. Cons • Linux KVM and the Linux kernel are open source.
  6. GPU-VDI with Linux Computing (not limited to KVM) Computing Acceleration

    Methods Pass-through • A) KVM-VM with GPU PCI pass-through (with OpenStack) • B) Container deployment (with Kubernetes 1.6~) GPU virtualization • C) KVMGT: full GPU virtualization (only Intel chips with an embedded GPU) API-intercept based • D) virGL: virtio-GPU driver para-virtualization with KVM (3D library acceleration on the host GPU's OpenGL) • F) Legacy: VMGL (limited to Linux workstations) • G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI) (a host-side check for the pass-through methods is sketched below)
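
For the pass-through approaches (A and B), the host first has to expose the GPU with the IOMMU enabled and hand the device over to vfio-pci. A minimal host-side check, assuming an Intel host and an NVIDIA card; the vendor/device IDs below are examples, not from this deck:

    # confirm the IOMMU is active (add intel_iommu=on or amd_iommu=on to the kernel cmdline if it is not)
    dmesg | grep -e DMAR -e IOMMU

    # find the GPU's PCI address and vendor:device IDs (also needed later for the Nova whitelist)
    lspci -nn | grep -i -e nvidia -e vga

    # blacklist nouveau/nvidia on the host, then let vfio-pci claim the card by ID
    modprobe vfio-pci
    echo "10de 13f2" > /sys/bus/pci/drivers/vfio-pci/new_id    # example IDs
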
  7. OpenStack for Scientific Research https://www.openstack.org/science/ HPC and HTC (high-throughput computing)

    Book URL The Crossroads of Cloud and HPC: OpenStack for Scientific Research https://www.openstack.org/assets/science/OpenStack-CloudandHPC6x9Booklet- v4-online.pdf
  8. GPGPU on OpenStack ‒ The Best Practice for GPGPU Internal

    Cloud • My friend, Ohta-san (he is the leader of the Japanese Raspberry Pi users group) • Open Source Summit Japan 2017 • LinuxCon China 2017 https://speakerdeck.com/masafumi_ohta/gpu-on-openstack
  9. GPGPU on OpenStack ‒ The Best Practice for GPGPU Internal

    Cloud • PCI pass-through has become the common approach for GPGPU • but for GPU-VDI it depends on the number of GPUs in the compute node (an illustrative Nova configuration is sketched below)
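
To illustrate what A) looks like on the OpenStack side (a generic sketch, not the deck's own configuration; option names moved between releases, e.g. Juno-era clouds use pci_passthrough_whitelist / pci_alias under [DEFAULT], and the PCI IDs are placeholders):

    # /etc/nova/nova.conf on the GPU compute node (Ocata-style [pci] section)
    [pci]
    passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
    alias = { "vendor_id": "10de", "product_id": "13f2", "device_type": "type-PCI", "name": "gpu" }

    # enable the PciPassthroughFilter in the scheduler, then expose the GPU through a flavor
    openstack flavor create --ram 16384 --vcpus 8 --disk 80 g1.gpu
    openstack flavor set g1.gpu --property "pci_passthrough:alias"="gpu:1"
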
  10. How to VDI-GPU with Kubernetes 1.6 • Ex) Nvidia GPU:

    • Pass-through PCI-GPU • Docker run within a KVM instance (run the KVM instance as an application with system privileges); a Windows guest VM is OK OR • runv and frakti: hypervisor-based containers; a Windows guest VM is ?? (a sketch of the Docker-in-VM variant follows below)
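
A sketch of the "Docker within a KVM instance" variant, assuming the GPU has already been passed through to the guest and the NVIDIA driver is installed there; the image name and device paths are the usual defaults, not from the deck:

    # inside the KVM guest that received the GPU via PCI pass-through
    docker run --rm \
      --device /dev/nvidiactl --device /dev/nvidia-uvm --device /dev/nvidia0 \
      nvidia/cuda:8.0-runtime nvidia-smi    # the driver libraries must also be visible inside the container

    # or let the nvidia-docker 1.x wrapper do the device and volume plumbing
    nvidia-docker run --rm nvidia/cuda:8.0-runtime nvidia-smi
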
  11. GPU with Kubernetes 1.6 • Node affinity/anti-affinity scheduler: beta •

    Special hardware (like a GPU) • Multiple-GPU support for Docker containers • For the CUDA use case GPU … CUDA ?? GPGPU? (VDI is OK) (a pod-spec sketch follows below)
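
The Kubernetes 1.6 GPU support referred to here is the alpha "Accelerators" feature: the kubelet runs with --feature-gates=Accelerators=true and pods request the alpha.kubernetes.io/nvidia-gpu resource. A minimal pod sketch; the image and host driver path are assumptions:

    # gpu-pod.yaml: request one NVIDIA GPU via the 1.6 alpha resource name
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-test
    spec:
      containers:
      - name: cuda
        image: nvidia/cuda:8.0-runtime            # example image
        command: ["nvidia-smi"]
        resources:
          limits:
            alpha.kubernetes.io/nvidia-gpu: 1     # alpha resource name in 1.6
        volumeMounts:
        - name: nvidia-driver
          mountPath: /usr/local/nvidia
          readOnly: true
      volumes:
      - name: nvidia-driver
        hostPath:
          path: /usr/local/nvidia                 # host driver libraries; path is an assumption
    # create it with: kubectl create -f gpu-pod.yaml
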
  12. How to VDI-GPU with Kubernetes 1.6 • Ex) Nvidia GPU: •

    Pass-through PCI-GPU • Docker run within a KVM instance (run the KVM instance as an application with system privileges); the Windows guest is OK (PCI pass-through) OR • runv and frakti: hypervisor-based containers; a Windows guest VM is ??
  13. How to VDI-GPU with Kubernetes 1.6 • Hypervisor-based container: •

    Hypernetes: manage Frakti, HyperContainer, CNI, Volumes… https://hyper.sh HyperHQ team: Success of CRI: Bringing Hypervisor-based Containers to Kubernetes (CloudNativeCon / KubeCon 2017, Harry Zhang)
  14. How to VDI-GPU with Kubernetes 1.6+ • Hypervisor-based container: •

    Frakti: the hypervisor-based container runtime https://github.com/kubernetes/frakti (… it is similar to a Windows container) • HyperContainer https://hypercontainer.io/ • runv: hypervisor-based runtime for OCI https://github.com/hyperhq/runv • hyperd: control daemon https://github.com/hyperhq/hyperd • hyperstart: init service (PID=1) https://github.com/hyperhq/hyperstart Frakti and runv use the same container images as runc (a kubelet wiring sketch follows below)
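
A rough sketch of how frakti plugs into a node as a CRI runtime; the flag names follow the frakti README of that era and should be checked against your Kubernetes release:

    # hyperd (and libvirt/qemu) must already be running, then start the frakti CRI shim
    frakti --listen=/var/run/frakti.sock &

    # point the kubelet at frakti through the remote CRI endpoints
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=/var/run/frakti.sock \
            --image-service-endpoint=/var/run/frakti.sock
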
  15. GPU-VDI with Linux Computing (not limited to KVM) Computing Acceleration

    Methods Pass-through • A) KVM-VM with GPU PCI pass-through (with OpenStack) • B) Container deployment (with Kubernetes 1.6~) GPU virtualization • C) KVMGT: full GPU virtualization (only Intel chips with an embedded GPU) API-intercept based • D) virGL: virtio-GPU driver para-virtualization with KVM (3D library acceleration on the host GPU's OpenGL) • F) Legacy: VMGL (limited to Linux workstations) • G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI)
  16. virtCL: A Framework for OpenCL Device Abstraction and Management https://www.researchgate.net/publication/273630028

    Yi-Ping You, Hen-Jung Wu, Yeh-Ning Tsai, Yen-Ting Chao; Department of Computer Science, National Chiao Tung University, Taiwan (not yet open source; still in development)
  17. GPU-VDI with Linux Computing (not limited to KVM) Computing Acceleration

    Methods Pass-through • A) KVM-VM with GPU PCI pass-through (with OpenStack) • B) Container deployment (with Kubernetes 1.6~) GPU virtualization • C) KVMGT: full GPU virtualization (only Intel chips with an embedded GPU) API-intercept based • D) virGL: virtio-GPU driver para-virtualization with KVM (3D library acceleration on the host GPU's OpenGL) • F) Legacy: VMGL (limited to Linux workstations) • G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI)
  18. virGL: Guest/Host OpenGL/DirectX software • Host/Guest = Qemu • Gaming,

    rendering (OpenGL), encoding, etc. workloads are accelerated. https://www.freedesktop.org/wiki/Software/gallium/
  19. Virgil 3D GPU project • For details, please see the

    slides at this URL. • Would Direct3D drivers for it be easy?? • It allows the guest operating system to use the capabilities of the host GPU to accelerate 3D rendering. https://virgil3d.github.io/
  20. News in Qemu graphics • For details, please see the

    slides at this URL. • Linux kernel 4.4+, QEMU 2.5+ (built with GL enabled) • Not yet: Windows guest DirectX (3D) drivers • OpenGL support in QEMU UIs • Spice remote display: in progress (VDI) https://www.kraxel.org/slides/qemu-gfx-2016/
  21. Ex) virtio-GPU: guest OS. Using virtio-gpu (with virgl, aka OpenGL

    acceleration) with libvirt and spice • Fedora 24 or later (host/guest support) Requirements for Guest: • Linux kernel 4.4+ (version 4.2 without the OpenGL build) • Mesa 11.1 • xorg server 1.19, or 1.18 with commit "5627708 dri2: add virtio-gpu pci ids" backported
  22. Ex) virtio-GPU: host OS. Using virtio-gpu (with virgl, aka OpenGL

    acceleration) with libvirt and spice Requirements for Host: • Qemu 2.6+ • virglrenderer • spice-server 0.13.2+ (development release) • spice-gtk 0.32+ (used by virt-viewer & friends) Note that 0.32 got a new shared library major version, therefore the tools using this must be compiled against that version. • Mesa 10.6 • libepoxy 1.3.1 • libvirt 1.3+
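
Before wiring this into libvirt, virgl can be smoke-tested with a direct QEMU invocation on the host; a sketch, assuming a local graphical session with working GL (the guest image name is a placeholder):

    qemu-system-x86_64 -enable-kvm -m 4096 \
      -device virtio-vga,virgl=on \
      -display gtk,gl=on \
      -drive file=fedora24.qcow2,if=virtio    # example guest image
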
  23. Ex) virtio-GPU: libvirt guest config and run. Libvirt guest config (see the XML sketch below):

    Client GUI (spice, local only): virt-viewer --attach $domain (the final important bit is that spice needs a unix socket connection for OpenGL to work)
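
The libvirt guest config on this slide boils down to a virtio video model with 3D acceleration plus a spice display with no TCP listener and GL enabled; a minimal sketch of the relevant domain XML fragments (edit with virsh edit $domain; the <gl> element needs a recent enough libvirt, per the 1.3+ requirement above):

    <video>
      <model type='virtio'>
        <acceleration accel3d='yes'/>
      </model>
    </video>
    <graphics type='spice'>
      <listen type='none'/>
      <gl enable='yes'/>
    </graphics>

After defining the domain, connect locally with virt-viewer --attach $domain as noted above.
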
  24. Ex) Windows Server 2016 with the Hyper-V service on nested Linux

    KVM. For developer use on a Windows VDI guest.
  25. Running Hyper-V in QEMU/KVM Guest https://ladipro.wordpress.com/2017/02/24/running-hyperv-in-kvm-guest/ (Feb. 2017, only

    Intel CPU) • Linux 4.10 or newer kernel (from ELRepo http://elrepo.org/tiki/kernel-ml) • QEMU 2.7 or later (better: the latest QEMU 2.9, which is what I tested) • SeaBIOS 1.10 or later (my test used the latest) • The qemu command line must include the +vmx parameter: “-cpu hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx” (a fuller invocation is sketched below)
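
Putting those pieces together, a sketch of a full invocation; memory, the disk image and the SeaBIOS path are placeholders, and nested VMX must also be enabled on the host:

    # on the host: enable nested VMX for kvm_intel (reload the module afterwards)
    echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
    cat /sys/module/kvm_intel/parameters/nested    # should print Y

    # start the Windows Server 2016 guest with Hyper-V enlightenments and +vmx exposed
    qemu-system-x86_64 -enable-kvm -m 8192 -smp 4 \
      -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx \
      -drive file=win2016.qcow2 \
      -bios /usr/share/seabios/bios.bin    # only needed if the packaged SeaBIOS is older than 1.10
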
  26. GPU support running Hyper-V 2016 with Nested KVM I have

    not verified it yet. Pass-through is better. If you have Nutanix, then NVIDIA vGPU is …? (I want to test it …)
  27. Do Linux KVM Hypervisor dream of GPU-VDI computing? GPU-VDI on

    Linux KVM with virtio-GPU in community open source. - Windows guest driver (virtio-GPU): not yet - Spice client GPU support (virtio-GPU): not yet