The Wonders of NUMA

(or Why Your High-Performance Application Doesn't Perform)

You select the best possible hardware for the job, you optimize the host OS to deliver the best performance ever seen by mankind, and you tweak your high-performance vSwitch to ensure nothing, nothing, could possibly stop you now. You rub your bloodshot eyes, run 'openstack server create' and, well, things don't look so rosy.

Welcome to the world of OpenStack on NUMA-based architectures, where one poor scheduling decision can result in drastic performance reductions. Thankfully, OpenStack realized this some time ago and has been doing many wonderful things since then to prevent this pain. In this talk, we shine a light on all things NUMA, from both a general and an OpenStack-oriented perspective.

Coming out of this talk, you should know everything there is to know about NUMA in OpenStack and will be able to, one can hope, finally put those performance issues to bed and get some sleep.

Stephen Finucane

May 23, 2018

Transcript

  1. The Wonders of NUMA (Or Why Your High-Performance Application
     Doesn't Perform). Stephen Finucane, OpenStack Software Developer,
     23rd May 2018
  2. What is NUMA?
     UMA (Uniform Memory Access): Historically, all memory on x86
     systems was equally accessible by all CPUs. Known as Uniform Memory
     Access (UMA), access times were the same no matter which CPU
     performed the operation.
     NUMA (Non-Uniform Memory Access): This is no longer the case with
     recent x86 processors. In Non-Uniform Memory Access (NUMA), system
     memory is divided into zones (called nodes), which are allocated to
     particular CPUs or sockets. Access to memory that is local to a CPU
     is faster than access to memory attached to remote CPUs on that
     system.
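     Before tuning guests, it helps to see the host's own layout. A
     minimal way to inspect it, assuming the numactl package is
     installed:

       $ numactl --hardware    # per-node CPU lists, memory sizes, distances
       $ lscpu | grep -i numa  # quick summary of node count and CPU ranges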
  3. What is NUMA?
     [Diagram: two NUMA nodes (node A and node B), each with its own
     memory channel, joined by an interconnect; memory access within a
     node is local, access across the interconnect is remote]
  4. NUMA in OpenStack
     • NUMA Guest Topologies
     • Guest vCPU Placement
     • PCI NUMA Affinity
     • vGPU, Neutron Network NUMA Affinity
  5. NUMA in OpenStack
     • NUMA Guest Topologies
     • Guest vCPU Placement
     • PCI NUMA Affinity
     • vGPU, Neutron Network NUMA Affinity
  6. NUMA Guest Topologies
     $ openstack flavor create --vcpus 6 --ram 6144 --disk 20 test.numa
  7. NUMA Guest Topologies
     $ openstack flavor create --vcpus 6 --ram 6144 --disk 20 test.numa
     $ openstack flavor set test.numa \
         --property hw:numa_nodes=2
  8. NUMA Guest Topologies
     $ openstack flavor create --vcpus 6 --ram 6144 --disk 20 test.numa
     $ openstack flavor set test.numa \
         --property hw:numa_nodes=2 \
         --property hw:numa_cpus.0=0-3 \
         --property hw:numa_cpus.1=4,5 \
         --property hw:numa_mem.0=4096 \
         --property hw:numa_mem.1=2048
  9. NUMA Guest Topologies
     $ openstack flavor create --vcpus 6 --ram 6144 --disk 20 test.numa
     $ openstack flavor set test.numa \
         --property hw:numa_nodes=2 \
         --property hw:numa_cpus.0=0-3 \
         --property hw:numa_cpus.1=4,5 \
         --property hw:numa_mem.0=4096 \
         --property hw:numa_mem.1=2048
       (hw:numa_cpus.* values are guest vCPUs - not host CPUs)
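     To exercise the flavor, boot an instance from it. A minimal sketch;
     the image and network names here are placeholders for whatever your
     deployment provides.

       $ openstack server create \
           --flavor test.numa \
           --image my-image \
           --network my-network \
           test-instance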
  10. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  11. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  12. Guest vCPU Placement
      $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 test.pinned
  13. Guest vCPU Placement
      $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 test.pinned
      $ openstack flavor set test.pinned \
          --property hw:cpu_policy=dedicated
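      Once an instance built from this flavor is running, the pinning
      can be checked on the compute host through libvirt; the domain
      name below is a placeholder.

        $ virsh vcpupin instance-00000001  # host CPU affinity per vCPU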
  14. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  15. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  16. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  17. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  18. Guest vCPU Placement
      $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 test.pinned
      $ openstack flavor set test.pinned \
          --property hw:cpu_policy=dedicated
  19. Guest vCPU Placement
      $ openstack flavor create --vcpus 6 --ram 4096 --disk 20 test.pinned
      $ openstack flavor set test.pinned \
          --property hw:cpu_policy=dedicated
  20. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  21. Guest vCPU Placement
      $ openstack flavor create --vcpus 6 --ram 4096 --disk 20 test.pinned
      $ openstack flavor set test.pinned \
          --property hw:cpu_policy=dedicated
  22. Guest vCPU Placement
      $ openstack flavor create --vcpus 6 --ram 4096 --disk 20 test.pinned
      $ openstack flavor set test.pinned \
          --property hw:cpu_policy=dedicated \
          --property hw:numa_nodes=2
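      The resulting guest NUMA cells appear in the libvirt domain XML on
      the compute host; again, the domain name is a placeholder.

        $ virsh dumpxml instance-00000001 | grep -A 6 '<numa>'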
  23. Guest vCPU Placement
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#3]
  24. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  25. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  26. PCI NUMA Affinity
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#5]
  27. PCI NUMA Affinity
      [pci]
      alias = '{ "name": "QuickAssist", "product_id": "0443",
                 "vendor_id": "8086", "device_type": "type-PCI" }'
  28. PCI NUMA Affinity
      $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 test.pci
      $ openstack flavor set test.pci \
          --property pci_passthrough:alias=QuickAssist:1
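      On the host, sysfs reveals which NUMA node a candidate device sits
      on; the PCI address below is a placeholder for one found via lspci.

        $ lspci -nn -d 8086:0443                           # locate devices
        $ cat /sys/bus/pci/devices/0000:03:00.0/numa_node  # -1 = unknown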
  29. PCI NUMA Affinity
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#5]
  30. PCI NUMA Affinity
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#5]
  31. PCI NUMA Affinity
      [pci]
      alias = '{ "name": "QuickAssist", "product_id": "0443",
                 "vendor_id": "8086", "device_type": "type-PCI" }'
  32. PCI NUMA Affinity
      [pci]
      alias = '{ "name": "QuickAssist", "product_id": "0443",
                 "vendor_id": "8086", "device_type": "type-PCI",
                 "numa_policy": "preferred" }'
      # numa_policy may also be 'legacy' or 'required'
  33. PCI NUMA Affinity
      [Diagram: two host NUMA nodes (node #0 and node #1), each with
      cores #0-#5]
  34. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  35. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity
  36. NUMA in OpenStack
      • NUMA Guest Topologies
      • Guest vCPU Placement
      • PCI NUMA Affinity
      • vGPU, Neutron Network NUMA Affinity *coming soon*
  37. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
  38. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
        ◦ We don’t support this by design
  39. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
        ◦ We don’t support this by design
      • Why would I want a multi-node guest?
  40. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
        ◦ We don’t support this by design
      • Why would I want a multi-node guest?
        ◦ By necessity
          ▪ Large core counts
          ▪ Multiple PCI devices with different NUMA affinities
        ◦ Application requirements
  41. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
        ◦ We don’t support this by design
      • Why would I want a multi-node guest?
        ◦ By necessity
          ▪ Large core counts
          ▪ Multiple PCI devices with different NUMA affinities
        ◦ Application requirements
      • Can a guest’s NUMA nodes share the same host node?
  42. Common Questions
      • Can I choose what host NUMA nodes my guest runs on?
        ◦ We don’t support this by design
      • Why would I want a multi-node guest?
        ◦ By necessity
          ▪ Large core counts
          ▪ Multiple PCI devices with different NUMA affinities
        ◦ Application requirements
      • Can a guest’s NUMA nodes share the same host node?
        ◦ Not at the moment
  43. Common Misconceptions
      • Host NUMA node selection
        ◦ You can’t dictate what node is used - nova must decide
  44. Common Misconceptions
      • Host NUMA node selection
        ◦ You can’t dictate what node is used - nova must decide
      • Host sockets != NUMA nodes
        ◦ Cluster-on-Die is a thing
  45. Common Misconceptions
      • Host NUMA node selection
        ◦ You can’t dictate what node is used - nova must decide
      • Host sockets != NUMA nodes
        ◦ Cluster-on-Die is a thing
      • Guest sockets != NUMA nodes
        ◦ You can specify hw:numa_nodes and hw:cpu_sockets
  46. Common Misconceptions
      • Host NUMA node selection
        ◦ You can’t dictate what node is used - nova must decide
      • Host sockets != NUMA nodes
        ◦ Cluster-on-Die is a thing
      • Guest sockets != NUMA nodes
        ◦ You can specify hw:numa_nodes and hw:cpu_sockets
          (see the sketch after this slide)
      • CPU pinning isn’t a requirement
        ◦ It’s just common in these scenarios
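      To decouple guest sockets from guest NUMA nodes, both extra specs
      can be set together. A sketch against the earlier flavor; whether
      a given combination is accepted can vary by release.

        $ openstack flavor set test.numa \
            --property hw:numa_nodes=2 \
            --property hw:cpu_sockets=1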
  47. Resources
      You might want to know about these...
      • RHEL NUMA Tuning Guide
      • Attaching physical PCI devices to guests
      • Nova Flavors Guide