
An Eye for (Network) Design

Discusses five common questions that are asked when creating a network design for VMware vSphere

Scott Lowe

June 28, 2011

Transcript

  1. Before we start
     • Get involved!
     • If you use Twitter, feel free to tweet about this session (use hashtag #DenverVMUG)
     • I encourage you to take photos or videos of today’s session and share them online
     • This presentation will be made available online after the event
  2. An Eye For (Network) Design
     Five questions that get asked when creating a vSphere network design
     Scott Lowe, VCDX 39
     vExpert, Author, Blogger, Geek
     http://blog.scottlowe.org / Twitter: @scott_lowe
  3. Agenda
     • First, some assumptions
     • Next, a caveat
     • Question #1: How many vSwitches should I use?
     • Question #2: Should I use a distributed vSwitch?
     • Question #3: What traffic types can/should share uplinks?
     • Question #4: How many uplinks do I need?
     • Question #5: When should I use link aggregation?
  4. First, some assumptions
     • Throughout this discussion I’ll assume that the following is true:
     • You are using at least two (2) physical switches
     • You’ve enabled PortFast/disabled STP on vSphere-facing ports
     • You’ve enabled CDP/LLDP
  5. Next, a caveat
     • All of these recommendations are just that: recommendations
     • Ultimately you need to understand the impact of your networking design decisions and react accordingly
     • Be sure to keep the functional requirements in mind—does the network configuration meet the functional requirements?
     • Your vSphere networking design might violate “general recommendations” because of your specific needs or requirements. That’s OK.
  6. Number of vSwitches
     • A separate vSwitch is only required when you need different sets of uplinks
     • Without VLANs, separate uplinks (and thus separate vSwitches) would be necessary
     • I generally recommend as few vSwitches as possible (more vSwitches don’t add redundancy)
     • I strongly advocate the use of VLANs wherever possible
     • Separate vSwitches are necessary for disjointed L2 domains
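The “one vSwitch per distinct set of uplinks” rule can be sketched in a few lines of Python. The port-group names and vmnic assignments below are hypothetical, used only to illustrate how VLAN tagging lets multiple port groups collapse onto one vSwitch while a disjointed L2 domain forces a second one:

```python
# Hypothetical mapping of port groups to the physical uplinks they require.
# With VLAN tagging, Management and vMotion can share the same uplink pair.
port_groups = {
    "Management": frozenset({"vmnic0", "vmnic1"}),
    "vMotion":    frozenset({"vmnic0", "vmnic1"}),  # VLAN-tagged, shares uplinks
    "DMZ":        frozenset({"vmnic2", "vmnic3"}),  # disjointed L2 domain
}

# Number of vSwitches needed = number of unique uplink sets
vswitch_count = len(set(port_groups.values()))
# -> 2 (Management and vMotion share one vSwitch; the DMZ needs its own)
```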
  7. VLAN handling
     • With regard to VLANs, here are some additional recommended practices:
     • Avoid the use of VLAN 1 where possible (although this recommendation is a bit dated)
     • Set an unused VLAN as the native VLAN on your trunks
     • Understand the behavior of the native VLAN with vSwitches and port groups
  8. Using distributed vSwitches
     • Standard vSwitches require more manual (and often duplicated) per-host effort, but offer fewer points of failure and fewer dependencies
     • Distributed vSwitches (dvSwitches) offer streamlined administration, but with additional dependencies
     • Most of the advanced networking features are found only in dvSwitches
  9. Using distributed vSwitches
     • Each option has its advantages and disadvantages:

     Feature                                            vSwitch   dvSwitch
     Continues to operate even in the absence of an
       external control plane (vCenter, VSM)            Yes       No
     Supports all key networking features (VLANs,
       vMotion, FT, link aggregation, etc.)             Yes       Yes
     Offers simplified network mgmt and potential
       mgmt offload to network team                     No        Yes
  10. Using distributed vSwitches
     • My recommendation:
     • Use both in a hybrid configuration (minimum 4 uplinks)
     • Run management traffic on a vSwitch; run VM/VM-related traffic on a dvSwitch
     • When using a dvSwitch, appropriately protect the control plane (VSM or vCenter Server)
     • If it must be “or” not “and,” then go back to your functional requirements
  11. Mixing traffic
     • Above all, you need to provide redundancy for all types of network traffic
     • Try to understand the network traffic in terms of:
     • Consistency: Is it bursty traffic? Or is it constant?
     • Bandwidth: How much bandwidth does it use?
     • Scope: Is this traffic for one VM, or will it affect multiple VMs?
  12. Mixing traffic
     • Some information on traffic types:
     • Management traffic is generally low bandwidth
     • vMotion is generally bursty and inconsistent
     • Fault Tolerance logging is consistent; bandwidth usage depends on the number of FT-protected VMs
     • IP-based storage traffic is high-bandwidth, large-scope, consistent traffic
  13. Mixing traffic
     • My recommendations:
     • Don’t mix IP-based storage traffic with other traffic types unless absolutely necessary
     • It’s OK to mix FT traffic with bursty traffic when only a small number of VMs are FT-protected
     • Management and vMotion traffic are OK to mix
     • Try to keep VM-facing traffic segregated from “back end” traffic
  14. Number of uplinks
     • Many different factors come into play:
     • vSwitch/dvSwitch arrangement (separate vSwitches mean more uplinks)
     • VLAN configuration (no VLANs means more uplinks)
     • Traffic mixing (separate traffic streams mean more uplinks)
     • Upstream network configuration (disjointed L2 networks mean separate vSwitches)
  15. Number of uplinks
     • For 1 GbE environments, I recommend:
     • Minimum of 4 uplinks for non-IP-based storage
     • Minimum of 6 uplinks for IP-based storage
     • For 10 GbE environments, only 2 uplinks are necessary unless functional requirements dictate otherwise
     • Minimum of 4 uplinks for a hybrid vSwitch/dvSwitch configuration (can use “virtual NICs” if necessary)
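The uplink minimums above can be captured in a short helper. The function name and parameters are illustrative; the thresholds come straight from the slide:

```python
def min_uplinks(link_speed_gbe: int, ip_storage: bool = False,
                hybrid: bool = False) -> int:
    """Minimum recommended uplinks per host, per the slide's guidance."""
    if link_speed_gbe >= 10:
        # 10 GbE: 2 uplinks suffice unless requirements dictate otherwise
        base = 2
    else:
        # 1 GbE: 4 uplinks, or 6 when carrying IP-based storage
        base = 6 if ip_storage else 4
    # A hybrid vSwitch/dvSwitch design needs at least 4 uplinks
    return max(base, 4) if hybrid else base
```

So `min_uplinks(1)` returns 4, `min_uplinks(1, ip_storage=True)` returns 6, and `min_uplinks(10, hybrid=True)` returns 4.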
  16. Deciding on link aggregation
     • Link aggregation refers to bonding multiple links together for greater aggregate throughput (e.g., EtherChannel)
     • NIC teaming refers to using multiple physical NICs as uplinks on a vSwitch or dvSwitch
     • Both techniques offer redundancy
  17. Deciding on link aggregation
     • Let’s compare link aggregation and NIC teaming:

     Feature                                      Link Aggr       NIC Team
     Supports multiple physical switches          Only with MLAG  Yes
     Requires physical switch config              Yes             No
     Per-flow load balancing                      Yes             No
     Increased throughput for each traffic flow   No              No
  18. Deciding on link aggregation
     • My recommendation:
     • NIC teaming is fine for most implementations
     • Use link aggregation only if physical switches support MLAG (otherwise you can’t use multiple physical switches)
     • Don’t use link aggregation for IP-based storage traffic (it’s generally useless)
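A minimal sketch of this decision, assuming just two inputs (MLAG support and whether the uplinks carry IP-based storage). The function is illustrative, not part of any vSphere tooling; the rationale in the comments follows the comparison table above:

```python
def use_link_aggregation(switches_support_mlag: bool,
                         ip_storage_traffic: bool) -> bool:
    """Decide whether link aggregation is worth it, per the recommendation."""
    # Link aggregation is generally useless for IP-based storage: load
    # balancing is per-flow, so each storage session still rides one link
    # (no increased throughput for an individual traffic flow)
    if ip_storage_traffic:
        return False
    # Without MLAG, an aggregate can't span multiple physical switches,
    # which would sacrifice switch redundancy; stick with NIC teaming
    return switches_support_mlag
```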