
Design Considerations for vSphere on NAS


Discusses some design considerations for using vSphere with NFS, including link aggregation, datastore bandwidth, and sizing.

Scott Lowe

June 18, 2012


Transcript

  1. Design Considerations for vSphere on NFS
     Discussing some design considerations for using vSphere with NFS
     Scott Lowe, VCDX 39, vExpert, Author, Blogger, Geek
     http://blog.scottlowe.org / Twitter: @scott_lowe
  2. Before we start
     • Get involved! Audience participation is encouraged and requested.
     • If you use Twitter, feel free to tweet about this session (use hashtag #VMUG or handle @SeattleVMUG)
     • I encourage you to take photos or videos of today’s session and share them online
     • This presentation will be made available online after the event
  3. Agenda
     • Some NFS basics
     • Some link aggregation basics
     • NFS bandwidth
     • Link redundancy
     • NFS and iSCSI interaction
     • Routed NFS access
     • Other considerations
  4. Some NFS Basics
     • All versions of ESX/ESXi use NFSv3 over TCP
     • NFSv3 uses a single TCP session for data transfer
     • This single session originates from one VMkernel port and terminates at the NAS IP interface/export
     • vSphere 5 adds support for DNS round robin, but still uses a single TCP session and only resolves the DNS name once
  5. Some Link Aggregation Basics
     • Requires unique hash values to place flows on different links in the bundle
     • Identical hash values will always result in the same link being selected
     • Does provide link redundancy
     • Doesn’t increase per-flow bandwidth, only aggregate bandwidth
     • Needs special support to avoid a single point of failure (SPoF)
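The hashing behavior above can be sketched in a few lines. This is not ESXi code, just an illustration (the uplink names and IP addresses are hypothetical): a flow's source/destination tuple always hashes to the same value, so the single NFS TCP session always lands on the same uplink.

```python
import zlib

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # hypothetical 4-link bundle

def pick_uplink(src_ip: str, dst_ip: str) -> str:
    """Deterministic hash placement, similar in spirit to IP-hash teaming."""
    h = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
    return UPLINKS[h % len(UPLINKS)]

# The single NFS TCP session maps to exactly one uplink, every time:
nfs_link = pick_uplink("10.0.1.11", "10.0.1.50")
assert all(pick_uplink("10.0.1.11", "10.0.1.50") == nfs_link for _ in range(100))

# Different flows (many hosts talking to many NAS IPs) can land on
# different uplinks, which is why aggregation raises aggregate bandwidth
# but never per-flow bandwidth.
other_flows = {pick_uplink("10.0.1.11", f"10.0.1.{i}") for i in range(50, 60)}
print(nfs_link, sorted(other_flows))
```

The takeaway for NFS: because a datastore is a single flow, it is pinned to one link regardless of how many links are in the bundle.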
  6. NFS Bandwidth
     • Can’t use link aggregation to increase per-datastore bandwidth
     • Can’t use DNS round robin to increase per-datastore bandwidth
     • Can’t use multiple VMkernel NICs to increase per-datastore bandwidth
     • Must move to a faster network transport (from 1Gb to 10Gb Ethernet, for example)
     • That being said, most workloads are not bandwidth constrained
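The bandwidth ceilings above reduce to simple arithmetic; the sketch below uses hypothetical link speeds to show why only a faster transport raises the per-datastore limit.

```python
def best_case_datastore_mbps(link_speed_mbps: float, links_in_bundle: int) -> float:
    # Per-datastore (per-flow) ceiling: one link's speed, regardless of
    # how many links are in the bundle.
    return link_speed_mbps

def best_case_aggregate_mbps(link_speed_mbps: float, links_in_bundle: int) -> float:
    # Aggregate ceiling across many datastores/flows: the whole bundle.
    return link_speed_mbps * links_in_bundle

# Four aggregated 1GbE links: a single datastore still tops out at ~1 Gbps...
print(best_case_datastore_mbps(1000, 4))   # 1000
# ...while many datastores together can approach ~4 Gbps...
print(best_case_aggregate_mbps(1000, 4))   # 4000
# ...and only moving to 10GbE raises the per-datastore ceiling:
print(best_case_datastore_mbps(10000, 4))  # 10000
```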
  7. Link Redundancy
     • No concept of multipathing; link redundancy must be managed at the network layer
     • No concept of multiple active “paths” per datastore
     • Link aggregation helps but is not required
  8. NFS and iSCSI Interaction
     • iSCSI traffic is generally “pinned” to specific uplinks via port binding/multipathing configuration; not so for NFS traffic
     • Traffic could “cross” uplinks under certain configurations
     • Need to keep them separate with:
       • Per-port group failover configurations
       • Separate vSwitches
       • Separate IP subnets for iSCSI and NFS traffic
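The "separate IP subnets" point lends itself to a quick sanity check. The sketch below (hypothetical addresses; not a vSphere API) uses Python's standard `ipaddress` module to confirm that iSCSI and NFS VMkernel interfaces share no subnet.

```python
from ipaddress import ip_interface

# Hypothetical VMkernel interface addresses for each traffic type
iscsi_vmks = [ip_interface("10.0.10.11/24"), ip_interface("10.0.10.12/24")]
nfs_vmks = [ip_interface("10.0.20.11/24")]

def subnets(interfaces):
    return {i.network for i in interfaces}

def traffic_separated(a, b) -> bool:
    """True when the two traffic types share no IP subnet, so neither
    can end up on the other's uplinks via same-segment forwarding."""
    return subnets(a).isdisjoint(subnets(b))

print(traffic_separated(iscsi_vmks, nfs_vmks))  # True: distinct subnets
```

A check like this is easy to fold into a host-configuration audit script alongside the per-port-group failover settings.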
  9. Routed NFS Access
     • Supported as of vSphere 5.0 U1
     • Be sure to use an FHRP (HSRP or VRRP) for gateway redundancy, and apply QoS where needed
     • Can’t use IPv6 or the vSphere Distributed Switch (VDS)
     • Be sure latency won’t be an issue (WAN routing is not supported)
     • More information available at http://blogs.vmware.com/vsphere/2012/06/vsphere-50-u1-now-supports-routed-nfs-storage-access.html
  10. Other Considerations
     • Thin-provisioned VMDKs: need the VAAI-NFS plugin to create thick-provisioned VMDKs
     • Datastore sizing: SCSI locking is not an issue, but you still need to consider:
       • Underlying disk architecture/layout and IOPS requirements
       • Ability to meet RPO/RTO
     • Jumbo frames: can be useful, but not necessarily required
     • ESXi configuration recommendations: follow vendor-provided recommended practices
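Since SCSI locking doesn't constrain NFS datastore size, sizing comes down to arithmetic against the backend. A rough sketch with made-up numbers (VM counts, per-VM IOPS, and per-export backend capability are all hypothetical):

```python
import math

def datastores_needed(total_vm_iops: int, iops_per_datastore_backend: int) -> int:
    """How many datastores (exports) the VM population must be split
    across so each stays within what its underlying disks can deliver."""
    return math.ceil(total_vm_iops / iops_per_datastore_backend)

# e.g., 200 VMs averaging 50 IOPS each, against exports whose underlying
# disk layout can each sustain ~4,000 IOPS:
print(datastores_needed(200 * 50, 4000))  # 3
```

The same split-and-ceiling approach applies to RPO/RTO: a datastore must also be small enough to back up or restore within the required window.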
  11. Coming to VMworld?
     • If you’re coming to VMworld (and you should be!), consider bringing your spouse/partner with you!
     • Spousetivities will be offering planned, organized activities for spouses/partners/friends traveling with VMworld conference attendees
     • See http://spousetivities.com for more information