Table of Contents
Executive Summary
VMware vSphere Networking Overview
Network Discovery Protocols
Cisco Nexus 1000V / VMware NSX
Network I/O Control (NIOC)
Management Traffic
vMotion Traffic
Fault Tolerance Traffic
Virtual Machine Traffic
NFS Traffic
Nutanix CVM Traffic
iSCSI Traffic
vSphere Replication Traffic
Other Defaults
Load Balancing, Failover, and NIC Teaming
NIC Team Load Balancing
Recommendation for VSS - Route Based on Originating Virtual Port
Recommendation for VDS - Route Based on Physical NIC Load (LBT)
Network Failover Detection
Notify Switches
Failover Order
Failback
Summary
Security
Virtual Networking Configuration Examples
Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal Deployment
Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal Deployment
Network Performance Optimization with Jumbo Frames
Sample Jumbo Frame Configuration
Recommended MTU Sizes for Traffic Types using Jumbo Frames
Jumbo Frames Recommendations
Conclusion
Further Information
Executive Summary
The Nutanix Virtual Computing Platform is a highly resilient converged compute and
storage platform, designed for supporting virtual environments such as VMware
vSphere. The Nutanix architecture runs a storage controller in a VM, called the
Nutanix Controller VM (CVM). This VM is run on every Nutanix server node in a
Nutanix cluster to form a highly distributed, shared-nothing converged infrastructure.
All CVMs actively work together to aggregate storage resources into a single global
pool that can be leveraged by user virtual machines running on the Nutanix server
nodes. The storage resources are managed by the Nutanix Distributed File System
(NDFS) to ensure that data and system integrity is preserved in the event of node,
disk, application, or hypervisor software failure. NDFS also delivers data protection
and high availability functionality that keeps critical data and VMs protected.
Networking and network design are critical parts of any distributed system. A
resilient network design is important to ensure connectivity between Nutanix CVMs,
for virtual machine traffic, and for vSphere management functions, such as ESXi
management and vMotion. The current generation of Nutanix Virtual Computing
Systems comes standard with redundant 10GbE and 1GbE NICs, which can be used
by vSphere for resilient virtual networking.
This Tech Note is intended to help the reader understand core networking concepts
and configuration best practices for a Nutanix cluster running with VMware vSphere.
Implementing the following best practices will enable Nutanix customers to get the
most out of their storage, networking, and virtualization investments.
Bundled in vSphere Enterprise Plus, VDS supports NIOC with Network Resource
Pools and can be centrally managed, allowing the VDS configuration to be applied to
remote ESXi hosts easily. It also supports network discovery protocols, including
Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP).
The following sections include a discussion of the various features of VSS and VDS,
along with configuration recommendations on the Nutanix platform.
Operation
The following table shows the recommended values for each Network Resource Pool:

Network Resource Pool             Share Value   Shares   Host Limit
Management Traffic                25            Low      Unlimited
vMotion Traffic                   50            Normal   Unlimited
Fault Tolerance Traffic           50            Normal   Unlimited
Virtual Machine Traffic           100           High     Unlimited
NFS Traffic                       100           High     Unlimited
Nutanix CVM Traffic (1)           100           High     Unlimited
iSCSI Traffic (2)                 100           High     Unlimited
vSphere Replication Traffic (2)   50            Normal   Unlimited
Other (2)                         50            Normal   Unlimited
Notes:
1. This is a custom Network Resource Pool, which needs to be created manually.
2. These pools are generally not applicable or relevant in Nutanix deployments.
Management Traffic
Management traffic requires minimal bandwidth; Nutanix recommends a share value of 25 across the two 10Gb interfaces. This configuration ensures a minimum of approximately 1.5Gbps of bandwidth, which is more than sufficient for ESXi management traffic and well above the minimum requirement of 1Gbps.
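The approximate guaranteed minimums quoted in this and the following sections can be derived from the NIOC share values. The sketch below is illustrative only: it assumes the share values recommended in this document and that only the pools which actually place traffic on the two 10Gb uplinks contend (NFS traffic is serviced locally in normal operation, as discussed later):

```python
# Sketch: estimate the guaranteed minimum bandwidth per NIOC network
# resource pool during contention. Only pools assumed to place traffic
# on the physical 10GbE uplinks are included.
shares = {
    "Management": 25,
    "vMotion": 50,
    "Fault Tolerance": 50,
    "Virtual Machine": 100,
    "Nutanix CVM": 100,
}

TOTAL_BANDWIDTH_GBPS = 20  # two 10GbE uplinks

total_shares = sum(shares.values())
for pool, share in shares.items():
    guaranteed = TOTAL_BANDWIDTH_GBPS * share / total_shares
    print(f"{pool}: ~{guaranteed:.1f} Gbps minimum")
```

With these assumptions, a share of 25 works out to roughly 1.5Gbps, 50 to roughly 3Gbps, and 100 to roughly 6Gbps, matching the figures quoted in this section.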
vMotion Traffic
vMotion is a burst-type workload, which uses no bandwidth until DRS or a vSphere
administrator starts a vMotion (or puts a host into maintenance mode). As such, it is
unlikely to have any significant ongoing impact on the network traffic. Nutanix
recommends a share value of 50 over two 10Gb interfaces. This will guarantee a
minimum of approximately 3Gbps which is sufficient for vMotion activity to complete
in a timely manner. This also ensures vMotion has well above the minimum
bandwidth requirement of 1Gbps.
Fault Tolerance Traffic
VMware Fault Tolerance (FT) traffic keeps the primary and secondary VMs in "lockstep". Generally, virtual machines using FT are critical, and you need to ensure FT traffic (which is also sensitive to latency) is not impacted during periods of contention. Nutanix recommends using a share value of 50 over two 10Gb interfaces. Based on this configuration, FT will be guaranteed a minimum of approximately 3Gbps, which is well above VMware's recommended minimum of 1Gbps.
NFS Traffic
NFS traffic is essential to the Nutanix Distributed File System and to virtual machine
performance, so this traffic is always critical. If NFS performance is degraded, it will
have an immediate impact on Nutanix CVM and VM performance. As such, it is
important to ensure this traffic has a significant share of the available bandwidth
during periods of contention.
In normal operation, NFS traffic is serviced locally. So NFS traffic will not impact the
physical network card unless the Nutanix Controller VM is offline in the event of
maintenance or unavailability. Under normal circumstances, there will be no NFS
traffic going across the physical NICs and the NIOC share value will have no impact
on other traffic. For this reason, it is excluded from calculations.
As a safety measure, a share value of 100 is assigned to NFS traffic to ensure it receives sufficient bandwidth in the event of network contention during CVM maintenance or unavailability. This guarantees a minimum of approximately 6Gbps of bandwidth.
Nutanix CVM Traffic
While read I/O is typically serviced locally by the Nutanix CVM, write I/O will always utilize the physical network due to synchronous replication for data fault tolerance and availability.
For optimal NDFS performance, each CVM will be guaranteed a minimum of 6Gbps
bandwidth. In most environments, this bandwidth will be more than what is required
and ensure a good amount of headroom in case of unexpected burst activity.
iSCSI Traffic
iSCSI is not a commonly used protocol within a Nutanix environment. However, if iSCSI is used within the environment, this traffic is also given a share value of 100.
Note: NIOC does not regulate in-guest iSCSI traffic by default. If in-guest iSCSI is used, it is recommended to create a dvPortGroup for in-guest iSCSI traffic, assign it to a custom network resource pool called "In Guest iSCSI", and give it a share value of 100 (High).
Other Defaults
The remaining default network resource pools are generally not relevant in Nutanix deployments and can be left at their default share value of 50.
Network Failover Detection
VMware ESXi uses one of two methods of network failover detection: beacon
probing and link status.
Beacon probing works by sending out and listening for beacon probes which are
made up of broadcast frames. Beacon probing is dependent on having three network
connections. As a result of this requirement, it is not recommended for the current
generation of Nutanix Virtual Computing Platforms which currently have two
network ports.
Link status is dependent on the link status provided by the physical NIC. Link status
can detect failures, such as a cable disconnection and/or physical switch power
failures. Link status cannot detect configuration errors or upstream failures.
To avoid the limitations of link status relating to upstream failures, enable "link state
tracking" on physical switches that support this option. This enables the switch to
pass upstream link state information back to ESXi, which will allow link status to
trigger a link down on ESXi where appropriate.
Notify Switches
The purpose of the notify switches policy setting is to enable or disable
communication by ESXi with the physical switch in the event of a failover. If
configured as "Yes", ESXi will send a notification to the physical switch to update its
lookup tables on a failover event. Nutanix recommends enabling this option to
ensure that failover occurs in a timely manner with minimal interruption to network
connectivity.
Failover Order
Using failover order allows the vSphere administrator to specify the order in which
NICs failover in the event of an issue. This is configured by assigning a physical NIC
to one of three groups: active adapters; standby adapters; or unused adapters.
In the event all active adapters lose connectivity, the highest priority standby
adapter will be used. Failover order is only required in a Nutanix environment when
using Multi-NIC vMotion.
When configuring Multi-NIC vMotion, the first dvPortGroup used for vMotion must be
configured to have one dvUplink active and the other standby, with the reverse
configured for the second dvPortGroup used for vMotion. For more information see:
Multiple-NIC vMotion in vSphere 5 (KB2007467)
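The mirrored active/standby arrangement described above can be modeled and sanity-checked programmatically. In the following sketch, the port group and dvUplink names are illustrative assumptions, not taken from this document:

```python
# Sketch: model the mirrored failover order used for Multi-NIC vMotion.
# Each dvUplink is active in exactly one vMotion dvPortGroup and standby
# in the other. Names below are illustrative, not prescriptive.
vmotion_portgroups = {
    "vMotion-01": {"active": ["dvUplink1"], "standby": ["dvUplink2"]},
    "vMotion-02": {"active": ["dvUplink2"], "standby": ["dvUplink1"]},
}

def validate_multi_nic_vmotion(portgroups):
    """Return True if every uplink is active in exactly one port group
    and standby in exactly one other (the mirrored configuration)."""
    active = sorted(u for pg in portgroups.values() for u in pg["active"])
    standby = sorted(u for pg in portgroups.values() for u in pg["standby"])
    no_duplicates = len(active) == len(set(active))
    return no_duplicates and active == standby

print(validate_multi_nic_vmotion(vmotion_portgroups))  # prints True
```

A configuration in which the same uplink is active in both port groups would fail this check, since vMotion traffic would then not be spread across both NICs.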
Failback
For customers not using the VDS and LBT, the failback feature can help rebalance
network traffic across the original NIC. This can result in improved network
performance. The only significant disadvantage of setting failback to Yes is in the
unlikely event of network instability or network route flapping, since having network
traffic fail back to the original NIC may result in intermittent or degraded network
connectivity. Nutanix recommends setting failback to Yes when using VSS and No
when using VDS.
Summary
The following table summarizes Nutanix recommendations for NIC teaming and failover:

Recommendation for Load Balancing, Failover, and NIC Teaming

Setting          Virtual Standard Switch (VSS)              Virtual Distributed Switch (VDS)
Load Balancing   Route Based on Originating Virtual Port    Route Based on Physical NIC Load (LBT)
Failback         Yes                                        No
Security
When configuring a VSS or VDS, there are three configurable options under security:
promiscuous mode; MAC address changes; and forged transmits. Each of these can
be set to "Accept" or "Reject".
In general, the most secure and appropriate setting for each of the three options is "Reject". However, there are several use cases which may require you to set a specific option to "Accept". Examples of use cases that may require "Accept" for forged transmits and MAC address changes include:
1. Microsoft load balancing in Unicast mode
2. iSCSI deployments on select storage types
For more information, see the Network Load Balancing Unicast Mode Configuration
(KB1006778) in the VMware Knowledgebase. The following are general
recommendations for Virtual Network Security settings, but Nutanix suggests all
customers carefully consider their requirements for their specific applications.
Recommendation for Virtual Networking Security

Promiscuous Mode      Reject
MAC Address Changes   Reject
Forged Transmits      Reject
Option 1 - Virtual Distributed Switch (VDS) with VSS for Nutanix Internal
deployment
Option 1 is recommended for customers using VMware vSphere Enterprise Plus who
would like to use Virtual Distributed Switches. Option 1 has a number of benefits,
including:
1. The ability to leverage advanced networking features, such as NIOC and LBT (route based on physical NIC load)
2. Reduced cabling/switching requirements
3. The ability for all traffic types to "burst" up to 10Gbps where required
4. A simple solution which only requires 802.1q configured on the physical network
5. Central configuration and management
[Diagram: Sample Option 1 configuration. A VDS with vmnic0 and vmnic1 (both 10GbE, auto-negotiated) as dvUplinks carries the Management Network VMkernel port, a vMotion VMkernel port (vmk2, VLAN 11), a Fault Tolerance VMkernel port (vmk3, VLAN 12), virtual machine port groups (VLANs 15 and 16), and Nutanix CVM traffic (VLAN 10). The onboard dual-port 1GbE NICs are unused. A separate internal-only switch with no physical adapters hosts the svm-iscsi-pg port group connected to the NTNX CVM and the vmk-svm-iscsi-pg VMkernel port (vmk1, 192.168.5.1).]
Option 2 - Virtual Standard Switch (VSS) with VSS for Nutanix Internal
Deployment
Option 2 is for customers not using VMware vSphere Enterprise Plus, or those who do not wish to use the VDS. Option 2 has a number of benefits, including:
1. It reduces cabling/switching requirements (no requirement for 1Gb ports)
2. It is a simple solution which only requires 802.1q configured on the physical network.
The following diagram shows a sample configuration for a VSS in a Nutanix environment:
[Diagram: Sample Option 2 configuration. A virtual standard switch (vSwitchNTNX) with vmnic0 and vmnic1 (both 10GbE, auto-negotiated) as uplinks, using ports 1 and 2 of the 10Gb adapter, carries the Management Network VMkernel port, a vMotion VMkernel port (vmk2, VLAN 11), a Fault Tolerance VMkernel port (vmk3, VLAN 12), virtual machine port groups (VLANs 15 and 16), and Nutanix CVM traffic (VLAN 10). A separate internal-only switch with no physical adapters hosts the svm-iscsi-pg port group connected to the NTNX CVM and the vmk-svm-iscsi-pg VMkernel port (vmk1, 192.168.5.1).]
Sample Jumbo Frame Configuration
The following diagram shows an MTU size of 9,216 bytes configured on the physical
network with two ESXi hosts configured with Jumbo Frames where appropriate. This
illustrates the recommended configuration in a Nutanix environment. A combination
of Jumbo and non-Jumbo Frames can be supported on the same deployment.
[Diagram: Two ESXi hosts attached to a physical network configured with an MTU of 9,216. On each host, the Nutanix (dv)PortGroup, the NFS/iSCSI VMkernel port, and the vMotion and Fault Tolerance VMkernel ports are configured with MTU 9000, while the virtual machine traffic (dv)PortGroup and the ESXi Management VMkernel port remain at MTU 1500.]
The following table shows the various traffic types in a VMware vSphere / Nutanix environment and the Nutanix recommended Maximum Transmission Unit (MTU) for each.

Traffic Type                     Recommended MTU
1) ESXi management               1,500
2) Virtual machine traffic (3)   1,500
3) vMotion (2)                   9,000
4) Fault Tolerance (2)           9,000
5) NFS (2)                       9,000
6) Nutanix CVM traffic (2)       9,000
Physical switches (1)            9,216

Notes:
1 The minimum MTU supported is 1,524, but >=1,600 is recommended.
2 Assumes traffic types are not routed.
3 Jumbo Frames can be beneficial in selected use cases, but are not always required.
Note: VMware also recommends using Jumbo Frames for IP-based storage, as
discussed in Performance Best Practices for vSphere 5.5 on the VMware Knowledge
Base site.
The Nutanix CVMs must be configured for Jumbo Frames on both the internal and
external interfaces. The converged network also needs to be configured for Jumbo
Frames. Most importantly, the configuration needs to be validated to ensure Jumbo
Frames are properly implemented end-to-end.
If Jumbo Frames are not properly implemented on the end-to-end network, packet
fragmentation can occur. Packet fragmentation will result in degraded performance
and higher overhead on the network.
The following ping commands can help you verify that end-to-end communication is possible at the Jumbo Frame MTU size without fragmentation occurring:
Windows: ping -l 8972 -f <ip_address>
Linux: ping -s 8972 -M do <ip_address>
ESXi: vmkping -d -s 8972 <ip_address>
Note: In ESXi 5.1 and later, you can specify which vmkernel port to use for
outgoing ICMP traffic with the -I option.
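The 8,972-byte payload used with vmkping above is the 9,000-byte MTU minus the IPv4 and ICMP header overhead. A minimal sketch of the arithmetic:

```python
# Sketch: why don't-fragment ping tests for a 9,000-byte MTU use an
# 8,972-byte payload: the IPv4 and ICMP headers consume 28 bytes.
MTU = 9000          # Jumbo Frame MTU on the VMkernel interface
IPV4_HEADER = 20    # IPv4 header without options
ICMP_HEADER = 8     # ICMP echo request header

max_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_payload)  # prints 8972
```

If a ping with this payload and the don't-fragment flag fails while a smaller payload succeeds, some device in the path is not configured for Jumbo Frames end-to-end.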
Conclusion
The Nutanix Virtual Computing Platform is a highly resilient converged compute and
storage platform designed for supporting virtual environments such as VMware
vSphere. Understanding fundamental Nutanix and VMware networking configuration
features and recommendations is key to designing a scalable and high performing
solution which meets customer requirements. Leveraging the best practices outlined
in this document will enable Nutanix and VMware customers to get the most out of
their storage, compute, virtualization, and networking investments.
Further Information
You can continue the conversation on the Nutanix Next online community
(next.nutanix.com). For more information relating to VMware vSphere or to review
other Nutanix Tech Notes, please visit the Nutanix website at
http://www.nutanix.com/resources/.