Overview
Administrators implementing VMware® Infrastructure 3 (VI3) on a LeftHand Networks® SAN should read this document in its entirety. Important configuration notes, best practices, and frequently asked questions are outlined to accelerate a successful deployment.
Contents
Overview
Initial iSCSI Setup of VI3 server
Licensing
Networking for the Software initiator
Enabling the iSCSI software adapter
HBA connectivity and networking
Connecting and using iSCSI volumes
Creating the first iSCSI volume on the SAN
Enabling VIPLB for performance
Discovery of the first iSCSI volume
Discovering additional volumes
Disconnecting iSCSI volumes from ESX or ESXi hosts
Troubleshooting Volume Connectivity
Creating a new datastore on the iSCSI volume
Expanding a SAN/iQ volume and extending a datastore on the iSCSI volume
Snapshots, Remote IP Copy, and SmartClone volumes
Resignaturing
SAN/iQ snapshots of VI3 raw devices
SAN/iQ snapshots of VI3 VMFS datastores
SAN/iQ Remote IP Copy Volumes and SRM
SAN/iQ SmartClone Volumes
VMotion, Clustering, HA, DRS, and VCB
Choosing Datastores and Volumes for virtual machines
Best Practices
FAQ
Networking for the Software initiator
The ideal networking configuration for iSCSI depends on the number of Gigabit network connections available to a VI3 server. The most common configurations, with 2, 4, and 6 ports, are outlined here for reference.
VI3 servers with only 2 Gigabit network ports are not ideal for iSCSI SANs connected by the software initiator because there is not enough network bandwidth to ensure good performance. If performance is not a concern, however, VI3 servers can still function well with only 2 Gigabit ports. VI3 servers with only 2 Gigabit network ports should be configured with:
o As a best practice, the VMkernel network's failover order should be reversed from the rest of the port groups on the switch. This will make the best use of bandwidth by favoring the second adapter for iSCSI and VMotion and favoring the first adapter for VM network traffic and management access.
VI3 servers with 4 Gigabit network ports can perform better by separating management and virtual machine traffic from iSCSI and VMotion traffic. VI3 servers with 4 Gigabit network ports should be configured with:
o Two virtual switches, each comprised of two Gigabit ports teamed together. If possible, one port from each of two separate Gigabit adapters should be used. For example, if using two onboard Gigabit adapters and a dual-port Ethernet card, team together port 0 from the onboard adapter and port 0 from the Ethernet card, then team together port 1 from the onboard adapter and port 1 from the Ethernet card. This provides protection from some bus or card failures.
o For ESX, a service console port (required for iSCSI authentication; not required for ESXi)
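As a sketch, this 4-port layout can also be built from the ESX service console. The adapter names (vmnic0-vmnic3), the port group name, and the IP address below are assumptions; adjust them to match the host:

```shell
# Sketch of a 4-port layout from the ESX 3.x service console.
# Assumptions: vmnic0/vmnic1 are the onboard ports, vmnic2/vmnic3 are the
# add-in card, and 10.0.1.10/24 is a free address on the iSCSI network.

# vSwitch0 (management + VM traffic): team port 0 of each card
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0

# vSwitch1 (iSCSI + VMotion): team port 1 of each card
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "iSCSI VMkernel" vSwitch1
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 "iSCSI VMkernel"
```

The same teaming and port group configuration is available in the VI client under Configuration > Networking.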
VI3 servers with 6 Gigabit network ports are ideal for delivering performance with the software
iSCSI initiator. The improvement over 4 ports is achieved by separating VMotion traffic and
iSCSI traffic so they don’t have to share bandwidth. Both iSCSI and VMotion will perform
better in this environment. VI3 servers with 6 Gigabit network ports should be configured with:
o Three virtual switches each comprised of two Gigabit ports teamed together. If possible one
port from separate Gigabit adapters should be used in each team to prevent some bus or card
failures from affecting an entire virtual switch.
The first virtual switch should have:
More than 6 Ports: If more than 6 network adapters are available, additional adapters can be added to the iSCSI virtual switch to increase available bandwidth, or used for any other desired network services.
each configured with a path to all iSCSI targets for failover. Configuring multiple HBA initiators to connect to the same target requires configuring authentication for each initiator's IQN on the SAN. Typically this is configured as two SAN/iQ "Servers" (one for each HBA initiator), each with permissions to the same volumes on the SAN.
Discovering additional volumes
A reboot of a VI3 server completely refreshes iSCSI connections and volumes, including removing ones that are no longer available. Without rebooting, additional volumes can be logged into and discovered by simply performing a rescan of the iSCSI software adapter or of all adapters.
HBAs can also add targets manually by configuring static targets.
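A rescan can also be triggered from the ESX service console instead of the VI client; a minimal sketch, where the software iSCSI adapter name (vmhba32 here) is an assumption that varies by host, so check the Storage Adapters list first:

```shell
# Rescan the software iSCSI adapter for new targets and VMFS volumes.
# vmhba32 is an assumption -- the adapter name differs between hosts.
esxcfg-rescan vmhba32
```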
Troubleshooting Volume Connectivity
o Ping the virtual IP address from the iSCSI initiator to ensure basic network connectivity.
For the software initiator, this can be done by logging into the VI3 service console and executing vmkping x.x.x.x and ping x.x.x.x. Both of those commands must succeed or the networking is not correct for iSCSI. The vmkping ensures that a VMkernel network can reach the SAN, and the ping ensures the service console can reach the SAN. Both must be able to reach the SAN to log into new volumes.
HBAs typically have their own ping utilities inside the BIOS of the HBA.
ESXi has a network troubleshooting utility that can be used from the KVM console to attempt a ping to the SAN.
o Double-check all IQN names and CHAP entries. For iSCSI authentication to work correctly these must be exact. Simplifying the IQN name to something shorter than the default can help with troubleshooting.
o Make sure all "Servers" on the SAN have load balancing enabled. If there is a mix, where some have it enabled and some do not, then those that do not might not connect to their volumes.
o Enable resignaturing. Volumes that have been copied, snapshot, or restored from backup can look like snapshot LUNs to VI3. If resignaturing is not enabled, VI3 will hide those volumes.
o Verify that the iSCSI protocol is allowed in the firewall rules of ESX. Version 3.5 of ESX
does not allow iSCSI traffic through the firewall by default in most installations.
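The connectivity and firewall checks above can be run from the ESX service console; a minimal sketch, assuming a SAN virtual IP of 10.0.1.50 (substitute your own):

```shell
# Basic connectivity and firewall checks for software iSCSI (ESX 3.x).
vmkping 10.0.1.50                  # VMkernel network path to the SAN
ping 10.0.1.50                     # service console path to the SAN
esxcfg-firewall -q swISCSIClient   # query the software iSCSI client rule
esxcfg-firewall -e swISCSIClient   # enable it if traffic is being blocked
```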
Creating a new datastore on the iSCSI volume
Now that the VI3 server has an iSCSI SAN volume connected, it can be formatted as a new VMFS datastore or mounted as a raw device mapping (RDM) directly to virtual machines. New datastores are formatted from within the VMware VirtualCenter client.
Snapshots, Remote IP Copy, and SmartClone volumes
Resignaturing
In order for VI3 servers to utilize SAN-based snapshots, resignaturing must be enabled on the VI3 server. If resignaturing is not enabled, the VI3 server will report that the volume is blank and needs to be formatted, even though it contains a valid datastore. Resignaturing is one of the advanced settings of an ESX or ESXi server and can be edited in the VirtualCenter client. Be aware that some SANs cannot support resignaturing. If SAN storage other than LeftHand Networks SANs is also attached to the same VI3 server, refer to VMware's SAN configuration guide to verify that resignaturing is an option. For more information on resignaturing, please refer to the VMware documentation:
http://www.vmware.com/support/pubs/vi_pubs.html
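On ESX, the setting can also be changed from the service console; a sketch (the same value is exposed in the VI client under Configuration > Advanced Settings > LVM):

```shell
# Enable volume resignaturing, then read the value back to confirm.
esxcfg-advcfg -s 1 /LVM/EnableResignature
esxcfg-advcfg -g /LVM/EnableResignature
```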
For more information, or to download the LeftHand Networks SRA for VMware Site Recovery Manager, go to http://resources.lefthandnetworks.com/forms/VMware-LeftHand-SRA-Download
SAN/iQ SmartClone Volumes
SmartClone™ volumes are very useful in a VI3 environment. All virtual machines stored on a single volume can be cloned instantly and without replicating data. SmartClone volumes only consume changed data from the time the clone was taken. This is the best way to deploy large quantities of cloned virtual machines or virtual desktops. SmartClone volumes can be used with any other SAN/iQ features, such as snapshots or Remote IP Copy, without limitation. SmartClone volumes are also very useful for performing tests on virtual machines by quickly reproducing them without taking up space on the SAN to actually copy them.
VCB proxy servers should have their own "Server" configured on the SAN with read-only access to the volumes that ESX or ESXi servers are accessing. VCB does not require write access, and this prevents the Windows-based VCB proxy server from inadvertently writing new signatures to VMFS volumes that are in use by ESX or ESXi.
If virtual machines that have no relationship are mixed on a single volume, those virtual machines will have to be snapshot, rolled back, remotely copied, and cloned together.
Performance of virtual machines could also be affected if too many virtual machines are located on a single volume. The more virtual machines on a volume, the more IO and SCSI reservation contention there is for that volume. Up to sixteen virtual machines on a single volume will function, but they might experience degraded performance, depending on the hardware configuration, if all VMs are booted at the same time. Four to eight virtual machines per volume is less likely to affect performance.
Best Practices
Use at least two Gigabit network adapters teamed together for performance and failover of
the iSCSI connection.
Teaming network adapters provides redundancy for networking components such as adapters, cables, and switches. An added benefit of teaming is an increase in available IO bandwidth. Network teams on SAN/iQ storage nodes are easily configured in the CMC by selecting 2 active links and enabling a "bond". Adaptive load balancing (ALB) is the most common teaming method used on SAN/iQ storage nodes. Network teams on VI3 servers are configured at the virtual switch level. Testing has shown that a VI3 server's NIC team for iSCSI handles network failures more smoothly if it has "rolling failover" enabled.
The VMkernel network for iSCSI should be separate from the management and virtual networks used by virtual machines. If enough networks are available, VMotion should also use a separate network.
Enable load balancing for iSCSI for improved performance.
The Load Balancing feature of a SAN/iQ "Server" allows iSCSI connections to be redirected to the least busy storage node in the cluster. This keeps the load on storage nodes throughout the cluster as even as possible and improves the overall performance of the SAN. This setting is enabled by default in SAN/iQ 8 management groups but had to be enabled explicitly in previous versions.
Virtual machines that can be backed up and restored together can share the same volume.
Since SAN/iQ snapshots, Remote IP Copy volumes, and SmartClone volumes work on a per-volume basis, it is best to group virtual machines on volumes based on their backup and restore relationships. For example, a test environment made up of a domain controller and a few application servers would be a good candidate to put on the same volume. Those could be snapshot, cloned, and restored as one unit.
FAQ
I added the SAN/iQ cluster virtual IP address to a VI3 server's dynamic discovery but don't see a new target?
Most likely you just need to select "Rescan" under the "Configuration" tab, "Storage Adapters" section, or you have not configured authentication on the SAN correctly. Also, confirm all network configurations. Please refer to the Discovery of the first iSCSI volume section.
Why can't my VI3 server mount more than one SAN/iQ volume?
SAN/iQ software version 6.5 with patch 10004 or higher is necessary to mount multiple volumes
on VI3 ESX servers. Contact support@lefthandnetworks.com to receive the patch for 6.5.
Upgrading to 6.6.00.4101 or higher is preferred.
Is SAN/iQ Virtual IP Load Balancing supported for VI3 initiators?
The VI3 software adapter and hardware adapters are supported by SAN/iQ Virtual IP Load
Balancing. As a best practice this should be enabled on the authentication groups of all VI3
initiators. Load Balancing is enabled by default in SAN/iQ 8 management groups.
I rolled back / mounted a snapshot and the VI3 server says it needs to be formatted?
You need to enable resignaturing on your VI3 servers to mount or roll back SAN based
snapshots. Please refer to the Resignaturing section.
Should I use the VI3 software initiator or a hardware initiator (HBA)?
There are many ways to connect and present an iSCSI volume to a virtual machine on a VI3 server. These include using the VI3 software adapter, using a hardware adapter (HBA), and, for some guest operating systems, using the guest's own software iSCSI adapter. For VMFS datastores containing virtual machine definitions and virtual disk files, the VI3 server's hardware or software adapter must be used. Which one, HBA or software, is debatable. Each gives you full VI3 functionality and is supported equally. An HBA's advantages are in supporting boot from SAN and offloading iSCSI processing from the VI3 server. If boot from SAN is not necessary, then the software initiator is a good choice. The impact of iSCSI processing on modern processors is minimal. With either one, performance is more a function of the quality of the physical network, the disk quantity, and the disk rotation speed of the SAN being attached to.
What Initiator Should I use for additional Raw Device Mappings or Virtual Disks?
Aside from the boot LUN/volume, additional volumes should be used for storing application data. In particular, best practice for many applications requires separate volumes for databases and logs. These should be presented either as raw devices (RDMs) through your chosen VI3 server initiator or connected as iSCSI disks directly through the virtual machine's guest operating system software initiator. Using RDMs or direct iSCSI allows these application volumes to be transported seamlessly between physical and virtual servers, since they are formatted in the native file systems of the operating system (NTFS, EXT3, etc.). In order to use the guest operating system initiator successfully, ensure these guidelines are followed:
o The guest operating system initiator is supported and listed in LeftHand Networks
compatibility matrix
o For good performance and failover, the guest network the initiator is going to use should be at least dual Gigabit and separate from other virtual networks: VMkernel, VMotion, Service Console, virtual machine public networks, etc.
o The guest operating system is using the vmxnet NIC driver from VMware tools.
o The virtual machine will not be used in conjunction with VMware Site Recovery Manager
(SRM). SRM does not work with volumes connected by guest initiators.
Please refer to the Choosing Datastores and Volumes for virtual machines section.
Are jumbo frames supported?
Jumbo frames (IP packets configured larger than the typical 1500 bytes, up to 9000) are supported by all SAN/iQ SANs. In order for jumbo frames to be effective, they must be enabled end to end, including network adapters and switches. ESX version 3.5 allows configuration of jumbo frames but does not support them for use with the VI3 software iSCSI initiator. HBAs are capable of utilizing jumbo frames on any version of ESX.
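On ESX 3.5, the virtual switch MTU can be raised from the service console; a sketch, where vSwitch1 is an assumed switch name, and as noted above this does not enable jumbo frames for the software iSCSI initiator:

```shell
# Raise the MTU of a virtual switch to 9000 (ESX 3.5 service console).
# The physical NICs and every switch in the path must also support 9000.
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -l    # list vSwitches to confirm the MTU
```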
Is VCB (VMware Consolidated Backup) supported on iSCSI?
VMware Consolidated Backup 1.0.3 and higher is fully supported on iSCSI SANs.
Why does my 2TB or higher iSCSI volume show up as 0MB to the VI3 server?
What versions of SAN/iQ software are supported with VI3?
SAN/iQ software 6.5 + patch 10004 will support all features except VMotion with more than one virtual machine on a volume.
SAN/iQ software 6.6 + patch 10005 supports all features VMware enables for iSCSI. The SAN/iQ software version of storage nodes with the patch applied should be 6.6.00.4101 or higher.
All subsequent releases, 6.6 SP1, 7, 7 SP1, and SAN/iQ 8 software, support VI3.