Infrastructure Virtualization
Table of Contents
Executive Summary
Assessment Scenario
Concentrator Value
Executive Summary
“For cost-conscious IT decision makers, StoneFly Storage Concentrators incorporate a virtualization engine for storage provisioning and management in order to add another important advantage: the ability to cut operating costs.”
CIOs are frequently under the gun to provide a more demonstrably responsive IT infrastructure to meet rapidly accelerating changes in business cycles. As a result of that pressure, IT must frequently deploy new resources or repurpose existing resources. More importantly, it is not the acquisition of resources so much as the management of those resources that is the biggest driver of IT costs. The general rule of thumb is that operating costs for managing storage on a per-gigabyte basis are three to ten times greater than the capital costs of storage acquisition. That's because provisioning and management tasks associated with storage resources are highly labor-intensive and often burdened by bureaucratic inefficiencies.
That's why system and storage virtualization share the spotlight in the McKinsey CIO survey. What's more, deriving the maximal benefits from system virtualization in a VI environment requires storage virtualization as a necessary prerequisite. The availability and mobility of both a VM and its data play an important role in such daily operational tasks as load balancing and system testing. Not surprisingly, VM availability and mobility rise to the forefront in a disaster recovery scenario: the image of files stranded on storage directly attached to a nonfunctional server makes a bad poster for high availability.
SAN technology has long been the premier means of consolidating storage resources and streamlining management in large data centers. Nonetheless, storage virtualization for physical servers and commercial operating systems, such as Microsoft® Windows and Linux®, is burdened with complexity because most commercial operating systems assume exclusive ownership of storage volumes.
By providing distributed file locking between systems, VMFS renders the issue of volume ownership moot. That opens the door to using iSCSI to extend the benefits of physical and functional separation via a cost-effective, lightweight SAN. As a result, iSCSI has become de rigueur in large datacenters for ESX servers.
Assessment Scenario
“By performing all partitioning and management functions for virtual storage volumes on the iSCSI concentrator and not on the FC array, openBench Labs was able to leverage key capabilities of StoneFusion to reduce operating costs by enabling system administrators to carry out tasks that normally require coordination with a storage administrator.”
To support numerous iSCSI client systems, storage capacity is often a primary concern when configuring an iSCSI fabric. Using low-cost, high-capacity SATA drives, we were able to configure our IBM DS4100 array with 3.2TB of storage. From that pool, we assigned 1.6TB to the StoneFly i4000 in bulk via a single LUN.
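The provisioning model above can be sketched in a few lines: the FC array exports one bulk LUN to the concentrator, which then carves virtual volumes for iSCSI clients without further array-side changes. The class and method names here are illustrative, not StoneFusion APIs, and the volume sizes beyond the 1.6TB bulk LUN are hypothetical.

```python
TB = 1000**4  # decimal terabytes, as array vendors typically count capacity

class Concentrator:
    """Toy model of concentrator-side volume partitioning from one bulk LUN."""

    def __init__(self, bulk_lun_bytes):
        self.capacity = bulk_lun_bytes
        self.volumes = {}

    def provision(self, name, size_bytes):
        """Carve a virtual volume out of the bulk LUN."""
        allocated = sum(self.volumes.values())
        if allocated + size_bytes > self.capacity:
            raise ValueError("bulk LUN exhausted")
        self.volumes[name] = size_bytes

    def free_bytes(self):
        return self.capacity - sum(self.volumes.values())

# The DS4100 held 3.2TB; half was assigned to the i4000 as a single LUN.
i4000 = Concentrator(bulk_lun_bytes=int(1.6 * TB))
i4000.provision("esx-datastore-1", 500 * 1000**3)  # hypothetical 500GB volume
i4000.provision("win2003-vol", 200 * 1000**3)      # hypothetical 200GB volume
print(i4000.free_bytes() / TB)  # 0.9 (TB remaining in the bulk LUN)
```

The point of the model is that only the first step (exporting the bulk LUN) touches the FC array; every subsequent `provision` call is concentrator-side work a system administrator can do alone.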
By performing all partitioning and management functions for virtual storage volumes on the iSCSI concentrator and not on the FC array, openBench Labs was able to leverage key capabilities of StoneFusion to reduce operating costs by enabling system administrators to carry out tasks that normally require coordination with a storage administrator. In particular, we were able to consolidate storage from multiple FC arrays into a pool that could be managed from the StoneFly i4000. More importantly, we were able to configure
BENCHMARK BASICS
As with all other storage transport protocols, iSCSI performance has two dimensions: data throughput, which is typically measured in MB per second, and data accessibility, which is measured in I/O operations completed per second (IOPS). To assess overall iSCSI performance, we ran our oblDisk and oblLoad benchmarks, which measure throughput and accessibility, respectively.
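The two dimensions are linked by request size: throughput equals IOPS times bytes per request. A quick sketch of that arithmetic, using the 8KB request size the oblLoad runs in this report employ:

```python
def throughput_mb_per_s(iops, request_bytes):
    """Convert an IOPS rate at a fixed request size into MB/s (decimal MB)."""
    return iops * request_bytes / 1_000_000

# 10,000 IOPS with 8KB (8 x 1024 byte) requests, the load the i4000
# sustained later in testing, works out to roughly 82 MB/s:
print(throughput_mb_per_s(10_000, 8 * 1024))  # 81.92
```

This is why small-request workloads are reported in IOPS while large sequential transfers are reported in MB/s: at 8KB per request, even a very high IOPS rate consumes only a fraction of a 1-Gb Ethernet link's bandwidth.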
Processing iSCSI traffic in software burdens the host CPU with handling both network packets and SCSI commands. To eliminate this overhead from our host server, we installed a QLogic iSCSI HBA for use in physical server tests. In addition to TCP packet processing, which a TOE offloads, the QLogic HBA also handles the processing of the embedded SCSI packets.
That hot spot provides a means to test the caching capabilities of the underlying storage system: as the number of disk daemons increases, so should the effectiveness of the array controller's caching within the hot spot. As noted earlier, the IBM DS4100 storage system's robust support for dynamic tuning of cache performance is precisely why we chose that array for our tests.
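The internals of oblLoad are not published here, so the following is only a generic sketch, under assumed parameters, of how a load generator can concentrate a share of its requests in a hot spot so that more daemons drive a higher array-cache hit rate:

```python
import random

def next_offset(disk_blocks, hot_blocks, hot_fraction=0.8, rng=random):
    """Pick a block offset: most requests land in a small hot region,
    the rest are spread across the whole disk (cache-hostile)."""
    if rng.random() < hot_fraction:
        return rng.randrange(hot_blocks)   # hot spot: cache-friendly
    return rng.randrange(disk_blocks)      # cold region: forces disk reads

# Hypothetical geometry: a 2^21-block disk with a 2^14-block hot spot.
random.seed(1)
offsets = [next_offset(disk_blocks=2**21, hot_blocks=2**14)
           for _ in range(10_000)]
hot_hits = sum(o < 2**14 for o in offsets)
print(hot_hits / len(offsets))  # roughly 0.8
```

Because the hot region is small enough to fit in controller cache, the array's cache hit rate rises as more daemons hammer it, which is exactly the behavior the DS4100's tunable cache was chosen to exhibit.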
VIRTUAL CONSOLIDATION
The standalone tests on the HP ProLiant ML350 G3 servers also provided an interesting case study for server consolidation through system, storage, and network virtualization. Virtualization extends the power of IT to innovate by providing the means to leverage logical representations of resources. Whether through aggregation or deconstruction, virtualized resources are not restricted by physical configuration, implementation, or geographic location: that makes a virtual representation more powerful and able to provide greater benefits than the original physical configuration. When maximally exploited by IT, virtualization becomes a platform for innovation whose benefits move far beyond basic reductions in the total cost of ownership (TCO).
To assess the performance of the StoneFly i4000 in a VI environment, openBench Labs set up two quad-processor servers: an HP ProLiant DL580 G3 and a Dell 1900. Both servers ran VMware® ESX v3.0.1 and hosted from one to four simultaneous VMs that were running either Windows Server 2003 SP2 or SUSE Linux Enterprise Server 10 SP1.
On the ESX server's virtual LAN, we created a VMware kernel port for the VMware software initiator to enable iSCSI connections. In addition, we installed a QLogic iSCSI HBA on each ESX server; within the VI console, the iSCSI HBA immediately appeared as an iSCSI-based Storage Adapter. Whether through the hardware HBA or the software initiator, ESX handled every iSCSI connection.
On the other hand, the file system for ESX, dubbed VMFS, has a built-in mechanism to handle distributed file locking. Thanks to that mechanism, exclusive volume ownership is not a burning issue in a VI environment. What's more, VMFS avoids the massive overhead that a DLM typically imposes: VMFS simply treats each disk volume as a single-file image in a way that is loosely analogous to an ISO-formatted CD-ROM. When a VM's OS mounts a disk, it opens a disk-image file; VMFS locks that file; and the VM's OS gains exclusive ownership of the disk volume.

Real Performance, Virtual Advantage

With the issue of volume ownership moot, iSCSI becomes a perfect way to extend the benefits of physical and functional separation via a more cost-effective, easy-to-manage, lightweight IP SAN fabric. That has made iSCSI de rigueur for ESX servers in large datacenters.

By using the StoneFly i4000 Storage Concentrator running the StoneFusion OS to anchor an iSCSI fabric, IT can limit the involvement of storage administrators with the iSCSI fabric. A storage administrator will only be needed to provision the iSCSI concentrator with bulk storage from an FC SAN array. System administrators can easily manage the storage provisioning needs of their iSCSI client systems, including ESX servers, by invoking the storage provisioning functions within StoneFusion.

Using StoneFusion's management GUI, openBench Labs was able to invoke a rich collection of storage management utilities, including a number of high-availability tools to create copies and maintain mirror images of volumes. Within a small VI environment, system administrators can also use these tools in conjunction with the basic VI client software to provide simple VM template management capabilities that would normally require an additional server running VMware Virtual Center.
PHYSICAL BASELINE
We began testing on an HP ProLiant ML350 G3 server running Windows Server 2003. Thanks to Microsoft's freely available software initiator, systems running a Windows OS have become the premier platform for iSCSI. The StoneFusion OS also supports the Microsoft Internet Storage Name Service (iSNS), which is far less prevalent than the Microsoft iSCSI initiator. By registering with iSNS, the StoneFly i4000 ensures automatic discovery by the Microsoft initiator.
[Figure: oblLoad IOPS as a function of the number of daemon processes, comparing the QLA4050 iSCSI HBA with the MS Initiator and Ethernet TOE]

The QLogic iSCSI HBA offloads processing of the TCP packets that encapsulate the SCSI command packets, and thereby provides a distinct edge in processing IOPS. This is very significant for maximizing performance of the StoneFly i4000, which was able to sustain a load of 10,000 IOPS with 8KB data requests. The IOPS throughput patterns held true for small numbers of daemons, which is when the host is most sensitive to changes in overhead. With more than 12 daemons, the difference in the number of IOPS completed varied by less than 2%.

We observed a very different pattern in IOPS on Linux™. Open-iSCSI configuration and control is still a very manual task that requires each iSCSI target portal to be explicitly defined. More importantly, the developers classify the current Open-iSCSI release as "semi-stable." As a result, the initiator remains an optional component in most Linux server distributions.
[Figure: peak IOPS and sequential I/O throughput results]
The VMware ESX Server provides two ways to make virtual storage volumes accessible to virtual machines. The first way is to use a VMFS datastore to encapsulate a VM's disk in a way that is analogous to a CD-ROM image file. The VM disk is a single large VMFS file that is presented to the VM's OS as a SCSI disk drive, which contains a file system with many individual files. In this scheme, VMFS provides a distributed lock manager (DLM) for the VMFS volume and its content of VM disk images. With a DLM, a datastore can contain multiple VM disk files that are accessed by multiple ESX Servers.
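The whole-file locking scheme described above can be loosely illustrated with an ordinary advisory file lock: instead of taking fine-grained locks inside the volume, the first "VM" locks the entire disk-image file once, and any second mount attempt is refused. This is only an analogy on a POSIX host; VMFS's on-disk lock records are more involved than a host-local `flock`.

```python
import fcntl
import os
import tempfile

# A stand-in disk-image file (the ".vmdk" suffix is just illustrative).
image = tempfile.NamedTemporaryFile(delete=False, suffix=".vmdk")
image.close()

def mount_disk(path):
    """Open the disk image and take an exclusive whole-file lock,
    analogous to VMFS locking a VM disk image on mount."""
    fd = os.open(path, os.O_RDWR)
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return fd

owner = mount_disk(image.name)   # first VM gains exclusive ownership
try:
    mount_disk(image.name)       # a second mount attempt conflicts
except BlockingIOError:
    print("second mount refused")
finally:
    os.close(owner)              # releasing the lock frees the image
    os.unlink(image.name)
```

One coarse lock per image file is what lets a datastore hold many VM disks shared across ESX Servers without per-block lock traffic: contention only occurs when two hosts try to own the same image at once.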
[Figure: IOPS results for iSCSI testing within a VM]

We began testing iSCSI within a VM running SUSE Linux Enterprise Server 10 SP1. In this case, IOPS performance improved with both the QLogic iSCSI HBA and with the VMware iSCSI initiator in conjunction with the Ethernet TOE, as compared to running a physical server.
[Figure: oblLoad IOPS for a VM as a function of daemon processes; legend: QLA4050 HBA SLES 10, QLA4050 HBA VMware ESX Server, VMware Initiator and Ethernet TOE]

With a VM running the same oblLoad tests, the net performance result was a throughput level that often showed an absolute increase in performance on the order of 200-to-250% for any given number of oblLoad disk daemons. Using a ReiserFS-formatted volume, IOPS performance rose by upwards of 200% over a physical server when we used the VMware iSCSI initiator. The jump in performance was on the order of 300% when the ESX server employed the QLogic iSCSI HBA.

With both physical and virtual systems sustaining 10,000 IOPS loads using 8KB data requests, the StoneFly i4000 provided exceptional performance in routing FC data traffic over a 1-Gb Ethernet fabric via iSCSI. Nonetheless, it was in the added provisioning features of StoneFusion that the StoneFly i4000 made the biggest impact in managing a VI environment.
Concentrator Value
“ESX system administrators can leverage the high-availability functions of the StoneFusion OS, including the creation of snapshots and mirrors, to generate and maintain OS templates and distribute data files as VMs are migrated in a VI environment.”
DOING IT

For CIOs today, two top-of-mind propositions are resource consolidation and resource virtualization. Both are considered to be excellent ways to reduce IT operations costs through efficient and effective utilization of IT resources, extending from capital equipment to human capital. Via the StoneFusion OS storage-provisioning engine, the StoneFly i4000 Storage Concentrator can directly help raise the utilization rate of FC storage while extending the benefits of storage virtualization to a broad array of new client systems over Ethernet.

StoneFly i4000 Storage Concentrator Quick ROI
1) Aggregate and Manage FC Array Storage for Better Resource Utilization
2) Extended iSCSI Provisioning Functionality
3) Advanced HA Functionality Including Snapshots and Mirrors
4) Fibre Channel Path Management and Automatic Ethernet TOE Teaming
5) 10,000 IOPS Benchmark Throughput (8KB Requests with Windows Server 2003)
6) 133MB/s Benchmark Sequential I/O Throughput (SUSE Linux Enterprise Server 10)
On the other hand, exclusive volume ownership is not an issue for ESX
servers, since VMFS handles distributed file locking. In addition, the files
in a VMFS volume are single-file images of VM disks. This means that
when a VM mounts a disk image, VMFS locks that image as a VMFS file
and the VM has exclusive ownership of its disk volume.
With the issue of ownership moot for VMFS datastores, iSCSI becomes a perfect way to cost-effectively extend the benefits of physical and functional separation from an FC SAN. With the StoneFly i4000, that functionality can be further leveraged by allowing system administrators to take on many of the storage provisioning tasks that normally require coordination with a storage administrator. What's more, StoneFusion's built-in advanced RAS storage management features make it easy to create virtual-disk templates for VM operating systems in order to standardize IT configurations and simplify system provisioning.