Contents

To the business application, everything is virtualized
Storage virtualization today
Fabric-based virtualization
HP StorageWorks External Storage XP
  Application transparent data mobility
  Simplified volume management and provisioning
  Heterogeneous local and remote data replication
External Storage XP details
  The redundant StorageWorks XP architecture
  Fibre Channel CHIP architecture
  XP software
  XP terms
External Storage XP configurations
  An XP system using External Storage XP without remote replication
  StorageWorks XP using External Storage XP with remote replication
  Supported External Storage XP configuration topologies
  Basic building block
  Specific configurations
    Array repurposing
    Data migration across heterogeneous external disk array tiers
    Remote data replication
    Low-utilization data archive
    Microsoft Exchange
    Oracle
  Basic rules of thumb
  Sizing External Storage XP configurations
  Partitioning
Summary
For more information
Today, the industry is focusing on three main areas of storage virtualization: storage-based, network- or fabric-based, and server-based. Storage-based virtualization focuses on storage devices (disk, tape, and so on) within a single storage subsystem, and abstracts the physical differences in the storage devices to enable the capacity to be aggregated, used, and managed as a single logical storage pool. Fabric-based virtualization performs functions similar to storage-based virtualization but does the abstracting, aggregating, and logical pooling at the fabric level, thus encompassing multiple storage systems. Server-based virtualization isolates the physical differences in storage devices from the applications at the operating system and lower-level driver code on each server running such software, thus encompassing multiple storage systems but only one server. Virtualization in any of the three areas ultimately leads to results that can be measured in terms of efficiency and simplified management. However, each area of virtualization offers specific advantages and drawbacks, mainly in terms of its scope (the amount and type of resources that are affected) and the granularity of the resulting logical view. Storage-based virtualization, for instance, is easily implemented and simple to manage, but its span of control is often limited to a single storage subsystem. Fabric-based virtualization, by comparison, offers huge scalability and heterogeneous support, although implementations differ among manufacturers. Also, depending on the implementation, unique attributes of the subsystems being pooled may be lost in the fabric-based pool.
Fabric-based virtualization
Fabric-based virtualization offers the ability to span multiple heterogeneous storage subsystems. It raises the level of abstraction from the storage subsystem to the storage network. It enables a logical view of multiple storage subsystems connected to a storage area network (SAN). Also, the SAN itself inherently virtualizes the routing of data over multiple paths to reduce points of failure and improve quality of service, expedite zoning and partitioning, and facilitate the movement of data across long distances. In this way, fabric-based virtualization goes right to the heart of the cost-complexity paradox by removing the complexity that drives up the cost of operating collections of storage subsystems, such as enterprise SANs, which, as previously noted, already are virtualized connections between host servers and multiple storage arrays. Such fabric-based virtualization reduces the complexity and improves service levels by letting administrators manage the SAN and provision its resources more holistically and logically instead of having to wrestle with the specifics of each component system.
and features to be added to meet the needs of today and tomorrow, without ever requiring a power down. The advanced virtualization technology of ES XP simplifies data migration, volume management, and data replication in SAN environments, supporting a theoretical maximum of 16 to 247 petabytes² of heterogeneous storage, all of which can be managed from a single pane of glass³.
² 16 PB for the HP StorageWorks XP10000 Disk Array, 32 PB for the HP StorageWorks XP12000 Disk Array, and 247 PB for the HP StorageWorks XP24000 Disk Array, although a typical/reasonable maximum configuration for the XP10000 Disk Array would be more in the 200–500 TB range (depending greatly on workload type and intensity). For more details, see your HP storage representative.
³ While all the data can be centrally managed from the XP, the individual storage arrays still require their own management tools for configuration changes.
[Figure: Heterogeneous external arrays, including HP EVA and IBM storage, virtualized behind an XP]
Note: Due to performance considerations, OLTP applications and SATA external storage are not recommended uses for ES XP.
Figure 2 illustrates the fully redundant internal architecture of the XP storage array, which is designed with no single points of failure. There are two or more of every component in an XP system for redundancy (including the disk spindles and controller, not shown). One half of each component pair is located in each of the two clusters. Each cluster can be powered by either of the AC power supplies. The Client Host Interface Processors (CHIPs) are XP components that contain the Fibre Channel ports. The orange lines show the external Fibre Channel connections to hosts and external disk arrays. One CHIP pair (16 Fibre Channel ports) is included in every XP10000 Disk Array; the other CHIP pair (an additional 32 Fibre Channel ports) is optional.
The central, highly available cache in the XP can be used to increase the write speed to external arrays by buffering write data in cache and using an ES XP cache de-staging mechanism to write to the external disk array. Depending on the needs of the application and the relative speed of the host compared to the speed of the external disk array connections, ES XP can be configured to use cache in an asynchronous way (in-order data buffering), or in a synchronous way where data is written to the external disk array before acknowledging completion to the host.
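The two cache modes can be sketched in a few lines of Python. This is an illustrative model only — the class and method names are invented for this sketch, not XP firmware — but it shows the essential difference: in the asynchronous mode the host is acknowledged before the external array sees the data, which is then de-staged in order.

```python
# Illustrative sketch only -- not XP firmware. Models acknowledging a host
# write after buffering in cache (de-staged later) versus after the external
# array confirms the write.

class ExternalArray:
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data  # in practice, a slow Fibre Channel round trip


class EsXpCache:
    def __init__(self, array, write_through):
        self.array = array
        self.write_through = write_through
        self.pending = []  # dirty cache entries awaiting de-stage, in order

    def host_write(self, lba, data):
        if self.write_through:
            self.array.write(lba, data)       # synchronous: array acks first
        else:
            self.pending.append((lba, data))  # asynchronous: buffer in cache
        return "ack"                          # completion returned to host

    def destage(self):
        # In-order de-staging of buffered writes to the external array.
        while self.pending:
            lba, data = self.pending.pop(0)
            self.array.write(lba, data)


array = ExternalArray()
cache = EsXpCache(array, write_through=False)
cache.host_write(0, b"a")
assert array.blocks == {}         # host was acked before the array saw the data
cache.destage()
assert array.blocks == {0: b"a"}
```

The asynchronous path is what lets a fast host avoid waiting on a slower external array connection, at the cost of dirty data residing in XP cache until de-stage completes.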
[Figure 2: Redundant XP architecture: Cluster 1 and Cluster 2, each with an included and an optional CHIP, cache, cache switch, shared memory, AC-DC power supply, and battery box; each CHIP carries Fibre Channel ports driven by microprocessor modules]
ES XP performance depends on the workload type and the number of CHIP microprocessors installed in the StorageWorks XP. Each CHIP microprocessor is capable of approximately 3⁴ KIOPS. Random workloads consist of approximately 60% reads/40% writes, with 8 KB of data per IO. Best performance is achieved by matching the number of StorageWorks XP ports connected to a device with the random IOPS performance desired/available from that device. Sequential workloads usually consist of more data (64 or 128 KB) per IO. Best performance is achieved by matching the required bandwidth of the device to the number of StorageWorks XP ports times 3 KIOPS per port times the IO size. Up to 16 PB of inactive data, such as archived data that is seldom accessed, can be virtualized by one StorageWorks XP10000 Disk Array. Best total performance is achieved with the same number of CHIP microprocessors facing the servers as facing the external disk arrays.
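These rules of thumb lend themselves to a quick back-of-envelope calculation. The sketch below is a hypothetical helper (the function names are not from any HP tool) that assumes the approximately 3 KIOPS-per-microprocessor figure quoted here; check the actual figures for your array generation with HP.

```python
# Back-of-envelope sizing sketch based on the whitepaper's rules of thumb.
import math

KIOPS_PER_MP = 3_000  # approx. random IOPS per CHIP microprocessor (assumed)

def ports_for_random(target_iops):
    """Active ports (one MP each) needed to carry a random workload."""
    return math.ceil(target_iops / KIOPS_PER_MP)

def sequential_bandwidth(ports, io_size_kb):
    """Approx. MB/s for a sequential workload: ports x 3 KIOPS x IO size."""
    return ports * KIOPS_PER_MP * io_size_kb / 1024

print(ports_for_random(10_000))      # 10 KIOPS random workload -> 4 ports
print(sequential_bandwidth(2, 128))  # 2 ports at 128 KB IOs -> 750.0 MB/s
```

The same arithmetic, applied symmetrically, gives the matching count of server-facing and array-facing microprocessors recommended above.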
⁴ While the microprocessors in the XP12000 Disk Array are capable of approximately 3 KIOPS, the microprocessors in the XP24000 Disk Array are capable of approximately 2x the XP12000 Disk Array value.
XP software
A full complement of management software tools is available with every StorageWorks XP. These software tools are licensed based on the capacity to be virtualized. A sampling of available software titles:
- HP StorageWorks Command View XP Advanced Edition: Provides centralized web-based management for ES XP and XP. It consists of a user-friendly GUI, SMI-S provider, host agents, and a command line interface (CLI).
- LUN Configuration and Security Manager XP: Adds and deletes paths, creates custom-sized volumes⁵, and configures LUN security to provide controlled, secure access to data stored behind an XP.
- Auto LUN XP: Enables manual data migration.
- Tiered Storage Manager XP: Enables data migration to match key user quality-of-service requirements to the storage attributes of ES XP controlled storage.
- Data Shredder XP: Can optionally be invoked after a successful Tiered Storage Manager data migration to delete the data on the migration source volume.
- HP StorageWorks Business Copy XP, Snapshot XP: Makes nearly instantaneous local full or space-efficient point-in-time copies or mirrors of data without ever interrupting online production.
- HP StorageWorks Continuous Access XP Synchronous/Asynchronous/Journal: Enables data mirroring between local and remote XP disk arrays.
- HP StorageWorks XP Disk/Cache Partition: Allows resources on a single XP to be divided into a number of distinct subsystems that are independently and securely managed.
XP terms
The following terms are important in understanding ES XP:
- CHIP pair: A printed circuit assembly pair used for data transfer between the external environment and ES XP cache. A CHIP pair contains multiple Fibre Channel ports, which are used to connect to hosts, external storage devices, and remote sites by way of SANs or by direct connect.
- Cluster (CL): An isolated portion of ES XP cache, Fibre Channel CHIP blades, and so on, such that hardware failures within one cluster will not affect the continued operation of the other cluster.
- Device group: An LU group containing one or more Business Copy XP or Continuous Access XP pairs, such that operations applied to the group affect every group LU.
- External disk array: A disk array connected to ES XP.
- External device group: A grouping of one or more external disk array volumes.
- External port: An XP port configured to connect to an external disk array, like a host bus adapter (HBA) on a server.
- LU: Logical Unit, or disk volume.
- LDEV: An XP logical device, typically manifesting in a particular emulation type (for example, an LDEV might be an OPEN-V LDEV in 3D+1P RAID 5). After an LDEV is registered within ES XP, it can be mapped directly to a host group and host-viewable LU, aggregated with other LDEVs to create a larger LU (Logical Unit Size Expansion [LUSE]), or carved up to create a smaller LDEV and LU.
- Microprocessor (MP): The solid-state engine responsible for the functionality and performance of one or more Fibre Channel ports.
⁵ For best performance, map external volumes into the XP at their native size. Because many mirroring products (for example, Business Copy XP) require an exact match between volumes, EVA volumes should ideally be sized in multiples of 15,360 MB (31,457,280 blocks, 16,384 cylinders) to avoid the performance ramifications of using CVS with external storage.
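The sizing quantum in this footnote can be derived from its block count, which makes a useful sanity check when rounding a requested EVA volume size. The helper below is purely illustrative (the names are invented for this sketch):

```python
# Sketch: derive the EVA sizing quantum from its block count, then round a
# requested volume size up to a whole multiple of it.
import math

BLOCK = 512                   # bytes per SCSI block
QUANTUM_BLOCKS = 31_457_280   # block count quoted in the footnote
QUANTUM_MB = QUANTUM_BLOCKS * BLOCK // (1024 * 1024)

def eva_volume_mb(requested_mb):
    """Round a requested EVA volume size up to the sizing quantum."""
    return math.ceil(requested_mb / QUANTUM_MB) * QUANTUM_MB

print(QUANTUM_MB)             # 15360 MB per quantum
print(eva_volume_mb(50_000))  # a 50,000 MB request -> 61440 MB (4 quanta)
```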
- Pvol: The primary or production-side volume of a Business Copy XP or Continuous Access XP pair.
- Svol: The secondary or mirror volume of a Business Copy XP or Continuous Access XP pair.
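The LDEV-to-LU relationships in the terms above — direct mapping, LUSE aggregation, and carving — can be illustrated with a trivial sketch. The functions are hypothetical simplifications (real LUSE and CVS operations have emulation-type and boundary constraints not modeled here):

```python
# Illustrative sketch of LDEV -> LU relationships: aggregate several LDEVs
# into one larger host-visible LU (LUSE), or carve one LDEV into smaller ones.

def luse(ldev_sizes_gb):
    """Aggregate several LDEVs into one larger host-visible LU."""
    return sum(ldev_sizes_gb)

def carve(ldev_size_gb, pieces):
    """Split one LDEV into equal smaller LDEVs (simplified even split)."""
    return [ldev_size_gb / pieces] * pieces

print(luse([36, 36, 36]))  # three 36 GB LDEVs -> one 108 GB LUSE LU
print(carve(36, 4))        # one 36 GB LDEV -> four 9.0 GB LDEVs
```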
(1) Use the external array's local management station and GUI to provision/expose LUs to the XP (select RAID type, sizes, and so on). Typically a one-time operation.
(2) The external LUs are discovered by the XP and mapped in as V/LDEVs, and assigned LUNs, host ports, and host groups. Typically a one-time operation.
(3) Day-to-day XP storage management operations (LUN provisioning, re-sizing, aggregating, and so on) can typically occur without further external storage changes. The XP adds its premium features to the external storage (which is treated as generic storage).
⁶ Or up to 56 storage-facing and 56 host-facing microprocessors (MPs) for the XP12000 Disk Array.
⁷ For instance, an HP StorageWorks Enterprise Virtual Array (EVA4000) could have one active and one inactive Fibre Channel path into each of eight EVA ports, to eight EVA LUs.
⁸ That is, one active and one passive path per disk array subsystem/frame.
An XP system using External Storage XP without remote replication

[Figure: Servers and a backup server attached to an XP virtualizing external arrays over 1–12 active storage paths]
StorageWorks XP using External Storage XP with remote replication

A minimum of four Fibre Channel paths are recommended for the Continuous Access links (two in each direction). A maximum of eight Fibre Channel ports may be used (four in each direction). Another best practice for a heavily write-biased workload in this configuration is to ensure that the Continuous Access XP Svol (remote disk array) is as fast as or faster than the Continuous Access Pvol (source disk array), so data does not needlessly accumulate in ES XP cache. The Figure 6 configuration also supports a remote RAID Manager command device on the Continuous Access XP RCU (remote site), which can communicate with the Continuous Access XP MCU (local site) command device for the purpose of remote pair (for example, Business Copy XP and Continuous Access XP) management.
[Figure 6: Local and remote XPs linked by four Continuous Access paths, with servers and 1–10 active storage paths at the local site and a backup server at the remote site]
Supported External Storage XP configuration topologies

[Figure: Supported topologies (a), (b), (d): hosts accessing external array LUs through the XP as regular or CVS XP LUs, external LVM/SW-RAID aggregated volumes (which can use more than one XP), or XP LUSE aggregates; XP cache on or off; Business Copy pvol and svol (part of one or both can be in XP cache); across SAA, A/P, or AAA controllers; with or without a Nishan extender pair. Path lines shown are logical paths; most configurations require at least 2 physical paths.]
[Figure: Supported topology (f): Continuous Access XP between two XPs, with the pvol and svol mapped to external array LUs (supported for all arrays but MSA). Path lines shown are logical paths; most configurations require at least 2 physical paths.]
Basic building block

[Figure 9: An application host connected through FC switches to XP CL-1 and CL-2, each CHIP microprocessor (mp) having one active and one inactive failover port, down to external LUs behind controller A and controller B. For high availability, use a matched set of one host-facing CHIP microprocessor and one storage-facing CHIP microprocessor, one Fibre Channel path, one external array port, and one external LU. Good for light random or heavy sequential workloads; scale up from this basic configuration based on the type and intensity of the workload.]
For optimum performance, ES XP should be configured with up to a maximum of 12 (XP10000 Disk Array) host-facing and 12 external disk array-facing CHIP microprocessors, each actively using only one of its two ports. The diagram in Figure 9 illustrates (on the external disk array side) how each CHIP microprocessor should be connected to one active and one non-active (failover) path by way of its two Fibre Channel ports. The external array LU should consist of enough disk spindles (at typically less than 150 IOPS per spindle) to keep up with the potential throughput of the active Fibre Channel link (for example, 20 spindles).
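The spindle-count example above is simple arithmetic, and can be checked directly. The figures below assume roughly 3,000 random IOPS per active link (per the earlier rule of thumb) and under 150 IOPS per spindle:

```python
# Rule-of-thumb check: spindles needed behind one active Fibre Channel link.
import math

IOPS_PER_SPINDLE = 150  # keep per-spindle random load under this
LINK_IOPS = 3_000       # approx. random IOPS one active link/MP can drive

spindles = math.ceil(LINK_IOPS / IOPS_PER_SPINDLE)
print(spindles)  # 20 spindles, matching the example in the text
```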
[Figure: A fully scaled configuration: hosts connected through FC switches to 12 host-facing CHIP microprocessors, and 12 storage-facing CHIP microprocessors fanning out through FC switches to external arrays, split across XP CL-1 and CL-2]
When Continuous Access XP is used, best practice is to use at least two CHIP microprocessors from the host-facing side and two CHIP microprocessors from the external disk array-facing side for the bidirectional Continuous Access XP links. Doing this results in a maximum of 10 host-facing CHIP microprocessors and 10 external disk array-facing CHIP microprocessors.
If MSA external disk arrays are being considered:
- MSA disk arrays with two controllers are strongly recommended, with two paths connected to the XP. MSA disk arrays with only a single controller are not recommended.
- MSA disk arrays are not supported for use with Continuous Access (Pvols, Svols), and not recommended for use with Snapshot (Pvols or pool).
Failed external disk array port failover is handled automatically by the XP for all supported disk array classes:
- Symmetrical Active/Active controller disk arrays, like the HP XP/EMC DMX/EMC Symmetrix/HDS Lightning/Sun StorEdge 9900/IBM DS4000 series
- Asymmetrical Active/Active controller disk arrays, like the HP StorageWorks 4000/6000/8000 Enterprise Virtual Arrays (EVA4000/6000/8000), or the EMC CX/HDS Thunder series
- Active/Standby controller disk arrays, like the HP StorageWorks 3000/5000 Enterprise Virtual Arrays (EVA3000/5000), or the HP MSA series
Active load balancing to an external disk array volume by way of multiple paths will only occur for the symmetrical Active/Active controller class of arrays. For more details, see your HP storage representative.
Specific configurations
Array repurposing
The diagram in Figure 10 illustrates how an XP with ES XP may be permanently placed between a working host and array connection, without taking the application offline. Benefits:
- Redirect an application's access path from direct connect to access by way of an XP
- Preserve data integrity and application uptime during the process
MirrorDisk/UX and LVM are used with HP-UX; VxVM and VERITAS Volume Manager mirroring may be used for other operating systems.
Sequence:
- Time 0: Begin. Application running on a legacy array LU as LVM Lvol A mapped to LVM Pvol 1.
- Time 1: Create a SW RAID duplicate. LVM Lvol A can now be served by either LVM Pvol 1 or 2.
- Time 2: Break one side of the SW mirror. The other side carries on without interruption.
- Time 3: Re-establish the SW mirror by way of a second external LU.
- Time 4: Break the second SW mirror path and decommission the second external LU.
- Time 5: Application running on a legacy array LU by way of the XP, still using LVM Lvol A.
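The invariant behind this sequence — LVM Lvol A is never left without an active physical volume — can be sketched as a toy model. This is illustrative Python, not actual LVM or MirrorDisk/UX commands, and the volume labels are invented:

```python
# Toy model of the online repurposing sequence. At every step, Lvol A must
# be backed by at least one active physical volume.

lvol_a = set()  # the set of Pvols currently able to serve LVM Lvol A

def check():
    assert lvol_a, "Lvol A must always have an active Pvol"

lvol_a.add("pvol1:direct-LU-X")      # Time 0: direct legacy connection
check()
lvol_a.add("pvol2:xp-LU-Y")          # Time 1: SW mirror created via the XP
check()
lvol_a.discard("pvol1:direct-LU-X")  # Time 2: break the direct side
check()
lvol_a.add("pvol1:xp-LU-Z")          # Time 3: re-mirror via a second XP LU
check()
lvol_a.discard("pvol2:xp-LU-Y")      # Time 4: break and decommission
check()
print(lvol_a)                        # Time 5: served solely through the XP
```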
[Figure 10: Online repurposing from Time-0 to Time-5: LVM Pvol 1 (legacy array LU W/X, later XP LU Z) and LVM Pvol 2 (XP LU Y) transition through active, inactive, and unused states as the software mirror is created, broken, re-established through the XP, and finally collapsed]
Array repurposing can also take place offline to the application, as illustrated in Figure 11. Sequence:
- Time 0: Begin. Application running.
- Time 1: Shut down the application and remove the connection to the legacy array.
- Time 2: Re-establish the LU by way of ES XP as an external LU, now accessed by way of XP LU Y.
[Figure 11: Offline repurposing from Time-0 to Time-2: the application host's direct path to array LU X is removed and then re-established through the XP as LU Y]
Data migration across heterogeneous external disk array tiers

As the two diagrams in Figure 12 show, the Tiered Storage Manager (TSM) plug-in for Command View XP AE (or alternatively, Auto LUN XP by way of the remote Web Console) is a very useful tool for managing the online migration of data, either external-to-external or internal-to-external, by way of a graphical user interface (GUI). Data migrations can occur while the application is using the data, and can optionally be scheduled to occur automatically during a period of low storage demand. Low-utilization data can be moved to less expensive (slower) disks while maintaining the same host LU number and XP LDEV number. Use manual TSM/Auto LUN XP to migrate less frequently accessed data online from one external disk array tier or frame to another. Benefits:
- Improved $/GB for online access to older data
- Preservation of prior investments in legacy disk arrays
[Figure 12: Online tier migration through the XP: Tier-0/1 (for example, internal cache LUNs and internal XP spindle LUNs), Tier-2 (for example, external XP), Tier-3 (for example, external EVA with 15K rpm SCSI spindles), and Tier-4 (for example, external MSA with 7.5K rpm SATA spindles)]
Remote data replication

As illustrated in Figure 13, ES XP may be used to accomplish remote replication between two XPs. Continuous Access Asynchronous will often give higher application performance, as data replication occurs outside of host I/O cycles. Continuous Access Synchronous will keep the data on the two sites virtually consistent at all times.
[Figure 13: Remote replication: local and remote XPs (each with CL-1/CL-2 and host- and storage-facing CHIP microprocessors) connected through FC switches over Continuous Access XP links, with CA Svols on external arrays at the remote site]
Low-utilization data archive

The diagram in Figure 14 illustrates how ES XP may be used to back up data from disparate external disk arrays in which accesses will be infrequent, largely sequential, and very light duty. For this application, capacity is much more important than random or sequential performance.
[Figure 14: Low-utilization archive: a backup host and tape drive in front of an XP virtualizing many external MSA arrays, each presenting multiple LUs]
Microsoft Exchange
The diagram in Figure 15 defines a Microsoft Exchange e-mail module using an XP10000 Disk Array connected to EVA3000/5000 disk arrays. At less than one¹¹ I/O operation per second per user, ES XP would be a suitable match for 4,000, 8,000, or even 12,000 Microsoft Exchange users. The hardware requirement for each group of 1,000 users is a single 350-GB LU, consisting of enough¹² disk spindles to accommodate 800 IOPS of random access workload. Note from Figure 15 that the best practice is to dedicate an external disk array port and LU to a single active (and one inactive) Fibre Channel path.
Each module consists of:
- A 2-node host cluster
- Four 350-GB LUNs (of at least 6 spindles each)
Each of the 4 LUNs requires the ability to handle 1,000 Microsoft Exchange users: 0.8 IOPS max. per user normally, 0.4 IOPS max. per user during backup.
Not shown is the switch between the XP and the storage, enabling 2 Fibre Channel connections per EVA port. Also not shown are the internal EVA LU failover paths to the non-owning controller. Each EVA LU must consist of at least 6 spindles in order to provide the necessary 800 IOPS.
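The module arithmetic above can be captured in a small sizing helper. The function is hypothetical (not an HP tool), built only from the paper's rules of thumb: 0.8 IOPS per user, 1,000 users per 350-GB LU, and under 150 IOPS per spindle with an EVA minimum of 6 spindles:

```python
# Hedged sizing sketch for the Exchange module described in the text.
import math

IOPS_PER_USER = 0.8         # max per Exchange user (footnote 11)
USERS_PER_LU = 1_000
LU_SIZE_GB = 350
MAX_IOPS_PER_SPINDLE = 150  # keep per-spindle load under this

def module_plan(users):
    lus = math.ceil(users / USERS_PER_LU)
    iops_per_lu = USERS_PER_LU * IOPS_PER_USER
    spindles_per_lu = max(6, math.ceil(iops_per_lu / MAX_IOPS_PER_SPINDLE))
    return {"lus": lus, "capacity_gb": lus * LU_SIZE_GB,
            "iops_per_lu": iops_per_lu, "spindles_per_lu": spindles_per_lu}

print(module_plan(4_000))
# -> {'lus': 4, 'capacity_gb': 1400, 'iops_per_lu': 800.0, 'spindles_per_lu': 6}
```

Running it for 4,000 users reproduces the module in Figure 15: four 350-GB LUs of at least 6 spindles, each carrying 800 IOPS.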
[Figure 15: Three modules for 12,000 Exchange users using EVA disk arrays: an XP with 12 GB of cache in front of EVAs (controller A/controller B), each presenting 350-GB LUs, with per-path loads of 0.8–2.4 KIOPS]
¹¹ 0.8 I/Os per second, per user.
¹² For instance, eight (the minimum number of disk drives in an EVA disk group).
Oracle
The performance of ES XP in an Oracle environment depends on the application and workload to be placed on the external disk arrays. In general, ES XP is suitable for random access (OLTP/dB) environments with less than 3,000 8-KB¹³ IOPS per active Fibre Channel link. ES XP can support up to about 2.5 KIOPS per LU for an application on top of an Oracle database. Figure 16 shows the 12 external disk array-facing CHIP microprocessors fully consumed by three EVA3000/5000 disk arrays for use in a random access workload environment.
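As a quick worked example of this guidance (the 30 KIOPS workload is hypothetical; the per-link and per-LU ceilings are the ones quoted above):

```python
# Quick arithmetic from the Oracle guidance: active links and LUs needed
# for a hypothetical 30 KIOPS random (8 KB) OLTP workload.
import math

MAX_IOPS_PER_LINK = 3_000  # 8 KB random IOPS per active FC link
IOPS_PER_LU = 2_500        # approx. per-LU ceiling quoted in the text

workload_iops = 30_000
links = math.ceil(workload_iops / MAX_IOPS_PER_LINK)
lus = math.ceil(workload_iops / IOPS_PER_LU)
print(links, lus)  # 10 active links, 12 LUs
```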
[Figure 16: Oracle configuration: an XP in front of external EVA and FastT disk arrays (controller A/controller B) presenting 500-GB LUs, each active path carrying up to 2.5 KIOPS, with a tape drive for backup]
¹³ As a liberal example, a customer with a seldom-accessed deep archive (sequential workload) may connect each group of two CHIP MPs to one or more entire arrays. For more details, see your HP storage representative.
Partitioning
HP StorageWorks XP Disk/Cache Partition can be very useful for isolating different workloads and users on the same XP. For instance, partitioning is considered a best practice if ES XP is used for random access (OLTP/dB) and large-block sequential (backup/archive) workloads simultaneously. Likewise, partitioning is recommended as a best practice if disparate user groups (for example, finance and R&D) share the same XP. For many external storage applications, partitioning is recommended. A limited license (CLPR0-3 within SLPR0) is provided at no cost. For more details, see your HP storage representative.
[Figure: Partitioning: Finance's, Marketing's, and Manufacturing's hosts each using an isolated partition of the same XP]
Summary
External Storage XP delivers fabric-based storage virtualization based on the proven high-performance, high-availability XP disk array architecture. In a world where every business decision seems to trigger an IT event, ES XP helps enable an adaptive enterprise, one that can quickly capitalize on and manage change.
© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle is a registered US trademark of Oracle Corporation, Redwood City, California.

4AA0-6162ENW, Rev. 1, May 2007