
HP StorageWorks External Storage XP white paper

Contents

To the business application, everything is virtualized
Storage virtualization today
Fabric-based virtualization
HP StorageWorks External Storage XP
    Application transparent data mobility
    Simplified volume management and provisioning
    Heterogeneous local and remote data replication
External Storage XP details
    The redundant StorageWorks XP architecture
    Fibre Channel CHIP architecture
    XP software
    XP terms
External Storage XP configurations
    An XP system using External Storage XP without remote replication
    StorageWorks XP using External Storage XP with remote replication
    Supported External Storage XP configuration topologies
    Basic building block
    Specific configurations
        Array repurposing
        Data migration across heterogeneous external disk array tiers
        Remote data replication
        Low-utilization data archive
        Microsoft Exchange
        Oracle
    Basic rules of thumb
    Sizing External Storage XP configurations
    Partitioning
Summary
For more information

To the business application, everything is virtualized


Applications need resources. They do not care about servers or networks or disk arrays per se. To the application, all of these are simply resources intended to serve its needs. It does not care which physical server, network, or storage is being consumed (although the business might); it simply assumes the appropriate resources will be there when needed. In effect, as far as the application is concerned, all IT infrastructure resources, hardware and software, are already virtualized.

From the organization's standpoint, however, the resources may not be virtualized. Instead, each resource is painstakingly deployed, configured, provisioned, and managed. Furthermore, although the application may not care which particular storage resource it is using, the organization certainly does. By virtualizing its resources, the organization can improve efficiency and lower costs through better asset utilization and simplified administration. It can standardize management processes and hardware and, most importantly, match the physical resources, particularly storage, to the needs of the data and ultimately the business. Virtualization allows the organization, for example, to store data in ways that balance the application's and the data's needs in terms of availability, performance, and cost at the logical level, instead of physically connecting and reconnecting the right resources with the right applications. Data and applications requiring high performance, high availability, or other attributes can be logically connected to the resources that deliver those characteristics, while less important data can be connected to other resources that provide a better match between application needs and resource cost.

Virtualization, which is just another word for abstraction, masks the particulars of the physical resources behind logical objects. It also allows the organization to dynamically change the composition of the logical objects, in effect mixing and matching different physical resources and their attributes, while remaining transparent to the application. In short, virtualization enables the organization to achieve what the application ideally has wanted all along: transparent access to the most appropriate resources when and how it needs them.

In truth, virtualization need not stop with the physical storage, fabric, and server resources. All the management capabilities that present the infrastructure components, in fact the entire management path from the application to the physical device, can be virtualized. To the application, everything (physical devices, operating systems, software, management services, components of all sorts) is a transparent resource that exists to serve its needs.

Storage virtualization today


Storage virtualization first appeared in the 1970s in the form of mirroring, or volume shadowing. By 2006 it had evolved to encompass not only storage but servers and even networks. Functionally, virtualization is really an enabler: a host of technologies that allows organizations to reduce or remove the complexity of individual components within the IT infrastructure through abstraction, for the purpose of achieving efficiency and flexibility. Virtualization creates logical views of resources that are distinct from the underlying physical components. This, in turn, allows for resource encapsulation, which lets organizations aggregate and group the logical resources and capabilities in useful and meaningful ways. A mixture of storage devices with different performance and cost characteristics, for example, could be aggregated and grouped logically to create tiered storage. Virtualization provides the technological underpinnings that group and manage resources so that the services required by applications can be dynamically provisioned, much like the so-called on-demand IT utility. Virtualization itself can be viewed as a services architecture intended to transparently deliver infrastructure services (CPU, storage, networking) to the application.

Today, the industry is focusing on three main areas of storage virtualization: storage-based, network- or fabric-based, and server-based. Storage-based virtualization focuses on storage devices (disk, tape, and so on) within a single storage subsystem, and abstracts the physical differences in the storage devices so that the capacity can be aggregated, used, and managed as a single logical storage pool. Fabric-based virtualization performs similar functions but does the abstracting, aggregating, and logical pooling at the fabric level, thus encompassing multiple storage systems. Server-based virtualization isolates the physical differences in storage devices from the applications at the operating system and lower-level driver code on each server running such software, thus also encompassing multiple storage systems, but only one server.

Virtualization in any of the three areas ultimately leads to results that can be measured in terms of efficiency and simplified management. However, each area of virtualization offers specific advantages and drawbacks, mainly in terms of its scope (the amount and type of resources that are affected) and the granularity of the resulting logical view. Storage-based virtualization, for instance, is easily implemented and simple to manage, but its span of control is often limited to a single storage subsystem. Fabric-based virtualization, by comparison, gives you huge scalability and heterogeneous support, although implementations differ among manufacturers. Also, depending on the implementation, unique attributes of the subsystems being pooled may be lost in the fabric-based pool.

Fabric-based virtualization
Fabric-based virtualization offers the ability to span multiple heterogeneous storage subsystems. It raises the level of abstraction from the storage subsystem to the storage network. It enables a logical view of multiple storage subsystems connected to a storage area network (SAN). Also, the SAN itself inherently virtualizes the routing of data over multiple paths to reduce points of failure and improve quality of service, expedite zoning and partitioning, and facilitate the movement of data across long distances. In this way, fabric-based virtualization goes right to the heart of the cost-complexity paradox by removing the complexity that drives up the cost of operating collections of storage subsystems, such as enterprise SANs, which, as previously noted, already are virtualized connections between host servers and multiple storage arrays. Such fabric-based virtualization reduces the complexity and improves service levels by letting administrators manage the SAN and provision its resources more holistically and logically instead of having to wrestle with the specifics of each component system.

HP StorageWorks External Storage XP


HP StorageWorks External Storage XP (ES XP), running on an XP storage array, decreases the stress of everyday heterogeneous data management by reducing the amount of effort needed to manage a group of storage systems. Because ES XP data is accessed from a unified pool, and the amount of management needed to maintain the pool is relatively low, total cost of ownership may be reduced. As an added benefit, the pool can encompass a number of tiers of disk storage.

The internally clustered XP storage array delivers storage virtualization based on a proven high-performance, high-availability architecture with no single point of failure and boot-once online scalability. All XP components are redundant and hot-swappable, including processors, I/O interfaces, power supplies, batteries, and fans. Non-disruptive XP online [1] upgrades allow capacity and features to be added to meet the needs of today and tomorrow, without ever requiring a power down. The advanced virtualization technology of ES XP simplifies data migration, volume management, and data replication in SAN environments, supporting a theoretical maximum of 16 to 32 petabytes [2] of heterogeneous storage, all of which can be managed from a single pane of glass [3].

[1] While the XP firmware (FW) may be updated online, individual external arrays must typically be taken offline (due to their own limitations) when their FW is updated. If losing access to the data is not an option, consider host-level RAID 1, or use Tiered Storage Manager to migrate the data online to a spare array before the FW upgrade.

Application transparent data mobility


ES XP enables transparent data movement. Data being virtualized by ES XP can be transparently migrated anywhere within the virtualized storage pool. This enables application-transparent tiered storage data migration and data movement for array decommissioning, repurposing, upgrading, maintenance, data center moves or consolidation, implementation of Information Lifecycle Management (ILM) policies, and performance tuning. Benefits include:
• Data movement within an array or to a different array
• Non-disruptive data migration
• No SAN or server reconfiguration
• Fewer staff management hours
• Elimination of application downtime
• A more agile environment

Simplified volume management and provisioning


ES XP simplifies the management of virtual volumes by dynamically reallocating capacity and pooling heterogeneous volumes into a single reservoir, which results in increased capacity utilization. The ES XP "set and forget" approach to managing external disk arrays allows each external disk array to be completely configured one time. Once configured, all connected storage can be managed from ES XP as virtual volumes, with the capability of dynamically reallocating capacity and pooling heterogeneous volumes into a single reservoir. The result is reduced cost and complexity in provisioning heterogeneous storage: the heterogeneous storage appears as a single pool of capacity, managed from a single console, easing administrative tasks. Physical storage resources can be reallocated without a disruptive remounting of volumes on servers, and a single data management tool can be used to provision storage capacity from the heterogeneous storage pool.

Heterogeneous local and remote data replication


ES XP enables local and remote heterogeneous data replication for data distribution, data mining, backup and restore, and data validation testing. Data can be replicated between unlike storage systems both synchronously and asynchronously, locally and over wide-area distances.

[2] 16 PB for the HP StorageWorks XP10000 Disk Array, 32 PB for the HP StorageWorks XP12000 Disk Array, and 247 PB for the HP StorageWorks XP24000 Disk Array, although a typical/reasonable maximum configuration for the XP10000 Disk Array would be more in the 200–500 TB range (depending greatly on workload type and intensity). For more details, see your HP storage representative.

[3] While all the data can be centrally managed from the XP, the individual storage arrays still require their own management tools for configuration changes.

External Storage XP details


This document is intended to be an overview of the capabilities, possible configurations, and uses of ES XP. It is not intended to be a complete listing of all features, operating systems, or external devices supported by ES XP. For a current support matrix, see your HP representative. ES XP can be connected between hosts running Microsoft Windows, HP-UX, AIX, Solaris, NetWare, SGI IRIX, Tru64 UNIX, OpenVMS, and Linux and a wide variety of HP, IBM, EMC, and HDS disk arrays, as shown in Figure 1. Connections to hosts and disk arrays can be either direct Fibre Channel connections or Fibre Channel SAN fabric connections.

Figure 1. StorageWorks XP utilizing External Storage XP to virtualize storage

[Figure 1 shows hosts connected through redundant SAN fabrics to the HP XP, which in turn virtualizes external HP EVA, HP MSA, IBM, HDS, and EMC disk arrays.]

The redundant StorageWorks XP architecture


Note: For the purposes of consistency, the majority of this document describes the attributes of ES XP running on the XP10000 Disk Array (for example, number of ports, MPs, and so on). However, ES XP can also run on the StorageWorks XP24000/XP12000 Disk Arrays, which have significantly more port/MP/crossbar resources.

Note: Due to performance considerations, OLTP applications and SATA external storage are not recommended uses for ES XP.

Figure 2 illustrates the fully redundant internal architecture of the XP storage array, which is designed with no single point of failure. There are two or more of every component in an XP system for redundancy (including the disk spindles and controllers, which are not shown). One half of each component pair is located in each of the two clusters, and each cluster can be powered by either of the AC power supplies. The Client Host Interface Processors (CHIPs) are the XP components that contain the Fibre Channel ports. The orange lines in the figure show the external Fibre Channel connections to hosts and external disk arrays. One CHIP pair (16 Fibre Channel ports) is included in every XP10000 Disk Array; the other CHIP pair (an additional 32 Fibre Channel ports) is optional.

The central, highly available cache in the XP can be used to increase the write speed to external arrays by buffering write data in cache and using an ES XP cache de-staging mechanism to write to the external disk array. Depending on the needs of the application and the relative speed of the host compared to the speed of the external disk array connections, ES XP can be configured to use cache in an asynchronous way (in-order data buffering), or in a synchronous way where data is written to the external disk array before acknowledging completion to the host.
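The following minimal Python sketch (illustrative only, not HP firmware logic; names such as backend_write and CacheFront are hypothetical) contrasts the two behaviors just described: a write-back mode that buffers the write in cache, acknowledges the host immediately, and de-stages to the external array in order, versus a write-through mode that completes the external write before acknowledging.

```python
# Illustrative sketch of write-back vs. write-through caching (not HP firmware logic).
import queue
import threading
import time

def backend_write(block: bytes) -> None:
    """Stand-in for the comparatively slow write to the external disk array."""
    time.sleep(0.05)

class CacheFront:
    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self._pending: "queue.Queue[bytes]" = queue.Queue()   # XP cache acting as a write buffer
        threading.Thread(target=self._destage, daemon=True).start()

    def host_write(self, block: bytes) -> str:
        if self.synchronous:
            backend_write(block)        # write-through: external array has the data before the ack
        else:
            self._pending.put(block)    # write-back: buffer in cache and acknowledge immediately
        return "ack"

    def _destage(self) -> None:
        while True:                     # de-stage buffered writes to the external array in order
            backend_write(self._pending.get())
```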

Figure 2. Redundant StorageWorks XP Fibre Channel connection architecture

[Figure 2 shows the two XP clusters (Cluster 1 and Cluster 2), each with its own AC input power box, AC-DC power supplies, battery boxes, an included and an optional CHIP with Fibre Channel ports, shared memory, a cache switch, and cache.]

Fibre Channel CHIP architecture


Each Fibre Channel CHIP blade contains microprocessor modules, each of which contains two microprocessors and four Fibre Channel ports. Figure 3 illustrates the architecture of an XP Fibre Channel CHIP microprocessor module. Two Fibre Channel ports are typically serviced by a single CHIP microprocessor, except during firmware upgrades, when one CHIP microprocessor services all four Fibre Channel ports of the module while the other CHIP microprocessor in that module is being updated. The recommended best practice for ES XP configurations is to actively use only one port for each CHIP microprocessor. The other port associated with that CHIP microprocessor can be used for failover in case it needs to take over the workload from a port on the other blade of the pair. Given that, the XP10000 Disk Array can be thought of as having up to 12 host-to-storage flow-through data paths; that is, half of the 24 microprocessors facing hosts and half facing external storage.

Figure 3. The XP10000 Disk Array Fibre Channel CHIP architecture

[Figure 3 shows one blade of a 16-port CHIP pair: two microprocessor modules, each containing two microprocessors and four Fibre Channel ports.]


Fibre Channel CHIP                    Built-in 16-port pair    Optional 32-port pair
FC ports per blade                    8                        16
Microprocessor modules per blade      2                        4
FC ports per blade pair               16                       32
Microprocessors per blade pair        8                        16

ES XP performance depends on the workload type and the number of CHIP microprocessors installed in the StorageWorks XP. Each CHIP microprocessor is capable of approximately 3 KIOPS [4]. Random workloads consist of approximately 60% reads and 40% writes, with 8 KB of data per I/O; best performance is achieved by matching the number of StorageWorks XP ports connected to a device with the random IOPS performance desired from (and available from) that device. Sequential workloads usually carry more data (64 or 128 KB) per I/O; best performance is achieved by matching the required bandwidth of the device to the number of StorageWorks XP ports times 3 KIOPS per port times the I/O size. Up to 16 PB of inactive data, such as archived data that is seldom accessed, can be virtualized by one StorageWorks XP10000 Disk Array. Best total performance is achieved with the same number of CHIP microprocessors facing the servers as facing the external disk arrays.

[4] While the microprocessors in the XP12000 Disk Array are capable of approximately 3 KIOPS, the microprocessors in the XP24000 Disk Array are capable of approximately twice the XP12000 Disk Array value.
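As a rough illustration of the rules of thumb above (approximate figures taken from this document, not an HP sizing tool; the helper names are hypothetical), the port math can be sketched as:

```python
# Rule-of-thumb sizing sketch: ~3,000 random 8-KB IOPS per CHIP microprocessor/active port.
import math

KIOPS_PER_MP = 3_000          # approximate 8-KB random IOPS per CHIP microprocessor

def ports_for_random(target_iops: float) -> int:
    """Active XP ports (one per CHIP MP) needed to sustain a random workload."""
    return math.ceil(target_iops / KIOPS_PER_MP)

def sequential_bandwidth_mbps(active_ports: int, io_size_kb: int) -> float:
    """Approximate sequential bandwidth: ports x 3 KIOPS x I/O size."""
    return active_ports * KIOPS_PER_MP * io_size_kb / 1024

if __name__ == "__main__":
    print(ports_for_random(9_000))            # -> 3 active ports
    print(sequential_bandwidth_mbps(2, 64))   # -> ~375 MB/s from two active 64-KB sequential ports
```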

XP software
A full complement of management software tools is available with every StorageWorks XP. These software tools are licensed based on the capacity to be virtualized. A sampling of available software titles:
• HP StorageWorks Command View XP Advanced Edition: provides centralized web-based management for ES XP and XP. It consists of a user-friendly GUI, an SMI-S provider, host agents, and a command line interface (CLI).
• LUN Configuration and Security Manager XP: add and delete paths, create custom-sized volumes [5], and configure LUN security to provide controlled, secure access to data stored behind an XP.
• Auto LUN XP: enables manual data migration.
• Tiered Storage Manager XP: enables data migration to match key user quality-of-service requirements to the storage attributes of ES XP controlled storage.
• Data Shredder XP: can optionally be invoked after a successful Tiered Storage Manager data migration to delete the data on the migration source volume.
• HP StorageWorks Business Copy XP, Snapshot XP: makes nearly instantaneous local full or space-efficient point-in-time copies or mirrors of data without ever interrupting online production.
• HP StorageWorks Continuous Access XP Synchronous/Asynchronous/Journal: enables data mirroring between local and remote XP disk arrays.
• HP StorageWorks XP Disk/Cache Partition: allows resources on a single XP to be divided into a number of distinct subsystems that are independently and securely managed.

XP terms
The following terms are important in understanding ES XP:
• CHIP pair: a printed circuit assembly pair used for data transfer between the external environment and ES XP cache. A CHIP pair contains multiple Fibre Channel ports, which are used to connect to hosts, external storage devices, and remote sites by way of SANs or by direct connect.
• Cluster (CL): an isolated portion of ES XP cache, Fibre Channel CHIP blades, and so on, such that hardware failures within one cluster will not affect the continued operation of the other cluster.
• Device group: an LU group containing one or more Business Copy XP or Continuous Access XP pairs, such that operations applied to the group affect every LU in the group.
• External disk array: a disk array connected to ES XP.
• External device group: a grouping of one or more external disk array volumes.
• External port: an XP port configured to connect to an external disk array, much like a host bus adapter (HBA) on a server.
• LU: Logical Unit, or disk volume.
• LDEV: an XP logical device, typically manifesting in a particular emulation type (for example, an LDEV might be an OPEN-V LDEV in 3D+1P RAID 5). After an LDEV is registered within ES XP, it can be mapped directly to a host group and host-viewable LU, aggregated with other LDEVs to create a larger LU (Logical Unit Size Expansion [LUSE]), or carved up to create a smaller LDEV and LU.
• Microprocessor (MP): the solid-state engine responsible for the functionality and performance of one or more Fibre Channel ports.

[5] For best performance, map external volumes into the XP at their native size. Because the use of many mirroring products (for example, Business Copy XP) requires an exact size match between volumes, EVA volumes should ideally be sized in multiples of 15,360 MB (31,457,280 blocks, 16,384 cylinders) to avoid the performance ramifications of using CVS with external storage.

• Pvol: the primary or production-side volume of a Business Copy XP or Continuous Access XP pair.
• Svol: the secondary or mirror volume of a Business Copy XP or Continuous Access XP pair.

Figure 4. The Set and Forget configuration sequence

(1) Use the external array's local management station and GUI to provision and expose LUs to the XP (select RAID type, sizes, and so on). Typically a one-time operation.

(2) The external LUs are discovered by the XP and mapped in as V/LDEVs, and assigned LUNs, host ports, and host groups. Typically a one-time operation.

(3) Day-to-day XP storage management operations (LUN provisioning, re-sizing, aggregating, and so on) can typically occur without further external storage changes. The XP adds its premium features to the external storage (which is treated as generic storage).

External Storage XP configurations


An XP system using External Storage XP without remote replication
An XP using ES XP without remote data replication should be configured as shown in Figure 5. In this configuration, data travels vertically in the diagram between hosts and virtualized storage. Maximum XP10000 ES XP performance is achieved by configuring up to 12 [6] active paths between the hosts and the XP10000 Disk Array, with one CHIP microprocessor servicing each path, and up to 12 active paths between the XP10000 Disk Array and the external disk arrays, with one CHIP microprocessor servicing each path. The best practice for configuring the number of active Fibre Channel paths per external disk array varies by workload type. A workload containing a significant random-access component (for example, OLTP/database) can be configured with one active and one inactive Fibre Channel path per external disk array port and LU [7]. For a predominantly large-block sequential workload (where only one LU will typically be accessed at a time), two active and two passive paths [8] to the entire external disk array may allow for maximum storage connectivity.

[6] Or up to 56 storage-facing and 56 host-facing microprocessors (MPs) for the XP12000 Disk Array.
[7] For instance, an HP StorageWorks Enterprise Virtual Array (EVA4000) could have one active and one inactive Fibre Channel path into each of eight EVA ports, to eight EVA LUs.
[8] That is, one active and one passive path per disk array subsystem/frame.

Figure 5. StorageWorks XP10000 Disk Array using ES XP without remote replication

[Figure 5 shows servers and a backup server connected to the XP over 1 to 12 active host paths, and the XP connected to heterogeneous disk arrays over 1 to 12 active storage paths.]

StorageWorks XP using External Storage XP with remote replication


The XP can replicate data to another XP disk array located at a remote site, either synchronously or asynchronously. An XP using ES XP with remote data replication should be configured as shown in Figure 6. In this configuration, data not only travels vertically in the diagram between hosts and virtualized storage, but may also travel across between sites. The best practice for configuring an XP10000 Disk Array with ES XP for maximum performance with remote data replication is to configure up to 10 active paths between the hosts and ES XP, with one CHIP microprocessor servicing each path, and up to 10 active paths between ES XP and the external disk arrays, with one CHIP microprocessor servicing each path. The remaining four Fibre Channel ports are used for the Continuous Access XP links.


Four Fibre Channel paths are recommended for the Continuous Access links at a minimum (two in each direction); a maximum of eight Fibre Channel ports may be used (four in each direction). Another best practice for a heavily write-biased workload in this configuration is to ensure that the Continuous Access XP Svol (on the remote disk array) is as fast as or faster than the Continuous Access XP Pvol (on the source disk array), so data does not needlessly accumulate in ES XP cache. The Figure 6 configuration also supports a remote RAID Manager command device on the Continuous Access XP RCU (remote site), which can communicate with the Continuous Access XP MCU (local site) command device for the purpose of remote pair management (for example, Business Copy XP and Continuous Access XP).

Figure 6. StorageWorks XP10000 Disk Array using ES XP with remote replication

[Figure 6 shows servers and a backup server at two sites, each site's XP reached over 1 to 10 active host paths and connected to its heterogeneous disk arrays over 1 to 10 active storage paths, with four Continuous Access paths linking the two XPs.]

Supported External Storage XP configuration topologies


Diagrams (a) through (f) in Figure 7 describe the various ways in which an XP may be configured in relationship to hosts and external disk arrays. Although only a single host is shown, ES XP is capable of connecting to many hosts. For best performance, configuration (b) should be avoided (or at least restricted to a LUSE configuration composed of four or fewer LDEVs). Configuration (d) is not recommended for Snapshot Pvols or Snapshot pools on Modular Smart Array (MSA) arrays. Not shown is the fact that a Nishan FC-IP-FC extender may be used to locate the external array up to 1,000 km from the XP.


Figure 7. Valid ES XP topologies

[Figure 7 shows six valid ES XP topologies, (a) through (f): a host accessing external array LUs through the XP; a host aggregating external LUs with LVM and/or software RAID (possibly through more than one XP); an XP LUSE volume aggregating external LUs; regular or CVS LUs, with XP cache on or off, serving Business Copy/Snapshot Pvols and Svols on external storage (symmetric active/active, active/passive, or asymmetric active/active controllers, with or without a Nishan extender pair); Continuous Access XP with external LUs (supported for all arrays but MSA); and direct host access to an LU not involved with the XP. Note: the path lines shown are logical paths; most configurations require at least two physical paths.]


Basic building block


The ES XP basic building block is shown in Figure 8. This building block consists of a matched set of:
• One host-facing Fibre Channel CHIP microprocessor using only one active port (the inactive port is reserved for failover)
• One external disk array-facing Fibre Channel CHIP microprocessor using only one active port (the inactive port is reserved for failover)
• One Fibre Channel link to one dedicated external disk array port and one LU

Figure 8. Basic building block

[Figure 8 shows an application host connected through redundant FC switches to XP clusters CL-1 and CL-2, and from there through FC switches to controller A and controller B of an external array (for instance, an EVA) presenting an LU. For high availability, use a matched set of one host-facing CHIP microprocessor and one storage-facing CHIP microprocessor, one Fibre Channel path, one external array port, and one external LU, with one active and one inactive failover port per MP. This building block is good for light random or heavy sequential workloads; scale up from it based on the type and intensity of the workload.]


For optimum performance, ES XP should be configured with up to a maximum of 12 (XP10000 Disk Array) host-facing and 12 external disk array-facing CHIP microprocessors, each actively using only one of its two ports. The diagram in Figure 9 illustrates (on the external disk array side) how each CHIP microprocessor should be connected to one active and one non-active (failover) path by way of its two Fibre Channel ports. The external array LU should consist of enough disk spindles (at typically less than 150 IOPS per spindle) to keep up with the potential throughput of the active Fibre Channel link (for example, 20 spindles).
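A quick sketch of that spindle arithmetic, using the approximate figures quoted above (illustrative only; the helper name is hypothetical, not an HP formula):

```python
# Hypothetical helper: spindles needed so an external LU can absorb the IOPS
# that one active Fibre Channel link (one CHIP MP) can drive into it.
import math

IOPS_PER_SPINDLE = 150          # typical per-spindle random IOPS quoted above
IOPS_PER_ACTIVE_LINK = 3_000    # approximate IOPS per CHIP microprocessor/active link

def spindles_for_link(link_iops: int = IOPS_PER_ACTIVE_LINK) -> int:
    return math.ceil(link_iops / IOPS_PER_SPINDLE)

print(spindles_for_link())      # -> 20 spindles, matching the example in the text
```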

Figure 9. ES XP port configuration (XP10000 Disk Array)

[Figure 9 shows hosts attached to the host-facing CHIP microprocessors of XP clusters CL-1 and CL-2, and the storage-facing microprocessors connected through redundant FC switches to the external disk arrays, with each microprocessor actively using one of its two ports and keeping the other for failover.]


When Continuous Access XP is used, best practice is to use at least two CHIP microprocessors from the host-facing side and two CHIP microprocessors from the external disk array-facing side for the bidirectional Continuous Access XP links. Doing this results in a maximum of 10 host-facing CHIP microprocessors and 10 external disk array-facing CHIP microprocessors.

If MSA [9] external disk arrays are being considered:
• MSA disk arrays with two controllers are strongly recommended, with two paths connected to the XP.
• MSA disk arrays with only a single controller are not recommended.
• MSA disk arrays are not supported for use with Continuous Access (Pvols, Svols), and are not recommended for use with Snapshot (Pvols or pool).

Failed external disk array port failover is handled automatically by the XP for all supported disk array classes:
• Symmetrical Active/Active controller disk arrays like the HP XP, EMC DMX, EMC Symmetrix, HDS Lightning, Sun StorEdge 9900, and IBM DS4000 series
• Asymmetrical Active/Active controller disk arrays like the HP StorageWorks 4000/6000/8000 Enterprise Virtual Arrays (EVA4000/6000/8000), the EMC CX series, and the HDS Thunder series
• Active/Standby controller disk arrays like the HP StorageWorks 3000/5000 Enterprise Virtual Arrays (EVA3000/5000) and the HP MSA series

Active load balancing to an external disk array volume by way of multiple paths will only occur for the symmetrical Active/Active controller class of arrays. For more details, see your HP storage representative.

Specific configurations
Array repurposing

The diagram in Figure 10 illustrates how an XP with ES XP may be permanently placed between a working host and array connection, without taking the application offline. Benefits:
• Redirect an application's access path from direct connect to access by way of an XP
• Preserve data integrity and application uptime during the process
• Mirror/UX and LVM are used with HP-UX; VxVM and VERITAS Volume Manager mirroring may be used for other operating systems

[9] Consider a re-marketed EVA as a low-cost, high-functionality alternative to MSA.


Sequence:
• Time 0: Begin. Application running on a legacy array LU as LVM Lvol A mapped to LVM Pvol 1.
• Time 1: Create a software RAID duplicate. LVM Lvol A can now be served by either LVM Pvol 1 or Pvol 2.
• Time 2: Break one side of the software mirror. The other side carries on without interruption.
• Time 3: Re-establish the software mirror by way of a second external LU.
• Time 4: Break the second software mirror path and decommission the second external LU.
• Time 5: Application running on the legacy array LU by way of the XP, still using LVM Lvol A.

Figure 10. Online LU reconfiguring from direct connect to virtualized


[Figure 10 shows the six steps, Time 0 through Time 5: the application keeps using LVM Lvol A through mirroring middleware (for example, Mirror/UX and LVM) while its storage path is mirrored and re-mirrored between the directly connected legacy array LU and LUs presented through the XP, ending with the legacy LU accessed by way of the XP.]


Array repurposing can also take place offline to the application, as illustrated in Figure 11. Sequence:
• Time 0: Begin. Application running.
• Time 1: Shut down the application and remove the connection to the legacy array.
• Time 2: Re-establish the LU by way of ES XP as an external LU, now accessed by way of XP LU Y.

Figure 11. Offline reconfiguring from direct connect to virtualized

[Figure 11 shows the host accessing LU X directly on the array at Time 0, the application shut down and the direct connection removed at Time 1, and the host accessing the same LU through the XP as LU Y at Time 2.]

Data migration across heterogeneous external disk array tiers

As the two diagrams in Figure 12 show, the Tiered Storage Manager (TSM) plug-in for Command View XP AE (or, alternatively, Auto LUN XP by way of the remote Web Console) is a very useful tool for managing the online migration of data, either from external to external or from internal to external, by way of a graphical user interface (GUI). Data migrations can occur while the application is using the data, and can optionally be scheduled to occur automatically during a period of low storage demand. Low-utilization data can be moved to less expensive (slower) disks while maintaining the same host LU number and XP LDEV number. Use manual TSM or Auto LUN XP to migrate less frequently accessed data online from one external disk array tier or frame to another. Benefits:
• Improved $/GB for online access to older data
• Preservation of prior investments in legacy disk arrays


Figure 12. Migrate or replicate LUs across internal/external tiers or arrays

[Figure 12 shows a TSM client (by way of the web console) directing migration or replication of a host's data between 15K rpm SCSI and 7.5K rpm SATA storage, both across tiers within one array and across domains (arrays), all behind the XP. A second diagram illustrates seasonal migration across internal and external storage tiers: as user demand swings between high, medium, and low over the year (a hypothetical example for the U.S. income tax season), data is migrated between Tier 0/1 (for example, internal cache LUNs and internal XP spindle LUNs), Tier 2 (for example, an external XP), Tier 3 (for example, an external EVA), and Tier 4 (for example, an external MSA).]


Remote data replication

As illustrated in Figure 13, ES XP may be used to accomplish remote replication between two XPs. Continuous Access XP Asynchronous will often [10] give higher application performance because data replication occurs outside of host I/O cycles. Continuous Access XP Synchronous will keep the data on the two sites virtually consistent at all times.

Figure 13. Remote data replication

[Figure 13 shows hosts at Site 1 and a backup server at Site 2, each site's XP (clusters CL-1 and CL-2) connected through redundant FC switches to external arrays holding Continuous Access Pvols, Svols, and Business Copy volumes, with Continuous Access XP links between the two XPs.]

[10] Depending on the distance.


Low-utilization data archive

The diagram in Figure 14 illustrates how ES XP may be used to back up data from disparate external disk arrays in which accesses will be infrequent, largely of a sequential nature, and very light duty. For this application, capacity is much more important than random or sequential performance.

Figure 14. Low utilization archive

[Figure 14 shows a backup host and tape connected to an XP10000 with 24+ GB of cache, which virtualizes LUs from a large number of MSA arrays, EVA3000/5000 (active/passive) arrays, and EVA4000/6000/8000 (asymmetric active/active) arrays.]


Microsoft Exchange

The diagram in Figure 15 defines a Microsoft Exchange email module using an XP10000 Disk Array connected to EVA3000/5000 disk arrays. At less than one [11] I/O operation per second per user, ES XP would be a suitable match for 4,000, 8,000, or even 12,000 Microsoft Exchange users. The hardware requirement for each group of 1,000 users is a single 350-GB LU consisting of enough [12] disk spindles to accommodate 800 IOPS of random-access workload. Note from Figure 15 that the best practice is to dedicate an external disk array port and LU to a single active (and one inactive) Fibre Channel path.
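The arithmetic behind the module sizing can be sketched as follows (a rough illustration using the per-user and per-spindle figures from this document; not an HP Exchange sizing tool, and the helper name is hypothetical):

```python
# Rough Exchange-module arithmetic: 0.8 IOPS per user, 1,000 users per 350-GB LU,
# ~150 random IOPS per spindle, and the EVA disk-group minimum of 8 drives (note 12).
import math

IOPS_PER_USER = 0.8
USERS_PER_LU = 1_000
IOPS_PER_SPINDLE = 150

def exchange_module(users: int) -> dict:
    lus = math.ceil(users / USERS_PER_LU)
    iops_per_lu = USERS_PER_LU * IOPS_PER_USER                            # 800 IOPS per LU
    spindles_per_lu = max(math.ceil(iops_per_lu / IOPS_PER_SPINDLE), 8)   # at least an 8-drive EVA disk group
    return {"LUs (350 GB each)": lus,
            "IOPS per LU": iops_per_lu,
            "spindles per LU": spindles_per_lu,
            "total IOPS": users * IOPS_PER_USER}

print(exchange_module(4_000))   # one 4,000-user module: 4 LUs at 800 IOPS each, 3,200 IOPS total
```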

Figure 15. Microsoft Exchange

[Figure 15 shows a basic 4,000-user Exchange module using EVA disk arrays and three such modules for 12,000 Exchange users (XP cache sizes of 8 GB and 12 GB are shown). Each module consists of a two-node host cluster and four 350-GB LUNs of at least six spindles each; each LUN must handle 1,000 Microsoft Exchange users at a maximum of 0.8 IOPS per user normally and 0.4 IOPS per user during backup, or 800 IOPS per LUN. Not shown are the switch between the XP and the storage (enabling two Fibre Channel connections per EVA port) and the internal EVA LU failover paths to the non-owning controller.]

[11] 0.8 I/Os per second, per user.
[12] For instance, eight (the minimum number of disk drives in an EVA disk group).


Oracle

The performance of ES XP in an Oracle environment depends on the application and workload to be placed on the external disk arrays. In general, ES XP is suitable for random-access (OLTP/database) environments with fewer than 3,000 8-KB [13] IOPS per active Fibre Channel link. ES XP can support up to about 2.5 KIOPS per LU for an application on top of an Oracle database. Figure 16 shows the 12 external disk array-facing CHIP microprocessors fully consumed by three EVA3000/5000 disk arrays for use in a random-access workload environment.
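A back-of-the-envelope check of the spindle count behind that 2.5 KIOPS figure, assuming roughly 120 to 150 random IOPS per spindle (the lower bound is an assumption; the document itself quotes "typically less than 150 IOPS per spindle"):

```python
# Sketch only, not an HP guideline calculator: spindles per external LU for ~2.5 KIOPS.
import math

TARGET_IOPS_PER_LU = 2_500
IOPS_PER_SPINDLE_RANGE = (120, 150)   # assumed per-spindle random IOPS range

spindles = [math.ceil(TARGET_IOPS_PER_LU / iops) for iops in IOPS_PER_SPINDLE_RANGE]
print(spindles)   # -> [21, 17], consistent with the 17 to 21 spindles cited in Figure 16
```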

Figure 16. Oracle


Application Backup Server

Oracle host XP10000 with 12GB Cache EVA


500GB 500GB 500GB 500GB

Tape
2.5 KIOPS 2.5 KIOPS

2.5 KIOPS

2.5 KIOPS

2.5 KIOPS

2.5 KIOPS

2.5 KIOPS

2.5 KIOPS

EVA
500GB 500GB 500GB 500GB

2.5 KIOPS

2.5 KIOPS

FastT
500GB 500GB 500GB 500GB

2.5 KIOPS

2.5 KIOPS

controller A

controller B

controller A

controller B

controller A

controller B

Can be either two EVA3000/5000s or one EVA4000/6000/8000


In order for Each EVA LU to provide 2.5 KIOPS, it must consist of at least 17--21 spindles.

Basic rules of thumb


ES XP performs best under sequential disk-access workloads as compared to random-access workloads. Low-intensity random-access (OLTP/database) applications should be considered a secondary use for ES XP, with sequential-access (backup/archive) applications being the primary use. Medium- to high-intensity random-access workloads should not be considered appropriate for ES XP.

Sizing External Storage XP configurations


Given that:
• Each storage-facing CHIP MP is capable of about 3,000 IOPS (8-KB, 60/40 read/write),
• Each CHIP MP is strongly recommended to serve only a single active port, and
• Most single external array ports and LUNs are capable of at least 3,000 IOPS,
performance will peak with one CHIP MP and one XP port dedicated to one external array port and LUN. To the extent that peak performance is not required, customers may choose to deviate from this configuration. As a conservative example, a customer with a light random-access workload may choose to limit the configuration to 12 external LUNs, served by 12 CHIP MPs (each using a single, dedicated XP port and a single, dedicated external array port). Depending on the situation, some customers may choose to have a single CHIP MP and port serve two or more external LUNs.

[13] 60/40 read/write ratio.

As a liberal example, a customer with a seldom-accessed deep archive (sequential workload) may connect each group of two CHIP MPs to one or more entire arrays. For more details, see your HP storage representative.
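These rules of thumb can be rolled into a simple sizing sketch (hypothetical helper, not an HP tool) that converts a target random workload into the number of CHIP MP/port/LUN building blocks needed on one XP10000:

```python
# Hypothetical sizing helper built from the rules of thumb above: one building block =
# one CHIP MP + one dedicated XP port + one external array port/LUN, good for roughly
# 3,000 random 8-KB IOPS (60/40 read/write).
import math

BLOCK_IOPS = 3_000
MAX_BLOCKS_XP10000 = 12        # storage-facing MPs on an XP10000 (10 once CA XP links are configured)

def blocks_needed(workload_iops: float, continuous_access: bool = False) -> int:
    limit = 10 if continuous_access else MAX_BLOCKS_XP10000
    blocks = math.ceil(workload_iops / BLOCK_IOPS)
    if blocks > limit:
        raise ValueError(f"workload needs {blocks} building blocks, more than the {limit} available")
    return blocks

print(blocks_needed(7_500))                           # -> 3 MP/port/LUN building blocks
try:
    blocks_needed(33_000, continuous_access=True)
except ValueError as err:
    print(err)                                        # too heavy for the 10 blocks left with CA XP
```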

Partitioning
HP StorageWorks XP Disk/Cache Partition can be very useful to isolate different workloads and users on the same XP. For instance, partitioning is considered a best practice if ES XP is used for random-access (OLTP/database) and large-block sequential (backup/archive) workloads on the same XP simultaneously. Likewise, partitioning is recommended as a best practice if disparate user groups (for example, finance and R&D) share the same XP. For many external storage applications, partitioning is recommended. A limited license (CLPR0–3 within SLPR0) is provided at no cost. For more details, see your HP storage representative.

Figure 17. Partitioning

[Figure 17 shows Finance's host, Marketing's host, and Manufacturing's host each accessing its own isolated partition of the same XP.]

Summary
External Storage XP delivers fabric-based storage virtualization based on the proven high-performance, high-availability XP disk array architecture. In a world where every business decision seems to trigger an IT event, ES XP helps enable an adaptive enterprise: one that can quickly capitalize on and manage change.


For more information


For more information on HP StorageWorks XP, visit: www.hp.com/go/storage

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle is a registered US trademark of Oracle Corporation, Redwood City, California. 4AA0-6162ENW, Rev. 1, May 2007

