
A technical overview of

HP 3PAR Utility Storage

The world's most agile and efficient storage array

Peter Mattei, Senior Storage Consultant


November 2011

Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Table of contents
The IT Sprawl and how 3PAR can help
HP Storage & SAN Portfolio
Introducing the HP 3PAR Storage Servers
F-Class
T-Class
V-Class
HP 3PAR InForm OS Virtualization Concepts
HP 3PAR InForm Software and Features
Thin Technologies
Full and Virtual Copy
Remote Copy
Dynamic and Adaptive Optimization
Peer Motion
Virtual Domain
Virtual Lock
System Reporter
VMware Integration
Recovery Manager
The IT Sprawl
Creates challenges for mission-critical infrastructure

- 70% of resources are captive in operations and maintenance
- Business innovation is throttled to the remaining 30%
- The result: increased risk; inefficient and siloed; complicated and inflexible

Source: HP research
The world has changed
And storage must change with it

Drivers: explosive growth & new workloads; virtualization & automation; cloud & utility computing; infrastructure & technology shifts.

Customers tell us what storage is, and what it needs to be:
- Too complicated to manage -> Simple
- Expensive & hard to scale -> Scalable
- Isolated & disconnected -> Smart
- Inefficient & inflexible -> Self-Optimized


HP 3PAR Industry Leadership
Best new technology in the market

- 3PAR Thin Provisioning: industry-leading technology to maximize storage utilization
- 3PAR Autonomic Storage Tiering: automatically optimizes using multiple classes of storage
- 3PAR Virtual Domains: multi-tenancy for service providers and private clouds
- 3PAR Dynamic Optimization: workload management and load balancing
- 3PAR Full Mesh Architecture: advanced shared-memory architecture


HP 3PAR History
Constant evolution

- May 1999: 3PAR founded with 5 employees
- December 2000: bring-up of the Gen 1 3PAR ASIC
- July 2001: 3PAR secures $100 million in third-round financing
- June 2002: 3PAR Utility Storage and Thin Provisioning launch in the US and Japan
- September 2002: general availability of the InServ S-Class Storage Server
- August 2005: introduction of the E-Class midrange Storage Server
- May: 3PAR introduces Dynamic Optimization and Recovery Manager
- November 2007: 3PAR IPO; introduction of Virtual Domains and iSCSI support
- September 2008: introduction of the T-Class with Gen3 ASIC - the first "Thin Built In" storage array
- April 2009: introduction of the F-Class, the first quad-controller midrange array
- November 2009: InForm OS 2.3.1 released, with many new features
- March 2010: introduction of Adaptive Optimization and Recovery Manager for VMware
- September 2010: 3PAR acquired by HP
- August 2011: introduction of the V-Class with Gen4 ASIC, InForm OS 3.1.1 and Peer Motion


HP 3PAR Leadership - Efficient
Thin, Optimized, Green

- Three tiers in one array: Tier 0 SSD, Tier 1 FC, Tier 2 Nearline
- Thin: reduce capacity requirements by at least 50%
- Optimized: tiering balances $/GB and $/IOPS
- Green: reduce power and cooling costs by at least 50%

HP 3PAR customers reduce TCO by 50%


HP 3PAR Leadership - Autonomic
Up fast, maintain service levels, respond to change quickly

- Up fast: 15 seconds to provision a LUN
- Maintain service levels: deliver high performance to all applications, even under failure scenarios
- Respond to change quickly: adapt quickly to the unpredictable

HP 3PAR customers reduce storage management burden by 90% compared to competitors' arrays
HP 3PAR Leadership - Multi-Tenant
Shared, Secure, Resilient

- Shared - massive consolidation: storage can be used across many different applications and lines of business
- Secure - virtual private array: secure segregation of storage while preserving the benefits of massive parallelism
- Resilient - ready for change: sustain and consolidate diverse or changing service levels without compromise

The Tier-1 storage for utility computing


The HP Storage Portfolio

Online: X1000/X3000, X5000, X9000, P2000, P4000, P6000 EVA, 3PAR, P9500 XP
Infrastructure: E5000 for Exchange; HP Networking (wired, wireless, data center, security & management); HP Networking enterprise switches; SAN connection portfolio: B-, C- & H-Series FC switches and directors
Nearline: RDX, tape drives & tape autoloaders; MSL tape libraries; EML tape libraries; ESL tape libraries; VLS virtual library systems; D2D Backup Systems
Software: Business Copy, Continuous Access, Data Protector Express, Storage Mirroring, Storage Array Software, Data Protector, Storage Essentials, Cluster Extension
Services: SupportPlus 24, Proactive Select, SAN Assessment, Proactive 24, Backup & Recovery, Critical Service, Entry Data Migration, SAN Implementation, Installation & Start-up, Data Migration, Storage Performance Analysis, Data Protection Remote Support, Consulting services (consolidation, virtualization, SAN design)


HP Storage Array Positioning

P2000 MSA - Storage Consolidation
- Architecture: dual controller; Connectivity: SAS, iSCSI, FC
- Performance: 30K random read IOPS; 1.5 GB/s sequential reads
- Sweet spot: SMB, enterprise consolidation/virtualization, server attach, video surveillance
- Capacity: 600GB - 192TB (6TB average)
- Key features: price/performance, controller choice, replication, server attach
- OS support: Windows, vSphere, Linux, OVMS, Mac OS X, Solaris, Hyper-V

P4000 LeftHand - Application Consolidation
- Architecture: scale-out cluster; Connectivity: iSCSI
- Performance: 35K random read IOPS; 2.6 GB/s sequential reads
- Sweet spot: ROBO, SMB, virtualized incl. VDI, Microsoft apps, BladeSystem SAN (P4800)
- Capacity: 7TB - 768TB (72TB average)
- Key features: all-inclusive software, multi-site DR included, virtualization, VM integration, Virtual SAN Appliance
- OS support: vSphere, Windows, Linux, HP-UX, Mac OS X, AIX, Solaris, XenServer

P6000 EVA - Virtual IT
- Architecture: dual controller; Connectivity: FC, iSCSI, FCoE
- Performance: 55K random read IOPS; 1.7 GB/s sequential reads
- Sweet spot: ROBO and enterprise - Microsoft, virtualized, OLTP
- Capacity: 2TB - 480TB (36TB average)
- Key features: ease of use and simplicity, integration/compatibility, multi-site failover, management
- OS support: Windows, VMware, Linux, OVMS, Mac OS X, HP-UX, AIX, Solaris

3PAR - Utility Storage
- Architecture: mesh-active cluster; Connectivity: iSCSI, FC, (FCoE)
- Performance: > 400K random IOPS; > 10 GB/s sequential reads
- Sweet spot: enterprise and service providers, utilities, cloud, virtualized environments, OLTP, mixed workloads
- Capacity: 5TB - 1600TB (120TB average)
- Key features: multi-tenancy, efficiency (thin provisioning), performance, autonomic tiering and management
- OS support: Windows, Linux, HP-UX, vSphere, AIX, Solaris

P9500 - Mission Critical Consolidation
- Architecture: fully redundant; Connectivity: FC, FCoE
- Performance: > 300K random IOPS; > 10 GB/s sequential reads
- Sweet spot: large enterprise - mission critical with extreme availability, virtualized environments, multi-site DR
- Capacity: 10TB - 2000TB (150TB average)
- Key features: constant data availability, heterogeneous virtualization, multi-site disaster recovery, application QoS (APEX), Smart Tiers
- OS support: all major OSs, including mainframe and NonStop


B-Series SAN Portfolio
Brocade switch, director, HBA and software family

Directors
- SN8000B 8-slot: 32 - 384 16Gb FC ports, 2.11Tb ICL bandwidth
- SN8000B 4-slot: 32 - 192 16Gb FC ports, 1Tb ICL bandwidth
- DC SAN Backbone Director: 32 - 512 8Gb FC ports + 4x 128Gb ICL
- DC04 SAN Director: 32 - 256 8Gb FC ports + 4x 64Gb ICL

Director blades
- 16Gb FC: 32 & 48 port
- 8Gb FC & FICON: 16, 32, 48 & 64 port
- 10/24 FCoE; DC Encryption; MP Router / MP Extension (16 FC + 2 IP ports)

Switches
- SN6000B FC Switch: 24 - 48 16Gb ports
- 8/80 SAN Switch: 48 - 80 8Gb ports
- 8/40 SAN Switch: 24 - 40 8Gb ports
- 8/8 & 8/24 SAN Switch: 8 - 24 8Gb ports
- Integrated 8Gb SAN Switch for HP EVA4400
- 8Gb SAN Switch for HP c-Class BladeSystem
- 1606 Extension SAN Switch: FC & GbE
- HP 2408 CEE ToR Switch (24x 10Gb CEE + 8x 8Gb FC ports)
- HP 400 MP-Router (16x 4Gb FC + 2x GbE IP ports)
- Encryption Switch (32x 8Gb FC ports)

Host Bus Adapters: 4Gb/s and 8Gb/s, single and dual port
Management: Data Center Fabric Manager, with enhanced capabilities
C-Series SAN Portfolio
Cisco MDS9000 and Nexus 5000 family

- Management: Cisco Fabric Manager (Fabric Manager and Fabric Manager Server Package, enhanced capabilities)
- Embedded OS: Cisco NX-OS
- FC switches: SN6000C (MDS 9148), MDS 9124e c-Class switch, MDS 9124, MDS 9134
- MDS9000 multiprotocol switches and directors: MDS 9222i, MDS 9506, MDS 9509, MDS 9513
- Nexus DCE/CEE ToR switches: Nexus 5010 (20 - 28 ports), Nexus 5020 (40 - 56 ports), plus Nexus expansion modules
- MDS9000 blades: Supervisor 2; 12-, 24- and 48-port 4Gb FC; 24- and 48-port 8Gb FC; 4-port 10Gb FC; 18/4 IP storage services blade; SSM virtualization blade; 10Gb Ethernet
Announcement of 23 August 2011
New Hardware

New HP 3PAR top models P10000 V400 and V800
- Higher performance: 1.5 to 2 times the T-Class
- New SPC-1 performance world record of 450,213 IOPS
- Higher capacities: 2 times the T-Class (V400: 800TB; V800: 1600TB)
- Higher number of drives: 1.5 times the T-Class (V400: 960 disks; V800: 1920 disks)
- New, faster Gen4 ASIC - now 2 per node
- PCIe bus architecture provides higher bandwidth and resilience
- 8Gb FC ports for higher I/O performance
- Chunklet size increased to 1GB to address future higher capacities
- T10 DIF for increased data resilience


Announcement of 23 August 2011
New InForm OS and Features

New InForm OS 3.1.1 for F-, T- and V-Class
- 64-bit architecture
- Remote Copy enhancements: Thin Remote Copy reduces the initial copy size; more FC Remote Copy links (up to 4, from 2)
- Firmware upgrade enhancements: all upgrades are now node by node; Remote Copy groups can now stay online during firmware upgrades
- New additional Virtual Domain user roles
- More granular 16KB Thin Provisioning space reclamation
- VMware enhancements: automated VM space reclamation (T10 compliant); VASA support

Peer Motion for F-, T- and V-Class
- Allows transparent tiering and data migration between F-, T- and V-Class systems

New license bundles
- Thin Suite for F-, T- and V-Class
- Optimization Suite for V-Class


HP 3PAR InServ Storage Servers
Same OS, same management console, same replication software

- Controller nodes: F200: 2; F400: 2-4; T400: 2-4; T800: 2-8; V400: 2-4; V800: 2-8
- Fibre Channel host ports: F200: 0-12; F400: 0-24; T400: 0-48; T800: 0-96; V400: 0-96; V800: 0-192
- Optional 1Gb iSCSI ports: F200: 0-8; F400: 0-16; T400: 0-16; T800: 0-32; V400/V800: n/a
- Optional 10Gb iSCSI ports 3): F-/T-Class: n/a; V400: 0-32; V800: 0-32
- Optional 10Gb FCoE ports 3): F-/T-Class: n/a; V400: 0-32; V800: 0-32
- Built-in IP Remote Copy ports: F200: 2; all other models: 2-4
- Control cache (GB): F200: 8; F400: 8-16; T400: 8-16; T800: 8-32; V400: 32-64; V800: 64-256
- Data cache (GB): F200: 12; F400: 12-24; T400: 24-48; T800: 24-96; V400: 64-128; V800: 128-512
- Disk drives: F200: 16-192; F400: 16-384; T400: 16-640; T800: 16-1,280; V400: 16-960; V800: 16-1,920
- Drive types (all models): SSD 1) 100/200GB; FC 15k rpm 300/600GB; NL 7.2k rpm 2) 2TB
- Max capacity: F200: 128TB; F400: 384TB; T400: 400TB; T800: 800TB; V400: 800TB; V800: 1600TB
- Read throughput (MB/s): F200: 1,300; F400: 2,600; T400: 3,800; T800: 5,600; V400: 6,500; V800: 13,000
- IOPS (true back-end I/Os): F200: 34,400; F400: 76,800; T400: 120,000; T800: 240,000; V400: 180,000; V800: 360,000
- SPC-1 benchmark IOPS: F400: 93,050; T800: 224,990; V800: 450,213

1) Max. 32 SSDs per node pair
2) NL = Nearline = enterprise SATA
3) Planned 1H2012
Comparison between T- and V-Class

HP 3PAR T-Class vs. HP P10000 3PAR (V-Class):
- Bus architecture: PCI-X vs. PCIe
- CPUs: 2x dual-core per node vs. 2x quad-core per node
- ASICs: 1 per node vs. 2 per node
- Control cache: 4GB per node vs. 16GB (V400) / 32GB (V800) per node
- Data cache: 12GB per node vs. 32GB (V400) / 64GB (V800) per node
- I/O slots: 6 vs. 9
- FC host ports: 0-128 at 4Gb/s vs. 0-192 at 8Gb/s
- iSCSI host ports: 0-32 at 1Gb/s vs. 0-32 at 10Gb/s*
- FCoE host ports: n/a vs. 0-32 at 10Gb/s*
- Rack options: 2M HP 3PAR rack vs. 2M HP 3PAR rack or 3rd-party rack for the V400*
- Drives: 16-1280 vs. 16-1920
- Max capacity: 800TB vs. 1.6PB
- T10 DIF: n/a vs. supported

* planned
P10000 3PAR - bigger, faster, better, all round

Chart summary, V800 vs. T800:
- 1.5x disk drives: 1,920 vs. 1,280
- 6x total cache: 768GB vs. 128GB
- 2.5x throughput: 13,000 vs. 5,600 MB/s
- 2x raw capacity: 1,600TB vs. 800TB
- 1.5x host ports: 192 vs. 128
- 1.5x disk IOPS: 360,000 vs. 240,000


Scalable performance
SPC-1 IOPS HP 3PAR P10000 World Record

For details see: http://www.storageperformance.org/results/benchmark_results_spc1


Scalable performance without high cost
SPC-1 $/IOPS

For details see: http://www.storageperformance.org/results/benchmark_results_spc1


HP 3PAR - Four Simple Building Blocks
F-Class, T-Class, V-Class

Controller Nodes
- Performance/connectivity building block
- CPU, cache and 3PAR ASIC
- System management; RAID and thin calculations

Node Mid-Plane
- Cache-coherent interconnect
- Completely passive, encased in steel
- Defines scalability

Drive Chassis
- Capacity building block
- F-Class chassis: 3U, 16 disks; T- and V-Class chassis: 4U, 40 disks

Service Processor
- One 1U SVP per system, used only for service and monitoring
HP 3PAR Architectural Differentiation
Purpose-built on native virtualization

3PAR Software (F-, T- and V-Class; Optimization Suite on V-Class)
- Thin Suite: Thin Provisioning, Thin Conversion, Thin Persistence
- Optimization Suite: Dynamic Optimization, Adaptive Optimization, System Tuner
- System Reporter, Virtual Lock, Peer Motion, Virtual Domains, Remote Copy, Virtual Copy
- Additional HP software: Recovery Managers, Cluster Extension

3PAR InForm Operating System
- Full Copy, Thin Copy, Rapid Provisioning, LDAP, Access Guard
- Autonomic policy management: self-configuring, self-healing, self-optimizing, self-monitoring
- Fine-grained OS for utilization; instrumentation for performance; manageability
- Active Mesh; mixed workload support; fast RAID 5/6 and zero detection in the ASIC
HP 3PAR ASIC
Hardware-based for performance

- Thin Built In: zero detect
- Fast RAID 10, 50 & 60: rapid RAID rebuild, integrated XOR engine
- Tightly-coupled cluster: high-bandwidth, low-latency interconnect
- Mixed workload: independent metadata and data processing


Legacy vs. HP 3PAR Hardware Architecture
Traditional tradeoffs

Traditional modular storage
- Cost-efficient, but scalability and resiliency are limited by the dual-controller design

Traditional monolithic storage
- Scalable and resilient, but costly
- Does not meet multi-tenant requirements efficiently

HP 3PAR meshed and active
- Distributed controller functions with shared host and disk connectivity
- Cost-effective, scalable and resilient architecture
- Meets cloud-computing requirements for efficiency, multi-tenancy and autonomic management


HP 3PAR Hardware Architecture
Scale without tradeoffs

3PAR InSpire architecture diagrams: F-Class (left) and T-/V-Class (right), showing host connectivity, data cache, disk connectivity, the 3PAR ASICs and the passive backplane.

The result: a finely, massively, and automatically load-balanced cluster.


3PAR Mixed Workload Support
Multi-tenant performance

I/O processing in traditional storage:
- A unified processor and memory handle both the host interface and the disk interface
- When a heavy throughput workload and a heavy transaction workload are applied together, small I/Os wait for large I/Os to be processed

I/O processing in a 3PAR controller node:
- Control information (metadata) and data are pathed and processed separately: the control processor and its memory handle metadata, while the 3PAR ASIC and its memory move data between the host and disk interfaces
- Heavy throughput and heavy transaction workloads are sustained simultaneously


3PAR Adaptive Cache
Self-adapting cache: 50 to 100% for reads, 50 to 0% for writes

Chart: MBs of cache dedicated to writes per node vs. % read IOPS from the host, at host loads of 20K, 30K and 40K IOPS. As the read share of the host load grows, the write-cache share per node shrinks automatically.

Measured system: 2-node T800 with 320 15K FC disks and 12GB of data cache per node.
HP 3PAR High Availability
Spare Disk Drives vs. Distributed Sparing

- Traditional arrays: dedicated spare drives; the few-to-one rebuild causes hotspots and long rebuild exposure
- 3PAR InServ: spare chunklets are distributed across all drives; many-to-many parallel rebuilds complete in less time (see the sketch below)
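A back-of-the-envelope sketch of why many-to-many rebuilds finish sooner: the rewrite rate scales with the number of drives that hold spare chunklets. The drive capacity and per-spindle rebuild rate below are illustrative assumptions, not 3PAR specifications.

```python
# Hypothetical numbers for illustration only.
DRIVE_GB = 600           # failed-drive capacity to reconstruct
PER_SPINDLE_MBPS = 50    # sustained rebuild rate one spindle can absorb

def rebuild_hours(write_targets: int) -> float:
    """Time to rewrite one drive's worth of data when reconstructed
    chunklets can be written to `write_targets` drives in parallel."""
    rate_mbps = PER_SPINDLE_MBPS * write_targets
    return DRIVE_GB * 1024 / rate_mbps / 3600

print(f"few-to-one (1 spare drive):          {rebuild_hours(1):.2f} h")
print(f"many-to-many (spares on 100 drives): {rebuild_hours(100):.2f} h")
```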


HP 3PAR High Availability
Guaranteed Drive Shelf Availability

- Traditional arrays: RAID group members are confined to drive shelves, so shelf-dependent RAID means a shelf failure can leave data inaccessible
- 3PAR InServ: raidlet groups are laid out so that each RAID set spans shelves (shelf-independent RAID); data access is preserved despite a shelf failure


HP 3PAR High Availability
Write-Cache Re-Mirroring

- Traditional write-cache mirroring: when a controller fails, either the write cache is switched off for data security (write-through mode, poor performance) or you run the risk of losing write data
- 3PAR persistent write-cache mirroring: the write cache stays on because its mirror is redistributed to the remaining nodes; no write-through mode, so performance stays consistent
- Works with 4 and more nodes: F400, T400/T800, V400/V800


HP 3PAR Virtualization Advantage

Traditional array:
- Each RAID level requires dedicated disks
- Dedicated spare disks are required
- Limited single-LUN performance

HP 3PAR:
- All RAID levels can reside on the same disks
- Distributed sparing: no dedicated spare disks
- Built-in wide striping based on chunklets, so every LUN is spread across the physical disks


HP 3PAR F-Class InServ Components

Controller Nodes (4U)
- Performance and connectivity building block
- Adapter cards, added non-disruptively
- Each node runs an independent OS instance

Drive Chassis (3U)
- Capacity building block with drive magazines
- Added non-disruptively; industry-leading density

Full-mesh backplane
- Post-switch architecture; high performance, tightly coupled; completely passive

Service Processor (1U)
- Remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central

Racking: 3PAR 40U 19" cabinet or customer-provided rack
HP 3PAR F-Class Node
Configuration options

Per node:
- One quad-core 2.33GHz Xeon CPU
- One 3PAR Gen3 ASIC
- 4GB control and 6GB data cache
- Built-in I/O: 10/100/1000 Ethernet management port and RS-232; Gigabit Ethernet port for Remote Copy; 4x 4Gb/s FC ports (2 built-in FC disk ports, 2 built-in FC disk-or-host ports)
- Optional I/O: up to 4 more FC or iSCSI ports (mixable); slots 0 and 1 each take 2 FC ports for host, disk or FC replication, or 2 GbE iSCSI ports

Preferred slot usage (in order), depending on customer requirements:
- Disk connections: slot 2 (ports 1,2), then 0, 1 - higher back-end connectivity and performance
- Host connections: slot 2 (ports 3,4), then 1, 0 - higher front-end connectivity and performance
- RCFC connections: slot 1 or 0 (first node pair only) - enables FC-based Remote Copy
- iSCSI connections: slot 1, then 0 - adds iSCSI connectivity


HP 3PAR InSpire Architecture
F-Class controller node

- Quad-core Xeon 2.33 GHz control processor
- Control cache: 4GB (2x 2048MB DIMMs); data cache: 6GB (3x 2048MB DIMMs)
- SATA: local boot disk
- Multifunction controller: LAN, serial, SATA
- Gen3 ASIC: data movement, XOR RAID processing, built-in Thin Provisioning; high-speed data links
- I/O per node: 3 PCI-X buses, 2 PCI-X slots, plus an onboard 4-port FC HBA


F-Class DC3 Drive Chassis Configurations
Minimum F-Class configurations: non-daisy-chained or daisy-chained

- Minimum configuration: 2 drive chassis with 16 identical drives
- Minimum upgrade: 8 drives


F-Class DC3 Drive Chassis Configurations
Maximum 2-node F-Class configurations

- Non-daisy-chained: 96 drives
- Daisy-chained: 192 drives


Connectivity Options: Per F-Class Node Pair

Ports 0-1 | Ports 2-3 | Slot 1 | Slot 2 | FC host ports | iSCSI ports | RC FC ports | Drive chassis | Max disks
Disk      | Host      | -      | -      |  4 | - | - |  4 |  64
Disk      | Host      | Host   | -      |  8 | - | - |  4 |  64
Disk      | Host      | Host   | Host   | 12 | - | - |  4 |  64
Disk      | Host      | Host   | iSCSI  |  8 | 4 | - |  4 |  64
Disk      | Host      | iSCSI  | RCFC   |  4 | 4 | 2 |  4 |  64
Disk      | Host      | Disk   | -      |  4 | - | - |  8 | 128
Disk      | Host      | Disk   | Host   |  8 | - | - |  8 | 128
Disk      | Host      | Disk   | iSCSI  |  4 | 4 | - |  8 | 128
Disk      | Host      | Disk   | RCFC   |  4 | - | 2 |  8 | 128
Disk      | Host      | Disk   | Disk   |  4 | - | - | 12 | 192


HP 3PAR T-Class InServ Components

Drive Chassis (4U)
- Capacity building block with drive magazines
- Added non-disruptively; industry-leading density
- Built-in cable management

Controller Nodes (4U)
- Performance and connectivity building block
- Adapter cards, added non-disruptively
- Each node runs an independent OS instance

Full-mesh backplane
- Post-switch architecture; high performance, tightly coupled; completely passive

Service Processor (1U)
- Remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central

Racking: 3PAR 40U 19" cabinet


The 3PAR Evolution
Bus to switch to full-mesh progression

3PAR InServ full-mesh backplane
- High performance / low latency; passive circuit board
- Slots for controller nodes; links every controller (full mesh), single hop
- 1.6 GB/s per link (4 times 4Gb FC); 28 links in the T800

3PAR InServ T800 with 8 nodes
- 8 ASICs with 44.8 GB/s of bandwidth
- 16 Intel dual-core processors
- 32GB of control cache; 96GB of total data cache
- 24 I/O buses, totaling 19.2 GB/s of peak I/O bandwidth
- 123 GB/s peak memory bandwidth
- Pictured: a T800 with 8 nodes and 640 disks of the 1,280 maximum


HP 3PAR T-Class Controller Node

- 2 to 8 nodes per system, installed in pairs (PCI slots 0-5 per node)
- 2 Intel dual-core 2.33 GHz CPUs per node
- 16GB cache per node: 4GB control / 12GB data
- Gen3 ASIC: data movement, ThP and XOR RAID processing
- Ports per node: console port C0, Remote Copy Ethernet port E1, management Ethernet port E0
- Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots

Preferred slot usage (in order):
- 2 slots for 8 FC disk ports
- Up to 3 slots for FC host ports (up to 24 per node pair)
- 1 slot with 1 FC port used for Remote Copy (first node pair only)
- Up to 2 slots for 8x 1GbE iSCSI host ports


HP 3PAR InSpire Architecture
T-Class controller node

- 2 to 8 nodes per system; scalable performance per node
- Gen3 ASIC: data movement, XOR RAID processing, built-in Thin Provisioning
- 2 Intel dual-core 2.33 GHz CPUs for control processing
- SATA: local boot disk
- Max host-facing adapters: up to 3 (3 FC / 2 iSCSI)
- Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots


T-Class DC04 Drive Chassis

- From 2 to 10 drive magazines
- (1+1) redundant power supplies
- Redundant dual FC paths and redundant dual switches
- Each magazine always holds 4 disks of the same drive type
- Magazines within a chassis can hold different drive types, for example: 3 magazines of FC, 1 magazine of SSD, 6 magazines of SATA


T400 Configuration Examples

The minimum configuration is 2 nodes and 4 drive chassis with 2 magazines per chassis; with 600GB drives that is a starting configuration of 19.2TB of raw storage (see the sketch below).

Upgrades are done as columns of magazines down the drive chassis. In this example we added four 600GB magazines, i.e. 16 drives.

Once we fill up the original 4 drive chassis we have a choice: add 2 more nodes with drive chassis and disks, or just add 4 more drive chassis and some disks. Considerations:
- Do I need more IOPS performance? A node pair can drive 320 15K disks or 8 fully loaded chassis; it is virtually impossible to run out of CPU power with so few drives, and only SSDs may hit node IOPS and CPU limits.
- Do I need more bandwidth? A node's bandwidth can be reached with far fewer resources; adding nodes increases overall bandwidth.

* The diagram is not intended to show all components in the 2M cabinet, but rather how controllers and drive chassis scale. Controllers and drive chassis are populated from bottom to top.
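As a quick check of the slide's arithmetic, a few lines of Python (illustrative only):

```python
# Minimum T400 starting configuration from the slide.
chassis = 4
magazines_per_chassis = 2
drives_per_magazine = 4
drive_tb = 0.6  # 600 GB FC drives

drives = chassis * magazines_per_chassis * drives_per_magazine
print(f"{drives} drives -> {drives * drive_tb:.1f} TB raw")  # 32 drives -> 19.2 TB raw
```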
T400 Configuration Examples

How do we grow? After looking at the performance requirements, it is decided that adding capacity to the existing nodes is the best option, offering a good balance of capacity and performance. We decide that the next upgrade should fill out the first two nodes.

The upgrade after that requires additional controller nodes, drive chassis and drive magazines. The minimum such upgrade is: 2 controller nodes, 4 drive chassis, 8 drive magazines.

Just because you can do something doesn't mean it is a good idea. This upgrade makes the node pairs very unbalanced: over 50,000 IOPS on 2 nodes and 6,400 on the other 2; over 320TB on one node pair and 19TB on the other.

A much cleaner upgrade would be to add a lot more FC capacity. This brings the node IOPS balance much closer (44,800 to 32,000 FC IOPS). There will still be a lot more capacity behind 2 nodes, but the volumes that need more IOPS can be balanced across all FC disks.

Due to power distribution limits in a 3PAR rack you can only have 8 chassis per rack. A T400 with 8 chassis requires 2 full racks and a mostly unfilled 3rd rack.
T400 Configuration Examples

You'll notice that the T400 has space for 6 drive chassis, but the normal building block is 4 chassis. With a T400 you are allowed to deploy 6 drive chassis in the initial deployment, but this has some important caveats:
- Minimum upgrade increments are 6 magazines (24 drives); in this example with 600GB drives that is a minimum upgrade of 14TB
- The maximum configuration in one rack is 2 nodes, 6 chassis, 60 magazines, 240 drives
- The next minimum upgrade requires 2 nodes and 6 chassis with 12 magazines (48 drives)
- You can finally fill out the configuration by adding 4 more drive chassis (2 per node)
- Important note: to rebalance you will probably need a TS engagement


T800 Fully Configured - 224,000 SPC-1 IOPS

- 8 nodes
- 32 drive chassis
- 1,280 drives
- 768TB raw capacity with 600GB drives

Disk chassis/frames may be up to 100m away from the controllers (1st frame).
T-Class Redundant Power

- Controller nodes and disk chassis (shelves) are powered by (1+1) redundant power supplies
- The controller nodes are backed up by a string of two batteries


HP P10000 3PAR V400 Components

Drive Chassis (4U)
- Capacity building block: up to 6 in the first rack, 8 in expansion racks
- 2 to 10 drive magazines, added non-disruptively; industry-leading density

Controller Nodes
- Performance and connectivity building block
- Adapter cards, added non-disruptively; each node runs an independent OS instance

Full-mesh backplane
- Post-switch architecture; high performance, tightly coupled; completely passive

Service Processor (1U)
- Remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central

Racking: first rack with controllers and disks; expansion rack(s) with disks only


HP P10000 3PAR V800 Components

Drive Chassis (4U)
- Capacity building block: 2 in the first rack, 8 in expansion racks
- 2 to 10 drive magazines, added non-disruptively; industry-leading density

Controller Nodes
- Performance and connectivity building block
- Adapter cards, added non-disruptively; each node runs an independent OS instance

Full-mesh backplane
- Post-switch architecture; high performance, tightly coupled; completely passive

Service Processor (1U)
- Remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central

Racking: first rack with controllers and disks; expansion rack(s) with disks only


The 3PAR V-Class Evolution
Bus to switch to full-mesh progression

V-Class full-mesh backplane
- High performance / low latency; 112 GB/s backplane bandwidth
- Passive circuit board; slots for controller nodes
- Links every controller (full mesh), single hop; 2.0 GB/s ASIC to ASIC

Fully configured P10000 3PAR V800
- 8 controller nodes
- 16 Gen4 ASICs (2 per node)
- 16 Intel quad-core processors
- 256GB of control cache; 512GB of total data cache
- 136 GB/s peak memory bandwidth
- Max V800 configuration: 8 nodes and 1,920 disks


HP 3PAR V-Class Controller Node

- 2 to 8 per system, installed in pairs (PCIe slots 0-8 per node)
- 2 Intel quad-core CPUs per node
- 48GB or 96GB cache per node: V400 16GB control / 32GB data; V800 32GB control / 64GB data
- 2 Gen4 ASICs per node: data movement, ThP and XOR RAID processing
- Ports per node: Remote Copy Ethernet port (RCIP) E1, management Ethernet port E0, serial ports
- Scalable connectivity per node: 3 PCIe buses / 9 PCIe slots; 4-port 8Gb/s FC adapters; 10Gb/s FCoE ready (post GA); 10Gb/s iSCSI ready (post GA)
- Internal SSD drive for the InForm OS and for cache destaging in case of power failure

PCIe card installation order:
- Drive chassis connections: slots 6, 3, 0
- Host connections: slots 2, 5, 8, 1, 4, 7
- Remote Copy FC connections: slots 1, 4, 2, 3


HP 3PAR InSpire Architecture
V-Class controller node

- 2 to 8 nodes per system; scalable performance per node
- 2x Intel quad-core processors per node for the control (SCSI command) path
- Control cache: 16 or 32GB; data cache: 32 or 64GB (V400: 48GB cache, V800: 96GB maximum cache per node)
- 2x Gen4 ASICs, Thin Built In, on the data path; inline fat-to-thin processing in the DMA engine
- 2.0 GB/s dedicated ASIC-to-ASIC bandwidth; 112 GB/s total backplane bandwidth
- 8Gb/s FC host/drive adapters; 10Gb/s FCoE/iSCSI host adapters (planned); warm-plug adapters
- PCIe switches feeding the PCIe slots


V-Class P10000 Drive Chassis

- From 2 to 10 drive magazines
- (1+1) redundant power supplies
- Redundant dual FC paths and redundant dual switches
- Each magazine always holds 4 disks of the same drive type
- Magazines within a chassis can hold different drive types, for example: 3 magazines of FC, 1 magazine of SSD, 6 magazines of SATA


V400 Configuration Examples
With 2 controllers and 4-drive-chassis increments

Minimum initial configuration
- 1 rack, 2 controller nodes, 4 drive chassis, 8 drive magazines (32 disks)

Minimum upgrade
- 4 drive magazines (16 disks)

Maximum 2-node configuration
- 2 racks, 12 drive chassis, 480 disks


V400 Configuration Examples
With 4 controllers and 4-drive-chassis increments

Minimum initial configuration
- 2 racks, 4 controller nodes, 8 drive chassis, 16 drive magazines (64 disks)

Minimum upgrade
- 4 drive magazines (16 disks); up to 320 disks in 8 chassis


V400 Configuration Examples
With 4 controllers and 4-drive-chassis increments

Maximum configuration
- 4 racks, 4 controller nodes, 24 drive chassis, 960 disks


V800 Configuration Examples
With 2 controllers and 4-drive-chassis increments

Minimum initial configuration
- 2 racks, 2 controller nodes, 4 drive chassis, 8 drive magazines (32 disks)

Minimum upgrade
- 4 drive magazines (16 disks); up to 160 disks in 4 chassis


V800 Configuration Examples
With 4 controllers and 4-drive-chassis increments

Minimum initial configuration
- 2 racks, 4 controller nodes, 8 drive chassis, 16 drive magazines (64 disks)

Minimum upgrade
- 4 drive magazines (16 disks); up to 320 disks in 8 chassis


V800 Configuration Examples
With 8 controllers and 4-drive-chassis increments

Minimum initial configuration
- 3 racks, 8 controller nodes, 16 drive chassis, 32 drive magazines (128 disks)

Minimum upgrade
- 4 drive magazines (16 disks); up to 640 disks in 16 chassis


V800 Configuration Examples
Maximum 8-controller configuration - 450,213 SPC-1 IOPS

- 7 racks, 8 controller nodes, 192 host ports
- 768GB cache (256GB control / 512GB data)
- 48 drive chassis, 1,920 disks

Disk chassis/frames may be up to 100m away from the controllers (1st frame).
HP 3PAR InForm OS
Virtualization Concepts

HP 3PAR Virtualization Concept
Example: 4-node T400 with 8 drive chassis

- Nodes are added in pairs for cache redundancy
- An InServ with 4 or more nodes supports Cache Persistence, which enables maintenance windows and upgrades without performance penalties
- Drive chassis are point-to-point connected to the controller nodes in the T-Class, providing "cage-level" availability: the loss of an entire drive enclosure can be withstood without losing access to your data
HP 3PAR Virtualization Concept
Example: 4-node T400 with 8 drive chassis

- T-Class drive magazines hold 4 of the very same drives: same type (SSD, FC or SATA), size and speed
- SSD, FC and SATA drive magazines can be mixed
- A minimum configuration has 2 magazines per enclosure
- Each physical drive is divided into chunklets: 256MB on the F- and T-Class, 1GB on the V-Class


HP 3PAR Virtualization Concept
Example: 4-node T400 with 8 drive chassis

- RAID sets (e.g. RAID5 3+1) are built across enclosures and massively striped to form Logical Disks (LDs)
- LDs are equally allocated to the controller nodes
- Logical Disks are bound together to build Virtual Volumes
- Each Virtual Volume is automatically wide-striped across chunklets on all disk spindles of the same type, creating a massively parallel system
- Virtual Volumes can then be exported as LUNs to servers
Why are Chunklets so Important?

Ease of use and drive utilization
- The same drive spindle can service many different LUNs and different RAID types at the same time
- Allows the array to be managed by policy, not by administrative planning
- Enables easy mobility between physical disks, RAID types and service levels by using Dynamic or Adaptive Optimization

Performance
- Enables wide striping across hundreds of disks
- Avoids hot spots
- Allows data restriping after disk installations

High availability
- HA Cage: protect against a cage (disk tray) failure
- HA Magazine: protect against a magazine failure
Common Provisioning Groups (CPGs)

CPGs are policies that define service and availability levels by:
- Drive type (SSD, FC, SATA)
- Number of drives (striping width)
- RAID level (R10; R50 2D+1P to 8D+1P; R60 6D+2P or 14D+2P)

Multiple CPGs can be configured, and they can optionally overlap the same drives; i.e. a system with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of those 200 drives.

CPGs have many functions:
- They are the policies by which free chunklets are assembled into logical disks
- They are a container for existing volumes and are used for reporting
- They are the basis for service levels and the optimization products


HP 3PAR Virtualization - the Logical View
The base for autonomic utility storage

The chain runs: Physical Disks -> Chunklets -> Logical Disks -> CPGs -> Virtual Volumes -> Exported LUNs (the first steps are 3PAR-autonomic, the last are user-initiated).

- Physical disks are divided into chunklets (256MB or 1GB); the majority are used to build Logical Disks, some are used for distributed sparing
- Logical Disks (LDs) are collections of raidlets: chunklets arranged as rows of RAID sets (RAID 0, 10, 50, 60); they provide the space for Virtual Volumes, snapshot and logging disks, and are created automatically when required
- Common Provisioning Groups (CPGs) are user-created virtual pools of Logical Disks that allocate space to Virtual Volumes on demand; a CPG defines RAID level, disk type and number, striping pattern, etc.
- Virtual Volumes (VVs) are user-created volumes composed of LDs according to the corresponding CPG policies; they can be fat or thin provisioned, and the user exports a VV as a LUN
HP 3PAR Virtualization - the Logical View

Diagram: physical disks (SSD, FC, Nearline) feed autonomically built Logical Disks of various RAID levels (R1, R5, R6), which are pooled into user-created CPGs per tier; from these, fat and thin-provisioned (ThP) Virtual Volumes are created - some managed by Adaptive Optimization (AO) - and exported as LUNs. A toy model of these layers follows below.
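The sketch below is a deliberately simplified model of the chunklet -> logical disk -> CPG -> virtual volume chain described above: a CPG policy assembles free chunklets from the emptiest disks into RAID rows, and a thin volume allocates rows only as data is written. Names, sizes and the allocation heuristic are illustrative assumptions, not the InForm OS implementation.

```python
from dataclasses import dataclass

CHUNKLET_MB = 256  # F-/T-Class chunklet size (1 GB on the V-Class)

@dataclass
class Disk:
    id: int
    free: int  # free chunklet count

class CPG:
    """Policy: which disks may be used and how wide the RAID sets are."""
    def __init__(self, disks, set_size=4):  # e.g. RAID 5 (3+1)
        self.disks, self.set_size = disks, set_size

    def make_ld_row(self):
        """Take one chunklet from each of set_size different disks,
        preferring the emptiest disks -- automatic wide striping."""
        picks = sorted(self.disks, key=lambda d: -d.free)[:self.set_size]
        for d in picks:
            d.free -= 1
        return tuple(d.id for d in picks)

class ThinVolume:
    """Dedicate-on-write: RAID rows are allocated only as data lands."""
    def __init__(self, cpg):
        self.cpg, self.rows = cpg, []

    def write(self, mb):
        # parity overhead ignored for simplicity
        while len(self.rows) * self.cpg.set_size * CHUNKLET_MB < mb:
            self.rows.append(self.cpg.make_ld_row())

disks = [Disk(i, free=100) for i in range(16)]
vv = ThinVolume(CPG(disks))
vv.write(8 * 1024)  # write 8 GB
used = sorted({d for row in vv.rows for d in row})
print(f"{len(vv.rows)} RAID rows spread over disks {used}")
```

Running it shows the rows rotating across all 16 spindles, which is the point of chunklet-based wide striping: no LUN is tied to a fixed set of disks.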


Create CPG(s)
Easy and straightforward

In the Create CPG wizard, select and define:
- The 3PAR system and its residing domain (if any)
- Disk type: SSD (solid state disk), FC (Fibre Channel disk), NL (Nearline SATA disk)
- Disk speed
- RAID type

By selecting advanced options, more granular options can be defined:
- Availability level
- Step size
- Preferred chunklets
- Dedicated disks


Create Virtual Volume(s)
Easy and straightforward

In the Create Virtual Volume wizard, define:
- Virtual Volume name and size
- Provisioning type: fat or thin
- The CPG to be used
- Allocation warning
- Number of Virtual Volumes

By selecting advanced options, more options can be defined:
- Copy space settings
- Virtual Volume geometry


Export Virtual Volume(s)
Easy and straightforward

In the Export Virtual Volume wizard, define:
- The host or host set to present the volume to

Optionally:
- Select specific array host ports
- Specify the LUN ID


HP 3PAR Autonomic Groups
Simplify provisioning

Example: a cluster of VMware ESX servers sharing ten volumes V1-V10 (see the sketch below).

Traditional storage - individual volumes:
- Initial provisioning of the cluster requires 50 provisioning actions (1 per host-volume relationship)
- Adding another host requires 10 provisioning actions (1 per volume)
- Adding another volume requires 5 provisioning actions (1 per host)

Autonomic HP 3PAR storage - autonomic host group and volume group:
- Initial provisioning: add the hosts to the host group, add the volumes to the volume group, export the volume group to the host group
- Adding another host: just add the host to the host group
- Adding another volume: just add the volume to the volume group; volumes are exported automatically
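The provisioning-action counts above, as arithmetic. Counting one action per object added plus one group export is an assumption about how the autonomic case is tallied; the traditional count matches the slide exactly.

```python
hosts, volumes = 5, 10  # 5 ESX hosts, 10 shared volumes

traditional = hosts * volumes       # one export per host-volume pair -> 50
autonomic = hosts + volumes + 1     # fill both groups, export once   -> 16

print(f"traditional: {traditional} actions, autonomic groups: {autonomic}")
```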
HP 3PAR InForm
Software and Features

HP 3PAR Software and Licensing

Four license models: consumption-based, spindle/magazine-based, frame-based, and free* (* a support fee is associated).

InForm Operating System (base)
- Thin Copy Reclamation, Full Copy, RAID MP (multi-parity), Access Guard, Autonomic Groups, Rapid Provisioning, LDAP, Scheduler, Host Personas, InForm administration tools

InForm Additional Software
- Thin Suite: Thin Provisioning, Thin Conversion, Thin Persistence
- Optimization Suite: Dynamic Optimization, Adaptive Optimization, System Tuner
- Virtual Copy, Remote Copy, Virtual Domains, Virtual Lock, Peer Motion

InForm Host Software
- System Reporter, Host Explorer, 3PAR Manager for VMware vCenter
- Multi-path IO for IBM AIX and for Windows 2003
- Recovery Manager for SQL, VMware, Oracle and Exchange


HP 3PAR Software Support Cost & Capping

- Care Pack support services for spindle-based licenses are charged by the number of magazines
- Support service costs increase incrementally until they reach a predefined threshold (cap) and then stay flat, i.e. they will not increase any more
- Capping threshold by array: F200, 11 magazines; F400, 13 magazines; T400/V400, 33 magazines; T800/V800, 41 magazines
- Capping occurs for each software title per magazine type (sketched below)
- Example for InForm OS on a V800 with 3 years of Critical Service:
  - 50x 600GB disk magazines -> 41x HA112A3-QQ6 3PAR InForm V800/4x600GB Mag LTU Support
  - 24x 2TB disk magazines -> 24x HA112A3-QQ6 3PAR InForm V800/4x2TB Mag LTU Support
  - 24x 200GB SSD magazines -> 24x HA112A3-QQ6 3PAR InForm V800/4x200GB SSD Mag LTU Support
- The Thin Suite, Thin Provisioning, Thin Conversion and Thin Persistence do not have any associated support cost
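The capping rule reduces to a minimum, applied per software title and magazine type. A minimal sketch, using the caps from this slide:

```python
CAP = {"F200": 11, "F400": 13, "T400": 33, "V400": 33, "T800": 41, "V800": 41}

def billed_magazines(array: str, installed: int) -> int:
    """Support is billed per magazine, but never above the array's cap."""
    return min(installed, CAP[array])

# The V800 example from this slide:
for mag_type, count in [("600GB disk", 50), ("2TB disk", 24), ("200GB SSD", 24)]:
    print(f"{mag_type}: {count} installed -> {billed_magazines('V800', count)} billed")
# Only the 600GB magazines exceed the cap of 41; the others are billed in full.
```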


HP 3PAR
Thin Technologies

HP 3PAR Thin Technologies Leadership Overview

Start Thin - Thin Provisioning
- No pool management or reservations; no professional services
- Fine capacity allocation units; variable QoS for snapshots
- Result: buy up to 75% less storage capacity

Get Thin - Thin Conversion
- Eliminates the time and complexity of getting thin
- Open, heterogeneous migrations from any array to 3PAR
- Service levels preserved during inline conversion
- Result: reduce tech-refresh costs by up to 60%

Stay Thin - Thin Persistence
- Frees stranded capacity; automated reclamation for 3PAR offered by Symantec and Oracle
- Snapshots and remote copies stay thin
- Result: thin deployments stay thin over time


HP 3PAR Thin Technologies Leadership Overview

Built-in
- HP 3PAR Utility Storage is built from the ground up to support Thin Provisioning (ThP), eliminating the diminished performance and functional limitations that plague bolt-on thin solutions

In-band
- Sequences of zeroes are detected by the 3PAR ASIC and never written to disk (sketched below); most other vendors' ThP implementations write the zeroes to disk, and some can only reclaim the space as a post-process

Reservation-less
- HP 3PAR ThP draws fine-grained increments from a single free-space reservoir without pre-dedication of any kind; other vendors' ThP implementations require a separate, pre-dedicated pool for each data service level

Integrated
- API for direct ThP integration in the Symantec File System, VMware, Oracle ASM and others
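A minimal sketch of the in-band zero-detection idea at 16KB granularity. On a real array this happens in the Gen3/Gen4 ASIC at wire speed; the plain-Python model below only illustrates the concept, and the ThinLun class is invented for this example.

```python
PAGE = 16 * 1024          # 16 KB allocation granularity
ZERO_PAGE = bytes(PAGE)

class ThinLun:
    def __init__(self):
        self.store = {}   # page index -> data, allocated on demand

    def write(self, offset: int, data: bytes):
        for i in range(0, len(data), PAGE):
            page, idx = data[i:i + PAGE], (offset + i) // PAGE
            if page == ZERO_PAGE[:len(page)]:
                self.store.pop(idx, None)   # all zeroes: unmap, reclaim
            else:
                self.store[idx] = page      # real data: allocate and write

lun = ThinLun()
lun.write(0, b"x" * PAGE + bytes(PAGE))     # one data page + one zero page
print(len(lun.store), "page(s) physically allocated")  # -> 1
```

The same mechanism is what makes host-side "write zeroes to free space" tools (sdelete, dd) effective for reclamation, as the later Thin Persistence slides show.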


HP 3PAR Thin Provisioning - Start Thin
Dedicate on write only

- Traditional array - dedicate on allocation: the server-presented LUN capacities must be fully backed by net array capacity on physically installed disks, regardless of how much data is actually written
- HP 3PAR array - dedicate on write only: LUNs draw from the free chunklets as data lands, so the physically installed disks only need to cover the actually written data
HP 3PAR Thin Conversion - Get Thin

Thin your online SAN storage by up to 75%: a practical and effective solution to eliminate the costs associated with
- Storage arrays and capacity
- Software licensing and support
- Power, cooling and floor space

The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
- Simplicity and speed: eliminates the time and complexity of getting thin
- Choice: open, heterogeneous, any-to-3PAR migrations
- Preserved service levels: high performance during migrations


HP 3PAR Thin Conversion - Get Thin
How to get there

1. Defragment the source data
   a) If you are going to do a block-level migration via an appliance or host volume manager (mirroring), defragment the filesystem prior to zeroing the free space
   b) If you are using filesystem copies to do the migration, the copy defragments the files as it copies, eliminating the need to defragment the source filesystem

2. Zero the existing volumes via host tools
   a) On Windows, use sdelete (a free utility available from Microsoft):
      sdelete -c <drive letter>
   b) On UNIX/Linux, use dd to create files containing zeroes, e.g.
      dd if=/dev/zero of=/path/10GB_zerofile bs=128K count=81920
      or zero and delete a file directly with shred:
      shred -n 0 -z -u /path/file


HP 3PAR Thin Conversion at a Global Bank

Situation: no budget for additional storage, and recent huge layoffs.

- Moved 271TB from EMC DMX to 3PAR, online and non-disruptively, with no professional services
- Capacity requirements reduced by more than 50%; large capacity savings
- $3 million in upfront capacity-purchase savings; reduced power and cooling

"The results shown within this document demonstrate a highly efficient migration process which removes the unused storage. No special host software components or professional services are required to utilise this functionality."

Chart: sample volume migrations on different OSs - Unix (VxVM), ESX (VMotion) and Windows (SmartMove) - comparing each EMC source volume with its roughly halved 3PAR target.
HP 3PAR Thin Persistence - Stay Thin
Keep your array thin over time

- Non-disruptive and application-transparent re-thinning of thin provisioned volumes
- Returns space to thin provisioned volumes and to the free pool for reuse
- New with InForm 3.1.1: intelligently reclaims 16KB pages

The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
- Simplicity: no special host software required; leverage standard filesystem tools/scripts to write zero blocks
- Preserved service levels: zeroes are detected and unmapped at line speed
- Integrated, automated reclamation with Symantec and Oracle


HP 3PAR Thin Persistence - Manual Thin Reclaim
Remember: deleted files still occupy disk space

1. Initial state: LUN 1 and LUN 2 are ThP virtual volumes; Data 1 and Data 2 are the actually written data
2. After a while: files deleted by the servers/file systems still occupy space on the storage
3. Zero out the unused space: on Windows run sdelete*, on Unix/Linux a dd script
4. Run thin reclamation: compact the CPG and Logical Disks; the freed-up space is returned to the free chunklets

* sdelete is a free utility available from Microsoft
HP 3PAR Thin Persistence - Thin API

- Partnered with Symantec to jointly develop a Thin API - an industry first!
  - A file system / array communication API ("write same")
  - Most elements are now captured as part of the emerging T10 SCSI standard
- HP has introduced the API to other operating system vendors and offered development support: VMware, Microsoft


HP 3PAR Thin Persistence - Oracle Integration

- Oracle auto-extend allows customers to save on Oracle database capacity with Thin Provisioning
- Database capacity can get stranded after writes and deletes
- 3PAR Thin Persistence and the Oracle ASM Storage Reclamation Utility (ASRU) can reclaim 25% or more of that stranded capacity, e.g. after a tablefile shrink/drop or database drop, or after a new LUN is added to an ASM disk group
- ASRU compacts the ASM files and writes zeroes to the free space; 3PAR's Thin Built In, ASIC-based zero detection then eliminates the free space
- From a DBA perspective: non-disruptive and no impact on storage performance (a huge advantage of the ASIC); it increases the database's "miles per gallon"
- On a traditional array the unused space remains and the zeroes are written; on a 3PAR array with Thin Persistence the zeroes are removed and the space is reclaimed


HP 3PAR Thin Persistence in VMware Environments

Introduced with vSphere 4.0
- VMware VMFS supports three formats for VM disk images: Thin, Thick ZeroedThick (ZT), and EagerZeroedThick (EZT)
- VMware recommends EZT for highest performance; more info: http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
- 3PAR Thin Technologies work with, and optimize, all three formats

Introduced with vSphere 4.1
- vStorage API for Array Integration (VAAI); Thin Technologies enabled by the 3PAR plug-in for VAAI
- Thin VMotion: uses XCOPY via the plug-in
- Active Thin Reclamation: uses Write Same to offload zeroing to the array

Introduced with vSphere 5.0
- Automated VM space reclamation, leveraging the industry-standard T10 UNMAP
- Supported with VMware vSphere 5.0 and InForm OS 3.1.1


Autonomic VMware Space Reclamation

Traditional storage with space reclaim - coarse and slow:
- Slow, post-process T10 UNMAP with overhead (768KB - 42MB coarse granularity)
- Example: 20GB worth of VMDKs in a 100GB datastore still consume 40+ GB after deletions

HP 3PAR with Thin Persistence - fine-grained and fast:
- Rapid, inline T10 UNMAP with ASIC zero detect (16KB granularity)
- Example: the same 20GB of VMDKs consume ~20GB after deletions
HP 3PAR Thin Provisioning Positioning
Built in, not bolted on

- No upfront allocation of storage for thin volumes
- No performance impact when using thin volumes, unlike competing storage products
- No restrictions on where 3PAR thin volumes should be used, unlike many other storage arrays
- Allocation size of 16KB, much smaller than most ThP implementations
- Thin provisioned volumes can be created in under 30 seconds, without any disk layout or configuration planning
- Thin volumes are autonomically wide-striped over all drives within that tier of storage


HP 3PAR
Full and Virtual Copy

HP 3PAR Full Copy - Flexible Point-in-Time Copies
Part of the base InForm OS

- Share data quickly and easily: a full physical point-in-time copy of the base volume
- Independent of the base volume's RAID and physical layout properties, for maximum flexibility
- Fast resynchronization capability
- Thin Provisioning-aware: full copies can consume the same physical capacity as a thin provisioned base volume


HP 3PAR Virtual Copy - Snapshot at its Best

Up to 8,192 snapshots per array; hundreds of snapshots per base volume, but just one copy-on-write.

Smart
- Promotable snapshots; individually deletable snapshots; scheduled creation/deletion; consistency groups

Thin
- No reservations needed; non-duplicative snapshots; Thin Provisioning aware; variable QoS

Ready
- Instantly readable or writeable snapshots; snapshots of snapshots
- Control given to the end user for snapshot management; integration with Oracle, SQL, Exchange and VMware
- Virtual Lock for retention of read-only snapshots


HP 3PAR Virtual Copy - Snapshot at its Best

- The base volume and its virtual copies can be mapped to different CPGs, which means they can have different quality-of-service characteristics; for example, the base volume space can be derived from a RAID 1 CPG on FC disks and the virtual copy space from a RAID 5 CPG on Nearline disks
- The base volume space and the virtual copy space can grow independently without impacting each other (each space has its own allocation warning and limit)
- Dynamic Optimization can tune the base volume space and the virtual copy space independently


HP 3PAR Virtual Copy Relationships
The following shows a complex relationship scenario.


Creating a Virtual Copy Using the GUI
Right-click a volume and select Create Virtual Copy.


InForm GUI View of Virtual Copies
The GUI gives a very easy-to-read graphical view of virtual copies.


HP 3PAR
Remote Copy

HP 3PAR Remote Copy - Protect and Share Data

Smart
- Initial setup in minutes; simple and intuitive commands; no consulting services
- VMware SRM integration

Complete
- Native IP-based or FC transport; no extra copies or infrastructure needed
- Thin Provisioning aware; thin conversion
- Synchronous, Asynchronous Periodic, or Synchronous Long Distance (SLD)
- Mirror between any InServ size or model; many-to-one and one-to-many

Configurations
- 1:1: primary to secondary, sync or async periodic
- 1:2 (Synchronous Long Distance): the primary mirrors synchronously to a secondary and asynchronous-periodic to a tertiary, with a standby async-periodic link from the secondary to the tertiary
HP 3PAR Remote Copy - N:1 Configuration

- You can use Remote Copy over IP (RCIP) and/or Fibre Channel (RCFC) connections
- InServ requirements:
  - Maximum supported fan-in is 4 to 1; one of the 4 relationships can mirror bi-directionally
  - Each Remote Copy relationship requires dedicated node pairs, so in a 4:1 setup the target site needs four dedicated node pairs (8 nodes)
- Diagram: primary sites A, B and C plus a primary/target site D, all replicating to one target site


HP 3PAR Remote Copy - 1:N Configuration

- You can use Remote Copy over IP (RCIP) and/or Fibre Channel (RCFC) connections
- InServ requirements:
  - Maximum supported fan-out is 1 to 2; one of the mirrors can be bi-directional
  - Each Remote Copy relationship requires dedicated node pairs, so a 1:2 setup needs two dedicated node pairs (4 nodes) at the primary site
- Diagram: one primary site replicating to target sites A and B


HP 3PAR Remote Copy - Synchronous

- Real-time mirror: highest I/O currency with lock-step data consistency
- Space efficient: Thin Provisioning aware
- Targeted use: campus-wide business continuity

Write sequence (sketched below):
- Step 1: the host server writes the I/O to the primary array's cache
- Step 2: the primary InServ writes the I/O to the secondary array's cache
- Step 3: the remote system acknowledges receipt of the I/O
- Step 4: the I/O-complete signal is communicated back to the primary host
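The four steps as a minimal sketch. The point is ordering: the host acknowledgment is issued only after the remote cache confirms receipt, which is where synchronous Remote Copy's lock-step consistency comes from. The Cache class is invented for illustration.

```python
class Cache:
    def __init__(self, name):
        self.name, self.pages = name, {}

    def put(self, lba, data):
        self.pages[lba] = data

primary, secondary = Cache("primary"), Cache("secondary")

def sync_write(lba, data) -> bool:
    primary.put(lba, data)     # 1. host write lands in primary cache
    secondary.put(lba, data)   # 2. primary sends the I/O to secondary cache
    remote_ack = True          # 3. remote system acknowledges receipt
    return remote_ack          # 4. only now is the host I/O completed

print("host I/O complete:", sync_write(42, b"payload"))
```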


HP 3PAR Remote Copy
Data integrity

Assured data integrity:
- Single volume: all writes to the secondary volume are completed in the same order as they were written on the primary volume
- Multi-volume consistency group: volumes can be grouped together to maintain write ordering across the set of volumes; useful for databases or other applications that make dependent writes to more than one volume


HP 3PAR Remote Copy - Asynchronous Periodic
The replication solution for long-distance implementations

- Efficient even with high-latency replication links: host writes are acknowledged as soon as the data is written into the cache of the primary array
- Bandwidth-friendly: the primary and secondary volumes are resynchronized periodically, either on a schedule or manually; if the same area of a volume is written several times between resyncs, only the last update needs to be resynced
- Space efficient: copy-on-write snapshots instead of full point-in-time copies; Thin Provisioning aware
- Guaranteed consistency: enabled by volume groups; before a resync starts, a snapshot of the secondary volume or volume group is created


Remote Copy Asynchronous Periodic - Sequence

1. Initial copy: snapshot SA is taken of primary base volume A and copied in full to the remote site
2. Resynchronization starts with snapshots: the primary volume (now at state B) gets a new snapshot SB, and only the delta B-A is copied to the remote site
3. Upon completion, the old snapshot SA is deleted; the system is ready for the next resynchronization, with SB as the new reference

A sketch of one such cycle follows.
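A toy model of one async-periodic cycle, matching the sequence above: snapshots bound the delta, overwrites between resyncs coalesce so only the last update ships, and the remote side always holds a consistent point in time. Dicts stand in for volumes purely for illustration.

```python
base = {"lba1": "A", "lba2": "A"}   # primary volume content
snap_prev = dict(base)              # snapshot taken at the last resync
remote = dict(base)                 # remote volume == snap_prev

base["lba1"] = "B"                  # host keeps writing...
base["lba1"] = "B2"                 # overwrites coalesce: only the last
                                    # update to an area is ever shipped

snap_now = dict(base)               # 1. snapshot the current state
delta = {k: v for k, v in snap_now.items() if snap_prev.get(k) != v}
remote.update(delta)                # 2. ship only the changed regions
snap_prev = snap_now                # 3. drop the old snap, keep the new

print("shipped:", delta, "-> remote:", remote)
```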
HP 3PAR Remote Copy
Supported distances and latencies

- Synchronous IP: max 210 km / 130 miles; max 1.3ms latency
- Synchronous FC: max 210 km / 130 miles; max 1.3ms latency
- Asynchronous Periodic IP: distance n/a; max 150ms round trip
- Asynchronous Periodic FC: max 210 km / 130 miles; max 1.3ms latency
- Asynchronous Periodic FCIP: distance n/a; max 60ms round trip


Cluster Extension for Windows
Clustering solution protecting against server and storage failure

What does it do?
- Manual or automated site failover for server and storage resources
- Transparent Hyper-V Live Migration between sites
- A Microsoft cluster spans Data Centers 1 and 2, with a file-share witness in Data Center 3

Supported environments:
- Microsoft Windows Server 2003 and 2008
- HP ProLiant Storage Server
- Up to 210km (the Remote Copy supported maximum)

Requirements:
- 3PAR disk arrays and Remote Copy
- Microsoft Cluster and Cluster Extension
- Max 20ms network round-trip delay

See also http://h18006.www1.hp.com/storage/software/ces/index.html
Metrocluster for HP-UX
End-to-end clustering solution to protect against server and storage failure

What does it do?
- Provides manual or automated site failover of server and storage resources for HP-UX
- A Serviceguard cluster spans Data Centers 1 and 2, with the quorum in Data Center 3

Supported environments:
- HP-UX 11i v2 & v3 with Serviceguard
- Up to 210km (the Remote Copy supported maximum)

Requirements:
- HP 3PAR disk arrays and 3PAR Remote Copy
- HP Serviceguard Metrocluster
- Max 200ms network round-trip delay

See also: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02967683/c02967683.pdf
VMware ESX DR with SRM
Automated ESX Disaster Recovery

What does it do?
 Simplifies DR and increases reliability
 Integrates VMware Infrastructure with HP 3PAR Remote Copy and Virtual Copy
 Makes DR protection a property of the VM, allowing you to pre-program your disaster response
 Enables non-disruptive DR testing
Requirements:
 HP 3PAR arrays at both sites
 VMware vSphere and VMware vCenter
 VMware vCenter Site Recovery Manager
 HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
 HP 3PAR Remote Copy Software
 HP 3PAR Virtual Copy Software (for DR failover testing)
(Diagram: Site Recovery Manager and VirtualCenter at the production and
recovery sites; production LUNs are replicated via Remote Copy to DR LUNs,
with Virtual Copy providing test LUNs.)

108 HP Copyright 2011 Peter Mattei
HP 3PAR
Dynamic and Adaptive
Optimization
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
A New Optimization Strategy for SSDs

 Flash price decline has enabled SSD as a viable storage tier, but data
  placement is difficult on a per-LUN basis
  Non-optimized approach: an SSD-only, non-tiered volume/LUN
 A new way of autonomic data placement and cost/performance optimization
  is required: HP 3PAR Adaptive Optimization
  Optimized approach for leveraging SSDs: a multi-tiered volume/LUN spanning
  Tier 0 (SSD), Tier 1 (FC) and Tier 2 (NL)
110 HP Copyright 2011 Peter Mattei
HP 3PAR Dynamic and Adaptive Optimization
Manual or Automatic Tiering

Tiers: Tier 0 SSD, Tier 1 FC, Tier 2 SATA

 3PAR Dynamic Optimization: autonomic data movement, moving an entire
  volume non-disruptively between tiers
 3PAR Adaptive Optimization: autonomic tiering and data movement, placing
  sub-volume regions on the most suitable tier
111 HP Copyright 2011 Peter Mattei
Storage Tiers: HP 3PAR Dynamic Optimization

(Chart: performance versus cost per usable TB. Each drive tier, SSD, FC and
Nearline, offers a range of layouts from RAID 1 at the high end through
RAID 5 (2+1), (3+1) and (7+1) to RAID 6 (6+2) and (14+2) at the low end.)

In a single command, non-disruptively optimize and adapt cost, performance,
efficiency and resiliency.
112 HP Copyright 2011 Peter Mattei
HP 3PAR Dynamic Optimization Use Cases
Deliver the required service levels for the lowest possible cost throughout the data lifecycle

 10TB net on RAID 10 with 300GB FC drives
 10TB net on RAID 50 (3+1) with 600GB FC drives: ~50% savings
 10TB net on RAID 50 (7+1) with 2TB SATA-class drives: ~80% savings

Accommodate rapid or unexpected application growth on demand by freeing raw capacity:
 The same 20TB raw deliver 10TB net as RAID 10; re-striped to RAID 50 they
  free up to 7.5TB of additional net capacity on demand
113 HP Copyright 2011 Peter Mattei
HP 3PAR Dynamic Optimization at a Customer
Optimize QoS levels with autonomic rebalancing without pre-planning

(Charts: used and free chunklets per physical disk, before and after. After
two disk upgrades the chunklet distribution across the ~96 physical disks is
highly uneven; after Dynamic Optimization rebalancing, used chunklets are
spread evenly across all disks.)
HP Copyright 2011 Peter Mattei
How to Use Dynamic Optimization

115 HP Copyright 2011 Peter Mattei
How to Use Dynamic Optimization

116 HP Copyright 2011 Peter Mattei
How to Use Dynamic Optimization

117 HP Copyright 2011 Peter Mattei
Performance Example with Dynamic Optimization

Volume tune from RAID 5 (7+1) SATA to RAID 5 (3+1) FC 10K
118 HP Copyright 2011 Peter Mattei
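The tune shown above boils down to one InForm CLI command. A minimal sketch, assuming a volume named myvol and a pre-created RAID 5 (3+1) CPG on 10K FC drives named FC10K_r5; tunevv is the actual Dynamic Optimization command, though its options vary by InForm OS release:

  # Non-disruptively migrate the volume's user space from its SATA CPG
  # to the RAID 5 (3+1) CPG on 10K FC drives
  tunevv usr_cpg FC10K_r5 myvol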
IO density differences across applications

(Chart: cumulative access rate % versus cumulative space % for a number of
CPGs: ex2k7db_cpg, ex2k7log_cpg, oracle, oracle-stage, oracle1-fc, windows-fc,
unix-fc, vmware, vmware2, vmware5, windows. The curves differ strongly per
application: a small share of the space serves most of the I/O, which is what
makes sub-volume tiering worthwhile.)
119 HP Copyright 2011 Peter Mattei
HP 3PAR Adaptive Optimization
Improve Storage Utilization

Traditional deployment:
 A single pool of one disk drive type, speed, capacity and RAID level
 The number and type of disks are dictated by the max IOPS plus capacity
  requirements, so space is wasted on high-speed media that mostly holds cold data
Deployment with HP 3PAR AO:
 An AO virtual volume draws space from 2 or 3 different tiers/CPGs
 Each tier/CPG can be built on different disk types, RAID levels and numbers of disks
 The IO distribution is served by a small high-speed pool, while medium- and
  low-speed pools hold the bulk of the required capacity
120 HP Copyright 2011 Peter Mattei
HP 3PAR Adaptive Optimization
Improve Storage Utilization

One tier without Adaptive Optimization versus two tiers with Adaptive
Optimization running (System Reporter charts: used space in GiB versus accesses/GiB/min)

 The single-tier chart out of System Reporter shows that most of the capacity
  has very low IO activity; adding Nearline disks would lower cost without
  compromising overall performance
 In the two-tier chart a Nearline tier has been added and Adaptive
  Optimization enabled; AO has moved the least used chunklets to the Nearline tier
121 HP Copyright 2011 Peter Mattei
HP 3PAR Peer Motion
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
Beyond Virtualization: Storage Federation

Virtualization
 The delivery of consolidated or distributed volume management through
 appliances that hierarchically control a set of heterogeneous storage arrays
 Pros
  Broader, heterogeneous array support
 Cons
  More expensive (dual controller layer)
  Additional failure domains
  Lowest common denominator function
  Likely additional administration

Federation
 The delivery of distributed volume management across a set of
 self-governing, homogeneous, peer storage arrays
 Pros
  Less expensive
  Minimized failure domains
  Simpler administration
 Cons
  No heterogeneous array support
HP Copyright 2011 Peter Mattei
Converged Migration: HP 3PAR Peer Motion

Traditional block migration approaches:
 Complex, time-consuming, risky
 Extra tools, software and appliances
 Downtime and SLA risk
 Pre-planning
 Complex post-process thinning

HP 3PAR Peer Motion, the first non-disruptive DIY migration for enterprise SAN:
 Simple and fool-proof
 Online or offline
 Non-disruptive
 Any-to-any 3PAR
 Thin Built-In

With Peer Motion, customers can:
 Load balance at will
 Perform tech refresh seamlessly
 Cost-optimize Asset Lifecycle Management
 Lower tech refresh CAPEX (thin landing)
HP Copyright 2011 Peter Mattei
Peer Motion Migration Phases
Non-disruptive array migration: the steps behind the scenes

(Diagrams: host, SAN zoning, source and destination arrays at each stage)

Starting from the initial 3PAR configuration:
1. Install new 3PAR array
2. Configure array peer ports on target
3. Create new source-destination zone
4. Configure destination as host on source
5. Export volumes from source to destination
6. Create new destination-host zone
7. Admit source volumes on destination (admitvv)
8. Export destination volumes to host (this adds additional paths to the source volumes)
HP Copyright 2011 Peter Mattei
Peer Motion Migration Phases
Non-disruptive array migration: the steps behind the scenes

9. Unzone source from host
   (I/O now flows host - destination - source only)
10. Start data migration (importvv)
Once migration has finished, post migration:
11. Remove exported source volume
12. Remove destination-source zone
HP Copyright 2011 Peter Mattei
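Steps 7 and 10 correspond to the two CLI commands named in the slides. A minimal sketch run on the destination array, where the volume name and WWN are hypothetical and additional options (such as a destination CPG) may be required:

  # Step 7: admit the source volume on the destination array,
  # identified by the WWN it presents over the peer links
  admitvv appvol:50002AC0001C0125
  # Step 10: start the background data migration of the admitted volume
  importvv appvol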
HP 3PAR Peer Motion Manager
Wizard-based do-it-yourself data migration

Easy and straightforward CLUI
Automated processes
 System configuration import
 Source volume presentation
 Volume migration
 Cleanup
Current support
 Windows, Linux, Solaris (more to come)
 No existing snapshots
 Not part of a replication group

Example Peer Motion Manager screen:
=============================================================================
--- Main Menu ---
Source Array:      WWN=2FF70002A000144 SerialNumber=1300324 SystemName=s324
Destination Array: WWN=2FF70002A00017D SerialNumber=1300381 SystemName=s381
-------------- Migration Links/Host --------------
Destination array peer links                             : < OK >
Source array host links                                  : < OK >
Source array host configuration                          : < OK >
------------------ Current Migration Volume Status ------------------
Current selected host                                    : < win83-12 >
Domain of currently selected host                        : < -- >
Source VV exports detected on destination array          : < 0 >
Source VVs admitted state on destination array           : < 0 >
Source VVs importing state on destination array          : < 0 >
Source VVs import completed state on destination array   : < 0 >
Source VVs import incomplete state on destination array  : < 0 >
Total completed source VVs migrated                      : < 2 >

1 ==> Copy Source Array Configuration Data to Destination Array
2 ==> Analyze Migration Links
3 ==> Migrate Volumes
4 ==> Display Array Information
Enter = Refresh screen
x = Exit
HP Copyright 2011 Peter Mattei
Peer Motion supported environments

Peer Motion online migration support

Operating System                  Multipath                       Host Clustering          InForm OS Source Versions  InForm OS Target Version
Windows Server Enterprise 2003    3PAR MPIO 1.0.22.7, 1.0.23      Currently not supported  2.2.4, 2.3.1, 3.1.1        3.1.1
Windows Server Enterprise 2008    Windows 2008 embedded           Currently not supported  2.2.4, 2.3.1, 3.1.1        3.1.1
Solaris 10                        Embedded Sun StorEdge Traffic
                                  Manager SSTM (MPxIO)            Currently not supported  2.2.4, 2.3.1, 3.1.1        3.1.1
Red Hat Enterprise Linux 4, 5, 6  RHEL embedded Device Mapper     Currently not supported  2.2.4, 2.3.1, 3.1.1        3.1.1
SUSE SLES 10, 11                  SLES embedded Device Mapper     Currently not supported  2.2.4, 2.3.1, 3.1.1        3.1.1

Peer Motion offline migration support
 All operating systems listed in the SPOCK FC connectivity tables for InForm OS 3.1.1

Note: Find more details and the most current support matrix in HP SPOCK
128 HP Copyright 2011 Peter Mattei
HP 3PAR
Virtual Domains
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
What are HP 3PAR Virtual Domains?

Multi-tenancy with traditional storage:
 Each admin, application, department and customer (A, B, C) gets separate,
 physically-secured storage
Multi-tenancy with 3PAR Virtual Domains:
 Admins, applications, departments and customers A, B and C share one array;
 Domains A, B and C provide shared, logically-secured storage
130 HP Copyright 2011 Peter Mattei
What are the benefits of Virtual Domains?

Centralized storage admin with traditional storage:
 End users (departments, customers) depend on centralized storage
 administration to provision storage from the consolidated physical storage
Self-service storage admin with 3PAR Virtual Domains:
 Centralized storage administration assigns virtual domains; end users then
 provision their own storage within their domain
131 HP Copyright 2011 Peter Mattei
3PAR Domain Types & Privileges

(Diagram: Super Users manage domains, users and provisioning policies across
the All Domain; Edit Users set to All Domain manage provisioning policies.
A domain set (e.g. Engineering) groups Domain A (Dev) and Domain B (Test);
each domain contains its own CPGs, hosts, users with their respective user
levels, VLUNs, VVs & TPVVs, VCs & FCs & RCs, and chunklets & LDs. Elements
not assigned to any domain remain in the No Domain space, from which any
element can be assigned.)
132 HP Copyright 2011 Peter Mattei
HP 3PAR Virtual Domains Overview

 Requires a license
 Allows fine-grained access control on a 3PAR array
 Up to 1024 domains or spaces per array
 Each user may have privileges over one, up to 32 selected, or all domains
 Each domain can be dedicated to a specific application
 The system provides different privileges to different users for domain
  objects, with no limit on the maximum number of users per domain

Also see the analyst report and product brief on http://www.3par.com/litmedia.html
133 HP Copyright 2011 Peter Mattei
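For illustration, a hedged InForm CLI sketch of carving out a domain and a domain-scoped administrator; createdomain, createcpg and createuser are real commands, while the names and the exact role argument shown are assumptions:

  # Create a domain for the engineering department
  createdomain Engineering
  # Create a CPG owned by that domain (assumed -domain option)
  createcpg -domain Engineering FC_r5_eng
  # Give a departmental admin edit rights limited to this domain
  createuser eng_admin Engineering edit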
Authentication and Authorization
LDAP Login

(Diagram: management workstation, 3PAR InServ and LDAP server)

Step 1: User initiates login to the 3PAR InServ via 3PAR CLI/GUI or SSH
Step 2: InServ searches local user entries first;
        upon mismatch, the configured LDAP server is checked
Step 3: LDAP server authenticates the user
Step 4: InServ requests the user's group information
Step 5: LDAP server provides the LDAP group information for the user
Step 6: InServ authorizes the user for a privilege level based on the user's group-to-role mapping
134 HP Copyright 2011 Peter Mattei
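A minimal sketch of pointing the array at a directory server with setauthparam; setauthparam is the real InForm CLI command, but the parameter names and values here are representative assumptions to be checked against the CLI reference:

  # Point the array at the LDAP server and choose a simple bind
  setauthparam -f ldap-server 192.0.2.10
  setauthparam -f binding simple
  # Map an LDAP group to the Super role (group-to-role mapping, step 6 above)
  setauthparam -f super-map "cn=storage-admins,ou=groups,dc=example,dc=com"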
HP 3PAR
Virtual Lock
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
HP 3PAR Virtual Lock

 HP 3PAR Virtual Lock Software prevents alteration and deletion of selected
  virtual volumes for a specified period of time
 Supported with
  Fat and thin virtual volumes
  Full Copy, Virtual Copy and Remote Copy
 Locked virtual volumes cannot be overwritten
 Locked virtual volumes cannot be deleted, even by an HP 3PAR Storage System
  administrator with the highest level privileges
 Because it is tamper-proof, it is also a way to avoid administrative mistakes

Also see the product brief on http://www.3par.com/litmedia.html
136 HP Copyright 2011 Peter Mattei
HP 3PAR Virtual Lock

 Easily set just by defining retention and/or expiration time in a volume policy
 Remember: locked virtual volumes cannot be deleted, even by an HP 3PAR
  Storage System user with the highest level privileges
137 HP Copyright 2011 Peter Mattei
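As a sketch, the volume policy above maps to retention and expiration times set per virtual volume; setvv is the real command, while the -retain and -exp flags and the periods shown are assumptions based on typical InForm CLI syntax:

  # Prevent deletion or alteration of the volume for 30 days (tamper-proof retention)
  setvv -retain 30d finance_vol
  # Optionally let a snapshot expire (be removed automatically) after 60 days
  setvv -exp 60d finance_vol_snap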
HP 3PAR
System Reporter
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
HP 3PAR System Reporter

 Allows monitoring performance, creating charge-back reports and planning
  storage resources
 Enables metering of all physical and logical objects, including Virtual Domains
 Provides custom thresholds and e-mail notifications
 Run or schedule canned or customized reports at your convenience
 Export data to a CSV file
 Controls Adaptive Optimization
 Use a DB of choice
  SQLite, MySQL, MS SQL or Oracle
 DB access:
  Clients: Windows IE, Mozilla, Excel
  Directly via the published DB schema
139 HP Copyright 2011 Peter Mattei
HP 3PAR System Reporter
Example Histogram: VLUN Performance

(Screenshot: a VLUN performance histogram report; the data can also be
exported to a CSV file.)
140 HP Copyright 2011 Peter Mattei
HP 3PAR System Reporter and Adaptive Optimization

One tier without Adaptive Optimization versus two tiers with Adaptive
Optimization running (charts: used space in GiB versus accesses/GiB/min)

 The single-tier chart shows that most of the capacity has very low IO
  activity; adding Nearline disks would lower cost without compromising
  overall performance
 In the two-tier chart a Nearline tier has been added and Adaptive
  Optimization enabled; AO has moved the least used chunklets to the Nearline tier
141 HP Copyright 2011 Peter Mattei
HP 3PAR
VMware Integration
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
3PAR Management Plug-In for vCenter
Enhanced visibility into storage resources

 Improved visibility
  VM-to-Datastore-to-LUN mapping
 Storage properties
  View LUN properties, including thin versus fat
  See capacity utilized
 Integration with 3PAR Recovery Manager
  Seamless rapid online recovery

Also see the whitepapers, analyst reports and brochures on http://www.3par.com/litmedia.html
143 HP Copyright 2011 Peter Mattei
3PAR Recovery Manager for VMware
Array-based snapshots for rapid online recovery

 Solution composed of
  3PAR Recovery Manager for VMware
  3PAR Virtual Copy
  VMware vCenter
 Use cases
  Expedite provisioning of new virtual machines from VM copies
  Snapshot copies for testing and development
 Benefits
  Hundreds of VM snapshots: granular, rapid online recovery
  Reservation-less, non-duplicative, without agents
  vCenter integration: superior ease of use
144 HP Copyright 2011 Peter Mattei
VMware VAAI Primitives Overview
vStorage API for Array Integration

vSphere 4.1 (3PAR support introduced with InForm 2.3.1 MU2+):
 ATS         Atomic Test and Set. Stop locking the entire LUN and only lock blocks.
 XCOPY       Also known as Fast or Full Copy. Leverages the array's ability to
             mass-copy and move blocks within the array.
 WRITE SAME  Eliminates redundant and repetitive write commands.
 TP Stun     Reports the array TP state to ESX so a VM can gracefully pause
             if out of space.

vSphere 5.0 (3PAR support introduced with InForm 3.1.1):
 UNMAP                    Used for space reclamation rather than WRITE SAME.
                          Reclaims space after a VMDK is deleted within the VMFS environment.
 TP LUN Reporting         TP LUN identified via the TP-enabled (TPE) bit from the
                          READ CAPACITY (16) response, as described in section 5.16.2 of SBC-3 r27.
 Out-of-Space Condition   Uses CHECK CONDITION status with either NOT READY or
                          DATA PROTECT sense condition.
 Quota Exceeded Behavior  Done through THIN PROVISIONING SOFT THRESHOLD REACHED
                          (described in 4.6.3.6 of SBC-3 r22).
HP Copyright 2011 Peter Mattei
vStorage API for array integration (VAAI)
Hardware Assisted Full Copy

 Optimized data movement within the SAN
  Storage VMotion
  Deploy from template
  Clone
 Significantly lower CPU and network overhead
 Quicker migration
146 HP Copyright 2011 Peter Mattei
HP 3PAR VMware VAAI support Example
VMware Storage VMotion with VAAI enabled and disabled

(Charts: backend disk IO and frontend IO during a Storage VMotion with
DataMover.HardwareAcceleratedMove=1 versus DataMover.HardwareAcceleratedMove=0)
147 HP Copyright 2011 Peter Mattei
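The two runs above differ only in one ESXi advanced setting. One way to toggle it on an ESXi 5.x host is sketched below with esxcli; the setting name comes from the slide, while the esxcli invocation is an assumption worth verifying for your ESXi release:

  # Disable hardware-assisted move (VAAI XCOPY) for a comparison run
  esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
  # Re-enable hardware offload afterwards
  esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1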
vStorage API for array integration (VAAI)
Hardware Assisted Locking

Increases I/O performance and scalability by offloading the block locking
mechanism, for example when:
 Moving a VM with VMotion
 Creating a new VM or deploying a VM from a template
 Powering a VM on or off
 Creating a template
 Creating or deleting a file, including snapshots

Without VAAI, a SCSI reservation locks the entire LUN; with VAAI, locking
happens at the block level.
148 HP Copyright 2011 Peter Mattei
vStorage API for array integration (VAAI)
Hardware Assisted Block Zero

 Offloads large, block-level write operations of zeros to the storage hardware
 Reduces the ESX server workload

Without VAAI, the host itself writes every block of zeros; with VAAI, a single
command lets the array zero the blocks.
149 HP Copyright 2011 Peter Mattei
Autonomic VMware Space Reclamation
With vSphere 5 and InForm OS 3.1.1

HP 3PAR with Thin Persistence:
 Autonomic
  Thin Persistence allows reclaiming VMware space autonomically with
  T10 UNMAP support in vSphere 5.0 and InForm OS 3.1.1
 Granular
  Reclamation granularity is as low as 16kB, compared to 768kB with EMC
  VMAX or 42MB with HDS VSP
 Rapid, inline
  ASIC zero detect processes T10 UNMAP (16kB granularity) in hardware
 3PAR scalable Thin Provisioning
  Freed blocks of 16kB of contiguous space are returned to the source volume
  Freed blocks of 128MB of contiguous space are returned to the CPG for use
  by other volumes

(Diagram: after VMDKs are deleted from a datastore on 100GB thin-provisioned
volumes, the 20GB VMDKs finally consume only ~20GB rather than 100GB.)
150 HP Copyright 2011 Peter Mattei
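In the first vSphere 5.0 releases the UNMAP-based reclamation was triggered manually per datastore. A hedged sketch, assuming ESXi 5.0 U1 where vmkfstools gained the -y option (percentage of free space to reclaim) and a hypothetical datastore name:

  # Run from within the datastore's directory on the ESXi host
  cd /vmfs/volumes/datastore1
  vmkfstools -y 60   # reclaim up to 60% of the free space via T10 UNMAP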
VMware vStorage VAAI
Are there any caveats that I should be aware of?

The VMFS data mover does not leverage hardware offloads and instead uses
software data movement if:
 The source and destination VMFS volumes have different block sizes
 The source file type is RDM and the destination file type is non-RDM (regular file)
 The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
 The source or destination VMDK is any sort of sparse or hosted format
 The source virtual machine has a snapshot
 The logical address and/or transfer length in the requested operation are not
  aligned to the minimum alignment required by the storage device
  (all datastores created with the vSphere Client are aligned automatically)
 The VMFS has multiple LUNs/extents and they are all on different arrays
 Hardware cloning between arrays (even if within the same VMFS volume) does not work

vStorage APIs for Array Integration FAQ:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976

Also see the analyst report and brochure on http://www.3par.com/litmedia.html
151 HP Copyright 2011 Peter Mattei
HP 3PAR
Recovery Managers
Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here
3PAR Recovery Manager for VMware
Array-based snapshots for rapid online recovery

 Solution composed of
  3PAR Recovery Manager for VMware
  3PAR Virtual Copy
  VMware vCenter
 Use cases
  Expedite provisioning of new virtual machines from VM copies
  Snapshot copies for testing and development
 Benefits
  Hundreds of VM snapshots: granular, rapid online recovery
  Reservation-less, non-duplicative, without agents
  vCenter integration: superior ease of use
153 HP Copyright 2011 Peter Mattei
Recovery Manager for Microsoft

 Exchange & SQL aware
  Automatic discovery of Exchange and SQL Servers and their associated databases
  VSS integration for application-consistent snapshots
  Support for Microsoft Exchange Server 2003, 2007 and 2010
  Support for Microsoft SQL Server 2005 and Microsoft SQL Server 2008
  Database verification using Microsoft tools
 Built upon 3PAR Thin Copy technology
  Fast point-in-time snapshot backups of Exchange & SQL databases
  100s of copy-on-write snapshots with just-in-time, granular snapshot space allocation
  Fast recovery from snapshot, regardless of size
  3PAR Remote Copy integration
  Export backed-up databases to other hosts

Also see the brochure on http://www.3par.com/litmedia.html
154 HP Copyright 2011 Peter Mattei
3PAR Recovery Manager for Oracle

 Allows PIT copies of Oracle databases
  Non-disruptive, eliminating production downtime
  Uses 3PAR Virtual Copy technology
 Allows rapid recovery of Oracle databases
  Increases efficiency of recoveries
 Allows cloning and exporting of new databases
 Integrated high availability with disaster recovery sites
  Integrated 3PAR replication / Remote Copy for array-to-array DR
 Currently supported with
  Oracle 10g and 11g
  RHEL 4 and 5
  OEL 5
  Solaris 10 SPARC
155 HP Copyright 2011 Peter Mattei
Further Information
Questions?

3PAR Product Page
http://h18006.www1.hp.com/storage/disk_storage/3par/index.html

3PAR literature
http://h20195.www2.hp.com/v2/erl.aspx?keywords=3PAR&numberitems=25&query=yes

HP Storage Videos
http://www.youtube.com/hewlettpackardvideos#p/c/4884891E6822C9C6
156 HP Copyright 2011 Peter Mattei
SOME HP 3PAR CUSTOMERS
Internal Service Bureaus: Finance, Enterprise, Government
External Service Providers: Cloud computing / Hosting / Web 2.0

157 HP Copyright 2011 Peter Mattei
HP 3PAR - the right choice!

Thank you
Serving Information. Simply.

Copyright 2011 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. Confidentiality label goes here