
Symmetrix V-Max Series with Enginuity
Technical Presales

April 2009
Welcome to the Symmetrix V-Max Series with Enginuity Technical Presales course.
The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying
this course.
EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their
entirety.
Copyright 2009 EMC Corporation. All rights reserved.
These materials may not be copied without EMC's written consent.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS
OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, EMC ControlCenter, AlphaStor, ApplicationXtender, Captiva, Catalog Solution, Celerra, CentraStar, CLARalert,
CLARiiON, ClientPak, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000,
Documentum, EmailXaminer, EmailXtender, EmailXtract, eRoom, FLARE, HighRoad, InputAccel, Navisphere, OpenScale,
PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix,
TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, Xtender, Xtender Solutions are registered
trademarks; and EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus,
Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip,
Celerra Replicator, Centera, CLARevent, Codebook Correlation Technology, EMC Common Information Model, CopyCross,
CopyPoint, DatabaseXtender, Direct Matrix, EDM, E-Lab, Enginuity, FarPoint, Global File Virtualization, Graphic Visualization,
InfoMover, Infoscape, Invista, Max Retriever, MediaStor, MirrorView, NetWin, NetWorker, nLayers, OnAlert, Powerlink,
PowerSnap, RecoverPoint, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure,
SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraPoint, UltraScale, Viewlets, VisualSRM
are trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.
Course Overview
Intended Audience
This course is intended for audiences with an understanding of Symmetrix systems and Solutions Enabler, who are responsible for technically positioning the Symmetrix V-Max Series with Enginuity. This course describes what is new relative to this launch.

EMC believes the information in this publication is accurate as of its publication date and is based on pre-GA product information. The information is subject to change without notice. For the most current information, see the EMC Support Matrix and the product release notes on Powerlink.

Course Description
This technical presales training provides students with an introduction to the Symmetrix V-Max Series with Enginuity. The course describes the hardware architecture, key Enginuity enhancements, new features, and customer benefits. It includes the technical positioning, configuration requirements, and planning, design, and integration considerations that are important to a Symmetrix V-Max array deployment.
Course Objectives
Upon completion of this course, you should be able to:
- Describe the new key features and enhancements provided with the Symmetrix Virtual Matrix (V-Max) system
- Explain how these features benefit the customer
- Position the Symmetrix V-Max system
- Explain the Symmetrix V-Max system architecture and operation
- Describe how the key features work, and their configuration requirements
- Discuss planning, design, and integration considerations
Module 1: Symmetrix V-Max System: Launch
Overview
Upon completion of this module, you should be able to:
- Describe, at a high level, the Symmetrix V-Max system
- Identify customer datacenter challenges and explain how the Symmetrix V-Max array addresses these challenges
- State the benefits provided by the Symmetrix V-Max Series with Enginuity
- Position the Symmetrix V-Max system
- List the Symmetrix V-Max array models and their capabilities
The first module is intended to provide an overview of the product launch.
Introducing The Symmetrix V-Max System
New Virtual Matrix architecture:
- Highly scalable
- Highly available

Higher performance and usable capacity:
- Substantially more IOPS per $ when compared to similar DMX-4 configurations:
  - Symmetrix V-Max array with 8 Engines vs. DMX-4 4500: more than twice the IOPS per $
  - Symmetrix V-Max array with 1 Engine vs. DMX-4 1500: 1.3 times the IOPS per $
- More usable capacity and more efficient cache utilization

More value at better TCO:
- Leverage the latest drive technologies
- Save on energy, footprint, weight, and acquisition cost

Simpler management of virtual and physical environments:
- Fastest and easiest configuration
- Reduce labor and risk of error

Cost- and performance-optimized BC capabilities:
- Zero-RPO, 2-site long-distance replication solution
- Accelerate replication tasks and recovery times
EMC is introducing a revolutionary new Virtual Matrix architecture within the Symmetrix system family which will
redefine high-end storage capabilities. This new Symmetrix V-Max system architecture allows for unprecedented
levels of scalability. Robust high availability is enabled by clustering, with fully redundant V-Max Engines and
interconnects.
Let's look at some specific cost comparisons between the Symmetrix V-Max system and the Symmetrix DMX-4.
Comparing the base configurations and the maximum configurations of both systems, the Symmetrix V-Max delivers
from 1.3x to more than 2x the IOPS per dollar. These comparisons assume similar configurations, drive types, and RAID
protection levels. The new Symmetrix platform is powered by a new version of the Enginuity operating environment. It
is optimized for increased availability, performance and capacity, and utilization of all storage tiers with all RAID levels.
Symmetrix V-Max systems provide more usable capacity and more efficient cache utilization.
Total Cost of Ownership is improved via full leverage of the latest drive technologies and savings on energy, footprint,
weight, and acquisition cost.
Enhanced device configuration and replication operations result in simpler, faster and more efficient management of
large virtual and physical environments. This allows organizations to save on administrative costs, reduce the risk of
operational errors and respond rapidly to changing business requirements.
Enginuity 5874 also introduces cost and performance optimized business continuity solutions. This includes the zero
RPO 2-site long distance solution.
Datacenter Environment To Illustrate Customer Challenges
[Diagram: the primary site in Boston with mission-critical production hosts (OLTP/SAP) on dual Fibre Channel fabrics, business-critical hosts (Exchange, file/print) on an IP SAN, and test and development hosts. Storage comprises Symmetrix DMX-3 and DMX-4, CLARiiON CX3 UltraScale, and Hitachi arrays, managed over a management LAN by Navisphere Manager, Solutions Enabler/SMC, and third-party array management tools. A DR site in Philadelphia hosts failover servers and mirrors the storage; replication is via SRDF/A for the Symmetrix arrays, MirrorView/A for the CLARiiONs, and host-based replication for the Hitachi arrays.]
Let's examine the challenges of an IT environment and how the Symmetrix V-Max array can provide cost-effective,
high-performing solutions.
Represented here is an example of a customer production environment, which includes hundreds of hosts. In this
scenario, the high-end servers in Boston run mission-critical applications such as online transaction processing.
These applications require the highest possible levels of availability and require disaster recovery solutions. Also
present are a larger number of second tier, or business-critical hosts, running email and file serving applications. This
second tier has lower service level requirements. Lastly, a small number of test and development servers exist that
require the lowest levels of service.
To meet the differing service level requirements of multiple application classes, a number of EMC arrays and third-
party storage arrays have been deployed at the site.
The mission-critical hosts use mirrored Fibre Channel fabrics to access storage, while the business-critical hosts use
a Gigabit Ethernet network dedicated to iSCSI. A few storage arrays such as the CLARiiON CX3s provide Front End
access via both Fibre Channel and iSCSI. These arrays are being managed by multiple applications.
A secondary site with a similar configuration exists in Philadelphia. Hosts are available for failover in the event of
either a disaster, or a planned shutdown in Boston. Several array-centric disaster recovery solutions have been
implemented to get copies of the production data over the WAN to the DR site in Philadelphia. The Symmetrix DMX-3
and DMX-4 arrays use SRDF/Asynchronous with Fibre Channel SAN Extension over IP, the CLARiiONs use
MirrorView/Asynchronous over iSCSI, and for Hitachi arrays, host-based replication using transfers of database logs
is being used.
Current Datacenter Customer Challenges
[Diagram: the same two-site environment shown on the previous slide, annotated with the challenges listed below.]
Challenges from server virtualization:
- More initiators per server: more complex storage provisioning
- More active paths per server: load balancing, failover

Storage tiering:
- Tiers are distributed across storage frames
- Data mobility across frames is a challenge

Management complexity:
- Multiple management applications for storage arrays, hosts, and DR
- Nearing capacity limit on storage arrays

Disaster recovery challenges:
- Separate DR solutions for Symmetrix, CLARiiON, HDS
- Complexity of managing multiple solutions
- Long-distance replication is asynchronous, so the RPO is non-zero
Now let's take a look at this customer's challenges.
Many of the existing storage arrays, with the exception of the DMX-4, are nearing capacity limits. Increasing capacity
by purchasing additional disks is no longer practical.
In an effort to streamline operations, there is a continuous ongoing push to consolidate servers into a virtualized host
environment. The end goal is a mix of VMware, Hyper-V and other virtualization solutions. This server consolidation
introduces several challenges. Although up to a 75% reduction in the number of physical hosts has been achieved,
Fibre Channel link traffic has increased significantly. The number of HBAs per virtual server has also increased. With this
increase in the number of HBAs, provisioning tasks have become more complex. As a result, storage provisioning
requests, while fewer in number, are taking much longer to complete, resulting in project delays. Also, with the larger
number of HBAs and paths to storage, load balancing and failover are becoming important concerns. Performance
and availability requirements are generally much higher for consolidated servers, since the loss of a server usually
means that multiple applications on multiple virtual machines will go down simultaneously.
For cost efficiencies, storage tiering has been implemented to match application service level requirements with
storage performance levels. The tiering is currently done on a frame-by-frame basis. While this keeps the
implementation simple, it creates problems with data mobility when application requirements change.
The biggest challenge with the current DR approach is the sheer complexity of the multiple solutions. The current
approach of an array-by-array DR strategy has evolved in an ad-hoc manner and offers no zero-RPO opportunity.
Symptomatic of a relatively complex solution that has evolved over time, there are multiple management software
applications in the environment each catering to a different storage array, host and DR solution.
Cost Reduction Via Consolidation
[Diagram: the Boston site after consolidation: the DMX-3, CLARiiON CX3 UltraScale, and Hitachi arrays are collapsed onto a single Symmetrix V-Max array, alongside the retained DMX-4, still serving the mission-critical, business-critical, and test/development hosts over the Fibre Channel fabrics and IP SAN.]
Eliminating capacity limitations by consolidating multiple storage arrays:
- Lower TCO: less power, less cooling ("green")
- Massive scalability for future capacity expansion
- Simpler management: fewer storage frames
Let's now focus on how the Symmetrix V-Max array can address some of the outlined customer challenges.
To address the increasing demand for storage, and simplify the infrastructure, several of the existing disparate arrays
can be consolidated onto a Symmetrix V-Max array. In this case, the DMX-4, which was recently purchased, will be
left in place. The Symmetrix V-Max array offers the needed capacity, performance and massive scalability for future
expansion. This solution offers a lower TCO for several reasons. First, power and cooling costs are reduced because
of the consolidation of multiple arrays. Second, the Symmetrix V-Max array has a smaller footprint as compared to a
DMX-4 with a similar capacity and RAID configuration. Third, administrative costs are lower because of the reduced
infrastructure complexity. Lastly, the throughput per dollar available on the Symmetrix V-Max array is more than 2x
that of the DMX-4.
Reducing Virtualization Complexity
[Diagram: the consolidated Boston site, with the physical hosts replaced by virtual servers (ESX/Hyper-V) running multiple virtual machines, attached to the Symmetrix V-Max array and the DMX-4.]
Simpler provisioning for virtualized server environments:
- New Autoprovisioning Groups feature
- Easier LUN mapping and masking to multiple initiators per host
- Lower operational cost

PowerPath/VE for ESX:
- Effective, dynamic load balancing over more active paths
- Better utilization of available paths
- Automatic path discovery and path failover
Let's address the datacenter virtualization challenges.
The environment described includes many hard-to-manage, expensive, and under-utilized large servers.
Consolidating, or virtualizing, the server environment can simplify the infrastructure and, more importantly, significantly
reduce costs and improve hardware utilization. Although virtualization offers clear advantages, it introduces some
new challenges, specifically around storage provisioning.
Enginuity 5874 offers a new Autoprovisioning Groups feature. This feature reduces the time and complexity of
assigning storage to virtual elements in a virtualized environment. The simplified LUN mapping and masking tasks
are done in less time, thereby reducing the chance of error. This becomes more important in a virtualized
environment as the need for provisioning increases.
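As a rough sketch of how this works in practice, Autoprovisioning Groups are driven by the symaccess command introduced with Solutions Enabler 7.0. The example below is illustrative only: the Symmetrix ID, group names, WWNs, and device range are hypothetical, and the exact option syntax should be verified against the Solutions Enabler documentation.

    # Group the host's HBA initiators (one group per host or cluster)
    symaccess -sid 1234 create -name esx01_ig -type initiator -wwn 10000000c912f001
    symaccess -sid 1234 -name esx01_ig -type initiator add -wwn 10000000c912f002

    # Group the Front End director ports the host will use
    symaccess -sid 1234 create -name esx01_pg -type port -dirport 7E:0,10E:0

    # Group the Symmetrix devices to be provisioned
    symaccess -sid 1234 create -name esx01_sg -type storage devs 0100:010F

    # One masking view ties the groups together, performing the mapping and
    # masking for every initiator/port/device combination in a single step
    symaccess -sid 1234 create view -name esx01_mv -sg esx01_sg -ig esx01_ig -pg esx01_pg

Adding another HBA to the initiator group, or another device to the storage group, then propagates through the view automatically, which is what reduces the per-initiator masking effort described above.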
PowerPath/VE can provide path management and load balancing for ESX. This solution offers effective load
balancing across paths, better utilization of available paths and automatic path discovery and failover in virtualized
environments.
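Because the ESX service console does not run the full PowerPath CLI, PowerPath/VE is typically administered from a remote management host using the rpowermt utility. A brief, hedged illustration (the host name is hypothetical):

    # Display PowerPath/VE devices and path state on an ESX host
    rpowermt display dev=all host=esx01.example.com

    # Apply the Symmetrix-optimized load-balancing policy to all devices
    rpowermt set policy=so dev=all host=esx01.example.com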
More Flexible/Powerful Storage Tiering Opportunity
[Diagram: the Symmetrix V-Max array housing Tier 0 (Enterprise Flash), Tier 1, and Tier 2 storage within a single frame, serving the virtualized Boston environment alongside the DMX-4.]
Storage tiering within a single frame:
- More tiering options: Enterprise Flash, FC 15K, FC 10K, SATA II
- New: VLUN mobility across RAID levels and tiers
- Enhanced Virtual Provisioning enables tier differentiation via thin pools
This customer has the need to manage and maintain multiple tiers of storage. The applications' service level
requirements are balanced with the performance level offered by the specific storage tier.
With the Symmetrix V-Max system, all storage tiers, including Enterprise Flash, Fibre Channel 15K, Fibre Channel
10K, and SATA II, can all be housed and managed within a single frame. This offers a more flexible and consolidated
approach to managing multiple tiers of storage.
The Symmetrix V-Max system with Enginuity provides the ability to move LUNs across tiers of storage, and across
RAID levels. This makes it much easier for our customers to manage changing application requirements that may
require LUN mobility. The customer can implement Virtual Provisioning with all tiers and all RAID levels, and perform
local and remote replication for thin volumes and pools using any RAID level.
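To illustrate how thin pools can differentiate tiers, the sketch below uses symconfigure to build a thin pool on one tier and bind thin devices to it. The pool name, device ranges, and syntax details are illustrative assumptions; verify them against the Solutions Enabler 7.x documentation.

    # Create a thin pool for the Fibre Channel 15K tier
    symconfigure -sid 1234 -cmd "create pool FC15K_Pool type=thin;" commit

    # Add DATA devices (the pool's physical capacity) and enable them
    symconfigure -sid 1234 -cmd "add dev 0200:020F to pool FC15K_Pool type=thin, member_state=ENABLE;" commit

    # Bind host-visible thin devices (TDEVs) to the pool
    symconfigure -sid 1234 -cmd "bind tdev 0300:030F to pool FC15K_Pool;" commit

A second pool built the same way on SATA II drives would provide a lower tier within the same frame, with each application's thin devices bound to the pool that matches its service level.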
Zero RPO & Continuous Application Uptime
[Diagram: three-site SRDF/EDP topology: R1 devices on the Boston Symmetrix V-Max array replicate via SRDF/S to diskless R21 devices on a secondary Symmetrix V-Max system in Manchester, which in turn replicates via SRDF/A to R2 devices at the tertiary site in Philadelphia.]

SRDF/EDP: zero-RPO solution with long-distance replication
- Secondary site can be diskless: lower cost, lower power, green solution
- Highest-availability long-distance DR solution today
As previously stated, this customer has implemented several disaster recovery solutions; one for each of the storage
arrays within the environment. The current DR solution involves a remote site in Philadelphia. The solutions are
complex, hard to manage and have long RPO times. With the consolidation of the disparate arrays onto the
Symmetrix V-Max system, a single, zero-RPO, long distance DR solution can be realized. Not only does this improve
efficiencies, but provides a more robust solution at a lower cost.
The enabling feature offered in Enginuity 5874 is SRDF/EDP, or SRDF/Extended Distance Protection. This is a
cascaded solution including a diskless Symmetrix V-Max system at the secondary site, in this case, Manchester. This
significantly reduces the cost of a traditional cascaded solution in that the secondary site does not require any disk
capacity for data.
The first leg of this solution, from the primary to secondary site, is implemented with SRDF/S providing synchronous
replication between Boston and Manchester. From the Boston R1 site, we replicate to the cache based Diskless R21
data devices in the Manchester Symmetrix V-Max system.
From the secondary site, we can replicate via SRDF/A to the tertiary system in Philadelphia. One of the advantages
of this solution is that a Symmetrix DMX-4 with Enginuity 5773, or a Symmetrix V-Max system running Enginuity 5874
can replicate to the diskless cache only Symmetrix V-Max system. Likewise, from the Symmetrix V-Max system in the
secondary site, we can replicate to a DMX-4 or Symmetrix V-Max system at the tertiary site. This gives the customer
the opportunity to preserve their investment in the DMX while leveraging the benefits of SRDF/EDP.
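At the command level, a cascaded SRDF/EDP configuration is built one hop at a time with symrdf. The following is a heavily simplified sketch: the Symmetrix IDs, RDF group numbers, and device-pair files are hypothetical, the diskless R21 devices must already exist on the Manchester array, and the exact options should be checked against the SRDF product guide.

    # Hop 1 (Boston R1 -> Manchester R21): synchronous
    symrdf createpair -sid 0001 -rdfg 10 -f bos_man_pairs.txt -type RDF1 -establish

    # Hop 2 (Manchester R21 -> Philadelphia R2): asynchronous
    symrdf createpair -sid 0002 -rdfg 20 -f man_phl_pairs.txt -type RDF1 -establish
    symrdf -sid 0002 -rdfg 20 -f man_phl_pairs.txt set mode async

Each pair file simply lists source and target device numbers, one pair per line.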
Consolidated Management Infrastructure
[Diagram: the consolidated Boston site managed over a single management LAN running EMC ControlCenter and SMC, replacing the previous mix of Navisphere Manager, Solutions Enabler/SMC, and third-party array management tools.]

Simplified management infrastructure:
- EMC ControlCenter and SMC: moving towards a single-pane-of-glass solution to manage all datacenter components
- New in ControlCenter V6.1: better integration with VMware ESX
- New in SMC 7.0: new wizards and templates for ease of management
Lastly, let's discuss how this environment is managed.
As previously stated, this customer required the use of multiple, disparate management applications to manage
different arrays and servers. This required a costly, broad range of administrative talent.
With EMC ControlCenter and SMC, this customer can perform many management operations from a single pane
of glass. Not only can the customer manage their EMC arrays, but they also have a view into their VMware server
environment. ControlCenter V6.1 provides better integration with VMware.
Lastly, with SMC V7.0, there are new wizards and templates which allow easier storage management. Overall, the
Symmetrix V-Max array solution provides less complexity, lower cost and simplified management of the virtualized
datacenter.
Symmetrix V-Max System: Value Proposition
Enablers and benefits, grouped by four themes:

Scalable V-Max architecture
- Massive scalability and consolidation opportunity
- Improved tiering within a single frame
- Support for 512 hypers per drive (up 2X)
- Larger volumes, up to 256 GB (up 4X)

Lower TCO: reduces cost yet delivers high service levels
- Storage tiering flexibility
- Lower-cost cascaded disaster recovery solutions
- Multi-core processor technology offering lower $/IOP
- Lower power, cooling, and footprint per GB of storage: reduced operational cost

Simplified management of environment
- Reduced management complexity: moving towards single-pane-of-glass management
- Simplified storage management across storage environments, including virtualized environments
- Easier provisioning with Autoprovisioning Groups, enhanced Virtual Provisioning, and support for concurrent provisioning

High application availability: 24 x 7 x Forever
- PowerPath multipathing in virtualized environments
- Non-disruptive data movement across storage tiers and RAID levels
- A zero-RPO DR solution with no distance limitations
- Better software integration and improved connectivity in mainframe environments
This table highlights many of the benefits discussed in the previous customer environment scenario. All of the
benefits center around four common themes.
First, the new Symmetrix V-Max Series architecture offers massive scalability, consolidation opportunity and the ability
to easily tier storage within a single frame.
Second, this new platform can reduce the Total Cost of Ownership with lower power and cooling requirements and a
smaller footprint per GB for the same or higher capacity at a given protection level. The Multi-core processor
technology offers a lower cost per IOP.
Third, Enginuity 5874 offers several administrative enhancements which simplify storage provisioning, specifically in
virtualized environments. This platform also reduces complexity by moving towards a single pane of glass
management.
Lastly, the Symmetrix V-Max array offers 24 X 7 X Forever availability. The enabling technologies include PowerPath,
non-disruptive LUN movement and a zero-RPO DR solution with no distance limitations.
Architecture: Comparison With DMX
                           Symmetrix V-Max                  Symmetrix DMX
Architecture               Virtual Matrix                   Direct Matrix
Directors                  Up to 8 Engines,                 Separate purpose-built directors,
                           16 CPU cores each                up to 16 I/O directors
Back End ports             Up to 128                        Up to 64
Front End ports            Up to 128                        Up to 64
Usable global memory       Up to 472 GB                     Up to 256 GB
Usable capacity            Over 2 PB                        Up to 585 TB
The Symmetrix V-Max system uses Engine-based packaging containing Front End, Back End and memory
components. The Symmetrix V-Max system implements a virtual matrix interconnect, enabled by a new version of
Enginuity.
As the table shows, the new architecture enables significant increases in scalability. Scalability has improved in all
aspects: Front End connectivity, Global Memory, Back End connectivity and usable capacity.
Symmetrix V-Max Array Model Comparison
Two Symmetrix V-Max Series with Enginuity options: the Symmetrix V-Max array and the Symmetrix V-Max SE array.

                                Symmetrix V-Max array    Symmetrix V-Max SE array
Symmetrix V-Max Engines         1 - 8                    1
Symmetrix V-Max Directors       2 - 16                   2
Disks (min/max)                 96 - 2,400               48 - 360
Physical memory (max)           128 - 1,024 GB           128 GB
FICON ports                     8 - 64                   8
Fibre Channel ports             16 - 128                 16
GigE/iSCSI ports                8 - 64                   8
With this launch, EMC announces two variations of the Symmetrix V-Max Series with Enginuity options: Symmetrix V-
Max array, and the Symmetrix V-Max SE array.
The Symmetrix V-Max array may be configured with one to eight Engines. It contains 2 to 16 Directors, 96 to 2,400 disk
drives, and a maximum of 128 Fibre Channel Front End ports, 64 FICON ports, or 64 GigE/iSCSI ports.
The Symmetrix V-Max SE array always consists of a single Engine with 2 Directors. Depending on expansion bay
configuration, the system contains 48-360 disk drives, 16 FC Front End ports, 8 FICON ports, and 8 GigE/iSCSI ports.
Mainframe Support: Enhancements
Hardware:
- 64 FICON ports (vs. 48 FICON ports in DMX)

Software:
- HyperVolume support extended to 512 volumes per drive
- Extended Address Volume (EAV): large device support
- Mainframe Enablers 7.0: 6 titles combined, shipped, and installed in a single package

Replication:
- Independent consistency protection with concurrent SRDF/S
- Support for more SRDF groups, needed for SRDF/Star and server virtualization: 250 groups
- SRDF/Extended Distance Protection (SRDF/EDP)
- RDF groups (RDFGRPS) per port increased to 64

Ease-of-Use:
- SMC: Symmetrix Management Console 7.0
- EzSM: EMC z/OS Storage Manager 3.0

Advanced Performance Analysis:
- SPA: Symmetrix Performance Analyzer 1.1

IBM Compatibility:
- Converting PPRC to SRDF volumes without requiring synchronization
- MF Extent track expansion per volume
- Co-existence of SNAP/FlashCopy on the same volumes
- Persistent Pacing support (Extended Distance FICON): eliminates the need for channel extenders
In addition to improved Front End scalability for FICON, Symmetrix V-Max Series with Enginuity introduces several
new features that allow for better integration with mainframe host environments. These include co-existence of
TimeFinder/Clone and FlashCopy on the same volumes, and support for Extended Distance FICON. All Enginuity
5874 enhancements related to SRDF apply to mainframe environments as well as Open Systems environments. New
versions of EzSM, SMC and SPA provide compatibility with the latest Enginuity features, and enable enhanced
management and monitoring capabilities in mainframe environments.
In addition, integration with mainframe System Management Facilities (SMF) has been improved. SMF is an optional
control program feature of z/OS providing the means to gather and record information that can be used to evaluate
system usage. The new SMF 74.8 enhancement provides additional information such as rank statistics, extent pool
statistics, and link statistics. This data, which is being provided as a requested customer enhancement, can help with
modeling and capacity planning.
Perhaps the most significant benefit that the new platform has to offer is the opportunity for large-scale consolidation.
A big trend in mainframe environments today is consolidation driven by mergers/changes as well as by unrelenting
pressure to lower TCO. For example, your customer may be considering consolidation of hundreds of Linux servers
onto a single mainframe. This represents a logical point to consolidate the multiple legacy storage frames as well, into
a much smaller number of V-Max systems that can fit within your customer's existing data center, while also providing
the flexibility to grow as needed.
Module 2: Symmetrix V-Max Array Architecture
Upon completion of this module, you should be able to:
- Discuss the Symmetrix V-Max system hardware architecture and explain differences within the Symmetrix family
- Explain the theory of operations
- Describe the configuration and scalability options
- Discuss the redundancy and high availability features of the Symmetrix V-Max array
- Explain key planning, design, and integration considerations for the Symmetrix V-Max array
This module focuses on the hardware architecture of the Symmetrix V-Max array.
After completing this module, you'll be in a position to discuss the architecture with your customer, and explain any
differences within the Symmetrix family.
You will also learn to explain the core components and mechanisms within the product, describe supported
configurations, and possible ways to scale up the system after initial deployment.
Redundancy and high availability features of Symmetrix V-Max array are also covered in this module.
The primary intent of this module is to prepare you to perform basic configuration planning and design for Symmetrix
V-Max array from a hardware perspective.
DMX: Centralized Global Memory Architecture
[Diagram: DMX Direct Matrix architecture: Front End (host) Directors and Back End (disk) Directors connected through a centralized pool of Global Memory boards.]
One of the changes introduced in Symmetrix system design via the new architecture is a shift from a centralized to a
distributed Global Memory model. As this diagram illustrates, DMX systems feature Global Memory boards that
contain a DRAM memory pool. This pool is accessible by Front End Directors and by Back-End Directors via the
Direct Matrix. In the Direct Matrix architecture, the interconnect between Global Memory and the Front End or Back
End Directors relies on copper etch on a backplane. As we'll see next, the new Virtual Matrix architecture differs from
DMX architecture in this respect.
Virtual Matrix:
Distributed Global Memory Architecture
[Diagram: Virtual Matrix architecture: Directors, each contributing local Global Memory, connected by dual Virtual Matrix Interconnects.]
Symmetrix V-Max array combines Front End, Back End and Memory into a single Director, reducing cost and
increasing performance.
As with all Symmetrix systems, the Global Memory is truly global in nature. In the Virtual Matrix architecture, Global
Memory is distributed across all Directors. The Virtual Matrix allows access to all Global Memory from all Directors.
Each Director contributes a portion of the total Global Memory space. Memory on each Director stores the Global
Memory data structures including: Common area, Track Tables and Cache entries.
A distributed Global Memory means that from the viewpoint of a Director, some Global Memory is local and some
resides with other Directors. The Virtual Matrix Architecture allows direct access to local parts of Global Memory.
Access to Global Memory on other Directors is by way of highly available, low-latency, high-speed RapidIO
interconnect. This Interconnect enables a Director to communicate with every other Director. The "Virtual Matrix
Interconnect" is a core logical construct of the architecture, requiring some form of fabric-based, redundant mesh
design - in contrast to copper etch on a single backplane. This form of interconnect ensures that the system can scale
to large numbers of Directors. It also allows for Directors to be dispersed geographically as the system grows (in the
future).
Symmetrix V-Max System: Architecture
[Diagram: a Symmetrix V-Max system: up to eight Engines, each containing two Directors, attached to dual redundant Virtual Matrix Interconnects.]
Symmetrix V-Max Series with Enginuity is the first implementation of the Virtual Matrix architecture.
The Symmetrix V-Max system is built around a scalable interconnect based on redundant RapidIO fabrics. The
current implementation uses two fabrics.
Engines represent the basic building blocks of a Symmetrix V-Max array. Each Engine contains a pair of Symmetrix
V-Max Directors. Each Director connects to both RapidIO fabrics via Virtual Matrix Interface ports.
This ensures that there is no single point of failure in the virtual interconnect.
A Symmetrix V-Max system may scale from 1 to 8 Engines. This provides a high degree of flexibility and scalability.
Shown is a logical view of a system that grows to the current maximum of 8 Engines and 16 Directors.
The design eliminates the need for separate interconnects for data, control, messaging, environmental and system
test. The dual highly-available interconnect suffices for all communications between the Directors, thus reducing
complexity.
RapidIO is an industry-standard, packet-switched fabric architecture. It has been adopted in a variety of applications
including computer storage, automotive, digital signal processing, and telecommunications. It is important to note that
the use of industry-standard RapidIO fabrics represents just one instantiation of part of the logical Virtual Matrix
Architecture, namely the communication mechanism for the Directors. By itself, the Virtual Matrix Architecture can
support any number of redundant fabrics, and any number of switching elements per fabric. The use of two RapidIO
fabrics is a design choice that applies to the current Symmetrix V-Max only; these are not restrictions imposed by the
architecture.
Engine: Logical View
- 8-16 host ports per Engine (up to 128 ports per system)
- 16-128 Back End 4 Gb/s channels for over 2 PB of usable Enterprise Flash, Fibre Channel, and SATA capacity
- Multi-core 2.33 GHz processors to provide 2x+ more IOPS than DMX-4
- 32-128 GB of memory per Engine, 1,024 GB maximum per system
- Low-latency Virtual Matrix interface shares resources across Directors, providing massive scalability
- Up to 8 Engines per Symmetrix V-Max system

[Diagram: one Symmetrix V-Max Engine: two Directors, each with a CPU complex of eight cores, Global Memory, Front End and Back End host/disk ports, and a Virtual Matrix Interface.]
Let's take a closer look inside a Symmetrix V-Max Engine, and make some comparisons with the DMX-4.
A Symmetrix Engine combines the Front End, Back End, and memory directors of a Symmetrix DMX system into a
single component. A single Engine combines host ports, memory, and disk channels. It is configured to provide highly
available access to drives, as each Director is the primary initiator for its connected disks, and the alternate for the
other Director's disks.
In addition, the new Symmetrix provides twice as many host ports with up to 128 per system and is capable of
supporting thousands of physical and virtual server connections combining Fibre Channel, iSCSI, and FICON support.
The system also supports twice as many Back End connections with up to 128 Point to Point Fibre Channel ports.
2,400 disks are supported at general availability, and Enterprise Flash Drives, Fibre Channel and SATA disks can be
configured with a total usable protected capacity of over 2 PB in a single system.
The new Directors introduce support for Multi-core processors that provide a significant increase in processing power
within a smaller footprint that can deliver up to 2X more system performance.
Because 5 GB of local memory is reserved by each Director for control store and buffers, the total amount of Global
Cache is up to 944 GB of Global Memory, or 472 GB of mirrored, protected memory. Global cache and CPU complexes
are redundant across each Director, allowing resources to be dynamically accessed and shared.
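As a quick sanity check, the usable Global Memory figure follows directly from the physical maximum and the per-Director reservation:

    Physical memory, maximum system:      1,024 GB
    Reserved: 16 Directors x 5 GB =          80 GB
    Usable Global Memory: 1,024 - 80 =      944 GB
    Mirrored (protected): 944 / 2 =         472 GB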
The new Virtual Matrix interface connects resources within and across Engines to share system resources. At general
availability, up to 8 Engines can be combined to provide massive scalability in a single system.
Single Director Logical I/O Flow (Read Hit to Local Cache)
[Diagram: a single Director, showing FE and BE I/O modules, CPU, Virtual Matrix Interface, and memory partitioned into Control Store (CS), Store and Forward buffer (S and F), and Global Memory (GM).]

1. Read request from host hits in a local Global Memory slot
2. CPU moves data from Global Memory to the Store and Forward buffer
3. I/O device moves data from the S and F buffer to the host

CS: Control Store; S and F: Store and Forward Buffer; GM: Global Memory
To support the new distributed Global Memory implementation, Symmetrix V-Max systems use a store-and-forward
(S&F) architecture. In this version of the microcode, all I/O is completed from the local S&F buffer, and then is moved
to the relevant Global Memory space. Note that since Global Memory is distributed, the relevant Global Memory space
may reside either on the Director that received the I/O, or on a different Director.
With this in mind, let us consider the various possible data flow scenarios when a host performs an I/O request to a
Symmetrix Logical Volume in this version of Enginuity. In our simplified representation of a Director in these diagrams,
we view the shared memory space as consisting of three main sections: Global Memory, Store-and-Forward (S&F)
Buffer, and Control Store. Control Store is private memory reserved for control purposes such as hosting the
microcode.
Our first example here is very simple: we get a read cache hit, and the relevant cache slot is on the same Director:
The read request from the host experiences a cache hit in a local Global Memory slot; the CPU moves data from
Global Memory to the Store and Forward buffer; and the I/O device moves data from the Store and Forward buffer to
the host.
In this simplest of cases, there is no traffic generated on the RapidIO fabrics since all activity is local to one Director.
Two Directors Logical I/O Flow (Read Hit to Remote Cache)

Host port on Director 1, cache slot on Director 2:
1. Read request from host hits in a remote Global Memory slot
2. CPU moves data across the Virtual Matrix Interconnect, from remote Global Memory to the local S and F buffer
3. I/O device moves data from the S and F buffer to the host

[Diagram: two Directors connected via the Virtual Matrix Interconnect.]
In our second example, let's again consider a read cache hit.
But this time, the relevant cache slot is on a different Director from the one which provides Front End connectivity to
the host.
As before, the process starts with: a read request from the host experiences a cache hit in the remote Global Memory
slot (within Director 2 in our picture); Next, the CPU moves data across one of the RapidIO fabrics from remote
Global Memory in Director 2, to the local Store and Forward buffer in Director 1; and finally, the I/O device moves
data from the Store and Forward buffer to the host.
Three Directors Logical I/O Flow (Read Miss)

Read from disk on Director 3, through cache slot on Director 2, to host port on Director 1:
1. Read Miss request from host to Director 1
2. Cache slot allocated on Director 2 (could be allocated on any Director)
3. Read data from disk on Director 3 into the S and F buffer on Director 3
4. Move data across the fabric to the allocated cache slot on Director 2
5. Move data across the fabric to the S and F buffer on Director 1
6. Move data to the host connected to Director 1

[Diagram: three Directors connected via the Virtual Matrix Interconnect.]
Our third example considers the general case of a read cache miss, with three Directors involved:
Director 1 where the host is connected; Director 2 which hosts the cache slot in Global Memory for the I/O block
involved in the read request; and Director 3 which services the disks requiring I/O activity due to this cache miss.
The sequence begins with the host issuing the read request, and experiencing a Read Miss on Director 1. The cache
slot happens to be allocated on Director 2 in this case; note that any Director may be selected for this purpose. Data
is read from disk on Director 3 into the Store and Forward buffer on Director 3; moved over the RapidIO fabric to the
allocated cache slot on Director 2; moved over the RapidIO fabric to the Store and Forward buffer on Director 1; and
finally moved to the host, which is connected to Director 1.
Four Directors Logical I/O Flow (Host Write)

Write from host port on Director 1 to cache slots on Directors 2 and 3, with destage to disk on Director 4:
1. Write request from host to Director 1; data placed in the S and F buffer on Director 1
2. Cache slots allocated on Directors 2 and 3
3. Write data across the fabric to the allocated cache slot on Director 2
4. Write data across the fabric to the allocated cache slot on Director 3
5. Read data across the fabric into the S and F buffer on Director 4
6. Write data to disks on Director 4

[Diagram: four Directors connected via the Virtual Matrix Interconnect.]
Our final example deals with the general case of a write I/O request to a Symmetrix Logical Volume. Up to four
Directors may be involved in the processing of this request.
In our example, Director 1 provides the front-end connection to the host, Directors 2 and 3 host the mirrored cache
slots for the particular I/O block of interest, and Director 4 provides the back-end connection to the disk drives to which
data must be destaged from cache.
Now let's look at the data flow for a write in this general case.
The write request is sent from the host to Director 1, and the data is placed in the Store and Forward buffer on
Director 1.
In this particular case, the cache slots happen to be allocated on Directors 2 and 3.
Next, data gets moved across the RapidIO fabric to the allocated cache slot on Director 2; moved across the
RapidIO fabric to the allocated cache slot on Director 3; read across the fabric into the Store and Forward buffer on
Director 4; and finally, destaged to disks on Director 4.
System Interconnects and Buses
Engine specifications:
- Virtual Matrix Interface bandwidth: 5 GB/s per Director; 5 + 5 = 10 GB/s per Engine
- I/O bandwidth: 8 GB/s per Director; 8 + 8 = 16 GB/s per Engine
- Memory bandwidth: 12 GB/s per Director; 12 + 12 = 24 GB/s per Engine

[Diagram: the interconnects between the Front End, Back End, CPU complexes, Global Memory, and Virtual Matrix Interfaces within one Engine.]
This diagram illustrates the interconnects between the various components within a Symmetrix V-Max system. Also
shown is the raw bandwidth limit for the current generation of each interconnect. Of particular interest given the new
distributed memory architecture is the achievable aggregate bandwidth of the Virtual Matrix. This may be derived as
follows:
Each Director is capable of 2.5 GB/sec full-duplex transmission on each of the two RapidIO fabrics. Noting that the
Virtual Matrix implementation uses Active/Active connections on the fabrics, each Engine is therefore capable of 2 x 2
x 2.5 = 10.0 GB/sec aggregate.
The Global Memory on each Director is also accessible at 12 GB/sec, so each Engine has a Global Memory
Bandwidth of 24 GB/sec.
Since the Virtual Matrix enables direct access to local Global Memory as well as to Global Memory on other Directors
via the Virtual Matrix Interconnect, a fully loaded system has 8 x 24 = 192 GB/sec of aggregate Virtual Matrix bandwidth.
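Summarizing the derivation:

    Per Director, per RapidIO fabric:     2.5 GB/s full duplex
    Virtual Matrix, per Engine:           2 Directors x 2 fabrics x 2.5 = 10 GB/s
    Global Memory, per Engine:            2 Directors x 12 = 24 GB/s
    Fully loaded system, 8 Engines:       8 x 24 = 192 GB/s aggregate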
When evaluating storage systems for suitability for a given application, it is critically important to not focus purely
on architectural differences. You should make sure to consider actual benchmark data. Performance data on V-Max
systems that enable real-world comparisons will be made available throughout the launch period.
Engine: Physical View

[Photo: a Symmetrix V-Max Engine enclosure: two Director boards, Front End and Back End I/O modules in I/O module carriers, a Virtual Matrix Interface per Director, Management Modules A and B, and redundant Power Supplies A and B.]
Shown is a physical view of an Engine in a Symmetrix V-Max system.
The enclosure holds two Director boards.
Each Director has one connection to each of the two Virtual Matrix Interconnects, via its two Virtual Matrix Interfaces.
The Virtual Matrix Interface is also referred to as the System Interface Board (SIB).
There are two Back End I/O modules per Director, each providing four ports for connection to Disk Enclosures.
These are the Front End I/O modules. Again, there are two of these per Director. The example shown here provides
four Fibre Channel ports per module.
The I/O Module Carriers are available in two types: with or without Hardware Compression.
In addition, there are redundant power supplies and redundant Management Modules. The Management Modules
provide redundant GigE connectivity to the service processor, and Ethernet connectivity to each Director.
DEMO: Hardware Tour
This video describes the Symmetrix V-Max array hardware:
https://education.emc.com/main/vod_gs/Hardware/Symm/SymmTour
Enhanced Back End Scalability
[Diagram: the system bay with Engines numbered from the bottom, and storage bays 1A, 1B, 2B, and so on. Each Engine (4, 5, 3, 6, ...) has a direct-connect set of drive enclosures and, optionally, a daisy-chained second set.]
Engines in the system bay are numbered 1 through 8, starting from the bottom of the rack. A single-Engine system
uses Engine 4, and can be grown to accommodate more Engines later. We start with Engine 4 in the system bay, and
one direct-attached set of 8 drive enclosures in storage Bay 1A. This provides for up to 120 disk drives behind the
Director pair.
We can now double the number of drives in the system by simply daisy-chaining another set of 8 drive enclosures to
the first set. This provides for growth up to 240 drives.
Beyond this, we'd need to add our second Engine - Engine 5 - to the System Bay, and direct-attach its first set of 8
drive enclosures.
This leads up to our next daisy chain, this time behind Engine 5.
This way, we can grow the system up to four Engines, with a maximum of 240 drives behind each of the Director pairs.
What we've seen up to this point is quite similar to the current DMX-4 in terms of Storage Bays and Back End
connections. Beyond four Engines, the Symmetrix V-Max system introduces new configuration conventions.
The fifth Engine, Engine 2, uses storage Bay 1C for its first direct-connect, and 2C for its first daisy-chain. The system
can continue to grow in this manner up to 8 Engines in the System Bay. Note that for the Engines numbered 2, 7, 1,
and 8, it is possible to configure up to 360 drives per Engine. In our example here, we have chosen to fully populate
the daisy chain behind each Engine before proceeding to add the next Engine. This is not strictly required; for
example, it is also possible to restrict each Engine to one direct-attach storage bay only.
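The drive counts follow directly from the 15 drives each enclosure holds:

    Drives per enclosure:                 120 / 8 = 15
    Direct connect, 8 enclosures:         8 x 15 = 120 drives per Engine
    With daisy chain, 16 enclosures:      16 x 15 = 240 drives per Engine
    Engines 1, 2, 7, and 8 (extra bay):   up to 360 drives per Engine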
2x Back End Ports, Shorter Daisy Chains Vs. DMX
Symmetrix V-Max array: Eight Engines, 2,400 Drives
Symmetrix DMX-4 4500: Four DA Pairs, 2,400 Drives
A fully loaded system, either Symmetrix V-Max or DMX-4, can accommodate up to 2,400 disk drives,
requiring 10 storage bays and one system bay.
The Symmetrix V-Max system, with up to twice as many Back End ports, requires shorter daisy chains on the Back
End.
With the doubling in the number of Director pairs relative to earlier models (8 instead of 4), we now have the notion
of octants. The drive enclosures behind a given Director pair constitute one octant. This is conceptually similar to
drive quadrants in a DMX-4 system.
Designing a Scalable Solution: Capacity Options
Maximum Number of Drives (as given in the narration below):
Engines: 1     2     3     4     5     6     7     8
Drives:  240   480   720   960   1320  1680  2040  2400
(Symmetrix V-Max SE: one Engine, up to 360 drives)
This table shows the supported Back End configurations for a given number of Engines in the system. Details are presented for both the Symmetrix V-Max array, which can grow up to 8 Engines and 2400 drives, and the Symmetrix V-Max SE array, which is limited to one Engine and a maximum of 360 drives. This table can be helpful during initial solutions design for a Symmetrix V-Max system.
For example, let's assume we have an initial estimate of around 960 disk drives required in our storage array. The customer's capacity requirement, together with the RAID protection level needed and the disk drive capacity, could dictate this number. Another possibility is that we derive the needed number of drives from the application's IOPS requirement and the selected RAID level.
The table indicates that we could specify as few as four Engines - that is, 8 Directors - to support this number of disk drives. However, if we went this route, we'd have no room to expand capacity by adding disk drives in the future.
If we went with 5 Engines, the table suggests we'd have room for expansion up to 1320 disk drives; with 6 Engines, we could expand further to 1680 drives.
A higher number of Engines also provides the benefit of increased scalability on the Front End, and additional Global Memory. All of this can contribute to better performance. Let's examine these other design aspects of the solution next.
Designing a Scalable Solution: Connectivity Options
6 Options:
Option 1: FC Modules
Option 2: FICON Modules
Option 3: GigE/iSCSI Modules
Option 4: Combination FC/FICON
Option 5: Combination FC/GigE
Option 6: Combination FICON/GigE
[Slide diagram: two Directors, each with an I/O Module Carrier holding two FE I/O Modules and two BE I/O Modules, a Virtual Matrix Interface, a CPU, and Memory (CS, GM, S and F), joined by the redundant Virtual Matrix Interconnects.]
Three types of Front End I/O modules are currently available, each supporting a specific type of connectivity: Fibre Channel, iSCSI or GigE, and FICON for mainframes.
The Fibre Channel I/O Module provides four ports, while the GigE and FICON modules provide two ports each.
All three types of modules can support either optical media (via suitable SFPs) or copper.
The FC ports and the FICON ports are capable of up to 4 Gb/s.
The first three options shown represent all-FC, all-FICON and all-iSCSI/GigE environments.
Options 4, 5 and 6 are meant for mixed environments, with a need for multiple types of Front End connectivity.
Designing a Scalable Solution: Connectivity Configuration Options
Option 1: FC Modules - Fibre Channel I/O Modules
16 FC FE Ports
12 FC FE / 2 FC RDF Ports
8 FC FE / 4 FC RDF Ports
Option 2: FICON Modules - FICON I/O Modules
8 FICON FE Ports
Option 3: GigE/iSCSI Modules - iSCSI/GigE I/O Modules
8 iSCSI FE Ports
6 iSCSI FE / 2 GigE RDF Ports
4 iSCSI FE / 4 GigE RDF Ports
As we just saw, there are six possible hardware options for Front End connectivity.
Let us examine the supported logical configurations within each of these six hardware options.
With Option 1, an all-FC environment, it is possible to configure the 16 available ports in three different ways: with no RDF ports, with 2 RDF ports, or with 4 RDF ports. Note that configuring one RDF port consumes two available FC ports - so, for example, 12 FC FE ports plus 2 FC RDF ports accounts for all 16 ports (12 + 2 x 2). This is no different from what exists today with DMX-4 systems.
In an all-FICON environment, there is just one supported logical configuration, providing 8 FICON ports for mainframe connectivity.
In a configuration where all the ports are GigE, it is possible to configure the ports for either iSCSI or GigE RDF. This gives us three possible logical configurations, with up to 4 GigE RDF ports per Engine.
Designing a Scalable Solution: Connectivity Configuration Options (Continued)
Option 4: Combination FC/FICON - Fibre Channel + FICON
8 FC FE / 4 FICON FE Ports
4 FC FE / 2 FC RDF / 4 FICON FE Ports
4 FC RDF / 4 FICON FE Ports
Option 5: Combination FC/GigE - Fibre Channel + iSCSI/GigE
8 FC FE / 4 iSCSI FE Ports
8 FC FE / 4 GigE RDF Ports
8 FC FE / 2 GigE RDF / 2 iSCSI FE Ports
4 FC FE / 2 FC RDF / 4 iSCSI FE Ports
4 FC RDF / 4 iSCSI FE Ports
Option 6: Combination FICON/GigE - FICON + iSCSI/GigE
4 FICON FE / 4 iSCSI FE Ports
4 FICON FE / 2 iSCSI FE / 2 GigE RDF Ports
4 FICON FE / 4 GigE RDF Ports
With mixed environments, again the FC ports and the GigE ports may be configured for either RDF, or for Front End
connectivity to hosts. The supported logical configurations in mixed environments are shown here.
Global Memory: Design Considerations
Each Director can be configured with 16 GB, 32 GB or 64 GB of memory
Directors of a given Engine must have the same memory configuration
In a single-Engine system, memory is mirrored within the same Engine
In multiple-Engine systems (2 through 8 Engines), memory is mirrored across Engines
This does not require that all Engines have identical memory configurations
[Slide diagram, Example A: a three-Engine system with 32 GB per Director; mirrored pairs X, Y and Z span the Engines. Example B: a four-Engine system with 32 GB per Director on two Engines and 64 GB per Director on the other two; mirrored pairs A, B, C and D.]
Each Director provides eight DIMM slots. All slots must be populated, with 2, 4, or 8 GB DIMMs. This results in a raw memory size of 16, 32 or 64 GB per Director. It is also required that each of the two Directors in an Engine have identical memory configurations.
Memory is always mirrored. In a single-Engine system, memory is mirrored across the two Directors of the Engine. With multiple Engines, memory is always mirrored across Engines. The first example illustrates mirroring in a three-Engine system. Due to the mirroring requirement, all six Directors must have the same amount of memory in this configuration.
Note, however, that this does not apply to every supported configuration. In a four-Engine configuration, it is possible to have two pairs of Engines, with each pair having a different memory size. This is illustrated in our second example here.
As we'll see shortly, global memory is expandable. Adding memory to a production system is non-disruptive to hosts that conform to EMC best practices for multipathing.
Tiering Options: Available Disk Drive Types
4 Gb drives only
2 Gb drives are not supported with Symmetrix V-Max arrays
Ultra-high performance: 4 Gb Enterprise Flash Drives
200 GB
400 GB
High performance: 4 Gb FC (15K)
146 GB
300 GB
450 GB
Price / performance: 4 Gb FC (10K)
400 GB
High capacity: SATA (7.2K)
3 Gb SATA
Adapted to 4 Gb FC via up-conversion
1 TB
Symmetrix V-Max systems offer many choices for drive capacity and performance characteristics.
Enterprise Flash Drives are available to provide maximum performance for latency sensitive Tier 0 applications, such
as currency exchange, electronic trading systems and real-time data processing. Note that the current generation of
Enterprise Flash Drives operate at 4 Gb, enabling even better performance than when they were initially introduced.
Legacy Enterprise Flash Drives (with 73 GB and 146 GB capacities) are not supported with Symmetrix V-Max arrays.
Fibre Channel drives are offered in various capacities and rotational speeds.
High-capacity SATA II drives represent the lowest tier. These can provide a cost-effective option for applications such
as backup to disk, local replication with TimeFinder/Clone and test environments with low I/O workloads.
When selecting drives it should be noted that 15k rpm drives perform better than 10k rpm drives, which perform better
than 7200 rpm drives. Seek time and rotational latency can significantly affect performance.
Configuring Tiered Storage: Best Practices
Option 1: Segregate Tiers by Octant
Isolate the tiers in separate octants (by Engine)
Delivers more predictable performance by separating all resources for each tier
Restricts Director resources available to each tier
[Slide diagram: four octants (Engines 3 through 6), each holding a single drive tier - Flash, 15k rpm, 10k rpm, or 7200 rpm drives.]
Symmetrix V-Max systems are specifically designed with hardware and software to support tiering within the storage
array. Two or more application tiers residing within a single Symmetrix V-Max system can be configured on different
drive types and protection schemes to meet differing workload demands. One possible implementation of tiered
storage within a Symmetrix V-Max system is to isolate the tiers in separate octants (that is, by Engine). For example,
certain Engines may contain slower (high capacity) drives and others may have faster (high performance) drives, as
shown here. This configuration delivers more predictable performance. Predictable performance is ensured by
separating all resources for each tier including director processing power, global memory, and Back End ports.
Completely isolating the tiers physically will not, however, deliver the best overall system performance across all tiers.
Configuring Tiered Storage: Best Practices (Continued)
Option 2: Mixing Drives Throughout the System Without Segregating
Delivers the highest total performance to all applications
This configuration maximizes the Director resources available for any tier
Symmetrix Priority Controls, Dynamic Cache Partitioning and Virtual LUN can be
deployed to fine tune system resources and define priorities
[Slide diagram: tiering independent of octant - each octant (Engines 3 through 6) mixes drive types, for example Flash + 7200 rpm drives, 15k rpm drives, and 10k rpm + 7200 rpm drives side by side.]
Shown here is an alternative recommended layout when the goal is best aggregate performance. In this configuration, drives from all tiers are mixed throughout the system, instead of being isolated by octant. This strategy maximizes the Director resources available to any of the tiers, ensuring effective use of the available hardware. Symmetrix Priority Controls, Dynamic Cache Partitioning and Virtual LUN may be used to fine-tune priorities and the allocation of resources to each tier.
Unless predictability of performance is the customer's primary concern, this Option 2 configuration with mixed drive types across the system represents the recommended best practice.
As we've seen so far, there are several critical design decisions to make when configuring a Symmetrix V-Max system that will meet your customer's specific needs for performance, tiering by application, capacity and future growth. Your local SPEED guru can provide configuration assistance during the design process.
Other Design Considerations
Storage Bay Separation:
Up to 3.6 meters distance
RPQ required
For details, consult your local Symmetrix Champions
Vault:
5 Vault drives per Back End loop at initial configuration (DMX-4 has 4 Vault drives per loop at initial configuration)
Permanent Sparing is allowed to move a Vault drive to a different Director (DMX-4 Vault drives cannot move to a different DA)
Spares:
Spares for hard disk drives: spares are required for each unique hard disk type (drive type implies rotational speed, capacity, and block size); for each unique drive type, 2 spares for every 100 drives (or portion thereof); for an entire Symmetrix V-Max system, a minimum of 8 hard disk drive spares
Spares for Enterprise Flash drives: spares are required for each drive size; for 32 or fewer Flash drives, one spare for each drive size; for more than 32 Flash drives, at least 2 spares for every 100, for each drive size
Sparing considerations apply when configuring Back End storage for a Symmetrix V-Max system. Sparing rules are similar to the existing rules for currently-shipping DMX-4 systems.
A Symmetrix V-Max array requires 5 vault drives per Back End loop; this is one additional drive per loop when compared to a DMX-4 array. The Symmetrix V-Max system has the flexibility to move a vault drive to a different Director via Permanent Sparing. This is a key difference from DMX-4 arrays, where the vault drives cannot move across DAs.
As with DMX-4 arrays, certain restrictions apply in situations where it becomes necessary to physically separate storage bays. Storage bay separation of up to 3.6 meters is supported in specific instances, and an RPQ is required. Note that separation is not allowed for direct-connect storage bays.
Non-Disruptive System Expansion: Options
Adding Storage Capacity:
Add drives to existing Drive Enclosures
Add more Drive Enclosures and drives (in increments of 120 drives)
Add one or more Engines
Adding Global Memory:
Upgrade memory on existing Directors
Add one or more Engines
Expanding Front End Connectivity:
Add one or more Engines
The above listed expansions are non-disruptive to production hosts
Refer to the latest documentation on Powerlink for supported configurations
Typical upgrades you might perform on a Symmetrix V-Max array include: adding storage capacity, adding memory
and adding Front End connectivity. These upgrade procedures are non-disruptive to hosts, provided all EMC-
recommended best practices are followed.
Symmetrix V-Max Series Vs. Symmetrix DMX-4
Features/Specification               | Symmetrix DMX-4             | Symmetrix V-Max Series
Host Support                         | Open systems and Mainframe  | Open systems and Mainframe
Fibre Channel Back End               | Point-to-Point              | Point-to-Point
Maximum Back End Channel Speed       | 2 or 4 Gb/s                 | 4 Gb/s
Back End Ports                       | 16 - 64                     | 16 - 128
Number of disks                      | 40 - 2400                   | 48 - 2400
Maximum Usable Capacity              | Up to 585 TB                | Over 2 PB
Maximum Usable Global Memory         | 256 GB                      | 472 GB
Front End Ports (maximum):           |                             |
  Fibre Channel host/SAN ports       | 64                          | 128
  Fibre Channel remote replication   | 8                           | 32
  iSCSI ports                        | 48                          | 64
  FICON host ports                   | 32                          | 64
  GigE remote replication ports      | 8                           | 32
Let's summarize our architectural discussion by comparing the Symmetrix V-Max array with the DMX-4 on various scalability metrics.
On the Back End, while both systems can support up to 2400 drives, the Symmetrix V-Max array offers twice the capacity - more than 2 PB of usable space. This can be achieved using 2400 1-TB SATA II drives in a RAID-6 14+2 configuration. The Symmetrix V-Max array can also provide better performance on the Back End, since it can be configured with twice as many Back End ports.
As for memory, the Symmetrix V-Max array can be configured with up to 472 GB of usable cache.
Front End scalability has improved as well, for all three supported types of host interconnect and for remote replication connections.
Module 3: Storage Administration Enhancements
Upon completion of this module, you should be able to:
Explain the key ease-of-use enhancements available with the Symmetrix
V-Max system
Describe the benefits of the key configuration and management
enhancements
This module focuses on the storage administration enhancements of the Symmetrix V-Max array. After completing this module, you'll be in a position to discuss these ease-of-use features and their benefits to customers.
Enhanced LUN Masking Capability
Autoprovisioning Groups: new ease-of-use feature with Enginuity 5874
Benefits:
Faster and easier provisioning to hosts
By associating groups of initiators, storage ports and storage devices
Reduced risk of operator error
Reduced administrative overhead
More efficient provisioning for:
Virtualized host environments: ESX server, Hyper-V
High Availability implementations with clustered hosts
Autoprovisioning Groups, introduced with Enginuity 5874, provide an easier, faster way to provision storage in Symmetrix V-Max arrays. The feature reduces labor cost and the risk of error, especially in configurations that involve mapping to large numbers of FA ports and masking volumes to large numbers of host initiators. Common examples of these scenarios include high-availability implementations with host clusters, and virtualized host environments.
Simpler Provisioning: Using Autoprovisioning Groups
[Slide diagram: Host-A and Host-B connect through Fabric-A and Fabric-B to a Symmetrix V-Max system. A masking view associates an initiator group (the host HBAs), a port group (the FA ports), and a storage group (devices 050 through 055).]
Storage provisioning in previous Enginuity versions required a separate command for each initiator/port combination through which devices would be accessed.
With Enginuity 5874, users can create a group of host initiators (an initiator group), a group of director ports (a port group), and a group of Symmetrix devices (a storage group), and associate all three - initiator group, port group and storage group - in a masking view. When the masking view is created, the devices are automatically mapped and masked, and are thereby accessible to the host(s).
After the masking view is created, any objects (devices, ports, or initiators) added to an existing group automatically become part of the associated masking view. This means that no additional steps are necessary to add devices, ports, or initiators to an existing configuration. All necessary operations to make them part of the configuration are handled automatically by Enginuity once the objects are added to the applicable group. This reduces the number of commands needed for mapping and masking devices and allows for easier storage allocation and de-allocation.
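As a minimal sketch of this flow, the sequence below builds the three groups and the masking view for the diagrammed example. The SID, WWNs, director ports and group names are all hypothetical, and the exact symaccess flags should be verified against the Solutions Enabler 7.0 documentation:

symaccess -sid 1234 create -name HostAB_IG -type initiator -wwn 10000000c9123456
symaccess -sid 1234 -name HostAB_IG -type initiator add -wwn 10000000c9654321
symaccess -sid 1234 create -name FabAB_PG -type port -dirport 7E:0
symaccess -sid 1234 -name FabAB_PG -type port add -dirport 10E:0
symaccess -sid 1234 create -name App_SG -type storage devs 050:055
symaccess -sid 1234 create view -name App_View -ig HostAB_IG -pg FabAB_PG -sg App_SG

Once the view exists, every device in App_SG is mapped and masked to every initiator in HostAB_IG through every port in FabAB_PG.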
Faster Provisioning
New command: symaccess
Device mapping to ports is automatic - no need to run symconfigure
Making changes to an existing provisioning scheme is much faster.
Example: provision additional devices to a host by just adding devices to the storage group.
Provisioning steps with Enginuity 5874:
Create and populate initiator groups, port groups and storage groups (one symaccess command per group)
Create a masking view for a given {initiator group, port group, storage group} set (one symaccess command per masking view)
Provisioning steps with prior versions of Enginuity:
Map devices to director ports (requires a symconfigure operation)
Mask devices to host initiators (requires several symmask commands - one command for each {director port, initiator} pair)
Back up the VCM database (symmask backup)
Update the array with configuration changes made to the VCM database (symmask refresh)
A new Solutions Enabler command, symaccess, takes the place of symmask and symmaskdb from prior Enginuity versions.
Let's examine how the provisioning mechanics have changed, in more detail.
These are the steps with the prior versions of Enginuity. In particular, note the number of symmask commands required. A single symmask command can only mask a set of devices to one host initiator via one FA port. So we'd need one symmask command for each {FA port, host HBA} combination in our configuration.
In contrast, all mapping and masking for one provisioning task may be accomplished in as few as four steps with Enginuity 5874, regardless of the number of host initiators, FA ports, and devices.
With the new provisioning method, it is easy and quick to make changes. We will look at a concrete example of this next.
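For instance, growing the host's storage later reduces to a single command against the storage group from the earlier sketch (device number hypothetical); the associated masking view picks up the change automatically:

symaccess -sid 1234 -name App_SG -type storage add devs 056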
DEMO: Autoprovisioning Groups
Concurrent Provisioning: Improving Efficiency
More efficient storage administration
Ability for multiple, concurrent configuration changes within the Symmetrix V-Max array
Enginuity 5874 allows parallel execution of change sessions
Customer benefit: ease and efficiency of storage administration
Requirement: configuration changes that do not require the same
resources
Locks are finer grained, and taken out for a shorter duration
Typical operation that now requires just a single step:
create metas + map metas to 2 FA ports + set attributes on metas
A new feature in Enginuity 5874 provides the ability for multiple configuration changes to be executed concurrently
within the Symmetrix V-Max array. The key change from prior versions of Enginuity is that locks are finer-grained, thus
allowing for more concurrent configuration activity.
In a typical usage scenario, the storage administrator may wish to create several metadevices, map them to FA ports,
and also set attributes on the devices. Previously, each of these actions had to be performed separately and required
their own symconfigure command. With Enginuity 5874, it becomes possible to use one script with one symconfigure
command to perform this operation.
This feature enables more efficient storage administration.
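As a hedged illustration of the single-script approach, a command file along these lines (device numbers, director ports and attribute names are hypothetical; see the symconfigure documentation for the exact file syntax) could form a striped metadevice, map it to two FA ports, and set a device attribute in one change session:

form meta from dev 0100, config=striped;
add dev 0101:0103 to meta 0100;
map dev 0100 to dir 7E:0;
map dev 0100 to dir 10E:0;
set dev 0100 attribute=dyn_rdf;

symconfigure -sid 1234 -file meta_map.cmd commit

Under prior Enginuity versions, each of these steps would have required its own symconfigure session.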
Virtual Provisioning Enhancements
Non-disruptive shrinking of thin pools
Production host with a provisioned thin device is unaffected
Customer benefits:
Allows reuse of thin pool space
Improves utilization of thin pool
How data device draining works:
1. Data on the device is moved to the other enabled data devices in
the pool
2. Once the draining is complete, the device (volume) is disabled
3. Now the device can be removed from the pool
Other improvements include:
RAID 5 7+1, which in turn can support TimeFinder/Clone, TimeFinder/Snap, SRDF/S and SRDF/A
TimeFinder/Snap, SRDF/S and SRDF/A with RAID 5 3+1
TimeFinder/Snap and TimeFinder/Clone from SRDF/A R1
TimeFinder/Clone from SRDF/A R2
Customer benefits:
Can virtually provision all tiers and RAID levels
TimeFinder/Clone, TimeFinder/Snap, SRDF/S and SRDF/A can be performed with all RAID levels
Thin pools now can be shrunk non-disruptively, helping reuse space to improve efficiency. When a data device is
disabled, it will first move data elsewhere in the pool by draining any active extents to other, enabled data devices in
the thin storage pool.
Once the draining is complete, that device (volume) is disabled and the device can then be removed from the pool.
In addition to reusing space more efficiently, benefits of this capability include the ability to:
Adjust the subscription ratio of a thin pool - that is, the total amount of host-perceived capacity divided by the underlying total physical capacity
Adjust the utilization, or percent full, of a thin pool
Remove all data volumes from one or more physical drives, possibly in preparation for removal of physical drives
with a plan to replace them with higher capacity drives
Customers can now virtually provision all tiers and RAID levels, and support local and remote replication for thin
volumes and pools using any RAID level.
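A minimal sketch of the drain-and-remove sequence, assuming a thin pool named FinPool and a data device 0ABC (both hypothetical; verify the exact symconfigure syntax for your Solutions Enabler version):

symconfigure -sid 1234 -cmd "disable dev 0ABC in pool FinPool, type=thin;" commit

The disable starts the drain; progress can be watched with a pool display such as symcfg show -sid 1234 -pool FinPool -thin -detail. Once the device is fully drained and disabled:

symconfigure -sid 1234 -cmd "remove dev 0ABC from pool FinPool, type=thin;" commit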
Improved Capacity Utilization
512 Hypers per physical drive (Up 2x)
Enginuity 5874: maximum of 512 hypers per physical drive
Prior Enginuity versions: maximum of 256 hypers per physical drive
Benefits:
Better capacity utilization
More flexibility in managing newer, higher capacity drives
Large volume support - 4x SLV capacity
Enginuity 5874: logical volume size limit increased to 256 GB
Benefits:
Reduces need to create meta volumes: simplifies storage management
Accommodates high capacity and high growth application requirements
Previous versions of Enginuity supported a maximum of 256 hypers per physical drive. Enginuity 5874 raises this limit
to 512 hypers per physical drive. This allows customers to configure more granular volumes that meet their space
requirements. Particularly with newer, higher capacity disk drives, this provides for better flexibility and helps improve
capacity utilization.
Prior to Enginuity 5874, the largest single logical volume that could be created on a Symmetrix was 65,520 cylinders.
Now, a logical volume can be configured with a maximum capacity of 262,668 cylinders (over 256 GB). This is about
four times as large as with prior versions of Enginuity. This simplifies storage management by reducing the need to
create meta volumes, and by more easily accommodating high capacity and high growth application requirements.
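As a rough sanity check on that figure - assuming the standard Enginuity FBA geometry of 15 tracks per cylinder and 64 KB tracks - the new limit works out to:

262,668 cylinders x 15 tracks x 64 KB = 252,161,280 KB, or about 258 x 10^9 bytes - just over 256 GB

The prior 65,520-cylinder limit corresponds to roughly 64 GB by the same arithmetic, hence the "about four times as large" comparison.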
Virtual LUN
New in Enginuity 5874: Non-disruptive mobility of VLUNs between RAID
levels
Key enabler for storage tiering within the array
New Solutions Enabler 7.0 symmigrate command
Simple one-step migration of a set of VLUNs, with a single command
New Virtual RAID architecture facilitates implementation of this feature
All RAID levels now use a single mirror position (same as RAID-6)
Virtual LUN is a feature of Symmetrix Optimizer. It enables users to non-disruptively relocate volumes to different tiers, transparently to the host and without impacting local or remote replication.
With Enginuity 5874, it has become possible to perform non-disruptive migration of VLUNs between RAID levels. With V7.0, Solutions Enabler includes the symmigrate command to perform VLUN migrations in one step. These operations can also be performed via SMC.
Inter-RAID mobility of VLUNs is one of the benefits made possible by virtualizing the RAID architecture. With Enginuity 5874, the underlying implementation for all existing RAID levels has been overhauled. The RAID-6 design, which uses just one device mirror position, has been extended to support all current RAID levels in a single Back End Engine. RAID protection is now associated with individual device mirror positions, rather than being spread across multiple mirror positions. This is the enabler for inter-RAID mobility and other new features in this release.
Note that VLUN mobility applies to standard, provisioned LUNs only - not to thin devices or internal devices. Restrictions are covered in greater detail later.
Enabler for Virtual LUN: New Virtual RAID Architecture
[Slide diagram: device mirror positions M1 through M4. Pre-5874, a mirrored SRDF device (RDF1+Mir) consumes three positions, leaving one available; adding Concurrent SRDF (CRDF R1+Mir) leaves no available position. With 5874, the RAID 1 group occupies a single position, so RDF1+Mir leaves two positions available and CRDF R1+Mir leaves one.]
RAID Virtual Architecture, implemented with Enginuity 5874, is the new method for handling device mirror position usage, and is based on the RAID 6 architecture introduced in the previous release.
First, let's take a look at how mirror position usage is handled in a pre-5874 Symmetrix array. One fairly typical configuration is a mirrored device that is also SRDF protected. This configuration leaves one mirror position available for operations such as TimeFinder/Mirror or hot sparing. In cases where Concurrent SRDF is implemented, for example against a mirrored device, you are left with no available mirror positions. With this configuration you cannot use TimeFinder/Mirror or hot sparing.
The RAID Virtual Architecture in Enginuity 5874 expands on the mirror position handling implemented with RAID 6. Now, a mirror position holds a logical representation of a RAID group rather than a device, resulting in additional free mirror positions. Our initial example of a mirrored device with SRDF no longer has two data devices consuming two mirror positions. Instead, the RAID 1 group occupies one mirror position, with the SRDF protection occupying a second position. This frees two mirror positions for other operations. Let's take a look at how this new architecture changes our Concurrent SRDF example. Again, the two data devices are replaced by the RAID group representation, with each RDF device occupying an additional mirror position. This frees up one mirror position, giving the customer more flexibility. Virtualizing the RAID architecture is an enabling technology that lets Symmetrix implement other features, such as VLUN migration.
Keep in mind that RVA does not introduce new RAID protection levels. Also, be aware that the Optimizer swap process and the definition of RAID groups now operate under the RVA architecture, so the historical flags Maintain RAID Groups and Maintain Mirrors are no longer required.
Virtual LUN Migration: Use Case
Migration of production volumes across RAID tiers
[Slide diagram: a production host accesses devices 020 and 021 in device group Source_Devices, currently RAID-5 (3+1); the devices are migrated online to RAID-6 (6+2).]

symmigrate establish -sid 123 -nop -v -g Source_Devices -tgt_raid6 -tgt_prot 6+2 -tgt_dsk_grp 1 -tgt_unconfig -name session_1
Let's consider a simple example to illustrate online, non-disruptive migration of VLUNs across RAID tiers. Migrations can be performed to either configured or unconfigured space. When migrating to configured space, the Symmetrix will choose from existing logical volumes to migrate the data to. Migrating to unconfigured space will create new hypers, from free space, to be used as the target of the migration.
Shown is a production host accessing two VLUNs - Symmetrix devices 020 and 021 - which are configured in a device group called Source_Devices. These are RAID-5 devices.
Subsequently, newer and faster disk drives have been added to the storage array. To take advantage of the better performance, the host administrator would like us to move symdevs 020 and 021 to a disk group containing the newer disk drives. Since the newer drives have much higher capacity and longer RAID rebuild times, the customer would also like to transition to RAID-6 protection.
The single command shown here accomplishes the task with zero disruption to the host and its I/O activity to symdevs 020 and 021.
The command specifies that the target is unconfigured space on disk group 1. This causes the two target devices to be automatically created in the target disk group, with the correct RAID protection level.
Next, each source device is synchronized to its target, while also servicing production I/O from the host.
After synchronization completes, the target devices assume the identity of the source devices. The source devices are deallocated and their storage capacity is freed up in the original disk group. From the host's point of view, this operation is completely transparent.
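A migration session defined this way can then be tracked and cleaned up by name. The action names below follow the usual Solutions Enabler session verbs, but should be confirmed against the symmigrate documentation:

symmigrate query -sid 123 -name session_1
symmigrate verify -sid 123 -name session_1 -migrated
symmigrate terminate -sid 123 -name session_1

Terminating the session removes only the session metadata; the migrated devices remain in place on their new tier.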
Virtual LUN Migration: Interoperability
Supported:
Device types supported: standard Symmetrix devices, unprotected Symmetrix devices, metadevices; FBA, CKD, iSeries
Between protection levels: RAID-1, RAID-5 and RAID-6 protection
Between drive types: Flash drives, Fibre Channel and SATA disks
Unprotected source to unprotected target
NOT Supported:
Device types not supported: VDEVs (TimeFinder Snap), thin devices, DATA devices, SAVE devices, Vault devices/SFS/VCM
Protected source to unprotected target
Within one disk group, if the RAID protection level also does not change
Interoperability considerations for the VLUN migration feature are shown here.
Please note that VLUN migration requires that either the RAID protection level changes, or the disk group changes, or both change. You cannot migrate a VLUN to the same protection level (RAID 5 3+1 to RAID 5 3+1) and the same disk group number (disk group 1 to disk group 1). The desired protection type or disk group number must be different from what exists on the source devices.
New Management Integration
Simpler deployment of Symmetrix Management Console (SMC)
SMC can now be accessed via the Service Processor of the Symmetrix V-Max
system
Requires secure HTTPS connection from management station to the Service
Processor
Benefits:
Reduces TCO: eliminates need for additional server to manage the array via
SMC or SMI-S providers
Facilitates operations: simpler solution for convenient, secure out-of-band
management access
Starting with Enginuity 5874, Symmetrix Management Console can be accessed directly via the service processor of a
Symmetrix V-Max system. By joining the service processor to the corporate network, storage administrators will have
immediate access to SMC from anywhere in the enterprise. Communication to the service processor must occur over
a secure HTTPS connection. This support for out of band management reduces TCO by eliminating the need for the
customer to purchase an additional server to manage the Symmetrix with SMC and/or SMI-S Providers. Note however
that a traditional implementation via installing SMC provider on a dedicated host will continue to be supported.
With this new implementation, there are two ways to log in to SMC:
1. As a customer, with the default SMC username/password (this may be changed from the default at the customer's discretion); or
2. As an EMC/Partner Customer Service Engineer, with RSA credentials.
Module 4: Improvements in Business Continuity
Upon completion of this module, you should be able to:
Explain the Symmetrix V-Max system business continuity functional
enhancements with SRDF, TimeFinder and ECA
Explain the SRDF/Extended Distance Protection (SRDF/EDP) feature
Propose specific business continuity solutions using the Symmetrix V-Max
system enhancements
Optimized Business Continuity Capabilities
Concurrent TimeFinder/Clone and SRDF Restore with accelerated
resynchronization times
Now allows an SRDF restore while a Clone Restore is in progress
Works for both Native Clone and TimeFinder/Clone Emulation
Support for 250 SRDF Groups
Can now define all 250 SRDF groups
Support for a maximum of 64 SRDF Groups per SRDF Director
SRDF/Extended Distance Protection (EDP) long distance replication
with near zero data loss
Support for 128 Directors, can be configured for varying uses
Add/Remove Devices from SRDF/A (Consistency Exempt)
SRDF/A Consistency per Cycle
Listed is a summary of the key features of Enginuity 5874. Let's take a moment to briefly discuss each feature.
Enginuity 5874 now supports Concurrent TimeFinder/Clone and SRDF Restore. With this feature, the restriction that previously blocked concurrent restores has been removed: an SRDF restore now forces the TimeFinder/Clone protected tracks to become SRDF remote invalid tracks.
This release supports 250 SRDF groups, and also supports 128 Directors, which can be configured for varying uses such as replication and host connectivity.
Another key benefit of this release is the implementation of SRDF/Extended Distance Protection, also referred to as SRDF/EDP. We will discuss this feature in depth later in this module.
There are two major changes to the SRDF/A Consistency behavior in 5874:
Addition and removal of dynamic SRDF/A devices on an active SRDF/A group
Calculation and display of consistency for active and inactive cycles
TimeFinder Clones - Improved Operations
Accelerated resynchronization times
Speeds up disaster restart readiness
Improved TimeFinder/Clone operations to create, activate, terminate
TimeFinder/Snap: elimination of the write penalty for Snaps
Cascaded TimeFinder/Clone
Provides a Gold Copy for iterative testing against time-specific data
Std -> Clone Parent -> Clone Child
The Symmetrix V-Max array platform, running Enginuity 5874, greatly improves TimeFinder/Clone operations: resynchronization times are accelerated, which speeds up disaster restart readiness; the create, activate and terminate operations are improved; TimeFinder/Snap eliminates the write penalty for Snaps; and Cascaded TimeFinder/Clone provides a Gold Copy for iterative testing against time-specific data.
Concurrent SRDF/S with Independent Consistency Protection
[Slide diagram: a production site running Enginuity 5874 replicates synchronously (SRDF/S, with consistency groups) to two recovery sites running Enginuity 5874 or 5773. One leg is suspended for testing against a split Clone or R2 while the other leg keeps replication protected; the suspended leg is later incrementally re-established.]
Concurrent SRDF/S with Independent Consistency Protection is an enhancement to Concurrent SRDF/S/S configurations that allows independent ConGroup protection. The key benefit of this solution is the ability to disable one leg of a Concurrent SRDF/S/S configuration while retaining the other leg's ConGroup protection. With this feature the customer now has two options for managing ConGroups in a Concurrent SRDF/S/S configuration:
Independent ConGroup protection
Joint ConGroup protection
Let's take a moment to watch this solution in action.
Shown is an environment using Concurrent SRDF/S with Independent Consistency Protection: a main site concurrently and synchronously replicating to two different recovery sites. To use this feature, first suspend the link to the first recovery site. Although this link has been suspended, the application and data remain protected, since we are still replicating to the alternate DR site. At the primary recovery site, we can now split a BCV copy of the R2 to access the data in real time and perform any required testing. Once testing is complete, we can re-establish the R2 with the Clone, or restore the Clone to the R2, depending on which volume was used; then split; and then incrementally establish the link which had been previously suspended.
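Sketched as Solutions Enabler commands, the sequence might look like the following for a device group ProdDG whose suspended leg is RA group 10. The group names and numbers are hypothetical, and the exact flags (particularly for controlling remote BCVs in a concurrent configuration) depend on the topology, so treat this as an outline rather than a procedure:

symrdf -g ProdDG suspend -rdfg 10
symmir -g ProdDG split -rdf -rdfg 10
(run tests against the split copy at the recovery site)
symmir -g ProdDG establish -rdf -rdfg 10
symrdf -g ProdDG establish -rdfg 10

Throughout, the second SRDF/S leg stays synchronized under its own ConGroup.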
Increased Group Capacity: 250 SRDF Groups
With Enginuity 5874, all 250 SRDF groups may be used by the same
Symmetrix V-Max system, with the following limits:
64 SRDF Groups per RA Director
251 maximum logical links per RA Director
Device limits the same as Enginuity 5773
32K devices per SRDF Group
~ 64K devices per RA director
SRDF and SRDF/A Groups are still not supported on the same RA director
PPRC requires using only the first 64 SRDF groups (0-3F)
With Enginuity 5874, all 250 SRDF groups may be used by the same Symmetrix V-Max system. However, the following limits apply:
There can be only 64 SRDF groups per RA director
There can be only 251 maximum logical links per RA director
The device limits for 5874 remain the same as with 5773, as shown.
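For reference, a dynamic SRDF group is still defined one director pair at a time. A hedged example of creating one of the 250 groups, with all SIDs, director identifiers and group numbers hypothetical:

symrdf addgrp -label NewDR -sid 1234 -dir 8G -rdfg 42 -remote_sid 5678 -remote_dir 8G -remote_rdfg 42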
Achieving Zero RPO: SRDF/EDP
SRDF/EDP based on Cascaded SRDF (starting with Enginuity 5773).
Allows for:
Synchronous replication between the primary and secondary
Asynchronous replication between the secondary and tertiary
Introduces a Diskless R21 data device
Does not consume storage; used only in the secondary array
[Slide diagram: Primary R1 --SRDF/S or Adaptive Copy Disk--> Secondary R21 --SRDF/A or Adaptive Copy WP--> Tertiary R2.]
This release introduces SRDF/Extended Distance Protection (EDP). SRDF/EDP builds on the concepts of Cascaded SRDF, which has been available since Enginuity 5773. With SRDF/EDP, we now allow for synchronous replication between the Primary and Secondary, and asynchronous replication between the Secondary and the Tertiary site.
SRDF/EDP introduces a new device type, the Diskless R21 data device, which does not consume storage. Diskless R21 data devices are used in the secondary Symmetrix V-Max system running Enginuity 5874.
Benefits of SRDF/EDP
Single solution to achieve both zero RPO + long distance replication
Less expensive two site DR solution at great distances with very small or zero RPO and
low response time impact to the host
Data will be stored at the primary and tertiary sites
Only differences are temporarily stored at the secondary site
Low cost - only need a V-Max system with 5 disks per BE loop, vault and SFS drives
plus enough cache to hold common area, user data/updates, and device tables
Minimum drive count for SRDF/EDP is 40 plus 8 spares
[Slide diagram: the secondary Symmetrix V-Max holds only Diskless R21 data devices plus SFS and vault drives.]
SRDF/EDP offers a two-site, extended-distance solution with little or no data loss. Its key benefits include:
A single solution achieving both zero RPO and long-distance replication: an inexpensive two-site DR solution at great distances, with very small or zero RPO and low response time impact to the host
SRDF/Star differential relationship support between the out-of-region site and the production site, for failover operations with reverse SRDF/Asynchronous protection
Data is stored at the primary and tertiary sites; only differences are temporarily stored at the secondary site
Low cost: the secondary site only needs a Symmetrix V-Max system with vault and SFS drives, plus enough cache to hold the common area, user data/updates, and device tables. The minimum drive count for the secondary array in an EDP configuration is 40 (the 5 vault drives on each of the 8 Back End loops of a single Engine) plus 8 for spares.
SRDF/EDP vs. Cascaded SRDF
SRDF/EDP:
Licensed feature
New device: the Diskless R21 data device allows for cascading of changes from the Secondary to the Tertiary site
Loss of fault tolerance, since the secondary site cannot be used for restart/recovery
Data only at the Primary and Tertiary sites
Cascaded SRDF:
Also uses R21 devices, but not diskless
Allows for a restartable/recoverable image at the Secondary site
Requires physical devices in the Secondary Symmetrix system
Data at Primary, Secondary, and Tertiary sites
Shown is a table comparing Cascaded SRDF to SRDF/EDP. A key difference between the technologies is that EDP
introduces a new Diskless R21 data device and does not require a full copy of data at the secondary site. One key
consideration with implementing EDP is that there is a loss of fault tolerance since the secondary site cannot be used
for restart/recovery since there is not a full copy of data kept in this Symmetrix V-Max system.
WHITEBOARD: SRDF/EDP Architecture
Web Object Placeholder
Address: https://education.emc.com/main/vod_gs/Hardware/Symm/FRU/q109/NGS_SRDF_EDP_WB_MRuest.html
Displayed in: Articulate Player
Window size: 482 x 392
SRDF/EDP: Interoperability and Copy Modes
First Leg (R1 -> R21)    | Second Leg (R21 -> R2)
Synchronous              | Asynchronous
Adaptive Copy Disk       | Asynchronous
Adaptive Copy WP         | Asynchronous
Synchronous              | Adaptive Copy WP
Adaptive Copy Disk       | Adaptive Copy WP
Adaptive Copy WP         | Adaptive Copy WP
[Slide diagram: Primary (5773 or 5874) R1 --SRDF/S, Adaptive Copy Disk or Adaptive Copy WP--> Secondary (5874) R21 --SRDF/A or Adaptive Copy WP--> Tertiary (5773 or 5874) R2.]
As you saw in the SRDF/EDP Architecture video, data can be copied in a number of different replication scenarios. Shown is the basis for interoperability and copy modes with SRDF/EDP.
The Diskless R21 data device cannot be used in a Concurrent SRDF/EDP configuration due to its lack of data drives.
Design Considerations for SRDF/EDP
System considerations:
Need Enginuity 5874 code and a Symmetrix V-Max system at the secondary site to implement
Only supported on RF or RE SRDF connections
No ESCON support
Diskless R21 data device considerations:
Cannot be mapped to hosts, or used as a SRC or TGT for Snap, Clone, TimeFinder/Clone Emulation, or for Open Replicator
SRDF considerations:
Diskless R21 devices follow the same rules as a regular R21 device
The R1 of the second leg must be ready before the R1 of the first leg is made ready on the link
Some essential design considerations for SRDF/EDP include:
System considerations:
Need Enginuity 5874 and the Symmetrix V-Max hardware platform at the secondary site to implement
The primary and tertiary systems must run Enginuity 5773 or 5874 code to connect to the secondary system; Enginuity 5773 will need fixes to connect to the new device type in the secondary system
Only supported on RF (Fibre) or RE (GigE) connections; no ESCON support
No SRDF/A on the first leg
Diskless device considerations:
Cannot be mapped to a host
When used for SRDF/A, all devices in the SRDF/A session MUST be diskless
Cannot be used as a SRC or TGT for Snap, Clone, TimeFinder/Clone Emulation, or ORS; cannot be SRDF paired with another Diskless R21 data device
SRDF considerations:
The R1 of the second leg must be ready before the R1 of the first leg is made ready on the link
Device sizes can be configured smaller to larger across the hops
Diskless devices cannot be THIN devices
A diskless device still uses the device table like a disk device, and also consumes a Symmetrix device number
SRDF/EDP Best Practices
The first hop of a Cascaded SRDF configuration should have Consistency Group (ConGroup) enabled with synchronous SRDF mode
An R21 device cannot have one static and one dynamic SRDF mirror. Configure either static or dynamic on both mirrors, not a mixture
For the second hop, the R21 device cannot be in a Consistency Group, because SRDF synchronous mode is not supported on the second hop
A gold copy Clone device at the tertiary site is recommended, should re-synchronization become necessary following a network or link outage
A basic Cascaded SRDF configuration is recommended and should consist of a primary or workload site replicating
synchronously to a secondary site with SRDF/S, and then replicating the same data asynchronously to a tertiary site
with SRDF/A.
The first hop of a Cascaded SRDF configuration should have Consistency Group (ConGroup) enabled with
synchronous SRDF mode.
For the second hop, the R21 device cannot be in a Consistency Group because SRDF synchronous mode is not
supported on the second hop.
A gold copy Clone device at the tertiary site is recommended should re-synchronization become necessary following
a network or link outage.
SRDF/EDP Use Case 1: Production Site Disaster
Unplanned Production Site Disaster and Go Home
[Slide diagram: Primary site R1 (full copy) --SRDF/S with Adaptive Copy--> Secondary site R21 (not a full copy; differences only; not host accessible) --SRDF/A with Adaptive Copy--> Tertiary site R2 (full copy, restartable to the point of disaster). On disaster: drain deltas and start production at the tertiary site. To go home: halt production, reverse-copy with Adaptive Copy, and restart production at the primary site.]
Let's look at SRDF/EDP in action. This use case is a simulated production site disaster with failover and then, ultimately, a return home.
In this environment, there is a 3-site cascaded replication environment, with synchronous replication from site one to site two and asynchronous replication from site two to site three.
At any point in time, there may be an unplanned production site disaster. When this happens, the R21 device at the secondary site will drain its deltas to the failover devices.
After this has completed, restart production at the tertiary site. At this point, production is restartable to the time of the disaster.
If the production box at site one comes back up and is available, we can return home by reversing the direction of replication and copying the changed data from the tertiary site to the secondary to the primary with adaptive copy SRDF.
Then production is halted at the tertiary site so that no new writes are committed. Once the reverse adaptive copy sync is complete, production can start at the primary site and continue with SRDF/EDP replication.
SRDF/EDP Use Case 2: Unplanned Link or Secondary Down
[Slide diagram: Primary site R1 with a full-copy Clone gold copy --SRDF/S--> Secondary site R21 (differences only; not host accessible) --SRDF/A--> Tertiary site R2 with a full-copy Clone gold copy. On a link or secondary outage, production continues without replication protection; when the link or secondary site returns, the tertiary site is differentially resynchronized. Recovery automation is available for the SRDF/A leg.]
The second use case shows what happens in an SRDF/EDP environment when there is an unplanned link failure, or when the secondary site goes down.
Similar to the previous example, there is a cascaded SRDF environment with EDP-constructed replication.
If the secondary site goes away, production will continue at the primary site, however without the protection of replication. Since we are not replicating with SRDF at this point, we can still protect the environment somewhat by copying the data locally with TimeFinder.
If the link or the secondary site comes back online, we can then do a differential re-sync of the changes to the tertiary site.
Keep in mind that at the secondary site there is not a full copy of data, since we are using Diskless R21 data devices in the secondary Symmetrix V-Max system.
SRDF/EDP Use Case 3: Planned Failover with Reverse Asynchronous Replication
[Slide diagram: Primary site R11 (full copy, with Clone gold copy) --SRDF/S--> Secondary site R21 (differences only; not host accessible) --SRDF/A--> Tertiary site R2/R22 (full copy, with Clone gold copy). Planned failover: halt production, drain deltas, take gold copies, start production at the tertiary site with reverse SRDF/A protection and differential resynchronization. To go home: halt production, allow data to drain, resynchronize, and restart production at the primary site.]
Our last scenario with SRDF/EDP is a planned failover to the tertiary site with reverse asynchronous replication back to the primary site.
Using the previous environment as an example, we first halt host write I/O to the volumes at the primary site. Next, we drain the delta differences from the secondary Symmetrix system to the tertiary Symmetrix. As an added level of protection, we take Clone gold copies of the data at both the primary and tertiary sites. Using SRDF/A, we can protect the data volumes at the tertiary site by replicating the changes back to the primary site, and then restart production at the remote site. To return home, we first halt production at the tertiary site and allow the data to drain from that site to the primary site with SRDF/A. We then resynchronize and restart replication at the primary site, and resume normal SRDF/EDP replication with production running at the primary site.
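Because this scenario restarts production on each side only after a drain, the state checks matter. A small sketch of the verification steps, reusing the assumed group names from the earlier examples:

  # Confirm the synchronous leg has no invalid tracks outstanding
  symrdf -g edp_sync verify -synchronized

  # Confirm the asynchronous leg has drained and is consistent
  symrdf -g edp_async verify -consistent

  # Query either leg for pair state, cycle information, and invalid tracks
  symrdf -g edp_async query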
Dynamic Devices With SRDF/A Consistency Exempt
Add/remove devices from an SRDF/A group dynamically with Consistency Exempt (CE)
Dynamic SRDF devices can be added to an active SRDF/A group by setting the Consistency Exempt flag
If set on the R1 side only, the SRDF/A State Machine will propagate the CE flag to the R2 side before the next cycle switch
[Diagram: an SRDF/A consistency group with CE, showing the removal of an active device from a set of R1/R2 pairs.]
With SRDF/A Consistency Exempt, we are able to add and remove devices from SRDF/A dynamically.
Essentially, this allows for the addition and removal of dynamic SRDF devices on an active SRDF/A group. Dynamic SRDF devices can be added to an active SRDF/A group by setting the Consistency Exempt flag.
The consistency calculation ignores any devices with CE set, and reports the number of devices in that mode.
The Consistency Exempt indication is cleared two cycle switches after the new devices are completely synchronized.
If the flag is set on the R1 side only, the SRDF/A State Machine will propagate the CE flag to the R2 side before the next cycle switch.
Similarly, the CE flag can be cleared on either side; if it is cleared on the R1 side only, the SRDF/A State Machine will propagate this information to the R2 side before the next cycle switch.
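Adding pairs to an active SRDF/A group with the exempt option might look like the sketch below. The SID, RDF group number, and pair file are illustrative, and the exact option name should be confirmed against the Solutions Enabler documentation for your release:

  # devpairs.txt lists source/target device pairs, one pair per line.
  # Create the new pairs into the active SRDF/A group (RDF group 10) with
  # Consistency Exempt set, so group consistency is unaffected while the
  # new devices synchronize
  symrdf -sid 000194900123 -rdfg 10 -file devpairs.txt \
         createpair -type R1 -establish -exempt

  # Exempt devices are reported separately until two cycle switches after
  # they are fully synchronized
  symrdf -sid 000194900123 -rdfg 10 -file devpairs.txt query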
Enginuity 5874 SRDF and Migration Support
Enginuity Inter-Family Support:
  5874 with 5874            Full Support
  5874 with 5773            Full Support
  5874 with 5772            Not Supported
  5874 with 5671            SRDF Migration Only
  5874 with 5670 and below  Not Supported
Listed here are the Enginuity inter-family support allowances for SRDF environments and migration. Keep in mind that certain features, such as SRDF/EDP and SRDF/Star with the R22 dual secondary device, are only available in Enginuity 5874. When planning for these features in a mixed Enginuity environment, the placement of the Symmetrix V-Max array will be determined by these requirements. Additionally, prior to implementation, verify the patches (if any) that may be required to support inter-family connectivity.
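When validating a mixed-Enginuity topology, the code level of each attached array can be confirmed with Solutions Enabler (the SID below is illustrative):

  # List attached Symmetrix arrays with their Enginuity (microcode) levels
  symcfg list

  # Show verbose detail, including patch level, for a specific array
  symcfg list -sid 000194900123 -v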
Module 5: Mainframe Support Enhancements
Upon completion of this module, you should be able to:
Describe the enhancements to mainframe support provided with the Symmetrix V-Max system
This module focuses on the enhancements to mainframe support provided with the Symmetrix V-Max array.
Upon completion of this module, you should be able to:
Describe the enhancements to mainframe support provided with the Symmetrix V-Max system
Streamlined Software Packaging
Mainframe Enablers 7.0 Software Distribution Plan:
Combined, tested, shipped, and installed in a single package
One single SMP/E zone used
Multiple FMIDs, one for each product:
- ResourcePak Base (RPB), Symmetrix Remote Data Facility (SRDF), TimeFinder (TF), and Consistency Groups (ConGroups)
All of the following products are shipped if any of them are ordered:
EMC ResourcePak Base for z/OS
EMC Consistency Groups for z/OS
SRDF Host Component for z/OS
EMC TimeFinder/Clone Mainframe SNAP Facility
EMC TimeFinder/Mirror for z/OS
EMC TimeFinder Utility
The packaging for the mainframe has been greatly simplified starting with version 7.0. All components are shipped in a single package, though they may be licensed separately. This simplifies installation and maintenance, increases productivity, and reduces cost.
Note that the current version of ResourcePak Extended (RPE) is not compatible with Enginuity 5874 and Mainframe Enablers 7.0. RPE will be end of life; however, a few of its utilities will be upgraded for Enginuity 5874, including Change Tracker, Data Eraser, and Disk Compare.
Support for Larger Devices
Support for Extended Address Volume (EAV)
Device capacity up to 262,668 cylinders, or 223 GB: 4x the current capacity
(65,520 cylinders)
Matches the current capacity announced with z/OS 1.10 for 3390A Extended
Address Volume
Targeted at
Large installations in dire need of device addressing relief
- Limit of 65,280 devices per LPAR
Environments with significant and growing business continuity requirements
Hyper-volume limit extended from 256 to 512 splits per physical device
Significantly enhances configuration flexibility
Accommodates increasing device density and subsystem density
In addition to increasing the FICON port limit from 48 to 64, the Symmetrix V-Max systems offer several improvements in software as well.
Enginuity 5874 introduces the ability to create and utilize logical volumes of up to 262,668 cylinders. These large volumes are supported in IBM 3390 format, with a capacity of up to 223 GB, and are configured as 3390 Model A devices. This large volume size matches the capacity announced in z/OS 1.10 for the 3390 Extended Address Volume (EAV).
With Enginuity 5874, large volumes may be configured and used in much the same manner as the regular devices that users are already familiar with, and they can co-exist alongside the older volumes. However, there are some limitations to using large volumes, for example with some independent vendor software and with certain access methods.
Enginuity 5874 also introduces a sizeable increase in the number of hypervolumes per physical drive: it is now possible to configure up to 512 hypervolumes, up from 256. Increasing the number of hypervolumes is mandated by the ever-increasing drive capacities being offered.
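As an illustration only, creating such a large CKD volume with Solutions Enabler's symconfigure might look like the sketch below; the command-file syntax (size units, emulation and protection keywords) varies by release, so treat this as a hedged sketch and verify it against the Solutions Enabler array controls documentation before use:

  # Preview the creation of one 262,668-cylinder 3390-format volume
  symconfigure -sid 000194900123 \
    -cmd "create dev count=1, size=262668 cyl, emulation=CKD-3390, config=2-Way Mir;" \
    preview

  # Commit the same command file after a clean preview
  symconfigure -sid 000194900123 \
    -cmd "create dev count=1, size=262668 cyl, emulation=CKD-3390, config=2-Way Mir;" \
    commit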
Enhanced and More Scaled-Up DR Solutions
All SRDF-related enhancements apply to both Open Systems hosts and mainframes:
More SRDF groups, as needed for SRDF/Star and server virtualization
- RDFGRPs per port increased to 64
- RDFGRPs per subsystem increased to 250
SRDF/Extended Distance Protection (SRDF/EDP)
- Three-site, cascaded environment with a diskless R21 device in the middle Symmetrix
Independent consistency protection with concurrent SRDF/S
- A concurrent SRDF device can now be in two different consistency groups
We have already discussed these Enginuity 5874 SRDF enhancements earlier in the course. All of these new features and enhancements apply to Open Systems as well as mainframe environments, and they are summarized here.
Enhanced Management and Monitoring Utilities
Ease-of-Use Enhancements
EzSM - EMC z/OS Storage Manager 3.0
- Mainframe software that provides a z/OS-oriented view of storage through an easy-to-use ISPF interface
- V3.0 is qualified with the Symmetrix V-Max Series with Enginuity
SMC - Symmetrix Management Console 7.0 includes wizards and templates to support:
- Replication Wizard to configure remote replication (SRDF)
- LUN Masking Wizard to support Autoprovisioning Groups
- LUN Migration Wizard to support Enhanced VLUN Technology
- Template Manager to easily configure storage
Advanced Performance Analysis
SPA - Symmetrix Performance Analyzer v1.1 (Service Release)
- Licensed add-on to SMC
- Provides a historical view of all key performance indicators
- Ability to monitor performance and utilization trends by application or system component
SMF 74.8 enhancement
- New SMF record 74, subtype 8 provides additional data: rank statistics, extent pool statistics, link statistics
- Data that can be used for modeling and capacity planning
To facilitate storage array administration for Symmetrix V-Max systems, new versions of EzSM and SMC will become available. EzSM v3.0 (which is expected to GA in Q2) and SMC v7.0 are qualified with Symmetrix V-Max arrays and Enginuity 5874. SMC v7.0 introduces several new wizards and templates that simplify storage management tasks such as replication configuration, provisioning, and VLUN migration.
In addition, a Service Release, v1.1, is planned for Symmetrix Performance Analyzer (SPA) to accommodate Symmetrix V-Max systems.
Another enhancement that enables more effective performance analysis and capacity planning is the new SMF record 74, subtype 8. This provides additional information, including rank statistics, extent pool statistics, and link statistics.
Improved Integration With Mainframe Features
TimeFinder/Clone and FlashCopy on the same volumes is now possible
TimeFinder/Clone enhanced to support larger datasets
Implements z/OS large sequential dataset support
Implements VSAM Extent Constraint Relief
Converting PPRC to SRDF: no longer requires target synchronization
Enginuity 5874 has been enhanced to allow the co-existence of TimeFinder/Clone and FlashCopy on the same volumes. Note that this is not possible with prior versions of Enginuity: previously, users had to set the TimeFinder/Clone Mainframe SNAP Facility (TFCMSF) site option when using FlashCopy to avoid conflicts. With the enhancements in Enginuity 5874, this is no longer required; TimeFinder recognizes when a FlashCopy session is active at the logical volume level.
Changes have also been made to TimeFinder/Clone to support larger datasets. TimeFinder/Clone now implements z/OS large sequential dataset support, a z/OS feature that allows non-SMS sequential datasets to be larger than 65,535 tracks per volume. TimeFinder/Clone also implements VSAM Extent Constraint Relief, a z/OS feature that allows a VSAM dataset to have more than 255 total extents.
With Enginuity 5874, it has become possible to convert an existing IBM PPRC configuration to SRDF (and vice versa) without the requirement to resynchronize the target devices.
Support For Extended Distance FICON
Extended Distance FICON is defined in the May 2008 update to the IBM 2107 specification
Provides for persistent pacing
- Improves link utilization
- Supports greater distances between servers and storage
- Reduces the need for channel extenders, even for distances greater than 50 km
Customer benefit:
- More cost-effective long-distance host-to-storage connectivity
Extended Distance FICON is an enhancement to the industry-standard FICON architecture (FC-SB-3). It implements a new protocol for persistent Information Unit pacing. By increasing the number of read commands in flight, persistent pacing improves link utilization and enables increased distances between servers and control units.
Previously, channel extenders were required for distances of 50 km or more between the server and the storage array. Extended Distance FICON can eliminate or reduce the need for extenders, thereby enabling a more cost-effective solution.
RESOURCES
Symmetrix V-Max system Hardware Tour
Demo: Autoprovisioning Groups
Whiteboard: SRDF/EDP Architecture
EMC Portal:
http://one.emc.com/clearspace/docs/DOC-10132
Sales training:
Q2 2009 Symmetrix V-Max: Announcement Overview
https://learning.emc.com/Saba/Web/Main/goto/180053891
Q2 2009 Symmetrix V-Max: Product & Competitive Overview
https://learning.emc.com/Saba/Web/Main/goto/181150991
Direct Express Training
Q2 2009 Launch: Configuration and Ordering
https://learning.emc.com/Saba/Web/Main/goto/182833211
Job Aids
Videos
Listed here are additional resources available to you to learn more about the Symmetrix V-Max system.
Course Summary
Key points covered in this course:
Symmetrix V-Max Series with Enginuity stands out with its higher performance (over 2X the performance of the DMX-4), more usable capacity, and more efficient cache utilization.
Enginuity 5874 introduces several innovations such as Autoprovisioning Groups
that can greatly simplify storage provisioning in large virtual and physical
environments.
Virtual Provisioning and Virtual LUN empower the storage administrator to non-
disruptively optimize data layout among tiers and RAID levels, without impacting
business continuity.
Support for larger volumes and a larger number of splits per drive allows for more effective use of the higher-capacity drives that have become available with the Symmetrix V-Max system.
Enginuity 5874 introduces SRDF/Extended Distance Protection (SRDF/EDP)
which is an out-of-region business continuity / disaster recovery (BC/DR) solution
with zero RPO.
These are the key points covered in this course. Please take a moment to review them.
This concludes the training. In order to receive credit for this course, please proceed to the Course Completion slide to
update your transcript and access the assessment.