DAVID RING
Virtualization & Storage

EMC VNX2 Drive Layout (Guidelines & Considerations)
September 7, 2015 VNX DRIVE, EMC, Layout, mcx, VNX
Applies only to VNX2 systems.
CHOICES made in relation to the physical placement of drives within a VNX can have an impact on how the VNX performs. The intention here is to shed some light on how best to optimize the VNX by placing drives in their optimal physical locations within the array. The guidelines here deal with optimizing the back-end system resources. While these considerations and examples may help with choices around the physical location of drives, you should always work with an EMC-certified resource when completing such an exercise.

Maximum Available Drive Slots


You cannot exceed the maximum slot count; doing so will result in drives becoming unavailable. Drive form factor and DAE type may be a consideration here to ensure you are not exceeding the stated maximum. Thus the maximum slot count dictates the maximum number of drives, and therefore the overall capacity, a system can support.

BALANCE
BALANCE is the key when designing the VNX drive layout:
Where possible, the best practice is to EVENLY BALANCE each drive type across all available back-end system BUSES. This will result in the best utilization of system resources and help to avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives across Bus 0 Enclosure 0.

DRIVE PERFORMANCE
These are rule-of-thumb figures that can be used as a guideline for each drive type used in a VNX2 system.
Throughput (IOPS) figures are based on small block random I/O workloads:

Bandwidth (MB/s) figures are based on large block sequential I/O workloads:

Recommended Order of Drive Population:


1. FAST Cache
2. FLASH VP
3. SAS 15K

4. SAS 10K
5. NL-SAS
Physical placement should always begin at Bus 0 Enclosure 0 (0_0), and the first drives to be placed are always the fastest drives, as per the order above. Start at the first available slot on each bus and evenly balance the available Flash drives across the first slots of the first enclosure of each bus, beginning with the FAST Cache drives. This ensures that the Flash drives see the lowest latency possible on the system and the greatest ROI is achieved.
FAST CACHE
FAST Cache drives are configured as RAID-1 mirrors and, again, it is good practice to balance the drives across all available back-end buses. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 8 drives per bus (including the spare); FAST Cache drives are extremely I/O intensive and placing more than the recommended maximum per bus may cause I/O saturation on the bus.

Note: Do not mix different drive capacity sizes for FAST Cache; use either all 100GB or all 200GB drive types.
Also for VNX2 systems there are two types of SSD available:
FAST Cache SSDs are single-level cell (SLC) Flash drives that are targeted for use with FAST Cache. These
drives are available in 100GB and 200GB capacities and can be used both as FAST Cache and as TIER-1 drives
in a storage pool.
FAST VP SSDs are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1 drives in a storage pool (not supported as FAST Cache drives). They are available in three capacities: 100GB, 200GB and 400GB.
More detailed post on FAST Cache: EMC VNX FAST Cache
DRIVE FORM FACTOR
Drive form factor (2.5" or 3.5") is an important consideration. For example, you might have a 6-bus system with 6 DAEs (one DAE per bus), consisting of 2 x 2.5" Derringer DAEs and 4 x 3.5" Viper DAEs, laid out as follows:

MCx HOT SPARING CONSIDERATIONS


Best practice is to ensure 1 spare is available per 30 drives of each drive type. When there are drives of the same type in a VNX but with different speeds, form factors or capacities, these should ideally be placed on different buses.
Note: If the vault drives (0_0_0 to 0_0_3) are 300GB in size then no spare is required for them; but if drives larger than 300GB are used and user LUNs are present on the vault drives, then a spare is required.

While all unconfigured drives in a VNX2 array are available to be used as a hot spare, a specific set of rules is used to determine the most suitable drive to use as a replacement for a failed drive:
1. Drive Type: all suitable drive types are gathered.
2. Bus: which of the suitable drives are contained within the same bus as the failing drive.
3. Size: following on from the bus query, MCx will then select a drive of the same size, or if none is available, a larger drive will be chosen.
4. Enclosure: another new feature whereby MCx analyses the results of the previous steps to check whether the enclosure that contains the failing drive has a suitable replacement within the same DAE.
See previous post for more info: EMC VNX MCx Hot Sparing
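If you want to review how MCx has grouped drives for sparing from the CLI, the hot spare policy can be listed; a quick check (policy IDs and the recommended keep-unused values vary with the drive population, and the policy can be adjusted with the -set variant of the same command):
naviseccli -h SP_IP hotsparepolicy -list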
Drive Layout EXAMPLE 1:
VNX 5600 (2 BUS)

FAST Cache:
1 x Spare, 8 x FAST Cache Avail.
8 / 2 BUSES = 4 FAST Cache Drives Per BUS
1 x 2.5 SPARE Placed on 0_0_24

Fast VP:
1 x Spare, 20 x Flash VP Avail.
20 / 2 BUSES = 10 Per BUS
10 x 3.5 Placed on BUS 0 Encl 1
10 x 2.5 Placed on BUS 1 Encl 0
1 x 2.5 SPARE Placed on 1_0_24

Drive Layout EXAMPLE 2:

VNX 5800 (6 BUS)

Drive Layout EXAMPLE 3:


VNX 8000 (16 BUS)

Useful Reference:
EMC VNX2 Unified Best Practices for Performance
EMC VNX Registering RecoverPoint Initiators
July 10, 2015 RecoverPoint, VNX
The VNX will have been previously zoned to the RPAs at this stage. For example purposes, the configuration below has RPA1 port 3 zoned to VNX SP-A & SP-B port 4 on Fabric-A, and RPA1 port 1 zoned to VNX SP-A & SP-B port 5 on Fabric-B. Note: in a synchronous RecoverPoint solution all 4 RPA ports should be zoned.
Parameters as follows:
Initiator Type = RecoverPoint Appliance (-type 31)
Failover Mode = 4 (ALUA; this mode allows the initiators to send I/O to a LUN regardless of which VNX Storage Processor owns the LUN)
RPA1_IP = IP Address of RPA1
RPA1_NAME = Appropriate name for RPA1 (E.g. RPA1-SITE1)
RPA WWNs can be recognized in the SAN by their 50:01:24:81:.. prefix.
Example:
Create a storage group for all RPAs on Site1:
naviseccli -User sysadmin -Password password -Scope 0 -h SP_IP storagegroup -create -gname RPA-Site1-SG
##############
## FABRIC A: ##
##############
RPA1-Port-3 initiator registered to both VNX SP A&B Port 4:

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E3:50:01:24:81:00:64:1C:E3 -type 31 -ip RPA1_IP -host RPA1_NAME -sp a -spport 4 -failovermode 4 -o
naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E3:50:01:24:81:00:64:1C:E3 -type 31 -ip RPA1_IP -host RPA1_NAME -sp b -spport 4 -failovermode 4 -o
##############
## FABRIC B: ##
##############
RPA1-Port-1 initiator registered to both VNX SP A&B Port 5:
naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E1:50:01:24:81:00:64:1C:E1 -type 31 -ip RPA1_IP -host RPA1_NAME -sp a -spport 5 -failovermode 4 -o
naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E1:50:01:24:81:00:64:1C:E1 -type 31 -ip RPA1_IP -host RPA1_NAME -sp b -spport 5 -failovermode 4 -o
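Before checking Unisphere, the registrations can also be verified from the CLI; for example (using the same credentials as above):
naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -list -gname RPA-Site1-SG
naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP port -list -hba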
Registered Initiators displayed in Unisphere:

EMC VNX SMI-S Configuration & Discovery
June 26, 2015 Vblock, VNX ECOM, EMC, SMI-S, Solutions Enabler, VCE, VIPR, Vision, VNX
The following are some configuration notes for configuring SMI-S to allow communication with the VNX Storage Processors; SMI-S can then be leveraged by, for example, VCE Vision or ViPR to configure/report on the VNX array. Before proceeding, ensure you have both VNX Storage Processor A and B IP addresses to hand; the SMI-S host will use these IPs for out-of-band communication with the VNX over IP. The EMC SMI-S Provider is included as part of the Solutions Enabler with SMI-S install package, which can be downloaded from support.emc.com.

Begin by installing the SMI-S Provider, ensuring you select the Array provider (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:

From the Windows services.msc console, check that both the ECOM and storsrvd services are set to automatic and in a running state:

Check that the EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

Or using the SC (service control) command you can query/start/configure the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start= auto
sc config storsrvd start= auto

Run netstat -a and check the host is listening on ports 5988 and 5989:
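A quick filtered check from the command line is another option (illustrative):
netstat -an | findstr "5988 5989"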

UPDATE ENVIRONMENT VARIABLES:


Add the SYMCLI (DRIVE:\Program Files\EMC\SYMCLI\bin) and ECOM (DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) installation directory paths to the list of system paths:

Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"


If experiencing issues such as the ECOM service failing to start, it is worth rebooting the management server at this stage.
ECOM SERVER: ADD A NEW SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for (Vision/ViPR) connectivity:
Open https://localhost:5989/ecomconfig
Log in with the default credentials: admin / #1Password

Select the option to add a new user and create the Vision user with the administrator role and local scope:

Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports (TCP 5988 and 5989) and the SLP port (UDP 427). For example, using the Windows command-line tool netsh to create rules for SLP and ECOM:
netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow
netsh advfirewall firewall add rule name=ECOM dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP


netsh advfirewall firewall show rule name=ECOM

Discover and Add the VNX using TestSMIProvider:


Confirm communication to the VNX from the SMI-S host by running the naviseccli getagent command against both VNX Storage Processors from the Element Manager cmd prompt:
naviseccli -h SPA-IP getagent
(choose option 2 if prompted)
naviseccli -h SPB-IP getagent
(choose option 2 if prompted)
Open a Windows cmd prompt session as an admin user; if the environment variable has not been set then you will need to cd to D:\Program Files\EMC\SYMCLI\bin and then run:
symcfg auth add -host SPA_IP -username sysuser -password syspw
symcfg auth add -host SPB_IP -username sysuser -password syspw
Create a text file, for example called SPIP.txt that contains the IP addresses for SP A&B. Then run the following
commands to discover and list the VNX:
symcfg discover -clariion -file D:\spip.txt
symcfg list -clariion
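To double-check that the credentials were stored for both SPs, the authorization entries can also be listed (output detail varies with the Solutions Enabler version):
symcfg auth list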
Again from a Windows cmd prompt session as an admin user (if the environment variable has not been set you will need to cd to C:\Program Files\EMC\ECIM\ECOM\BIN), type TestSMIProvider.exe at the prompt. From here choose all defaults except for the Vision user and password created through the ECOM console:

At the prompt type addsys to confirm connectivity between the VNX Array and the SMI-S Host:
(localhost:5988) ? addsys
Add System {y|n} [n]: y
ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID
Elements for Addresses
IP address or hostname or array id 0 (blank to quit): SPA_IP
IP address or hostname or array id 1 (blank to quit): SPB_IP
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User [null]: sysuser
Password [null]: syspw
++++ EMCAddSystem ++++
OUTPUT : 0
Legend:0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed
5=Invalid Parameter
4096=Job Queued, 4097=Size Not Supported
Note: Not all above values apply to all methods see MOF for the method.
System : //SPA_IP/root/emc:Clar_StorageSystem.CreationClassName=Clar_Stora
geSystem,Name=CLARiiON+CKM00100000123
In 12.468753 Seconds
Please press enter key to continue

At the prompt type dv to confirm connectivity between the VNX and SMI-S Host:

For any troubleshooting please refer to: C:\Program Files\EMC\ECIM\ECOM\log


Note: When configuring VCE Vision, please ensure you use the SMI-S host IP address for the VNX Block entries in the Vblock.xml configuration file; the NAS portion of the VNX uses the Control Station IP addresses for communication, as these have ECOM configured by default.
How to remove VNX systems using SMI-S remsys command:
1. Log into the SMI-S Provider server.
2. Open a command prompt (cmd).
3. Change (cd) to C:\Program Files\EMC\ECIM\ECOM\bin.
4. Run TestSmiProvider.exe.
5. Enter ein.
6. Enter symm_StorageSystem.
7. Copy the line that specifies the VNX system you want to remove:
Clar_StorageSystem.CreationClassName=Clar_StorageSystem,Name=CLARiiON+CKM001xxxxxxxx
8. Enter remsys.
9. Enter Y.
10. Paste the line specifying the VNX system you want to remove that you copied in the preceding step.
11. Enter Y.
12. Run a dv command to confirm the VNX system has been removed.


Built with EMC SMI-S Provider: V4.6.2
Namespace: root/emc
repeat count: 1
(localhost:5988) ? remsys
Remove System {y|n} [n]: y
Systems ObjectPath[null]: Clar_StorageSystem.CreationClassName=Clar_StorageSys
tem,Name=CLARiiON+CKM001xxxxxxxx
About to delete system Clar_StorageSystem.CreationClassName=Clar_StorageSystem
,Name=CLARiiON+CKM001xxxxxxxx
Are you sure {y|n} [n]: y

EMC VNX Batch Enabler Installation (By Software Suite)
June 2, 2015 VNX

Reference: VNX Command Line Interface Reference for Block:


The naviseccli ndu command -install function transfers one or more SP driver packages
from a user-accessible file system to the system private storage LUN (PSM). Media should
be present before you issue this command.
Preinstallation validation checks identify unsupported or unsafe installation conditions.
You initiate the validation checks functionality when you issue the ndu -install command.
The validation checks run in the background, prior to installing the software. If a validation
check fails, the CLI displays the error and terminates the installation. You can choose to
display all validation checks as the functionality executes by specifying the -verbose switch,
otherwise the CLI only displays failures that prevent installation.
When you install new SP software using the CLI, the only way to determine when the
installation is finished is to issue periodic ndu -status commands until the CLI shows the
operation is completed.
The software prompts for information as needed; then it installs or upgrades the specified
software packages and restarts the SPs. The SPs then load and run the new packages. After
successful installation, it deletes the files from the system.
You can install more than one package with one ndu command.
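As the reference above notes, the CLI only reports progress through periodic ndu -status calls, so a small polling loop can save retyping. A minimal PowerShell sketch (SP_IP is a placeholder; the interval and match string may need adjusting for your environment):

# Poll ndu -status until the install reports completion
Do {
    $status = naviseccli -h SP_IP ndu -status
    $status
    Start-Sleep -Seconds 60
}
While (-not ($status -match "Is Completed:\s+YES"))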

Software suites available for VNX2 and their associated Enablers:

ENABLER INSTALL PROCEDURE USING CLI


Check the list of all ENABLERS currently installed on the VNX:
naviseccli -h SP_IP ndu -list
A series of rule checks needs to be performed in advance; correct any rule failures before proceeding:
naviseccli -h SP_IP ndu -runrules -listrules
Your configuration will run the following rules
===============================================
Host Connectivity
Redundant SPs
No Thin Provisioning Transitions
Version Compatibility
No Active Replication I/O
Acceptable Processor Utilization
Statistics Logging Disabled
No Transitions
All Packages Committed
Special Conditions
No Trespassed LUNs
No System Faults
No Interrupted Operations
No Incompatible Operations
FAST Cache Status
No Un-owned LUNs
Run through the Pre-installation rules to ensure the success of this software upgrade:
naviseccli -h SP_IP ndu -runrules -verbose
A common result is a warning for trespassed LUNs:
RULE NAME: No Trespassed LUNs
RULE STATUS: Rule has warning.
RULE DESCRIPTION: This rule checks for trespassed LUNs on the storage system.
A total of 1 trespassed LUNs were found.
RULE INSTRUCTION: If these LUNs are not trespassed back, connectivity will be disrupted.
To remediate this rule failure and change the Current Owner, you will need to execute a trespass command on the LUN using naviseccli, or by right-clicking the LUN in Unisphere and selecting the trespass option:
naviseccli -h SP_IP trespass lun 1
If changing multiple LUNs, then running the trespass mine command from the SP will trespass all the LUNs that the SP has DEFAULT ownership of. For example, to trespass LUNs with default ownership of SP B but which are currently owned by SP A:

naviseccli -h SPB_IP trespass mine
If the "Statistics Logging Disabled : Rule failed." result is returned, disable statistics logging before the install:
naviseccli -h SP_IP setstats -off
Local Protection Suite Enablers:
naviseccli -h SP_IP ndu -runrules SnapViewEnabler-01.01.5.002-xpfree.ena VNXSnapshot-01.01.5.001.ena RPSplitterEnabler-01.01.5.002.ena -verbose
naviseccli -h SP_IP ndu -install SnapViewEnabler-01.01.5.002-xpfree.ena VNXSnapshot-01.01.5.001.ena RPSplitterEnabler-01.01.5.002.ena -delay 360 -force -gen -verbose
Remote Protection Suite Enablers:
naviseccli -h SP_IP ndu -runrules MirrorViewEnabler-01.01.5.002-xpfree.ena MVAEnabler-01.01.5.002-xpfree.ena RPSplitterEnabler-01.01.5.002.ena -verbose
naviseccli -h SP_IP ndu -install MirrorViewEnabler-01.01.5.002-xpfree.ena MVAEnabler-01.01.5.002-xpfree.ena RPSplitterEnabler-01.01.5.002.ena -delay 360 -force -gen -verbose
FAST Suite Enablers:
naviseccli -h SP_IP ndu -runrules FASTCacheEnabler-01.01.5.008.ena FASTEnabler-01.01.5.008.ena
-verbose
naviseccli -h SP_IP ndu -install FASTCacheEnabler-01.01.5.008.ena FASTEnabler-01.01.5.008.ena -delay
360 -force -gen -verbose
Additional Enabler Software:
naviseccli -h SP_IP ndu -runrules ThinProvisioning-01.01.5.008.ena CompressionEnabler-01.01.5.008.ena DeduplicationEnabler-01.01.5.001.ena DataAtRestEncryptionEnabler-01.01.4.001-armada54_free.ena -verbose
naviseccli -h SP_IP ndu -install ThinProvisioning-01.01.5.008.ena CompressionEnabler-01.01.5.008.ena
DeduplicationEnabler-01.01.5.001.ena DataAtRestEncryptionEnabler-01.01.4.001-armada54_free.ena
-delay 360 -force -gen -verbose
Monitoring the progress of the installation:
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Activating software on primary SP
Operation: Install
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Completing install on secondary SP
Operation: Install
naviseccli -h SP_IP ndu -status
Is Completed: YES
Status: Operation completed successfully
Operation: Install
naviseccli -h SP_IP ndu -list -name -RPSplitterEnabler
Commit Required: NO
Revert Possible: NO
Active State: YES
Is installation completed: YES
Is this System Software: NO

Re-enable stats logging:


naviseccli -h SP_IP setstats -on
If uninstall required:
naviseccli -h SP_IP ndu -messner -uninstall -RPSplitterEnabler -delay 360
Uninstall operation will uninstall
-RPSplitterEnabler
from both SPs Set NDU delay with interval time of 360 secs.Do you still want to continue. (y/n)? y
EMC VNX RecoverPoint Enabler Installation
May 8, 2015 RecoverPoint, VNX EMC, Enabler, Install, RecoverPoint, Splitter, VNX
Installing the RecoverPoint Enabler using NAVICLI:
Check the list of all ENABLERS currently installed on the VNX:
naviseccli -h SP_IP ndu -list
A series of rule checks needs to be performed in advance; correct any rule failures before proceeding:
naviseccli -h SP_IP ndu -runrules -listrules
Your configuration will run the following rules
===============================================
Host Connectivity
Redundant SPs
No Thin Provisioning Transitions
Version Compatibility
No Active Replication I/O
Acceptable Processor Utilization
Statistics Logging Disabled
No Transitions
All Packages Committed
Special Conditions
No Trespassed LUNs
No System Faults
No Interrupted Operations
No Incompatible Operations
FAST Cache Status
No Un-owned LUNs
Run through the Pre-installation rules to ensure the success of this software upgrade:
naviseccli -h 10.73.113.40 ndu -runrules -verbose
A common result is a warning for trespassed LUNs:
RULE NAME: No Trespassed LUNs
RULE STATUS: Rule has warning.
RULE DESCRIPTION: This rule checks for trespassed LUNs on the storage system.
A total of 1 trespassed LUNs were found.
RULE INSTRUCTION: If these LUNs are not trespassed back, connectivity will be disrupted.
To remediate this rule failure and change the Current Owner, you will need to execute a trespass command on the LUN using naviseccli, or by right-clicking the LUN in Unisphere and selecting the trespass option:
naviseccli -h SP_IP trespass lun 1
If changing multiple LUNs, then running the trespass mine command from the SP will trespass all the LUNs that the SP has DEFAULT ownership of. For example, to trespass LUNs with default ownership of SP B but which are currently owned by SP A:
naviseccli -h SPB_IP trespass mine

If the "Statistics Logging Disabled : Rule failed." result is returned, disable statistics logging before the install:


naviseccli -h SP_IP setstats -off
Confirm all rule checks for RPSplitterEnabler are met:
naviseccli -h SP_IP ndu -runrules c:\VNX\Enablers\RPSplitterEnabler-01.01.5.002.ena -verbose
Running install rules...
===============================================
Version Compatibility : Rule passed.
Redundant SPs : Rule passed.
Acceptable Processor Utilization : Rule passed.
No Trespassed LUNs : Rule passed.
No Transitions : Rule passed.
No System Faults : Rule passed.
All Packages Committed : Rule passed.
Special Conditions : Rule passed.
Statistics Logging Disabled : Rule passed.
Host Connectivity : Rule passed.
No Un-owned LUNs : Rule passed.
No Active Replication I/O : Rule passed.
No Thin Provisioning Transitions : Rule passed.
No Incompatible Operations : Rule passed.
No Interrupted Operations : Rule passed.
FAST Cache Status : Rule passed.
Install the RPSplitterEnabler:
naviseccli -h SP_IP ndu -install c:\VNX\Enablers\RPSplitterEnabler-01.01.5.002.ena -delay 360 -force -gen -verbose
Name of the software package: -RecoverpointSplitter
Already Installed Revision NO
Installable YES
Disruptive upgrade: NO
NDU Delay: 360 secs
Monitoring the progress of the installation:
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Activating software on primary SP
Operation: Install
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Completing install on secondary SP
Operation: Install
naviseccli -h SP_IP ndu -status
Is Completed: YES
Status: Operation completed successfully
Operation: Install
naviseccli -h SP_IP ndu -list -name -RPSplitterEnabler
Commit Required: NO
Revert Possible: NO
Active State: YES
Is installation completed: YES
Is this System Software: NO

Re-enable stats logging:


naviseccli -h SP_IP setstats -on
If uninstall required:
naviseccli -h SP_IP ndu -messner -uninstall -RPSplitterEnabler -delay 360
Uninstall operation will uninstall
-RPSplitterEnabler
from both SPs Set NDU delay with interval time of 360 secs.Do you still want to continue. (y/n)? y
Installing the RecoverPoint Enabler via Unisphere Service Manager (USM):

EMC VNX List of Useful NAS Commands
April 23, 2015 VNX Blade, COMMANDS, Control Station, DATAMOVER, EMC, FILE, NAS, TROUBLESHOOTING, VG, VNX, VNX2
Verify NAS Services are running:
Log in to the Control Station as nasadmin and issue the command /nas/sbin/getreason from the CS console. The reason code output should be as follows (see the detailed list of reason codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted
Complete a full Health Check:
/nas/bin/nas_checkup
Location of the output:
cd /nas/log/
ls
cat /nas/log/checkup-<run date>.log

Confirm the EMC NAS version installed and the model name:
/nasmcd/bin/nas_version
/nas/sbin/model
Stop NAS Services:
/sbin/service nas stop
Start NAS Services:
/sbin/service nas start
Check running status of the DATA Movers and view which slot is active/standby:
nas_server -info -all
Verify connectivity to the VNX storage processors (SPs) from the Control Stations:
/nas/sbin/navicli -h SPA_IP domain -list
/nas/sbin/navicli -h SPB_IP domain -list
Confirm the VMAX/VNX is connected to the NAS:
nas_storage -check -all
nas_storage -list
View VNX NAS Control LUN Storage Group details:
/nas/sbin/navicli -h SP_IP storagegroup -list -gname ~filestorage
List the disk table to ensure all of the Control Volumes have been presented to both Data Movers:
nas_disk -list
Check the File Systems:
df -h
server_sysconfig server_2 -virtual
/nas/bin/nas_storage -info -all
View trunking devices created (LACP, EtherChannel, FSN):
server_sysconfig server_2 -virtual
Example: view the trunk device named LACP_NAS:
server_sysconfig server_2 -virtual -info LACP_NAS
server_2 :
*** Trunk LACP_NAS: Link is Up ***
*** Trunk LACP_NAS: Timeout is Short ***
*** Trunk LACP_NAS: Statistical Load Balancing is IP ***
Device Local Grp Remote Grp Link LACP Duplex Speed
------------------------------------------------------------------------
fxg-1-0 10002 51840 Up Up Full 10000 Mbs
fxg-1-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-0 10002 51840 Up Up Full 10000 Mbs
Check Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version
View Network Configuration:
To display the interface configuration on the Control Station and Data Movers:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
Data Movers: server_ifconfig server_2 -all
cat ifcfg-eth3

View VNX SP IP Addresses from the CS console:


grep SP /etc/hosts | grep A_
grep SP /etc/hosts | grep B_
Verify Control Station Comms:
/nas/sbin/setup_enclosure -checkSystem
Confirm the unified FLAG is set:
/nas/sbin/nas_hw_upgrade -fc_option -enable
Date & Time:
Control Station: date
Data Movers: server_date ALL
Check IP & DNS info on the CS/DM:
nas_cs -info
server_dns ALL
Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Output:
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds
Check the Data Mover Logs:
server_log server_2
Failing over a Control Station:
Failover:
/nas/sbin/./cs_standby -failover
Takeover:
/nasmcd/sbin/./cs_standby -takeover
Or reboot:
nas_cs reboot
Shutdown control station:
/sbin/shutdown -h now
Power off CS1 from CS0:
/nas/sbin/t2reset pwroff -s 1
List on which VMAX3 directors each CS and DM are located:
nas_inventory -list
List Data Mover PARAMETERS:
/nas/bin/server_param server_2 -info
/nas/bin/server_param server_3 -info
/nas/bin/server_param server_2 -facility -all -list
/nas/bin/server_param server_3 -facility -all -list
Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info all

Initiate a manual failover of server_2 to the standby Datamover:


server_standby server_2 -activate mover
List the status of the Datamovers:
nas_server -list
Review the information for server_2:
nas_server -info server_2
All DMs: nas_server -info ALL
Shutdown Datamover (Xblade):
/nas/bin/server_cpu server_2 -halt now
Power on the Datamover (Xblade):
/nasmcd/sbin/t2reset pwron -s 2
Restore the original primary Datamover:
server_standby server_2 -restore mover
To monitor an immediate cold restart of server_2:
server_cpu server_2 -reboot cold -monitor now
A cold reboot or a hardware reset shuts down the Data Mover completely before restarting, including a Power
on Self Test (POST).
To monitor an immediate warm restart of server_2:
server_cpu server_2 -reboot -monitor now
A warm reboot or a software reset performs a partial shutdown of the Data Mover, and skips the POST after
restarting. A software reset is faster than the hardware reset.
Clean Shutdown:
Shut down the Control Stations and Data Movers:
/nasmcd/sbin/nas_halt -f now
(The shutdown is finished when the "exited on signal" message appears.)
Power down the entire VNX including the Storage Processors:
/nasmcd/sbin/nas_halt -f -sp now
Check if Product Serial Number is Correct:
/nasmcd/sbin/serial -db_check
Remove inconsistency between the db file and the enclosures:
/nasmcd/sbin/serial -repair
List All Hardware components by location:
nas_inventory -list -location
nas_inventory -list -location | grep "DME 0 Data Mover 2 IO Module"
Use the location address to view specific component details (quote the address, as it contains pipe characters):
nas_inventory -info "system:VNX5600:CKM001510001932007|enclosure:xpe:0|mover:VNX5600:2|iomodule::1"
List of Reason Codes:
0 Reset (or unknown state)
1 DOS boot phase, BIOS check, boot sequence
2 SIB POST failures (that is, hardware failures)
3 DART is loaded on Data Mover, DOS boot and execution of boot.bat, boot.cfg.
4 DART is ready on Data Mover, running, and MAC threads started.
5 DART is in contact with Control Station box monitor.
6 Control Station is ready, but is not running NAS service.
7 DART is in panic state.
9 DART reboot is pending or in halted state.
10 Primary Control Station reason code

11 Secondary Control Station reason code


13 DART panicked and completed memory dump (single Data Mover configurations only, same as code 7, but
done with dump)
14 This reason code can be set for the Blade for any of the following:
Data Mover enclosure-ID was not found at boot time
The Data Mover's local network interface MAC address is different from the MAC address in the configuration file
The Data Mover's serial number is different from the serial number in the configuration file
Data Mover was PXE booted with install configuration
SLIC IO Module configuration mismatch (Foxglove systems)
15 Data Mover is flashing firmware. DART is flashing BIOS and/or POST firmware. Data Mover cannot be
reset.
17 Data Mover Hardware fault detected
18 DM Memory Test Failure. BIOS detected memory error
19 DM POST Test Failure. General POST error
20 DM POST NVRAM test failure. Invalid NVRAM content error (checksum, WWN, etc.)
21 DM POST invalid peer Data Mover type
22 DM POST invalid Data Mover part number
23 DM POST Fibre Channel test failure. Error in blade Fibre connection (controller, Fibre discovery, etc.)
24 DM POST network test failure. Error in Ethernet controller
25 DM T2NET Error. Unable to get blade reason code due to management switch problems.
EMC VNX ArrayConfig & SPCollect (Powershell Script)
January 26, 2015 VNX Array Capture, Backup, Config, EMC, Navicli, Powershell, SPCollect,VNX
Reference: VNX Command Line Interface Reference for Block:
SPCollect: The naviseccli spcollect command selects a collection of system log files and places them in a single .zip file on the system. You can retrieve the file from the system using the managefiles command. Important: the SPCollect functionality can affect (may degrade) system performance.
ArrayConfig: The arrayconfig -capture command queries the system for its configuration along with I/O port configuration information. When issued, the command captures a system's essential configuration data. The information is formatted and stored on the client workstation. The generated file can be used as a template to configure other systems, or to rebuild the same system if the previous configuration is destroyed.
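For reference, run manually these operations look roughly like the following (SP_IP, the output paths and the data.zip file name are placeholders; the flags mirror those used by the script below):
naviseccli -h SP_IP arrayconfig -capture -output c:\VNX\config.xml
naviseccli -h SP_IP spcollect -messner
naviseccli -h SP_IP managefiles -list
naviseccli -h SP_IP managefiles -retrieve -path c:\VNX -file <data.zip file name> -o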
Using native Navi commands integrated with PowerShell, this script automates the process of backing up the current VNX configuration along with the latest SPCollect log files. You will just need to complete a few simple user entries:
SP A & B IP addresses
Username & password
Backup directory
The script will automatically create a sub-directory in the backup location provided. For example, if you input a backup directory of C:\VNX this will result in a backup location of C:\VNX\VNXserial_timeDate
Example Script Input:

Expected Script Output:

The backup directory location will automatically open on completion of the script:

Download HERE, remove the .doc extension and replace it with .ps1, or use the full text below:

############################
#
# Reference: VNX CLI Docs
# Script: VNX BACKUPS
#
# Date: 2015-01-23 14:30:00

#
# Version Update:
# 1.0 David Ring

#
############################

######## Banner ########


Write-Host " "
Write-Host "#######################################"
Write-Host "## VNX Configuration & LOGS Backup ##"
Write-Host "#######################################"

### VNX SP IP's, User/PW & Backup Location ###


$SPAIP = Read-Host 'IP Address for Storage Processor A:'
$SPBIP = Read-Host 'IP Address for Storage Processor B:'
$User = Read-Host 'VNX Username:'
$Password = Read-Host 'VNX Password:'
$BackupLocation = Read-Host "Backup Location:(A sub-dir with the current Time & Date will be created):"

$ArrayConfig = (naviseccli -user $User -password $Password -scope 0 -h $SPAIP getagent | Select-String "Serial No:")
$ArrayConfig = $ArrayConfig -replace "Serial No:",""
$ArrayConfig = $ArrayConfig -replace " ",""

$BackupLocation = (join-path -Path $BackupLocation -ChildPath ($ArrayConfig +"_"+ "$(date -f HHmmddMMyyyy)"))
IF(!(Test-Path "$BackupLocation")){new-item "$BackupLocation" -ItemType directory | Out-Null}
$BackupLocation = "`"$BackupLocation`""

Write-Host "Storage Processor 'A':" $SPAIP


Write-Host "Storage Processor 'B':" $SPBIP
Write-Host "VNX Username:" $User

Write-Host "VNX Password:" $Password


Write-Host "VNX Serial Number:" $ArrayConfig
Write-Host "Backup Location Entered:" $BackupLocation

Start-Sleep -s 10

$BackupName = $ArrayConfig+"_"+$(date -f HHmmddMMyyyy)+".xml" ; naviseccli -user $User -password $Password -scope 0 -h $SPAIP arrayconfig -capture -output $BackupLocation"\"$BackupName

Write-Host $ArrayConfig "Configuration Data Has Been Backed Up In XML Format!"

Start-Sleep -s 5

### Gather & Retrieve SP Collects for both Storage Processors ###
Write-Host "Now Generating Fresh Storage Processor 'A' & 'B' Collects!"
$GenerateSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP spcollect -messner
$GenerateSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP spcollect -messner
Start-Sleep -s 10

### Storage Processor 'A' LOG Collection ###

## WHILE SP_A '*RUNLOG.TXT' FILE EXISTS THEN HOLD ...RESCAN EVERY 90 SECONDS ##
Do {
$listSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -list | select-string
"_runlog.txt"
$listSPA
Start-Sleep -s 90
Write-Host "Generating Log Files For Storage Processor 'A' Please Wait!"
}

While ($listSPA -like '*runlog.txt')

Write-Host "Generation of SP-'A' Log Files Now Complete! Proceeding with Backup."

Start-Sleep -s 15

$latestSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -list | Select-string
"data.zip" | Select-Object -Last 1
$latestSPA = $latestSPA -split " "; $latestSPA=$latestSPA[6]
$latestSPA
$BackupSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -retrieve -path
$BackupLocation -file $latestSPA -o

Start-Sleep -s 10

### Storage Processor 'B' LOG Collection ###

## WHILE SP_B '*RUNLOG.TXT' FILE EXISTS THEN HOLD ...RESCAN EVERY 15 SECONDS ##
Do {
$listSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -list | select-string
"_runlog.txt"
$listSPB
Start-Sleep -s 15
Write-Host "Generating Log Files For Storage Processor 'B' Please Wait!"
}
While ($listSPB -like '*runlog.txt')

Write-Host "Generation of SP-'B' Log Files Now Complete! Proceeding with Backup."

Start-Sleep -s 10

$latestSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -list | Select-string
"data.zip" | Select-Object -Last 1
$latestSPB = $latestSPB -split " "; $latestSPB=$latestSPB[6]
$latestSPB
$BackupSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -retrieve -path
$BackupLocation -file $latestSPB -o

$BackupLocation = $BackupLocation -replace '"', ""


invoke-item $BackupLocation

Read-Host "Confirm Presence of 'Array Capture XML' and 'SP Collects' in the Backup Directory!"

See Also @Pragmatic_IO post: EMC VNX Auto Array Configuration Backups using NavisecCLI and
Powershell
EMC VNX Managing Data Mover (Blade) Relationships
June 7, 2013 VNX Blade, Data Mover, VNX
Today I had a VNX Data Mover (Blade) SLIC issue where the failover relationships were not as they should be. The following commands are very useful for checking the relationships between the Blades and restoring those relationships if necessary.
You can determine the Blade failover status through the following command:
/nas/bin/nas_server -info all

The type field indicates whether the Blade is a primary Blade (nas) or a standby Blade (standby).
The standbyfor field lists the primary Blade for which the Blade is the standby.
If the name field indicates that a primary Blade is faulted and has failed over to its standby, you need to perform a restore on that primary Blade. Enter the following command:
/nas/bin/server_standby server_4 -restore mover
If a failover relationship is not as it should be, you can delete it as follows. For example, deleting server_2's (primary) relationship with server_4 (standby):
/nas/bin/server_standby server_2 -delete mover=server_4
It is then possible to create a failover relationship between server_2 (primary) and server_5 (standby):
/nas/bin/server_standby server_2 -c mover=server_5 -policy auto

The following commands can be used to halt and reboot the Blade:
/nasmcd/sbin/t2reset -s 5 halt
/nasmcd/sbin/t2reset reboot -s 5
Another useful command you can use in order to list the I/O modules in the Blade:
/nasmcd/sbin/t2vpd -s 5

EMC VNX LUN MIGRATION
April 24, 2013 VNX EMC, VNX
This post is a result of having to migrate Cisco UCS blade boot LUNs between RAID Groups. Both the source and destination were Classic RAID Group LUNs (FLARE LUNs/FLUs). The migration occurred across two DAEs, with each DAE populated with one RAID Group. This is the ideal scenario for best migration performance, with the load spread across different RAID Groups and across back-end ports. The migration rates have an incremental effect on Storage Processor utilization. We used the ASAP migration rate, which was acceptable as the system was not in production and would therefore not impact the performance of other workloads. Also note that two ASAP migrations can be performed simultaneously per Storage Processor.
For migration of a LUN within a VNX array the following rates can be used in the calculation:

Priority Rate (MB/s)


Low 1.4 MB/s
Medium 13 MB/s
High 44 MB/s
ASAP 85 MB/s
Calculation for the migration of a LUN is as follows:
Time = (Source LUN Capacity x (1 / Migration Rate)) + ((Destination LUN Capacity - Source LUN Capacity) x (1 / Initialization Rate))
Calculation for the migration of a 20GB boot LUN between RAID Groups at ASAP speed (85MB/s):
Source LUN = 20000MB
Destination LUN = 20000MB
Migration Rate is 85MB/s = 306000 MB/hour
Initialization Rate = 306000 MB/hour
(20000 x (1 hour / 306000)) + ((20000 - 20000) x (1 hour / 306000))
= 0.0653 hours
= 3.9 minutes per boot LUN
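If you want to script the estimate, the same formula can be dropped into a few lines of PowerShell (a sketch; the capacities and the rule-of-thumb rate are the example values above):

# Estimate VNX LUN migration time using the rule-of-thumb rates
$sourceMB = 20000       # Source LUN capacity in MB
$destMB   = 20000       # Destination LUN capacity in MB
$rateMBs  = 85          # ASAP rate in MB/s (Low=1.4, Medium=13, High=44)
$seconds  = ($sourceMB / $rateMBs) + (($destMB - $sourceMB) / $rateMBs)
"Estimated migration time: {0:N1} minutes" -f ($seconds / 60)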
The following command can be used if CLI is your preferred option:
naviseccli migrate -start -source 15 -dest 16 -rate asap
Where the source LUN ID is 15, the destination LUN ID is 16 and the priority is ASAP.

Check Percentage Complete:


naviseccli -h x.x.x.x migrate -list -source 15 -percentcomplete
Check Time Remaining:
naviseccli -h x.x.x.x migrate -list -source 15 -timeremaining
Note: The LUN Migration feature can also be used to migrate to a larger LUN, thus increasing the capacity of the source LUN. The whole process can be completed online and non-disruptively (transparent to VMware ESXi hosts) and does not require any post-configuration. Once the migration completes, the original source LUN is deleted from the array and the new LUN takes the source LUN's WWN and LUN ID.
EMC VNX OE Upgrade Via USM (Dual CS)
February 22, 2013 VNX #EMC #VNX #FLARE
Here I will detail the steps involved in upgrading the OE for File and Block on the VNX. Using Unisphere Service Manager (USM) is the recommended method of upgrade. USM now supports an upgrade if the VNX has two Control Stations, provided you are running a minimum code level of 7.0.50.x. Note that VG2 and VG8 systems must continue to use the CLI procedure for upgrading.
Download the required software from the EMC Support site.
The VNX OE for File software packages required for this procedure are:
a. pkg_x.x.x.x_emcnas.tar and pkg_x.x.x.x_emcnas.tar.checksum (if available)
b. x.x.x.x_emcnas_CD1.iso and x.x.x.x_emcnas_CD1.iso.checksum (if available).

Notes:
The DVD ISO image cannot be used.
If you do not have the expected checksum file, Unisphere Service Manager (USM) will warn that it cannot validate the software file and that it may be corrupt. If you are sure that your files were downloaded correctly, you can ignore this message.
c. The pkg and iso files should be placed in a folder with the following naming convention: c:\EMC\repository\downloads\vnx\<date>\<version>.upg, for example c:\EMC\repository\downloads\vnx\22022013\7.0.54.3.upg
Note: The .upg is required for the folder name.
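If you want to sanity-check the downloads yourself before USM validates them, a hash comparison is one option; a sketch using PowerShell's Get-FileHash (compare the result against the value inside the matching .checksum file, and note that the algorithm EMC uses for the .checksum file may differ from the one shown):

Get-FileHash -Algorithm MD5 "c:\EMC\repository\downloads\vnx\22022013\7.0.54.3.upg\pkg_x.x.x.x_emcnas.tar"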
The VNX OE for Block software packages required for this procedure will be:
a. CX5-Bundle-05.3x.000.5.xxx.pbu file
b. Any enablers that you are to install.
Ensure that the destination versions of the VNX OE for Block and VNX OE for File software are compatible.
Upgrade the VNX OE for File software before upgrading the VNX OE for Block software.
Start Unisphere Service Manager.
Log in to the storage system to be updated using the IP address of one of the SPs and administrator credentials.
Click on the Software tab, then System Software.
Run the Prepare for Installation step to check for any issues, and address them.

After you have run through the Prepare wizard, select the Install Software step.
Select the type of upgrade that you will be performing: Install VNX OE for File.
Choose your upgrade package.

During the File software upgrade, you will be asked whether you want to reboot the Data Movers during the upgrade or after the upgrade.

Follow the prompts in the wizard, answering the prompts to reboot if you indicated that you wanted to reboot during the upgrade rather than afterwards.

Did the File OE upgrade complete successfully? If yes, go to the next step.


Block OE Upgrade
Select Express Install or Custom Install. (Express Install is offered if software was pre-staged using
the Prepare for Installation option within the last 2 days.)
Follow the wizard steps to upgrade the block software
EMC VNX FAST CACHE
February 21, 2013 VNX EMC VNX, FAST CACHE, VNX
EMC FAST Cache technology gives a performance enhancement to the VNX storage array by adding Flash drives as a secondary cache, working hand-in-hand with DRAM cache and enhancing overall array performance. EMC recommends first using available Flash drives for FAST Cache and then adding Flash drives as a tier to selected FAST VP pools as required. FAST Cache works with all VNX systems (and also CX4) and is activated by installing the required FAST Cache enabler. FAST Cache works with both traditional FLARE LUNs and VP pools.
Note: FAST Cache is enabled at the pool-wide level and cannot be selective for specific LUNs within the pool.

The initial configuration is quite simple: after adding the required quantity of drives you can create FAST Cache through the System Properties console in Unisphere, which will enable FAST Cache for system-wide use:

Or you can use naviseccli commands to create and monitor the initialization status of FAST Cache:
naviseccli -h SPA_IP cache -fast -create -disks <disk list> -mode rw -rtype r_1
Check on the status of FAST Cache creation:
naviseccli -h SPA_IP cache -fast -info -status
If you wish to enable|disable FAST Cache on a specific VP_POOL:
naviseccli -h SPA_IP storagepool -modify -name Pool_Name -fastcache off|on
Check what Pools have FAST Cache enabled/disabled:
naviseccli -h SPA_IP storagepool -list -fastcache
If the FAST Cache configuration requires any modification, it first needs to be disabled. Disabling (destroying) FAST Cache flushes all dirty blocks back to disk; once FAST Cache has finished disabling, you may re-create FAST Cache with your new configuration.
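If you prefer the CLI for this step, FAST Cache can be destroyed (the flush can take a while on a busy system) and then re-created with the command shown above:
naviseccli -h SPA_IP cache -fast -destroy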
Configuration Options
FAST Cache configuration options range from 100GB on a CX4-120 to 4.2TB of FAST Cache on a VNX-8000.
CX4 Systems:
CX4-120 100GB
CX4-240 200GB
CX4-480 800GB
CX4-960 2000GB
VNX1 Systems:
VNX 5100 100GB
VNX 5300 500GB
VNX 5500 1000GB
VNX 5700 1500GB
VNX 7500 2100GB
VNX2 Systems:
VNX 5200 600GB
VNX 5400 1000GB
VNX 5600 2000GB
VNX 5800 3000GB
VNX 7600 4200GB
VNX 8000 4200GB

FAST Cache drives are configured as RAID-1 mirrors and it is good practice to balance the drives across all available back-end buses; this is because FAST Cache drives are extremely I/O intensive and placing more than the recommended maximum per bus may cause I/O saturation on the bus. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 4 drives per bus on a CX/VNX1 system and 8 on a VNX2. It is best practice on a CX/VNX1 to avoid placing drives on the DPE or 0_0 in a way that will result in one of the drives being placed in another DAE; for example, DO NOT mirror a drive in 0_0 with a drive in 1_0.
The order the drives are added into FAST Cache is the order in which they are bound, with the:
first drive being the first Primary;
the second drive being the first Secondary;
the third drive being the next Primary and so on
You can view the internal private RAID_1 Groups of FAST Cache by running the following Navicli:
naviseccli -h SPA_IP getrg EngineeringPassword
Note: Do not mix different drive capacity sizes for FAST Cache; use either all 100GB or all 200GB drive types.
Also, for VNX2 systems there are two types of SSD available:
FAST Cache SSDs are single-level cell (SLC) Flash drives that are targeted for use with FAST Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST Cache and as TIER-1 drives in a storage pool.
FAST VP SSDs are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1 drives in a storage pool (not supported as FAST Cache drives). They are available in three capacities: 100GB, 200GB and 400GB.
Example FAST Cache Drive Layout (VNX 8000)
The following image is an example of a VNX 8000 with the maximum allowed FAST Cache configuration (42 x 200GB FAST Cache SSDs). As you can see, I have spread the FAST Cache drives as evenly as possible across the BE buses, beginning with DAE 0_0, in order to achieve the lowest latency possible:

FAST Cache Internals


FAST Cache is built on the Unified LUN technology; thus the data in FAST Cache is as secure as any other LUN in the CX/VNX array. FAST Cache is nonvolatile storage that survives both power and SP failures, and it does not have to re-warm after a power outage either.
There is a certain amount of DRAM allocated during FAST Cache creation for the I/O tracking of FAST Cache, known as the memory map. This FAST Cache bitmap is directly proportional to the size of the FAST Cache; the memory allocation is in the region of 1MB for every 1GB of FAST Cache created. So when FAST Cache is being enabled, FLARE will attempt to grab approximately one third of the required memory from the read cache and two thirds from the write cache, and then re-adjusts the existing DRAM read and write caches accordingly.
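As a rough worked example of the rule of thumb above (a sketch only; actual allocations vary by model and code revision):

# Approximate DRAM set aside for the FAST Cache memory map
$fastCacheGB = 1000                              # e.g. a 1TB FAST Cache configuration
$memoryMapMB = $fastCacheGB * 1                  # ~1MB of DRAM tracking per 1GB of FAST Cache
$fromReadMB  = [math]::Round($memoryMapMB / 3)   # ~1/3 taken from DRAM read cache
$fromWriteMB = $memoryMapMB - $fromReadMB        # ~2/3 taken from DRAM write cache
"Memory map ~{0}MB (read cache ~{1}MB, write cache ~{2}MB)" -f $memoryMapMB, $fromReadMB, $fromWriteMB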
With a compatible workload, FAST Cache increases performance by reducing the response time to hosts and provides higher throughput (IOPS) for busy areas that may be under pressure from drive or RAID limitations. Apart from being able to cache read and write I/Os, the Storage Processors on the VNX also coalesce writes and pre-fetch reads to improve performance. However, these operations generally do not accelerate random read-heavy I/O, and this is where FAST Cache helps. FAST Cache monitors the Storage Processors' I/O activity for blocks that are read or written multiple times; the third I/O to any block within a 64K extent gets scheduled for promotion to FAST Cache, and promotion is handled the same way as writing or reading an I/O to a LUN. The migration process operates 24x7, using a Least Recently Used algorithm to determine which data stays and which goes. Writes continue to be written to the DRAM write cache, but with FAST Cache enabled those writes are flushed to the Flash drives, so increasing flush speeds.
One important thing to note is that while the performance of the VNX increases and IOPS figures increase with workload demands, there will be an increase in SP CPU utilization, and this should be monitored. There are recommended guidelines on maximum throughput figures for particular arrays; more on this later.
It is important to know the type of workload on your LUN. As an example, log files are generally written and read sequentially across the whole LUN; in this scenario the LUN would not be a good candidate for FAST Cache, as Flash drives are not necessarily better at serving large-block sequential I/O than spinning drives. Large-block sequential I/O workloads are also better served by having large quantities of drives; promoting this type of data to FAST Cache will normally result in the data being served by a smaller quantity of drives, thus resulting in a performance reduction. Avoiding the use of FAST Cache on unsuitable LUNs will help to reduce the overhead of tracking I/O for promotion to FAST Cache.
Best Suited
Here I have listed conditions that you should factor in when deciding whether FAST Cache will be a good fit for your environment:
VNX Storage Processor Utilization is under 70-percent
There is evidence of regular forced Write Cache Flushing
The majority I/O block size is under 64K (OLTP Transactions are typically 8K)
The disk utilization of RAID Groups is consistently running above 60-70%
Your workload is predominately random read I/O
Your production LUNs have a high percentage of read cache misses
Host response times are unacceptable
Useful EMC KBs
Please see the following EMC KB articles for further reference (support.emc.com):
Please Note that some guidelines apply differently to Clar|VNX|VNX2.
KB 73184: Rules of thumb for configuring FAST Cache for best performance and highest availability
KB 14561: How to flush and disable FAST Cache
KB 15075: FAST Cache performance considerations
KB 82823: Should FAST Cache be configured across enclosures?
KB 15606: How to monitor FAST Cache performance using Unisphere Analyzer
KB 168348: [Q&A]VNX Fast Cache
KB 84362: Where do I find information about FAST Cache?