
EMC Simple Support Matrix

EMC VMAX All Flash: VMAX 250F, VMAX 250FX, VMAX 450F,
VMAX 450FX, VMAX 850F, and VMAX 850FX
EMC VMAX3: VMAX 400K, VMAX 200K, and VMAX 100K
Director Bit Settings and Features

(T10 and SRDF/Metro, Mobility ID


and ALUA Multipathing)
SEPTEMBER 2016
REV 17

© 2014 - 2016 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
Table 1 through Table 5 contain the EMC® E-Lab™ director bit settings and features (flags) for EMC VMAX® All Flash: VMAX 250F, VMAX 250FX,
VMAX 450F, VMAX 450FX, VMAX 850F, and VMAX 850FX, and EMC VMAX3: VMAX 400K, VMAX 200K, and VMAX 100K. Table 6, Table 7,
and Table 8 contain SRDF/Metro support information. Table 9 provides information for EMC VMAX All Flash and VMAX3 with EMC
HYPERMAX® OS 5977.811.784, 5977.813.785, and 5977.945.890 with Mobility ID and ALUA Multipathing.

Refer to the EMC VMAX 40K, VMAX 20K, VMAX 10K xxx987xxxx Director Bit Settings Simple Support Matrix, the EMC Symmetrix VMAX,
VMAX 10K xxx959xxxx, VMAXe Director Bit Settings Simple Support Matrix, or the EMC Symmetrix DMX-4/DMX-3 Director Bit Settings Simple
Support Matrix, available at https://elabnavigator.emc.com, for specific director bit settings information.
Table 1 EMC VMAX All Flash and VMAX3 director bit settings for Fibre Channel switches
Category | Flag Description | 5977 Factory Default Settings | VPLEX | Windows 2003 SP1 and later | Windows 2008/2012 | Linux | HP-UX 11i v1, 11i v2 | HP-UX 11i v3 | IBM AIX | IBM i i, j | VMware | Solaris | HP OpenVMS
FC Topology Fabric
FC Loop ID 0
SCSI Soft Reset s
SCSI Environment e
Reports to Host
SCSI Disable Queue
Reset on Unit de (opt.) (opt.)
Attention
SCSI Avoid Reset arb
Broadcast
SCSI SCSI3 Interface SC3 a, c
SCSI SCSI Primary SPC2 a, c
Commands
SCSI OS2007 OS2007 e os2007 d (opt.) (opt.) (opt.) os2007 (opt.) os2007 (opt.)
SCSI Enable Volume Set ve V V
Addressing
FC Non Participate np e
Hard Addressing
SCSI Initiator Group ACLX e (opt.)
HOST Open VMS ovms e OVMS
SCSI Show ACLX DV show h
FC Link Speed AUTO/2/4/8 Gb b, f
AUTO/4/8/16 Gb b, g
Legend: Blank = No change from array factory default setting.
Lowercase = Value disabled (off).
Uppercase = Value enabled (on).
(opt.) = Optional. (Optional settings are tolerated and may be used as required by specialized configuration.)
Notes: a. The factory default setting must not be changed; EMC requires the SC3 and SPC2 flags to be set.
b. When Link Speed is set to AUTO (the default), the link speed is negotiated between the target port and the HBA up to the maximum speed both support. The available
link speeds depend on the installed I/O modules. Refer to the appropriate Symmetrix Product Guide for more information.
c. SC3 and SPC2 are advertised and are non-configurable.
d. For VPLEX 5.3 SP1 and later, the OS2007 flag MUST be enabled.
e. This director bit can be changed to the required setting (or, where optional, to the desired setting) on either a per-target-port or per-initiator basis.
f. Specific to FC IO module P/N 303-092-102.
g. Direct attachment to VMAX 400K, VMAX 200K, and VMAX 100K is restricted to the FC IO module P/N 303-092-102 and is supported only with FC-AL topology. Point-to-point
direct attachments are not supported.
h. The Access Control Logix (ACLX) device show director bit is factory-enabled on a maximum of one port per Fibre Channel director, which by default is the lowest-numbered
port available on the FA director. The default setting can be changed to make the device accessible on a different port; however, the ACLX show director bit should be
enabled on a maximum of only one port per FA director.
i. VMAX 400K/200K/100K only support D910 thin emulation devices for an IBM i host.
j. For IBM i hosts, the factory settings for SC3 and SPC2 are not required and should be removed.
k. 5977.811.784 has been replaced by the Q2 2016 SR release 5977.813.785. Version 5977.811.784 can still be used for new installs only and can be downloaded from
\\ssr\public\5977.811.784. For complete details, refer to Salesforce KB 482212.

Table 2 EMC VMAX All Flash and VMAX3 director bits for Fibre Channel Direct Connect
Category | Flag Description | 5977 Factory Default Settings | VPLEX | Windows 2003 SP1 and later | Windows 2008/2012 | Linux | HP-UX 11i v1, 11i v2 | HP-UX 11i v3 | IBM AIX | IBM i i, j | VMware | Solaris
FC Topology e Fabric AL AL AL AL AL AL AL AL AL
FC Loop ID 0 0-126
SCSI Soft Reset s
SCSI Environment Reports to Host e
SCSI Disable Queue Reset on Unit
Attention df (opt.) (opt.)

SCSI Avoid Reset Broadcast arb


SCSI SCSI3 Interface SC3 a, c
SCSI SCSI Primary Commands SPC2 a, c
SCSI OS2007 OS2007 f os2007 d (opt.) (opt.) (opt.) os2007 (opt.) os2007
SCSI Enable Volume Set Addressing vf V V
FC Non Participate Hard Addressing np f NP NP
SCSI Initiator Group ACLX f (opt.)
HOST Open VMS ovms f
SCSI Show ACLX DV show h
FC Link Speed AUTO/2/4/8 Gb b, g (opt.)
Legend: Blank = No change from array factory default setting.
Lowercase = Value disabled (off).
Uppercase = Value enabled (on).
(opt.) = Optional. (Optional settings are tolerated and may be used as required by specialized configuration.)
Notes: a. The factory default setting must not be changed; EMC requires the SC3 and SPC2 flags to be set.
b. When Link Speed is set to AUTO (the default), the link speed is negotiated between the target port and the HBA up to the maximum speed both support. The available
link speeds depend on the installed I/O modules. Refer to the appropriate Symmetrix Product Guide for more information.
c. SC3 and SPC2 are advertised and are non-configurable.
d. For VPLEX 5.3 SP1 and later, the OS2007 flag MUST be enabled.
e. Direct connect for VMAX 400K, VMAX 200K, and VMAX 100K is limited to 8 Gb/s and lower.
f. This director bit can be changed to the required setting (or, where optional, to the desired setting) on either a per-target-port or per-initiator basis.
g. Direct attachment to VMAX 400K, VMAX 200K, and VMAX 100K is restricted to the FC IO module P/N 303-092-102 and is supported only with FC-AL topology. Point-to-point
direct attachments are not supported.
h. The Access Control Logix (ACLX) device show director bit is factory-enabled on a maximum of one port per Fibre Channel director, which by default is the lowest-numbered
port available on the FA director. The default setting can be changed to make the device accessible on a different port; however, the ACLX show director bit should be
enabled on a maximum of only one port per FA director.
i. VMAX 400K/200K/100K only support D910 thin emulation devices for an IBM i host.
j. For IBM i hosts, the factory settings for SC3 and SPC2 are not required and should be removed.

Table 3 VMAX All Flash and VMAX3 director bits for iSCSI
Category | Flag Description | 5977 Factory Default Settings | Windows 2003 SP1 and later | Windows 2008/2012 | Linux | IBM AIX | VMware | Solaris
iSCSI Topology Fabric IP IP IP IP IP IP
iSCSI IPV4 Netmask 0 0-32
iSCSI Link Speed 10 Gb
Legend: Blank = No change from array factory default setting.
Lowercase = Value disabled (off).
Uppercase = Value enabled (on).
(opt.) = Optional. (Optional settings are tolerated and may be used as required by specialized configuration.)

Table 4 VMAX All Flash and VMAX3 director bits for FCoE
Category | Flag Description | 5977 Factory Default Settings | Windows 2003 SP1 and later | Windows 2008/2012 | Linux | HP-UX 11i v1, 11i v2 | HP-UX 11i v3 | IBM AIX | VMware | Solaris
FCoE Topology Fabric Fabric Fabric Fabric Fabric Fabric Fabric Fabric Fabric
SA Soft Reset s
SA Environment Reports to Host e
SA Disable Queue Reset on Unit Attention d D (opt.)
SA Avoid Reset Broadcast arb
SA SCSI3 Interface SC3 a
SA SCSI Primary Commands SPC2 a
SA OS2007 OS07 (opt.) (opt.) (opt.) (opt.) os07
FA Enable Volume Set Addressing v V V
FA Open VMS ovms
SCSI Initiator Group ACLX
SCSI Show ACLX DV show b
FCoE Link Speed 10 Gb
Legend: Blank = No change from array factory default setting.
Lowercase = Value disabled (off).
Uppercase = Value enabled (on).
(opt.) = Optional. (Optional settings are tolerated and may be used as required by specialized configuration.)
Notes: a. The factory default setting must not be changed; EMC requires the SC3 and SPC2 flags to be set.
b. The Access Control Logix (ACLX) device show director bit is factory-enabled on a maximum of one port per Fibre Channel director, which by default is the lowest-numbered
port available on the FA director. The default setting can be changed to make the device accessible on a different port; however, the ACLX show director bit should be
enabled on a maximum of only one port per FA director.

Table 5 VMAX All Flash and VMAX3 Layers supporting device flag DIF1 (T10 PI/DIX minimum versions)

Emulex 8G FC HBA

T10 DIX Layer


Application Any Oracle Database application
Database Oracle Database 11g with Oracle Automatic Storage Management (ASM)
Operating system Oracle Linux 5.x or 6.x with UEK kernel versions 2.6.39-200.24.1 or later, oracleasmlib 2.0.8 or later
T10 PI Layer
HBA models: driver/firmware Emulex LPe12000-E and LPe12002-E with firmware 2.01a10 or later and driver 8.3.5.68.6p or later on Oracle Linux, and
driver 8.3.7.34.3p or later on RHEL 7.x
Operating system Oracle Linux 5.x or 6.x with UEK kernel versions 2.6.39-200.24.1 or later; RHEL 7.0 or later

Array EMC VMAX3 Series with Enginuity 5977


Emulex 16G FC HBA
T10 PI Layer
HBA models: driver/firmware Emulex LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later and driver 8.3.7.34.4p or later on Oracle
Linux 6.x, and driver 10.2.370.15 or later on RHEL 7.x
Operating system Oracle Linux 6.x with UEK kernel versions 3.8.13-26.2.3 or later; RHEL 7.0 or later
Array EMC VMAX3 Series with Enginuity 5977
QLogic 16G FC HBA

T10 DIX Layer


Application Any Oracle Database application
Database Oracle Database 11g with Oracle Automatic Storage Management (ASM)
Operating system Oracle Linux 6.x with UEK kernel version 3.8.13-34 or later, oracleasmlib 2.0.10-6 or later
T10 PI Layer
HBA models: driver/firmware QLogic QLE2670-E-SP and QLE2672-E-SP with driver 8.07.00.08.39.0-k1 or later on Oracle Linux 6.x, and driver
8.06.00.08.07.0-k2 or later on RHEL 7.x
Operating system Oracle Linux 6.x with UEK kernel versions 3.8.13-26.2.3 or later; RHEL 7.0 or later
Array EMC VMAX3 Series with Enginuity 5977
Notes:
• T10 PI/DIX is supported by any layer meeting the minimum version requirements listed in Table 5. For an upper layer to support the feature, all layers
below it must also be enabled.
• T10 PI (Protection Information) provides additional data integrity from the array's disk drive through to the HBA (host bus adapter).
• T10 DIX (Data Integrity Extensions) facilitates protection information interchange between the operating system and HBA through to, and including,
enabled applications.
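
The layering requirement can be spot-checked from the host side. The following is a minimal sketch (not an EMC tool) that reads the standard Linux block-integrity attributes under /sys/block/<device>/integrity on kernels that expose them; the device name in the example is a placeholder, and the script only reports what the kernel advertises; it does not validate HBA firmware, driver, or Oracle/ASM versions against Table 5.

#!/usr/bin/env python3
"""Report T10 PI/DIX-related block-integrity attributes on a Linux host.

Minimal sketch: reads the standard sysfs block-integrity attributes
(format, read_verify, write_generate, tag_size). The device name is an
example; substitute the SCSI disk backed by the VMAX3/VMAX All Flash device.
"""
import os
import sys

def integrity_info(device: str) -> dict:
    """Return the block-integrity attributes exposed for a block device."""
    base = f"/sys/block/{device}/integrity"
    info = {}
    if not os.path.isdir(base):
        return info  # kernel exposes no integrity profile for this device
    for attr in ("format", "read_verify", "write_generate", "tag_size"):
        path = os.path.join(base, attr)
        if os.path.exists(path):
            with open(path) as f:
                info[attr] = f.read().strip()
    return info

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"  # placeholder device name
    info = integrity_info(dev)
    if not info:
        print(f"{dev}: no integrity profile registered (T10 PI not active)")
    else:
        for key, value in info.items():
            print(f"{dev}: {key} = {value}")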

Table 6 SRDF/Metro support version 3.0 HYPERMAX OS 5977.945.890

Operating system | MultiPathing software | Native Cluster support | Third-Party Cluster support | iSCSI / FC support | Boot from SAN 1 | Example applications 2
AIX 6.x Native MPIO Yes No FC only Yes AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
PowerPath 6.x 10 AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 6.x - VIOS 2.x Native MPIO Yes No FC only No AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
PowerPath 6.x 10 AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 7.x Native MPIO Yes No FC only Yes AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
PowerPath 6.x AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 7.x - VIOS 2.x Native MPIO Yes No FC only No AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
PowerPath 6.x AIX Native SCSI-3 Cluster (GPFS, PowerHA)
Citrix XenServer 6.2 and DM-MPIO Yes No FC and iSCSI Yes (with iSCSI
higher only)
ESXi 5.x NMP Yes 8 Yes FC and iSCSI Yes (with iSCSI MS Failover Clustering 6 (Single node and multi node)
only)
ESXi 5.x PP/VE 5.9 Yes Yes FC and iSCSI No MS Failover Clustering (Single node and multi
node)
ESXi 6.x 7 NMP Yes 8 Yes FC and iSCSI No MS Failover Clustering 6 (Single node and multi node)

ESXi 6.x 7 PP/VE 6.0 Yes Yes FC and iSCSI Yes (with FC MS Failover Clustering (Single node and multi
only) node)
OEL 6.x UEK R3 and DM-MPIO, No Yes FC only No Oracle RAC 12.1.0.2 9
higher PowerPath 6.x
OEL 7.x UEK R3 and DM-MPIO, No Yes FC only No Oracle RAC 12c R1 9
higher PowerPath 6.x
OVM 3.3.x DM-MPIO, Yes No FC only No Native FS
PowerPath 6.x
OVM 3.4 DM-MPIO, Yes No FC only No Native FS
PowerPath 6.x
RHEL 6.x DM-MPIO, Veritas Yes No FC and iSCSI No Ext4 FS, GFS, RedHat Cluster Suite
DMP 6.0.x or
higher
RHEL 6.x DM-MPIO, Yes No FC and iSCSI Yes Ext4 FS, GFS, RedHat Cluster Suite
PowerPath 5.x, 6.x
RHEL 7.x DM-MPIO Yes Yes FC and iSCSI Yes (with FC Native LVM, VxVM, Native Filesystem, RedHat Cluster
only) Suite, Pacemaker with RHEL 7.x based on VxFS
Filesystem, ASM, Oracle RAC 9
RHEL 7.x Veritas DMP 6.1.x Yes Yes FC and iSCSI Yes (with FC Native LVM, VxVM, Native Filesystem, RedHat Cluster
or higher, only, Infoscale Suite, Pacemaker with RHEL 7.x based on VxFS
InfoScale 7.0 7.0) Filesystem, InfoScale VCS 7.0
RHEL 7.x PowerPath 6.x Yes No FC only No Ext4 FS, GFS, ASM, Oracle RAC 9

SLES 11 SP2, SP3, SP4 DM-MPIO, Yes No FC only Yes (with FC VxVM, ocfs2, HAE (High Availability Extension)
PowerPath 5.x, 6.x, only) Clusters over OCFS2
Veritas DMP 6.0.x
and higher, or
InfoScale 7.0
SLES12 Veritas DMP Yes No FC only Yes (with FC VxVM, ocfs2
6.0.x and higher, only)
or InfoScale 7.0
SLES12 DM-MPIO, Yes No FC only No VxVM, ocfs2, HAE (High Availability Extension)
PowerPath 5.x, 6.x, Clusters over OCFS2
Veritas DMP 6.0.x
and higher, or
InfoScale 7.0
Solaris SPARC 10.x PowerPath 5.x, 6.x Yes No FC only No Native FS, ZFS, UFS, SVM,
Solaris Cluster 3.3U2
Solaris SPARC 11.x PowerPath 5.x, 6.x Yes Yes FC only No ZFS, UFS, SVM, Solaris Cluster 4.3, Native FS,
Oracle RAC 12.1
Solaris x86 10.x PowerPath 5.x, 6.x Yes No FC only No Native FS

Solaris x86 11.x PowerPath 5.x, 6.x Yes Yes FC and iSCSI No Oracle RAC 12.1 10

Solaris x86 11.x PowerPath 5.x, 6.x No No iSCSI No Oracle RAC 12

Windows 2008 MPIO 4, No No FC only No


PowerPath 5.x, 6.x
Windows 2008 R2 MPIO 4, PowerPath Yes 5 No FC and iSCSI 3 No Native NTFS, Microsoft Failover Clusters, Standard
5.x, 6.x Storage Resource, Hyper-V, Clustered
Shared Volumes

Windows 2008 R2 Veritas Storage No Yes FC only No Third-party VCS 6.0 Clusters (no GCO and no VVR)
Foundation Suite
and Veritas
Storage
Foundation Suite
HA with DMP
6.0.x, PowerPath
6.0.x

Windows 2012 MPIO Yes 5 No FC and iSCSI Yes (with FC Native NTFS, Microsoft Failover Clusters, Standard
Windows 2012 R2 only) Storage Resource, Hyper-V, Clustered Shared
Volumes
Windows 2012 PowerPath 5.x, 6.x Yes No FC only and No Native NTFS, Microsoft Failover Clusters, Standard
Windows 2012 R2 iSCSI Storage Resource, Hyper-V, Clustered Shared
Volumes
Windows 2012 R2 Veritas Storage No Yes FC only No Third-party VCS 6.0 Clusters (no GCO and no VVR)
Foundation Suite
and Veritas
Storage
Foundation Suite
HA with DMP
6.0.x, PowerPath
6.0.x

HP-UX 11i v3 PowerPath 5.2 Yes Yes FC only Yes Serviceguard 11.20, Oracle RAC 12c, ASM
P06, MPIO
HP-UX 11i v2 PowerPath 5.2 Yes No FC only Yes Serviceguard 11.19
P06, PV Links
Footnotes for Table 6: SRDF/Metro support version 3.0, HYPERMAX OS 5977.945.890:
1 Refer to the SRDF/Metro Best Practices document for more details. An RPQ is required for Boot from SAN in clustered environments. The E-Lab host connectivity guides, located at

https://elabnavigator.emc.com, provide details on configuring boot from SAN.


2 SRDF/Metro is not limited to the applications listed in this column.

3 Windows 2008 hotfix required for iSCSI support: KB2957560, KB2460971, KB2511962 (for hotfixes, install as per the KB article and follow the process to update the registry entries, if

specified).
4 Windows 2008 hotfix required for native MPIO support: KB2522766, KB2718576, KB2511962, KB2460971, KB2754704 (for hotfixes, please install as per the KB article and
follow the process to update the registry entries, if specified).


5 If running Windows Failover Clusters, be sure to have the up-to-date version of cluster related hotfixes published by Microsoft on http://support.microsoft.com.

6 ESXi 5.5 and ESXi 6.0 (vSphere 5.5 or 6.0) with NMP have an issue with pRDM in a Microsoft Clustering environment in the Cluster-Across-Boxes (CAB) configuration. EMC is
working closely with VMware to resolve this NMP issue (SR 16905845903). A KB article is pending.
7 On ESXi with vSphere 6.0, a hotfix is required for the path failover issue described in https://kb.vmware.com/kb/2144657. Raise an SR with VMware to obtain the
hotfix, or upgrade to vSphere 6.0 U2. Use the following hotfix patch PR numbers in the SR for your vSphere 6.0 version:
6.0 U1 - Refer to PR 1582888
6.0 U1a – Refer to PR 1589866
6.0 U1b – Refer to PR 1588465
Note that:
6.0 U1a is also referred to as 6.0ep3 (express patch 3)
6.0 U1b is also referred to as 6.0P02 (patch level 02)
Alternatively, vSphere 6.0 U2 is now generally available and you can update to it.
8 For uniform ESXi cross-connect configurations, use the default NMP Round Robin path selection plug-in (PSP) policy.

9 AFD (ASM Filter Driver) is not supported with Oracle RAC.

10 For AIX 6.x with the current PowerPath version, download the emc_pp_configure.sh script from https://support.emc.com/search/
?text=powerpath&facetResource=ST&facetProductId=1726 and refer to the AIX, GPFS, and PowerPath section of the SRDF/Metro Technical Notes for usage and proper
configuration of multiple paths with PowerPath in an active-active environment.

Limitations:
• Windows clusters are supported with uniform (cross-connect) configurations only.
• FCoE is not supported as a front-end adapter.

Note:
Mobility ID and ALUA Multipathing cannot be used in conjunction with SRDF/Metro.

Table 7 SRDF/Metro support version 2.0 HYPERMAX OS 5977.811.784 and 5977.813.785

Operating system | MultiPathing software | Native Cluster support | Third-Party Cluster support | iSCSI / FC support | Boot from SAN 1 | Example applications 2
AIX 6.1 Native MPIO Yes No FC only Yes (with FC AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
only) AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 6.1 - VIOS 2.x Native MPIO Yes No FC only No AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 7.1 Native MPIO Yes No FC only No AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
AIX Native SCSI-3 Cluster (GPFS, PowerHA)
AIX 7.1 - VIOS 2.x Native MPIO Yes No FC only No AIX Native FS, AIX Native SCSI-2 Cluster (PowerHA),
AIX Native SCSI-3 Cluster (GPFS, PowerHA)
Citrix XenServer 6.2 and DM-MPIO Yes No FC and iSCSI Yes (with iSCSI
higher only)
ESXi 5.x NMP Yes 8 Yes FC and iSCSI Yes (with iSCSI MS Failover Clustering 6 (Single node and multi node)
only)
ESXi 6.x 7 NMP Yes 8 Yes FC and iSCSI No MS Failover Clustering 6 (Single node and multi node)

OEL 6.x UEK R3 and DM-MPIO, No Yes FC only No Oracle RAC 12.1.0.2 9
higher PowerPath 6.x
OEL7.x UEK R3 and DM-MPIO, No No FC only No
higher PowerPath 6.x
OVM 3.3.x DM-MPIO, Yes No FC only No Native FS
PowerPath 6.x
RHEL 6.x DM-MPIO Yes No FC and iSCSI No Ext4 FS, GFS, RedHat Cluster Suite

RHEL 6.x Veritas DMP 6.0.x Yes No FC and iSCSI No Ext4 FS, GFS, RedHat Cluster Suite
or higher
RHEL 6.x PowerPath 5.x, 6.x No No FC and iSCSI No Ext4 FS

RHEL 7.x DM-MPIO Yes Yes FC and iSCSI Yes (with FC Native LVM, VxVM, Native Filesystem, RedHat Cluster
only) Suite, Pacemaker with RHEL 7.x based on VxFS
Filesystem, ASM, Oracle RAC 9
RHEL 7.x Veritas DMP 6.1.x Yes Yes FC and iSCSI Yes (with FC Native LVM, VxVM, Native Filesystem, RedHat Cluster
or higher, only, Infoscale Suite, Pacemaker with RHEL 7.x based on VxFS
InfoScale 7.0 7.0) Filesystem, InfoScale VCS 7.0
RHEL 7.x PowerPath 6.x No No FC only No Ext4 FS, GFS, ASM, Oracle RAC 9

SLES 11 SP2, SP3, SP4 DM-MPIO, No No FC only Yes (with FC VxVM, ocfs2
PowerPath 5.x, 6.x, only)
Veritas DMP 6.0.x
and higher, or
InfoScale 7.0
SLES12 DM-MPIO, Yes No FC only No VxVM, ocfs2, HAE (High Availability Extension)
PowerPath 5.x, 6.x, Clusters over OCFS2 with PowerPath 6.x.
Veritas DMP 6.0.x
and higher, or
InfoScale 7.0
Solaris SPARC 10.x PowerPath 5.x, 6.x No No FC only No Native FS

Solaris SPARC 11.x PowerPath 5.x, 6.x No Yes FC only No Native FS, Oracle RAC 12.1 10

Solaris x86 10.x PowerPath 5.x, 6.x No No FC only No Native FS

Solaris x86 11.x PowerPath 5.x, 6.x No Yes FC only No Native FS, Oracle RAC 12.1 10

Windows 2008 MPIO 4 No No FC only No

Windows 2008 R2 MPIO 4 Yes 5 No FC and iSCSI 3 No Microsoft Failover Clusters, Standard Storage
Resource
Windows 2008 PowerPath 5.x, 6.x No No FC only No Native NTFS
Windows 2008 R2
Windows 2008 R2 Veritas Storage No Yes FC only No Third-party VCS 6.0 Clusters (no GCO and no VVR)
Foundation Suite
for Windows HA
with DMP 6.0.x
Windows 2012 MPIO Yes 5 No FC only Yes (with FC Native NTFS, Microsoft Failover Clusters, Standard
only) Storage Resource
Windows 2012 R2 MPIO Yes 5 No FC and iSCSI Yes (with FC Native NTFS, Microsoft Failover Clusters, Standard
only) Storage Resource
Windows 2012 PowerPath 5.x, 6.x No No FC only No
Windows 2012 R2
Windows 2012 R2 Veritas Storage No Yes FC only No Third-party VCS 6.0 Clusters (no GCO and no VVR)
Foundation Suite
for Windows HA
with DMP 6.0.x
Footnotes for Table 7: SRDF/Metro support version 2.0, HYPERMAX OS 5977.811.784 and 5977.813.785:
1 Limited host/MPIO support. Refer to the SRDF/Metro Best Practices document for more details. An RPQ is required for Boot from SAN in clustered environments. The E-Lab host
connectivity guides, located at https://elabnavigator.emc.com, provide details on configuring boot from SAN.
2 SRDF/Metro is not limited to the applications listed in this column.
3 Windows 2008 hotfix required for iSCSI support: KB2957560, KB2460971, KB2511962 (for hotfixes, install as per the KB article and follow the process to update the registry entries, if
specified).
4 Windows 2008 hotfix required for native MPIO support: KB2522766, KB2718576, KB2511962, KB2460971, KB2754704 (for hotfixes, please install as per the KB article and
follow the process to update the registry entries, if specified).
5 If running Windows Failover Clusters, be sure to have the up-to-date version of cluster related hotfixes published by Microsoft on http://support.microsoft.com.

6 ESXi 5.5 and ESXi 6.0 (vSphere 5.5 or 6.0) with NMP have an issue with pRDM in a Microsoft Clustering environment in the Cluster-Across-Boxes (CAB) configuration. EMC is
working closely with VMware to resolve this NMP issue (SR 16905845903). A KB article is pending.
7 On ESXi with vSphere 6.0, a hotfix is required for the path failover issue described in https://kb.vmware.com/kb/2144657. Raise an SR with VMware to obtain the hotfix, or
upgrade to vSphere 6.0 U2. Use the following hotfix patch PR numbers in the SR for your vSphere 6.0 version:
6.0 U1 - Refer to PR 1582888
6.0 U1a – Refer to PR 1589866
6.0 U1b – Refer to PR 1588465
Note that:
6.0 U1a is also referred to as 6.0ep3 (express patch 3)
6.0 U1b is also referred to as 6.0P02 (patch level 02)
Alternatively, vSphere 6.0 U2 is now generally available and you can update to it.
8 For uniform ESXi cross-connect configurations, use the default NMP Round Robin path selection plug-in (PSP) policy.

9 AFD (ASM Filter Driver) is not supported with Oracle RAC.

10 Oracle RAC 12.1 support only with PowerPath 6.x

Limitations:
• Windows clusters are supported with uniform (cross-connect) configurations only.
• Cluster Shared Volumes (CSV) with Hyper-V are not supported.
• FCoE is not supported as a front-end adapter.
• The VAAI primitives XCOPY and ODX are not supported.
Note:
Mobility ID and ALUA Multipathing cannot be used in conjunction with SRDF/Metro.

Table 8 SRDF/Metro support version 1.1 HYPERMAX OS 5977.691.684

Operating System Multipathing Software Native Cluster Support Third-Party Cluster

Citrix XenServer 6.2 and higher DM-MPIO Yes No

OVM 3.3.x DM-MPIO, Yes No


PowerPath 6.x
OEL 6.x UEK R3 and higher DM-MPIO, No Oracle RAC 12.1.0.2 b
PowerPath 6.x
OEL7.x UEK R3 and higher DM-MPIO, No No
PowerPath 6.x
RHEL 6.x, DM- MPIO, No No
RHEL 7.x PowerPath 5.x, 6.x,
Veritas DMP 6.0.x and higher
SLES 11 SP2, SP3, SP4 DM- MPIO, No No
SLES12 PowerPath 5.x, 6.x,
Veritas DMP 6.0.x and higher
VMware ESXi 5.5 a, ESXi 6.0 a NMP Yes No

Windows 2012, MPIO, No No


Windows 2012R2 PowerPath 5.x,
PowerPath 6.x
Windows 2008, PowerPath 5.x, No No
Windows 2008R2 PowerPath 6.x
Footnotes:
a. EMC supports ESXi with SRDF/Metro with approved RPQs (Requests for Product Qualification) only.
b. AFD (ASM Filter Driver) is not supported with Oracle RAC.
Limitations: • FCoE and iSCSI are not supported as front-end directors
• VAAI primitives are not supported, except ATS for ESXi
• SCSI-2 clusters are not supported, except for ESXi
• SCSI-3 clusters are not supported
• Boot from SAN is not supported

Table 9 provides information for EMC VMAX All Flash and VMAX3 arrays, with EMC HYPERMAX OS 5977.811.784, 5977.813.785,
and 5977.945.890 with Mobility ID and ALUA Multipathing.

Table 9 EMC VMAX All Flash and VMAX3 with Mobility ID and ALUA Multipathing

Operating system Multipathing version Connection


AIX 7.1 MPIO b FC
AIX 6.1 MPIO b FC
HP-UX 11.31 MPIO b FC
Linux: Citrix XenServer 6.5 DM b iSCSI
Linux: OVM 3.3.3 PowerPath 6.0 P01 FC
Linux: RHEL 7, 7.1, 7.2 DM b FC and iSCSI
Linux: RHEL 7, 7.1 PowerPath 6.0 SP1 FC and iSCSI
Linux: RHEL 6.4, 6.5, 6.7 PowerPath 6.0 SP1 FC and iSCSI
DM b
Linux: SLES 12, SP1 DM b FC and iSCSI
Linux: SLES 12 PowerPath 6.0 SP1 FC and iSCSI
Linux: SLES 11 SP3 a, SP4 DM b FC and iSCSI
PowerPath 6.0 SP1 FC and iSCSI
Solaris 11 x86 MPxIO b FC
Solaris 10 U10 x86 MPxIO b FC
VMware ESX 6.0 NMP FC and iSCSI
PowerPath VE 6.0 SP1
VMware ESX 5.5 NMP FC and iSCSI
VMware ESX 5.1 NMP FC and iSCSI
Windows 2012 Server R2 MPIO FC and iSCSI
MPIO with PowerPath 6.0 SP1
Windows 2008 EE R2 SP1, 64 Bit MPIO FC and iSCSI
MPIO with PowerPath 6.0 SP1
Footnotes:
a. VMAX3 ALUA support requires the following minimum package versions (see the version-check sketch after these notes):
• device-mapper: 1.02.77-0.13.1
• multipath-tools: 0.4.9-0.105.1
b. Enabling the ALUA multipath handler results in multipath devices being created only for the Mobility ID / ALUA devices. No multipath devices are created for the
Compatibility ID devices on the system. This is a limitation of the native multipathing software and the operating system.
Note:
Mobility ID and ALUA Multipathing cannot be used in conjunction with SRDF/Metro.
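
Footnote a lists the minimum device-mapper and multipath-tools versions for VMAX3 ALUA support. The sketch below is a hypothetical helper, not an EMC utility: it compares version-release strings (for example, as reported by rpm -q --qf '%{VERSION}-%{RELEASE}' <package> on RHEL or SLES) against those minimums. The installed values in the example are placeholders.

"""Compare installed device-mapper / multipath-tools versions against the
minimums listed in footnote a of Table 9 (VMAX3 ALUA support).

Sketch only: feed it the version-release strings reported by your package
manager; the values under __main__ are placeholders.
"""
import re

# Minimum versions from Table 9, footnote a
MINIMUMS = {
    "device-mapper": "1.02.77-0.13.1",
    "multipath-tools": "0.4.9-0.105.1",
}

def version_key(version: str) -> tuple:
    """Turn '1.02.77-0.13.1' into a tuple of integers for comparison."""
    return tuple(int(part) for part in re.split(r"[.-]", version) if part.isdigit())

def meets_minimum(package: str, installed: str) -> bool:
    """True if the installed version is at or above the documented minimum."""
    return version_key(installed) >= version_key(MINIMUMS[package])

if __name__ == "__main__":
    # Placeholder values; replace with the output of your package manager.
    installed = {
        "device-mapper": "1.02.77-0.13.1",
        "multipath-tools": "0.4.9-0.105.1",
    }
    for pkg, ver in installed.items():
        status = "OK" if meets_minimum(pkg, ver) else "below minimum"
        print(f"{pkg} {ver}: {status}")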

Table 10 eNAS operating environment for 8.1.4 configuration guidelines

Guideline/specification Maximum tested value Comments


CIFS Guidelines
CIFS TCP Connection 64K (default and theoretical max.), The tcp.maxStreams parameter sets the maximum number of TCP connections
40K (max. tested) that a Data Mover can have. The maximum value is 64K (65535).
TCP connections (streams) are shared by other components and
should be changed in small, monitored increments.
With SMB1/SMB2, a TCP connection means a single client (machine)
connection.
With SMB3 and Multichannel, a single client can use several network
connections for the same session, depending on the number of available
interfaces on the client machine; for a high-speed interface such as a
10 Gb link, it can go up to 4 TCP connections per link.
Share name length 80 characters (Unicode) Unicode: The maximum length for a share name with Unicode
enabled is 80 characters.
Number of CIFS shares 40,000 per Data Mover (maximum A larger number of shares can be created. The maximum tested value is
tested limit) 40K per Data Mover.
Number of NetBIOS names/ 509 (max) Limited by the number of network interfaces available on the Virtual
compnames per Virtual Data Mover Data Mover. From a local group perspective, the number is limited to
(VDM) 509. NetBIOS and compnames must be associated with at least one
unique network interface.
NetBIOS name length 15 NetBIOS names are limited to 15 characters (Microsoft limit) and cannot
begin with an @ (at sign) or a - (dash) character. The name also
cannot include white space, tab characters, or the following symbols:
/ \ : ; , = * + | [ ] ? < > . If using compnames, the NetBIOS form of
the name is assigned automatically and is derived from the first 15
characters of the <comp_name>.
Comment (ASCII chars) for NetBIOS 256 Limited to 256 ASCII characters.
name for server • Restricted Characters: You cannot use double quotation ("),
semicolon (;), accent (`), and comma (,) characters within the
body of a comment. Attempting to use these special characters
results in an error message. You can only use an exclamation
point (!) if it is preceded by a single quotation mark (').
• Default Comments: If you do not explicitly add a comment, the
system adds a default comment of the form EMC-SNAS:T<x.x.x.x>,
where <x.x.x.x> is the version of the eNAS software.
Compname length 63 bytes For integration with Windows environment releases later than Windows
2000, the CIFS server computer name length can be up to 21
characters when UTF-8 (3 bytes/char) is used.
Number of domains 10 tested 509 (theoretical maximum) The maximum number of Windows
domains a Data Mover can be a member of. To increase the maximum
default value from 32, change the parameter cifs.lsarpc.maxDomain.
The Parameters Guide for VNX for File contains more detailed
information about this parameter.
Block size negotiated 64KB Maximum buffer size that can be negotiated with Microsoft Windows
128KB with SMB2 clients. To increase the default value, change param cifs.W95BufSz,
cifs.NTBufSz or cifs.W2KBufSz. The Parameters Guide for VNX for
File contains more detailed information about these parameters.
Note: With SMB2.1, the read and write operation supports a 1MB
buffer (this feature is named "large MTU").

Number of simultaneous requests 127 (SMB1) For SMB1, the value is fixed and defines the number of requests a client
per CIFS session (maxMpxCount) 512 (SMB2) is able to send to the Data Mover at the same time (for example, a
change notification request). To increase this value, change the
maxMpxCount parameter.
For SMB2 and newer protocol dialects, this notion has been replaced
by 'credits', with a maximum of 512 credits per client; the number can
be adjusted dynamically by the server depending on the load.
The Parameters Guide for VNX for File contains more detailed
information about this parameter.
Total number of files/directories 500,000 A large number of open files could require high memory usage on the
opened per Data Mover Data Mover and potentially lead to out-of-memory issues.
Number of Home Directories 20,000 (maximum possible limit, not A configuration file containing Home Directories information is read
supported recommended) completely at each user connection. For easy management, this
should not exceed a few thousand.
Number of Windows/UNIX users 40,000 (limited by the number of Earlier versions of the NAS software relied on a basic database,
connected at the same time TCP connections) nameDB, to maintain Usermapper and secmap mapping information.
DBMS now replaces the basic database. This solves the inode
consumption issue and provides better consistency and recoverability
with the support of database transactions. It also provides better
atomicity, isolation, and durability in database management.
Number of users per TCP 64 K To decrease the default value, change the param cifs.listBlocks (default
connection 255, maximum 255). The value of this parameter times 256 = maximum
number of users (see the worked example after this table).
Note: TID/FID/UID share this parameter and it cannot be changed
individually for each ID. Use caution when increasing this value, as it
could lead to an out-of-memory condition. Refer to the Parameters
Guide for VNX for File for parameter information.
Number of files/directories opened 64 k To decrease the default value, change the param cifs.listBlocks (default
per CIFS connection in SMB1 255, maximum 255). The value of this parameter times 256 = maximum
number of files/directories opened per CIFS connection.
Note: TID/FID/UID share this parameter and it cannot be changed
individually for each ID. Use caution when increasing this value, as it
could lead to an out-of-memory condition. Be sure to follow the
recommendation for total number of files/directories opened per Data
Mover. Refer to the Parameters Guide for VNX for File for
parameter information.
Number of files/directories opened 127 k To decrease the default value, change the parameter cifs.smb2.listBlocks
per CIFS connection in SMB2 (default 511, maximum 511). The value of this parameter
times 256 = maximum number of files/directories opened per CIFS
connection.
Number of VDMs per Data Mover 128 The total number of VDMs, file systems, and checkpoints across the
entire system cannot exceed 2048
FileMover
Maximum connections to secondary 1024
storage per primary (eNAS for
File) file system
Number of HTTP threads for 64 The number of threads available for recalling data from secondary
servicing FileMover API requests per storage is half the number of whichever is the lower of CIFS or NFS
Data Mover threads. The default is 16; it can be increased using the server_http
command. The maximum (tested) is 64.
Mount point name length 255 bytes (ASCII) The "/" is used when creating the mount point and is equal to one
character. If exceeded, Error 4105: Server_x:path_name: invalid path
specified is returned.
File system name length 240 bytes (ASCII) For nas_fs list, the name of a file system will be truncated if it is more
19 chars display for list option than 19 characters. To display the full file system name, use the -info
option with a file system ID (nas_fs -i id=<fsid>).
Filename length 255 bytes (NFS) With Unicode enabled in an NFS environment, the number of characters
255 characters (CIFS) that can be stored depends on the client encoding type, such as
Latin-1. For example, with Unicode enabled, a Japanese UTF-8 character
may require three bytes.
With Unicode enabled in a CIFS environment, the maximum number
of characters is 255. For filenames shared between NFS and CIFS,
CIFS allows 255 characters. NFS truncates these names when they
are more than 255 bytes in UTF-8, and manages the file successfully.
Pathname length 1,024 bytes Ensure that the final path length of restored files is less than 1,024
bytes. For example, if a file that was backed up with an original path
name of 900 bytes is restored to a path of 400 bytes, the final path
length would be 1,300 bytes and the file would not be restored.
Directory name length 255 bytes This is a hard limit; creation is rejected if the name is over the 255 limit.
The limit is bytes for UNIX names, Unicode characters for CIFS.
Subdirectories (per parent 64,000 This is a hard limit, and the code will prevent you from creating more
directory) than 64,000 directories.
Number of file systems per eNAS 4096 This maximum number includes VDM and checkpoint file systems.
system
Number of file systems per Data 2048 The mount operation will fail when the number of file systems
Mover reaches 2048, with an error indicating the maximum number of file
systems has been reached. This maximum number includes VDM and
checkpoint file systems.
Maximum disk volume size Dependent on RAID group (see For eNAS systems, users might configure LUNs greater than 2 TB, up
comments) to 16 TB or the maximum size of the RAID group, whichever is less.
This is supported for eNAS systems when attached to VMAX3 arrays.
Total storage for a Data Mover 256 TB (raw) • On a per-Data-Mover basis, the total size of all file systems and
the size of all SavVols used by SnapSure must be less than the
total supported capacity. Exceeding these limits can cause an
out-of-memory panic.
• Refer to the eNAS file capacity limits table for more information.
File Size 16 TB This hard limit is enforced and cannot be exceeded.
Number of directories supported This is the same as the number of inodes in a file system. Each 8 KB
per file system of space = 1 inode.
Number of files per directory 500,000 Exceeding this number will cause performance problems.
Maximum number of files and 256 million (default) This is the total number of files and directories that can be in a single
directories per eNAS file system eNAS file system. This number can be increased to 4 billion at file
system creation time, but should only be done after considering
recovery and restore time and the total storage utilized per file. The
actual maximum number in a given file system depends on a number
of factors, including the size of the file system.
Maximum amount of deduplicated 256 TB When quotas are enabled with file size policy, the maximum amount
data supported of deduplicated data supported is 256 TB. This amount includes other
files owned by UID 0 or GID.
All other industry-standard caveats, restrictions, policies, and best practices prevail. This includes, but is not limited to, fsck times (now made faster
through multi-threading), backup and restore times, number of objects per file system, snapshots, file system replication, VDM replication, performance,
availability, extend times, and layout policies. Proper planning and preparation should occur prior to implementing these guidelines.
Naming Service guidelines
Number of DNS domains 3 - WebUI Three DNS servers per Data Mover is the limit if using WebUI. There
unlimited - CLI is no limit when using the command line interface (CLI).
Number of NIS Servers per Data 10 You can configure up to 10 NIS servers in a single NIS domain on a
Mover Data Mover.
NIS record capacity 1004 bytes A Data Mover can read 1004 bytes of data from a NIS record.
Number of DNS servers per DNS 3
domain
NFS guidelines
Number of NFS exports 2,048 per Data Mover tested. You may notice a performance impact when managing a large number
Unlimited theoretical maximum. of exports using Unisphere.
Number of concurrent NFS clients 64 K with TCP (theoretical) Unlimited This is limited by TCP connections.
with UDP (theoretical)
Netgroup line size 16383 The maximum line length that the Data Mover will accept in the local
netgroup file on the Data Mover or the netgroup map in the NIS
domain that the Data Mover is bound to.
Number of UNIX groups supported 64 k 2 billion is the maximum value of any GID. The maximum number of
GIDs is 64 K, but an individual GID can have an ID in the range of
0-2147483648.
Networking guidelines
Link aggregation/ether channel 8 ports (ether channel) Ether channel: the number of ports used must be a power of 2 (2, 4,
12 ports (link aggregation (LACP)) or 8). Link aggregation: any number of ports can be used. All ports
must be the same speed. Mixing different NIC types (that is, copper
and fibre) is not recommended.
Number of VLANs supported 4094 IEEE Standard
Number of interfaces per Data 45 tested Theoretically 509
Mover
Number of FTP connections Theoretical value 64 K By default the value is (in theory) 0xFFFF, but it is also limited by the
number of TCP streams that can be opened. To increase the default
value, change the param tcp.maxStreams (set to 0x00000800 by
default). If you increase it to 64 K before you start TCP, you will not be
able to increase the number of FTP connections. Refer to the
Parameters Guide for VNX for File for parameter information.
Quotas guidelines
Number of tree quotas 8191 Per file system
Maximum size of tree quotas 256 TB Includes file size and quota tree size
Maximum number of unique 64 k Per File system
groups
Quota path length 1024
eNAS replication for File
Number of replication sessions per 682 (eNAS)
eNAS
Maximum number of replication 1024 This enforced limit includes all configured file system, VDM, and copy
sessions per Data Mover sessions.
Maximum number of local and 682
remote file system and VDM
replication sessions per Data Mover
Maximum number of loopback file 341
system and VDM replication
sessions per Data Mover

Snapsure guidelines
Number of checkpoints per file 96 read-only Up to 96 read-only checkpoints per file system are supported, and 16
system 16 writeable writeable checkpoints.
On systems with Unicode enabled, a character might require between 1 and 3 bytes, depending on the encoding type or character used. For example,
a Japanese character typically uses 3 bytes in UTF-8. ASCII characters require 1 byte.
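
Two of the limits in Table 10 are derived rather than fixed: the per-CIFS-connection user and open-file limits are the cifs.listBlocks (or cifs.smb2.listBlocks) value multiplied by 256, and the NFS filename limit of 255 bytes applies to the UTF-8 encoding of the name rather than its character count. The sketch below is an illustration of that arithmetic only; the parameter values and sample names are examples, and the code does not query a Data Mover.

"""Worked examples for two derived limits in Table 10 (illustration only)."""

def max_per_connection(list_blocks: int = 255) -> int:
    """cifs.listBlocks (or cifs.smb2.listBlocks) times 256 gives the maximum
    number of users or open files/directories per CIFS connection."""
    return list_blocks * 256

def fits_nfs_filename_limit(name: str, limit_bytes: int = 255) -> bool:
    """NFS truncates names longer than 255 bytes in UTF-8; multi-byte
    characters (for example Japanese, 3 bytes each) shorten the usable length."""
    return len(name.encode("utf-8")) <= limit_bytes

if __name__ == "__main__":
    print("SMB1 default (listBlocks=255):", max_per_connection(255))  # 65280, roughly 64K
    print("SMB2 default (listBlocks=511):", max_per_connection(511))  # 130816, roughly 127K
    ascii_name = "a" * 255          # 255 ASCII characters -> 255 bytes
    japanese_name = "\u65e5" * 100  # 100 Japanese characters -> 300 bytes
    print("255 ASCII chars fit:", fits_nfs_filename_limit(ascii_name))
    print("100 Japanese chars fit:", fits_nfs_filename_limit(japanese_name))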

Table 11 eNAS feature limits


eNAS VMAX eNAS VMAX eNAS VMAX eNAS VMAX eNAS VMAX
100K 200K 400K 450F/FX 850F/FX
Maximum eNAS usable capacity 256 TB 512 TB 512 TB 512 TB 512 TB
Number of Data Movers 2 2 or 4 2, 4, 6, or 8 2 or 4 2, 4, 6, or 8
Maximum FS Size 16 TB 16 TB 16 TB 16 TB 16 TB
Maximum number of FS per Data Mover 1 2048 2048 2048 2048 2048
Maximum number of FS per system 4096 4096 4096 4096 4096
Maximum configured replication sessions per Data 1024 1024 1024 1024 1024
Mover for Replication for File 2
Maximum number of checkpoints per Primary File 96 96 96 96 96
System
Maximum number of NDMP sessions per Data Mover 4 8 8 8 8
Memory per Data Mover 6 GB 24 GB 24 GB 24 GB 24 GB
1 This count includes production file systems, user-defined file system checkpoints, and two checkpoints for each replication session.
2 A maximum of 256 concurrently transferring sessions are supported per Data Mover.

Table 12 eNAS IO Module models and firmware


Model number Description FW version
E-FE00400C VMAX VG 4PT GBE BASE T 3.28
E-FE00200TC VMAX VG 2PT 10 GBE BASE T 6.2.17
E-FE00200TO VMAX VG 2PT 10 GBE OPTICAL 6.2.11
E-BK40000E VMAX VG FC BACKUP I/O FOR NAS

