Version 8.1
Copyright 2015-2016 EMC Corporation. All rights reserved. Published in the USA.
Published February, 2016
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other
countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.EMC.com
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
CONTENTS
Preface
Chapter 1    Introduction
Chapter 2    Concepts    15
Chapter 3    Setting up prerequisites    19
Chapter 4    Configuring    27
Chapter 5    41
Chapter 6    61
Chapter 7    65
Chapter 8    Service checklists    73
Chapter 9    Troubleshooting    77
Appendix A    81
Preface
As part of an effort to improve and enhance the performance and capabilities of its
product lines, EMC periodically releases revisions of its hardware and software.
Therefore, some functions described in this document may not be supported by all
versions of the software or hardware currently in use. For the most up-to-date information
on product features, refer to your product release notes.
If a product does not function properly or does not function as described in this
document, please contact your EMC representative.
Special notice conventions used in this document
EMC uses the following conventions for special notices:
DANGER
Indicates a hazardous situation which, if not avoided, will result in death or serious
injury.
WARNING
Indicates a hazardous situation which, if not avoided, could result in death or serious
injury.
CAUTION
Indicates a hazardous situation which, if not avoided, could result in minor or moderate
injury.
NOTICE
Note
Do not request a specific support representative unless one has already been assigned to
your particular system problem.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications.
Please send your opinion of this document to:
techpubcomments@EMC.com
CHAPTER 1
Introduction
Topics include:
l Introduction
l System requirements
l Restrictions and limitations
l Manual migration
l Related information
Introduction
For best practices and performance information related to the VDM MetroSync feature,
refer to this feature's related white paper.
System requirements
The VDM MetroSync feature requires the following software, hardware, network, and
storage configurations:
For software:
l The source and destination VNX2 systems must use the same VNX operating environment (OE) for File, version 8.1.8.130 or later.
l The MirrorView/S and SnapView Clone enablers must be installed on each array.
l Checkpoints (SavVol) must be on the same pool as the production file system (PFS).
For hardware:
For network:
l At a minimum, an IP network with LAN or WAN links for communication between the source VNX2 system's Control Station and the destination VNX2 system's Control Station.
l ICMP ping requests must be allowed between the two arrays' management IPs (Control Station, SPA, and SPB).
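The ICMP reachability requirement above can be smoke-tested from any management host before configuring replication. The following is an illustrative shell sketch, not part of the product CLI; the function name and the addresses passed in are hypothetical examples.

```shell
# Hedged sketch: ping each management IP once (Control Station, SPA, SPB)
# and report per-address reachability. Addresses below are examples only.
check_mgmt_ips() {
  for ip in "$@"; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip UNREACHABLE"
    fi
  done
}

check_mgmt_ips 10.6.52.154 10.6.52.95
```

Run it against both arrays' Control Station and SP management addresses; any UNREACHABLE result indicates ICMP is blocked and must be opened before proceeding.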
Restrictions and limitations
l Only one sync-replicable VDM is allowed on one sync-replicable NAS pool (there is a 1:1 mapping between the sync-replicable VDM and its sync-replicable NAS pool).
l You can create only one synchronous replication session on one sync-replicable VDM.
l To get the same expected performance, use the same disk types on each of the VNX2 systems.
l Do not use the VNX File Migration feature to migrate VDMs or file systems that are configured for use with VDM synchronous replication; otherwise, errors will occur.
l If you are manually activating a standby Data Mover by using the server_standby command.
l Before upgrading the VNX for File OE software, make sure that the VDM MetroSync Manager service is stopped and that no VDM MetroSync replication session operations are in progress. Do not execute any VDM MetroSync replication session operations during the upgrade.
l Do not perform operations (such as fracture, promote, or delete) on any MirrorView Consistency Groups and mirrors that have the prefix "syncrep_", or on any MirrorView mirrors and Clone Groups that have the prefix "nasdb_", directly from the Block side. The "syncrep_" and "nasdb_" prefixes are used by synchronous replication sessions and by the synchronous replication service, respectively. Otherwise, the synchronous replication session or service will not work properly.
l When you reverse or fail over a synchronous replication session, schedules that are in a "complete" state are not imported on the standby system.
l After failover, if the original active Data Mover for the VDM synchronous replication session still works, it may run into a rolling panic because the underlying LUNs become read-only. You need to clean the synchronous replication session to set it back to a healthy state. If all of the Data Movers in the system run into a rolling panic, you need to follow the PXE reboot process on page 79 to fix the NAS service before a synchronous replication session can be cleaned.
l If you are using the Preserve RepV2 session for VDM MetroSync feature, VDM MetroSync supports the use of IP Replication on VDM synchronous replication VDMs and file systems.
l Each VDM synchronous replication session is built upon a single VDM and a single user-defined storage pool.
l For each system, the local and remote SPs should not exceed either the maximum number of Consistency Groups allowed (including the maximum number of members allowed for each group in a system) or the maximum number of mirrors allowed for each Consistency Group.
l For each system, the local and remote Data Movers and pools should consist of disk volumes that do not exceed the maximum number of mirrors allowed for each Consistency Group.
l Common log file systems are not supported. Only a split-log VDM or file system contained in a synchronous replication session is supported. To transfer a common log file system to a split-log file system, use host-based copy.
l Only uxfs and rawfs file systems that are created on a sync-replicable NAS pool are supported.
l Temporarily unmounted file systems and checkpoints become mounted after a reverse or failover operation.
l Temporarily unloaded VDMs become loaded after a reverse or failover operation.
l For a VDM on a sync-replicable NAS pool, you cannot mount file systems or checkpoints on another Data Mover or VDM.
l When creating VDM synchronous replication sessions, you must define separate FSID ranges for the source and target systems. If the top of the FSID range is reached and more VDMs, file systems, or checkpoints are created, an FSID conflict may occur during reverse or failover. To avoid an FSID conflict, the FSID range is enforced to be no less than <fsidrange> (default 8192) as defined in nas_param. The FSID ranges on the source and destination VNX2 systems must not overlap. nas_checkup identifies any potential FSID conflict for all active VDMs under synchronous replication sessions on the system.
l If the synchronous replication service or session status is not in_sync when a disaster occurs, the system cannot guarantee the success of failover on the synchronous replication session.
l A period of data unavailability (DU) occurs during a synchronous replication session reverse or failover operation. The actual DU time depends on how many file systems or checkpoints exist on the sync-replicable NAS pool.
l After a disaster occurs, DU starts. When a synchronous replication session failover is executed, the DU continues until the failover operation succeeds.
l After failover, if the original active Data Mover for the VDM synchronous replication session still works, it may run into a rolling panic because the underlying LUNs become read-only. You need to clean the synchronous replication session to return it to a healthy state.
l The Home Directory feature does not support Continuous Availability (CA). Keep this in mind if you configure CIFS CA support for VDM synchronous replication.
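The FSID-range rules above (non-overlapping ranges, and a minimum range size that defaults to 8192) can be sanity-checked before enabling the service. This is an illustrative sketch only: the helper name is hypothetical, and interpreting the 8192 minimum as a range width is an assumption; the sample ranges are the ones used in the enable-service example later in this document.

```shell
# Hedged sketch: check two candidate FSID ranges for the non-overlap rule
# and for the assumed minimum range width (default 8192, per nas_param).
fsid_ranges_ok() {
  local lfrom=$1 lto=$2 rfrom=$3 rto=$4 min=${5:-8192}
  # assumed interpretation: each range must span at least $min FSIDs
  [ $((lto - lfrom + 1)) -ge "$min" ] || { echo "local range too small"; return 1; }
  [ $((rto - rfrom + 1)) -ge "$min" ] || { echo "remote range too small"; return 1; }
  # two ranges overlap when each one starts at or before the other ends
  if [ "$lfrom" -le "$rto" ] && [ "$rfrom" -le "$lto" ]; then
    echo "overlap"
    return 1
  fi
  echo "ok"
}

fsid_ranges_ok 4096 18000 18001 32000   # → ok
```

A failed check here predicts the same condition that nas_checkup flags as a potential FSID conflict.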
Manual migration
VDM synchronous replication does not replicate Data Mover configurations or Cabinet-level services. The following Data Mover configuration and Cabinet-level service items must be manually migrated using the migrate_system_conf command before synchronous replication is created, before failover or reverse operations, and any time this data changes:
Note
The routing table, including the default route, does not get migrated with this command.
The routes must be configured manually. To add a default gateway or a route entry, a
network interface with a status of UP must exist.
l DNS
l NIS
l NTP
l Usermapper client
l FTP/SFTP
l LDAP
l HTTP
l CEPP
l CAVA
l Server parameters
l Netgroup
l Nsswitch
l Hosts
l ntxmap
l Usermapper service
Related information
Specific information related to the features and functionality described in this document is included in:
The complete set of EMC VNX2 customer publications is available on the EMC Online Support website at http://Support.EMC.com. After logging in to the website, click Support by Product to locate information for the specific product or feature required.
CHAPTER 2
Concepts
Topics include:
Sync-replicable NAS pool
A sync-replicable NAS pool must meet the following criteria:
l It is a user-defined pool.
l All file systems and checkpoints on it are either unmounted or mounted on its sync-replicable VDM.
l Production file systems (PFSs) and checkpoints (SavVol) on it must be on the same storage pool. They must not use space on a different storage pool.
l User-defined File storage pools exist for each VDM synchronous replication session that is to be created. Only a single user-defined pool can be allocated per VDM synchronous replication session.
l All volumes within the user-defined pools must be disk volumes (dvols).
l The disk volumes in the membership must match in number and size those in the sync-replicable NAS pool on the active system under the synchronous replication session.
Sync-replicable VDM
A sync-replicable VDM must meet the following criteria:
l It is the only VDM within the sync-replicable NAS pool. This feature restricts one VDM per pool by design.
l All file systems (including the VDM rootfs) mounted on it must be split-log file systems.
l All file systems (including the VDM rootfs) and checkpoints mounted on it must be created on its sync-replicable NAS pool.
Use cases
VDM MetroSync can be used in the following use cases:
l Human error
l Power outages
l Maintenance
l Load balancing
l More efficient use of hardware (a VDM-level DR solution does not require standby Data Mover hardware like a Cabinet-level DR solution does)
CHAPTER 3
Setting up prerequisites
This chapter contains information about setting up your environment for VDM
synchronous replication for disaster recovery (DR) between two VNX2 systems using
MirrorView/S.
Note
You must complete the tasks included in this chapter before you start the configuration
tasks in the next chapter.
Topics include:
l When using replication for the first time, or on new systems, configure the IP alias first, then use the IP alias in the nas_cel -create -ip <ipaddr> command.
l For existing systems with existing replication sessions, the current slot_0 primary Control Station IP address must be used for the IP alias. You must then assign a new IP address to slot_0. For example:
# nas_config -IPalias -create 0
Do you want slot_0 IP address <10.6.52.154> as your alias? [yes or no]: yes
Please enter a new IP address for slot_0: 10.6.52.95
Done
#
l When an IP alias is deleted using the nas_config -IPalias -delete command, the IP address of the primary or secondary Control Station is not changed; changes to those IP addresses must be made separately. Replication depends on communication between the source Control Station and the destination Control Station, so deleting an IP alias that is used for replication breaks that communication. The IP address that was used as the IP alias must be restored on the primary Control Station to restore the communication.
l While performing a NAS code upgrade, do not use an IP alias to log in to the Control Station.
The EMC VNX Command Line Interface Reference for File provides details on the nas_config command.
Create an IP alias
Procedure
1. Log in as nasadmin, and then su to root.
2. To create an IP alias for the Control Station, use one of the following scripts:
3. Depending on whether you want to use a new IP address or the current IP address as the IP alias, answer no or yes at the script prompt:
l To use a new IP address as the IP alias, answer no to the question, and then type the new IP address to use as an IP alias within the same network.
l To use the current IP address as the IP alias, answer yes to the question, and then type a new IP address to replace the current IP address.
l The systems are up and running, and IP network connectivity exists between the Control Stations of both VNX2 systems. Verify whether a relationship already exists by using the nas_cel -list command.
l The source and destination Control Station system times must be within 10 minutes of each other. Take into account time zones and daylight saving time, if applicable. EMC recommends using an NTP service on the Control Stations to control this function. You can set this up using VIA during the VNX system initialization process or by using the nas_cs CLI command.
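The 10-minute clock-skew requirement can be checked by comparing epoch timestamps taken from each Control Station (for example, via date +%s). A minimal sketch; the function name and timestamp values are hypothetical examples.

```shell
# Hedged sketch: verify two clock readings (epoch seconds) differ by no
# more than 10 minutes (600 seconds). Timestamps here are examples only.
skew_ok() {
  local t1=$1 t2=$2 limit=600
  local diff=$((t1 - t2))
  if [ "$diff" -lt 0 ]; then diff=$((-diff)); fi
  if [ "$diff" -le "$limit" ]; then
    echo "within limit"
  else
    echo "skew ${diff}s exceeds limit"
  fi
}

skew_ok 1700000000 1700000300   # 5 minutes apart → within limit
```

If the skew exceeds the limit, fix NTP on both Control Stations before creating the Control Station-to-Control Station relationship.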
The same 6-15 character passphrase must be used on both VNX systems.
Note
If the VNX systems have two Control Stations, an IP alias needs to be created, and the Control Station-to-Control Station communication should be set up using the IP alias. For more information about creating an IP alias, see IP aliasing for VNX with two Control Stations on page 20.
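The passphrase constraint (6-15 characters, identical on both systems) can be validated up front. An illustrative sketch; the function name is hypothetical, and the sample value is the one used in this document's examples.

```shell
# Hedged sketch: check that a candidate passphrase is 6-15 characters,
# the length the nas_cel -create command requires.
passphrase_ok() {
  local len=${#1}
  if [ "$len" -ge 6 ] && [ "$len" -le 15 ]; then
    echo "ok"
  else
    echo "invalid length: $len"
  fi
}

passphrase_ok "nasadmin"   # 8 characters → ok
```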
To establish communication between the source and destination sites, do the following:
Procedure
1. On the source VNX system, to establish the connection to the destination VNX system
in the replication configuration, use this command syntax:
$ nas_cel -create <cel_name> -ip <ip> -passphrase
<passphrase>
where:
<cel_name> = name of the remote destination VNX system in the configuration
<ip> = IP address (or IP alias for a VNX2 system with two Control Stations) of the
remote Control Station in slot 0
<passphrase> = the secure passphrase used for the connection, which must have 6-15
characters and be the same on both sides of the connection
Example:
To add an entry for the Control Station of the destination VNX system, cs110, from the
source VNX system cs100, type:
$ nas_cel -create cs110 -ip 192.168.168.10 -passphrase nasadmin
Output:
operation in progress (not interruptible)...
id         =
name       =
owner      =
device     =
channel    =
net_path   =
celerra_id =
passphrase =
2. Repeat step 1 on the destination VNX system to establish the connection to the
source VNX system in the replication configuration.
Example:
To add an entry for the Control Station of the source VNX system, cs100, from the
destination VNX system cs110, type:
$ nas_cel -create cs100 -ip 192.168.168.12 -passphrase nasadmin
Output:
operation in progress (not interruptible)...
id         = 2
name       = cs100
owner      = 0
device     =
channel    =
net_path   = 192.168.168.12
celerra_id = APM000340000680000
passphrase = nasadmin
Click Add.
Log in to the destination system with the correct username and password.
l Both source and destination systems have been added as destination systems of each other (Control Station-to-Control Station relationship) with the same passphrase (use the nas_cel command).
l Cabinet DR has not been created on either the local or remote system.
Before you can create a sync-replicable VDM, you must enable the synchronous replication service between the source and destination systems.
Procedure
1. At either the source or destination site, type the following command syntax:
$ nas_cel -syncrep -enable {<cel_name>|id=<cel_id>}
-local_fsidrange <from>,<to>
-remote_fsidrange <from>,<to>
[-local_storage {raid_group=<rg_id>|block_pool=<pool_id>}]
[-remote_storage {raid_group=<rg_id>|block_pool=<pool_id>}]
where:
-enable {<cel_name>|id=<cel_id>} = enables the VDM Synchronous Replication feature on the specified VNX system (source or destination).
-local_fsidrange <from>,<to> = sets the file system identifier range on the local
VNX2 system. This range must not overlap the file system identifier range on the
remote VNX2 system. Valid values are from 1 to 32767.
-remote_fsidrange <from>,<to> = sets the file system identifier range on the
remote VNX2 system. This range must not overlap the file system identifier range on
the local VNX2 system. Valid values are from 1 to 32767.
raid_group=<rg_id> or block_pool=<pool_id> = specify either a local and remote
RAID group or a block storage pool to create synchronous replication mirror LUNs.
Example:
To enable the VDM synchronous replication service, on either the source or
destination system type:
nas_cel -syncrep -enable id=1 -local_fsidrange 4096,18000
-remote_fsidrange 18001,32000 -local_storage raid_group=0
-remote_storage raid_group=0
Output:
Now doing precondition check... done
Now saving FSID range [18001,32000] on remote system... done
Now saving FSID range [4096,18000] on local system... done
Now adding remote storage info to local system... done
Now creating sync replication mirror LUNs on local system... done
Now creating sync replication mirror LUNs on remote system... done
Now creating Mirrors and Clones (may take several minutes)... done
Now waiting for Mirrors and Clones to finish initial copy... done
Now adding NBS access to local server server_2... done
Now adding NBS access to local server server_3... done
Now creating mountpoint for sync replica of NAS database... done
Now mounting sync replica of NAS database... done
Now enabling sync replication service on remote system... done
done
Note
The initial copy operation takes longer for thick LUNs than for thin LUNs.
Results
The synchronous replication service record is saved on both sides. FSID ranges have
been set for both the source and destination systems.
If the enable service operation fails, and you want to choose different LUNs to use as
synchronous replication mirror LUNs, you must first disable the service before choosing
different LUNs.
If the MirrorView connection type is iSCSI, the enable service operation may time out
while waiting for mirrors and Clone groups to complete an initial copy. If this happens, try
to enable the service again after the mirrors and Clone groups become in-sync (this may
take hours).
Both source and destination VNX2 systems have been added as destination systems of each other (Control Station-to-Control Station trusted relationship) with the same passphrase (use the nas_cel command).
Procedure
1. In Unisphere, select the source VNX system.
For detailed information about creating a pool, refer to the Unisphere online help. You can also create step-by-step instructions tailored to your VNX environment by going to the mydocuments site at https://mydocuments.emc.com/, clicking VNX Series, and, under VNX tasks, clicking Configure LUNs and storage pools.
2. Click Storage > Storage Configurations > Storage Pools.
3. Click Create to create a block storage pool.
4. Click OK.
5. Select the destination VNX system and repeat these steps.
CHAPTER 4
Configuring
The topics in this chapter provide detail on the individual tasks to configure VDM
synchronous replication.
Note
The tasks included in the previous chapter must be completed before you start the tasks
in this chapter.
Topics include:
Create the same number of volumes, using the same sizes as the source, to be presented to the target VNX and used to create the target NAS pools.
Data LUNs for NAS resources and NAS disk volumes have been created.
Create the desired number of NAS user-defined pools (one for each sync-replicable VDM) using the NAS disks you created in the previous section. Name these pools appropriately, for example vdm1pool and vdm2pool.
Procedure
1. On the source VNX system, run the following command to create a NAS user-defined
pool:
$ nas_pool -create -name <name> -volumes
<volume_name>[,<volume_name>,...]
where:
-create = creates a user-defined storage pool.
<name> = assigns a name to the new storage pool. If no name is specified, one is assigned by default.
Make sure to use the same number of volumes with the same sizes, as each source and target pool must match in size.
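The source/target pool-matching rule can be checked by comparing the two volume-size lists. A sketch with hypothetical sizes; in practice the sizes would come from nas_pool or nas_volume output.

```shell
# Hedged sketch: confirm source and target pools use the same number of
# volumes with the same sizes (order-insensitive). Sizes (GB) are examples.
pools_match() {
  local a b
  a=$(printf '%s' "$1" | tr ',' '\n' | sort -n | tr '\n' ',')
  b=$(printf '%s' "$2" | tr ',' '\n' | sort -n | tr '\n' ',')
  if [ "$a" = "$b" ]; then echo "match"; else echo "MISMATCH"; fi
}

pools_match "1000,500,500" "500,500,1000"   # → match
```

Sorting before comparing makes the check order-insensitive, since only the count and per-volume sizes need to agree.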
The synchronous replication service has been enabled between the two VNX2 systems, source and destination.
If you intend to use an existing VDM for synchronous replication, see Modify a VDM from non-sync-replicable to sync-replicable on page 53 for instructions to modify a non-sync-replicable VDM to be sync-replicable.
Create a sync-replicable VDM
Procedure
1. At the source site, type the following command syntax:
$ nas_server [-name <name>] [-type vdm] -create <movername>
[-setstate <state>] [pool=<pool>] [-option <options>]
where:
If you specify the mover= option, only the sync-replicable VDM on the sync-replicable
NAS pool can be specified.
Procedure
1. Specify the name of the file system, its size, the sync-replicable NAS pool, and the
VDM name by using the nas_fs command.
Use the following command syntax:
nas_fs -name <name> -create size=<integer> [T|G|M]
pool=<pool>
where:
<name> = name of the VDM file system. A file system name can be up to 240 characters, but cannot begin with a dash (-), consist entirely of integers or be a single integer, contain the word root, or contain a colon (:).
<integer> = size of the file system to create, in terabytes (T), gigabytes (G), or
megabytes (M).
<pool> = name of the sync-replicable NAS pool.
Example:
To create a file system on a sync-replicable NAS pool, type:
nas_fs -name VDM1_FS -create size=100G pool=VDM1_Pool
Results
The split-log file system is created on the sync-replicable NAS pool.
Create the first file system checkpoint on a sync-replicable NAS pool
Example:
To create a file system checkpoint on a sync-replicable NAS pool, type:
fs_ckpt VDM1_FS -Create
2. Alternatively, you can also create a checkpoint schedule by using the following
syntax.
nas_ckpt_schedule -create <name>
-filesystem {<fs_name>|id=<id>} -recurrence
{ once -runtimes <HH:MM> [-start_on <YYYY-MM-DD>]
[-ckpt_name <ckpt_name>]
| daily -runtimes <HH:MM>[,...] {-keep <number_of_ckpts> |
-ckpt_names <ckpt_name>[,...]} [-every <number_of_days>]
[-start_on <YYYY-MM-DD>] [-end_on <YYYY-MM-DD>]
| weekly -days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun}[,...]
-runtimes <HH:MM>[,...] {-keep <number_of_ckpts> |
-ckpt_names <ckpt_name>[,...]} [-every <number_of_weeks>]
[-start_on <YYYY-MM-DD>] [-end_on <YYYY-MM-DD>]
| monthly -days_of_month <days>[,...] -runtimes <HH:MM>[,...]
{-keep <number_of_ckpts> | -ckpt_names <ckpt_name>[,...]}
[-every <number_of_months>] [-start_on <YYYY-MM-DD>]
[-end_on <YYYY-MM-DD>] }
where:
<name> = name of the checkpoint schedule; the name must be a system-wide unique name and can contain up to 128 ASCII characters, including a-z, A-Z, 0-9, and period (.), hyphen (-), or underscore (_). Names cannot start with a hyphen, include an intervening space, or consist entirely of numbers.
<fs_name> = name of the file system of which to create the checkpoint; you can also
use the file system ID; the syntax for the file system ID is filesystem id=<n>
<HH:MM> = time at which the checkpoint creation occurs; specify this value by using
the 24-hour clock format; for example, 23:59
<YYYY-MM-DD> = date on which the checkpoint creation occurs
<ckpt_name> = name of the checkpoint
<number_of_ckpts> = number of checkpoints to keep or create before the checkpoints
are refreshed
<number_of_days> = every number of days to create the checkpoints; for example, 2
generates checkpoints every 2 days; the default is 1
<number_of_weeks> = every number of weeks to create the checkpoints; for example, 2 runs the checkpoint schedule every other week; the default is 1
<days> = one or more days of the month to run the automated checkpoint schedule;
specify an integer between 1 and 31; use a comma to separate the days
<number_of_months> = every number of months to create the checkpoints; for
example, 3 runs the checkpoint schedule every 3 months; the default is 1
Example:
To create a daily checkpoint schedule, type:
nas_ckpt_schedule -create VDM1_Schedule -filesystem VDM1_FS
-recurrence daily -runtimes 20:00 -keep 5 -start_on 2015-08-08
Results
SavVol for the file system is created on the sync-replicable NAS pool along with the first
file system checkpoint.
l The synchronous replication service has been enabled between the two VNX2 systems, source and destination.
l Create an additional interface on each side that is not attached to any VDM. This ensures the Data Mover will always have an interface available for dynamic DNS updates. If you do not create these interfaces, reverse and failover operations on the last VDM may take longer to complete.
Alternatively, you can disable dynamic DNS updates if they are not required in your environment by running the following command:
server_param server_2 -f dns -m updateMode -v 0
This parameter change takes effect immediately.
To create and assign a network interface for each sync-replicable VDM, do the following:
Procedure
1. Type the following command syntax:
server_ifconfig <movername> -create -Device <device_name>
-name <if_name> -protocol {IP <ipv4_addr> <ipmask>
<ipbroadcast>}
where:
<movername> = name of the Data Mover
<device_name> = name of the device
<if_name> = name of the interface
<ipv4_addr>, <ipmask>, and <ipbroadcast> = IP address, mask, and broadcast address.
The IP address is the address of a particular interface. The IP mask includes the
network part of the local address and the subnet, which is taken from the host field of
the address. For example, 255.255.255.0 would be a mask for a Class C network. The
IP broadcast is a special destination address that specifies a broadcast message to a
network.
Example:
To create a network interface for a sync-replicable VDM, type:
server_ifconfig server_2 -create -Device cge-1-0 -name
VDM1_Interface -protocol IP 10.10.10.50 255.255.255.0 10.10.10.255
2. Optionally, if you want the IP to carry over during a reverse or failover operation, then
on the source VNX system, type:
nas_server -vdm <vdm_name> -attach <interface>
where:
<vdm_name> = name of the VDM
<interface> = name of the interface
Example:
To attach a network interface to a sync-replicable VDM, type:
$ nas_server -vdm vdm1 -attach vdm1interface
Create CIFS shares and NFS exports for each file system on your
VDM
The procedures in this section require that file systems have been created on a sync-replicable NAS pool.
Create CIFS shares and NFS exports for each of the file systems on your VDM. For detailed
information about creating CIFS shares and NFS exports, refer to Configuring and Managing
CIFS on VNX and Configuring NFS on VNX. These documents are located on EMC Online
Support (registration required) at http://Support.EMC.com and in the Related documents
section of the VNX Series on the mydocuments site at https://mydocuments.emc.com/.
Migrate Data Mover configurations and cabinet level service
l DNS
l NIS
l NTP
l Usermapper client
l FTP/SFTP
l LDAP
l HTTP
l CEPP
l CAVA
l Server parameters
l Netgroup
l Nsswitch
l Hosts
l Usermapper service
The routing table, including the default route, does not get migrated with this command. The routes must be configured manually. To add a default gateway or a route entry, a network interface with a status of UP must exist.
l The synchronous replication service has been enabled between the two VNX2 systems, source and destination.
l Both source and destination systems are operating with VNX OE for File version 8.1.8.130 or later.
l The destination user-defined pool is not in use and meets all sync-replicable NAS pool criteria.
l The destination user-defined pool must match the size of the source user-defined pool. If equal performance is desired at the destination site, in relation to the source site, the destination user-defined pool should be built using the same configuration.
l The local and remote Data Movers should have the same I18N mode.
l The sync-replicable NAS pool of the specified VDM does not contain a file system or checkpoint with an FSID that is used on the remote system.
l If you want the destination system to match the configuration of the source, you must manually migrate some Data Mover and Cabinet-level service items from the destination VNX2 system. Refer to Migrate Data Mover configurations and cabinet level service on page 34 for details.
Output:
Now validating params... done
Now marking remote pool as standby pool... done
Results
The initial synchronization takes time to complete. You can monitor the synchronization status by running the nas_syncrep -info command.
Note
If you create and attach a new IP interface on the source VDM after the replication session has been created, a warning appears stating that this new interface will not be reversed or failed over. You must manually create the new interface, with the same name, in the DOWN state on the destination VNX system before you can reverse or fail over the session. If the interface is not created on the destination VNX system, a VDM synchronous replication session reverse or failover operation will fail.
The Home Directory feature does not support Continuous Availability (CA) capability.
Keep this in mind when you configure CIFS Continuous Availability (CA) support for the
VDM synchronous replication feature.
Procedure
1. If not already enabled, enable the SMB 3.0 protocol.
Example:
To enable the SMB 3.0 protocol, type:
$ server_cifs server_2 -add security=NT,dialect=SMB3
2. Mount and export network Shares with the smbca flag set.
A VNX File Server configuration that uses CIFS CA requires network Shares that are
mounted and exported with a special smbca flag. CA mount and Export options are
not supported in Unisphere. For more information about CIFS, see Configuring and
Managing CIFS on VNX.
Example:
To mount and export network Shares, type:
$ server_mount server_2 -o smbca fs1 /fs1
$ server_export server_2 -P cifs -name fileshare -option
type=CA /fs1
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Configuring
6. If the VNX system needs to communicate outside of its subnet, you must configure
route settings using one of the following methods (refer to Configuring and Managing
Networking on VNX for details):
Add a default gateway using the server_route CLI command, for example:
$ server_route server_2 -add default 10.11.12.1
Add a route entry using the server_route CLI command, for example:
$ server_route server_3 -add net 10.13.14.15 10.11.12.1
or
$ /nas/bin/migrate_system_conf -mover -source_system id=1 source_user nasadmin -source_mover server_2 -destination_mover
server_2 -service dns
8. Ensure the target network interface can work on the destination VNX2 system.
The source and destination network interfaces for a synchronous replication session
are using the same name. The network interface on the destination VNX2 system is in
the down state. If the network interface on the destination VNX2 system is created
automatically during a synchronous replication session creation, then it is configured
using the same configuration as the source VNX2 system, including the IP address. If
the destination VNX2 system network interface is created manually after synchronous
replication session creation, it can be configured with any configuration which works
on the destination VNX2 system. If the network interface on the source and
destination are using different IP addresses, simply bring the destination network
interface up and see that it is working by using the server_ping CLI command.
37
Configuring
Note
If the network interfaces on the source and destination are using the same IP address,
the network interface on the destination cannot be brought up; otherwise, there will
be an IP address conflict on the network. Use one of the following ways to test
whether the network interface works:
l
Create a network interface using an IP address within the same subnet as the
destination network interface, bring it up, and test if it works using the
server_ping CLI command.
If the target network interface cannot work, check with your network administrator.
Ensure the target Data Mover has the correct routes, and, if applicable, the VLAN
functions.
Add a route entry using the server_route CLI command, for example:
$ server_route server_3 -add net 10.13.14.15 10.11.12.1
4. Ensure the target network interface can work on the destination VNX2 system.
The source and destination network interfaces for a synchronous replication session
are using the same name. The network interface on the destination VNX2 system is in
the down state. If the network interface on the destination VNX2 system is created
automatically during a synchronous replication session creation, then it is configured
using the same configuration as the source VNX2 system, including the IP address. If
the destination VNX2 system network interface is created manually after synchronous
38
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Configuring
replication session creation, it can be configured with any configuration which works
on the destination VNX2 system, as long as it uses the same name as the source
VNX2 system network interface. To ensure NFS I/O transparency, the source and
destination network interface must use the same IP address.
Note
If the network interfaces on the source and destination are using the same IP address,
the network interface on the destination cannot be brought up; otherwise, there will
be an IP address conflict on the network. Use one of the following ways to test
whether the network interface works:
l
Create a network interface using an IP address within the same subnet as the
destination network interface, bring it up, and test if it works using the
server_ping CLI command.
If the target network interface cannot work, check with your network administrator.
Ensure the target Data Mover has the correct routes, and, if applicable, the VLAN
functions.
39
Configuring
40
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
CHAPTER 5
Managing by using the VNX for File CLI
Topics include:
l
l
l
l
l
l
l
l
l
l
l
l
l
l
l
l
l
l
l
41
Reverse should be used when attempting to move a VDM from one VNX2 system to
the other while the source system is still available. For more information about when
to use reverse, see Reverse a Sync Replication session on page 43.
Failover should only be used in the event that the source VNX2 system is not
reachable.
For more information about when to use Failover and Clean, see Failover a sync
replication session on page 45.
Note
The Failover and Reverse tasks can also be performed automatically by using the VDM
MetroSync Manager software. Refer to the "Using VDM MetroSync Manager" chapter for
details.
The remaining topics are ancillary tasks related to the management of the VDM
synchronous replication environment and are also useful for troubleshooting.
Output:
id
0
1
name
my_system1
my_system2
syncrep
initialized
enabled
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Output:
id
= 83
name
= my_vdm
acl
= 0
type
= vdm
server
= server_2
rootfs
= root_fs_vdm_my_vdm
I18N mode = ASCII
mountedfs =
syncreplicable = True
member_of =
status
:
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
Note
All network interfaces on the source VNX2 system have the corresponding network
interfaces with the same names on the destination VNX2 system specified for the
synchronous replication session.
The synchronous replication's session status and service status for remote to local is
synchronized.
Local and remote Data Mover should have the same I18N mode.
It is recommended that you leave at least one interface available on each physical
Data Mover at all times on both source and destination systems. To do this, add an
Reverse a synchronous replication session
43
interface to each primary Data Mover on both sites but leave it unattached to any
VDMs. The additional interface should be on the same subnet as the one that is on
the VDMs.
Because the interface is brought down as part of the reverse process, reversing the
last VDM on a Data Mover might remove all available interfaces from that Data Mover.
This prevents the Data Mover from communicating with the environment such as NTP
servers, DNS servers, and so on.
Also, in order to support CIFS CA (Continuous Availability) on reverse, you must use the
SMB 3.0 client with CA enabled. If the service outage time can be less than the CIFS
timeout, CIFS CA can be achieved. To configure CIFS CA support on VDM synchronous
replication reverse, see Configure CIFS CA support on page 36 for details. If you need to
configure for NFS I/O transparency, see Configure for NFS I/O transparency in VDM
synchronous replication session on page 38 for details.
This task reverses the direction of a synchronous replication session along with the
source and destination roles of the two VNX2 systems involved in the synchronous
replication session.
Procedure
1. At the destination site, type the following command syntax:
$ nas_syncrep -reverse {<name>|id=<id>}
where:
<name> = name of the synchronous replication session
<id> = identifier of the synchronous replication session
Example:
To reverse the direction of a synchronous replication session using the ID of the
synchronous replication session, type:
$ nas_syncrep -reverse id=4315
Output:
WARNING: You have just issued the nas_syncrep -reverse command.
There will be a period of Data Unavailability during the reverse
operation. After the reverse operation, the VDM/FS(s)/
checkpoint(s) protected by the sync replication session will be
reversed to the local site. Are you sure you want to proceed? [yes
or no]
44
done: 19 s
done: 11 s
done: 1 s
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Now
Now
Now
Now
done
done:
done:
done:
done:
Elapsed time:
0 s
0 s
16 s
17 s
116s
Results
The original destination VNX2 system becomes the source VNX2 system for the VDM in
the synchronous replication session. The NAS client can only access data from the new
source VNX2 system for the VDM. The NAS client cannot access data from the original
VNX2 system.
Until a reverse operation succeeds, do not change the VDM, file systems, and
checkpoints protected by the synchronous replication session.
Note
After the synchronous replication session is reversed, for the disk volumes in the new
standby pool, their visible servers (listed in the corresponding servers field) will
become empty.
After you finish
After you have reversed the direction of a VDM synchronous replication session so that
the destination site is the active site, and you want to make the original source site the
active site again, run the nas_syncrep -reverse command on the original source
site. This action returns the VDM, file system, file system checkpoints, file system
checkpoint schedules, and related network interfaces to the original source site, which
becomes active. The VDM at the destination site becomes standby.
Local and remote Data Movers should have the same I18N mode.
All network interfaces on the source VNX2 system have corresponding network
interfaces with the same names on the destination VNX2 system specified for the
synchronous replication session.
The synchronous replication's session status and service status for remote to local
should not be sync_in_progress.
45
Note
If the network interface on the original source system is not in the down state (system will
only try to turn it down during the failover, but it may fail), I/O transparency cannot be
guaranteed.
Also, in order to support CIFS CA (Continuous Availability) on failover, you must use the
SMB 3.0 client with CA enabled. If the service outage time can be less than the CIFS
timeout, CIFS CA can be achieved. To configure CIFS CA support on VDM synchronous
replication failover, see Configure CIFS CA support on page 36 for details. If you need to
configure for NFS I/O transparency, see Configure for NFS I/O transparency in VDM
synchronous replication session on page 38 for details.
After a disaster occurs and the active VNX2 system is down, failover a sync-replicable
VDM to the standby VNX2 system to make it active.
NOTICE
Failover should only be used in situations where the source site is not available. In
situations where the source site is still accessible, a reverse must be used instead.
Note
When failover starts, you must not perform any operation on the VDM or synchronous
replication session on the failed site.
Procedure
1. At the destination site, type the following command syntax:
$ nas_syncrep -failover {<name>|id=<id>}
where:
<name> = name of the VDM synchronous replication session
<id> = identifier of the VDM synchronous replication session
Example:
To failover the VDM to the standby VNX2 system, type:
$ nas_syncrep -failover id=4560
Output:
WARNING: You have just issued the nas_syncrep -failover command.
Verify whether the peer system or any of its file storage
resources are accessible. If they are, you should issue the
nas_syncrep -reverse command instead. Running the nas_syncrep
-failover command while the peer system is still accessible
could result in Data Loss if the session is not in sync. Are you
sure you want to proceed? [yes or no]
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
30 s
7 s
1 s
done: 4
15 s
5 s
0 s
3 s
Results
Until a failover operation succeeds, do not change the VDM, file systems, and
checkpoints protected by the synchronous replication session.
After the failover completes, the original standby system becomes the active system for
the VDM in the synchronous replication session. The NAS client will now access data from
the VDM on this active system. The original active system becomes the standby system
for the VDM in the synchronous replication session. After failover, the LUNs under
synchronous replication on the original active system (now the standby) become Read
Only.
After you finish
If the failed system is offline or the MirrorView link is down, the synchronous replication
session's state becomes Stopped. The data cannot be synchronized back to the original
source side.
1. Fix the issue that caused the failure at the source site.
NOTICE
If the source site Data Mover was powered off, the Data Mover network must be
disconnected before powering it on, to avoid IP address conflict.
2. Perform a Clean operation. Refer to Clean synchronous replication session on page
47 for details.
3. If the source site Data Mover network was disconnected, reconnect it.
4. At the source site, run the nas_syncrep -reverse command. This action
restores normal operation at the source site. It brings the VDM, file system, file
system checkpoints, file system checkpoint schedules, and related network
interfaces online at the source site and changes the corresponding VDM at the
destination site to standby.
47
NOTICE
If the source site Data Mover was powered off, the Data Mover network must be
disconnected before powering it on, to avoid IP address conflict.
2. Run the nas_syncrep -Clean command from the source site for either a specified
synchronous replication session or all synchronous replication sessions stored in the
source NAS database. This action cleans the source site of all unnecessary objects
and prepares it for a nas_syncrep -reverse operation.
If the nas_syncrep -Clean command is not run, you are prevented from reversing
the replication session.
3. If the source site Data Mover network was disconnected, reconnect it.
4. At the source site, run the nas_syncrep -reverse command. This action restores
normal operation at the source site. It brings the VDM, file system, file system
checkpoints, file system checkpoint schedules, and related network interfaces online
at the source site and changes the corresponding VDM at the destination site to
standby.
5. At the original source VNX2 system site, type the following command syntax:
$ nas_syncrep -Clean {-all|<name>|id=<id>}
where:
<name> = name of the synchronous replication session
<id> = identifier of the synchronous replication session
Example:
To Clean the synchronous replication session on the original source VNX2 system for a
single synchronous replication session, type:
$ nas_syncrep -Clean id=8002
Output:
Now cleaning session my_session1 (may take several minutes)...done
Now starting session my_session1...
done
Note
The Data Mover may reboot during a clean operation. If a Data Mover reboot is
needed, a warning prompt message is displayed. Otherwise, the warning prompt
message does not display.
Results
The following occur as a result of a successful Clean operation:
48
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
The Control Station communication channel is ready for communication between the
two sites.
Example:
To delete a synchronous replication session, type:
$ nas_syncrep -delete my_syncrep1
Output:
WARNING: Please do not perform any operation on my_syncrep1 on
standby system until delete is done.
Deleting...done
Results
The VNX2 system removes the synchronous replication session from the local NAS
database.
Note
After a synchronous replication session is deleted, for the disk volumes in the userdefined pool, their type will be updated (changed to the corresponding unmirrored type).
49
Output:
id
= 1
name
= CS0_4224225
syncrep
= enabled
fsidrange
= 10001,20000
type
= MirrorView
local_storage
=
APM00144311642,MirrorView=nasdb_0784_642_mv,CloneGroup=nasdb_0784_6
42_clone
remote_storage
=
APM00143330784,MirrorView=nasdb_642_0784_mv,CloneGroup=nasdb_642_07
84_clone
service_status
=
local_to_remote = in_sync
remote_to_local = sync_in_progress (47%)
Results
Information about the VDM synchronous replication service between the source system
and the specified destination system is displayed.
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
and checking whether the syncreplicable= option appears. The option does
not appear if the VDM is not set to sync-replicable.
Output:
id
= 1
name
= vdm1
acl
= 0
type
= vdm
server
= server_2
rootfs
= root_fs_vdm_vdm1
I18N mode = UNICODE
mountedfs = vdm1_fs2,vdm1_fs1,vdm1_fs3
syncreplicable = True
member_of =
status
:
defined = enabled
actual = loaded, active
Interfaces to services mapping:
interface=vdm1 :cifs vdm
Output:
id
= 1
name
= vdm1
acl
= 0
type
= vdm
server
= server_2
rootfs
= root_fs_vdm_vdm1
I18N mode = UNICODE
mountedfs = vdm1_fs2,vdm1_fs1,vdm1_fs3
member_of =
status
:
defined = enabled
actual = loaded, active
Interfaces to services mapping:
interface=vdm1 :cifs vdm
51
Output:
Now unmounting sync replica of NAS database...
done
Now deleting mountpoint for sync replica of NAS database...
done
Now deleting remote storage info...
done
Now removing NBS access to local server server_2...
done
Now removing NBS access to local server server_3...
done
Now deleting local Mirror and Clone...
done
Now disabling service (including deleting Mirror and Clone) on remote system...done
Now removing FSID range [10001,20000] on remote system...
done
Now removing FSID range [1000,10000] on local system...
done
Now removing other sync replication service settings on local system...
done
done
The specified VDM should not have a synchronous replication session on it.
Output:
id
name
acl
type
server
rootfs
I18N mode
mountedfs
member_of
status
defined
52
=
=
=
=
=
=
=
=
=
:
=
80
test_vdm
0
vdm
server_2
root_fs_vdm_test_vdm
ASCII
enabled
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Results
The sync-replicable flag on the VDM and pool are unset.
The underlying pool should meet the criteria of a sync-replicable NAS pool.
Note
If a VDM and the file systems on it are created before upgrading to VNX operating
environment (OE) for file version 8.1.8.130 or later, using the default log type (the default
before version 8.1.6 is common log), the VDM cannot be converted to sync-replicable.
To modify a VDM from non-sync-replicable to sync-replicable, do the following:
Procedure
1. Type the following command syntax:
$ nas_server -vdm <vdm_name> -option syncreplicable=<yes|no>
where:
-vdm <vdm_name> -option syncreplicable=<yes|no> = specifies whether the
VDM is sync-replicable.
Example:
To modify a VDM from non-sync-replicable to sync-replicable, type:
nas_server -vdm test_vdm -option syncreplicable=yes
Output:
id
= 80
name
= test_vdm
acl
= 0
type
= vdm
server
= server_2
rootfs
= root_fs_vdm_test_vdm
I18N mode = ASCII
mountedfs =
syncreplicable = True
member_of =
status
:
defined = enabled
actual = loaded, ready
Interfaces to services mapping:
Results
The sync-replicable flag on the VDM and pool are set.
53
The VDM that is being deleted cannot contain mounted file systems.
Output:
id = 3
name = my_syncrep1
acl = 0
type = vdm
server =
rootfs = root_fs_my_syncrep1
I18N mode = UNICODE
mountedfs =
member_of =
status :
defined = enabled
actual = permanently unloaded
Interfaces to services mapping:
Results
The pool under the deleted VDM becomes a non-sync-replicable VDM pool as a result of a
successful Delete operation.
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
where:
-list = displays all the configured synchronous replication sessions on the local
systems NAS database and those having the local system as the standby system in
the remote system's replicated NAS database.
Example:
To list the VDM synchronous replication session information, type:
nas_syncrep -list
Output:
id
5020
10030
name
my_syncrep1
my_syncrep2
vdm_name
my_vdm
my_vdm
remote_system
-->my_system1
<--my_system1
session_status
sync_in_progress
in_sync
Output:
id
name
vdm_name
syncrep_role
local_system
local_pool
local_mover
remote_system
remote_pool
remote_mover
consistency_group
session_status
local_cg_state
local_cg_condition
remote_cg_state
remote_cg_conditoin
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
5020
my_syncrep1
my_vdm
active
my_system
my_pool1
server_2
my_system1
my_pool1
server_2
syncrep_72_0784_72_642
sync_in_progress (8%)
Synchronizing
Active
Synchronizing
Active
55
You must specify the volume to extend the sync-replicable NAS pool.
To add disk volumes into a sync-replicable NAS pool without a synchronous replication
session on it, do the following:
Procedure
1. Specify the volumes to extend the sync-replicable NAS pool by using the following
command syntax:
nas_pool -xtend <name> [-volumes
<volume_name>[,<volume_name>,...]]
where:
<name> = name of the storage pool
<volume_name> = name of one or more volumes
Example:
To extend the sync-replicable NAS pool, type:
nas_pool -xtend VDM1_Pool -volumes d17,d18,d19,d20,d21
Results
The sync-replicable NAS pool is extended with the new volumes.
56
You must specify the volume to extend the sync-replicable NAS pool.
Verify that the synchronous replication session's status is in_sync before extending a
pool by running the nas_syncrep -info <session_name> command.
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
If the synchronous replication session is failed over, before extending the pool, you
must ensure that the session is cleaned by using the nas_syncrep -Clean
<session_name> command.
To add disk volumes into a sync-replicable NAS pool with a synchronous replication
session on it, do the following:
Procedure
1. On the main VDM MetroSync Manager window, verify that the Service State is
Stopped. If not, click Stop.
2. In Unisphere, navigate to Storage > LUNs. Click Create and provision additional LUNs.
3. Add the LUNs to the ~filestorage Storage Group.
4. Click Rescan Storage Systems from the task list. This may take a few minutes to
complete.
5. To view newly added dVols, run the nas_disk -list command. Note that the new
dVols are now available for use.
6. Repeat steps 1 to 4 on the target VNX. Make sure that you create the same size LUNs
on both systems.
7. On your source VNX, run the following script to create mirrors for the new LUNs:
/nas/sbin/syncrep/syncrep_modify_mirror -create
-session <session_name> -local_disk_volumes
<volume_name>[,<volume_name>,...] -remote_disk_volumes
<volume_name>[,<volume_name>,...]
where:
<session_name> = name of the VDM syncrep session
<volume_name> = name of the volume or volumes to use, separated by commas
Example:
/nas/sbin/syncrep/syncrep_modify_mirror -create -session
VDM1_Session -local_disk_volumes d17,d18,d19,d20,d21
-remote_disk_volumes d17,d18,d19,d20,d21
Output:
Now creating mirror(s)...
Now waiting for mirror(s) to complete initial copy...
Done
done
done
Once the mirrors are created, the initial synchronization starts and the progress is
displayed. Allow the initial synchronization to complete and then proceed to the next
step.
Note
The initial copy operation takes longer for thick LUNs than for thin LUNs.
8. On your source VNX, run the following script to add the mirrors to the Synchronous
Replication sessions Consistency Group:
/nas/sbin/syncrep/syncrep_modify_cg -addmirror
-session <session_name> -local_disk_volumes
<volume_name>[,<volume_name>,...] -remote_disk_volumes
<volume_name>[,<volume_name>,...]
Extend a sync-replicable NAS pool (with synchronous replication session)
57
Example:
/nas/sbin/syncrep/syncrep_modify_cg -addmirror -session
VDM1_Session -local_disk_volumes d17,d18,d19,d20,d21
-remote_disk_volumes d17,d18,d19,d20,d21
9. On your target VNX, extend the pool by using the -skip_session_check option:
nas_pool -xtend <name> -volumes <volume_name>
[,<volume_name>,...] -skip_session_check
Example:
nas_pool -xtend VDM1_Pool -volumes d17,d18,d19,d20,d21
-skip_session_check
10. On your source VNX, extend the pool by using the -skip_session_check option.
Results
The NAS pool has been expanded on both sides without requiring a full synchronization.
The data connections between NFS and CIFS clients and servers are not impacted.
You must specify the volume (not the pool) to shrink the sync-replicable NAS pool.
To move some disk volumes out of a sync-replicable NAS pool without a synchronous
replication session on it, do the following:
Procedure
1. Specify the volumes to shrink the sync-replicable NAS pool by using the following
command syntax:
nas_pool -shrink <name> [-volumes
<volume_name>[,<volume_name>,...]]
where:
<name> = name of the storage pool
<volume_name> = name of one or more volumes
Example:
58
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Results
The specified volumes are removed from the sync-replicable VDM's NAS pool.
You must specify the volume to shrink the sync-replicable NAS pool.
The automated failover service cannot be running during a shrink operation. Stop the
service if it is still running.
To move some disk volumes out of a sync-replicable NAS pool with a synchronous
replication session on it, do the following:
Procedure
1. On the main VDM MetroSync Manager window, verify that the Service State is
Stopped. If not, click Stop.
2. Delete the synchronous replication session on the VDM. See Delete a Sync Replication
session on page 49.
3. Specify the volumes to shrink the sync-replicable NAS pool using nas_pool.
For detailed information about nas_pool, refer to the nas_pool section of the EMC
VNX Command Line Interface Reference for File.
4. Shrink the standby pool of the original synchronous replication session by specifying
the volumes using nas_pool.
5. Recreate the synchronous replication session on the VDM/sync-replicable NAS pool.
See Create a VDM synchronous replication session on page 34.
Results
The specified volumes are removed from the sync-replicable VDM's NAS pool.
Secondary Control Stations have been installed on the source and destination VNX2
Systems.
The NTP Service is configured on the source and destination secondary Control
Stations.
59
Procedure
1. Log in to the source primary Control Station as root.
2. Add the source secondary Control Station to the VDM synchronous replication
configuration.
Example:
Type:
# /nas/sbin/syncrep/syncrep_config_cs1 -cel_id 1
Output:
Copying NBS configuration file to the secondary CS ...
done
Rebooting the secondary CS to enable NBS configuration ...
done
Output:
Copying NBS configuration file to the secondary CS ...
done
Rebooting the secondary CS to enable NBS configuration ...
done
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
CHAPTER 6
Working with ReplicatorV2 sessions
Topics include:
l
l
l
l
l
l
61
Output:
Enabling PreserveRepv2 on remote array
PreserveRepv2 enabled successfully
Enabling PreserveRepv2 on local array
PreserveRepv2 enabled successfully
System A to System C
System B to System C
System C to System A
System C to System B
62
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
System A
System B
2x on System C
System A to System C
System B to System C
System C to System A
System C to System B
4. On System C:
a. Create a new pool.
b. Create LUNs.
c. Add LUNs to the ~filestorage Storage Group.
d. Run a Rescan.
e. Create a new VDM.
f. If using CIFS, make sure that the CIFS service is started.
5. On System A, create a new ReplicatorV2 session for a file system from System A to
System C. Put the destination file system on the VDM.
6. On System B:
a. Run either a reverse or failover operation on the VDM sync rep session.
b. Run the nas_syncrep_rr restore -vdm <Name> command.
c. Run the nas_syncrep_rr -free_intermediate_data command.
d. Run the nas_replicate -list command.
Results
The replication session should now appear on System B.
Output:
Configuration for PreserveRepv2 on remote array (cel_id = 1):
enabled
Configuration for PreserveRepv2 on local array: enabled
63
Procedure
1. To disable Preserve RepV2 session for VDM MetroSync, type the following command:
nas_syncrep_rr -config -disable
Output:
Disabling PreserveRepv2 on remote array
PreserveRepv2 disabled successfully
Disabling PreserveRepv2 on local array
PreserveRepv2 disabled successfully
Using the -all option may take some time for the operation to complete. Check the
operation's status by running the nas_task -info <task_id> command.
For example:
$ nas_task -info 20478
Task Id
= 20478
Celerra Network Server = filesim8116cs0
Task State
= Succeeded
Movers
=
Description
= Repv2
id=237_BB0050562C7C86_0000_242_BB0050562C7C82_0000 Restored.
Error 13422034949: Internal
error.
Restore Repv2 id=247_BB0050562C7C87_0000_252_BB0050562C7C85_0000
failed.
Originator
= root@cli.localhost
Start Time
= Tue May 12 22:27:28 EDT 201
End Time
= Tue May 12 22:27:43 EDT 201
Schedule
= n/a
Response Statuses
=
64
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
CHAPTER 7
Using VDM MetroSync Manager
Topics include:
l
l
l
l
l
l
l
l
l
l
l
l
l
65
Set a VDM Network to "Failed" when either All network interfaces fail or Any VDM
network interface fails. This setting applies for each VDM; failover occurs on one
or more sessions that have failed.
If VDM MetroSync Manager cannot failover, set the VDM failover retry count.
Configure SMTP (email) alerts. If you are not configuring SMTP, you must clear the
checkbox.
The VDM MetroSync service must be stopped before you can set a failover policy for a
VDM session.
You can configure a synchronous replication session to failover automatically when the
VDM MetroSync service is started, or you can manually run failover or reverse operations.
Monitored events on page 66 identifies the events that are return warnings, and the
events that cause failovers:
Table 2 Monitored events
Event
66
Warning
Failure
Description
Remote Ping
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Event
Warning
Failure
Description
SyncRep session
check
Procedure
1. On the main VDM MetroSync Manager window, select a VDM session for which you
want to set a policy.
2. Click the Edit (pencil) icon.
3. Set Failover Policy to either Auto or Manual.
Configure a synchronous replication session's failover policy
67
The Failover Policy is effective once the VDM MetroSync service is started. By default,
automatic failover is configured for all monitored VDM Sessions. To control certain
sessions manually, select specific VDM sessions and set each policy to Manual.
4. Click OK.
The VDM MetroSync service must be stopped before you can set a failover priority for
a synchronous replication session.
Procedure
1. On the main VDM MetroSync Manager window, select a VDM session for which you
want to set or change a failover priority.
2. Use the Up and Down arrows to move the session up or down on the priority list.
Alternatively, you can click the Edit (pencil) icon, and then select the Failover Priority
from the drop-down list.
3. Click OK.
68
EMC VNX Series 8.1 Using VDM MetroSync with VDM MetroSync Manager for Disaster Recovery
Procedure
1. Open the VDM MetroSync Manager software.
2. On the main window, click Start.
Results
The service that runs is called "EMC VDM MetroSync service" with a description of "EMC
VDM MetroSync Manager".
When the VDM MetroSync service is running, VDM MetroSync Manager monitors all
configured VDM services on the primary site. When VDM MetroSync Manager detects a
VDM service failure, the failed VDM sessions are failed over from the primary site to
the secondary site. If multiple VDMs fail, the VDMs are failed over based on the
configured priority. If the failover of a session fails, VDM MetroSync Manager
continues failing over lower-priority sessions before retrying.
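The ordering just described — fail over in priority order, and retry a failed attempt only after all lower-priority sessions have been attempted — can be sketched as follows. This is a hypothetical model: the session names and the try_failover callback are illustrative, not part of the product.

```python
# Hypothetical sketch of the failover ordering described above.
# failed_sessions: list of (priority, name), lower number = higher priority.
# try_failover(name) -> True on success. Returns sessions still failed.

def fail_over_in_priority_order(failed_sessions, try_failover):
    ordered = sorted(failed_sessions)                       # highest priority first
    retry = [s for s in ordered if not try_failover(s[1])]  # first pass
    return [s for s in retry if not try_failover(s[1])]     # retry after the pass

attempts = []
def flaky(name):
    # Illustrative callback: vdm_a fails on its first attempt only.
    attempts.append(name)
    return not (name == "vdm_a" and attempts.count("vdm_a") == 1)

remaining = fail_over_in_priority_order([(2, "vdm_b"), (1, "vdm_a")], flaky)
print(attempts)   # ['vdm_a', 'vdm_b', 'vdm_a'] - vdm_b attempted before vdm_a retry
print(remaining)  # []
```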
Note
If a local standby Data Mover is configured, the service delays the failover operation for
20 seconds to allow for the local standby to be initiated. If local failover is detected, then
the service waits an additional 300 seconds before issuing the next check to allow the
local failover to complete.
If a local standby Data Mover is not configured, these delays are skipped.
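The delays in the note can be modeled as a simple function. This is only a sketch — the function name is an assumption; the 20- and 300-second values come from the note above.

```python
# Hypothetical sketch of the standby-related delays described in the note:
# with a local standby Data Mover configured, wait 20 seconds before the
# failover operation, and 300 seconds before the next check once a local
# failover is detected; without a standby, both delays are skipped.

def next_check_delay(has_local_standby, local_failover_detected):
    """Return seconds to wait before the next failover check."""
    if not has_local_standby:
        return 0
    return 300 if local_failover_detected else 20

print(next_check_delay(True, False))   # 20
print(next_check_delay(True, True))    # 300
print(next_check_delay(False, True))   # 0
```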
Do not run a failover operation if the source system is accessible. If it is available, run a
reverse operation instead.
Procedure
1. On the main VDM MetroSync Manager window, verify that the Service State is
Stopped. If not, click Stop.
2. Select one or more VDMs that you want to fail over from your source array to the
destination array.
3. Click Failover.
After you finish
Once the failover is complete, a message window opens to show that the priority set for
each VDM has changed.
VDMs must be in the Reversed or Failed over state before you can run a Restore
operation.
If the VDM has been failed over, you must have resolved the issue with the source
system/VDM that caused the failover.
Procedure
1. On the main VDM MetroSync Manager window, verify that the Service State is
Stopped. If not, click Stop.
2. Select one or more VDMs with a Reversed or Failed over state that you want to move
from the destination VNX back to the source VNX.
3. Click Restore.
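The precondition above can be sketched as an eligibility check. The names here are hypothetical, not a product API; only the two eligible states come from the text.

```python
# Hypothetical sketch of the Restore precondition: only VDM sessions in
# the "Reversed" or "Failed over" state are eligible for a Restore.

RESTORABLE_STATES = {"Reversed", "Failed over"}

def restorable(sessions):
    """sessions: dict mapping VDM name -> current state."""
    return [name for name, state in sessions.items()
            if state in RESTORABLE_STATES]

print(restorable({"vdm_a": "Failed over", "vdm_b": "In sync"}))  # ['vdm_a']
```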
Collect logs
The Main Log window, shown at the bottom of the main VDM MetroSync Manager
window, contains information about the actions performed when using the VDM
MetroSync Manager. The Main Log window information is captured in the
vmsm.main.log file, found in the C:\<Program Files>\EMC\VMSM\log folder.
Note that the <Program Files> folder name depends on the version of Windows
being used.
To create a collection log to help with troubleshooting issues:
Procedure
1. On the main VDM MetroSync Manager window, select Help > Log Collection.
2. By default, the Log Collection tool creates a zip file and saves it to the Desktop.
Check status
Before you begin
You must have existing VDM MetroSync sessions to check their status.
Procedure
1. On the main VDM MetroSync Manager window, select Management > Check Status.
Status information appears in the VDM MetroSync Manager Main log, which displays
in the lower pane of the window. The earliest messages display at the beginning of
the log, and the latest messages display at the end of the log.
- Fatal, Error, and Warning messages are highlighted within the VDM MetroSync
  Manager Main log. To change the color for a highlight, or to add messages to be
  highlighted, click Highlighting.
- To display only the status messages with highlights, select Only Highlighted.
- To always display the most current log entries first, select Follow Tail. If you do
  not select Follow Tail, the oldest entries are displayed first.
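The viewing options just described can be sketched as a small filter. This is an illustrative model, not the product's code; the entry strings are made up, and only the highlighted severities and the two options come from the text above.

```python
# Hypothetical sketch of the Main log viewing options: highlight Fatal,
# Error, and Warning entries, optionally show only highlighted entries
# ("Only Highlighted"), and reverse the order ("Follow Tail").

HIGHLIGHTED = ("Fatal", "Error", "Warning")

def view_log(entries, only_highlighted=False, follow_tail=False):
    shown = [e for e in entries
             if not only_highlighted or e.startswith(HIGHLIGHTED)]
    return list(reversed(shown)) if follow_tail else shown

log = ["Info: session check ok", "Warning: remote ping slow",
       "Error: failover retry 1"]
print(view_log(log, only_highlighted=True))
# ['Warning: remote ping slow', 'Error: failover retry 1']
print(view_log(log, follow_tail=True)[0])
# Error: failover retry 1
```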
CHAPTER 8
Service checklists
Each topic in this chapter provides a checklist that you can use to determine
whether any issues need to be resolved in your VDM synchronous replication for
disaster recovery configuration.
Service setup

For each item in the following checklist, record yes or no:
For VNX Systems with dual Control Stations, an IP alias should have been
created and used to build the nas_cel connection. For VNX Systems with dual
Control Stations, is the IP alias reporting to be in the UP state? (Use the
nas_config -IPalias -list command to determine the status.)
Does the nas_cel connection from System 1 to System 2 exist? (Use the
nas_cel -list command to determine the status.)
Also verify that the nas_cel connection from System 2 to System 1 exists.
Note
For VNX Systems with dual Control Stations, the nas_cel connection should
have been configured using an IP alias. ID 0 is the self-identifier of the Control
Station and should not be mistaken for the connection between both systems.
Is the MirrorView connection enabled between both systems?
Are the Write Intent Logs configured on both systems?
Are the Clone Private LUNs configured on both systems?
Is the VDM Synchronous Replication service enabled? (Use the nas_cel -syncrep -list command to determine the status.)
Note
Under the syncrep column, the service should be showing 'enabled' for the
appropriate nas_cel ID.
Run a /nas/bin/nas_checkup on both VNX Systems. Are the VNX Systems in a
healthy state?
For each item in the following checklist, record yes or no:
Is the VDM replication session state either sync in progress or in sync? (Use
the nas_syncrep -list command to determine the status.)
Are all of the VDM network interfaces pingable? (Use the ping <interface
IP address> command to determine the status.)
Are all of the CIFS or NFS shares accessible? (Request that the customer
provide this information.)
Run a /nas/bin/nas_checkup on both VNX Systems. Are the VNX Systems in a
healthy state?
CHAPTER 9
Troubleshooting
Check the VDM MetroSync Manager's Main log (vmsm.main.log) for status
information.
Note
In the case of the syncrep log, the logging level is changed to DEBUG during a
reverse or failover operation, and it is changed back after the operation finishes.
Checks have been added to nas_checkup for synchronous replication health. If an error
or warning is detected during a scheduled nas_checkup run, it is included in a
single Checkup alert. The alert can be viewed through Unisphere.
Example output showing the synchronous replication service status for the local
system (id 0) and the remote system (id 3):

id        = 0
name      = NAS12
syncrep   = initialized
fsidrange = 1000,16000

id             = 3
name           = NAS33
syncrep        = enabled
fsidrange      = 16001,32000
type           = MirrorView
local_storage  = CKM00152402428,MirrorView=nasdb_428_0109_mv,CloneGroup=nasdb_0109_428_clone
remote_storage = CETV2151700109,MirrorView=nasdb_0109_428_mv,CloneGroup=nasdb_428_0109_clone
service_status =
    local_to_remote = out_of_sync
    remote_to_local = in_sync
If either the local or remote sessions are out_of_sync, note the MirrorView and
CloneGroup names.
2. Run the following command:
naviseccli -h spa mirror -sync -syncimage -name <mirror_name>
where <slot_num> is the slot number of the Data Mover. PXE reboot only needs to be
done for one Data Mover.
4. After rebooting the Data Mover, verify that the Data Mover is accessible by typing:
$ /nasmcd/sbin/getreason
6 - slot_0 control station ready
4 - slot_2 configured
4 - slot_3 configured
After one Data Mover gets into a configured or contacted state, the /nas folder can be
accessed.
5. Stop the PXE service. Type:
$ /nasmcd/sbin/t2pxe -e
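The readiness check in step 4 can be sketched as follows. This is an illustrative parser for getreason-style output, not an EMC tool; it reports whether any Data Mover slot has reached a state in which the /nas folder becomes accessible (configured or contacted).

```python
# Hypothetical sketch: scan getreason-style output for a Data Mover slot
# in the "configured" or "contacted" state.

def any_dm_ready(getreason_output):
    for line in getreason_output.splitlines():
        if "slot_" in line and ("configured" in line or "contacted" in line):
            return True
    return False

sample = """6 - slot_0 control station ready
4 - slot_2 configured
4 - slot_3 configured"""
print(any_dm_ready(sample))  # True
```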
How to recover when all Data Movers are panicking
Troubleshooting
Results
After the Control Station reboot, the NAS service should be working and all Data Movers
will be in the contacted state.
Error messages
All event, alert, and status messages provide detailed information and recommended
actions to help you troubleshoot the situation.
To view message details, use any of these methods:
- Unisphere software: Right-click an event, alert, or status message and select to
  view Event Details, Alert Details, or Status Details.
- CLI: Use this guide to locate information about messages that are in the
  earlier-release message format.
- Use the text from the error message's brief description or the message's ID to
  search the Knowledgebase on EMC Online Support. After logging in to EMC Online
  Support, locate the applicable Support by Product page, and search for the error
  message.
APPENDIX A
Changing FSID ranges
#fsidrange:<min_size>:
fsidrange:8192: