
Remote Replication Part 3

Jan 2011

Welcome to Module 8: Remote Replication Part 3.

This is module 8 of a 10-part series. Module 8 covers VNX and VNXe Remote Replication Part 3: VNX File Replicator.


The objectives for this module are shown here. Please take a moment to read them.


The objectives for this lesson are shown here. Please take a moment to read them.


File Replicator is an IP-based replication solution that produces a read-only, point-in-time copy of a source (production) file system. The VNX Replication service periodically updates this copy, making it consistent with the production file system. Replicator uses internal checkpoints to ensure availability of the most recent point-in-time copy. These internal checkpoints are based on SnapSure technology. This read-only replica can be used by an X-Blade in the same VNX cabinet (local replication), or an X-Blade at a remote site (remote replication) for content distribution, backup, and application testing. In the event that the primary site becomes unavailable for processing, File Replicator enables you to failover to the remote site for production. When the primary site becomes available, you can use File Replicator to synchronize the primary site with the remote site, and then failback the primary site for production. You can also use the failover/reverse features to perform maintenance at the primary site or testing at the remote site.

When a replication session is first started, a full backup is performed. After initial synchronization, Replicator only sends changed data over IP.


Replicator is part of the VNX and VNXe Remote Protection Suite for VNX Arrays. Note that the VNX5100 does not support FAST VP, VEE, FLR, Replicator, or SnapSure.


The following is the key terminology used in File Replication.

Source/Destination objects: Objects that can be replicated, such as file systems, Vblades, or iSCSI LUNs.
X-Blade interconnect: The communication path between a given X-Blade pair located on the same VNX cabinet or different cabinets.
SavVol: Where SnapSure writes checkpoint data.
Full copy: The copy of the source object that is sent to the destination object when a replication session is started, or when a common base is not found.
Delta: Block changes to the source object, calculated by comparing the newest, currently marked internal checkpoint (point-in-time snapshot) against the previously replicated internal checkpoint.
Differential copy: The difference between the common base and the object that needs to be copied. Only the delta between the common base and the source object is transferred to the destination.
Replication failover: The process that changes the destination file system from read-only to read/write and stops the transmission of replicated data. The source file system, if available, becomes read-only.
Replication reversal: The process of reversing the direction of replication. The source file system becomes read-only and the destination file system becomes read/write.
Replication switchover: When the source is functioning and available, you can switch over the specified replication session to synchronize the source and destination without data loss.
Bandwidth/throttle schedule: A list of time periods (days and hours), bandwidth values (in KB/s), or both that control the amount of bandwidth available to all Replicator sessions.


Some of the main uses of Replicator are:

Data Recovery: A duplicate copy of production file systems can be replicated to a remote site, where it can be brought online with little downtime in case of a major disaster.
Backup: A duplicate copy of the data can be mounted on a dedicated X-Blade for backups only. This offloads backup processing from the production X-Blade.
Decision Making: File systems can be replicated so that multiple sets of data can be used for different data sets when mining for information on trends, etc.
Software Testing: Before upgrading software, a duplicate copy can be made and the upgrade tested before impacting production file systems with unknown results. Production performance is also not impacted if testing is performed on another X-Blade.
Data Migrations: When migrating to a new site, data can be replicated to the new location and brought online with minimal downtime.


A license must be purchased, and the production file system must be mounted, before replication can start. To unmount a file system that is being replicated, Replicator must first be stopped. Checkpoints must have enough SavVol storage available for use.


The objectives for this lesson are shown here. Please take a moment to read them.


Before creating a replication session for remote replication, you must establish the trusted relationship between the source and destination VNX Network Servers in your configuration. IP network connectivity must exist between the Control Stations of both VNX systems. The source and destination Control Station system times must be within 10 minutes of each other. The trust link is configured from the CLI on both VNX systems using the nas_cel command or from the Unisphere GUI. The trust link uses HTTPS to secure the communications over the network between the VNXs.
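As a sketch, the trust relationship might be configured from the CLI as follows. The system names, IP addresses, and passphrase below are illustrative placeholders, not values from this course; the same passphrase must be used on both sides.

```shell
# On the source Control Station: register the destination VNX
# (cel_remote and 192.168.2.100 are example values)
nas_cel -create cel_remote -ip 192.168.2.100 -passphrase nasadmin

# On the destination Control Station: register the source VNX
# using the SAME passphrase
nas_cel -create cel_local -ip 192.168.1.100 -passphrase nasadmin

# Verify the trust relationship
nas_cel -list
```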


Each File Replicator V2 session must use an established X-Blade interconnect, which defines the communications path between a given X-Blade pair located on the same cabinet or different cabinets. Before you can create a replication session, both sides of an X-Blade interconnect must be established to ensure communication between the X-Blade pair that will represent your source and destination. Interconnect for remote replication is created between a local X-Blade and a remote X-Blade on another system. You must first create the interconnect on the local side and then on the destination side, before you can successfully create a remote replication session. Each X-Blade, by default, has a loopback interconnect which cannot be removed or modified. The loopback interconnect is used for loopback replication sessions. Only one interconnect can be established between a given X-Blade pair. Each physical link can have multiple IP addresses. An interconnect cannot be deleted if it is used by a replication or copy session. Also, the interface chosen for a session cannot be changed while in use. The nas_cel interconnect command is used to create the X-Blade interconnect.
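A hedged sketch of creating both sides of an X-Blade interconnect with nas_cel follows. The interconnect names, X-Blade numbers, and IP addresses are illustrative; check the nas_cel man page for the exact option syntax on your release.

```shell
# On the source system: local side of the interconnect
nas_cel -interconnect -create s2_to_remote_s2 \
  -source_server server_2 \
  -destination_system cel_remote \
  -destination_server server_2 \
  -source_interfaces ip=192.168.1.50 \
  -destination_interfaces ip=192.168.2.50

# On the destination system: the peer side, mirroring the above
nas_cel -interconnect -create s2_to_local_s2 \
  -source_server server_2 \
  -destination_system cel_local \
  -destination_server server_2 \
  -source_interfaces ip=192.168.2.50 \
  -destination_interfaces ip=192.168.1.50

# Validate that both sides of the interconnect are established
nas_cel -interconnect -list
```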


Loopback replication of a source object occurs on the same X-Blade in the cabinet. In other words, the source X-Blade and destination X-Blade are the same. Communication is established by using the X-Blade loopback interconnect, established automatically for each X-Blade in the cabinet. The internal loopback of 127.0.0.1 is used in communicating control signals back and forth. Since an internal IP address is used, there is no need to involve the network stack for data transfer.


The basic steps in a Loopback replication session are as follows:

1. Network clients read and write to the source objects (file system, Vblade, or iSCSI LUNs) through an X-Blade without interruption during the replication process.
2. The loopback interconnect establishes the path between the source and destination after a replication session is started by using the nas_replicate command.
3. Replication creates two checkpoints for the source object.
4. For file system and Vblade replication, the destination object is created automatically (same size as the source, and read-only) as long as the specified storage is available. An iSCSI destination must already exist and be the same size or larger. If the replication session identifies an existing destination object, the destination object is reused.
5. Replication creates two checkpoints for the destination object.
6. A full (initial) copy is performed to transfer the source data to the destination.
7. At the destination, the first checkpoint (ckpt 1) of the destination object is refreshed to establish the common base with the source checkpoint.
8. The second checkpoint at the source (ckpt 2) is refreshed to establish the difference (delta) between the original source checkpoint and the latest point-in-time copy, which is transferred to the destination. This transfer is called a differential copy.
9. Once the data transfer is complete, replication uses the latest internal checkpoint taken on the destination to establish a new common base (ckpt 2). The latest internal checkpoint contains the same data as the internal checkpoint on the source (ckpt 2) that was marked for replication.
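The steps above are driven by a single session-creation command. A minimal, illustrative loopback example follows; the session name, file system name, pool name, and sync interval are placeholders.

```shell
# Create a loopback replication session for file system src_fs;
# the built-in "loopback" interconnect keeps both ends on the
# same X-Blade, and the destination file system is auto-created
# in the named storage pool
nas_replicate -create loopback_rep1 \
  -source -fs src_fs \
  -destination -pool clar_r5_performance \
  -interconnect loopback \
  -max_time_out_of_sync 10

# Check session status
nas_replicate -list
```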


Local replication occurs between two X-Blades in the same VNX cabinet. Both X-Blades must be configured to communicate with one another by using an X-Blade interconnect. After communication is established, a local replication session can be set up to produce a read-only copy of the source object for use by a different X-Blade in the same VNX cabinet. For file system replication, the source and destination file systems are stored on separate volumes.


Replication occurs between a local X-Blade and an X-Blade on a remote VNX. Both VNX systems must be configured to communicate with one another by using a common passphrase, and both X-Blade interconnects. After communication is established, a remote replication session can be set up to create and periodically update a copy of the source object at a remote destination site. The initial copy of the source file system can either be done over an IP network or by using the tape transport method. After the initial copy, replication transfers the changes made to the local source object to a remote destination object over the IP network. These transfers are automatic and are based on definable replication session properties and update policy.
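A remote session is created the same way as a loopback session, but it names a remote interconnect so the destination file system is built on the other VNX. An illustrative sketch (all names are placeholders):

```shell
# Create a remote replication session over a previously
# established interconnect; the destination file system is
# created automatically in the named pool on the remote VNX
nas_replicate -create remote_rep1 \
  -source -fs src_fs \
  -destination -pool clar_r5_performance \
  -interconnect s2_to_remote_s2 \
  -max_time_out_of_sync 10
```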


A replication failover operation sets the replicated objects on the destination VNX to read/write so that data access can resume. The failover operation is performed only on the destination VNX. The execution of the failover operation is asynchronous and will result in data loss if all the data is not transferred to the destination site prior to issuing the failover. The failover process starts when the source side of replication becomes unavailable and the normal replication process stops. This could be due to a disaster at the source side of replication, a power outage, or a network outage. Next, the replication failover command is issued to the destination VNX. The replicated objects on the destination VNX are changed from being mounted read-only to read/write. If the source VNX is still available, its replicated objects will be mounted as read-only. Data access resumes from the destination side VNX. Its data will be consistent with the last successful data transfer from the source.
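As a sketch, the failover is issued from the destination Control Station; the session name below is a placeholder.

```shell
# Run on the DESTINATION VNX only: makes the destination
# objects read/write; any data not yet transferred is lost
nas_replicate -failover remote_rep1
```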


In this replication failover scenario, the replication source at Site A has experienced a disaster and is unavailable. In response to this potential disaster scenario, you can perform a failover of the replication sessions to make the replicated objects available at Site B. The execution of the failover operation is asynchronous and will result in data loss if all the data is not transferred to the destination site prior to issuing the failover. You perform a failover from the destination VNX only. During the failover process, the replicated objects on the destination VNX at Site B are mounted read/write to provide data access to the clients.


You can perform a reverse operation from the source side of one or more file system or Vblade replication sessions without data loss. This operation reverses the direction of the replication session, making the destination read/write and the source read-only. The reverse operation does the following:

1. Synchronizes the destination object with the source
2. Mounts the source object as read-only
3. Stops replication
4. Mounts the destination as read/write
5. Starts replication in the reverse direction from a differential copy, with the same configuration parameters

Make sure that clients can access the source site before reversing the direction, in order to ensure immediate data access.
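The reverse operation can be sketched as a single command from the source side; the session name is a placeholder.

```shell
# Run on the SOURCE VNX: synchronizes, swaps the source and
# destination roles, and restarts replication in the opposite
# direction using a differential copy
nas_replicate -reverse remote_rep1
```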


In this replication reversal scenario, the destination VNX at Site B is operating in replication failover mode because of a disaster rendering the source VNX at Site A inaccessible. The replicated objects at the destination VNX at Site B are mounted read/write and are providing data access to clients. There are no replication sessions between the two sites. When the VNX at Site A is again available, the replication can be started between the two sites using the replication start reversal operation with the overwrite_destination option. This starts the replication with the Site B VNX as a replication source and the VNX at Site A as a replication destination.


For test or migration purposes, when the source is functioning and available, you can switch over the specified replication session to perform synchronization of the source and destination without data loss. You can perform this operation only on the source VNX. The switchover process works as follows:

1. Synchronizes the destination object with the source.
2. Stops the replication.
3. Mounts the source object as read-only.
4. Mounts the destination object as read/write so that it can act as the new source object.

Note that unlike a reverse operation, a switchover operation does not start the replication session.
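A minimal sketch of the switchover, issued from the source side (session name is a placeholder):

```shell
# Run on the SOURCE VNX: synchronizes and swaps roles, but
# does NOT restart the replication session afterwards
nas_replicate -switchover remote_rep1
```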


An active replication session can be temporarily stopped, leaving replication in a condition that allows it to be started again. The Stop function allows you to temporarily stop a replication session, perform some action, and then start the replication session again by using a differential copy rather than a full data copy. A session may be stopped to mount the replication source or destination file system on a different X-Blade, or to change the IP addresses or interfaces the interconnect is using.

To stop a local or remote replication session from the source VNX, select:

Both: Stops both the source and destination sides of the session, when the destination is available.
Source: Stops only the replication session on the source and ignores the other side of the replication relationship.

To stop a local or remote replication session from the destination VNX, select:

Destination: Stops the replication session on the destination side only.

For a Loopback replication session, the stop command automatically stops both sides of the session.
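The stop modes described above map to a CLI option. An illustrative sketch (session name is a placeholder; verify the option spelling against the nas_replicate man page for your release):

```shell
# From the source VNX: stop both sides (destination reachable)
nas_replicate -stop remote_rep1 -mode both

# From the source VNX: stop the source side only
nas_replicate -stop remote_rep1 -mode source

# Later, restart the session; a differential (not full) copy
# is used to resynchronize
nas_replicate -start remote_rep1
```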


A cascade configuration is one where a destination serves as the source for another replication session. Cascade is supported to one level. Each replication session is configured individually and is independent of the others. A cascade configuration involves two replication sessions. The first session runs replication from the source to the first destination. The second session runs replication from the first destination, serving as the source, to the second destination. To set up a cascade configuration, create the first session on the source side, and then create the second session at the destination side using the name of the destination object as the source.


Shown here is a basic cascade configuration for file system replication. The first session has two checkpoints for the source and two for the destination object. For the second replication session, two more checkpoints are created for the new source object, formerly the destination in session one, and two more are created for the second destination object. The data on the session two destination object can be quite different depending on the time-out-of-sync values for both sessions.


In a one-to-many configuration, multiple replication sessions can be set up to different destination objects from one source object. A maximum of four destinations can be associated with one source object. This type of configuration requires a separate replication session for each destination. These sessions are configured individually and are independent of one another. If a replication session that is involved in a one-to-many configuration is reversed, the source side goes into cascade mode. The destination side from one of the one-to-many sessions becomes the source and the original source side becomes the destination. Next, the source cascades out to the other destination sessions. Replication types that can be used with this configuration are Loopback, Local, and Remote.


Displayed here is a basic one-to-many configuration for ongoing replication of a given source file system to multiple destinations (up to four). This type of configuration requires a separate replication session for each destination. Each replication session generates two internal checkpoints on the source.


The objectives for this lesson are shown here. Please take a moment to read them.


Replicator has policies to control how often the destination object is refreshed by using the max_time_out_of_sync setting, and to control throttle bandwidth by specifying bandwidth limits on specific days, specific hours, or both. Replicator can also set the amount of data to be sent across the IP network before an acknowledgement is required from the receiving side. The TCP window is automatically sized. All of these policies can be established for one replication session by using Unisphere or the CLI. The update policy set for a replication session determines how frequently the destination is updated with source changes. You can define a max_time_out_of_sync value for a replication session, or you can perform on-demand, manual updates. The max_time_out_of_sync value represents the elapsed time window within which the system attempts to keep the data on the destination synchronized with the data on the source. The destination could be updated sooner than this value. The source write rate, network link speed, and the interconnect throttle, when set, determine when and how often data is sent.

When you change a replication session from a max_time_out_of_sync policy to a manual refresh, the system suspends the session. To resume the session, reset the max_time_out_of_sync value.
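A hedged sketch of adjusting the update policy from the CLI (session name and interval are placeholders):

```shell
# Tighten the update policy to a 5-minute out-of-sync window
nas_replicate -modify remote_rep1 -max_time_out_of_sync 5

# Alternatively, trigger an on-demand manual update
nas_replicate -refresh remote_rep1
```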


As mentioned on the previous slide, the bandwidth schedule controls throttle bandwidth by specifying bandwidth limits on specific periods of time. A bandwidth schedule allocates the interconnect bandwidth used on specific days, specific times, or both, instead of using all available bandwidth at all times for the interconnect. For example, during work hours 40% of the bandwidth can be allocated to Replicator and then changed to 100% during off hours. Each side of an X-Blade interconnect can define a bandwidth schedule for all replication sessions using that interconnect. By default, an interconnect provides all available bandwidth at all times for the interconnect.
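The same throttling can be expressed from the CLI by modifying the interconnect. A hedged sketch (the exact schedule-string format varies by release; here weekdays 07:00 to 18:00 are capped at 10000 KB/s):

```shell
# Apply a bandwidth schedule to the interconnect:
# Monday through Friday, 07:00-18:00, limited to 10000 KB/s
nas_cel -interconnect -modify s2_to_remote_s2 \
  -bandwidth MoTuWeThFr07:00-18:00/10000
```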


To add a bandwidth schedule to a particular X-Blade interconnect, simply click the Add button. Enter the amount of bandwidth desired and select the days and the time period of the schedule. Displayed here is a bandwidth schedule of 10,000 KB/s from 7:00 in the morning to 6:00 at night on all weekdays.


If a second bandwidth schedule is needed, click the Add button once again and enter the new schedule.


When there is more than one bandwidth schedule present, the top schedule in the list has priority over the other schedules. Priority moves from top to bottom in that order. To change the priority of a bandwidth schedule, check the Select checkbox and click the Up or Down button. To delete a bandwidth schedule, click the Remove button.


The maximum number of replication sessions per X-Blade is based on network configuration, such as the WAN network bandwidth and the production I/O workload. This number is also affected by running both SnapSure and Replicator on the same X-Blade. Both of these applications share the available memory on an X-Blade. Each internal checkpoint uses a file system ID, which will affect the file system limit even though these internal checkpoints do not count toward the user checkpoint limit. For all configurations, there is an upper limit to the number of replications allowed per X-Blade. The maximum number of sessions per X-Blade is 1024 (the total number of configured replication sessions and copy sessions). The maximum number of active sessions is 256 per X-Blade. If you plan to run Loopback replications, keep in mind that each Loopback replication counts as two replication sessions since each session encapsulates both outgoing and incoming replications. Memory and CPU usage should also be monitored.


Carefully evaluate the infrastructure of the destination site by reviewing items such as:

Subnet addresses
Unicode configuration
Availability of name resolution services; for example, WINS, DNS, and NIS
Availability of WINS/DC in the right Microsoft Windows 2003/2008 domain
Share names
Availability of user mapping, such as Usermapper

The CIFS environment requires more preparation to set up a remote configuration than the Network File System (NFS) environment, because of the higher demands on its infrastructure. For example, authentication is handled by the domain controller. For the CIFS environment, you must perform mappings between the usernames/groups and UIDs/GIDs with Usermapper or local group/password files on the X-Blades.

The destination file system can only be mounted on one X-Blade, even though it is read-only. At the application level, as well as the operating system level, some applications may have limitations on the read-only destination file system due to caching and locking.
If you are planning to enable international Unicode character sets on your source and destination sites, you must first set up translation files on both sites before starting Unicode conversion on the source site. Using International Character Sets with VNX covers this consideration. The VNX FileMover feature supports replicated file systems; this is described in Using VNX FileMover. The VNX File-Level Retention capability also supports replicated file systems; Using File-Level Retention on VNX provides additional configuration information.


The objectives for this lesson are shown here. Please take a moment to read them.


File Replicator includes support for failover and reversal, as well as Virtual X-Blade support. Combining these features provides an asynchronous data recovery solution for CIFS servers and CIFS file systems. In a CIFS environment, in order to successfully access file systems on a remote secondary site, you must replicate the entire CIFS working environment, including local groups, user mapping information, Kerberos, DNS, shares, and event logs. You must replicate the production file system's attributes, access the file system through the same UNC path, and find the previous CIFS server's attributes on the secondary file system. An asynchronous data recovery solution is possible because X-Blade clients can continue accessing data in the event of a failover from the primary site to the secondary site.


A connection must be established between the Control Stations to enable replication. Both the primary and secondary sites must have interfaces configured with the same name. Use the server_ifconfig command to configure interfaces. DNS resolution is required on both the primary and secondary sites. The time must be synchronized between the two sites and the domain controllers at each site. Some method of mapping Windows users to UIDs and GIDs is also required; for example, Internal Usermapper. File systems must be prepared. First, determine the space required at both the primary and secondary sites. Then create volumes and file systems to accommodate the size requirements. Next, Vblades are created. A primary Vblade is created in the loaded state. The secondary Vblade is created in a mounted state. This read-only state is used on the secondary side when replicating a Vblade. It cannot be actively managed, and receives updates from the primary during replication. Finally, create data file systems on the primary and secondary sites and mount the file systems to the Vblade. CIFS servers are then created within the Vblade. The CIFS servers are added and joined to the Active Directory domain. Shares can then be created.
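An illustrative server_ifconfig invocation for creating a matching interface follows. The device, interface name, and addresses are placeholders; both sites would configure an interface with the same name.

```shell
# Create interface "repl_int" on X-Blade server_2; the
# secondary site must configure an interface with the SAME name
server_ifconfig server_2 -create -Device cge0 -name repl_int \
  -protocol IP 192.168.1.50 255.255.255.0 192.168.1.255
```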


This example scenario assumes an operational CIFS/Vblade environment on a source VNX cel7. The example screens illustrate that to replicate this environment, two replication sessions need to be created. First, one for the Vblade and a second replication session for the associated file system. When the Vblade replication session is created, a Vblade gets created on the destination VNX cel9. When the file system replication is created, the Vblade that was created on the destination is selected as the Vblade to be associated with the file system.


You can now replicate the CIFS environment (Vblade) from the primary to the secondary site.

In the steady-state CIFS environment, all data is replicated from the primary to the secondary, and all the daily management changes are automatically replicated. Successful access to CIFS servers, when failed over, depends on the customer taking adequate actions to maintain DNS, Active Directory, user mappings, and network support of the data recovery site. VNX depends on those components for successful failover.
Monitor the X-Blade, file systems, and the replication process.


These are the key points covered in this module. Please take a moment to review them.


The next module in this series will be Module 9: Application Protection Part 1. Within this module you will learn about Replication Manager.


Let us take a moment to discuss other development and productivity offerings that are exclusively available to you as an EMC Velocity Partner. These tools are designed to help you understand, pitch, and sell EMC Products in your consultative conversations with your customers. You can find these tools on Powerlink under the Home > Training > Tools, Launches, & Live Training > Productivity Tools section. After you have completed reading, click the Next Slide button.


This concludes the module on VNX and VNXe Remote Replication Part 3: VNX File Replicator. Thank you for taking the time to review it.
