
Capabilities

Block-based journaling.
Synchronous and asynchronous replication.
Any-Point-In-Time recovery: every write is tracked and stored as a separate snapshot. Alternatively, groups of writes can be aggregated, according to configuration, in order to reduce storage space and network traffic.
Heterogeneous (multi-vendor) storage arrays via Fibre Channel.
WAN-based compression.
Tracking multiple volumes as a single consistency group.
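The trade-off between per-write journaling and aggregated snapshots can be illustrated with a small sketch (a hypothetical illustration, not RecoverPoint's code): coalescing consecutive writes within an interval keeps only the final contents of each block, reducing journal space and network traffic at the cost of fine-grained recovery points.

```python
# Sketch of write aggregation: consecutive writes within one interval are
# coalesced so only the final contents of each block are journaled.
# Hypothetical illustration only, not RecoverPoint's implementation.

def aggregate(writes):
    """Collapse a list of (block, data) writes into one snapshot delta."""
    delta = {}
    for block, data in writes:
        delta[block] = data  # a later write to the same block wins
    return delta

interval = [(0, b"a"), (1, b"x"), (0, b"b"), (0, b"c")]
print(aggregate(interval))  # {0: b'c', 1: b'x'}
# Four journal entries shrink to two, but the intermediate
# states of block 0 are no longer individually recoverable.
```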
Replication
RecoverPoint continuous data protection (CDP) tracks changes to data at the block level and journals these changes.[1] The journal then allows rolling data back to a previous point in time in order to view the drive contents as they were before a given data corruption. CDP can journal each write individually, enabling "Any-Point-In-Time" snapshots, or it can be configured to combine consecutive writes in order to reduce journal space and improve bandwidth. CDP works only over a SAN: the RecoverPoint appliances need to be zoned and masked with the master, replica, and journal LUNs.
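The journaling mechanism can be sketched minimally (a hypothetical illustration, not RecoverPoint's actual on-disk format): each write is appended to a journal with a sequence number, and any past point in time is reconstructed by replaying the journal up to that point.

```python
# Minimal sketch of block-level continuous data protection (CDP).
# Hypothetical illustration only; not RecoverPoint's actual implementation.

class CDPJournal:
    def __init__(self):
        self.entries = []  # (sequence, block_number, data) in write order
        self.seq = 0

    def record_write(self, block, data):
        """Journal every write individually, enabling any-point-in-time recovery."""
        self.seq += 1
        self.entries.append((self.seq, block, data))
        return self.seq

    def image_at(self, point_in_time):
        """Reconstruct the volume contents as of a given sequence number."""
        volume = {}
        for seq, block, data in self.entries:
            if seq > point_in_time:
                break
            volume[block] = data  # later writes overwrite earlier ones
        return volume

journal = CDPJournal()
journal.record_write(0, b"v1")
t = journal.record_write(1, b"aa")
journal.record_write(0, b"v2")   # suppose corruption occurs with this write

# Roll back to the state just before the corrupting write:
print(journal.image_at(t))  # {0: b'v1', 1: b'aa'}
```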
RecoverPoint continuous remote replication (CRR) maintains a replica at a remote site. Such a setup requires RecoverPoint appliance clusters at both the local and the remote site. These two clusters communicate over either Fibre Channel or IP, and RecoverPoint applies compression and de-duplication in order to reduce WAN traffic. As of RecoverPoint 3.4, only one remote site is supported. CRR can be combined with CDP to provide concurrent local and remote (CLR) replication.
The term consistency group (CG) refers to grouping several LUNs together in order to ensure write-order consistency across several volumes. This is used, for example, with a database that stores its data and journal on different logical drives: these logical drives must be kept in sync on the replica if data consistency is to be preserved. Other examples are multi-volume file systems such as ZFS or Windows' Dynamic Disks. RecoverPoint 3.4 supports up to 128 CGs and 2048 LUNs.[2] Each LUN can be up to 2 TB in size, and the total supported capacity can be up to 150 TB.
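Write-order consistency across a group of volumes can be sketched as follows (a hypothetical illustration, not the product's algorithm): writes to all LUNs in the group share a single sequence counter, so replaying the combined log up to any sequence number yields a mutually consistent image of every volume.

```python
# Sketch of a consistency group: one global write order across several volumes.
# Hypothetical illustration; not RecoverPoint's actual implementation.

class ConsistencyGroup:
    def __init__(self, lun_names):
        self.log = []               # (sequence, lun, block, data) for all LUNs
        self.seq = 0
        self.luns = list(lun_names)

    def write(self, lun, block, data):
        assert lun in self.luns
        self.seq += 1
        self.log.append((self.seq, lun, block, data))
        return self.seq

    def images_at(self, point):
        """Consistent point-in-time image of *all* LUNs in the group."""
        images = {lun: {} for lun in self.luns}
        for seq, lun, block, data in self.log:
            if seq > point:
                break
            images[lun][block] = data
        return images

# A database that writes its redo log before the data page:
cg = ConsistencyGroup(["data", "journal"])
t1 = cg.write("journal", 0, b"begin txn")
t2 = cg.write("data", 7, b"row")

# Any cut of the shared sequence keeps the LUNs mutually consistent:
# at t1 the journal entry exists but the data write does not -- never the reverse.
print(cg.images_at(t1))  # {'data': {}, 'journal': {0: b'begin txn'}}
```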
Write splitting
Like other continuous data protection products, and unlike backup products, RecoverPoint needs to obtain a copy of every write in order to track data changes. RecoverPoint supports three methods of write splitting: host-based, fabric-based, and in the storage array. EMC advertises RecoverPoint as heterogeneous because of its support for multi-vendor server, network, and storage environments.[3]
Host-based write splitting is done by a device driver installed on the server that accesses the storage volumes. A host-based splitter allows replication of non-EMC storage arrays; however, splitters are not available for all operating systems and versions.
Fabric-based splitters are available for Brocade SAN switches and for Cisco SANTap, which requires investment in additional switch blades. This configuration allows splitting from all operating systems regardless of version and is agnostic to the storage array vendor.
Storage array splitters are supported only on a subset of EMC arrays. This method allows write splitting from all operating systems and does not require special SAN switching hardware. RecoverPoint/SE is a slimmed-down version that supports only this type of splitter.
Architecture
Each site requires installation of a cluster composed of two to eight RecoverPoint appliances, which work together as a high-availability cluster. Each appliance is connected via Fibre Channel to the SAN and must be zoned together with both the server (SCSI initiator) and the storage (SCSI target). Each appliance must also be connected to an IP network for management.
Asynchronous replication takes place over either Fibre Channel or standard IP. With RecoverPoint 4.0 and later, synchronous replication can also take place over FC or IP.
One or more host-, fabric-, or array-based splitters duplicate write traffic to both the storage and the appliances.
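The splitter's role can be sketched abstractly (hypothetical names, not any vendor's driver code): every write issued by the host is sent both to the production storage and to the appliance, so the appliance sees a copy of every change.

```python
# Abstract sketch of write splitting: each host write is duplicated to the
# production storage and to the replication appliance.
# Hypothetical illustration only; not RecoverPoint's implementation.

class WriteSplitter:
    """Sits in the I/O path (host driver, fabric blade, or array firmware)."""

    def __init__(self, storage, appliance):
        self.storage = storage      # primary SCSI target (block -> data)
        self.appliance = appliance  # appliance-side journal of write copies

    def write(self, block, data):
        self.storage[block] = data            # normal write to the array
        self.appliance.append((block, data))  # copy of the write for journaling

storage = {}
appliance_journal = []
splitter = WriteSplitter(storage, appliance_journal)
splitter.write(3, b"hello")

print(storage)            # {3: b'hello'}
print(appliance_journal)  # [(3, b'hello')]
```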
When configuring a consistency group, one must select the source LUNs whose data will be monitored, target LUNs of the same size, and journal LUNs. The management GUI indicates when the target LUNs are identical to the source LUNs and allows selecting an older timestamp in order to roll the target LUNs back to a historical state.
Integration with other products
Besides integration with EMC products such as AppSync, ViPR, Replication Manager, Control Center and Unisphere, and the CLARiiON, VNX, Symmetrix and VPLEX storage arrays, RecoverPoint integrates with the following products:
Integration with VMware vSphere, VMware Site Recovery Manager, and Microsoft Hyper-V allows protection to be specified per virtual machine instead of per volume available to the hypervisor.
Integration with Microsoft Shadow Copy, Exchange, SQL Server, and Oracle Database Server allows RecoverPoint to temporarily pause host writes in order to take consistent application-specific snapshots.
APIs and CLIs allow customers to integrate RecoverPoint with custom internal software.[1]