Table of Contents
Chapter 1: About this Document ............................................................................................... 4
Overview ............................................................................................................................ 4
Audience and purpose ....................................................................................................... 5
Business challenge ............................................................................................................ 6
Technology solution ........................................................................................................... 6
Objectives .......................................................................................................................... 7
Reference Architecture ...................................................................................................... 8
Validated environment profile ............................................................................................. 9
Hardware and software resources ..................................................................................... 9
Prerequisites and supporting documentation ................................................................... 11
Terminology ..................................................................................................................... 12
Chapter 2: Use Case Components ......................................................................................... 13
Chapter 3: Storage Design ...................................................................................................... 17
Overview .......................................................................................................................... 17
CLARiiON storage design and configuration ................................................................... 18
Data Domain .................................................................................................................... 23
SAN topology ................................................................................................................... 25
Chapter 4: Oracle Database Design ....................................................................................... 28
Overview .......................................................................................................................... 28
Chapter 5: Installation and Configuration ................................................................................ 33
Overview .......................................................................................................................... 33
Navisphere ....................................................................................................................... 34
PowerPath ........................................................................................................................ 37
Install Oracle clusterware ................................................................................................. 41
Data Domain .................................................................................................................... 47
NetWorker ........................................................................................................................ 51
Multiplexing ...................................................................................................................... 57
Chapter 6: Testing and Validation ........................................................................................... 59
Overview .......................................................................................................................... 59
Section A: Test results summary and resulting recommendations .................................. 60
Chapter 7: Conclusion ............................................................................................................. 76
Overview .......................................................................................................................... 76
Appendix A: Scripts ................................................................................................................. 78
EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker,
and Oracle Recovery Manager using Fibre Channel Proven Solution Guide
3
This Proven Solution Guide summarizes a series of best practices that were
discovered, validated, or otherwise encountered during the validation of the EMC
Data Domain Backup and Recovery for an Oracle 11g OLTP environment enabled
by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery
Manager.
EMC's commitment to consistently maintain and improve quality is led by the Total
Customer Experience (TCE) program, which is driven by Six Sigma methodologies.
As a result, EMC has built Customer Integration Labs in its Global Solutions Centers
to reflect real-world deployments in which TCE use cases are developed and
executed. These use cases provide EMC with an insight into the challenges currently
facing its customers.
Use case
definition
A use case reflects a defined set of tests that validates the reference architecture for
a customer environment. This validated architecture can then be used as a reference
point for a Proven Solution.
Contents - See Page
Business challenge - 6
Technology solution - 6
Objectives - 7
Reference Architecture - 8
Terminology - 12
Purpose
Business challenge
Overview
Today's IT organizations are being challenged by the business to solve pain points
around the backup and recovery of business-critical data.
Technology solution
Overview
This solution describes a backup and recovery environment of an Oracle 11g OLTP
database. The database was deployed on a CLARiiON CX4-960 using Oracle ASM
in a two-node RAC configuration. Backup and recovery was implemented using
RMAN, EMC NetWorker, and an EMC Data Domain DD880 appliance.
The backup process was offloaded to a NetWorker proxy node using Navisphere
SnapView clones to minimize the impact to the production environment. The
DD880 appliance enabled an 84 percent saving on the storage capacity required by
the backup process. See the "Storage saving % after 5 weeks of backup cycle"
chart in the Testing and Validation chapter.
The following table describes the key components and their configuration details
within this environment.
Component: CLARiiON CX4-960
Configuration: Four BE 4 Gb FC ports and eight FE 4 Gb FC ports per storage
processor; nine DAEs with five 146 GB and 130 x 300 GB disk drives
Software: FLARE 04.29.000.5.003

Component: Data Domain
Configuration: DD880
Software: DDOS 4.7.1.3

Component: Oracle 11g
Configuration: Database/Cluster/ASM versions
Software: 11.1.0.7

Component: NetWorker
Configuration: NetWorker Management Console, storage nodes, clients
Software: NetWorker 7.6, NMO 5.0
Objectives
This document provides guidelines on how to configure and set up an Oracle 11g
OLTP database with Data Domain deduplication storage systems. The solution
demonstrates the benefits of deduplication in an Oracle backup environment. The
backup schedule used level 0 (full backups), level 1 (incremental differential
backups), and Oracle Block Change Tracking (BCT).
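As an illustrative sketch of that schedule (the tracking-file location is an assumed placeholder, not taken from the validated environment), BCT is enabled once from SQL*Plus and the level 0 and level 1 backups are then driven from RMAN:

```sql
-- Enable Block Change Tracking so level 1 backups read only changed blocks
-- (the ASM path below is an illustrative placeholder)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA/bct/change_tracking.f';

-- Then, from the RMAN prompt:
-- BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- periodic full backup
-- BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- differential incremental backup
```

With BCT enabled, a level 1 backup scans only the blocks recorded in the tracking file rather than the entire datafile set, which is what makes the daily incremental window practical against a 1 TB database.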
This document is not intended to be a comprehensive guide to every aspect of an
enterprise Oracle 11g solution. The information to be obtained and reported from
this document is described in the following list:
Reference Architecture
Corresponding Reference Architecture

Reference Architecture diagram
The following diagram depicts the overall physical architecture of the use case.
The use case was validated with the following environment profile.
Profile characteristic: Value
Benchmark profile: OLTP
Database characteristic: Swingbench OrderEntry - TPC-C-like benchmark
Response time: < 10 ms
Read / Write ratio: 70 / 30
Database scale: A Swingbench load that keeps the system running within agreed
performance limits
Size of databases: 1 TB
Number of databases: 1
Array drives, size and speed: 300 GB 15k rpm
Equipment: Storage array
Quantity: 1
Configuration: CLARiiON CX4-960, nine DAEs

Additional equipment: SAN, Navisphere management server, NetWorker server,
Network
Software - Version - Comment
RedHat Linux - 5.3 - OS for database nodes
Microsoft Windows - 2003 SP2 - OS for Navisphere Management Server
Oracle Database/Cluster/ASM - 11.1.0.7 - Database/cluster software/volume
management
Oracle ASMLib - 2.0 - Support library for ASM
Swingbench - 2.3 - OLTP database benchmark
Orion - 10.2 - Orion is the Oracle I/O Numbers Calibration Tool designed to
simulate Oracle I/O workloads
FLARE - 04.29.000.5.003 - Includes Access Logix and Navisphere Agent
Navisphere Analyzer - 6.29.0.6.34
SnapView - 6.29.0.6.34.1
PowerPath - 5.3 - Multipathing software
DDOS - 4.7.1.3 - Data Domain OS
NetWorker - 7.6 - Backup and recovery suite
NetWorker Module for Oracle - 5.0 - NetWorker Oracle integration
Cisco IOS - 12.2 - Network
Fabric OS - 6.2.0g - SAN
Supporting documents
EMC CLARiiON
EMC Data Domain
EMC NetWorker

Third-party documents
Oracle Database
Red Hat Linux
Terminology
Terms and definitions

Term: Definition
ASM: Automatic Storage Management
BCT: Block Change Tracking
BE: Back End
DAE: Disk Array Enclosure
DBCA: Database Configuration Assistant
FE: Front End
NMO: NetWorker Module for Oracle
RAC: Real Application Clusters
RPO: Recovery Point Objective
RTO: Recovery Time Objective
SAS: Serial Attached SCSI
SISL: Streams Informed Segment Layout
VTL: Virtual Tape Library
This section briefly describes the key solution components. For details on all of the
components that make up the solution architecture, refer to the hardware and
software sections.
CLARiiON
CX4-960
The EMC CLARiiON CX4 model 960 enables you to handle the most data-intensive
workloads and large consolidation projects. CLARiiON CX4-960 delivers innovative
technologies such as Flash drives, Virtual Provisioning, a 64-bit operating system,
and multi-core processors.
The CX4's new flexible I/O module design, UltraFlex technology, delivers an easily
customizable storage system. Additional connection ports can be added to expand
connection paths from servers to the CLARiiON. The CX4-960 can be populated with
up to six I/O modules per storage processor.
The CX4-960 also uses a new generation of storage processor CPUs, memory, and
PCI bus architecture.
EMC Data
Domain DD880
EMC Data Domain deduplication storage systems dramatically reduce the amount of
disk storage needed to retain and protect enterprise data. By identifying redundant
data as it is being stored, Data Domain systems provide a storage footprint that is
five to 30 times smaller, on average, than the original dataset. Backup data can then
be efficiently replicated and retrieved over existing networks for streamlined disaster
recovery and consolidated tape operations.
The Data Domain DD880 is the industry's highest-throughput, most cost-effective,
and scalable deduplication storage solution for disk backup and network-based
disaster recovery (DR).
The high-throughput inline deduplication data rate of the DD880 is enabled by the
Data Domain Streams Informed Segment Layout (SISL) scaling architecture. The
level of throughput is achieved by a CPU-centric approach to deduplication, which
minimizes the number of disk spindles required.
Navisphere
Management
Suite
EMC
PowerPath
EMC NetWorker
Together with the NetWorker server, NMO augments the backup and recovery
system provided by the Oracle server and provides a storage management solution
that addresses the need for cross-platform support of enterprise applications.
EMC SnapView
The Swingbench workload used in this testing was Order Entry. The Order Entry
(PL/SQL) workload models the classic order entry stress test. It has a profile similar
to the TPC-C benchmark. It models an online order entry system, with users being
required to log in before purchasing goods.
The environment consists of a two-node Oracle 11g RAC cluster that accesses a
single production database. Each cluster node resides on its own server, which is a
typical Oracle RAC configuration. The two RAC nodes communicate with each other
through a dedicated private network that includes a Cisco Catalyst 3750G-48TS
switch. This cluster interconnect synchronizes the cache across the database
instances between user requests. A Fibre Channel SAN is provided by two Brocade
4900 switches. EMC PowerPath is used in this solution and works with the storage
system to intelligently manage I/O paths. In this solution, for each server, PowerPath
manages four active I/O paths to each device and four passive I/O paths to each
device.
Contents - See Page
CLARiiON storage design and configuration - 18
Data Domain - 23
SAN topology - 25
Gold copy
At various logical checkpoints within the testing process the gold copy was refreshed
to ensure there was an up-to-date copy of the database available at all times. This
was to ensure that an instantaneous recovery image was always available in the
event that any logical corruption occurred during, or as a result of, the testing process.
If any issue did occur then a reverse synchronization from the SnapView clone gold
copy would have made the data available immediately, thereby avoiding having to
rebuild the database.
Backup copy
The backup clone copy was used for NetWorker proxy backups. The clone copy of
the database was mounted to the proxy node and the backups were executed on the
proxy node. This is also referred to as the clone mount host.
Configuration
It is a best practice to use ASM external redundancy for data protection when using
EMC arrays. CLARiiON will also provide protection against loss of media, as well as
transparent failover in the event of a specific disk or component failure.
The following image shows the CLARiiON layout; the CX4-960 deployed for this
solution had four 4 Gb Fibre Channel back-end buses for disk connectivity. The
back-end buses are numbered Bus 0 to Bus 3. Each bus connects to a number of
DAEs (disk array enclosures). DAEs are numbered using the Bus X Enc Y
nomenclature, so the first enclosure on Bus 0 is therefore known as Bus 0 Enc 0.
Each bus has connectivity to both storage processors for failover purposes.
Each enclosure can hold up to 15 disk drives. Each disk drive is numbered in an
extension of the Bus Enclosure scheme. The first disk in Bus 0 Enclosure 0 is known
as disk 0_0_0.
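As an illustrative example (arrayIP is a placeholder for a storage processor address), an individual drive can be queried by this Bus_Enclosure_Disk address with naviseccli:

```
# Query the first disk in Bus 0 Enclosure 0; reports the drive's state,
# capacity, and RAID group membership
naviseccli -h arrayIP getdisk 0_0_0
```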
The following image shows how ASM diskgroups were positioned on the CLARiiON
array.
The first enclosure contains the vault area. The first five drives, 0_0_0 through 0_0_4,
have a portion of their capacity reserved for internal use. This reserved area contains
the storage processor boot images as well as the cache vault area. Disks 0_0_11 to
0_0_14 were configured as hot spares.
Disks 0_0_5 to 0_0_9 were configured as RAID Group 0 with 16 LUNs used for the
redo logs. These LUNs were then allocated as an ASM diskgroup, named the redo
diskgroup. RAID Group 0 also contained the OCR disk and the Voting disk.
The next four enclosures contain three additional ASM diskgroups. The following
section explores this in more detail.
ASM diskgroups
The database was built using four distinct ASM diskgroups:
The Data diskgroup contains all datafiles and the first control file.
The Online Redo diskgroup contains online redo logs for the database and a
second control file. Ordinarily, Oracle's best practice recommendation is for
the redo log files to be placed in the same diskgroup as all the database
files (the Data diskgroup in this example). However, it is necessary to
separate the online redo logs from the Data diskgroup when planning to do
recovery from split-mirror snap copies, since the current redo log files cannot
be used to recover the cloned database.
The Flash Recovery diskgroup contains the archive logs.
The Temp diskgroup contains tempfiles.
EMC SnapView
SnapView clones were used to create complete copies of the database. A clone
copy was used to offload the backup operations to the proxy node. A second clone
copy was used as a Gold copy. The following graphic shows an example of a clone
LUN's relationship to the source LUN; in this example, the clone information shown is
for one of the LUNs contained in the ASM Data diskgroup. SnapView clones create a full bit-for-bit copy of the respective source LUN. A clone was created for each of the LUNs
contained within the ASM diskgroups, and all clones were then simultaneously split
from their respective sources to provide a point-in-time content consistent replica set.
The command naviseccli -h arrayIP snapview -listclonegroup data1 was used
to display information on this clone group. Each of the ASM diskgroup LUNs is
added to a clone group becoming the clone source device. Target LUN clones are
then added to the clone group. Each clone group is assigned a unique ID and each
clone gets a unique clone ID within the group. The first clone added has a clone ID
of 0100000000000000, and for each subsequent clone added the clone ID increments.
The clone ID is then used to specify which clone is selected each time a cloning
operation is performed.
As shown above, there are two clones assigned to the clone group. Clone ID
0100000000000000 was used as the gold copy and clone ID 0200000000000000
was used for backups. (The Navisphere Manager GUI also shows this information.)
When the clones are synchronized they can be split (fractured) from the source LUN
to provide an independent point-in-time copy of the database.
The LUNs used for the clone copies were configured in a similar fashion to the
source copy to maintain the required throughput during the backup process. The
image below shows the clone relationship for two of the metaLUNs.
Data Domain
Overview
The following sections describe how Data Domain systems ensure data integrity and
provide multiple levels of data compression, reliable restorations, and multipath
configurations. The Data Domain operating system (DD OS) Data Invulnerability
Architecture protects against data loss from hardware and software failures.
Data integrity
When writing to disk, the DD OS creates and stores checksums and self-describing
metadata for all data received. After writing the data to disk, the DD OS then
recomputes and verifies the checksums and metadata. An append-only write policy
guards against overwriting valid data.
After a backup completes, a validation process looks at what was written to disk to
see that all file segments are logically correct within the file system and that the data
is the same on the disk as it was before being written to disk.
In the background, the Online Verify operation continuously checks that data on the
disks is correct and unchanged since the earlier validation process.
The back-end storage is set up in a double parity RAID 6 configuration (two parity
drives). Additionally, hot spares are configured within the system. Each parity stripe
has block checksums to ensure that data is correct. The checksums are constantly
used during the online verify operation and when data is read from the Data Domain
system. With double parity, the system can fix simultaneous errors on up to two
disks.
To keep data synchronized during a hardware or power failure, the Data Domain
system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An
NVRAM card with fully charged batteries (the typical state) can retain data for a
minimum of 48 hours. When reading data back on a restore operation, the DD OS
uses multiple layers of consistency checks to verify that restored data is correct.
Data compression
- Smaller footprint
- Longer retention
- Faster restore
- Faster time to disaster recovery
SISL
Multipath and
load-balancing
configuration
Multipath configuration and load balancing are supported on Data Domain systems
that have at least two HBA ports. In a multipath configuration on a Data Domain
system, each of two HBA ports on the system is connected to a separate port on the
backup server.
SAN topology

Oracle layout
The two-node Oracle 11g RAC cluster nodes and the proxy node were cabled and
zoned as shown in the following image. Each node contained four two-port HBAs.
Four of the available ports were used to connect the nodes to the CX4-960.
CLARiiON best practice dictates that single initiator soft zoning is used. Each HBA is
zoned to both storage processors. This configuration offers the highest level of
protection and may also offer higher performance. It entails the use of full-feature
PowerPath software. In this configuration, there are multiple HBAs connected to the
host; therefore, there are redundant paths to each storage processor. There is no
single point of failure. Data availability is ensured in the event of an HBA, cable, switch,
or storage processor failure. Since there are multiple paths per storage processor,
this configuration benefits from the PowerPath load-balancing feature and thus
provides additional performance.
The connectivity diagram below shows the two-node Oracle 11g RAC cluster nodes.
The remaining four HBA ports in each node were zoned to the Data Domain DD880
appliance. This zoning approach ensured that the primary storage (CLARiiON
CX4-960) and the backup storage (DD880) were on different HBA ports, so that
traffic was segregated during backups, as recommended by EMC best practice.
Single initiator zoning was also used when zones were created for the DD880. The
zoning of the nodes to the DD880 also had redundancy built in to ensure that tape
drives were available in the event of a cable, switch, or HBA failure.
NetWorker
topology
The EMC NetWorker environment provides the ability to protect your enterprise
against the loss of valuable data. In a network environment, where the amount of
data grows rapidly, the need to protect data becomes crucial. The EMC NetWorker
product gives you the power and flexibility to meet such a challenge.
This chapter provides guidelines on the Oracle database design used for this
validated solution. The design and configuration instructions apply to the specific
revision levels of components used during the development of the solution.
Before attempting to implement any real-world solution based on this validated
scenario, gather the appropriate configuration documentation for the revision levels
of the hardware and software components. Version-specific release notes are
especially important.
ASM diskgroups
The database was built with four distinct ASM diskgroups (+DATA, +FRA, +REDO,
and +TEMP).

ASM Diskgroup: Contents
DATA: Datafiles and the first control file
FRA: Archive logs
REDO: Online redo logs and the second control file
TEMP: Temporary tablespace
The ASMCMD CLI lists the diskgroups, showing the state of each one.
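That listing can be reproduced from the ASM environment with the asmcmd utility; a minimal illustrative invocation:

```
# Lists the mounted diskgroups; the State column shows MOUNTED and the
# Type column shows EXTERN for external-redundancy diskgroups
asmcmd lsdg
```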
Control files
The Oracle database in this solution has two control files, each stored in a different
ASM diskgroup.
Redo logs
All database changes are written to the redo logs (unless logging is explicitly
disabled), which makes the redo logs very write-intensive. To protect against a
failure involving the redo log, the Oracle database was created with multiplexed
redo logs so that copies of the redo log can be maintained on different disks.
Archive log mode was enabled, which causes the database to automatically create
offline archived copies of the online redo log files. Archive log mode enables online
backups and media recovery.
Note
Oracle recommends enabling archive logging; the following graphic shows how to
check that the database is in archive log mode.
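The same check can be performed from SQL*Plus, for example:

```sql
ARCHIVE LOG LIST;                   -- reports the database log mode and the
                                    -- archive destination
SELECT log_mode FROM v$database;    -- returns ARCHIVELOG when enabled
```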
The previous graphic shows that once archive log mode is enabled, the archive logs
are written out to the FRA diskgroup.
Parameter files
A centrally located server parameter file (spfile) was used to persistently store and
manage the database initialization parameters for all RAC instances. Oracle
recommends that you create a server parameter file as a dynamic means of
maintaining initialization parameters.
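A minimal sketch of this setup (the diskgroup path and the database name ORCL are illustrative assumptions, not values from the validated environment):

```sql
-- Create a shared spfile in ASM from an existing pfile
CREATE SPFILE='+DATA/ORCL/spfileORCL.ora' FROM PFILE='/tmp/initORCL.ora';

-- Each RAC instance's local $ORACLE_HOME/dbs/init<SID>.ora then contains
-- a single line pointing at the shared spfile:
-- SPFILE='+DATA/ORCL/spfileORCL.ora'
```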
Swingbench
TPC-C-like
toolkit
The order entry wizard that is used to create the SOE schema in Swingbench 2.3
limits its size to 100 GB. The reason for this is that the wizard executes as a
single-threaded operation and would take an unreasonable amount of time to create
a larger schema.
However, Datagenerator is capable of creating larger schemas that would generate
much higher levels of I/O (index lookups). The script alterations specific to this
solution environment to create a 1 TB database can be seen in the appendix.
The Configuration dialog box (see the following image) enables you to change all of
the important attributes that control the size and type of load placed on your server.
Among the most useful are:

Number of Users: the number of sessions that Swingbench will create against
the database.

Min and Max Delay Between Transactions (ms): how long Swingbench will put a
session to sleep between transactions.

Benchmark Run Time: the total time Swingbench will run the benchmark; after
this time has expired, Swingbench automatically logs off the sessions.
This graphic shows a typical example with 120 concurrent sessions.
This chapter provides procedures and guidelines for installing and configuring the
components that make up the validated solution scenario. The installation and
configuration instructions presented in this chapter apply to the specific revision
levels of components used during the development of this solution.
Before attempting to implement any real-world solution based on this validated
scenario, gather the appropriate installation and configuration documentation for the
revision levels of the hardware and software components planned in the solution.
Version-specific release notes are especially important.
Contents - See Page
Navisphere - 34
PowerPath - 37
Install Oracle clusterware - 41
Data Domain - 47
NetWorker - 51
Multiplexing - 57
Navisphere
Overview
Navisphere Management Suite enables you to access and manage all CLARiiON
advanced software functionality.
Register hosts
The Connectivity Status view in Navisphere, seen in the image below, shows the
new host as logged in but not registered. Install the Navisphere host agent on the
host and reboot, and the HBAs will then automatically register.
The Hosts tab shows the host as unknown and the host agent as unreachable. This
is because the host is multi-homed, that is, it has multiple NICs configured, as
shown in the following image.
A multi-homed host machine has multiple IP addresses on two or more NICs. The
host can be physically connected to multiple data links that can be on the same or
different networks. When you install the Navisphere Host Agent on a multi-homed
host, the host agent, by default, binds to the first NIC in the host. If your host is
multi-homed, for the host agent to successfully register with the desired CLARiiON
storage system, you need to configure the host agent to bind to a specific NIC. This
is done by setting up an agentID.txt file. To do this, stop the Navisphere agent, then
rename or delete the HostIdFile.txt file located in /var/log, as shown in the following
image.
Create agentID.txt in root; this file should only contain the fully qualified hostname of
the host and the IP address HBA/NIC port that the Navisphere agent should use.
The agentID.txt file should contain only these two lines and no special characters,
as shown in the following image.
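The procedure above can be sketched from the shell. The hostname and IP below are placeholder values, the agent service name varies by release, and the file is written to /tmp here (rather than /) so the sketch can run anywhere:

```shell
# Sketch: bind the Navisphere host agent to one NIC on a multi-homed host.
# rac-node1.example.com / 192.168.10.11 are placeholder values.
AGENT_FILE=/tmp/agentID.txt            # on a real host: /agentID.txt

# Exactly two lines: the FQDN, then the IP of the HBA/NIC port to bind.
cat > "$AGENT_FILE" <<'EOF'
rac-node1.example.com
192.168.10.11
EOF

# On a real host, cycle the agent so it re-creates HostIdFile.txt:
#   /etc/init.d/naviagent stop
#   rm -f /var/log/HostIdFile.txt
#   /etc/init.d/naviagent start
wc -l < "$AGENT_FILE"                  # prints 2
```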
Then stop and restart the Navisphere agent; this re-creates the HostIdFile.txt file,
binding the agent to the correct NIC. The host now appears correctly on the Hosts
tab and is shown as registered with Navisphere, as shown in the previous image.
PowerPath
Overview
EMC PowerPath provides I/O multipath functionality. With PowerPath, a node can
access the same SAN volume via multiple paths (HBA ports), which enables both
load balancing across the multiple paths and transparent failover between the paths.
PowerPath policy
After PowerPath has been installed and licensed, it is important to set the PowerPath
policy to CLARiiON-Only. The following image shows the powermt display output
prior to setting the PowerPath policy.
Once the PowerPath policy has been set correctly, all paths are alive and licensed.
The previous image shows the powermt set policy command and the powermt
display output for CLARiiON LUN 80, listing the eight paths for this device, all
managed by PowerPath. The LUN is owned by SPA, so the four paths to SPA are
active and the remaining paths to SPB are passive.
All ASM diskgroups were then built using PowerPath pseudo names.
Note
A pseudo name is a platform-specific value assigned by PowerPath to the
PowerPath device.
Because of the way in which the SAN devices were discovered on each node, there
was a possibility that a pseudo device pointing to a specific LUN on one node might
point to a different LUN on another node. The emcpadm command was used to
ensure consistent naming of PowerPath devices on all nodes.
The following image shows how to determine the available pseudo names.
The next image shows how to change the pseudo names using the following
command:
emcpadm renamepseudo -s <xxx> -t <yyy>
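Before renaming, a cross-node consistency check can be sketched as below. The map files stand in for output gathered on each node (for example, by parsing powermt display dev=all), and the device/LUN pairs are illustrative only:

```shell
# Sketch: find pseudo devices that point at different LUNs on two nodes.
# Each map file holds "pseudo-name LUN" pairs collected on one node.
cat > /tmp/node1.map <<'EOF'
emcpowere 65
emcpowerf 64
emcpowerg 63
EOF
cat > /tmp/node2.map <<'EOF'
emcpowere 65
emcpowerf 63
emcpowerg 64
EOF

sort /tmp/node1.map > /tmp/node1.sorted
sort /tmp/node2.map > /tmp/node2.sorted

# Pairs present on node1 but not on node2 are the inconsistent devices;
# each would need an `emcpadm renamepseudo` on node2 to realign.
comm -23 /tmp/node1.sorted /tmp/node2.sorted | awk '{print $1}' > /tmp/mismatch.txt
cat /tmp/mismatch.txt
```

Here emcpowerf and emcpowerg map to swapped LUNs and would be flagged; emcpowere is consistent and is not.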
This table shows the PowerPath pseudo names associated with the LUNs used in the
ASM diskgroups.

Purpose          Diskgroup name   Path               CLARiiON LUN
Data files       DATA             /dev/emcpowerac    10
                                  /dev/emcpowerad
                                  /dev/emcpowerae
                                  /dev/emcpoweraf
                                  /dev/emcpowere     65
                                  /dev/emcpowerf     64
                                  /dev/emcpowerg     63
                                  /dev/emcpowerh     62
                                  /dev/emcpoweri     61
                                  /dev/emcpowerj     60
                                  /dev/emcpowerk     59
                                  /dev/emcpowerl     58
                                  /dev/emcpowerm     57
                                  /dev/emcpowern     56
                                  /dev/emcpowero     55
                                  /dev/emcpowerp     54
                                  /dev/emcpowerq     53
                                  /dev/emcpowerr     52
                                  /dev/emcpowers     50
                                  /dev/emcpowert     51
                                  /dev/emcpoweru     22
Redo             REDO
Temp/Undo        TEMP
Flash Recovery   FRA              /dev/emcpowerv     20
                                  /dev/emcpowerw     16
                                  /dev/emcpowerx     18
                                  /dev/emcpowery     23
                                  /dev/emcpowerz     21
                                  /dev/emcpoweraa    19
                                  /dev/emcpowerab    17

High availability health check
To verify that the hosts and CLARiiON are set up for high availability, install and run
the naviserverutilcli utility on each node to ensure that everything is set up correctly
for failover. To run the utility, use the following command:
naviserverutilcli -hav -upload -ip 172.<xxxxxxx>
In addition to the standard output, the health check utility also uploads a report to the
CLARiiON storage processors that can be retrieved and stored for reference.
Oracle 11g clusterware was installed and configured for both production nodes.
Below are a number of screenshots taken during the installation, showing the
configuration of both RAC nodes.
Specify cluster
Configure ASM and Oracle 11g software and database
ASM diskgroup creation
The Oracle DBCA creates the ASM data diskgroup and the init.ora for the ASM
instance. Additional diskgroups can then be created.
ASM uses mirroring for redundancy and supports three redundancy types:
External redundancy: no ASM mirroring; protection is provided by the storage array.
Normal redundancy: 2-way mirrored. At least two failure groups are needed.
High redundancy: 3-way mirrored. At least three failure groups are needed.
EMC recommends using external redundancy as provided by the CLARiiON
CX4-960. Refer to the CLARiiON configuration setup.
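An external-redundancy diskgroup creation can be sketched as follows. The statement is written to a file rather than executed here, and the three pseudo-device paths are illustrative:

```shell
# Sketch: ASM diskgroup with external redundancy (the CLARiiON RAID
# protects the LUNs, so ASM performs no mirroring of its own).
cat > /tmp/create_dg.sql <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/emcpowere',
       '/dev/emcpowerf',
       '/dev/emcpowerg';
EOF
# Against a real ASM instance:  sqlplus / as sysasm @/tmp/create_dg.sql
grep -c '/dev/emcpower' /tmp/create_dg.sql   # prints 3
```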
Database installation
Once the ASM diskgroups were created, Oracle Database 11g 11.1.0.6.0 was
installed.
Block change tracking
The block change tracking feature for incremental backups improves incremental
backup performance by recording changed blocks in each datafile in a block change
tracking file. This file is a small binary file stored in the database area. RMAN tracks
changed blocks as redo is generated.
If you enable block change tracking, RMAN uses the change tracking file to identify
changed blocks for an incremental backup, thus avoiding the need to scan every
block in the datafile. RMAN only uses block change tracking when the incremental
level is greater than 0 because a level 0 incremental backup includes all blocks.
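Enabling the feature is a single statement. A sketch follows; the script is written to a file rather than run, and placing the tracking file in the +DATA diskgroup is an assumption consistent with the layout above:

```shell
# Sketch: enable RMAN block change tracking and check its status.
cat > /tmp/enable_bct.sql <<'EOF'
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA';
SELECT status, filename FROM v$block_change_tracking;
EOF
# On a real system:  sqlplus / as sysdba @/tmp/enable_bct.sql
grep -c 'BLOCK CHANGE TRACKING' /tmp/enable_bct.sql   # prints 1
```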
Data Domain
Introduction
Data Domain DD880 integrates easily into existing data centers and can be
configured for leading backup and archiving applications using NFS, CIFS, OST, or
VTL protocols. This solution focuses on VTL over Fibre Channel SAN.
The Data Domain appliance was configured with four Fibre Channel connections to
the SAN. At most two tape drives were assigned per channel in this solution, giving a
maximum of eight RMAN channels per node.
VTL option
The following image shows the Data Domain Enterprise Manager. The VTL option
requires an additional Data Domain license.
To add the license, click the Maintenance tab and launch the Configuration Wizard
from the Tasks drop-down, as shown in the following image.
After the VTL license has been applied, the VTL service can be enabled. Go to
the Data Management page and select VTL > Actions > Enable.
After the VTL license was added, the virtual tape library was created.
Virtual tapes were created in a tape pool. For this solution the default pool was
selected.
Access groups were created within the DD880 appliance to allow access to
individual tape drives and the media changer. The HBA WWNs of the nodes will be
present in the Physical Resources tab only if they are correctly zoned on the FC
switches. The HBAs are then available to be added to the access groups as
initiators. Tape drives are added to the access group as LUNs. The LUNs are
assigned primary and secondary ports on the appliance.
NetWorker
NetWorker introduction
NetWorker configuration
After zoning the SAN switches correctly and creating the access groups on the Data
Domain appliance, the servers can pick up the tape devices. A SCSI bus rescan is
required to achieve this.
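The rescan can be sketched as a loop over the SCSI hosts. On a real node the root would be /sys/class/scsi_host and the write requires root privileges, so a throwaway directory stands in here so the loop can run anywhere:

```shell
# Sketch: rescan every SCSI host so newly zoned VTL tape drives appear.
SCSI_ROOT=/tmp/fake_scsi_host              # real value: /sys/class/scsi_host
mkdir -p "$SCSI_ROOT/host0" && touch "$SCSI_ROOT/host0/scan"   # demo stand-in

: > /tmp/rescan.log
for host in "$SCSI_ROOT"/host*; do
    [ -e "$host/scan" ] || continue
    echo "rescanning $host" | tee -a /tmp/rescan.log
    # echo "- - -" > "$host/scan"          # the actual trigger, run as root
done
# New devices then appear in /proc/scsi/scsi and as /dev/st* and /dev/nst* nodes.
```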
When selecting tape devices for use, it is important to select the SCSI non-rewind
devices. The st driver provides the interface to a variety of SCSI tape devices under
Linux:
First auto-rewind SCSI tape device: /dev/st0
Second auto-rewind SCSI tape device: /dev/st1
First non-rewind SCSI tape device: /dev/nst0
Second non-rewind SCSI tape device: /dev/nst1
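Filtering a device listing down to the non-rewind names can be sketched as below; the device list is a stand-in for an actual /dev listing:

```shell
# Sketch: keep only non-rewind tape devices. Auto-rewind (st*) devices
# rewind on close, which breaks multi-file backup sessions.
printf '%s\n' /dev/st0 /dev/st1 /dev/nst0 /dev/nst1 > /tmp/tape_devs.txt

grep '/nst' /tmp/tape_devs.txt > /tmp/nonrewind.txt
cat /tmp/nonrewind.txt                 # /dev/nst0 and /dev/nst1
```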
The following image shows the devices selected are all non-rewind devices.
The three servers were added to NetWorker as storage nodes. The devices can then
be made available to NetWorker by performing a scan for devices.
To change device properties, right-click on the device and select Properties. The
parameters used for this solution are described in the following section.
The NetWorker device Target sessions and Max sessions parameters were both set
to 1. This effectively disabled NetWorker multiplexing, which is important when using
data deduplication.
Multiplexing, and its effect on deduplication, is described in detail in the
Multiplexing section.
The NetWorker Wizard was used to configure the client backups on each node.
NetWorker Module for Oracle (NMO) was installed on each node to enable
NetWorker integration with Oracle RMAN.
Number of channels
Backup level
Control file backup
Archive redo log backup
Filesperset
Testing was conducted using different numbers of RMAN channels. Backup levels
were set at level 0, or full backups, on the weekend and level 1, or differential
incremental backups, on weekdays. The control file and archive redo logs were
included in the backups. Refer to the Testing and Validation section for more
details.
The filesperset parameter was set to 1; this is explained in detail in the next section.
The wizard creates the RMAN script, as shown below, which can be modified if
required.
Multiplexing
Overview of NetWorker multiplexing
The image below shows six 10 MB/s clients writing to a 60 MB/s device. With
multiplexing enabled, all clients are interleaved together to take advantage of the
extra bandwidth of the tape device.
RMAN multiplexing
Testing was conducted using two, four, six, and eight RMAN channels.
Backups were run using a backup cycle consisting of level 0, or full backups, on the
weekend and level 1, or differential incremental backups, Monday through Thursday.
For the purposes of this solution, close of business on Friday was deemed the start
of the weekend.
The starting point for an incremental backup strategy is a level 0 incremental backup,
which backs up all blocks in the database. An incremental backup at level 0 is
identical in content to a full backup but, unlike a full backup, the level 0 backup is
considered part of the incremental backup strategy.
A level 1 incremental backup contains only blocks changed after a previous
incremental backup. A level 1 backup can be a cumulative incremental backup,
which includes all blocks changed since the most recent level 0 backup, or a
differential incremental backup, which includes only blocks changed since the most
recent incremental backup. Incremental backups are differential by default.
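A weekday level 1 differential backup matching this cycle can be sketched as an RMAN script. The script is written to a file here, with two channels rather than eight to keep the sketch short:

```shell
# Sketch: differential incremental level 1 backup, as in the weekday cycle.
cat > /tmp/level1.rman <<'EOF'
RUN {
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
  ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
  BACKUP
    INCREMENTAL LEVEL 1
    FILESPERSET 1
    DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL CH1;
  RELEASE CHANNEL CH2;
}
EOF
# In this solution NetWorker/NMO drives the script; standalone it would be:
#   rman target / @/tmp/level1.rman
grep -c 'INCREMENTAL LEVEL 1' /tmp/level1.rman   # prints 1
```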
Note
When using a deduplication appliance such as Data Domain it is often
recommended to run only full backups as only the unique data will be stored.
However, this was outside the scope of the solution.
Archived redo logs and the control file were also backed up as part of each backup
in this solution. Backing up the archived redo logs had a significant impact on the
overall change rate: the daily change rate of the database was 2 percent, but
because the archived log files were backed up on every backup, the change rate
observed during incremental backups was much higher, closer to 10 percent.
Both NetWorker multiplexing and RMAN multiplexing (filesperset=1) were disabled
for all backups, as multiplexing has a negative effect on the deduplication rates
achieved.
For this use case a number of tests were carried out on the Oracle 11g OLTP
backup and recovery infrastructure. At a high level the tests performed were:
Orion validation
RMAN channel configuration
Swingbench
Validate Swingbench profile
Backup from production
SnapView clone copy from production
Backup from proxy
Data Domain deduplication
Restore
Orion validation
Once the disk environment was set up on the CLARiiON CX4-960, the disk
configuration was validated using an Oracle toolset called Orion. Orion is the Oracle
I/O Numbers Calibration Tool designed to simulate Oracle I/O workloads without
having to create and run an Oracle database. It utilizes the Oracle database I/O
libraries and can simulate OLTP workloads (small I/Os) or data warehouses (large
I/Os). Orion is useful for understanding the performance capabilities of a storage
system, either to uncover performance issues or to size a new database installation.
Note
This is a destructive tool so it should only be run against raw devices prior to
installing any database or application.
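A minimal Orion run can be sketched as below. Orion reads the raw devices to test from a <testname>.lun file; the pseudo-device paths and the disk count are placeholders for this environment:

```shell
# Sketch: describe the LUNs for an Orion OLTP-style calibration run.
cat > /tmp/mytest.lun <<'EOF'
/dev/emcpowere
/dev/emcpowerf
/dev/emcpowerg
/dev/emcpowerh
EOF
# Destructive to the listed devices; run only before any database exists:
#   ./orion -run oltp -testname /tmp/mytest -num_disks 40
wc -l < /tmp/mytest.lun                # prints 4
```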
This graph shows total throughput on a single node, with four metaLUNs, consisting
of 40 disks.
RMAN channel configuration
This image shows the average MB/s throughput when backing up the database
using different numbers of RMAN channels. Eight RMAN channels proved to be the
fastest in this environment.
Shown here is a typical example of NetWorker backing up the database, in this case
using four RMAN channels.
All subsequent tests were run using the following base parameters. The
methodology for selecting these parameters was explained earlier in this document.
Backup parameters
RMAN channels              8
Filesperset                1
NetWorker Target Session   1
NetWorker Max Session      1
Archived redo logs         Included in all backups
Control file               Included in all backups
Validate Swingbench profile
The following image shows the processor activity on Node1 with the Swingbench
load running against the cluster.
The next image shows the response time of the data metaLUNs (Data ASM
diskgroup) on the CLARiiON array during the same period.
Note
The array time is one hour behind the server time.
Backup from production
The following graph shows the processor activity on Node 1. As in the previous
example, the Swingbench load is running against the cluster. In addition, an RMAN
backup initiated by NetWorker is running against Node 1, against the same LUNs
that are serving the Swingbench load. There is an increase in CPU utilization and
iowait.
The following graph shows the response times of Data metaLUNs (Data ASM
diskgroup) on the CLARiiON. The response time is higher for the duration of the
backup.
Clone copy from production
The following graph again shows the processor activity on Node 1 under the
Swingbench load. In this example, a sync of the proxy clone is started and finished
during the time window.
The following graph shows the response time on the Data metaLUNs (Data ASM
diskgroup) on the CLARiiON. There is an initial increase in response time and iowait
(highlighted) when the sync is started. But the duration of the impact to the
production nodes is much shorter when compared to running the backup from one of
the nodes.
The following graph shows processor activity on Node 1 under Swingbench load.
Again, a clone sync is initiated. In this example, the database is also put into hot
backup mode after the completion of the sync. No noticeable effect can be seen on
the RAC node while in hot backup mode.
The following image shows the response time on the metaLUNs (Data ASM
diskgroup) on the CLARiiON. The first increase in response time, circa 09:47,
corresponds to the start of the clone sync.
The second larger increase in response time, between 9:59 and 10:03, occurs when
the database is put into hot backup mode. The spike occurs when the database is
put into hot backup mode because:
Any dirty data buffers in the database buffer cache are written out to the datafiles
and the datafiles are checkpointed.
The datafile headers are updated to the system change number (SCN) captured
when the begin backup command is issued. The SCN is not incremented with
checkpoints while a file is in hot backup mode; this lets the recovery process
determine which archived redo log files may be required to fully recover the file
from that SCN onward.
The data blocks within the database files continue to be read and written. During
hot backup, an entire block is written to the redo log files the first time the block
is changed; subsequently, only redo vectors (changed bytes) are written to the
redo logs.
When the database is taken out of hot backup mode the datafile header and SCN
are updated.
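The begin/end bracket described above can be sketched as follows. The script is written to a file rather than run, and the clone fracture happens between the two statements:

```shell
# Sketch: hot backup bracket around the SnapView clone fracture.
cat > /tmp/hot_backup.sql <<'EOF'
ALTER DATABASE BEGIN BACKUP;  -- checkpoint datafiles, freeze header SCN
-- (fracture the SnapView clones at this point)
ALTER DATABASE END BACKUP;    -- update datafile headers and the SCN
EOF
# On a real system:  sqlplus / as sysdba @/tmp/hot_backup.sql
grep -c 'BACKUP;' /tmp/hot_backup.sql  # prints 2
```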
This shows that there is less impact on the RAC nodes when using SnapView clones
to offload the backup to the proxy host, compared to running the backup directly from
the RAC node.
Backup from proxy node
The following graph shows processor activity on the proxy node during a backup.
The subsequent graph shows the response time on the proxy clone Data metaLUNs
(Data ASM diskgroup) on the CLARiiON.
The following two graphs show a complete level 0 backup from start to finish. They
show both production RAC nodes under Swingbench load. A clone sync is initiated
at approximately 2:50. Again, there is a short period at the beginning of the sync
during which an impact is observed. The sync completed at approximately 2:58. The
database was put into hot backup mode prior to fracturing the clones; there was no
noticeable effect on the production nodes.
The following graph shows the proxy node during the same time interval. There was
no load running against the proxy node. To facilitate backing up the clone copy of the
database, the instance was started and the database mounted on the proxy node.
The next graph shows the response times from the CLARiiON for the duration of the
backup. The response time of the production data metaLUNs (the ASM data
diskgroup), is tracked on the upper metric. The first peak is the clone sync and the
second peak is when the database is in hot backup mode. The lower metric tracks
the proxy clone data metaLUNs during the same period; the backup is taken from
these LUNs on the proxy node, taking the additional overhead of the backup away
from the production nodes.
Deduplication
The following image shows the data stored on the DD880 after the first weekly
backup cycle. The backup cycle consisted of an RMAN level 0 (full) backup on the
weekend and RMAN level 1 backups Monday through Thursday. Oracle Block
Change Tracking was enabled to improve incremental backup performance. The
database daily change rate is 2 percent. However, because the archived log files
were also backed up, the change rate observed during incremental backups was
actually much higher, closer to 10 percent.
Note
The graphs show the total Data Written to the DD880 increasing over time; this is
also described as the logical capacity. The Data Stored refers to the unique data
that is stored on the appliance. The % Reduction shows the storage savings gained
from using Data Domain.
By eliminating redundant data segments, the Data Domain system allows many
more backups to be stored and managed than would normally be possible on a
traditional storage server. While completely new data must be written to disk when it
is discovered, the variable-length segment deduplication capability of the Data
Domain system makes finding identical segments extremely efficient.
The following graph shows the deduplication after two weeks; there is an increase at
the weekend, when the level 0 full backups are run.
A falloff in the deduplication factor can be seen on weekdays when the backup policy
is incremental and archived redo logs are contained within every backup.
The graph above shows the backup cycle trend and deduplication factor after three
weeks.
The graph above illustrates a deduplication factor of 6:1 after five weeks, which
yields a storage capacity saving of 84 percent (see the following graph).
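The deduplication factor and the capacity saving are related by saving = (1 - 1/factor) * 100, so an exact 6:1 factor gives about 83 percent, and the reported 84 percent corresponds to a factor slightly above 6:1. A sketch:

```shell
# Sketch: convert a deduplication factor (logical:stored) into a % saving.
saving() { awk -v f="$1" 'BEGIN { printf "%.0f\n", (1 - 1/f) * 100 }'; }

saving 6       # prints 83
saving 6.25    # prints 84
saving 10      # prints 90
```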
Restore
The following image shows the restore throughput of the DD880 appliance. A
sustained read throughput of over 630 MB/s was achieved. Compared to the data
in the subsequent image, which shows the throughput achieved by the DD880
appliance during a backup, this demonstrates the balance between backup and
restore speeds.
The following image shows the DD880 appliance during a backup with a sustained
write throughput of over 790 MB/s.
Chapter 7: Conclusion

Overview
Introduction to conclusion
Conclusion
Appendix A: Scripts

Datagenerator .env file
Datagenerator SOE schema creation script
The following Datagenerator script was used to create and populate the SOE
schema used by Swingbench when running a typical TPC-C-type load against the
database.
-- Datagenerator "SOE" benchmark schema creation
-- Author : Dominic Giles
-- Created : 1/9/2008
--
set verify off
define tablespace=SOE
variable tablespace_size number
define tablespace_size=1024
define datafile='+DATA'
define indextablespace=SOE_INDEX
define indextablespace_size=2048
define indexdatafile='+DATA'
define username=SOE
define password=SOE
define indexprefs=nologging
define parallelism=16
define parallelclause=''
define usecompression=''
-- Uncomment the following line to enable compression on the sales and customer
-- define usecompression='COMPRESS'
define connectstring='//tce-r900-enc01/orcl'
variable scale number
define scale=1
accept username default &username prompt 'Order entry username [&username] : '
SOE schema installation
The SOE schema was installed manually using the scripts located in the
$DATAGEN_HOME/bin/scripts/soe directory.
These scripts can be run directly from SQL*Plus as the SYS or SYSTEM user.
The following command is an example of how to start the installation using the
soe_install script:
[oracle@TCE-R900-ENC01 soe]$ sqlplus /nolog
SQL*Plus: Release 11.1.0.7.0 - Production on Thu Aug 20 10:47:32 2009
Copyright (c) 1982, 2008, Oracle.
This script was used to create the schema of the testing user (in this case, soe) and
load all of the data necessary to run the tests.
NetWorker RMAN backup script
The RMAN script below is a typical example of one used to generate backups
through the NetWorker console.
This example shows an eight-channel incremental level 0 backup to tape. Each
backup was assigned a tag ID, which was later used as part of the restore process.
RUN {
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
BACKUP
INCREMENTAL LEVEL 0
FILESPERSET 1
FORMAT '%d_%U'
TAG 'RUN529'
DATABASE PLUS ARCHIVELOG;
BACKUP CONTROLFILECOPY '+FRA/ORCL/control_backup' TAG 'RUN529_CTL';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}
Oracle RMAN restore script
The restore process consisted of first allocating eight channels, then restoring the
control file, mounting the database, and running the restore database command
using the tag ID assigned earlier. Below is a sample restore script.
RUN
{
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
restore controlfile from tag 'RUN529_CTL';
alter database mount;
restore DATABASE from tag 'RUN529';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}