
Disaster Recovery using

Veritas Storage Foundation Enterprise HA


and IBM System Storage SAN Volume
Controller Metro Mirror & Global Mirror

Solution Installation and Configuration

Document SM102-5
September 26, 2008

Copyright IBM Corporation, 2008. All Rights Reserved.


All trademarks or registered trademarks mentioned herein are the property of their respective holders

Table of contents
Abstract........................................................................................................................................4
Introduction .................................................................................................................................4
About High Availability ............................................................................................................................. 4
About Disaster Recovery ......................................................................................................................... 4
About IBM ................................................................................................................................................ 4
About Symantec....................................................................................................................................... 5
About Veritas Storage Foundation HA..................................................................................................... 5
About SAN Volume Controller ................................................................................................................. 5

Overview ......................................................................................................................................7
Test System Configuration ........................................................................................................7
Installing and Configuring SAN Volume Controller ...............................................................11
Fabric Zoning ......................................................................................................................................... 11
SAN Volume Controller Software Installation ........................................................................................ 12
SAN Volume Controller Storage Configuration ..................................................................................... 12
Setting up SAN Volume Controller Cluster Partnership ........................................................................ 13
Create SVC Metro Mirror, SVC Global Mirror and FlashCopy Consistency Groups............................. 14
Create SVC Metro Mirror, SVC Global Mirror Relationship and FlashCopy Mappings......................... 15

Installing Veritas Storage Foundation ....................................................................................16


Symantec Product Licensing ................................................................................................................. 16
Supported AIX Operating Systems........................................................................................................ 16
Database Requirements ........................................................................................................................ 17
Disk Space............................................................................................................................................. 17
Environment Variables........................................................................................................................... 17
Virtual IP Address .................................................................................................................................. 17
Prerequisites for Remote and Cluster Installation ................................................................................. 17
Mounting a Software Disk...................................................................................................................... 18
Installing Using the Veritas Product Installer ......................................................................................... 18
Upgrading Veritas Storage Foundation HA to 5.0MP1RP3................................................................... 21

Installing and Upgrading VCS agent for DB2 .........................................................................22


Installing and Upgrading VCS agent for Oracle .....................................................................23
Installing VCS agent for IBM SVCCopyServices ....................................................................23
Installing and Configuring DB2, Oracle ..................................................................................23
Configuring Applications for Disaster Recovery ...................................................................24
Quick Setup ........................................................................................................................................... 24
General Configuration Steps ................................................................................................................. 25
Setting up SVC Cluster Partnership ...................................................................................................... 25
Configuring SVC Metro / Global Mirror, FlashCopy Relationship.......................................................... 25
Setting up VCS Global Cluster .............................................................................................. 25
Configuring VCS Application Service Group ......................................................................................... 26
Adding the VCS SVC CopyServices Resource ..................................................................................... 28


Setting up Firedrill .................................................................................................................................. 31

Failover Scenarios ....................................................................................................................33


Application Host Failover ....................................................................................................................... 34
Disaster Recovery in a Global Cluster Configuration ............................................................................ 34
Checking Failover Readiness with Firedrill............................................................................................ 35

Summary....................................................................................................................................37
Appendix A: VCS Configuration File (main.cf) with DB2, Oracle Application Service
Group for HA/DR .......................................................................................................................38
Appendix B: Setting up the Database Applications...............................................................56
Setting up the DB2 Database Application.............................................................................................. 56
Setting up the Oracle Database Application.......................................................................................... 59

Appendix C: Veritas Software Filesets Listing.......................................................................66


Trademarks and Special Notices .............................................................................................68


Abstract
This document describes how Symantec and IBM have installed, configured and validated High
Availability (HA) and Disaster Recovery (DR) configurations for DB2 and Oracle with IBM
System Storage SAN Volume Controller (SVC). These validations include local HA
configurations using Veritas Storage Foundation and Veritas Cluster Server. The
configuration was extended to a DR configuration using IBM SVC Metro Mirror for synchronous
replication and SVC Global Mirror for asynchronous replication with the VCS agent for IBM SVC
CopyServices and VCS Global Cluster Option for alternate site failover/failback capability.

Introduction
Infrastructure for mission critical applications must be able to meet the organization's Recovery Time
Objective (RTO) and Recovery Point Objective (RPO) for resuming operation in the event of a site
disaster. This solution addresses environments where the RPOs and RTOs are in the range of minutes
to a few hours. While backup is the foundation for any DR plan, typical RTOs for tape based backup are
well beyond these objectives. Also, replication alone is not enough as having the application data at a DR
site is of limited use without also having the ability to start the correct sequence of database management
systems, application servers, and business applications.
Symantec's DR solutions, Metro Clustering and Global Clustering, are extensions of local HA clustering
using Veritas Storage Foundation and Veritas Cluster Server. This validated and documented solution is
an example of Global Clustering, a collection of two or more VCS clusters at separate locations linked
together with VCS Global Cluster Option to enable wide area failover and disaster recovery. Each local
cluster within the global cluster is connected to its own shared storage. Local clustering provides local
failover for each site. IBM SAN Volume Controller Metro Mirror replicates data between sites to maintain
synchronized copies of storage at the two sites. For a disaster that affects an entire site, the customer
makes a decision on whether to move operations to the disaster recovery site. When that decision is
made, the application is automatically migrated to a system at the DR site. IBM SAN Volume Controller
Global Mirror replicates data asynchronously between sites and provides the copy used for recovery at the DR site.

About High Availability


The term high availability (HA) refers to a state where data and applications are highly available because
software or hardware is in place to maintain the continued functioning in the event of computer failure.
High availability can refer to any software or hardware that provides fault tolerance, but generally
the term has become associated with clustering. Local clustering provides high availability through
database and application failover. Veritas Storage Foundation Enterprise HA (SF/HA) includes Veritas
Storage Foundation and Veritas Cluster Server and provides the capability for local clustering. SAN
Volume Controller includes High Availability features as well.

About Disaster Recovery


Wide area disaster recovery (DR) provides the ultimate protection for data and applications in the event of
a disaster. If a disaster affects a local or metropolitan area, data and critical services are failed over to a
site hundreds or thousands of miles away. SAN Volume Controller Metro Mirror and Global Mirror and
Veritas Storage Foundation Enterprise HA/DR provide the capability for implementing disaster recovery.

About IBM
IBM is the world's largest information technology company, with over 80 years of leadership in helping
businesses innovate by delivering a wide range of solutions and technologies that enable customers,
large and small, to deliver more efficient and effective services. IBM's comprehensive server, storage,
software and services portfolio is designed to help you create new business insight by integrating,
analyzing and optimizing information on demand. From its foundations of virtualization, openness


and innovation through collaboration, IBM can optimize management of information through technology
innovations and infrastructure simplification to help achieve maximum business productivity. Visit IBM
at ibm.com.

About Symantec
Symantec is a global leader in infrastructure software, enabling businesses and consumers to have
confidence in a connected world. The company helps customers protect their infrastructure, information
and interactions by delivering software and services that address risks to security, availability, compliance
and performance. Headquartered in Cupertino, California, Symantec has operations in more than 40
countries. Visit Symantec at symantec.com.

About Veritas Storage Foundation HA


Veritas Storage Foundation HA is a comprehensive solution that delivers data and application availability
by bringing together two industry-leading products: Veritas Storage Foundation and Veritas Cluster
Server.
Veritas Storage Foundation provides a complete solution for heterogeneous online storage management.
Based on the industry-leading Veritas Volume Manager and Veritas File System, it provides a
standard set of integrated tools to centrally manage explosive data growth, maximize storage hardware
investments, provide data protection and adapt to changing business requirements. Unlike point
solutions, Storage Foundation enables IT organizations to manage their storage infrastructure with one
tool. With advanced features such as centralized storage management, non-disruptive configuration and
administration, dynamic storage tiering, dynamic multi-pathing, data migration and local and remote
replication, Storage Foundation enables organizations to reduce operational costs and capital
expenditures across the data center.
Veritas Cluster Server is an industry leading clustering solution for reducing both planned and unplanned
downtime. By monitoring the status of applications and automatically moving them to another server in
the event of a fault, Veritas Cluster Server can dramatically increase the availability of an application or
database. Veritas Cluster Server can detect faults in an application and all its dependent components,
including the associated database, operating system, network, and storage resources. When a failure is
detected, Cluster Server gracefully shuts down the application, restarts it on an available server, connects
it to the appropriate storage device, and resumes normal operations. Veritas Cluster Server can
temporarily move applications to a standby server when routine maintenance such as upgrades or
patches requires that the primary server be taken offline.

About SAN Volume Controller


IBM System Storage SAN Volume Controller (SVC) is a storage virtualization system designed to help
users reduce the costs and complexity associated with storage, and improve efficiency. SAN Volume
Controller is designed to pool storage volumes from IBM and non-IBM storage systems into reservoirs of capacity for centralized
management. SAN Volume Controller is also designed to hide the
boundaries among disk systems, which helps administrators to
focus on managing storage as a resource to meet business
requirements and not as a set of boxes.
SAN Volume Controller Highlights
SVC is designed to:


Combine storage capacity from multiple disk systems into a single reservoir of capacity that can be better managed as a business resource and not as separate boxes
Help increase storage utilization by providing host applications with more flexible access to capacity
Help improve productivity of storage administrators by enabling management of heterogeneous storage systems using a common interface
Support improved application availability by insulating host applications from changes to the physical storage infrastructure
Enable a tiered storage environment in which the cost of storage can be better matched to the value of data
Support advanced copy services from higher- to lower-cost devices and across storage systems from multiple vendors

Find SVC on the web at ibm.com/storage/svc.


Overview
This whitepaper illustrates the steps involved in installing and configuring Veritas Storage Foundation HA
and IBM SAN Volume Controller Metro Mirror and IBM SAN Volume Controller Global Mirror for a host
failover and disaster recovery readiness scenario. Once the setup is complete, it is ready for conducting
cluster failover and cluster failback across two cluster sites using SAN Volume Controller Metro Mirror,
SAN Volume Controller Global Mirror and VCS Global Cluster Option.
The host failover scenario simulates a fault in a node of a two-node VCS cluster at Site A. Once the DB2 /
Oracle application service group fails over to the other node, a fault is injected in that node, causing the
application to fail over to a node of a two-node cluster at Site B.
The disaster recovery readiness scenario simulates a disaster by introducing a fault in the nodes of the
cluster and the storage at Site A, causing the application to fail over to a node of a two-node cluster at Site B.
VCS configured with GCO brings up the DB2 / Oracle application global service group on that node at
Site B. Once Site A is restored and becomes available, the application is failed back to Site A. The VCS
IBM SVC Copy Services agent manages the necessary SAN Volume Controller-related procedures for
failover and failback. The Failover Scenarios section provides procedural details.
Two sets of volume groups are used for the configuration described above. One set is used for
configuring Metro Mirror and another set is used for Global Mirror. To test for DR readiness, the VCS
Firedrill is configured. A fire drill procedure verifies the fault-readiness of a disaster recovery
configuration. VCS Firedrill uses IBM FlashCopy; a third volume group set is used for FlashCopy.
The document provides instructions for installing and configuring a test setup and then provides
procedures for the scenarios, using the main.cf.oradb2 sample configuration.

Test System Configuration


A cluster configuration can be set up as shown in Figure 1. This configuration includes:

A cluster at Site A consisting of two AIX System p hosts configured as a 2-node VCS cluster, an
SVC cluster with 1-iogroup, and IBM System Storage DS8300 storage attached to the SAN
through the switch.
A cluster at Site B consisting of two AIX System p hosts configured as a 2-node VCS cluster, an
SVC cluster with 1 iogroup, and IBM System Storage DS8300 storage attached to the SAN
through the switch.


[Figure 1 diagram: Site A and Site B each contain application clients, a 2-node VCS cluster server, a 2-node SVC cluster with its console, a SAN switch, and disk systems. The sites are connected by a WAN for the application clients and by a long-distance connection between the switches.]
Figure 1. VCS cluster and IBM SVC Metro and Global Mirror for Disaster Recovery
The test configuration setup built for this whitepaper (see Figure 2) consists of four application hosts. The
configuration and hardware and software components are listed in Table 1 and Table 2. The hosts are
AIX IBM System p servers. The four servers are configured to form two 2-node Storage Foundation
cluster servers. Figure 2 shows the test system configuration.
The two clusters represent two sites, Site A and Site B. The cluster at Site A is the active cluster and the
cluster at Site B is the standby cluster. Table 5 shows the SAN Volume Controller virtual disk map.
In the test configuration there are only 2 ISLs shown between the remote and local cluster. That is
adequate for this demonstration but the design of a production configuration should ensure adequate
provisioning of intercluster bandwidth. More than 2 ISLs may be required and they may need to be
trunked.
In this setup both DB2 and Oracle applications are installed on the same cluster. There are three VCS
configuration files: The first configuration is for an Oracle application (main.cf.ora), the second
configuration is for a DB2 application (main.cf.Db2) and the third configuration is a combination of the first
and second and a firedrill resource (main.cf.oradb2). The scenario explained in the Overview uses the
third configuration.


[Figure 2 diagram: At Site A, an IBM DS8300 (storage images 75ABNY1 and 75ABNY2, managed through HMC1) provides the SVC Metro/Global Mirror mdisks along with the DB2, Oracle and other application data disks; SVC cluster 1 (SVC1 node 01 and SVC1 node 02, with its SVC Console) presents vdisk001-vdisk010 to the 2-node VCS cluster ClustsvcP (ClustsvcPNd1 and ClustsvcPNd2, IBM p650 servers running AIX 5.3 with Veritas SF HA/DR). At Site B, a Hitachi Thunder 9200 provides the SVC Metro/Global Mirror mdisks (mdisk01-mdisk06); SVC cluster 2 (SVC2 node 01 and SVC2 node 02, with its SVC Console) presents vdisk101-vdisk110 and the FlashCopy vdisks to the 2-node VCS cluster ClustsvcS. The SAN fabrics are built on SWITCH 042 and SWITCH 044, the two VCS clusters are linked by GCO over a WAN, and Windows clients access the applications.]
Figure 2. Test cluster configuration with SVC Metro and Global Mirror


Table 1 Hardware Configuration


Application Host VCS Cluster Servers

Cluster Sites:                 Site A (Primary)           Site B (Secondary)
VCS Cluster Names:             ClustsvcP                  ClustsvcS
System / Cluster Node Names:   Nd1, Nd2                   Nd1, Nd2
Number of Processors:
Processor Clock Speed:         1452 MHz                   1452 MHz
CPU Type:                      64-bit                     64-bit
Kernel Type:                   64-bit                     64-bit
LPAR Info:                     1 NULL                     1 NULL
Memory Size:                   4096 MB                    4096 MB
Good Memory Size:              4096 MB                    4096 MB
Platform Firmware level:       3F060109                   3F060109
Firmware Version:              IBM,RG060109_d79e15_r      IBM,RG060109_d79e15_r

SAN Volume Controller Cluster Configuration

Cluster Sites:                 Site A (Master)               Site B (Auxiliary)
SVC Cluster:                   svc002.abc.def.com            svc003.abc.def.com
SVC Cluster Names:             ibmsvcM                       ibmsvcDR
System / Cluster Node Names:   nodeM1-1, nodeM1-2            nodeDR2-1, nodeDR2-2
Config Node:                   nodeM1-1 (Yes), nodeM1-2 (No) nodeDR2-1 (No), nodeDR2-2 (Yes)
I/O group:                     io_grp0                       io_grp0
H/W Model:                     8F4
BIOS Version:                  APE137AUS-1.14
Code level:                    4.2.1.3 build 7.7.0802120000

Storage
Array Model:                   IBM DS8000 (2107 9A2)
Firmware Version:              6.1.3.20071026.1
Capacity:                      18.688 TB (128 x 146 GB)
HBA Ports:                     32

Array Model:                   HDS Thunder 9200
Firmware Version:              355L
Capacity:                      500 GB
HBA Ports:                     2

SAN
Switch Model:                  SilkWorm 200E
Firmware version:              v5.0.1b
Ports:                         16+16


Table 2 Software Configuration

Vendor                 Software                                                        Version
IBM                    AIX 5.3                                                         5300-06-04-0748
IBM                    DB2                                                             9.1
Oracle Corporation     Oracle                                                          10gR2
Symantec               Veritas Storage Foundation Enterprise                           5.0, 5.0MP1RP3
Symantec               Veritas High Availability 5.0MP1 Agent for DB2 by Symantec      5.0MP1 RP3
Symantec               Veritas High Availability 5.0MP1 Agent for Oracle by Symantec   5.0MP1 RP3
Symantec               Veritas Clustering Support for IBM SVC Copy Services            5.0MP1RP3

The following table lists the product documentation required for installing, configuring, and
troubleshooting the setup.

Table 3 Required Documents

Product                                    Use this Guide
IBM DS8000                                 http://www-03.ibm.com/systems/storage/disk/ds8000/index.html
IBM SAN Volume Controller                  http://www.ibm.com/storage/support/2145
Veritas Storage Foundation Enterprise HA   Veritas Storage Foundation Installation Guide (combined VM, FS, SFUA install)
Veritas Cluster Server                     Veritas Cluster Server Installation Guide; Veritas Cluster Server User's Guide
Veritas Volume Manager                     Veritas Volume Manager Administrator's Guide
Veritas Cluster Server Agents              Veritas Cluster Server Agent for Oracle Installation and Configuration Guide; Veritas Cluster Server Agent for DB2 Installation and Configuration Guide; Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide

Installing and Configuring SAN Volume Controller


Fabric Zoning
Follow the guidelines in chapter 3 of the Software Installation and Configuration Guide to configure the
fabric. Table 4 shows the switch port mappings and the zones created for this configuration.
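As an illustration, zones like those in Table 4 can be defined from the CLI of a Brocade SilkWorm switch roughly as follows. This is a sketch only: the WWPNs are placeholders, and the remaining zones (svc1hz, svc2dz, svc2hz) would be created the same way before being added to the configuration.
switch:admin> alicreate "svc1", "50:05:07:68:01:4x:xx:xx"
switch:admin> alicreate "DS8300", "50:05:07:63:0x:xx:xx:xx"
switch:admin> zonecreate "svc1dz", "svc1; DS8300"
switch:admin> cfgcreate "ibmsvc", "svc1dz"
switch:admin> cfgadd "ibmsvc", "svc1hz; svc2dz; svc2hz"
switch:admin> cfgenable "ibmsvc"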


Table 4 Switch Mapping

Switch Port Map
Switch 001: PNd1, nodeM1-1, nodeM1-1, nodeDR2-1, PNd2, nodeM1-1, nodeM1-1, nodeDR2-1, nodeDR2-1, HDS, DS8300, SNd1, nodeDR2-2, nodeDR2-2, ISL, ISL
Switch 002: PNd1, SNd1, nodeM1-2, nodeM1-2, PNd2, SNd2, nodeM1-2, nodeM1-2, nodeDR2-1, HDS, DS8300, SNd2, nodeDR2-2, nodeDR2-2, ISL, ISL

Zone configuration
config ibmsvc    svc1dz; svc1hz; svc2dz; svc2hz
zone ibmsvcM     svc1hz; svc1dz
zone ibmsvcDR    svc2hz; svc2dz
zone svc1dz      svc1; DS8300; ISL
zone svc1hz      svc10102; glax0401; glax0501; ISL
zone svc2dz      svc2; HITACHI; Hitachi9200; ISL
zone svc2hz      svc20304; glax0601; glax0701; ISL

SAN Volume Controller Software Installation


Refer to chapters 4, 5 and 6 of the Software Installation and Configuration Guide to install the SAN
Volume Controller cluster and SAN Volume Controller master console. From the SAN Volume Controller
GUI console or CLI, ensure FlashCopy, Metro Mirror and Global Mirror licenses are activated.
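For example, the enabled copy services features and the cluster code level can be checked from the CLI; this is a sketch only, and the exact fields reported vary with the SVC code level.
IBM_2145:ibmsvcM:admin>svcinfo lslicense
IBM_2145:ibmsvcM:admin>svcinfo lscluster -delim : ibmsvcM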

SAN Volume Controller Storage Configuration


This section assumes that the SAN Volume Controller clusters are up and running. You can log in to the
SVC CLI from any VCS cluster node, or you can connect to the SVC GUI console. This method uses the
SVC CLI to create and configure virtual disks from the Mdisks presented by SAN Volume Controller.
SVC discovers LUNs from the storage connected to it and presents them as Mdisks. In this setup the
storage arrays attached to the SAN are IBM System Storage DS8300 and HDS Thunder 9200. Storage
elements such as arrays, ranks, and extent pools are created as necessary to meet application needs.
Once SVC completes disk discovery, create Mdisk groups, then create virtual disks and map them to the
server hosts. The vdisks and host mapping required for this setup are shown below. Table 5 shows the
virtual disks configured for Metro and Global Mirror used in this configuration. The commands given below
provide an example of creating Mdisk groups, creating vdisks, and mapping vdisks to hosts.
Creating Managed Disk groups:
IBM_2145:ibmsvcDR:admin> svctask detectmdisk
IBM_2145:ibmsvcDR:admin> svcinfo lsmdiskcandidate
IBM_2145:ibmsvcDR:admin> svcinfo lsmdisk -delim : -filtervalue mode=unmanaged
IBM_2145:ibmsvcDR:admin> svctask mkmdiskgrp -name Global_dg1 -ext 128 -mdisk
mdisk4
IBM_2145:ibmsvcDR:admin> svctask addmdisk -mdisk mdisk5 Global_dg1
IBM_2145:ibmsvcDR:admin> svcinfo lsmdiskgrp
Creating vdisks:
IBM_2145:ibmsvcDR:admin> svctask mkvdisk -name gvdisk1 -iogrp io_grp0 -mdiskgrp
Global_dg1 -vtype striped -size 2 -unit gb
IBM_2145:ibmsvcDR:admin> svctask mkvdisk -name gvdisk2 -iogrp io_grp0 -mdiskgrp
Global_dg1 -vtype striped -size 2 -unit gb


IBM_2145:ibmsvcDR:admin> svcinfo lsvdisk


Map vdisks to hosts:
IBM_2145:ibmsvcDR:admin> svctask mkvdiskhostmap -host hostname1 -scsi 0 -force 1
IBM_2145:ibmsvcDR:admin> svctask mkvdiskhostmap -host hostname2 -scsi 0 -force 2
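These mappings assume that host objects for the VCS cluster nodes already exist on the SVC cluster. A minimal sketch of creating a host object and checking its mappings follows; the host name and WWPN are placeholders for the actual HBA ports in your environment.
IBM_2145:ibmsvcDR:admin> svctask mkhost -name hostname1 -hbawwpn 10000000C93Dxxxx
IBM_2145:ibmsvcDR:admin> svcinfo lshost hostname1
IBM_2145:ibmsvcDR:admin> svcinfo lshostvdiskmap -delim : hostname1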

Table 5 SAN Volume Controller Disk Configuration

Vdisks                      Master Cluster (Site A)     Auxiliary Cluster (Site B)
Vdisks for MM               mvdisk001-007               drvdisk001-007
Vdisks for GM               gmvdisk001-007              drGMvdisk001-007
Vdisks for FD                                           FD_drvdisk0001-007, FD_drGMvdisk0001-007

Mirror relationships
Master Vdisk                Auxiliary Vdisk             Relationship Name
mvdisk001-007               drvdisk001-007              svcmm001-007 (MM Relationship)
gmvdisk001-008              drGMvdisk0001-008           svcgm001-008 (GM Relationship)

FlashCopy (fire drill) mappings
FD Vdisk                    Auxiliary Vdisk             FC Mapping Name                     FlashCopy Consistency Group
FD_drvdisk0001-007          drdisk001-007               FD_mmappgrp1001-007 (MetroMirror)   FD_mmappgrp1cg
FD_drGMvdisk0001-007        drGMdisk001-007             FD_gmappgrp1001-007 (GlobalMirror)  FD_gmappgrp1cg

MetroMirror & GlobalMirror Consistency Groups
Name        State                     Copy Type   Master Cluster   Auxiliary Cluster   Primary   Relationship Count
svcgmMDR    Consistent synchronized   Global      ibmsvcM          ibmsvcDR            Master    7
svcmmMDR    Consistent synchronized   Metro       ibmsvcM          ibmsvcDR            Master    7

Cluster Partnership
                Cluster     Location   State              Bandwidth   Link Tolerance (s)   Inter-Cluster Delay Simulation (ms)   Intra-Cluster Delay Simulation (ms)
MetroMirror     ibmsvcDR    Remote     Fully Configured   500         0
GlobalMirror    ibmsvcDR    Remote     Fully Configured   500         300                  20                                    0

Abbreviations: MM = MetroMirror, GM = GlobalMirror, FC = Flash Copy, FD = Fire Drill, DR = Disaster Recovery

Setting up SAN Volume Controller Cluster Partnership


Log in to the SAN Volume Controller console from any VCS cluster node, or connect to the SVC
GUI console.
#ssh -l admin -i /.ssh/id_dsa svc003.abc.def.com


IBM_2145:ibmsvcDR:admin>
To establish a fully functional Metro Mirror partnership, issue these commands to both clusters. This step is a
prerequisite to creating Metro Mirror relationships between VDisks on the clusters.
IBM_2145:ibmsvcM:admin>svctask mkpartnership -bandwidth 500 ibmsvcDR
IBM_2145:ibmsvcDR:admin>svctask mkpartnership -bandwidth 500 ibmsvcM
To establish a fully functional Global Mirror partnership, issue these commands to both clusters. This step
is a prerequisite to creating Global Mirror relationships between VDisks on the clusters.
Get the cluster alias.
IBM_2145:ibmsvcDR:admin>svcinfo lscluster ibmsvc
Update intercluster delay for the Global Mirror configuration. Refer to the SAN Volume Controller guides
to configure these parameters.
-gmlinktolerance link_tolerance: The length of time, in seconds, for which an inadequate inter-cluster
link is tolerated for Global Mirror operation. Accepts values from 60 to 86400 seconds in steps of 10
seconds. The default is 300 seconds. The link tolerance can be disabled by entering the value 0 for this parameter.
-gminterdelaysimulation inter_cluster_delay_simulation: The inter-cluster delay simulation, which
simulates the Global Mirror round-trip delay between two clusters, in milliseconds. The default is 0; the valid range is
0 to 100 milliseconds.
-gmintradelaysimulation intra_cluster_delay_simulation: The intra-cluster delay simulation, which
simulates the Global Mirror round-trip delay within a cluster, in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
IBM_2145:ibmsvcDR:admin>svctask chcluster -alias 20064E07548 -name ibmsvc -gmlinktolerance
300 -gminterdelaysimulation 20 -gmintradelaysimulation 0

Create SVC Metro Mirror, SVC Global Mirror and FlashCopy Consistency Groups
Create Metro Mirror and Global Mirror Consistency groups by specifying a name and the remote SAN
Volume Controller cluster name as shown below. Make sure the two SVC clusters are up and in
communication throughout the create process. The new consistency group does not contain any
relationships and will be in an empty state. A consistency group is used to ensure that a number of
relationships are managed so that, in the event of a disconnection of the relationships, the data in all
relationships within the group is in a consistent state. This can be important in a database application
where data files and log files are held on separate VDisks, and consequently are being managed by
separate relationships. In the event of a disaster, the primary and secondary sites may become
disconnected. If the relationships associated with the VDisks are not in a consistency group, then as the
disconnection happens, and the Mirror relationships stop copying data from the primary to the secondary
site, there is no assurance that updates to the two separate secondary VDisks will stop in a consistent
manner.
IBM_2145:ibmsvcDR:admin>svctask mkrcconsistgrp -name svcmmMDR -cluster ibmsvcDR
IBM_2145:ibmsvcDR:admin>svctask mkrcconsistgrp -name svcgmMDR -cluster ibmsvcDR
The VCS Firedrill function utilizes SVC FlashCopy to create and maintain the snapshot of the application
resource. Therefore, create the FlashCopy consistency groups on the SVC cluster at Site B.
Ensure the SVC FlashCopy license is enabled.
IBM_2145:ibmsvcDR:admin>svctask chlicense -flash on


IBM_2145:ibmsvcDR:admin>svctask mkfcconsistgrp -name FD_mmappgrp1cg


IBM_2145:ibmsvcDR:admin>svctask mkfcconsistgrp -name FD_gmappgrp1cg
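To confirm that the new consistency groups exist and are initially empty, they can be listed from the same CLI session, for example:
IBM_2145:ibmsvcDR:admin>svcinfo lsrcconsistgrp -delim :
IBM_2145:ibmsvcDR:admin>svcinfo lsfcconsistgrp -delim :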

Create SVC Metro Mirror, SVC Global Mirror Relationship and FlashCopy
Mappings
Now create Metro Mirror and Global Mirror relationships using the following command. This relationship
persists until it is deleted. The auxiliary virtual disk must be identical in size to the master virtual disk. The
master and auxiliary cannot be in an existing relationship. Neither disk can be the target of a FlashCopy
mapping. Copy type defaults to metro if -global is not specified.
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 16 -aux 7 -sync -cluster ibmsvcDR -name svcmm001 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 1 -aux 50 -sync -cluster ibmsvcDR -name svcmm002 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 2 -aux 51 -sync -cluster ibmsvcDR -name svcmm003 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 3 -aux 52 -sync -cluster ibmsvcDR -name svcmm004 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 4 -aux 53 -sync -cluster ibmsvcDR -name svcmm005 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 5 -aux 54 -sync -cluster ibmsvcDR -name svcmm006 -consistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 6 -aux 55 -sync -cluster ibmsvcDR -name svcmm007 -consistgrp svcmmMDR

IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 7 -aux 56 -sync -cluster ibmsvcDR -name svcgm001 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 8 -aux 57 -sync -cluster ibmsvcDR -name svcgm002 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 9 -aux 58 -sync -cluster ibmsvcDR -name svcgm003 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 10 -aux 59 -sync -cluster ibmsvcDR -name svcgm004 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 11 -aux 60 -sync -cluster ibmsvcDR -name svcgm005 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 12 -aux 61 -sync -cluster ibmsvcDR -name svcgm006 -consistgrp svcgmMDR -global
IBM_2145:ibmsvcDR:admin>svctask mkrcrelationship -master 13 -aux 62 -sync -cluster ibmsvcDR -name svcgm007 -consistgrp svcgmMDR -global
Create FlashCopy mappings. The mkfcmap command creates a new FlashCopy mapping, which maps a
source virtual disk to a target virtual disk for subsequent copying.
svctask mkfcmap -source 1 -target 50 -consistgrp 1 -name FD_mmappgrp1001 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 2 -target 51 -consistgrp 1 -name FD_mmappgrp1002 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 3 -target 52 -consistgrp 1 -name FD_mmappgrp1003 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 4 -target 53 -consistgrp 1 -name FD_mmappgrp1004 -copyrate 100 -incremental -cleanrate 100 -iogrp 0


svctask mkfcmap -source 5 -target 54 -consistgrp 1 -name FD_mmappgrp1005 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 6 -target 55 -consistgrp 1 -name FD_mmappgrp1006 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 7 -target 56 -consistgrp 2 -name FD_gmappgrp1001 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 8 -target 57 -consistgrp 2 -name FD_gmappgrp1002 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 9 -target 58 -consistgrp 2 -name FD_gmappgrp1003 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 10 -target 59 -consistgrp 2 -name FD_gmappgrp1004 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 11 -target 60 -consistgrp 2 -name FD_gmappgrp1005 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 12 -target 61 -consistgrp 2 -name FD_gmappgrp1006 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
svctask mkfcmap -source 13 -target 62 -consistgrp 2 -name FD_gmappgrp1007 -copyrate 100 -incremental -cleanrate 100 -iogrp 0
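With the relationships and mappings defined, the Metro Mirror and Global Mirror consistency groups can be started so that the auxiliary VDisks synchronize, and their state can then be checked. The commands below are a sketch only; in this solution the VCS agent for IBM SVCCopyServices and the fire drill resource normally drive these copy services operations.
IBM_2145:ibmsvcDR:admin>svctask startrcconsistgrp svcmmMDR
IBM_2145:ibmsvcDR:admin>svctask startrcconsistgrp svcgmMDR
IBM_2145:ibmsvcDR:admin>svcinfo lsrcconsistgrp svcmmMDR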

Installing Veritas Storage Foundation


Symantec Product Licensing
The application hosts have AIX 5.3 installed with the fix requirements for DS8000 storage microcode
level, and the following ODM filesets.
devices.fcp.disk.ibm.rte 1.0.0.9 COMMITTED IBM FCP Disk Device
ibm2105.rte 32.6.100.29 COMMITTED IBM 2105 Disk Device
The installation procedure describes how to activate the key. The VRTSvlic package enables product
licensing. After VRTSvlic is installed, the following commands and their manual pages are available on
the system:
vxlicinst Installs a license key for a Symantec product
vxlicrep Displays currently installed licenses
vxlictest Retrieves features and their descriptions encoded in a license key
Make sure you have activated the Veritas Storage Foundation Enterprise HA/DR AIX,5.0 license key.
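For example, a key can be registered and then verified on each node as follows; the key string is a placeholder for the actual license key.
# vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
# vxlicrep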

Supported AIX Operating Systems


This release of Veritas Storage Foundation operates on AIX 5.2 and AIX 5.3 operating systems.
Product installation scripts verify required update levels. The installation process terminates if the
target systems do not meet maintenance level requirements.
For any Veritas cluster product, all nodes in the cluster must have the same operating system version
and update level.
The minimum system requirements for this release are:
AIX 5.2 ML6 (legacy) or above
AIX 5.3 TL4 with SP 4
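The operating system level on each node can be checked before running the installer; for example, on the AIX 5.3 test systems used in this configuration:
# oslevel -s
5300-06-04-0748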


Database Requirements
The following database versions are supported by the respective Veritas High Availability
5.0MP1 Agents for DB2 and Oracle:
DB2 8.1 ESE, DB2 8.2 ESE, DB2 8.2.2 ESE, DB2 9.1
Oracle9i, Oracle 10g R1, and Oracle 10g R2 (including 64-bit versions)

Disk Space
Use the Perform a Preinstallation Check (P) option of the product installer to determine whether there is
sufficient space.

Environment Variables
Most of the commands used in the installation are in the /sbin or /usr/sbin directory. However, there are
additional variables needed in order to use a Veritas Storage Foundation product after installation. Add
the following directories to your PATH environment variable:
If you are using Bourne or Korn shell (sh or ksh), use the commands:
$ PATH=$PATH:/usr/sbin:/opt/VRTSvxfs/sbin:/opt/VRTSob/bin:/opt/VRTSvcs/bin:/opt/VRTS/bin
$ MANPATH=/usr/share/man:/opt/VRTS/man:$MANPATH
$ export PATH MANPATH
If you are using a C shell (csh or tcsh), use the commands:
% set path = ( $path /usr/sbin /opt/VRTSvxfs/sbin /opt/VRTSvcs/bin /opt/VRTSob/bin /opt/VRTS/bin )
% setenv MANPATH /usr/share/man:/opt/VRTS/man:$MANPATH
Note: The nroff versions of the online manual pages are not readable using the man command if the
bos.txt.tfs fileset is not installed; however, the VRTSvxvm and VRTSvxfs packages install ASCII versions
in the /opt/VRTS/man/catman* directories that are readable without the bos.txt.tfs fileset.

Virtual IP Address
This configuration will need several IP addresses depending on the products you are enabling. Have at
least six virtual IPs allocated for the two clusters. The list below shows the virtual IPs required for this
configuration.

Purpose            ClustsvcP       ClustsvcS
DB2 failover       10.140.xx.01    10.140.xx.02
Oracle failover    10.140.xx.03    10.140.xx.04
GCO                10.140.xx.04    10.140.xx.05

Prerequisites for Remote and Cluster Installation


Establishing communication between nodes is required to install Veritas software from a remote system,
or to install and configure a cluster. The node from which the installation utility is run must have
permissions to run rsh (remote shell) or ssh (secure shell) utilities as root on all cluster nodes or remote
systems. Make sure that the hosts to be configured as cluster nodes have two or more NIC cards and
are connected for heartbeat links. See the Veritas Cluster Server Installation Guide for more details.
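As an illustration, passwordless ssh for root from the node running the installer to the other cluster nodes can be set up roughly as follows; the host name is from this test setup, so adjust it to your environment.
# ssh-keygen -t dsa
# cat /.ssh/id_dsa.pub | ssh root@ClustsvcPNd2 "cat >> /.ssh/authorized_keys"
# ssh root@ClustsvcPNd2 uname -a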


Mounting a Software Disk


You must have superuser (root) privileges to load the Veritas software.
To mount the Veritas software disk
1. Log in as superuser.
2. Place the Veritas software disk into a DVD drive connected to your system.
3. Determine the device access name of the DVD drive. The format for the device access name is
cdX, where X is the device number. After inserting the
disk, mount it by typing the following commands:
# mkdir -p /cdrom
# mount -V cdrfs -o ro /dev/cdX /cdrom

Installing Using the Veritas Product Installer


You can perform an upgrade to Storage Foundation using the Veritas product installer or product
installation script if you already have Storage Foundation installed. Go to the upgrade procedure
explained in the next section.
Note: Veritas products are installed under the /opt directory on the specified host systems. Ensure that
the directory /opt exists and has write permissions for root before starting an installation procedure.

The Veritas product installer is the recommended method to license and install the product. The
installer also enables you to configure the product, verify pre-installation requirements, and view
the product's description.
You can use the product installer to install Veritas Storage Foundation and Veritas Storage
Foundation enterprise HA.
At most points during an installation, you can type b (back) to return to a previous section of
the installation procedure. The back feature of the installation scripts is context-sensitive, so it
returns to the beginning of a grouped section of questions. If an installation procedure hangs,
use Control-C to stop and exit the program. There is a short delay before the script exits.
To install a Storage Foundation product, execute the following steps from one node in each cluster.
1. Make sure the disk is mounted. See Mounting a Software Disk.
2. To invoke the common installer, run the installer command on the disk as shown in this
example:
# cd /cdrom/disk_name
# ./installer -rsh ClustsvcPNd1 ClustsvcPNd2
3. Enter I to install a product and press Return to begin.
4. When the list of available products is displayed, select the product you want to install and enter
the corresponding number and press Return. The product installation begins automatically.
5. Enter the Storage Foundation Enterprise HA/DR product license information.

Disaster Recovery using Veritas Storage Foundation Enterprise HA and IBM SAN Volume Controller Metro Mirror & Global Mirror

18

Enter a product_name license key for ClustsvcPNd1: [?] XXXX-XXXX-XXXX-XXXXXXXX-XXXX-X


XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on ClustsvcPNd1
Do you want to enter another license key for ClustsvcPNd1? [y,n,q,?] (n)
Enter a product_name license key for ClustsvcPNd2: [?] XXXX-XXXX-XXXX-XXXXXXXX-XXXX-X
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on ClustsvcPNd2
Do you want to enter another license key for ClustsvcPNd2? [y,n,q,?] (n)
Enter n if you have no further license keys to add for a system.
You are then prompted to enter the keys for the next system.
Note: Each system requires a product license before installation. License keys for additional
product features should also be added at this time.
6. Choose to install all filesets.
SF can be installed without optional filesets to conserve disk space. Additional
filesets are typically installed to simplify future upgrades.
1) Required Veritas Storage Foundation filesets - 928 MB required
2) All Veritas Storage Foundation filesets - 1063 MB required
Select the filesets to be installed on all systems? [1-2,q,?] (2)
7. At the installed product list page, enter y or press Return to configure the Storage Foundation and
VCS products.
It is possible to install SF filesets without performing configuration.
It is optional to configure SF now. If you choose to configure SF later, you
can do so manually or run the installsf -configure command.
Are you ready to configure SF? [y,n,q] (y)
Do you want to configure VCS on these systems at this time? [y,n,q] (y) .
8. The installer will ask for details for configuring the VCS cluster for SF. Enter the unique cluster
name and Cluster ID number.
Enter the unique cluster name: [?] ClustsvcP
Enter the unique Cluster ID number between 0-65535: [b,?] 8

9. The installer discovers the NICs available on the first system and reports them:
Discovering NICs on ClustsvcPNd1 ...discovered en0 en1 en2

10. Enter private heartbeat NIC information for each host.


Enter the NIC for the first private heartbeat link on ClustsvcPNd1:[b,?] en1
Would you like to configure a second private heartbeat link?[y,n,q,b,?] (y) y
Enter the NIC for the second private heartbeat link on ClustsvcPNd1:[b,?]
en2
Would you like to configure a third private heartbeat link?[y,n,q,b,?] (n) n

Disaster Recovery using Veritas Storage Foundation Enterprise HA and IBM SAN Volume Controller Metro Mirror & Global Mirror

19

Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?]


(n) n
Are you using the same NICs for private heartbeat links on all systems?
[y,n,q,b,?] (y) y
Note: When answering y, be sure that the same NICs are available on each system; the installer
does not verify this.

Notice that in this example, en0 is not selected for use as a private heartbeat NIC because it is
already in use as the public network interface.
11.

A summary of the information you entered is given. When prompted, confirm that the
information is correct.
Is this information correct? [y,n,q] (y)

If the information is correct, press Return. If the information is not correct, enter n. The installer
prompts you to enter the information again.
12. When prompted to configure the product to use Veritas Security Services, enter y or n to
configure.
Note: Before configuring a cluster to operate using Veritas Security Services, another system must
already have Veritas Security Services installed and be operating as a Root Broker. Refer to the
Veritas Cluster Server Installation Guide for more information on configuring a VxSS Root Broker.
Would you like to configure product_name to use Veritas Security Services? [y,n,q]
(n) n

13. A message displays notifying you of the information required to add users. When prompted, set
the user name and /or password for the Administrator.
Do you want to set the username and/or password for the Admin user (default
username = 'admin', password='password')? [y,n,q] (n)

14. Enter n if you want to decline. If you enter y, you are prompted to change the password. You are
prompted to add another user to the cluster.
Do you want to add another user to the cluster? [y,n,q] (n)

Enter n if you want to decline, enter y if you want to add another user. You are prompted to
verify the user.
Is this information correct? [y,n,q] (y)

Enter y or n to verify if this information is correct.


15. You are prompted to configure the cluster management console. Enter y or n to configure the
cluster management console.
Do you want to configure the Cluster Management Console [y,n,q] (n) n

16. You are prompted to configure the cluster connector. Enter y or n to configure the cluster
connector.
Do you want to configure the cluster connector [y,n,q] (n)

17. When prompted to configure SMTP notification, enter n to decline SMTP notification.
Do you want to configure SMTP notification? [y,n,q] (n)

18. When prompted to configure SNMP notification, enter n to decline SNMP notification.
Do you want to configure SNMP notification? [y,n,q] (n)


19. When prompted to set up a default disk group for each system, enter n to decline setting up a
default disk group.
Do you want to set up a default disk group for each system?
[y,n,q,?] (n)
20. You are prompted to confirm the fully qualified hostname of system ClustsvcPNd1. Enter y to
accept ClustsvcPNd1.domain_name.
Is the fully qualified hostname of system "ClustSVCPNd1"
="ClustsvcPNd1.domain_name"? [y,n,q] (y)
21. You are prompted to confirm the fully qualified hostname of system ClustsvcPNd2. Enter y to
accept ClustsvcPNd2.domain_name.
Is the fully qualified hostname of system "ClustsvcNd2"
="clustsvcPNd2.domain_name"? [y,n,q] (y)
22. You are prompted to enable Storage Foundation Management Server Management.
Enable Storage Foundation Management Server Management? [y,n,q] (n)
23. Enter n to decline Storage Foundation Management Server Management. You are then prompted to
start Storage Foundation.
Do you want to start Veritas Storage Foundation processes now?
[y,n,q] (y)...Startup completed successfully on all systems
You declined to set up the name of the default disk group for
ClustsvcPNd1.
You declined to set up the name of the default disk group for
ClustsvcPNd2.
Installation log files, summary file, and response file are saved
at:
/opt/VRTS/install/logs/installsf-7ai12i
When installsf installs software, some software may be applied rather
than committed. It is the responsibility of the system administrator
to commit the software, which can be performed later with the -c
option of the installp command.
24. Proceed to the next section to upgrade Veritas Storage Foundation 5.0 to 5.0MP1RP3.

Upgrading Veritas Storage Foundation HA to 5.0MP1RP3


This release of Veritas products operates on AIX 5.2 and AIX 5.3 operating systems. The minimum
system requirements for this release are:
AIX 5.2 TL8
AIX 5.3 TL5 with SP 1
Product installation scripts verify the required update levels. The installation process terminates if the
target systems do not meet the maintenance level requirements.


You install upgrade packages by running the installmp installation script from the command line. The
installmp script is at the top level of the Maintenance Pack disk. The installmp script checks for any
installed Veritas package and replaces it with the updated package.
All Veritas product processes are stopped after running installmp. To ensure that processes restart
correctly, a manual restart or system reboot may be required. If you want to configure a product, run the
product installation scripts from the /opt/VRTS/install directory using the configure option.
See the individual product release notes for information on how to restart processes, or how to uninstall
a maintenance pack and return to the previous product level.
Execute the steps from one node in each cluster.
1. Make sure the disk is mounted. See Mounting a Software Disk.
2. To install the Storage Foundation software, you must invoke the installmp command from one of
your cluster nodes using the option that corresponds to your configuration:
3. To install on more than one system using remote shell (RSH) utilities, enter the following
command:
# ./installmp ClustsvcPNd1 ClustsvcPNd2 -rsh
4. After the initial system checks have completed successfully, press Enter to start the requirements
checks for the installation.
5. After the requirement checks have completed successfully, press Enter to begin installing the
packages.
6. To disable I/O fencing on each node in the two clusters, follow these steps:
#cd /etc/VRTSvcs/conf/config
#echo vxfen_mode=disabled > vxfenmode

7. To install the Volume Manager rolling patch RP3 on top of 5.0MP1, download the patch and follow
the steps given in the README document.
8. After the installation of the packages is complete, use the following command to reboot the
system:
# shutdown -r now
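After the reboot, the upgraded fileset levels and the cluster status can be confirmed on each node, for example:
# lslpp -l VRTSvxvm VRTSvxfs VRTSvcs
# /opt/VRTSvcs/bin/hastatus -sum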

Installing and Upgrading VCS agent for DB2


For complete details refer to the Veritas Cluster Server Agent for DB2 Installation and Configuration
Guide. You must install the DB2 agent on each node in the cluster. In global cluster environments, install
the agent on each node in each cluster. These instructions assume that you have already installed
Cluster Server. Follow the steps below to install the agent.
1. Make sure the disk is mounted. See Mounting a Software Disk.
2. Navigate to the location of the agent packages:
# cd /cdrom/cluster_server_agents/db2_agent/pkgs
3. Add the filesets for the software
# installp -ac -d VRTSvcsdb.rte.bff VRTSvcsdb
4. Navigate to the location of the 5.0MP1 agent packages:


# cd /cdrom/cluster_server_agents/db2_agent/patches
5. Add the filesets for the software
# installp -ac -d VRTSvcsdb.rte.bff VRTSvcsdb

Installing and Upgrading VCS agent for Oracle


For complete details refer to the Veritas Cluster Server Agent for Oracle Installation and Configuration
Guide. You must install the Oracle agent on each node in the cluster. In global cluster environments,
install the agent on each node in each cluster. These instructions assume that you have already installed
Cluster Server. Follow the steps below to install the agent.
1. Make sure the disk is mounted. See Mounting a Software Disk.
2. Navigate to the location of the agent packages:
# cd /cdrom/cluster_server_agents/oracle_agent/pkgs
3. Add the filesets for the software
# installp -ac -d VRTSvcsor.rte.bff VRTSvcsor
4. Navigate to the location of the 5.0MP1 agent packages:
# cd /cdrom/cluster_server_agents/oracle_agent/patches
5. Add the filesets for the software
# installp -ac -d VRTSvcsor.rte.bff VRTSvcsor

Installing VCS agent for IBM SVCCopyServices


For complete details refer to the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and
Configuration Guide. You must install the IBM SVC Copy Services agent on each node in the cluster. In
global cluster environments, install the agent on each node in each cluster. These instructions assume
that the Veritas Cluster Server is already installed. Follow the steps below to install the agent.
1. Make sure the disk is mounted. See Mounting a Software Disk.
2. Navigate to the location of the agent packages:
# cd /cdrom/aix/replication/svccopyservices_agent/version/pkgs
The variable version represents the version of the agent. We installed version 5.0.3.0.
3. Add the filesets for the software
# installp -ac -d VRTSvcssvc.rte.bff VRTSvcssvc
All of the required software components have now been installed. You should now be able to list the
filesets shown in Appendix C on each application host.
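A quick way to compare what is installed on each application host against the listing in Appendix C is, for example:
# lslpp -L | grep -i vrts
# lslpp -L VRTSvcsdb VRTSvcsor VRTSvcssvc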

Installing and Configuring DB2, Oracle


This step involves
Installation of DB2 and Oracle software
Creation of a DB2 instance
Creation of Database
Install DB2 on all the nodes of the clusters in Site A and Site B. Install Oracle on any one node in Site A.
Make sure that the installation prerequisites are met and are identical on all nodes, especially the user and
group IDs, passwords, owner and group permissions, and the listener port ID.


Refer to the appropriate sections in Appendix B for instructions to set up the database.
In this configuration a database representing the TPC-C schema is built. A database workload application is
used to populate and simulate the TPC-C workload. You will need a workload application to exercise the
database load.
While configuring VxVM, make sure to enable the mode for SCSI-3 disk discovery. The instructions to
enable it are included in Appendix B.

Configuring Applications for Disaster Recovery


Having installed and configured the base system software and applications, we are now ready to
configure the required applications for High Availability and Disaster Recovery. Most clustered
applications can be adapted to a disaster recovery environment by:

Setting up SVC Cluster partnership


Configuring SVC Metro Mirror and Global Mirror relationship
Setting up VCS Global cluster
Configuring VCS Application failover service group
Adding IBM SVC Copy Services resource to the service group
Setting up Firedrill

To set up this test configuration quickly, follow the Quick Setup section. Alternatively, follow the General
Configuration Steps and refer to the documents mentioned in those sections for detailed configuration
steps.

Quick Setup
This section assumes the following tasks have been completed:
The hardware and software configuration is already set up; the SVC and VCS clusters are up and
running.
The SVC and VCS objects in Appendix B are created.
There are three configuration samples in Appendix A: one for Oracle Application resource failover, one
for DB2 Application resource failover, and one for an Application group failover containing both Oracle
and DB2 application resources and a fire drill resource on the secondary cluster.
This procedure lists the steps to stop the VCS cluster server on all the nodes in both clusters, copy the sample
VCS configuration files, select any one configuration (preferably main.cf.oradb2), apply the required
modifications to the configuration file, verify it, and then start the VCS clusters and bring the Application
resource online on one node in the Primary cluster.
1. Make sure all of the objects mentioned in the VCS configuration file are created and available.
2. Halt the cluster server from any node in the clusters in Site A and Site B.
#/opt/VRTSvcs/bin/hastop -all
3. Copy the VCS configuration files (main.cf) in Appendix A to files in the
/etc/VRTSvcs/conf/config directory, as shown below.
On cluster nodes ClustsvcPNd1, ClustsvcPNd2 in Site A as:
main.cf.db2.siteA, main.cf.ora.siteA, main.cf.oradb2.siteA
On cluster nodes ClustsvcSNd1, ClustsvcSNd2 in Site B as:

main.cf.db2.siteB, main.cf.ora.siteB, main.cf.oradb2.siteB


4. Modify the values of hostnames, IP addresses, mount points, volumes and disk group resources,
cluster names, passwords, etc., to match your configuration.
5. Copy the VCS agent type definition files, if they are not present in the destination directory.
#cp /etc/VRTSagents/ha/conf/Oracle/OracleTypes.cf /etc/VRTSvcs/conf/config/
#cp /etc/VRTSagents/ha/conf/Db2udb/Db2udbTypes.cf /etc/VRTSvcs/conf/config/
#cp /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf /etc/VRTSvcs/conf/config/
6. Depending on which application you want to test, make a copy of the configuration file as shown
below.
For example, for main.cf.oradb2, on Site A cluster node 1 (ClustsvcPNd1):
#cd /etc/VRTSvcs/conf/config
#cp main.cf.oradb2.siteA main.cf
#rcp main.cf ClustsvcPNd2:/etc/VRTSvcs/conf/config/main.cf
On Site B cluster node 1 (ClustsvcSNd1)
#cd /etc/VRTSvcs/conf/config
#cp main.cf.oradb2.siteB main.cf
#rcp main.cf ClustsvcSNd2:/etc/VRTSvcs/conf/config/main.cf
7. Verify that the main.cf does not have any errors and fix it if there are any issues.
#/opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config
If there are no errors, the command returns with exit code 0.
8. Start the cluster on each node in the clusters in Site A and Site B.
#/opt/VRTSvcs/bin/hastart
9. Start the Cluster Manager from any node in the Site A cluster. Log in to one of the nodes as
admin/password.
#export DISPLAY=xhost:xx.0
#/opt/VRTSvcs/bin/hagui &
10. On ClustsvcPNd1, log in to the Cluster Manager and bring the app_grp1 service group online if it is not
already online (an equivalent command-line sketch follows this list).
11. You are now ready to manage the clusters from the Cluster Manager GUI. To test HA and DR
scenarios, proceed to the Failover Scenarios section and follow the steps.
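As a command-line alternative to steps 9 and 10, the service group can also be brought online and checked
with hagrp and hastatus; this is only a sketch using the group and node names from this test setup:
#/opt/VRTSvcs/bin/hagrp -online app_grp1 -sys ClustsvcPNd1
#/opt/VRTSvcs/bin/hastatus -sum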

General Configuration Steps


In this section the six steps mentioned above are explained in detail. Refer to the documents listed in
Table 3 for additional details of the configuration procedures. You will need the following guides:
Veritas Cluster Server User's Guide
Veritas Cluster Server Bundled Agents Reference Guide
Veritas Cluster Server Agent for DB2 Installation and Configuration Guide
Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide

Setting up SVC Cluster Partnership


Refer to the earlier section Setting up SAN Volume Controller Cluster Partnership.
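As a reminder of what that section sets up, a minimal sketch from the SVC CLI is shown below; the cluster
names ibmsvcM and ibmsvcDR are the ones used in this test setup, the 50 MBps bandwidth value is only a
placeholder, and the equivalent mkpartnership command must also be issued on the remote cluster:
#IBM_2145:ibmsvcM:admin>svctask mkpartnership -bandwidth 50 ibmsvcDR
#IBM_2145:ibmsvcM:admin>svcinfo lscluster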

Configuring SVC Metro / Global Mirror, FlashCopy Relationship


Refer to the sections Create SVC Metro Mirror, SVC Global Mirror and FlashCopy Consistency Groups and
Create SVC Metro Mirror, SVC Global Mirror Relationship and FlashCopy Mappings.
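For orientation, a minimal Global Mirror sketch from the SVC CLI follows; the group names svcgmMDR and
FD_gmappgrp1cg are the ones used later in this paper, while the vdisk names appvd0, appvd0_aux and
appvd0_fc are placeholders only:
#IBM_2145:ibmsvcM:admin>svctask mkrcconsistgrp -cluster ibmsvcDR -name svcgmMDR
#IBM_2145:ibmsvcM:admin>svctask mkrcrelationship -master appvd0 -aux appvd0_aux -cluster ibmsvcDR -consistgrp svcgmMDR -global
#IBM_2145:ibmsvcDR:admin>svctask mkfcconsistgrp -name FD_gmappgrp1cg
#IBM_2145:ibmsvcDR:admin>svctask mkfcmap -source appvd0_aux -target appvd0_fc -consistgrp FD_gmappgrp1cg
One relationship and one FlashCopy mapping are created per application vdisk; for Metro Mirror, omit the
-global option and use the svcmmMDR and FD_mmappgrp1cg names instead.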

Setting up VCS Global Cluster


Linking clusters

Refer to the section Linking clusters in the Veritas Cluster Server User's Guide. The Remote Cluster
Configuration wizard provides an easy interface to link clusters. Before linking clusters, verify that the virtual IP
address for the ClusterAddress attribute of each cluster is set. Use the same IP address as the one
assigned to the IP resource in the ClusterService group. Have the following information ready: the active
host name or IP address of each cluster in the global configuration and of the cluster being added to the
configuration, and the administrator login name and password for each cluster in the configuration.
Invoke the Cluster Manager (Java console) from a node in the cluster.
#export DISPLAY=xhost:xx.0
#/opt/VRTSvcs/bin/hagui &
From the Cluster Manager, click Edit > Add/Delete Remote Cluster and follow the instructions in the GUI to
complete adding the remote cluster.
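Once the clusters are linked, the link can be sanity-checked from the command line; a minimal sketch,
assuming the default VCS paths used elsewhere in this paper:
#/opt/VRTSvcs/bin/haclus -list
#/opt/VRTSvcs/bin/hastatus -sum
haclus -list shows the local and remote clusters, and in a global cluster hastatus -sum also reports the state
of the remote cluster and its global groups.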

Configuring Global Cluster


From any node in each of the clusters in Site A and Site B, run the GCO Configuration wizard to create or update
the ClusterService group. The wizard verifies your configuration and validates it for a global cluster setup.
#/opt/VRTSvcs/bin/gcoconfig
The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used
for the global cluster.
Specify the name of the device and press Enter. If you do not have NIC resources in your configuration,
the wizard asks you whether the specified NIC will be the public NIC used by all systems. Enter y if it is
the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on
all systems.
Enter the virtual IP address to be used for the global cluster, which you have already identified. If you do not have
IP resources in your configuration, the wizard prompts you for the netmask associated with the virtual IP.
The wizard detects the netmask; you can accept the suggested value or enter another value.
The wizard starts running commands to create or update the ClusterService group. Various messages
indicate the status of these commands. After running these commands, the wizard brings the
ClusterService group online.
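After the wizard completes, the state of the ClusterService group can be confirmed from the command line; a
quick check in the command style used in this paper:
#/opt/VRTSvcs/bin/hagrp -state ClusterService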

Configuring VCS Application Service Group


The DB2 and Oracle VCS service groups can be set up once the VCS agents have been installed. In this
setup both applications (DB2 and Oracle) are installed on the same cluster. Two separate VCS
configuration files (main.cf) are created, one for each application. The failover scenarios are performed for one
configuration at a time. As an alternative configuration you can include both applications in a single
configuration.
Figure 3 shows the dependency graph of the VCS resources within each service group. There are two
service groups: the Application service group and the ClusterService group. The Application group is a
global service group. The DiskGroup resources depend on the SVC Copy Services resource. The
complete listing of the configuration files (main.cf) is provided in Appendix A.
Refer to the Veritas Cluster Server Agent for DB2 Installation and Configuration Guide and the Veritas
Cluster Server Agent for Oracle Installation and Configuration Guide for adding and configuring the DB2
and Oracle resource types.
Configure the Application service group on the primary and secondary clusters. This involves the following
steps.

1. Start Cluster Manager and log on to the cluster.


#/opt/VRTSvcs/bin/hagui &
2. If the agent resource type (Oracle / DB2 / SVCCopyServices) is not added to your
configuration, add it. From the Cluster Manager File menu, choose Import Types and select the
appropriate types file for the agent from the following paths:
/etc/VRTSagents/ha/conf/Db2udb/Db2udbTypes.cf,
/etc/VRTSagents/ha/conf/Oracle/OracleTypes.cf, /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf
3. Click Import.
4. Save the configuration.
5. Create the service groups. In this case you can create the app_grp1 service group. Refer to
chapter 4 of the guides mentioned above to create the service group.
6. Add the resources mentioned in the app_grp1 service group in main.cf.oradb2 for the
respective cluster.
7. Configure the resources as mentioned in main.cf by changing the values to match your installation.
8. Configure the service group as a global group using the Global Group Configuration Wizard. See
the Veritas Cluster Server User's Guide for more information.

Figure 3. VCS dependency tree for App_grp1 Application service group resources

Figure 4. VCS dependency tree for FD_App_grp1 firedrill service group resources

Adding the VCS SVC CopyServices Resource


Refer to chapter 3 of the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and
Configuration Guide for detailed configuration of the SVCCopyServices resource.
The VCS enterprise agent for IBM SVCCopyServices manages replication relationships and consistency
groups that are defined on SVC clusters. An SVC cluster brings storage devices together in a virtual pool
so that all storage appears as one logical device that can be centrally managed, with capacity allocated as needed.

Before you configure the agent for SVC CopyServices


Before configuring the agent, review the following information:
Set up the SSH identity file on the VCS hosts prior to configuring the service group. If required,
generate a public and private key pair using the ssh-keygen utility and copy the public key to the
SVC cluster (a command sketch follows this list).
Review the configuration concepts, which describe the agent's type definition and attributes.
Verify that you have installed the agent on all nodes in the cluster.
Verify the hardware setup for the agent.
Make sure that the cluster has an effective heartbeat mechanism in place.
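A minimal sketch of the SSH key setup mentioned in the first item, assuming the keys are generated for the
root user and that admin is the SVC user name used in this configuration (the /.ssh/id_dsa path matches the
SSHPathToIDFile value in the main.cf examples):
#ssh-keygen -t dsa -f /.ssh/id_dsa
Upload the resulting /.ssh/id_dsa.pub public key to the SVC cluster for the admin user as described in the
SVC documentation, then verify the connection:
#ssh -i /.ssh/id_dsa admin@10.140.xx.xxx svcinfo lscluster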

Adding the agent resource type

Perform these steps on every node in the cluster. Adding and configuring the agent manually in a
global cluster involves the following tasks. The next section describes the details of the resource type
definition and attributes.
1. Start Cluster Manager and log on to the cluster.
#export DISPLAY=xhost:xx.0
#/opt/VRTSvcs/bin/hagui &
2. If the agent resource type is not added to your configuration, add it. From the Cluster Manager
File menu, choose Import Types and select
/etc/VRTSvcs/conf/SVCCopyServicesTypes.cf
3. Click Import.
4. Save the configuration.
Now you can configure the SVC Copy Services resource by:

Creating service groups for SVC Copy Services, which you have already done in the previous section
by creating the app_grp1 service group.
Adding a resource of type SVCCopyServices at the bottom of the service group. See
main.cf.oradb2 as an example. Whether you add the resource using the GUI or the command
line, follow the resource names from the example and create the app_svccp resource:
#hares -add app_svccp SVCCopyServices app_grp1

Configuring the attributes of the SVC Copy Services resource. In this example, add the
GroupName and SVCClusterIP attribute values (a verification sketch follows this list):
#hares -modify app_svccp GroupName svcgmMDR
#hares -modify app_svccp SVCClusterIP 10.140.XX.XXX
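As a quick verification sketch, the resource attributes can then be displayed and the configuration saved from
the command line:
#/opt/VRTSvcs/bin/hares -display app_svccp
#/opt/VRTSvcs/bin/haconf -dump -makero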

SVCCopyServices resource type definition


The IBM SVCCopyServices agent is represented by the SVCCopyServices resource type in VCS.
type SVCCopyServices (
str GroupName
int IsConsistencyGroup = 1
str SSHBinary = "/usr/bin/ssh"
str SSHPathToIDFile
str SVCClusterIP
str SVCUserName = admin
int StopTakeover = 0
int DisconnectTakeover = 0
static str ArgList[] = {GroupName, IsConsistencyGroup, SSHBinary,
SSHPathToIDFile, SVCClusterIP, SVCUserName, StopTakeover,
DisconnectTakeover}
)

Attribute definitions for the SVCCopyServices agent


Review the description of the agent attributes.

Required attributes
You must assign values to required attributes.
GroupName (string-scalar)
Name of the replication relationship or consistency group that is managed by the agent.

IsConsistencyGroup (integer-scalar)
Indicates whether the value specified in the GroupName attribute is the name of a single replication
relationship or of a consistency group consisting of several replication relationships. The attribute value is
either 0 or 1. Default is 1.

SSHBinary (string-scalar)
Contains the absolute path to the SSH binary. SSH is the mode of communication with the SVC cluster that
is connected to the node. Default is "/usr/bin/ssh".

SSHPathToIDFile (string-scalar)
Contains the absolute path to the identity file used for authenticating the host with the SVC cluster. The
corresponding public key must be uploaded to the SVC cluster so that the SVC cluster can correctly
authenticate the host.

SVCClusterIP (string-scalar)
The IP address of the SVC cluster in dot notation. The agent uses this IP address to communicate with the
SVC cluster.

SVCUserName (string-scalar)
User name that authenticates the SSH connection with the SVC cluster. Default is admin.

StopTakeover (integer-scalar)
Determines whether the agent makes read-write access available to the host when the replication is in a
stopped state (i.e. consistent_stopped). The status of the replication goes into a stopped state when the user
issues the stoprcrelationship or the stoprcconsistgrp command; thus, no replication occurs between the
primary and secondary SVC clusters. The attribute value is either 0 or 1. Default value is 0. If it is set to 1,
there is a possibility of data loss if, after the replication was stopped, the application continued to write data
on the Primary; when the agent enables read/write access on the secondary SVC cluster, the secondary
SVC cluster does not have up-to-date data on it. The possible stopped states are inconsistent_stopped and
consistent_stopped. When the state of the replication is consistent_stopped and StopTakeover = 1, the agent
enables read-write access for the SVC cluster. When the state of the replication is inconsistent_stopped, the
agent does not enable read-write access for the SVC cluster.

DisconnectTakeover (integer-scalar)
Determines whether the agent makes read-write access available to the host when the replication is in a
disconnected state (i.e. consistent_disconnected). The status of the replication goes into a disconnected state
when the primary and secondary SVC clusters lose communication with each other; thus, no replication
occurs between the primary and secondary SVC clusters. The attribute value is either 0 or 1. Default is 0.
The possible disconnected states are idling_disconnected, inconsistent_disconnected and
consistent_disconnected. When the state of the replication is consistent_disconnected and
DisconnectTakeover = 1, the agent enables read/write access for the SVC cluster. When the state of the
replication is idling_disconnected, the agent does not enable read-write access for the SVC cluster.

A resource of type SVCCopyServices may be configured as follows in main.cf. This example is a copy of
the one in main.cf.oradb2 given in Appendix A:
SVCCopyServices app_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcgmMDR
SVCClusterIP = "10.140.XX.XXX"
DisconnectTakeover = 1
)

Setting up Firedrill
Refer to Chapter 5 in the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and
Configuration Guide for the detailed configuration procedure of the fire drill resource.

About Firedrill Resource


A fire drill procedure verifies the fault-readiness of a disaster recovery configuration. This procedure is
done without stopping the application at the primary site and without disrupting user access.
A fire drill is performed at the secondary site using a special service group for fire drills. The fire drill
service group is identical to the application service group, but uses a fire drill resource in place of the
replication agent resource. The fire drill service group uses a copy of the data that is used by the
application service group. In clusters employing IBM SVCCopyServices, the SVCCopyServicesSnap
resource manages the replication relationship during a fire drill.
Bringing the Firedrill service group online demonstrates the ability of the application service group to
come online at the remote site when a failover occurs. The SVCCopyServicesSnap agent supports fire
drills for storage devices that are managed using Veritas Volume Manager 5.0 MP1 RP2, which is a
component of Veritas Storage Foundation.
The SVCCopyServicesSnap agent is the fire drill agent for IBM SVCCopyServices. It is included in the
VCS Agent for IBM SVCCopyServices agent package. The agent manages the replication relationship
between the source and target arrays when running a fire drill. Configure the SVCCopyServicesSnap
resource in the fire drill service group, in place of the SVCCopyServices resource. VCS supports the
Gold, Silver and Bronze fire drill configurations for the agent. Refer to the guide for more details.

SVCCopyServicesSnap resource type definition


The IBM SVCCopyServicesSnap agent is represented by the SVCCopyServicesSnap resource type in VCS.
type SVCCopyServicesSnap (
static int MonitorInterval = 300
static int OpenTimeout = 180
static int NumThreads = 1
static int OfflineMonitorInterval = 0
static int OnlineTimeout = 6000
static int RestartLimit = 1
static str ArgList[] = { TargetResName, MountSnapshot, UseSnapshot,
RequireSnapshot, FCMapGroupName }
str TargetResName
int MountSnapshot
int UseSnapshot
int RequireSnapshot
str FCMapGroupName
temp str Responsibility
)

Attribute definitions for the SVCCopyServicesSnap agent


Review the description of the agent attributes.

Required attributes
You must assign values to required attributes.
TargetResName (string-scalar)
Name of the resource managing the LUNs that you want to take a snapshot of. Set this attribute to the name
of the SVCCopyServices resource if you want to take a snapshot of replicated data. Set this attribute to the
name of the DiskGroup resource if the data is not replicated. For example, in a typical Oracle setup, you
might replicate data files and redo logs, but you may choose to avoid replicating temporary tablespaces. The
temporary tablespace must still exist at the DR site and may be part of its own disk group.

UseSnapshot (integer-scalar)
Specifies whether the SVCCopyServicesSnap resource takes a local snapshot of the target array. Set this
attribute to 1 for Gold and Silver configurations. For Bronze, set this attribute to 0.

RequireSnapshot (integer-scalar)
Specifies whether the SVCCopyServicesSnap resource must take a snapshot before coming online. Set this
attribute to 1 if you want the resource to come online only after it succeeds in taking a snapshot. Set this
attribute to 0 if you want the resource to come online even if it fails to take a snapshot. Setting this attribute
to 0 creates the Bronze configuration. Note: Set this attribute to 1 only if UseSnapshot is set to 1.

MountSnapshot (integer-scalar)
Specifies whether the resource uses the snapshot to bring the service group online. Set this attribute to 1 for
the Gold configuration. For Silver and Bronze configurations, set the attribute to 0. Note: Set this attribute to
1 only if UseSnapshot is set to 1.

Responsibility (temporary string)
Do not modify. For internal use only. Used by the agent to keep track of desynchronizing snapshots.

FCMapGroupName (string-scalar)
Name of the FlashCopy mapping or FlashCopy consistency group. If the target SVCCopyServices resource
contains a consistency group, set FCMapGroupName to a FlashCopy consistency group. If the target
SVCCopyServices resource contains a relationship, set FCMapGroupName to a FlashCopy mapping. This
attribute is optional for Bronze configurations.

A resource of type SVCCopyServicesSnap may be configured as follows in main.cf. Note that this example
is a Gold configuration (MountSnapshot, UseSnapshot and RequireSnapshot are all set to 1):

SVCCopyServicesSnap FD_app_svccpsnap (
TargetResName = app_svccp
MountSnapshot = 1
UseSnapshot = 1
RequireSnapshot = 1
FCMapGroupName = FD_gmappgrp1cg
)

Configure the fire drill service group


On one node of the secondary VCS cluster:
1. In Cluster Explorer, click the Service Group tab in the left pane.
2. Click the fire drill service group in the left pane and click the Resources tab in the right pane.
3. Right-click the SVCCopyServices resource and click Delete.
4. Add a resource of type SVCCopyServicesSnap and configure its attributes.
5. Right-click the resource to be edited and click View > Properties View. If a resource to be
edited does not appear in the pane, click Show All Attributes.
6. Edit attributes to reflect the configuration at the remote site. For example, change the Mount
resources so that they point to the volumes that are used in the fire drill service group.

Enable the FireDrill attribute


1. In Cluster Explorer, click the Types tab in the left pane, right-click the type to be edited, and
click View > Properties View.
2. Click Show All Attributes.
3. Double click FireDrill.
4. In the Edit Attribute dialog box, enable FireDrill as required, and click OK.
5. Repeat the process of enabling the FireDrill attribute for all required resource types.
6. Bring the fire drill service group online while the application is running at the primary site. This action
validates your disaster recovery configuration: it verifies that the application service group can fail over to
the secondary site in the event of a failure (disaster) at the primary site.

Failover Scenarios
This section describes the procedures for performing the failover scenarios. The setup contains one service
group containing Oracle and DB2 resource types. The example uses an SVC Global Mirror setup.
To switch between the SVC Metro Mirror and SVC Global Mirror configurations, set the SVCCopyServices
GroupName and SVCCopyServicesSnap FCMapGroupName attributes as follows:
For the SVC Metro Mirror configuration the SVCCopyservices GroupName attribute is svcmmMDR.
For the SVC Global Mirror configuration the SVCCopyservices GroupName attribute is svcgmMDR.
For the SVC Metro Mirror configuration the SVCCopyServicesSnap FCMapGroupName attribute is
FD_mmappgrp1cg.
For the SVC Global Mirror configuration the SVCCopyServicesSnap FCMapGroupName attribute is
FD_gmappgrp1cg.
Make sure the switch zoning configuration requirements are met for Global Mirror.

Before you start the scenarios, make sure that both clusters in Site A and Site B are up and running.
Log in to a Site A and a Site B cluster node and issue the following command:
#/opt/VRTSvcs/bin/hastatus
Start the Cluster Manager GUI:
#export DISPLAY=xhost:xx.0
#/opt/VRTSvcs/bin/hagui &
Make sure that the ClusterService group is online on a node at each site and the app_grp1 service
group is online on a cluster node at Site A.
While the application is running, from the SVC console ensure the Global Mirror consistency group is in
the consistent_synchronized state and the Primary is the Master Cluster (ibmsvcM), as shown below.
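A sketch of that check from the SVC CLI, using the consistency group name from this configuration:
#IBM_2145:ibmsvcM:admin>svcinfo lsrcconsistgrp svcgmMDR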

Application Host Failover


In this scenario, a node in the cluster at Site A where the application is online is lost. The application fails
over to the second node in the cluster. Next, that node is also lost; since all nodes in the cluster are down,
the application fails over to a cluster node at Site B.
To test the host failure across nodes and clusters, perform the following steps (a monitoring sketch follows
the procedure):
1. Disable a host port on the switch for a cluster node, say ClustsvcPNd1 at Site A, where the
application is online. This action introduces a fault. The service group fails over to the second
node, i.e. ClustsvcPNd2, of the cluster at Site A. Wait till the service group comes online on
ClustsvcPNd2 and check that the DB application data is consistent. While the application is running,
from the SVC console ensure the Global Mirror consistency group is in the consistent_synchronized
state and the Primary is the Master Cluster (ibmsvcM).
#IBM_2145:ibmsvcM:admin>svcinfo lsrcconsistgrp
2. Offline the Firedrill service group at Site B if it is online.
3. Disable the host port on the switch of cluster node ClustsvcPNd2
4. A cluster-down alert appears and gives the administrator the opportunity to fail over the service group
manually to one of the cluster nodes at Site B; choose, say, ClustsvcSNd1.
5. Wait till the service group comes online on ClustsvcSNd1, check that the DB application data is
consistent. While the Application is running, from the SVC console ensure the Global Mirror
consistency group is in consistent_synchronized state and the Primary is the Master Cluster (
ibmsvcM ).
6. Follow step 1 to failover the application to ClustsvcSNd2 and verify the status.
7. Enable the switch ports of the two cluster nodes at Site A; ensure the cluster comes up and the
service group is offline.
8. From any node, switch the service group to its original host, i.e. ClustsvcPNd1. In the Service
Groups tab of the Cluster Manager configuration tree, right-click the service group, click Switch
To, and click ClustsvcPNd1, on which the service group was originally online.
9. The service group comes online on ClustsvcPNd1. Check from the SVC console that the Master
cluster (ibmsvcM) becomes the Primary again. Wait till the service group comes online on
ClustsvcPNd1 and check that the DB application data is consistent.
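Throughout these steps, the failover progress can be watched from any surviving cluster node; a minimal
monitoring sketch in the command style used above:
#/opt/VRTSvcs/bin/hastatus -sum
#/opt/VRTSvcs/bin/hagrp -state app_grp1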

Disaster Recovery in a Global Cluster Configuration


Test cluster failover in case of a disaster, and failback after restoration of the cluster at the disaster site
(Site A). In this case, simulate a disaster by introducing a fault to all nodes in the cluster and the storage at
Site A simultaneously.

To test the cluster failover and failback, perform the following steps:
1. Make sure the application is up and running on one of the cluster nodes, say ClustsvcPNd1, in
Site A.
2. Offline the Firedrill service group at Site B if it is online.
3. Disable the host ports and storage switch ports of cluster site A.
4. A cluster-down alert appears and gives the administrator the opportunity to fail over the service group
manually to one of the cluster nodes at Site B; choose, say, ClustsvcSNd1. The SVC Global Mirror
consistency group lists Primary as the Auxiliary Cluster (ibmsvcDR). You can also check the
status from the VCS log at /var/VRTSvcs/log/engine_A.log.
5. Enable the disabled switch ports of cluster Site A. Ensure the cluster comes up and the
ClusterService group goes online. The application service group is offline.
6. Manually resynchronize the data between the primary and secondary sites (a command sketch
follows this procedure). If the current primary is retained as the primary for the replication relationship
and the replication is not in a stopped state, stop the replication manually:
Use the stoprcrelationship or the stoprcconsistgrp command, without the -access option.
Use the startrcrelationship or the startrcconsistgrp command with the -force option on the
SVC cluster that the user wants to retain as primary.
Alternatively, use the update action entry point with no arguments specified.
If you change the primary for the replication relationship, stop the replication manually (if it is not
stopped already):
Enable read/write access on both SVC clusters by using the stoprcrelationship or the
stoprcconsistgrp command with the -access option.
Use the startrcrelationship or the startrcconsistgrp command with both the -force and -primary options.
Alternatively, use the update action entry point with exactly one argument, which is the SVC cluster
that the user retains as primary.
7. After the resynchronization is complete, switch the application back to the cluster at Site A.
8. The service group comes online on ClustsvcPNd1. Check from the SVC console that the Master
cluster (ibmsvcM) becomes the Primary again. Wait till the service group comes online on
ClustsvcPNd1 and check that the DB application data is consistent.
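A sketch of the resynchronization in step 6 from the SVC CLI, for the case where the original primary
(ibmsvcM) is retained and the consistency group svcgmMDR is used:
#IBM_2145:ibmsvcM:admin>svctask stoprcconsistgrp svcgmMDR
#IBM_2145:ibmsvcM:admin>svctask startrcconsistgrp -force svcgmMDR
#IBM_2145:ibmsvcM:admin>svcinfo lsrcconsistgrp svcgmMDR
If the auxiliary cluster is to be kept as primary instead, stop the group with the -access option and restart it
with both -force and -primary aux.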

Checking Failover Readiness with Firedrill


In this example the Firedrill service group is in the Gold configuration. A local snapshot of the target LUN is
taken and the fire drill service group is brought online by mounting the snapshot of the replication target LUN.
VCS creates a lock file to indicate that the resource is online.
1. Ensure the application service group app_grp1 is offline at cluster Site B.
2. Bring the fire drill service group online on a node (ClustsvcSNd1, i.e. gl-ax06) at Site B.
#hagrp -online firedrill_appgrp1 -sys gl-ax06 -localclus
This action validates your disaster recovery configuration. The production service group can fail over to
the secondary site in the event of an actual failure (disaster) at the primary site. If the fire drill service
group does not come online, review the VCS engine log for more information.
3. Take the fire drill service group offline after its functioning has been validated.
#hagrp -offline firedrill_appgrp1 -sys gl-ax06 -localclus

Note: Failing to take the fire drill service group offline could cause failures in your environment. For example,
if the application service group fails over to the node hosting the fire drill service group, there would be
resource conflicts, resulting in both service groups faulting; based on the failover policy for the application
service group, it may then fail over to another node in the cluster.

Summary
Clustering software, like Veritas Cluster Server, has for many years been the standard approach to
protect against failures of individual hardware or software components. As more and more organizations
look to add robust disaster recovery capabilities to their mission critical systems, merely shipping backup
tapes to an offsite location is not adequate. This whitepaper has shown how a local HA cluster can be
extended with DR capabilities. The IBM SVC Metro Mirror and Global Mirror features are utilized to add
data replication capabilities to the solution. VCS Wizards are used to convert two independent clusters (at
two different locations) into a global cluster with automated failover capability between locations in the
event of a site disaster. The result is a robust DR environment capable of meeting stringent Recovery
Time Objectives.

Appendix A: VCS Configuration File (main.cf) with DB2, Oracle Application Service Group for HA/DR
Site A : Primary Cluster, Nodes : ClustsvcPNd1, ClustsvcPNd2
Site B : Secondary Cluster, Nodes : ClustsvcSNd1, ClustsvcSNd2
VCS configuration file for DB2
main.cf.db2.siteA (Site A)
main.cf.db2.siteB (Site B)
include "types.cf"
include "Db2udbTypes.cf"
include "SVCCopyServicesTypes.cf"

include "types.cf"
include "Db2udbTypes.cf"
include "SVCCopyServicesTypes.cf"

cluster ClustsvcP (
UserNames = { admin = bIJbIDiFJeJJhRJdIG }
ClusterAddress = "10.140.xx.05"
Administrators = { admin }
)

cluster ClustsvcS (
UserNames = { admin = IpqIpkPmqLqqOyqKpn }
ClusterAddress = "10.140.xx.06"
Administrators = { admin }
)

remotecluster ClustsvcS (
ClusterAddress = "10.140.xx.06"
)

remotecluster ClustsvcP (
ClusterAddress = "10.140.xx.05"
)

heartbeat Icmp (
ClusterList = { ClustsvcS }
AYATimeout = 30
Arguments @ClustsvcS = { "10.140.xx.06" }
)

heartbeat Icmp (
ClusterList = { ClustsvcP }
AYATimeout = 30
Arguments @ClustsvcP = { "10.140.xx.05" }
)

system ClustsvcPNd1 (
)

system ClustsvcSNd1 (
)

system ClustsvcPNd2 (
)

system ClustsvcSNd2 (
)

group ClusterService (
SystemList = { ClustsvcPNd1 = 0, ClustsvcPNd2 = 0 }
AutoStartList = { ClustsvcPNd1, ClustsvcPNd2 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

group ClusterService (
SystemList = { ClustsvcSNd1 = 0, ClustsvcSNd2 = 0 }
AutoStartList = { ClustsvcSNd1, ClustsvcSNd2 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

IP gcoip (
Device = en0
Address = "10.140.xx.05"
NetMask = "255.255.xxx.0"
)

IP gcoip (
Device = en0
Address = "10.140.xx.06"
NetMask = "255.255.xxx.0"
)

NIC gconic (
Device = en0
)

NIC gconic (
Device = en0
)

gcoip requires gconic


wac requires gcoip

gcoip requires gconic


wac requires gcoip

group db2udb_grp1 (
SystemList = { ClustsvcPNd1 = 0, ClustsvcPNd2 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
Authority = 1
AutoStartList = { ClustsvcPNd1, ClustsvcPNd2 }
)

group db2udb_grp1 (
SystemList = { ClustsvcSNd1 = 0, ClustsvcSNd2 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
AutoStartList = { ClustsvcSNd1, ClustsvcSNd2 }
)
Db2udb db2udb (
DB2InstOwner = db2inst
DB2InstHome = "/db2inst/instance"
IndepthMonitor = 1
DatabaseName = TPCC1
)

Db2udb db2udb (
DB2InstOwner = db2inst
DB2InstHome = "/db2inst/instance"
IndepthMonitor = 1
DatabaseName = TPCC1
)

DiskGroup db2udb_dg1 (
DiskGroup = db2dg1
)

DiskGroup db2udb_dg1 (
DiskGroup = db2dg1
)

IP db2udb_ip (
Device = en0
Address = "10.140.xx.02"
NetMask = "255.255.xxx.0"
)

IP db2udb_ip (
Device = en0
Address = "10.140.xx.01"
NetMask = "255.255.xxx.0"
)

Mount db2udb_mnt0 (
MountPoint = "/db2inst"
BlockDevice = "/dev/vx/dsk/db2dg1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt0 (
MountPoint = "/db2inst"
BlockDevice = "/dev/vx/dsk/db2dg1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt1 (
MountPoint = "/db2inst/db/mnt1"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt1 (
MountPoint = "/db2inst/db/mnt1"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt2 (
MountPoint = "/db2inst/db/mnt2"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt2 (
MountPoint = "/db2inst/db/mnt2"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt3 (
MountPoint = "/db2inst/db/mnt3"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt3 (
MountPoint = "/db2inst/db/mnt3"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt4 (
MountPoint = "/db2inst/db/mnt4"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt4 (
MountPoint = "/db2inst/db/mnt4"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt5 (
MountPoint = "/db2inst/db/temp"
BlockDevice = "/dev/vx/dsk/db2dg1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt5 (
MountPoint = "/db2inst/db/temp"
BlockDevice = "/dev/vx/dsk/db2dg1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt6 (
MountPoint = "/db2inst/db/log"
BlockDevice = "/dev/vx/dsk/db2dg1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt6 (
MountPoint = "/db2inst/db/log"
BlockDevice = "/dev/vx/dsk/db2dg1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt7 (
MountPoint = "/db2inst/backup"
BlockDevice = "/dev/vx/dsk/db2dg1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt7 (
MountPoint = "/db2inst/backup"
BlockDevice = "/dev/vx/dsk/db2dg1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

NIC db2udb_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.7", "10.140.xx.8" }
)

NIC db2udb_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.5", "10.140.xx.6" }
)

SVCCopyServices db2udb_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcmmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

SVCCopyServices db2udb_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcmmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

Volume db2udb_vol00 (
Volume = base
DiskGroup = db2dg1
)

Volume db2udb_vol00 (
Volume = base
DiskGroup = db2dg1
)

Volume db2udb_vol01 (
Volume = mnt1
DiskGroup = db2dg1
)

Volume db2udb_vol01 (
Volume = mnt1
DiskGroup = db2dg1
)

Volume db2udb_vol02 (
Volume = mnt2
DiskGroup = db2dg1
)

Volume db2udb_vol02 (
Volume = mnt2
DiskGroup = db2dg1
)

Volume db2udb_vol03 (
Volume = mnt3
DiskGroup = db2dg1
)

Volume db2udb_vol03 (
Volume = mnt3
DiskGroup = db2dg1
)

Volume db2udb_vol04 (
Volume = mnt4
DiskGroup = db2dg1
)

Volume db2udb_vol04 (
Volume = mnt4
DiskGroup = db2dg1
)

Volume db2udb_vol05 (
Volume = dbtemp
DiskGroup = db2dg1
)

Volume db2udb_vol05 (
Volume = dbtemp
DiskGroup = db2dg1
)

Volume db2udb_vol06 (
Volume = log
DiskGroup = db2dg1
)

Volume db2udb_vol06 (
Volume = log
DiskGroup = db2dg1
)

Volume db2udb_vol07 (
Volume = backup
DiskGroup = db2dg1
)

Volume db2udb_vol07 (
Volume = backup
DiskGroup = db2dg1
)

db2udb requires db2udb_ip


db2udb requires db2udb_mnt0

db2udb requires db2udb_ip

db2udb requires db2udb_mnt1


db2udb requires db2udb_mnt2
db2udb requires db2udb_mnt3
db2udb requires db2udb_mnt4
db2udb requires db2udb_mnt5
db2udb requires db2udb_mnt6
db2udb requires db2udb_mnt7
db2udb_dg1 requires db2udb_svccp
db2udb_ip requires db2udb_nic
db2udb_mnt0 requires db2udb_vol00
db2udb_mnt1 requires db2udb_mnt0
db2udb_mnt1 requires db2udb_vol01
db2udb_mnt2 requires db2udb_mnt0
db2udb_mnt2 requires db2udb_vol02
db2udb_mnt3 requires db2udb_mnt0
db2udb_mnt3 requires db2udb_vol03
db2udb_mnt4 requires db2udb_mnt0
db2udb_mnt4 requires db2udb_vol04
db2udb_mnt5 requires db2udb_mnt0
db2udb_mnt5 requires db2udb_vol05
db2udb_mnt6 requires db2udb_mnt0
db2udb_mnt6 requires db2udb_vol06
db2udb_mnt7 requires db2udb_mnt0
db2udb_mnt7 requires db2udb_vol07
db2udb_vol00 requires db2udb_dg1
db2udb_vol01 requires db2udb_dg1
db2udb_vol02 requires db2udb_dg1
db2udb_vol03 requires db2udb_dg1
db2udb_vol04 requires db2udb_dg1
db2udb_vol05 requires db2udb_dg1
db2udb_vol06 requires db2udb_dg1
db2udb_vol07 requires db2udb_dg1

db2udb requires db2udb_mnt0


db2udb requires db2udb_mnt1
db2udb requires db2udb_mnt2
db2udb requires db2udb_mnt3
db2udb requires db2udb_mnt4
db2udb requires db2udb_mnt5
db2udb requires db2udb_mnt6
db2udb requires db2udb_mnt7
db2udb_dg1 requires db2udb_svccp
db2udb_ip requires db2udb_nic
db2udb_mnt0 requires db2udb_vol00
db2udb_mnt1 requires db2udb_mnt0
db2udb_mnt1 requires db2udb_vol01
db2udb_mnt2 requires db2udb_mnt0
db2udb_mnt2 requires db2udb_vol02
db2udb_mnt3 requires db2udb_mnt0
db2udb_mnt3 requires db2udb_vol03
db2udb_mnt4 requires db2udb_mnt0
db2udb_mnt4 requires db2udb_vol04
db2udb_mnt5 requires db2udb_mnt0
db2udb_mnt5 requires db2udb_vol05
db2udb_mnt6 requires db2udb_mnt0
db2udb_mnt6 requires db2udb_vol06
db2udb_mnt7 requires db2udb_mnt0
db2udb_mnt7 requires db2udb_vol07
db2udb_vol00 requires db2udb_dg1
db2udb_vol01 requires db2udb_dg1
db2udb_vol02 requires db2udb_dg1
db2udb_vol03 requires db2udb_dg1
db2udb_vol04 requires db2udb_dg1
db2udb_vol05 requires db2udb_dg1
db2udb_vol06 requires db2udb_dg1
db2udb_vol07 requires db2udb_dg1

VCS configuration file for Oracle
main.cf.ora.siteA (Site A)
main.cf.ora.siteB (Site B)

include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SVCCopyServicesTypes.cf"

include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SVCCopyServicesTypes.cf"

cluster ClustsvcP (
UserNames = { admin = bIJbIDiFJeJJhRJdIG }
ClusterAddress = "10.140.xx.05"
Administrators = { admin }
)

cluster ClustsvcS (
UserNames = { admin = IpqIpkPmqLqqOyqKpn }
ClusterAddress = "10.140.xx.06"
Administrators = { admin }
)

remotecluster ClustsvcS (
ClusterAddress = "10.140.xx.06"
)

remotecluster ClustsvcP (
ClusterAddress = "10.140.xx.05"
)

heartbeat Icmp (
ClusterList = { ClustsvcS }
AYATimeout = 30
Arguments @ClustsvcS = { "10.140.xx.06" }
)

heartbeat Icmp (
ClusterList = { ClustsvcP }
AYATimeout = 30
Arguments @ClustsvcP = { "10.140.xx.05" }
)

system ClustsvcPNd1 (
)

system ClustsvcSNd1 (
)

system ClustsvcPNd2 (
)

system ClustsvcSNd2 (
)

group ClusterService (
SystemList = { ClustsvcPNd1 = 0, ClustsvcPNd2 = 0 }
AutoStartList = { ClustsvcPNd1, ClustsvcPNd2 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

group ClusterService (
SystemList = { ClustsvcSNd1 = 0, ClustsvcSNd2 = 0 }
AutoStartList = { ClustsvcSNd1, ClustsvcSNd2 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

IP gcoip (
Device = en0
Address = "10.140.xx.05"
NetMask = "255.255.xxx.0"
)

IP gcoip (
Device = en0
Address = "10.140.xx.06"
NetMask = "255.255.xxx.0"
)

NIC gconic (
Device = en0
)

NIC gconic (
Device = en0
)

gcoip requires gconic


wac requires gcoip

gcoip requires gconic


wac requires gcoip

group oracle_grp1 (
SystemList = { ClustsvcPNd1 = 0, ClustsvcPNd2 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
Authority = 1
AutoStartList = { ClustsvcPNd1, ClustsvcPNd2 }
)

group oracle_grp1 (
SystemList = { ClustsvcSNd1 = 0, ClustsvcSNd2 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
AutoStartList = { ClustsvcSNd1, ClustsvcSNd2 }
)
DiskGroup oracle_dg0 (
DiskGroup = oracle1
)

DiskGroup oracle_dg0 (
DiskGroup = oracle1
)

DiskGroup oracle_dg1 (
DiskGroup = oradata1
)

DiskGroup oracle_dg1 (
DiskGroup = oradata1
)

IP oracle_ip (
Device = en0
Address = "10.140.xx.04"
NetMask = "255.255.xxx.0"
)

IP oracle_ip (
Device = en0
Address = "10.140.xx.03"
NetMask = "255.255.xxx.0"
)

Mount oracle_mnt0 (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oracle1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt0 (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oracle1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt1 (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradata1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt1 (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradata1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt2 (
MountPoint = "/oradata/tpcc/mnt2"
BlockDevice = "/dev/vx/dsk/oradata1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt2 (
MountPoint = "/oradata/tpcc/mnt2"
BlockDevice = "/dev/vx/dsk/oradata1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt3 (
MountPoint = "/oradata/tpcc/mnt3"
BlockDevice = "/dev/vx/dsk/oradata1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt3 (
MountPoint = "/oradata/tpcc/mnt3"
BlockDevice = "/dev/vx/dsk/oradata1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount oracle_mnt4 (
MountPoint = "/oradata/tpcc/mnt4"
BlockDevice = "/dev/vx/dsk/oradata1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt4 (
MountPoint = "/oradata/tpcc/mnt4"
BlockDevice = "/dev/vx/dsk/oradata1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt5 (
MountPoint = "/oradata/tpcc/log"
BlockDevice = "/dev/vx/dsk/oradata1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt5 (
MountPoint = "/oradata/tpcc/log"
BlockDevice = "/dev/vx/dsk/oradata1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt6 (
MountPoint = "/oradata/tpcc/dbtemp"
BlockDevice = "/dev/vx/dsk/oradata1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt6 (
MountPoint = "/oradata/tpcc/dbtemp"
BlockDevice = "/dev/vx/dsk/oradata1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt7 (
MountPoint = "/oradata/backup"
BlockDevice = "/dev/vx/dsk/oradata1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt7 (
MountPoint = "/oradata/backup"
BlockDevice = "/dev/vx/dsk/oradata1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

NIC oracle_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.7", "10.140.xx.8" }
)

NIC oracle_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.5", "10.140.xx.6" }
)

Netlsnr oracle_lsnr (
Owner = oracle
Home = "/oracle/orahome/"
TnsAdmin = "/oracle/orahome/network/admin/"
Listener @ClustsvcSNd1 = LISTENER_CLUSTSVCSND1
Listener @ClustsvcSNd2 = LISTENER_CLUSTSVCSND2
EnvFile = "/oracle/.profile"
LsnrPwd = DRHrDPgPG
)

Netlsnr oracle_lsnr (
Owner = oracle
Home = "/oracle/orahome/"
TnsAdmin = "/oracle/orahome/network/admin/"
Listener @ClustsvcPNd1 =
LISTENER_CLUSTSVCPND1
Listener @ClustsvcPNd2 =
LISTENER_CLUSTSVCPND2
EnvFile = "/oracle/.profile"
LsnrPwd = DRHrDPgPG
)

Oracle oracle (
Sid = tpcc
Owner = oracle
Home = "/oracle/orahome"
StartUpOpt = STARTUP
EnvFile = "/oracle/.profile"
DetailMonitor = 1
User = tpcc
Pword = aoeOamDmd
Table = test
)

Oracle oracle (
Sid = tpcc
Owner = oracle
Home = "/oracle/orahome"
StartUpOpt = STARTUP
EnvFile = "/oracle/.profile"
DetailMonitor = 1
User = tpcc
Pword = aoeOamDmd
Table = test
)

SVCCopyServices oracle_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcmmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

SVCCopyServices oracle_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcmmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

Volume oracle_vol00 (
Volume = base
DiskGroup = oracle1
)

Volume oracle_vol00 (
Volume = base
DiskGroup = oracle1
)

Volume oracle_vol01 (
Volume = mnt1
DiskGroup = oradata1
)

Volume oracle_vol01 (
Volume = mnt1
DiskGroup = oradata1
)

Volume oracle_vol02 (
Volume = mnt2
DiskGroup = oradata1
)

Volume oracle_vol02 (
Volume = mnt2
DiskGroup = oradata1
)

Volume oracle_vol03 (
Volume = mnt3
DiskGroup = oradata1
)

Volume oracle_vol03 (
Volume = mnt3
DiskGroup = oradata1
)

Volume oracle_vol04 (
Volume = mnt4
DiskGroup = oradata1
)

Volume oracle_vol04 (
Volume = mnt4
DiskGroup = oradata1
)

Volume oracle_vol05 (
Volume = log
DiskGroup = oradata1
)

Volume oracle_vol05 (
Volume = log
DiskGroup = oradata1
)

Volume oracle_vol06 (
Volume = dbtemp
DiskGroup = oradata1
)

Volume oracle_vol06 (
Volume = dbtemp
DiskGroup = oradata1
)

Volume oracle_vol07 (
Volume = backup
DiskGroup = oradata1
)

Volume oracle_vol07 (
Volume = backup
DiskGroup = oradata1
)

oracle requires oracle_mnt0


oracle requires oracle_mnt1
oracle requires oracle_mnt2
oracle requires oracle_mnt3
oracle requires oracle_mnt4
oracle requires oracle_mnt5
oracle requires oracle_mnt6
oracle requires oracle_mnt7
oracle_dg0 requires oracle_svccp
oracle_dg1 requires oracle_svccp
oracle_ip requires oracle_nic
oracle_lsnr requires oracle
oracle_lsnr requires oracle_ip
oracle_mnt0 requires oracle_vol00
oracle_mnt1 requires oracle_vol01
oracle_mnt2 requires oracle_mnt1
oracle_mnt2 requires oracle_vol02
oracle_mnt3 requires oracle_mnt1
oracle_mnt3 requires oracle_vol03
oracle_mnt4 requires oracle_mnt1
oracle_mnt4 requires oracle_vol04
oracle_mnt5 requires oracle_mnt1
oracle_mnt5 requires oracle_vol05
oracle_mnt6 requires oracle_mnt1
oracle_mnt6 requires oracle_vol06
oracle_mnt7 requires oracle_mnt1
oracle_mnt7 requires oracle_vol07
oracle_vol00 requires oracle_dg0
oracle_vol01 requires oracle_dg1
oracle_vol02 requires oracle_dg1
oracle_vol03 requires oracle_dg1
oracle_vol04 requires oracle_dg1

oracle requires oracle_mnt0


oracle requires oracle_mnt1
oracle requires oracle_mnt2
oracle requires oracle_mnt3
oracle requires oracle_mnt4
oracle requires oracle_mnt5
oracle requires oracle_mnt6
oracle requires oracle_mnt7
oracle_dg0 requires oracle_svccp
oracle_dg1 requires oracle_svccp
oracle_ip requires oracle_nic
oracle_lsnr requires oracle
oracle_lsnr requires oracle_ip
oracle_mnt0 requires oracle_vol00
oracle_mnt1 requires oracle_vol01
oracle_mnt2 requires oracle_mnt1
oracle_mnt2 requires oracle_vol02
oracle_mnt3 requires oracle_mnt1
oracle_mnt3 requires oracle_vol03
oracle_mnt4 requires oracle_mnt1
oracle_mnt4 requires oracle_vol04
oracle_mnt5 requires oracle_mnt1
oracle_mnt5 requires oracle_vol05
oracle_mnt6 requires oracle_mnt1
oracle_mnt6 requires oracle_vol06
oracle_mnt7 requires oracle_mnt1
oracle_mnt7 requires oracle_vol07
oracle_vol00 requires oracle_dg0
oracle_vol01 requires oracle_dg1

oracle_vol05 requires oracle_dg1


oracle_vol06 requires oracle_dg1
oracle_vol07 requires oracle_dg1

oracle_vol02 requires oracle_dg1


oracle_vol03 requires oracle_dg1
oracle_vol04 requires oracle_dg1
oracle_vol05 requires oracle_dg1
oracle_vol06 requires oracle_dg1
oracle_vol07 requires oracle_dg1

VCS configuration file for combined Oracle and DB2 Application Service Group and Firedrill
main.cf.oradb2.siteA (Site A)
main.cf.oradb2.siteB (Site B)
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SVCCopyServicesTypes.cf"

include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SVCCopyServicesTypes.cf"

cluster ClustsvcP (
UserNames = { admin = bIJbIDiFJeJJhRJdIG }
ClusterAddress = "10.140.xx.05"
Administrators = { admin }
)

cluster ClustsvcS (
UserNames = { admin = IpqIpkPmqLqqOyqKpn }
ClusterAddress = "10.140.xx.06"
Administrators = { admin }
)

remotecluster ClustsvcS (
ClusterAddress = "10.140.xx.06"
)

remotecluster ClustsvcP (
ClusterAddress = "10.140.xx.05"
)

heartbeat Icmp (
ClusterList = { ClustsvcS }
AYATimeout = 30
Arguments @ClustsvcS = { "10.140.xx.06" }
)

heartbeat Icmp (
ClusterList = { ClustsvcP }
AYATimeout = 30
Arguments @ClustsvcP = { "10.140.xx.05" }
)

system gl-ax04 (
)

system gl-ax06 (
)

system gl-ax05 (
)

system gl-ax07 (
)

group ClusterService (
SystemList = { gl-ax04 = 0, gl-ax05 = 0 }
AutoStartList = { gl-ax04, gl-ax05 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

group ClusterService (
SystemList = { gl-ax06 = 0, gl-ax07 = 0 }
AutoStartList = { gl-ax06, gl-ax07 }
OnlineRetryLimit = 3
OnlineRetryInterval = 3
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)

IP gcoip (
Device = en0
Address = "10.140.xx.05"
NetMask = "255.255.xxx.0"
)

IP gcoip (
Device = en0
Address = "10.140.xx.06"
NetMask = "255.255.xxx.0"
)

NIC gconic (
Device = en0
)

NIC gconic (
Device = en0
)

gcoip requires gconic


wac requires gcoip

gcoip requires gconic


wac requires gcoip

group app_grp1 (
SystemList = { gl-ax04 = 0, gl-ax05 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
Authority = 1
AutoStartList = { gl-ax04, gl-ax05 }
)

group app_grp1 (
SystemList = { gl-ax06 = 0, gl-ax07 = 1 }
ClusterList = { ClustsvcP = 0, ClustsvcS = 1 }
AutoStartList = { gl-ax06, gl-ax07 }
)

Db2udb db2udb (
DB2InstOwner = db2inst
DB2InstHome = "/db2inst/instance"
IndepthMonitor = 1
DatabaseName = TPCC1
)

Db2udb db2udb (
DB2InstOwner = db2inst
DB2InstHome = "/db2inst/instance"
IndepthMonitor = 1
DatabaseName = TPCC1
)

DiskGroup db2udb_dg1 (
DiskGroup = db2dg1
)

DiskGroup db2udb_dg1 (
DiskGroup = db2dg1
)

DiskGroup oracle_dg0 (
DiskGroup = oracle1
)

DiskGroup oracle_dg0 (
DiskGroup = oracle1
)

DiskGroup oracle_dg1 (
DiskGroup = oradata1
)

DiskGroup oracle_dg1 (
DiskGroup = oradata1
)

IP db2udb_ip (
Device = en0
Address = "10.140.xx.02"
NetMask = "255.255.xxx.0"
)

IP db2udb_ip (
Device = en0
Address = "10.140.xx.01"
NetMask = "255.255.xxx.0"
)

IP oracle_ip (
Device = en0
Address = "10.140.xx.04"
NetMask = "255.255.xxx.0"
)

IP oracle_ip (
Device = en0
Address = "10.140.xx.03"
NetMask = "255.255.xxx.0"
)

Mount db2udb_mnt0 (
MountPoint = "/db2inst"
BlockDevice = "/dev/vx/dsk/db2dg1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt0 (
MountPoint = "/db2inst"
BlockDevice = "/dev/vx/dsk/db2dg1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt1 (
MountPoint = "/db2inst/db/mnt1"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt1 (
MountPoint = "/db2inst/db/mnt1"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt2 (
MountPoint = "/db2inst/db/mnt2"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt2 (
MountPoint = "/db2inst/db/mnt2"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt3 (
MountPoint = "/db2inst/db/mnt3"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt3 (
MountPoint = "/db2inst/db/mnt3"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt4 (
MountPoint = "/db2inst/db/mnt4"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt4 (
MountPoint = "/db2inst/db/mnt4"
BlockDevice = "/dev/vx/dsk/db2dg1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt5 (
MountPoint = "/db2inst/db/temp"

Mount db2udb_mnt5 (
BlockDevice = "/dev/vx/dsk/db2dg1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

MountPoint = "/db2inst/db/temp"
BlockDevice = "/dev/vx/dsk/db2dg1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt6 (
MountPoint = "/db2inst/db/log"
BlockDevice = "/dev/vx/dsk/db2dg1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt6 (
MountPoint = "/db2inst/db/log"
BlockDevice = "/dev/vx/dsk/db2dg1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt7 (
MountPoint = "/db2inst/backup"
BlockDevice = "/dev/vx/dsk/db2dg1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount db2udb_mnt7 (
MountPoint = "/db2inst/backup"
BlockDevice = "/dev/vx/dsk/db2dg1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt0 (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oracle1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt0 (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oracle1/base"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt1 (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradata1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt1 (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradata1/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt2 (
MountPoint = "/oradata/tpcc/mnt2"
BlockDevice = "/dev/vx/dsk/oradata1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt2 (
MountPoint = "/oradata/tpcc/mnt2"
BlockDevice = "/dev/vx/dsk/oradata1/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt3 (
MountPoint = "/oradata/tpcc/mnt3"
BlockDevice = "/dev/vx/dsk/oradata1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt3 (
MountPoint = "/oradata/tpcc/mnt3"
BlockDevice = "/dev/vx/dsk/oradata1/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt4 (
MountPoint = "/oradata/tpcc/mnt4"
BlockDevice = "/dev/vx/dsk/oradata1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt4 (
MountPoint = "/oradata/tpcc/mnt4"
BlockDevice = "/dev/vx/dsk/oradata1/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt5 (
MountPoint = "/oradata/tpcc/log"
BlockDevice = "/dev/vx/dsk/oradata1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt5 (
MountPoint = "/oradata/tpcc/log"
BlockDevice = "/dev/vx/dsk/oradata1/log"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt6 (
MountPoint = "/oradata/tpcc/dbtemp"
BlockDevice = "/dev/vx/dsk/oradata1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt6 (
MountPoint = "/oradata/tpcc/dbtemp"
BlockDevice = "/dev/vx/dsk/oradata1/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt7 (
MountPoint = "/oradata/backup"
BlockDevice = "/dev/vx/dsk/oradata1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Mount oracle_mnt7 (
MountPoint = "/oradata/backup"
BlockDevice = "/dev/vx/dsk/oradata1/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

NIC app_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.7", "10.140.xx.8" }
)

NIC app_nic (
Device = en0
NetworkType = ether
NetworkHosts = { "10.140.xx.5", "10.140.xx.6" }
)

Netlsnr oracle_lsnr (
Owner = oracle
Home = "/oracle/orahome/"
TnsAdmin = "/oracle/orahome/network/admin/"
Listener @gl-ax06 = LISTENER_CLUSTSVCSND1
Listener @gl-ax07 = LISTENER_CLUSTSVCSND2
EnvFile = "/oracle/.profile"
LsnrPwd = DRHrDPgPG
)

Netlsnr oracle_lsnr (
Owner = oracle
Home = "/oracle/orahome/"
TnsAdmin = "/oracle/orahome/network/admin/"
Listener @gl-ax04 = LISTENER_CLUSTSVCPND1
Listener @gl-ax05 = LISTENER_CLUSTSVCPND2
EnvFile = "/oracle/.profile"
LsnrPwd = DRHrDPgPG
)

Oracle oracle (
Sid = tpcc
Owner = oracle
Home = "/oracle/orahome"
StartUpOpt = STARTUP
EnvFile = "/oracle/.profile"
DetailMonitor = 1
User = tpcc
Pword = aoeOamDmd
Table = test
)

Oracle oracle (
Sid = tpcc
Owner = oracle
Home = "/oracle/orahome"
StartUpOpt = STARTUP
EnvFile = "/oracle/.profile"
DetailMonitor = 1
User = tpcc
Pword = aoeOamDmd
Table = test
)

SVCCopyServices app_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcgmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

SVCCopyServices app_svccp (
SSHPathToIDFile = " /.ssh/id_dsa "
GroupName = svcgmMDR
SVCClusterIP = "10.140.xx.xxx"
DisconnectTakeover = 1
)

Volume db2udb_vol00 (
Volume = base
DiskGroup = db2dg1
)

Volume db2udb_vol00 (
Volume = base
DiskGroup = db2dg1
)

Volume db2udb_vol01 (
Volume = mnt1
DiskGroup = db2dg1
)

Volume db2udb_vol01 (
Volume = mnt1
DiskGroup = db2dg1
)

Volume db2udb_vol02 (
Volume = mnt2
DiskGroup = db2dg1
)

Volume db2udb_vol02 (
Volume = mnt2
DiskGroup = db2dg1
)

Volume db2udb_vol03 (
Volume = mnt3
DiskGroup = db2dg1
)

Volume db2udb_vol03 (
Volume = mnt3
DiskGroup = db2dg1
)

Volume db2udb_vol04 (
Volume = mnt4
DiskGroup = db2dg1
)

Volume db2udb_vol04 (
Volume = mnt4
DiskGroup = db2dg1
)

Volume db2udb_vol05 (
Volume = dbtemp
DiskGroup = db2dg1
)

Volume db2udb_vol05 (
Volume = dbtemp
DiskGroup = db2dg1
)

Volume db2udb_vol06 (
Volume = log
DiskGroup = db2dg1
)

Volume db2udb_vol06 (
Volume = log
DiskGroup = db2dg1
)

Volume db2udb_vol07 (
Volume = backup
DiskGroup = db2dg1
)

Volume db2udb_vol07 (
Volume = backup
DiskGroup = db2dg1
)

Volume oracle_vol00 (
Volume = base
DiskGroup = oracle1
)

Volume oracle_vol00 (
Volume = base
DiskGroup = oracle1
)

Volume oracle_vol01 (
Volume = mnt1
DiskGroup = oradata1
)

Volume oracle_vol01 (
Volume = mnt1
DiskGroup = oradata1
)

Volume oracle_vol02 (
Volume = mnt2
DiskGroup = oradata1
)

Volume oracle_vol02 (
Volume = mnt2
DiskGroup = oradata1
)

Volume oracle_vol03 (
Volume = mnt3
DiskGroup = oradata1
)

Volume oracle_vol03 (
Volume = mnt3
DiskGroup = oradata1
)

Volume oracle_vol04 (
Volume = mnt4
DiskGroup = oradata1
)

Volume oracle_vol04 (
Volume = mnt4
DiskGroup = oradata1
)

Volume oracle_vol05 (
Volume = log
DiskGroup = oradata1
)

Volume oracle_vol05 (
Volume = log
DiskGroup = oradata1
)

Volume oracle_vol06 (
Volume = dbtemp
DiskGroup = oradata1
)

Volume oracle_vol06 (
Volume = dbtemp
DiskGroup = oradata1
)

Volume oracle_vol07 (
Volume = backup
DiskGroup = oradata1
)

Volume oracle_vol07 (
Volume = backup
DiskGroup = oradata1
)

db2udb requires db2udb_ip
db2udb requires db2udb_mnt0
db2udb requires db2udb_mnt1
db2udb requires db2udb_mnt2
db2udb requires db2udb_mnt3
db2udb requires db2udb_mnt4
db2udb requires db2udb_mnt5
db2udb requires db2udb_mnt6
db2udb requires db2udb_mnt7
db2udb_dg1 requires app_svccp
db2udb_ip requires app_nic
db2udb_mnt0 requires db2udb_vol00
db2udb_mnt1 requires db2udb_mnt0
db2udb_mnt1 requires db2udb_vol01
db2udb_mnt2 requires db2udb_mnt0
db2udb_mnt2 requires db2udb_vol02
db2udb_mnt3 requires db2udb_mnt0
db2udb_mnt3 requires db2udb_vol03
db2udb_mnt4 requires db2udb_mnt0
db2udb_mnt4 requires db2udb_vol04
db2udb_mnt5 requires db2udb_mnt0
db2udb_mnt5 requires db2udb_vol05
db2udb_mnt6 requires db2udb_mnt0
db2udb_mnt6 requires db2udb_vol06
db2udb_mnt7 requires db2udb_mnt0
db2udb_mnt7 requires db2udb_vol07
db2udb_vol00 requires db2udb_dg1
db2udb_vol01 requires db2udb_dg1
db2udb_vol02 requires db2udb_dg1
db2udb_vol03 requires db2udb_dg1
db2udb_vol04 requires db2udb_dg1
db2udb_vol05 requires db2udb_dg1
db2udb_vol06 requires db2udb_dg1
db2udb_vol07 requires db2udb_dg1
oracle requires oracle_mnt0
oracle requires oracle_mnt1
oracle requires oracle_mnt2
oracle requires oracle_mnt3
oracle requires oracle_mnt4
oracle requires oracle_mnt5
oracle requires oracle_mnt6
oracle requires oracle_mnt7
oracle_dg0 requires app_svccp
oracle_dg1 requires app_svccp
oracle_ip requires app_nic
oracle_lsnr requires oracle
oracle_lsnr requires oracle_ip
oracle_mnt0 requires oracle_vol00
oracle_mnt1 requires oracle_vol01
oracle_mnt2 requires oracle_mnt1
oracle_mnt2 requires oracle_vol02
oracle_mnt3 requires oracle_mnt1
oracle_mnt3 requires oracle_vol03
oracle_mnt4 requires oracle_mnt1
oracle_mnt4 requires oracle_vol04
oracle_mnt5 requires oracle_mnt1
oracle_mnt5 requires oracle_vol05
oracle_mnt6 requires oracle_mnt1
oracle_mnt6 requires oracle_vol06
oracle_mnt7 requires oracle_mnt1
oracle_mnt7 requires oracle_vol07
oracle_vol00 requires oracle_dg0
oracle_vol01 requires oracle_dg1
oracle_vol02 requires oracle_dg1
oracle_vol03 requires oracle_dg1
oracle_vol04 requires oracle_dg1
oracle_vol05 requires oracle_dg1
oracle_vol06 requires oracle_dg1
oracle_vol07 requires oracle_dg1

db2udb requires db2udb_ip
db2udb requires db2udb_mnt0
db2udb requires db2udb_mnt1
db2udb requires db2udb_mnt2
db2udb requires db2udb_mnt3
db2udb requires db2udb_mnt4
db2udb requires db2udb_mnt5
db2udb requires db2udb_mnt6
db2udb requires db2udb_mnt7
db2udb_dg1 requires app_svccp
db2udb_ip requires app_nic
db2udb_mnt0 requires db2udb_vol00
db2udb_mnt1 requires db2udb_mnt0
db2udb_mnt1 requires db2udb_vol01
db2udb_mnt2 requires db2udb_mnt0
db2udb_mnt2 requires db2udb_vol02
db2udb_mnt3 requires db2udb_mnt0
db2udb_mnt3 requires db2udb_vol03
db2udb_mnt4 requires db2udb_mnt0
db2udb_mnt4 requires db2udb_vol04
db2udb_mnt5 requires db2udb_mnt0
db2udb_mnt5 requires db2udb_vol05
db2udb_mnt6 requires db2udb_mnt0
db2udb_mnt6 requires db2udb_vol06
db2udb_mnt7 requires db2udb_mnt0
db2udb_mnt7 requires db2udb_vol07
db2udb_vol00 requires db2udb_dg1
db2udb_vol01 requires db2udb_dg1
db2udb_vol02 requires db2udb_dg1
db2udb_vol03 requires db2udb_dg1
db2udb_vol04 requires db2udb_dg1
db2udb_vol05 requires db2udb_dg1
db2udb_vol06 requires db2udb_dg1
db2udb_vol07 requires db2udb_dg1
oracle requires oracle_mnt0
oracle requires oracle_mnt1
oracle requires oracle_mnt2
oracle requires oracle_mnt3
oracle requires oracle_mnt4
oracle requires oracle_mnt5
oracle requires oracle_mnt6
oracle requires oracle_mnt7
oracle_dg0 requires app_svccp
oracle_dg1 requires app_svccp
oracle_ip requires app_nic
oracle_lsnr requires oracle
oracle_lsnr requires oracle_ip
oracle_mnt0 requires oracle_vol00
oracle_mnt1 requires oracle_vol01
oracle_mnt2 requires oracle_mnt1
oracle_mnt2 requires oracle_vol02
oracle_mnt3 requires oracle_mnt1
oracle_mnt3 requires oracle_vol03
oracle_mnt4 requires oracle_mnt1
oracle_mnt4 requires oracle_vol04
oracle_mnt5 requires oracle_mnt1
oracle_mnt5 requires oracle_vol05
oracle_mnt6 requires oracle_mnt1
oracle_mnt6 requires oracle_vol06
oracle_mnt7 requires oracle_mnt1
oracle_mnt7 requires oracle_vol07
oracle_vol00 requires oracle_dg0
oracle_vol01 requires oracle_dg1
oracle_vol02 requires oracle_dg1
oracle_vol03 requires oracle_dg1
oracle_vol04 requires oracle_dg1
oracle_vol05 requires oracle_dg1
oracle_vol06 requires oracle_dg1
oracle_vol07 requires oracle_dg1

group firedrill_app_grp1 (
SystemList = { gl-ax06 = 0, gl-ax07 = 1 }
AutoFailOver = 0
)
Db2udb FD_db2udb (
DB2InstOwner = db2inst
DB2InstHome = "/db2inst/instance"
IndepthMonitor = 1
DatabaseName = TPCC1
)

DiskGroup FD_db2udb_dg1 (
DiskGroup = db2dg1_fd
)
DiskGroup FD_oracle_dg0 (
DiskGroup = oracle1_fd
)
DiskGroup FD_oracle_dg1 (
DiskGroup = oradata1_fd
)
IP FD_db2udb_ip (
Device = en0
Address = "10.140.xx.02"
NetMask = "255.255.xxx.0"
)
IP FD_oracle_ip (
Device = en0
Address = "10.140.xx.04"
NetMask = "255.255.xxx.0"
)
Mount FD_db2udb_mnt0 (
MountPoint = "/db2inst"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/base"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt1 (
MountPoint = "/db2inst/db/mnt1"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt2 (
MountPoint = "/db2inst/db/mnt2"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt3 (
MountPoint = "/db2inst/db/mnt3"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt4 (
MountPoint = "/db2inst/db/mnt4"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt5 (
MountPoint = "/db2inst/db/temp"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt6 (
MountPoint = "/db2inst/db/log"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/log"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_db2udb_mnt7 (
MountPoint = "/db2inst/backup"
BlockDevice = "/dev/vx/dsk/db2dg1_fd/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt0 (
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/oracle1_fd/base"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt1 (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradata1_fd/mnt1"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt2 (
MountPoint = "/oradata/tpcc/mnt2"
BlockDevice = "/dev/vx/dsk/oradata1_fd/mnt2"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt3 (
MountPoint = "/oradata/tpcc/mnt3"
BlockDevice = "/dev/vx/dsk/oradata1_fd/mnt3"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt4 (
MountPoint = "/oradata/tpcc/mnt4"
BlockDevice = "/dev/vx/dsk/oradata1_fd/mnt4"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt5 (
MountPoint = "/oradata/tpcc/log"
BlockDevice = "/dev/vx/dsk/oradata1_fd/log"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt6 (
MountPoint = "/oradata/tpcc/dbtemp"
BlockDevice = "/dev/vx/dsk/oradata1_fd/dbtemp"
FSType = vxfs
FsckOpt = " -y -o full "
)
Mount FD_oracle_mnt7 (
MountPoint = "/oradata/backup"
BlockDevice = "/dev/vx/dsk/oradata1_fd/backup"
FSType = vxfs
FsckOpt = " -y -o full "
)

Netlsnr FD_oracle_lsnr (
Owner = oracle
Home = "/oracle/orahome/"
TnsAdmin = "/oracle/orahome/network/admin/"
Listener @gl-ax06 = LISTENER_CLUSTSVCSND1
Listener @gl-ax07 = LISTENER_CLUSTSVCSND2
EnvFile = "/oracle/.profile"
LsnrPwd = DRHrDPgPG
)
Oracle FD_oracle (
Sid = tpcc
Owner = oracle
Home = "/oracle/orahome"
StartUpOpt = STARTUP
EnvFile = "/oracle/.profile"
DetailMonitor = 1
User = tpcc
Pword = aoeOamDmd
Table = test
)
SVCCopyServicesSnap FD_app_svccpsnap (
TargetResName = app_svccp
FCMapGroupName = FD_gmappgrp1cg
)
Volume FD_db2udb_vol00 (
Volume = base
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol01 (
Volume = mnt1
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol02 (
Volume = mnt2
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol03 (
Volume = mnt3
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol04 (
Volume = mnt4
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol05 (
Volume = dbtemp
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol06 (
Volume = log
DiskGroup = db2dg1_fd
)
Volume FD_db2udb_vol07 (
Volume = backup
DiskGroup = db2dg1_fd
)

Volume FD_oracle_vol00 (
Volume = base
DiskGroup = oracle1_fd
)
Volume FD_oracle_vol01 (
Volume = mnt1
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol02 (
Volume = mnt2
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol03 (
Volume = mnt3
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol04 (
Volume = mnt4
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol05 (
Volume = log
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol06 (
Volume = dbtemp
DiskGroup = oradata1_fd
)
Volume FD_oracle_vol07 (
Volume = backup
DiskGroup = oradata1_fd
)
FD_db2udb requires FD_db2udb_ip
FD_db2udb requires FD_db2udb_mnt0
FD_db2udb requires FD_db2udb_mnt1
FD_db2udb requires FD_db2udb_mnt2
FD_db2udb requires FD_db2udb_mnt3
FD_db2udb requires FD_db2udb_mnt4
FD_db2udb requires FD_db2udb_mnt5
FD_db2udb requires FD_db2udb_mnt6
FD_db2udb requires FD_db2udb_mnt7
FD_db2udb_dg1 requires FD_app_svccpsnap
FD_db2udb_mnt0 requires FD_db2udb_vol00
FD_db2udb_mnt1 requires FD_db2udb_mnt0
FD_db2udb_mnt1 requires FD_db2udb_vol01
FD_db2udb_mnt2 requires FD_db2udb_mnt0
FD_db2udb_mnt2 requires FD_db2udb_vol02
FD_db2udb_mnt3 requires FD_db2udb_mnt0
FD_db2udb_mnt3 requires FD_db2udb_vol03
FD_db2udb_mnt4 requires FD_db2udb_mnt0
FD_db2udb_mnt4 requires FD_db2udb_vol04
FD_db2udb_mnt5 requires FD_db2udb_mnt0
FD_db2udb_mnt5 requires FD_db2udb_vol05
FD_db2udb_mnt6 requires FD_db2udb_mnt0
FD_db2udb_mnt6 requires FD_db2udb_vol06
FD_db2udb_mnt7 requires FD_db2udb_mnt0
FD_db2udb_mnt7 requires FD_db2udb_vol07
FD_db2udb_vol00 requires FD_db2udb_dg1
FD_db2udb_vol01 requires FD_db2udb_dg1
FD_db2udb_vol02 requires FD_db2udb_dg1
FD_db2udb_vol03 requires FD_db2udb_dg1
FD_db2udb_vol04 requires FD_db2udb_dg1
FD_db2udb_vol05 requires FD_db2udb_dg1
FD_db2udb_vol06 requires FD_db2udb_dg1
FD_db2udb_vol07 requires FD_db2udb_dg1
FD_oracle requires FD_oracle_mnt0
FD_oracle requires FD_oracle_mnt1
FD_oracle requires FD_oracle_mnt2
FD_oracle requires FD_oracle_mnt3
FD_oracle requires FD_oracle_mnt4
FD_oracle requires FD_oracle_mnt5
FD_oracle requires FD_oracle_mnt6
FD_oracle requires FD_oracle_mnt7
FD_oracle_dg0 requires FD_app_svccpsnap
FD_oracle_dg1 requires FD_app_svccpsnap
FD_oracle_lsnr requires FD_oracle
FD_oracle_lsnr requires FD_oracle_ip
FD_oracle_mnt0 requires FD_oracle_vol00
FD_oracle_mnt1 requires FD_oracle_vol01
FD_oracle_mnt2 requires FD_oracle_mnt1
FD_oracle_mnt2 requires FD_oracle_vol02
FD_oracle_mnt3 requires FD_oracle_mnt1
FD_oracle_mnt3 requires FD_oracle_vol03
FD_oracle_mnt4 requires FD_oracle_mnt1
FD_oracle_mnt4 requires FD_oracle_vol04
FD_oracle_mnt5 requires FD_oracle_mnt1
FD_oracle_mnt5 requires FD_oracle_vol05
FD_oracle_mnt6 requires FD_oracle_mnt1
FD_oracle_mnt6 requires FD_oracle_vol06
FD_oracle_mnt7 requires FD_oracle_mnt1
FD_oracle_mnt7 requires FD_oracle_vol07
FD_oracle_vol00 requires FD_oracle_dg0
FD_oracle_vol01 requires FD_oracle_dg1
FD_oracle_vol02 requires FD_oracle_dg1
FD_oracle_vol03 requires FD_oracle_dg1
FD_oracle_vol04 requires FD_oracle_dg1
FD_oracle_vol05 requires FD_oracle_dg1
FD_oracle_vol06 requires FD_oracle_dg1
FD_oracle_vol07 requires FD_oracle_dg1
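
After the main.cf for each cluster has been edited, the configuration can be syntax-checked and the resources probed from a cluster node. A minimal check, assuming the default VCS configuration directory and the resource names shown above:

hacf -verify /etc/VRTSvcs/conf/config
# List the state of all service groups, then the SVC copy-services resource
hagrp -state
hares -state app_svccp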

Appendix B: Setting up the Database Applications


Setting up the DB2 Database Application
Create Groups
mkgroup -A id=310 db2adm
mkgroup -A id=311 db2fadm
Create Users
DB2INST=db2inst
mkdir /$DB2INST
mkuser id=320 pgrp=db2adm groups=db2adm home=/$DB2INST/instance db2inst
mkuser id=321 pgrp=db2fadm groups=db2fadm home=/$DB2INST/db2fenc db2fenc
passwd db2inst
passwd db2fenc
su to another (non-root) user and then su to each new user to confirm the passwords
su guest
su db2inst
su db2fenc
Edit /etc/services
Update /etc/services on all of the systems that will run the database by adding the following entries:
DB2_tpcc1 60000/tcp
DB2_tpcc1_1 60001/tcp
DB2_tpcc1_2 60002/tcp
DB2_tpcc1_END 60003/tcp
DB2_db2inst 60004/tcp
DB2_db2inst_1 60005/tcp
DB2_db2inst_2 60006/tcp
DB2_db2inst_END 60007/tcp
NOTE: "tpcc1" needs to match what the "SVCENAME" is set to for the database. (db2 get dbm cfg |
grep SVCENAME)
NOTE: "db2inst" needs to match the instance name. This is for the DAS instance.
Configure VxVM Disks
Successful completion of the procedures described in "Installing and Configuring SAN Volume
Controller" is a prerequisite for this step.
vxddladm enablescsi3 policy=aa
DG=db2dg1
DB2INST=db2inst
vxdg init $DG <disk-name> cds=off

# If you want simple volumes
vxassist -g $DG make base 2g layout=mirror &
vxassist -g $DG make dbtemp 2g layout=mirror &
vxassist -g $DG make mnt1 2g layout=mirror &
vxassist -g $DG make mnt2 2g layout=mirror &
vxassist -g $DG make mnt3 2g layout=mirror&
vxassist -g $DG make mnt4 2g layout=mirror&
vxassist -g $DG make log 20g layout=concat &
vxassist -g $DG make backup 6g layout=concat &
wait
vxvol -g $DG startall
# AIX mkfs and mount take the file system type through the -V flag
V=V
mkfs -$V vxfs /dev/vx/rdsk/$DG/base
mkfs -$V vxfs /dev/vx/rdsk/$DG/dbtemp
mkfs -$V vxfs /dev/vx/rdsk/$DG/mnt1
mkfs -$V vxfs /dev/vx/rdsk/$DG/mnt2
mkfs -$V vxfs /dev/vx/rdsk/$DG/mnt3
mkfs -$V vxfs /dev/vx/rdsk/$DG/mnt4
mkfs -$V vxfs /dev/vx/rdsk/$DG/backup
mkfs -$V vxfs /dev/vx/rdsk/$DG/log
mkdir -p /$DB2INST
mount -$V vxfs /dev/vx/dsk/$DG/base /$DB2INST
mkdir -p /$DB2INST/db/temp
mount -$V vxfs /dev/vx/dsk/$DG/dbtemp /$DB2INST/db/temp
mkdir -p /$DB2INST/db/mnt1
mkdir -p /$DB2INST/db/mnt2
mkdir -p /$DB2INST/db/mnt3
mkdir -p /$DB2INST/db/mnt4
mount -$V vxfs /dev/vx/dsk/$DG/mnt1 /$DB2INST/db/mnt1
mount -$V vxfs /dev/vx/dsk/$DG/mnt2 /$DB2INST/db/mnt2
mount -$V vxfs /dev/vx/dsk/$DG/mnt3 /$DB2INST/db/mnt3
mount -$V vxfs /dev/vx/dsk/$DG/mnt4 /$DB2INST/db/mnt4
mkdir -p /$DB2INST/db/log
mount -$V vxfs /dev/vx/dsk/$DG/log /$DB2INST/db/log
mkdir -p /$DB2INST/backup
mount -$V vxfs /dev/vx/dsk/$DG/backup /$DB2INST/backup
chown db2inst /$DB2INST /$DB2INST/db /$DB2INST/db/* /$DB2INST/backup
chgrp db2adm /$DB2INST /$DB2INST/db /$DB2INST/db/* /$DB2INST/backup
mkdir /$DB2INST/instance /$DB2INST/db2fenc
touch /$DB2INST/instance/.profile
echo "+ +" >/$DB2INST/dbinst/.rhosts
chown db2inst /$DB2INST/instance /$DB2INST/instance/.profile
chown db2fenc /$DB2INST/db2fenc
chgrp db2adm /$DB2INST/instance /$DB2INST/instance/.profile
chgrp db2fadm /$DB2INST/db2fenc
chown db2inst /opt/VRTSdb2ed
chgrp db2adm /opt/VRTSdb2ed
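
Before handing the file systems over to VCS, the disk group, volumes and mounts can be sanity-checked; a quick verification using the names defined above:

vxdg list db2dg1
vxprint -g db2dg1 -ht
df -k | grep /db2inst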

Database installation
On each system, install the DB2 software. Mount the DB2 software disk or have access to the DB2
software repository.
cd ese.dbcs

./db2_install -p ESE
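On AIX, db2_install should register the DB2 product as installp filesets, so the installation can be confirmed with lslpp before creating the instance; for example:

lslpp -l | grep -i db2
ls /opt/IBM/db2/V9.1/instance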
Database creation
Create instance (as root)
USER=db2inst
DB2INST=db2inst
[ -d /opt/IBM/db2/V9.1/instance ]
cd /opt/IBM/db2/V9.1/instance
[ ! -f /db2inst/instance/.profile ]
./db2idrop db2inst
./db2icrt -a server -p 70000 -s ese -u db2inst db2inst
Set up the database (db2inst)
su - db2inst
echo "PATH=$PATH:/opt/VRTSdb2ed/bin" >> .profile
exit
su - db2inst
Create database
DB2INST=db2inst
DBNAME=tpcc1
SVNAME=DB2_tpcc1
LOGDIR=/$DB2INST/db/log
BACKUP=/$DB2INST/backup
db2 update dbm cfg using diaglevel 4
db2 force applications all
db2stop
db2start
db2 drop database $DBNAME
(rm -f $LOGDIR/* $LOGDIR/NODE*/*.LOG $BACKUP/*)
db2stop
db2start
db2 create database $DBNAME on /$DB2INST/db/ "\
temporary tablespace managed by system using (\
'/$DB2INST/db/temp/tmp00',\
'/$DB2INST/db/temp/tmp01',\
'/$DB2INST/db/temp/tmp02') extentsize 16 prefetchsize 64"
db2 "update db cfg for $DBNAME using NEWLOGPATH $LOGDIR"
db2 "update db cfg for $DBNAME using SORTHEAP 2048"
db2 "update db cfg for $DBNAME using logretain on"
db2 "update db cfg for $DBNAME using LOGFILSIZ 2000"
db2 "update db cfg for $DBNAME using LOGPRIMARY 100"
db2 "update db cfg for $DBNAME using LOGSECOND 150"
db2set DB2COMM=tcpip
db2 "update dbm cfg using svcename $SVNAME"
# "tpcc1" needs to match up to the SVCENAME for the database
db2stop
db2start
exit $?
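A brief connectivity check after the database has been created; run as db2inst, using the database name defined above:

db2 connect to tpcc1
db2 get db cfg for tpcc1 | grep -i "path to log files"
db2 terminate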

Backup the database


DB2INST=db2inst
DBNAME=tpcc1
BACKUP=/$DB2INST/backup
db2 backup database $DBNAME to $BACKUP
exit $?
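The backup can be confirmed from the recovery history file and the backup file system; for example:

db2 list history backup all for tpcc1
ls -l /db2inst/backup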
Create table spaces and buffer pools
DB2INST=db2inst
DBNAME=tpcc1
db2 connect to tpcc1
db2 CREATE Bufferpool TPCC1_TS1 IMMEDIATE SIZE 65536 PAGESIZE 4 K
db2 CREATE REGULAR TABLESPACE TS1 PAGESIZE 4 K MANAGED BY DATABASE USING "(
FILE '/db2inst/db/mnt1/dfA1' 1900M ) EXTENTSIZE 16 PREFETCHSIZE 16 NO FILE SYSTEM
CACHING BUFFERPOOL TPCC1_TS1"
db2 CREATE Bufferpool TPCC1_TS2 IMMEDIATE SIZE 65536 PAGESIZE 4 K
db2 CREATE REGULAR TABLESPACE TS2 PAGESIZE 4 K MANAGED BY SYSTEM USING "(
'/db2inst/db/mnt2/dfB1') EXTENTSIZE 16 PREFETCHSIZE 16 NO FILE SYSTEM CACHING
BUFFERPOOL TPCC1_TS2"
db2 CREATE Bufferpool TPCC1_TS3 IMMEDIATE SIZE 65536 PAGESIZE 4 K
db2 CREATE REGULAR TABLESPACE TS3 PAGESIZE 4 K MANAGED BY DATABASE USING "(
FILE '/db2inst/db/mnt3/dfC1' 1900M ) EXTENTSIZE 16 PREFETCHSIZE 16 FILE SYSTEM
CACHING BUFFERPOOL TPCC1_TS3"
db2 CREATE Bufferpool TPCC1_TS4 IMMEDIATE SIZE 65536 PAGESIZE 4 K
db2 CREATE REGULAR TABLESPACE TS4 PAGESIZE 4 K MANAGED BY SYSTEM USING "(
'/db2inst/db/mnt4/dfD1') EXTENTSIZE 16 PREFETCHSIZE 16 FILE SYSTEM CACHING
BUFFERPOOL TPCC1_TS4"
db2 CONNECT RESET
db2 force application all
db2stop
db2start
exit $?
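To confirm that the buffer pools and table spaces were created as intended, something like the following can be run as db2inst:

db2 connect to tpcc1
db2 list tablespaces show detail
db2 "select bpname, npages, pagesize from syscat.bufferpools"
db2 terminate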

Setting up the Oracle Database Application


Create Groups
mkgroup -A id=410 oracle
mkgroup -A id=411 dba
Create Users
ORAINST=oracle
mkdir /$ORAINST
mkuser id=320 pgrp=dba groups=dba home=/$ORAINST/orahome oracle
passwd oracle
su to another (non-root) user and then su to the oracle user to confirm the password

su guest
su oracle

Configure VxVM Disks


vxddladm enablescsi3 policy=aa
DG=oracle1
vxdg init $DG <disk-name>_0 cds=off
vxassist -g $DG make base 15g &
V=V
mkfs -$V vxfs /dev/vx/rdsk/$DG/base
mkdir -p /$ORAINST
mount -$V vxfs /dev/vx/dsk/$DG/base /$ORAINST
mkdir /$ORAINST/orahome
touch /$ORAINST/orahome/.profile
echo "+ +" >/$ORAINST/orahome/.rhosts
chown -R oracle:dba /$ORAINST
DG1=oradata1
mkdir -p /$DG1
vxdg init $DG1 SAN_VC0_1 cds=off
#for simple volumes
vxassist -g $DG1 make mnt1 4g &
vxassist -g $DG1 make mnt2 4g &
vxassist -g $DG1 make mnt3 4g &
vxassist -g $DG1 make mnt4 6g &
vxassist -g $DG1 make log 20g &
vxassist -g $DG1 make dbtemp 4g &
vxassist -g $DG1 make backup 24g &
wait
mkfs -$V vxfs /dev/vx/rdsk/$DG1/mnt1
mkfs -$V vxfs /dev/vx/rdsk/$DG1/mnt2
mkfs -$V vxfs /dev/vx/rdsk/$DG1/mnt3
mkfs -$V vxfs /dev/vx/rdsk/$DG1/mnt4
mkfs -$V vxfs /dev/vx/rdsk/$DG1/log
mkfs -$V vxfs /dev/vx/rdsk/$DG1/dbtemp
mkfs -$V vxfs /dev/vx/rdsk/$DG1/backup
mount -$V vxfs /dev/vx/dsk/$DG1/mnt1 /$DG1
mkdir -p /$DG1/tpcc
mkdir -p /$DG1/tpcc/mnt2
mkdir -p /$DG1/tpcc/mnt3
mkdir -p /$DG1/tpcc/mnt4
mkdir -p /$DG1/tpcc/log
mkdir -p /$DG1/tpcc/dbtemp
mkdir -p /$DG1/backup

mount -$V vxfs /dev/vx/dsk/$DG1/mnt2 /$DG1/tpcc/mnt2
mount -$V vxfs /dev/vx/dsk/$DG1/mnt3 /$DG1/tpcc/mnt3
mount -$V vxfs /dev/vx/dsk/$DG1/mnt4 /$DG1/tpcc/mnt4
mount -$V vxfs /dev/vx/dsk/$DG1/log /$DG1/tpcc/log
mount -$V vxfs /dev/vx/dsk/$DG1/dbtemp /$DG1/tpcc/dbtemp
mount -$V vxfs /dev/vx/dsk/$DG1/backup /$DG1/backup
chown -R oracle:dba /$DG1
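
As with the DB2 disk group, the Oracle volumes and mounts can be verified before the database software is installed; for example:

vxprint -g oracle1 -ht
vxprint -g oradata1 -ht
df -k | grep oradata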

Database installation
On each system, install Oracle 10gR2. Mount the Oracle software disk or have access to the Oracle
software repository. Install Oracle as the oracle user. Edit the .profile file and set the following
environment variables, then follow the instructions in the installer GUI to complete the installation.
#su oracle
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/orahome
export ORACLE_SID=tpcc
export LIBPATH=/usr/dt/lib:$ORACLE_HOME/lib32:$ORACLE_HOME/lib
export NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
export PATH=$PATH:/oradata/tpcc/scripts:$ORACLE_BASE:$ORACLE_HOME:$ORACLE_HOME/bin
export DISPLAY=xhost:XX.0
$ . ./.profile
$ cd <oracle software disk path>/Disk1
$ ./runInstaller -ignoreSysPrereqs
Database creation
Copy the scripts listed below to a directory under /oracle. Log in as oracle and run build.sh as
shown. Ensure that the oracle user's environment has the ORACLE_SID variable set.
$ ./build.sh sys manager /oracle/scripts
build.sh
#!/usr/bin/ksh
Usage ()
{
print -u2 "
$0 syspassword systempassword scriptpath
Example : $0 sys manager /tmp/scripts
"
exit 2
}
if [ $# -ne 3 ]; then
Usage
fi
export DBNAME=$ORACLE_SID
export sysPassword=$1
export systemPassword=$2
export SCRPT_PATH=$3
if [ -z $DBNAME ]
then
print -u2 " Oracle SID (ORACLE_SID) is not set. Set it in the oracle users .profile and execute it.
Rerun command again.
"
exit 2
fi
if [ ! -x $SCRPT_PATH/build.sh ]
then
print -u2 " Script directory path : $SCRPT_PATH is not valid.
Rerun command with correct path.
"
exit 2
fi
mkdir -p /oradata/$DBNAME
mkdir -p /oradata/$DBNAME/bdump
mkdir -p /oradata/$DBNAME/cdump
mkdir -p /oradata/$DBNAME/create
mkdir -p /oradata/$DBNAME/hdump
mkdir -p /oradata/$DBNAME/pfile
mkdir -p /oradata/$DBNAME/udump
mkdir -p /oradata/backup
echo Add this entry in the oratab: $DBNAME:/oracle/orahome:Y
$ORACLE_HOME/bin/sqlplus /nolog @$SCRPT_PATH/build.sql $DBNAME $sysPassword
$systemPassword $SCRPT_PATH
build.sql
set verify off
DEFINE DBNAME = &1
DEFINE sysPassword = &2
DEFINE systemPassword = &3
DEFINE SCRPT_PATH = &4
host /oracle/orahome/bin/orapwd file=/oracle/orahome/dbs/orapw.&&DBNAME
password=&&sysPassword force=y
@&&SCRPT_PATH/CreateDB.sql
@&&SCRPT_PATH/CreateDBFiles.sql
@&&SCRPT_PATH/CreateDBCatalog.sql
host echo "SPFILE='/${SCRPT_PATH}/spfile${DBNAME}.ora'" >
/oracle/orahome/dbs/init${DBNAME}.ora
@&&SCRPT_PATH/postDBCreation.sql
CreateDB.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/CreateDB.log
shutdown immediate ;
startup nomount pfile="&&SCRPT_PATH/init.ora";
DEFINE DBNAME = &1
CREATE DATABASE &&DBNAME
CONTROLFILE REUSE
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE '/oradata/&&DBNAME/system01.dbf' SIZE 300M REUSE AUTOEXTEND ON NEXT
10240K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/oradata/&&DBNAME/sysaux01.dbf' SIZE 120M REUSE AUTOEXTEND ON
NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE
'/oradata/&&DBNAME/temp/temp01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 640K
MAXSIZE UNLIMITED
UNDO TABLESPACE "UNDOTBS1" DATAFILE '/oradata/&&DBNAME/undotbs01.dbf' SIZE 200M
REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('/oradata/&&DBNAME/log/redo01.log') SIZE 10240K REUSE,
GROUP 2 ('/oradata/&&DBNAME/log/redo02.log') SIZE 10240K REUSE
;
spool off
CreateDBFiles.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/CreateDBFiles.log
CREATE TABLESPACE "USERS1" LOGGING DATAFILE
'/oradata/&&DBNAME/mnt2/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT
AUTO ;
CREATE TABLESPACE "USERS2" LOGGING DATAFILE
'/oradata/&&DBNAME/mnt3/users02.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT
AUTO ;
CREATE TABLESPACE "USERS3" LOGGING DATAFILE
'/oradata/&&DBNAME/mnt4/users03.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT
AUTO ;
spool off
CreateDBCatalog.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/CreateDBCatalog.log
@/oracle/orahome/rdbms/admin/catalog.sql;
@/oracle/orahome/rdbms/admin/catblock.sql;
@/oracle/orahome/rdbms/admin/catproc.sql;
@/oracle/orahome/rdbms/admin/catoctk.sql;
@/oracle/orahome/rdbms/admin/owminst.plb;
connect SYSTEM/manager
@/oracle/orahome/sqlplus/admin/pupbld.sql;
connect SYSTEM/manager
set echo on
spool /oracle/orahome/assistants/dbca/logs/sqlPlusHelp.log
@/oracle/orahome/sqlplus/admin/help/hlpbld.sql helpus.sql;
spool off
postDBCreation.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/postDBCreation.log
create spfile='/oradata/tpcc/pfile/spfiletpcc.ora' FROM pfile='&&SCRPT_PATH/init.ora';
shutdown immediate;
host cp $SCRPT_PATH/init.ora /oracle/orahome/dbs/inittpcc.ora
connect SYS/&&sysPassword as SYSDBA
startup ;
select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;
execute utl_recomp.recomp_serial();
select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;
alter user scott account unlock ;
alter user scott identified by tiger replace tiger ;
create user tpcc identified by tpcc;
grant dba, connect, resource to tpcc;
disconnect
connect tpcc/tpcc
create table tpcc.test ( tstamp date );
insert into tpcc.test (tstamp) values (SYSDATE);
disconnect
connect tpcc/tpcc
update tpcc.test set tstamp = SYSDATE;
select TO_CHAR(tstamp, 'MON DD, YYYY HH:MI:SS AM') from tpcc.test;
exit;
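Once build.sh completes, the test table created by postDBCreation.sql gives a simple way to verify the database; a minimal check, assuming the tpcc user and table created above:

su - oracle
sqlplus tpcc/tpcc
select TO_CHAR(tstamp, 'MON DD, YYYY HH:MI:SS AM') from tpcc.test;
exit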
Listener.ora Files
Log in as oracle and run netca to configure the listener.
The listener.ora file used in this configuration is shown below.
# listener.ora Network Configuration File: /oracle/orahome/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER_ClustsvcPNd1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ClustsvcPNd1.abc.def.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)

LISTENER_ClustsvcPNd2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ClustsvcPNd2.abc.def.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
LISTENER_ClustsvcSNd1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ClustsvcSNd1.abc.def.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
LISTENER_ClustsvcSNd2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ClustsvcSNd2.abc.def.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
SID_LIST_TPCC =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/orahome)
(PROGRAM = extproc)
)
)
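
Outside of VCS control, the listeners can be started and checked manually with lsnrctl; under cluster control the Netlsnr resources defined in the main.cf bring them online. A manual check on the first primary node, for example:

su - oracle
lsnrctl start LISTENER_ClustsvcPNd1
lsnrctl status LISTENER_ClustsvcPNd1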

Appendix C: Veritas Software Filesets Listing


The following Veritas Storage Foundation and VCS agent packages are installed by the procedures
described in this white paper.
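On AIX, the installed Veritas filesets can be listed with lslpp and compared against the table below; for example:

lslpp -L "VRTS*"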
Package              Contents
VRTSaa               Veritas Enterprise Administrator Action Agent by Symantec
VRTSacclib.rte       Veritas Cluster Server ACC Library 5.0 by Symantec
VRTSalloc            Veritas Storage Foundation Intelligent Storage Provisioning by Symantec
VRTSat               Authentication client Fileset
VRTSat               Authentication server Fileset
VRTSccg              Veritas Enterprise Administrator Central Control Grid by Symantec
VRTScmccc.rte        Veritas Cluster Management Console Cluster Connector 5.0MP1 by Symantec
VRTScmcs.rte         Veritas Cluster Management Console for single cluster environments 5.0MP1 by Symantec
VRTScscm.rte         Veritas Cluster Manager - Java Console 5.0MP1 by Symantec
VRTScscw.rte         Veritas Cluster Server Configuration Wizard by Symantec
VRTScssim.rte        Veritas Cluster Server 5.0MP1 Simulator by Symantec
VRTScutil.rte        Symantec Veritas Cluster Utility
VRTSdcli             Veritas Distributed Command Line by Symantec
VRTSddlpr            Veritas Device Discovery Services Provider Library by Symantec
VRTSdsa              Veritas Datacenter Storage Agent by Symantec
VRTSfsdoc            Veritas File System Documentation by Symantec
VRTSfsman            Veritas File System Manual Pages by Symantec
VRTSfsmnd            Veritas File System SDK Manual Pages by Symantec
VRTSfspro            Veritas File System Services Provider by Symantec 5.0MP1t
VRTSfssdk            Veritas Libraries and Header Files for VxFS by Symantec
VRTSgab.rte          Veritas Group Membership and Atomic Broadcast 5.0MP1 by Symantec 11/29/2006-23
VRTSgapms            Veritas Generic Array Plugin
VRTSicsco            Symantec Infrastructure Core Services Common Fileset
VRTSjre15.rte        Symantec JRE Redistribution
VRTSllt.rte          Veritas Low Latency Transport 5.0MP1 by Symantec 11/29/2006-23
VRTSmapro            Veritas Storage Foundation Mapping Provider from Symantec, PSTAMP=05.22.2006.17.57, BUILD=GA01
VRTSmh               Veritas Enterprise Administrator Managed Host by Symantec
VRTSob               Veritas Enterprise Administrator Service
VRTSobc33            Veritas Enterprise Administrator Service
VRTSobgui            Veritas Enterprise Administrator Service
VRTSpbx              Symantec Private Branch Exchange Fileset
VRTSperl.rte         Perl 5.8.8 for Veritas
VRTSsmf              Symantec Service Management Framework Fileset
VRTSspt              Veritas Support Tools by Symantec
VRTSvail             Veritas Array Providers
VRTSvcs.doc          User Documentation for Veritas Cluster Server by Symantec
VRTSvcs.man          Manual Pages for Veritas Cluster Server by Symantec
VRTSvcs.msg.en_US    Veritas Cluster Server English Message Catalogs by Symantec
VRTSvcs.rte          Veritas Cluster Server 5.0MP1 by Symantec 11/29/2006-23
VRTSvcsag.rte        Veritas Cluster Server 5.0MP1 Bundled Agents by Symantec
VRTSvcsdb.rte        Veritas High Availability 5.0MP1 Agent for DB2 by Symantec
VRTSvcssvc.rte       Veritas Clustering Support for IBM SVC Copy Services(tm)
VRTSvcsor.rte        Veritas High Availability 5.0MP1 Agent for Oracle by Symantec
VRTSvcsvr            Veritas Cluster Server Volume Replicator Agents by Symantec
VRTSvdid             Veritas Device Identifier (VDID)
VRTSveki             Veritas Kernel Interface by Symantec
VRTSvlic             Symantec License Utilities
VRTSvmdoc            User Documentation for Veritas Volume Manager by Symantec
VRTSvmman            Manual pages for Veritas Volume Manager by Symantec
VRTSvmpro            Veritas Volume Manager Services Provider by Symantec
VRTSvrdoc            User Documentation for Veritas Volume Replicator by Symantec
VRTSvrpro            Veritas Volume Replicator Management Services Provider
VRTSvrw              Veritas Volume Replicator Web Console by Symantec
VRTSvsvc             Veritas Volume Server and Client Providers by Symantec
VRTSvxfen.rte        Veritas I/O Fencing 5.0MP1 by Symantec 11/29/2006-23
VRTSvxfs             Veritas File System by Symantec
VRTSvxmsa            Veritas - VxMS Mapping Service, Application Libraries
VRTSvxvm             Veritas Volume Manager by Symantec
VRTSweb.rte          Symantec Web Server

Trademarks and Special Notices


© International Business Machines Corporation 1994-2008. IBM, the IBM logo, System Storage, DB2, FlashCopy and
other referenced IBM products and services are trademarks or registered trademarks of International
Business Machines Corporation in the United States, other countries, or both. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
Storage Foundation Enterprise HA, Symantec, the Symantec logo, Veritas, and Veritas Storage
Foundation are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S.
and other countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this IBM product and use of those Web sites is at your own risk.
