
VERITAS Database Edition™ /
Advanced Cluster for Oracle9i RAC

Installation and Configuration Guide


Solaris

July 2002
N08843F
Disclaimer
The information contained in this publication is subject to change without notice.
VERITAS Software Corporation makes no warranty of any kind with regard to this
manual, including, but not limited to, the implied warranties of merchantability and
fitness for a particular purpose. VERITAS Software Corporation shall not be liable for
errors contained herein or for incidental or consequential damages in connection with the
furnishing, performance, or use of this manual.

Copyright
Copyright © 2002 VERITAS Software Corporation. All rights reserved. VERITAS,
VERITAS Software, the VERITAS logo, and all other VERITAS product names and slogans
are trademarks of VERITAS Software Corporation in the USA and/or other countries.
Other product names and/or slogans mentioned herein may be trademarks or registered
trademarks of their respective companies.
VERITAS Software Corporation
350 Ellis St.
Mountain View, CA 94043
Phone 650–527–8000
Fax 650–527–2908
www.veritas.com
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
How This Guide is Organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
VERITAS Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS Volume Manager Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS File System Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS Cluster Server Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
VERITAS Cluster Server Oracle Enterprise Agent . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Man Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Oracle Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Obtaining License Keys for DBE/AC for Oracle9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Using the VERITAS vLicense™ Web Site to Obtain a License Key . . . . . . . . . . xiv
Faxing the License Key Request Form to Obtain a License Key . . . . . . . . . . . . xiv
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv

Chapter 1. Overview: DBE/AC for Oracle9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1


What is RAC? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
RAC Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Oracle Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Operating System Dependent Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Cluster Manager (CM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Cluster Inter-Process Communication (IPC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Lock Management and Cache Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Shared Disk Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DBE/AC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Data Stack Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Communications Stack Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
DBE/AC Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Low Latency Transport (LLT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
IPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Traffic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Heartbeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Group Membership Services/Atomic Broadcast (GAB) . . . . . . . . . . . . . . . . . . . . . . . 6
Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Cluster Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Cluster Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
CVM Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Cluster File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
CFS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
CFS Usage in DBE/AC for RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Oracle Disk Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
ODM Clustering Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
VERITAS Cluster Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
VCS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
DBE/AC Service Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
DBE/AC OSD Layer Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Inter-Process Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Understanding Split Brain and the Need for I/O Fencing . . . . . . . . . . . . . . . . . . . . 13
SCSI-3 Persistent Reservations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
I/O Fencing Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Data Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
I/O Fencing Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Chapter 2. Installing the Component Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17


Installation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
After Installation View - A Cluster Running Oracle9i RAC . . . . . . . . . . . . . . . . . . . . 17
Phases of the Installation: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Phase One: Setting up and Configuring the Hardware . . . . . . . . . . . . . . . . . . . . 18
Phase Two: Installing DBE/AC and Configuring its Components . . . . . . . . . . 19
Phase Three: Installing Oracle9i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Phase Four: Creating the Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Phase Five: Setting up VCS to Manage RAC Resources . . . . . . . . . . . . . . . . . . . . 20
Installation and Configuration Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Prerequisites for Installing Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Preparing for Using the installDBAC Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
General Information Requested by Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Web-based Cluster Manager Information Requested by Script . . . . . . . . . . . . 23
SMTP/SNMP (Optional) Requested by Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Installing DBE/AC Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting the PATH Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting the MANPATH Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting Up System-to-System Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Mounting the VERITAS DBE/AC for Oracle9i RAC CD . . . . . . . . . . . . . . . . . . . . . . . 25
Running the VERITAS DBE/AC Install Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Verifying Systems for Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Configuring the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Configuring Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Configuring SMTP Email Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Configuring SNMP Trap Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Installing VCS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Configuring VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Starting VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Installing Base CVM and CFS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Configuring CVM and CFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Completing Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Verifying All Prerequisite Packages Are Installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Configuring VxVM Using vxinstall and Rebooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Verifying Storage Supports SCSI-3 Persistent Reservations . . . . . . . . . . . . . . . . . . . . . . 40
For EMC Symmetrix 8000 Series Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
For Hitachi Data Systems 99xx Series Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Verifying Shared Storage Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Using the vxfentsthdw Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Setting Up Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Requirements for Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Configuring a Disk Group Containing Coordinator Disks . . . . . . . . . . . . . . . . . . . . 44
Starting I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
The Contents of the /etc/vxfentab File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Replacing Failed Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Running gabconfig -a After Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Example VCS Configuration File After Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Example main.cf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Chapter 3. Installing Oracle9i Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51


Oracle9i Installation Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Prerequisites for Installing Oracle9i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Creating Shared Disk Groups and Volumes - General . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Determining Information About a Disk Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Setting the Connectivity Policy on a Shared Disk Group . . . . . . . . . . . . . . . . . . . . . 52
Enabling Write Access to the Volumes in the Disk Group . . . . . . . . . . . . . . . . . . . . 52
Creating Shared Disk Group and Volume for SRVM . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Installing Oracle9i Release 1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Installing Oracle9i Release 1 on Shared Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Alternate Method - Installing Oracle9i Release 1 Locally . . . . . . . . . . . . . . . . . . . . . 58
Installing Oracle9i Release 1 Patch Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Installing Oracle9i Release 2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Installing Oracle9i Release 2 on Shared Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Alternate Method - Installing Oracle9i Release 2 Locally . . . . . . . . . . . . . . . . . . . . . 66

Removing Temporary rsh Access Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Linking VERITAS ODM Libraries to Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Creating Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Chapter 4. Configuring VCS Service Groups for Oracle . . . . . . . . . . . . . . . . . . . . . .71


Checking cluster_database Flag in Oracle9i Parameter File . . . . . . . . . . . . . . . . . . . . . . 71
Configuring CVM and Oracle Groups in main.cf File . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
VCS Service Groups for RAC: Dependencies Chart . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Setting Attributes as Local for the CVM and Oracle Groups . . . . . . . . . . . . . . . . . . 73
Modifying the CVM Service Group in the main.cf File . . . . . . . . . . . . . . . . . . . . . . . 73
Adding Sqlnet, NIC, IP, CVMVolDg, and CFSMount Resources . . . . . . . . . . . . 73
Adding Oracle Resource Groups in the main.cf File . . . . . . . . . . . . . . . . . . . . . . . . . 76
Additional RAC Processes Monitored by the VCS Oracle Agent . . . . . . . . . . . . 77
Attributes of CVM and Oracle Groups that Must be Defined as Local . . . . . . . . . . 78
Modifying the VCS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Location of VCS and Oracle Agent Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

Chapter 5. I/O Fencing Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81


I/O Fencing of Shared Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
The Role of Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
How I/O Fencing Works in Different Event Scenarios . . . . . . . . . . . . . . . . . . . . . . . 82
The vxfenadm Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Registration Key Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

Chapter 6. Uninstalling DBE/AC for Oracle9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . . .87


Offlining the Oracle and Sqlnet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Removing the Oracle Database (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Uninstalling Oracle9i (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Offlining the Oracle Service Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Stopping VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Unmount Other VxFS File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

Unconfiguring the CFS Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Unconfiguring the I/O Fencing Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Removing Optional CFS and CVM Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Removing Required CFS and CVM Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Remove VxVM Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Uninstalling VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Removing License Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Remove Other Configuration Files (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

Chapter 7. Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Running Scripts for Engineering Support Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
getdbac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
getcomms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
hagetcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Troubleshooting Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Error When Starting an Oracle Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Missing Dialog Box During Installation of Oracle9i Release 1 . . . . . . . . . . . . . . . . . 98
Missing Dialog Box During Installation of Oracle9i Release 2 . . . . . . . . . . . . . . . . . 99
Instance Numbers Must be Unique (Error Code 205) . . . . . . . . . . . . . . . . . . . . . . . . 99
ORACLE_SID Must be Unique (Error Code 304) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Oracle Log Files Show Shutdown Called Even When Not Shutdown Manually 100
File System Configured Incorrectly for ODM Shuts Down Oracle . . . . . . . . . . . . 100
VCSIPC Errors in Oracle Trace/Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Shared Disk Group Cannot be Imported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
vxfentsthdw Fails When SCSI TEST UNIT READY Command Fails . . . . . . . . . . 102
CVMVolDg Does Not Go Online Even Though CVMCluster is Online . . . . . . . . 102
Restoring Communication Between Host and Disks After Cable Disconnection 102
Adding a System to an Existing Two-Node Cluster Causes Error . . . . . . . . . . . . 103
Removing Existing Keys From Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
System Panics to Prevent Potential Data Corruption . . . . . . . . . . . . . . . . . . . . . . . 105

How vxfen Driver Checks for Pre-existing Split Brain Condition . . . . . . . . . . 105
Case 1: System 2 Up, System 1 Ejected (Actual Potential Split Brain) . . . . . . . 106
Case 2: System 2 Down, System 1 Ejected (Apparent Potential Split Brain) . . 106
Using vxfenclearpre Command to Clear Keys After Split Brain . . . . . . . . . . . . . . . 107

Appendix A. Sample main.cf File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109


/etc/VRTSvcs/conf/sample_rac/main.cf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

Appendix B. CVMCluster, CVMVolDg, and CFSMount Agents . . . . . . . . . . . . . . . .115


CVMCluster Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
CVMCluster Agent Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
CVMCluster Agent Type Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
CVMCluster Agent Sample Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Configuring the CVMVolDg and CFSMount Resources . . . . . . . . . . . . . . . . . . . . . . . . 118
CVMVolDg Agent, Entry Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
CVMVolDg Agent Type, Attribute Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
CVMVolDg Agent Type Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Sample CVMVolDg Agent Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
CFSMount Agent, Entry Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
CFSMount Agent Type, Attribute Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
CFSMount Agent Type Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Sample CFSMount Agent Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Appendix C. Tunable Kernel Driver Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . .123


LMX Tunable Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Example: Configuring LMX Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
vxfen Tunable Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Appendix D. Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .127


LMX Error Messages, Critical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
LMX Error Messages, Non-Critical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

VxVM Errors Related to I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
VXFEN Driver Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
VXFEN Driver Informational Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Informational Messages When Node is Ejected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

Appendix E. Creating Starter Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


Using Oracle dbca to Create a Starter Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Preparing Tablespaces for Use With dbca . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Tablespace Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Creating Shared Raw Volumes for Tablespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Creating the Oracle Starter Database Using Oracle9i dbca . . . . . . . . . . . . . . . . . . . 136
Using VERITAS Scripts to Create a Starter Database . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Using VERITAS Scripts to Create Database on Raw Volumes . . . . . . . . . . . . . . . . 137
Using VERITAS Scripts to Create Database on Cluster File System . . . . . . . . . . . 139

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Preface
The VERITAS Database Edition/Advanced Cluster™ for Oracle9i RAC software is an
integrated set of software products. It enables administrators of Oracle Real Application
Clusters (RAC) to operate a database in an environment of cluster systems running
VERITAS Cluster Server™ (VCS) and the cluster features of VERITAS Volume Manager™
and VERITAS File System™, also known as CVM and CFS, respectively.
The VERITAS Database Edition/AC for Oracle9i RAC Installation and Configuration Guide is
intended for system administrators responsible for configuring and maintaining Oracle
Real Application Clusters running on a VCS cluster with VxVM disk management.
This guide assumes that the administrator has a:
◆ Basic understanding of system and database administration
◆ Working knowledge of the Solaris operating system
◆ Working knowledge of Oracle databases

How This Guide is Organized


Chapter 1 “Overview: DBE/AC for Oracle9i RAC” describes the components of Oracle
RAC and the DBE/AC components in support of Oracle Real Application Clusters.
Chapter 2 “Installing the Component Packages” describes the sequence of steps to install
the required software. The “Installation and Configuration Flowchart” on page 20
provides a visual summary of the installation activities.
Chapter 3 “Installing Oracle9i Software” describes how to set up shared storage and install
the Oracle9i software. Installation is typically on shared storage, but procedures for installing
on local systems are also provided.
Chapter 4 “Configuring VCS Service Groups for Oracle” describes configuring VCS
service groups for Oracle, the database, and the listener process in parallel mode. It
describes configuring the agents within the services group.
Chapter 5 “I/O Fencing Scenarios” describes I/O fencing.
Chapter 6 “Uninstalling DBE/AC for Oracle9i RAC” describes how to uninstall and
remove the components.


Chapter 7 “Troubleshooting” discusses typical problems and provides recommendations


for solving them. Also, you can use the scripts described in this chapter to generate
information about your systems that VERITAS support personnel can use to assist you.
Appendix A “Sample main.cf File” provides a sample VCS configuration file main.cf,
which includes an example configuration of the Oracle and CVM service groups.
Appendix B “CVMCluster, CVMVolDg, and CFSMount Agents” provides details for the
CVMCluster agent configured when you run the installDBAC script. It also describes
the CVMVolDg and CFSMount agents that must be configured after installation.
Appendix C “Tunable Kernel Driver Parameters” describes tunable parameters for the
LMX and VXFEN drivers.
Appendix D “Error Messages” lists LMX and VXFEN messages that you may encounter.
Appendix E “Creating Starter Databases” contains procedures for creating starter
databases using dbca or VERITAS scripts on raw shared volumes or in VxFS file systems.

VERITAS Documentation
The following documents provide important information on the software components
that support CVM, VxFS, and VCS. They are provided as PDF files on the VERITAS
Storage Solutions 3.5 CDs.

VERITAS Volume Manager Documentation


After installation, find the PDF versions of these documents in /opt/VRTSvxvm/docs.
VERITAS Volume Manager 3.5 Installation Guide, Solaris
VERITAS Volume Manager 3.5 Storage Administrator Administrator’s Guide
VERITAS Volume Manager Administrator’s Guide
VERITAS Volume Manager 3.5 Troubleshooting Guide
VERITAS Volume Manager 3.5 Hardware Notes, Solaris

VERITAS File System Documentation


After installation, find the PDF versions of these documents in /opt/VRTSfsdoc.
VERITAS File System 3.5 Installation Guide
VERITAS File System 3.5 System Administrator’s Guide


VERITAS Cluster Server Documentation


After installation, find the PDF versions of these documents in /opt/VRTSvcsdc.
VERITAS Cluster Server 3.5 Installation Guide
VERITAS Cluster Server 3.5 User’s Guide
VERITAS Cluster Server 3.5 Bundled Agents Reference Guide
VERITAS Cluster Server 3.5 Agent Developer’s Guide

VERITAS Cluster Server Oracle Enterprise Agent


On the VERITAS Storage Solutions 3.5 CD containing enterprise agents, find a PDF version
of this document:
VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide

Man Pages
See “Setting the MANPATH Variable” on page 24 to enable access to manual pages.

Oracle Documentation
The following documents, not shipped with VERITAS Database Edition/AC for Oracle9i
RAC, provide necessary related information:
◆ Oracle9i Installation Guide
◆ Oracle9i Parallel Server Setup and Configuration Guide
◆ Notes for the latest Oracle9i Data Server patch set

Obtaining License Keys for DBE/AC for Oracle9i RAC


DBE/AC for Oracle RAC is a licensed software product. The installDBAC program
prompts you for a license key for each system. You cannot use your VERITAS software
product until you have completed the licensing process. Use either method described in
the following two sections to obtain a valid license key.

Using the VERITAS vLicense™ Web Site to Obtain a License Key

You can obtain your license key most efficiently using the VERITAS vLicense web site.
The License Key Request Form (LKRF) has all the information needed to establish a User
Account on vLicense and generate your license key. The LKRF is a one-page insert
included with the CD in your product package. You must have this form to obtain a
software license key for your VERITAS product.

Note Do not discard the License Key Request Form. If you have lost or do not have the form
for any reason, email license@veritas.com.

The License Key Request Form contains information unique to your VERITAS software
purchase. To obtain your software license key, you need the following information shown
on the form:
◆ Your VERITAS customer number
◆ Your order number
◆ Your serial number
Follow the appropriate instructions on the vLicense web site to obtain your license key
depending on whether you are a new or previous user of vLicense:

1. Access the web site at http://vlicense.veritas.com.

2. Log in or create a new login, as necessary.

3. Follow the instructions on the pages as they are displayed.


When you receive the generated license key, you can proceed with installation.

Faxing the License Key Request Form to Obtain a License Key


If you do not have Internet access, you can fax the LKRF to VERITAS. Be advised that
a faxed form generally requires several business days to process before a license key can be
provided.
Before faxing, sign and date the form in the appropriate spaces. Fax it to the number
shown on the form.

Conventions

Typeface          Usage

courier           computer output, files, attribute names, device names, and directories
courier (bold)    user input and commands, keywords in grammar syntax
italic            new terms, titles, emphasis
italic            variables

Symbol            Usage

%                 C shell prompt
$                 Bourne/Korn shell prompt
#                 root user prompt (for all shells)

Technical Support
For assistance with any VERITAS product, contact Technical Support:
U.S. and Canada: call 1-800-342-0652.
Europe, the Middle East, or Asia: visit the Technical Support Web site at
http://support.veritas.com for a list of each country’s contact information.
Software updates, Tech Notes, product alerts, and hardware compatibility lists, are also
available from http://support.veritas.com.
To learn more about VERITAS and its products, visit http://www.veritas.com.

Chapter 1. Overview: DBE/AC for Oracle9i RAC
What is RAC?
Real Application Clusters (RAC) is a parallel database environment that takes advantage
of the processing power of multiple, interconnected computers. A cluster comprises two
or more computers, also known as nodes or servers. In RAC environments, all nodes
concurrently run Oracle instances and execute transactions against the same database.
RAC coordinates each node’s access to the shared data to provide consistency and
integrity. Each node adds its processing power to the cluster as a whole and can increase
overall throughput or performance.
RAC serves as an important component of a robust high availability solution. A properly
configured Real Application Clusters environment can tolerate failures with minimal
downtime and interruption to users. With clients accessing the same database on multiple
nodes, failure of a node does not completely interrupt access, as clients accessing the
surviving nodes continue to operate. Clients attached to the failed node simply reconnect
to a surviving node and resume access. Recovery after failure in a RAC environment is far
quicker than in a failover database configuration, because another instance is already up and running.
Recovery is simply a matter of applying outstanding redo log entries from the failed node.

RAC Architecture
At the highest level, RAC is multiple Oracle instances, accessing a single Oracle database
and carrying out simultaneous transactions. An Oracle database is the physical data
stored in tablespaces on disk. An Oracle instance represents the software processes
necessary to access and manipulate the database. In traditional environments, only one
instance accesses a database at a specific time. With Oracle RAC, multiple instances
communicate to coordinate access to a physical database, greatly enhancing scalability
and availability.


[Figure: RAC architecture. OCI clients connect through listeners to Oracle9i RAC instances
on each node. The instances communicate over a high-speed interconnect and, under VCS
control, access the shared database (datafiles and control files, index and temp files, online
redo logs, and archive logs) through the ODM, CFS, and CVM layers.]

Oracle Instance
The Oracle instance consists of a set of processes and shared memory space that provide
access to the physical database. The instance consists of server processes acting on behalf
of clients to read data into shared memory as well as make modifications of it, and
includes background processes to write changed data out to disk.
In an Oracle9i RAC environment, multiple instances access the same database. This
requires significant coordination between the instances to keep each instance’s view of the
data consistent.


Operating System Dependent Layer


Oracle9i RAC relies on several support services provided by VCS. The most important
are cluster membership, carried out by the cluster manager, and inter-node
communication. The actual implementation of these functions is described later in this
chapter.

Cluster Manager (CM)


The Cluster Manager provides a global view of the cluster and all nodes in it. CM is
responsible for determining cluster membership and enforcing protection of data by
preventing non-cluster nodes from corrupting stored data.

Cluster Inter-Process Communication (IPC)


RAC relies heavily on an underlying high-speed interprocess communication (IPC)
mechanism. IPC defines the protocols and interfaces required for the RAC environment to
transfer messages between instances.

Lock Management and Cache Fusion


Lock management coordinates access by multiple instances to the same data to maintain
data consistency and integrity. Cache Fusion provides memory-to-memory transfers of
data blocks between RAC instances. Cache Fusion transfers are faster than transfers made
by writing to and then reading from disk.

Shared Disk Subsystems


RAC also requires that all nodes have simultaneous access to database storage. This gives
multiple instances concurrent access to the same database.


DBE/AC Overview
Database Edition/Advanced Cluster provides a complete I/O and communications stack
to support Oracle9i RAC. It also provides monitoring and management of instance
startup and shutdown. The following section describes the overall data and
communications flow of the DBE/AC stack.

Data Stack Overview

[Figure: Data stack. On each server, the Oracle9i RAC instance processes (LGWR, ARCH,
CKPT, DBWR, and Server) perform disk I/O to the shared redo log files, archive storage, and
data and control files through the ODM, CFS, and CVM layers.]
The diagram above details the overall data flow from an instance running on a server to
the shared storage. The various Oracle processes making up an instance (such as DB
Writer, Log Writer, Checkpoint, Archiver, and Server) read and write data to storage via
the I/O stack shown in the diagram. Oracle communicates via the Oracle Disk Manager
(ODM) interface to the VERITAS Cluster File System (CFS), which in turn accesses the
storage via the VERITAS Cluster Volume Manager (CVM). Each of these components in
the data stack is described in this chapter.


Communications Stack Overview

[Figure: Communications stack. On each node, the data stack (RAC, ODM, CFS, CVM) and
the VCS core exchange traffic with their peers through GAB and LLT: cache fusion and lock
management for RAC, datafile management for ODM, file system metadata for CFS, volume
management for CVM, and cluster state for VCS.]

The diagram above shows the data stack as well as the communications stack. Each of the
components in the data stack requires communications with its peer on other systems to
properly function. RAC instances must communicate to coordinate protection of data
blocks in the database. ODM processes must communicate to coordinate data file
protection and access across the cluster. CFS coordinates metadata updates for file
systems, and finally CVM must coordinate the status of logical volumes and distribution
of volume metadata across the cluster. VERITAS Cluster Server (VCS) controls starting
and stopping of components in the DBE/AC stack and provides monitoring and
notification on failure. VCS must communicate status of its resources on each node in the
cluster. For the entire system to work, each layer must reliably communicate.
The diagram also shows Low Latency Transport (LLT) and the Group Membership
Services/Atomic Broadcast (GAB), which make up the communications package central
to the operation of DBE/AC. During an operational steady state, the only significant
traffic through LLT and GAB is due to Lock Management and Cache Fusion, while the
traffic for the other data is relatively sparse.


DBE/AC Communications
DBE/AC communications consist of LLT and GAB.

Low Latency Transport (LLT)


LLT provides fast, kernel-to-kernel communications, and monitors network connections.
LLT functions as a high performance replacement for the IP stack. LLT runs directly on
top of the Data Link Protocol Interface (DLPI) layer. The use of LLT rather than IP
removes latency and overhead associated with the IP stack. LLT has several major
functions.

IPC
RAC Inter-Process Communications (IPC) uses the VCSIPC shared library for interprocess
communication. In turn, VCSIPC uses LMX, an LLT multiplexer, to provide fast data
transfer between Oracle processes on different cluster nodes. IPC leverages all of LLT’s
features.

Traffic Distribution
LLT distributes (load balances) inter-node communication across all available private
network links. All cluster communications are evenly distributed across as many as eight
network links for performance and fault resilience. On failure of a link, traffic is redirected
to remaining links.
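
LLT link configuration is read at startup from the /etc/llttab file on each node. A minimal
sketch for one node of a two-link cluster follows; the node name, cluster number, and qfe
interface names are illustrative and vary by site:

   set-node north
   set-cluster 2
   link qfe0 /dev/qfe:0 - ether - -
   link qfe1 /dev/qfe:1 - ether - -

With both link directives present, LLT load balances traffic across qfe0 and qfe1 and, if
either link fails, redirects traffic to the surviving link.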

Heartbeat
LLT is responsible for sending and receiving heartbeat traffic over network links. This
heartbeat is used by the Group Membership Services function of GAB to determine
cluster membership.

Group Membership Services/Atomic Broadcast (GAB)


The Group Membership Services/Atomic Broadcast protocol (GAB) is responsible for
Cluster Membership and Cluster Communications, as described below.


Cluster Membership
A distributed system such as DBE/AC requires all nodes to be aware of which nodes are
currently participating in the cluster. Nodes can leave or join the cluster because of
shutting down, starting up, rebooting, powering off, or faulting.
Cluster membership is determined using LLT heartbeats. When systems no longer receive
heartbeats from a peer for a predetermined interval, then a protocol is initiated to exclude
the peer from the current membership. When systems start receiving heartbeats from a
peer that is not part of the membership, then a protocol is initiated for enabling it to join
the current membership.
The new membership is consistently delivered to all nodes and actions specific to each
module are initiated. For example, following a node fault, the Cluster Volume Manager
must initiate volume recovery, and Cluster File System must do a fast parallel file system
check.
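
Current port memberships can be checked with the gabconfig command (see “Running
gabconfig -a After Installation” on page 46). Illustrative output for a healthy two-node
cluster follows; membership 01 means nodes 0 and 1 are both members, and the generation
numbers shown are hypothetical:

   # gabconfig -a
   GAB Port Memberships
   ===============================================================
   Port a gen 4a1c0001 membership 01
   Port h gen d8850002 membership 01
   Port o gen f6090006 membership 01

Port a is GAB itself, port h is VCS (HAD), and port o is the VCSMM module described later
in this chapter.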

Cluster Communications
GAB’s second function is to provide reliable cluster communications, used by many
DBE/AC modules. GAB provides guaranteed delivery of point-to-point messages and
broadcast messages to all nodes. Point-to-point messaging uses a send and acknowledgment. Atomic
Broadcast ensures all systems within the cluster receive all messages. If a failure occurs
while transmitting a broadcast message, GAB’s atomicity ensures that, upon recovery, all
systems have the same information.
[Figure: GAB messaging. Servers exchange GAB messages over redundant NICs on the
private network: cluster membership and state, datafile management, file system metadata,
and volume management.]

Cluster Volume Manager


Cluster Volume Manager is an extension of VERITAS Volume Manager (VxVM), the
industry standard storage virtualization platform. CVM extends the concepts of VxVM
across multiple nodes. Each node sees the same logical volume layout, and more
importantly, the same state of all volume resources.
In a DBE/AC cluster, all storage is managed with standard VxVM commands from one
node in the cluster. All other nodes immediately recognize any changes in disk group and
volume configuration with no interaction. CVM supports all performance enhancing
capabilities, such as striping, mirroring, and mirror break-off (snapshot) for off host
backup.

CVM Architecture
Cluster Volume Manager is designed with a master/slave architecture. One node in the
cluster acts as the configuration master for logical volume management, and all others are
slaves. Any node can take over as master if the existing master fails. The CVM master is
established on a per cluster basis.
Since CVM is an extension of VxVM, it operates in a very similar fashion. The volume
manager configuration daemon, vxconfigd, maintains the configuration of logical
volumes. Any changes to volumes are handled by vxconfigd, which updates the
operating system at the kernel level when the new volume state is determined. For
example, if a mirror of a volume fails, it is detached from the volume and the error is
passed off to vxconfigd, which then determines the proper course of action, updates the
new volume layout, and informs the kernel of a new volume layout. CVM extends this
behavior across multiple nodes in a master-slave architecture. Changes to a volume are
propagated to the master vxconfigd. The vxconfigd process on the master pushes these
changes out to slave vxconfigd processes, each of which in turn updates the local kernel.
CVM does not impose any write locking between nodes. Each node is free to update any
area of the storage. All data integrity is the responsibility of the upper-level application. From
an application perspective, logical volumes are accessed identically on a stand-alone
system as on a CVM system.
CVM imposes a “Uniform Shared Storage” model. All systems must be connected to the
same disk sets for a given disk group. Any system unable to see the same set of physical
disks as the other nodes for a given disk group cannot import the disk group. If a node loses
contact with a specific disk, it is excluded from participating in the use of that disk.
CVM uses GAB and LLT for transport of all its configuration data.
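
Whether a node is the CVM master or a slave can be checked with the vxdctl command;
the master name shown here is illustrative:

   # vxdctl -c mode
   mode: enabled: cluster active - MASTER
   master: north

Shared disk groups are created from the master with the standard vxdg command and its
-s (shared) option, for example vxdg -s init oradatadg c2t1d0s2, where the disk group
and device names are illustrative.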


Cluster File System


The VERITAS Cluster File System is an extension of the industry standard VERITAS File
System (VxFS). CFS allows the same file system to be simultaneously mounted on
multiple nodes. Unlike other clustered file systems, CFS is a true SAN file system. All I/O
takes place over the storage area network. Coordination between nodes is enabled by
messages sent across the cluster interconnects.

CFS Architecture
CFS is designed with a master/slave architecture. Though any node can initiate an
operation to create, delete, or resize data, the actual operation is carried out by the master
node. Since CFS is an extension of VxFS, it operates in a very similar fashion. As with
VxFS, it caches metadata and data in memory (typically called buffer cache or vnode
cache). A distributed locking mechanism, called the Global Lock Manager (GLM), is used
for metadata and cache coherency across the multiple nodes. GLM provides a way to
ensure all nodes have a consistent view of the file system. When any node wishes to read
data, it requests a shared lock. If another node wishes to write to the same area of the file
system, it must request an exclusive lock. The GLM revokes all shared locks before
granting the exclusive lock and informs the reading nodes that their data is no longer
valid. If a node requests a shared lock for data that is exclusively held by another node, in
addition to revoking the exclusive lock, GLM sends the data over the cluster interconnects
to the requesting node.
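
A VxFS file system is mounted as a cluster file system by adding the -o cluster mount
option. A sketch, with an illustrative disk group, volume, and mount point:

   # mount -F vxfs -o cluster /dev/vx/dsk/orasrvm/orahome /oracle

Running the same mount command on each node gives all nodes a coherent view of
/oracle, with GLM keeping cached data and metadata consistent.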

CFS Usage in DBE/AC for RAC


CFS is used in DBE/AC to manage a file system in a large database environment. When
used in DBE/AC for Oracle9i RAC, Oracle accesses data files stored on CFS file systems
with the ODM interface. This essentially bypasses the file system buffer and file system
locking for data. This means that only Oracle buffers data and coordinates writes to files;
the GLM is minimally used with the ODM interface. A single point of locking
and buffering ensures maximum performance.

Oracle Disk Manager


The Oracle Disk Manager (ODM) is a standard API created by Oracle for doing database
I/O. For example, when Oracle wishes to write, it calls the odm_io function. ODM
improves both performance and manageability of the file system.
ODM improves performance by providing direct access for the database to the underlying
storage without passing through the actual filesystem interface. This means the database
sees performance equivalent to using raw devices. The administrator sees the storage as

easy-to-manage file systems, including the support of resizing data files while in use.
ODM improves manageability by allowing Oracle to directly invoke operations such as
resizing data files while the database is online.
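
Oracle picks up the ODM implementation from the libodm9.so library in
$ORACLE_HOME/lib. The supported procedure is in “Linking VERITAS ODM Libraries to
Oracle” on page 69; in outline, and with the 64-bit library path shown here as an
assumption that depends on the Oracle release:

   $ cd $ORACLE_HOME/lib
   $ mv libodm9.so libodm9.so.orig
   $ ln -s /usr/lib/sparcv9/libodm.so libodm9.so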

ODM Clustering Extensions


All Oracle Disk Manager features have been extended to operate in a cluster environment.
Nodes communicate before performing any operation that could potentially affect
another node. For example, before creating a new data file with a specific name, ODM
checks with other nodes to see if the file name is already in use.

VERITAS Cluster Server


VCS in the DBE/AC environment performs as a director of operations. It controls startup
and shutdown of the component layers of RAC. In the DBE/AC configuration, the RAC
service groups run as parallel service groups. VCS does not attempt to migrate a failed
service group, but it can be configured to restart it on failure. VCS also notifies users of
any failures.
DBE/AC provides specific agents for VCS to operate in a DBE/AC environment,
including CVM, CFS and Oracle.

VCS Architecture
VCS communicates the status of resources running on each system to all systems in the
cluster. The High Availability Daemon, or “HAD,” is the main VCS daemon running on
each system. HAD collects all information about resources running on the local system,
forwarding it to all other systems in the cluster. It also receives information from all other
cluster members to update its own view of the cluster.
Each type of resource supported in a cluster is associated with an agent. An agent is an
installed program designed to control a particular resource type. Each system runs
necessary agents to monitor and manage resources configured to run on that node. The
agents communicate with HAD on the node. HAD distributes its view of resources on the
local node to other nodes in the cluster using GAB and LLT.
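
HAD’s cluster-wide view can be displayed with the hastatus command. Abbreviated,
illustrative output for a two-node cluster:

   # hastatus -summary
   -- SYSTEM STATE
   -- System        State          Frozen
   A  north         RUNNING        0
   A  south         RUNNING        0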


DBE/AC Service Groups


DBE/AC uses parallel service groups to support RAC. There is one CVM/Infrastructure
group per server. This group has the CVM resource and the necessary resources for
support of CFS. This group also contains all common components needed by Oracle to
support RAC. This includes a shared ORACLE_HOME directory and the Oracle Net
Services process (LISTENER).
For each Oracle database, there is a corresponding service group. This service group includes
the database on shared storage and the supporting CVM and CFS resources.

[Figure: DBE/AC service groups. In the parallel CVM service group, the Sqlnet resource
depends on CFSMount and IP; IP depends on NIC; CFSMount depends on CFSfsckd and
CVMVolDg; CFSfsckd depends on CFSQlogckd; and CVMVolDg depends on CVMCluster.
In each Oracle service group, the Oracle database resource depends on CFSMount
resources, which depend on a CVMVolDg resource.]

DBE/AC OSD Layer Support


DBE/AC OSD layer support includes the VCSMM and IPC components.

Cluster Membership
Oracle defines an API for providing membership information to RAC. This is known as
skgxn (system kernel generic interface node membership). Oracle RAC expects to make
specific skgxn calls and get the required membership information. In DBE/AC this is
implemented as a library linked to Oracle when 9i RAC is installed. The skgxn library in
turn makes ioctl calls to a kernel module for membership information.
This module is known as VCSMM (VCS membership module). Oracle uses the linked in
skgxn library to communicate with VCSMM, which obtains membership information
from GAB.

Inter-Process Communications
In order to coordinate access to a single database by multiple instances, Oracle uses
extensive communications between nodes and instances. Oracle uses Inter-Process
Communications (IPC) for locking traffic and Cache Fusion.
DBE/AC uses LLT to support IPC in a cluster and leverages its high performance and
fault resilient capabilities.
Oracle has a defined API for Inter-Process Communication that isolates Oracle from the
underlying transport mechanism. Oracle communicates between processes on instances,
and does not have to know how the data is moved between systems. The Oracle API for
IPC is referred to as System Kernel Generic Interface Inter-Process Communications
(skgxp).
DBE/AC provides a library linked to Oracle at install time to implement the skgxp
functionality. This module communicates with the LLT Multiplexer (LMX) via ioctl calls.
The LMX module is a kernel module designed to receive communications from the skgxp
library and pass them to the correct process on the correct instance on other nodes. The module
“multiplexes” communications between multiple related processes into a single
multithreaded LLT port between systems. LMX leverages all features of LLT, including load
balancing and fault resilience.
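
Whether the LMX kernel module is loaded on a node can be checked with the standard
Solaris modinfo command; the module id, address, and description string below are
illustrative:

   # modinfo | grep lmx
    24 60cf2000 4a80 237 1 lmx (LLT Mux '3.5')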

I/O Fencing
I/O fencing is a new feature of DBE/AC, designed to guarantee data integrity, even in the
case of faulty cluster communications causing a split brain condition.


Understanding Split Brain and the Need for I/O Fencing


Split brain is an issue faced by all cluster solutions. To provide high availability, the cluster
must be capable of taking corrective action when a node fails. In DBE/AC, this is carried
out by the reconfiguration of CVM, CFS and RAC to change membership. Problems arise
when the mechanism used to detect the failure of a node breaks down. The symptoms
look identical to a failed node. For example, if a system were to fail, it would stop sending
heartbeats over the private interconnects and the remaining nodes would take corrective
action. However, the failure of the private interconnects would present identical
symptoms. In this case, both nodes would determine that their peer has departed and
attempt to take corrective action. This typically results in data corruption when both
nodes attempt to take control of data storage in an uncoordinated manner.
In addition to a broken set of private networks, other scenarios can cause this situation. If
a system were so busy as to appear hung, it would be declared dead. This can also happen
on systems where the hardware supports a “break” and “resume” function. Dropping the
system to PROM level with a break and subsequently resuming means the system could be
declared dead, the cluster could re-form, and when the system returns, it could begin
writing again.
DBE/AC uses a technology called I/O fencing to remove the risk associated with split
brain. I/O fencing blocks access to storage from specific nodes. This means even if the
node is alive, it cannot cause damage.

SCSI-3 Persistent Reservations


DBE/AC uses a relatively new enhancement to the SCSI specification, known as SCSI-3
Persistent Reservations (SCSI-3 PR). SCSI-3 PR is designed to resolve the issues of using
SCSI reservations in a modern clustered SAN environment. SCSI-3 PR supports multiple
nodes accessing a device while at the same time blocking access to other nodes. SCSI-3 PR
reservations are persistent across SCSI bus resets and SCSI-3 PR also supports multiple
paths from a host to a disk. By contrast, SCSI-2 reservations can only be used by one host,
with one path. This means if there is a need to block access for data integrity concerns,
only one host and one path remain active. The requirements for larger clusters, with
multiple nodes reading and writing to storage in a controlled manner have made SCSI-2
reservations obsolete.
SCSI-3 PR uses a concept of registration and reservation. Systems wishing to participate
register a “key” with a SCSI-3 device. Each system registers its own key. Multiple systems
registering keys form a membership. Registered systems can then establish a reservation.
This is typically set to “Write Exclusive Registrants Only” (WERO). This means registered
systems can write, and all others cannot.
With SCSI-3 PR technology, blocking write access is as simple as removing a registration
from a device. Only registered members can “eject” the registration of another member. A
member wishing to eject another member issues a “preempt and abort” command that
ejects another node from the membership. Nodes not in the membership cannot issue this
command. Once a node is ejected, it cannot in turn eject another. This means ejecting is
final and “atomic”.
In the DBE/AC implementation, a node registers the same key for all paths to the device.
This means that a single preempt and abort command ejects a node from all paths to the
storage device.
There are several important concepts to summarize here:
◆ Only a registered node can eject another
◆ Since a node registers the same key down each path, ejecting a single key blocks all
I/O paths from the node
◆ Once a node is ejected, it has no key registered and it cannot eject others.
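To make the sequence concrete, here is a purely illustrative sketch (the service action
names come from the SCSI-3 specification; the key values are invented for this example):

galaxy: REGISTER key A            (galaxy joins the membership)
nebula: REGISTER key B            (nebula joins the membership)
galaxy: RESERVE, type WERO        (registrants may write; all others are blocked)
galaxy: PREEMPT AND ABORT key B   (nebula is ejected and can no longer write or eject)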
The SCSI-3 PR specification simply describes the method to control access to disks with
the registration and reservation mechanism. The method to determine who can register
with a disk and when a registered member should eject another node is implementation
specific. The following paragraphs describe DBE/AC I/O fencing concepts and
implementation.

I/O Fencing Components


I/O Fencing, or simply fencing, allows write access to members of the active cluster and
blocks access to non-members. I/O fencing in DBE/AC uses several components. The
physical components are coordinator disks and data disks. Each has a unique purpose and
uses different physical disk devices.

Data Disks
Data disks are standard disk devices used for data storage. These can be physical disks or
RAID Logical Units (LUNs). These disks must support SCSI-3 PR. Data disks are
incorporated in standard VxVM/CVM disk groups. In operation, CVM is responsible for
fencing data disks on a disk group basis. Since VxVM enables I/O fencing, several other
features are provided. Disks added to a disk group are automatically fenced, as are new
paths discovered to a device.

Coordinator Disks
Coordinator disks are special purpose disks in a DBE/AC environment. Coordinator
disks are three standard disks, or LUNs, that are set aside for use by I/O fencing during
cluster reconfiguration.


The coordinator disks act as a global lock device during a cluster reconfiguration. This
lock mechanism is used to determine who gets to fence off data drives from other nodes.
From a high level, a system must eject a peer from the coordinator disks before it can fence
the peer from the data drives. This concept of racing for control of the coordinator disks to
gain the capability to fence data disks is key to understanding the split brain prevention
capability of fencing.
Coordinator disks cannot be used for any other purpose in the DBE/AC configuration.
The user may not store data on these disks, or include the disks in a disk group used by
user data. The coordinator disks can be any three disks that support SCSI-3 PR. VERITAS
typically recommends the smallest possible LUNs for coordinator use.

I/O Fencing Operation


I/O fencing provided by the kernel-based fencing module performs identically on node
failures and communications failures. When the fencing module on a node is informed of
a change in cluster membership by the GAB module, it immediately begins the fencing
operation. The node immediately attempts to eject the key for departed node(s) from the
coordinator disks using the preempt and abort command. When the node has successfully
ejected the departed nodes from the coordinator disks, it ejects the departed nodes from
the data disks. If this were a split brain scenario, both sides of the split would be “racing”
for control of the coordinator disks. The side winning the majority of the coordinator disks
wins the race and fences the loser. The loser then panics and reboots.

Chapter 2. Installing the Component Packages
Installation Overview
The following few paragraphs introduce what’s involved in installing and configuring a
DBE/AC cluster with Oracle9i RAC, and describe a completed installation.

After Installation View - A Cluster Running Oracle9i RAC


From a high level, when installation of VERITAS DBE/AC for Oracle9i RAC and Oracle9i
is completed and the starter database is created, the RAC cluster should have the
following characteristics:
◆ Two systems are connected by two VCS private network links using Ethernet
controllers on each system. The links are cross-over Ethernet cables, or independent
hubs, for each VCS communication network. Hubs are powered from separate
sources. Each system uses two independent network cards to provide redundancy.
◆ Each system is connected to the shared storage devices through a switch.
◆ Each system runs VERITAS Cluster Server software, VERITAS Volume Manager with
cluster features (CVM), VERITAS Cluster File System with cluster features (CFS), and
the DBE/AC agents and components, including I/O fencing.
◆ Each system has a local VxVM with a “rootdg.” This could be an encapsulated root
drive or an internal disk. (VERITAS does not support using the rootdg for storage
shared between systems. Oracle 9i RAC software must be placed in another disk
group.)
◆ The Oracle9i RAC software is installed on the shared storage accessible to both
systems. It could also have been installed locally, but this document features a shared
installation.
◆ The Oracle9i RAC database is configured on the shared storage (cluster file system or
raw) available to both systems.
◆ VCS is configured such that agents direct and manage the resources required by
Oracle9i RAC, which runs in parallel on both systems.


Phases of the Installation: An Overview


The installation and configuration of a DBE/AC cluster with Oracle9i RAC consists of
five phases.

Phase One: Setting up and Configuring the Hardware


Prior to installing the DBE/AC software you need a basic hardware setup in place. This
document does not describe the steps to install this hardware, but you can find the
supported hardware detailed in the VERITAS DBE/AC for Oracle9i RAC Release Notes.
✔ The two systems are up and running the same Solaris operating system and
connected to the public network.
✔ Two to four Ethernet links directly connect the systems to form a private network that
handles direct system-to-system communication. Two switches may also be used for
the private network links.
✔ Shared storage is set up that both systems can access through a switch. Note that this
storage must support SCSI-3 persistent reservations.

[Figure: Basic two-node hardware setup. Clients connect to Server A and Server B over
the LAN; the two servers are joined by a private network and attach through a switch to
the SAN disk arrays, which include the coordinator disks.]


Phase Two: Installing DBE/AC and Configuring its Components


This guide describes the procedures for installing DBE/AC. From a high
level view, when you install DBE/AC, you are:
✔ Running the installDBAC script to install VCS, CVM, and CFS. Included with this
installation is the VCS Oracle enterprise agent for Oracle. The script is interactive.
✔ Running the vxinstall utility to initialize VxVM and encapsulate the local disk in a
root disk group. You do not initialize the shared disks at this time.
✔ Rebooting the systems.
✔ Running a script called vxfentsthdw to verify shared storage can be I/O fenced.
✔ Setting up the coordinator disks used by the I/O fencing feature into a disk group.
✔ Starting I/O fencing.
This chapter describes these procedures.

Phase Three: Installing Oracle9i


When the DBE/AC components are installed and configured, you can install the Oracle9i
software, either Oracle9i Release 1 or Oracle9i Release 2. In either case, installing Oracle9i
includes:
✔ Creating a disk group and a shared volume or a cluster file system for the Oracle9i
SRVM component.
✔ Creating a disk group and a shared volume or a cluster file system for the Oracle9i
RAC software.
✔ Installing Oracle on the shared storage by running the Oracle9i installer.

Note Oracle9i Release 1 requires a special procedure prior to installing the software.
Carefully follow the procedure “Installing Oracle9i Release 1 Software” on page 54.

✔ Linking the Oracle software to the VERITAS ODM libraries.


Installing Oracle9i is described in “Installing Oracle9i Software” on page 51.

Phase Four: Creating the Database


There are many ways to create a database. This guide describes how to create a starter
database on raw volumes within a VERITAS VxVM disk group or on a VERITAS Cluster
File System, using either the Oracle9i dbca utility or scripts provided by VERITAS. These
procedures are provided in case you don’t have database creation tools available. Refer to
“Creating Starter Databases” on page 133.


Phase Five: Setting up VCS to Manage RAC Resources


VCS uses a configuration file, called main.cf, to manage the resources used by the
cluster. A basic VCS configuration file is created during installation. After Oracle is
installed and the database is created, the main.cf file must be modified to reflect the new
resources and their configuration. The examples used in Chapter 4 describe:
✔ Editing the CVM service group to define the location of the Oracle9i binaries, the listener
process, and the resources on which they depend.
✔ Creating an Oracle database service group to define the database and the resources it
depends on.

Installation and Configuration Flowchart


The steps to install and configure VERITAS DBE/AC for Oracle9i RAC, in order, are:

1. Set the PATH and MANPATH variables (see this chapter).

2. Mount the CD and enter: cd /cdrom/database_ac_for_oracle9i

3. Run installDBAC to install the VCS, VxVM, and VxFS packages.

4. Run chk_dbac_pkgs to verify the installation.
   Repeat steps 1 through 4 on each system.

5. Run vxinstall (see the VxVM System Administrator’s Guide).

6. Reboot.
   Repeat steps 5 and 6 on each system.

7. Verify SCSI-3 PR support for all shared disks, including the coordinator disks
   (see this chapter).

8. Create and configure the coordinator disk group.

9. Create /etc/vxfendg.

10. Start the I/O fence driver.
    Repeat steps 7 through 10 on each system.

11. Create shared disk groups and volumes for Oracle (see Chapter 3 and the VxVM
    System Administrator’s Guide).

12. Install Oracle9i and the Oracle9i patch (see Chapter 3 and the Oracle Installation
    Guide).

13. Link Oracle to the VERITAS ODM libraries (see Chapter 3).

14. Create an Oracle starter database - optional (see Chapter 3 and the Oracle
    Installation Guide).

15. Edit the CVM group (see Chapter 4).

16. Edit the Oracle group (see Chapter 4 and the VCS Oracle Enterprise Agent
    Installation and Configuration Guide).

Prerequisites for Installing Components


✔ Obtain license key to use VERITAS Database Edition for Advanced Cluster. Refer to
“Obtaining License Keys for DBE/AC for Oracle9i RAC” on page xiv.
✔ Make sure your cluster meets the necessary hardware and software requirements.
Refer to the VERITAS DBE/AC for Oracle9i RAC Release Notes.
✔ Each system in the cluster requires the following local disk space:
- For the installation of VERITAS DBE/AC for Oracle9i RAC:

/ 450 MB
/tmp 100MB for the duration of the installation

- For the installation of Oracle9i, each local system requires approximately 4.6
gigabytes. For the installation of Oracle9i Release 1, a portion of the disk space,
approximately 2 GB, is required for copies of the Oracle9i installation disks.
✔ Remove any versions of VCS and unmount the file systems (VxFS) that currently run
on the systems where you intend to install VERITAS DBE/AC for Oracle9i RAC. The
VERITAS DBE/AC for Oracle9i RAC product requires the versions of VCS, VxVM, and
VxFS contained on the CD. Refer to the installation guide corresponding to each
product for uninstallation instructions. If you are running earlier versions of VCS, VxVM,
or VxFS, uninstall VCS first, then upgrade to release 3.5 using the installDBAC utility.
✔ Obtain and install the Sun SPARCstorage Array (SUNWsan) package on each node.
✔ Obtain and install the Solaris 8 patches required by VxVM on each node; they
include 108528-14, 108827-19, 109529-06, 110722-01, and 111413-06.
✔ Obtain and install the Solaris 8 patches required by VxFS on each node: 108528-02 and
108901-03.
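To check whether a given patch is already installed, you can search the Solaris patch
list; for example, using one of the VxVM patch IDs listed above:
# showrev -p | grep 108528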


Preparing for Using the installDBAC Script


The installDBAC script is interactive. It prompts you for information about the cluster
where you are installing VERITAS DBE/AC for Oracle9i RAC. Before you start the script,
have the following information at hand:

General Information Requested by Script


✔ The name of the cluster (must begin with a letter of the alphabet: a-z, A-Z)
✔ The ID number of the cluster (each cluster within a site requires a unique ID)
✔ The host names of the systems in the cluster
✔ The device names for the two private network links
✔ A valid license key for VERITAS DBE/AC for Oracle9i RAC

Web-based ClusterManager Information Requested by Script


✔ The name of the public NIC for the VCS ClusterManager
✔ The virtual IP address for the NIC used by ClusterManager
✔ The netmask for the virtual IP address

SMTP/SNMP (Optional) Requested by Script


✔ Domain-based address of SMTP server
✔ Email address of SMTP notification recipients
✔ Severity level of events for SMTP notification
✔ System name for the SNMP console.
✔ Severity level of events for SNMP notification

Note The installation of VCS using the installDBAC installation utility is essentially the
same as that described in the VERITAS Cluster Server Installation Guide. The VCS
Installation Guide contains more extensive details about VCS installation.


Installing DBE/AC Packages


The installDBAC script installs VERITAS Cluster Server (VCS), VERITAS Volume
Manager (VxVM), VERITAS File System (VxFS), on each system. Refer to the preceding
“Installation and Configuration Flowchart.”

Setting the PATH Variable


The installation and other commands are located in various directories. On each system,
add these directories to your PATH environment variable using the following commands:
If you use the Bourne Shell (sh or ksh):
$ PATH=/sbin:/usr/sbin:/usr/bin:/usr/lib/vxvm/bin:\
/usr/lib/fs/vxfs:/opt/VRTSvxfs/sbin:/opt/VRTSvcs/bin:\
/opt/VRTSvcs/rac/bin:/opt/VRTSob/bin:$PATH; export PATH
If you use the C Shell (csh or tcsh):
% setenv PATH /sbin:/usr/sbin:/usr/bin:/usr/lib/vxvm/bin:\
/usr/lib/fs/vxfs:/opt/VRTSvxfs/sbin:/opt/VRTSvcs/bin:\
/opt/VRTSvcs/rac/bin:/opt/VRTSob/bin:$PATH

Note For root user, do not define paths to a cluster file system in the
LD_LIBRARY_PATH variable. For example, define ORACLE_HOME/lib in
LD_LIBRARY_PATH for user oracle only.

The path to /opt/VRTSob/bin is optional; it is required only if you install the optional
package, VERITAS Enterprise Administrator.

Setting the MANPATH Variable


Set the MANPATH variable to enable viewing manual pages:
If you use the Bourne Shell (sh or ksh):
$ MANPATH=/usr/share/man:/opt/VRTS/man; export MANPATH
If you use the C Shell (csh or tcsh):
% setenv MANPATH /usr/share/man:/opt/VRTS/man


Setting Up System-to-System Communication


Each node must allow the other remote rsh access, at least during installation and disk
verification. On each system, placing a “+” character on the first line of the file /.rhosts
grants remote access to the system running the install program. You
can also edit the hosts.equiv file to limit the remote access to specific systems. Refer to
the manual page for hosts.equiv files for more information about these files.
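As a minimal sketch, using the host names from this guide’s examples, you could run the
following on galaxy, then repeat the equivalent steps on nebula. (Note that the echo
command overwrites any existing /.rhosts file; edit the file instead if it already exists.)
# echo "+" > /.rhosts
# rsh nebula uname -n
nebula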

Note Remove the remote rsh access permissions when they are no longer required for
installation and disk verification. See “Removing Temporary rsh Access
Permissions” on page 69.

Mounting the VERITAS DBE/AC for Oracle9i RAC CD


Insert the CD containing the VERITAS DBE/AC for Oracle9i RAC software in a CD-ROM
drive connected to the system. The Solaris volume-management tool automatically
mounts the CD.
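If the volume-management daemon is not running, you can mount the CD manually; for
example (the CD-ROM device name shown is an assumption and may differ on your
system):
# mkdir -p /cdrom/database_ac_for_oracle9i
# mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom/database_ac_for_oracle9i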
With the CD mounted, you are ready to start installing the packages.

Running the VERITAS DBE/AC Install Script


To start the installation of VERITAS DBE/AC:

1. Log in as root user on one of the systems for installation.

2. Enter the following commands to start installDBAC:


# cd /cdrom/database_ac_for_oracle9i
# ./installDBAC

3. At the prompt, enter the names of the two systems for installation:
Enter the names of the systems on which VCS is to be installed
separated by spaces (example: system1 system2): galaxy nebula

Analyzing . . .


Verifying Systems for Installation

4. The utility begins by checking that the systems are ready for installation. On the
system where you are running the utility, the utility verifies that:

a. Sufficient disk space exists to install the packages

b. The required Solaris patches and the Solaris SUNWsan package are installed

Note You can ignore warnings about recommended optional patches that are not present.

c. User has root permissions


The utility exits with a brief message if a system is not ready for the installation.

5. Before installing any packages, the utility checks whether the VERITAS license
package, VRTSvlic, is present on the installation system.
- If the VRTSvlic package is not present, the installation utility installs it.
- If the installed VRTSvlic package is not the current version, the utility prompts
you to upgrade to the current version. The utility exits if you decline to upgrade.

6. The utility repeats the checks performed in step 4 and step 5 on the second system.

7. The utility checks for a VERITAS DBE/AC license key on the first system. If it cannot
find a key, it prompts you to enter one:
You do not have a Database Edition for Advanced Cluster license
installed on galaxy.

Do you want to add a license key for this product on galaxy ?


[Y/N](Y) y

Enter your license key : XXXX-XXXX-XXXX-XXXX-XXXX-XXX


Registering license for galaxy.

You do not have a Database Edition for Advanced Cluster license


installed on nebula.

Do you want to add a license key for this product on nebula ?


[Y/N](Y) y

Enter your license key : XXXX-XXXX-XXXX-XXXX-XXXX-XXX


Registering license for nebula.


8. When you are prompted to continue VERITAS Cluster Server installation, press
Return to accept the default (Y) or enter “Y.”
You can skip VERITAS Cluster server installation, if you need to
install only VERITAS Volume manager and VERITAS Filesystem

Would you like to continue VCS installation. [Y/N](Y)? Y


Starting VCS installation.

9. For each system, the script checks system to system communications:


Checking for ssh on galaxy ............................ not found
Verifying communication with nebula ............. ping successful
Attempting rsh with nebula ....................... rsh successful
Checking OS version on galaxy. ........................ SunOS 5.8
Creating /tmp subdirectory on galaxy... /tmp subdirectory created
Checking OS version on nebula ......................... SunOS 5.8

Using /usr/bin/rsh and /usr/bin/rcp to communicate with nebula


Communication check completed successfully

10. The VCS portion of the installation utility verifies the license status of each node.
VCS licensing verification:

Checking galaxy .............found DBED/AC Node Locked key


Do you want to enter a new key for galaxy? (N) n
Using the existing DBED/AC Node Locked key located on galaxy.

Checking nebula .............found DBED/AC Node Locked key


Using the existing DBED/AC Node Locked key located on nebula.

DBED/AC licensing completed successfully

11. When you are ready to start installation of VCS, enter “Y” when prompted:
Are you ready to start the Cluster installation now? (Y) Y

12. For each system, the VCS portion of the utility checks if any of the packages to be
installed are present. If any are found, the utility continues if there are no conflicts or
exits if packages must be manually uninstalled.

13. For each system, the utility checks for the necessary file system space.

14. For each system, the utility checks whether any DBE/AC processes are running. Any
running processes are stopped.


Configuring the Cluster

15. The script highlights the information required to proceed with installation of VCS:
To configure VCS the following is required:

A unique Cluster name


A unique Cluster ID number between 0-255
Two NIC cards on each system used for private network
heartbeat links

Are you ready to configure DBED/AC on these systems? (Y)

Note When the installDBAC installation script shows a single response within
parentheses, it is the default response. Press Return to choose the default response.

Enter the unique Cluster Name: vcs_racluster


Enter the unique Cluster ID number between 0-255: 7

16. After you enter the cluster ID number, the program discovers and lists all NICs on the
first system:
Discovering NICs on galaxy: ..... discovered hme0 qfe0 qfe1
qfe2 qfe3

Enter the NIC for the first private network heartbeat link on
galaxy: (hme0 qfe0 qfe1 qfe2 qfe3) qfe0

Enter the NIC for the second private network heartbeat link on
galaxy: (hme0 qfe1 qfe2 qfe3) qfe1

Are you using the same NICs for private heartbeat links on all
systems? (Y)
If you answer “N,” the program prompts for the NICs of each system.

17. The installation program asks you to verify the cluster information:
Cluster information verification:

Cluster Name: vcs_racluster


Cluster ID Number: 7
Private Network Heartbeat Links for galaxy: link1=qfe0 link2=qfe1
Private Network Heartbeat Links for nebula: link1=qfe0 link2=qfe1

Is this information correct? (Y)


If you enter “N,” the program returns to the screen describing the installation
requirements (step 15).


Configuring Cluster Manager

18. The installation program describes the information required to configure Cluster
Manager (Web Console):
The following information is required to configure the Cluster
Manager:

A public NIC used by each system in the cluster


A Virtual IP address and netmask for the Cluster Manager

Do you want to configure the Cluster Manager (Web Console) (Y)


Press Return to configure Cluster Manager (Web Console) on the systems. You can
enter “N” to skip configuring Cluster Manager, in which case the program advances
you to the screen enabling you to configure SMTP notification.

19. In reply to the prompts that follow, confirm that you want to use the discovered
public NIC on the first system. Also indicate whether all systems use the same public
NIC. Press Return to choose the default responses.
Active NIC devices discovered on galaxy: hme0
Enter the NIC for the Cluster Manager (Web Console) to use on
galaxy: (hme0)
Is hme0 the public NIC used for all systems (Y)

20. Enter the virtual IP address to be used by Cluster Manager:


Enter the Virtual IP address for the Cluster Manager:
10.180.88.199
You can confirm the default netmask, or enter another:
Enter the netmask for IP 10.180.88.199: (255.0.0.0)

21. The installation program asks you to verify the Cluster Manager information:
Cluster Manager (Web Console) verification:

NIC: hme0
IP: 10.180.88.199
Netmask: 255.0.0.0

Is this information correct? (Y)


To indicate the information is correct, press Return. Otherwise, enter “n” and the
program returns to the screen describing the Cluster Manager requirements (step 18).


Configuring SMTP Email Notification

22. The installation program describes the information required to configure the SMTP
notification feature of VCS:
The following information is required to configure SMTP
notification:

The domain based address of the SMTP server


The email address of each SMTP recipient
A minimum severity level of messages to send to each
recipient

Do you want to configure SMTP notification? (Y) y


You can enter “N” and skip configuring SMTP notification. The program advances
you to the screen enabling you to choose SNMP notification (see step 25).

23. Respond to the prompts and provide information to configure SMTP notification.
Enter the domain based address of the SMTP server (example:
smtp.yourcompany.com) smtp.xyzstar.com
Enter the email address of the SMTP recipient ozzie@xyzstar.com
Enter the minimum severity of events for which mail should be
sent to ozzie@xyzstar.com: [I=Information, W=Warning, E=Error,
S=SevereError] w
Would you like to add another SMTP recipient? (N) y
Enter the email address of the SMTP recipient harriet@xyzstar.com
Enter the minimum severity of events for which mail should be
sent to harriet@xyzstar.com: [I=Information, W=Warning,
E=Error, S=SevereError] e
Would you like to add another SMTP recipient? (N)

24. Verify that the SMTP notification information is correct.


SMTP notification verification:

SMTP Address: smtp.xyzstar.com


Recipient: ozzie@xyzstar.com receives email for Warning or
higher events
Recipient: harriet@xyzstar.com receives email for Error or
higher events

Is this information correct? (Y)


Press Return if the information is correct. If it is incorrect, enter “N.” The program
returns to the screen describing SMTP notification requirements (see step 22). You can
bypass configuration of SMTP notification or reenter the information.


Configuring SNMP Trap Notification

25. The installation program describes the information required to configure the SNMP
notification feature of VCS:
The following information is required to configure SNMP
notification:

System names of SNMP console(s) to receive DBED/AC trap messages


SNMP trap daemon port numbers for each console
A minimum severity level of messages to send to each console

Do you want to configure SNMP notification? (Y)


You can enter “N” and skip configuring SNMP notification. The installation program
starts installation; see step 28.

26. Respond to the prompts and provide information to configure SNMP notification.
Enter the SNMP trap daemon port (162)
Please enter the SNMP console system name saturn
Enter the minimum severity of events for which SNMP traps should
be sent to saturn: [I=Information, W=Warning, E=Error,
S=SevereError] e
Would you like to add another SNMP console? (N) y
Please enter the SNMP console system name jupiter
Enter the minimum severity of events for which SNMP traps should
be sent to jupiter: [I=Information, W=Warning, E=Error,
S=SevereError] s
Would you like to add another SNMP console? (N)

27. Verify that the SNMP notification information is correct.


SNMP notification verification:

SNMP Port: 162


Console: saturn receives SNMP traps for Error or higher events
Console: jupiter receives SNMP traps for SevereError or higher
events

Is this information correct? (Y)


Press Return if the information is correct. If the information is incorrect, enter “N” and
the program returns to the screen describing the SNMP notification requirements (see
step 25). You can either bypass configuration of SNMP notification or reenter the
information.


Installing VCS Packages

28. After you have verified that the information you have entered is correct, the
installation program begins installing the packages on the first system:
Installing VCS on galaxy:

Installing VRTSvlic package ........................... Done


Installing VRTSperl package ........................... Done
Installing VRTSllt package ............................ Done
Installing VRTSgab package ............................ Done
Installing VRTSvcs package ............................ Done
Installing VRTSvcsmg package .......................... Done
Installing VRTSvcsag package .......................... Done
Installing VRTSweb package ............................ Done
Installing VRTSvcsw package ........................... Done
Installing VRTSvcsdc package .......................... Done
Installing VRTSvcsmn package .......................... Done
Installing VRTSvcsor package .......................... Done
Installing VRTSdbac package ........................... Done
The same packages are installed on each machine in the cluster:
Installing VCS on nebula:

Copying VRTSvlic binaries ............................. Done


Installing VRTSvlic package ........................... Done
Copying VRTSperl binaries ............................. Done
Installing VRTSperl package ........................... Done
.
.
.

Package installation completed successfully

Configuring VCS

29. The installation program continues by creating configuration files and copying them
to each system:
Configuring DBED/AC ..................................... Done
Copying DBED/AC configuration files to galaxy ........... Done
Copying DBED/AC configuration files to nebula ........... Done

Configuration files copied successfully


Starting VCS

30. You can now start VCS and its components on each system:
Do you want to start the cluster components now? (Y)
.
.
.

Starting DBED/AC on galaxy

Starting LLT ...................................... Started


Starting GAB ...................................... Started
Starting DBED/AC .................................. Started
Starting DBED/AC drivers........................... Started

Starting DBED/AC on nebula

Starting LLT ...................................... Started


Starting GAB ...................................... Started
Starting DBED/AC .................................. Started
Starting DBED/AC drivers........................... Started


31. When the utility completes the installation of VCS, it reports on:
- The creation of backup configuration files (named with the extension
“init.name_of_cluster”). For example, the file /etc/llttab in the installation
example has a backup file named /etc/llttab.init.vcs_racluster.
- The URL and the login information for Cluster Manager (Web Console), if you
chose to configure it. Typically this resembles:
http://10.180.88.199:8181/vcs
You can access the Web Console using the User Name: “admin” and the
Password: “password”. Use the /opt/VRTSvcs/bin/hauser command to add
new users.
- The location of a VCS installation report, which is named:
/var/VRTSvcs/installvcsReport.name_of_cluster

32. When installation is complete, you see the message:


DBED/AC installation completed successfully
Successful installation of VCS

Installing Base CVM and CFS Packages

33. After the script installs and starts VCS on each cluster system, it prompts you about
installing the base VERITAS DBE/AC packages, which include those for the Cluster
Volume Manager (CVM) and Cluster File System (CFS) components.
The installDBAC script installs required packages followed by the optional
packages that you choose. The base packages are installed first on the first system
entered at the start of installation.
Installing on galaxy machine.

Starting installation of VERITAS Database Edition for Advanced


Cluster on galaxy

Kernel Config parameters are set.

The following packages need to be installed


for VERITAS Database Edition for Advanced Cluster.

VRTSvxvm is required
VRTSvxfs is required
VRTSglm is required
VRTSgms is required
VRTSodm is required


VRTScavf is required
VRTSvmman is Optional
VRTSvmdoc is Optional
VRTSfsdoc is Optional
VRTSob is Optional
VRTSvmpro is Optional
VRTSfspro is Optional
VRTSobgui is Optional

Continue? [y,n,?,q] y

34. Installation of the packages listed proceeds when you answer “y.” First, the required
packages are installed. After each package is installed, press Return to install the next
required package.
Checking existing package installation ...
Installing VRTSvxvm.
Installation of VRTSvxvm................................Done
Installing VRTSvxfs.
.
.
.

35. When prompted about installing the optional packages, you can choose which
packages you want to install. For example:
.
.
.
Installing VRTSvmman.
system VRTSvmman VERITAS Volume Manager, Manual Pages
This package is optional. Install? [y,n,?,q] y
Installation of VRTSvmman................................Done
.
.
.


36. The script completes installation on the first system:


Completed Installation of VERITAS database Edition for Advanced
Cluster.
The following packages were installed:

VRTSobgui
VRTSfspro
VRTSvmpro
VRTSob
VRTSfsdoc
VRTSvmdoc
VRTSvmman
VRTScavf
VRTSodm
VRTSgms
VRTSglm
VRTSvxfs
VRTSvxvm

Successfully Installed Foundation products on galaxy

37. Installation continues on the next system:


Installing on nebula machine.
Starting installation of VERITAS Database Edition for Advanced
Cluster on nebula.

Kernel Config parameters are set.

The following packages need to be installed


for VERITAS Database Edition for Advanced Cluster.

VRTSvxvm is required
VRTSvxfs is required
VRTSglm is required
VRTSgms is required
VRTSodm is required
VRTScavf is required
VRTSvmman is Optional
VRTSvmdoc is Optional
VRTSfsdoc is Optional
VRTSob is Optional
VRTSvmpro is Optional
VRTSfspro is Optional
VRTSobgui is Optional


Continue? [y,n,?,q] y

Checking existing package installation ...


Copying VRTSvxvm binaries ...
Installing VRTSvxvm.
Installation of VRTSvxvm...............................Done
Copying VRTSvxfs binaries ...
.
.

Completed Installation of VERITAS Database Edition for Advanced


Cluster.
The following packages were installed:
VRTSobgui
VRTSfspro
VRTSvmpro
VRTSob
VRTSfsdoc
VRTSvmdoc
VRTSvmman
VRTScavf
VRTSodm
VRTSgms
VRTSglm
VRTSvxfs
VRTSvxvm

Configuring CVM and CFS

38. The install script prompts you about configuring Cluster Volume Manager and
Cluster File System:
Successfully installed Foundation products on nebula
Would you like to configure Cluster Volume Manager and Cluster
File System [Y/N](Y) y

The cluster configuration information as read from cluster


configuration file is as follows.
Cluster : vcs_racluster
Nodes : galaxy nebula

You will now be prompted to enter the information pertaining


to the cluster and the individual nodes.


39. You can choose a different “timeout” value, if necessary, or press Return to accept the
default:
Enter the timeout value in seconds. This timeout will be
used by CVM in the protocol executed during cluster
reconfiguration. Choose the default value, if you do not wish
to enter any specific value.

Timeout: [<timeout>,q,?] (default: 200)

40. When additional cluster information is displayed, “gab” is shown as the configured
transport protocol, meaning GAB is to provide communications between systems in
the cluster. For VERITAS DBE/AC for Oracle9i RAC, GAB is required.
------- Following is the summary of the information: ------
Cluster : vcs_racluster
Nodes : galaxy nebula
Transport : gab
-----------------------------------------------------------
Hit RETURN to continue.

Is this correct [y,n,q,?] (default: y) y

41. The configuration of CVM begins. When CVM has been added, you receive a message
resembling:
You will now be prompted to enter whether you want to start
cvm on the systems in the cluster. If you choose to do so,
then attempt will be made to start the cvm on all the systems
in the cluster. It will not be started on the systems
which do not have cluster manager running.

Do you want to start cvm ? [y,n] n

========================================================
Cluster File System Configuration is in progress...
cfscluster: CFS Cluster Configured Successfully


Completing Installation

42. After reporting that the product is successfully installed, the utility tells you that you
need to reboot all systems in the cluster:
You must reboot all the system in the cluster.

Note Do not reboot the systems at this time. You must first run the vxinstall utility on
each system before rebooting.

43. The installDBAC installation utility is done.

Verifying All Prerequisite Packages Are Installed


After running installDBAC, verify that all the necessary packages are installed. Use the
following command on each system:
# /opt/VRTSvcs/bin/chk_dbac_pkgs
This utility does not check for the documentation packages (VRTSvmdoc, VRTSfsdoc,
VRTSvcsdc, VRTSvcsmn, or VRTSvmman) or the graphical user interface packages
(VRTSob, VRTSobgui, VRTSfspro, VRTSvmpro, VRTSweb, or VRTSvcsw).
If a requisite package or patch is not present, the utility reports an error, such as:
Checking package - VRTSglm
ERROR: Information for "VRTSglm" was not found
Package VRTSglm not installed - install it before proceeding


Configuring VxVM Using vxinstall and Rebooting


On each system, you must run the vxinstall utility. The local disk on each system must
be encapsulated in the root disk group, or “rootdg.” Do not include the shared disks in
rootdg. You must reboot both systems after running vxinstall.
# vxinstall
Refer to the VERITAS Volume Manager Installation Guide for instructions on how to use the
vxinstall utility to initialize VxVM. You must run vxinstall on each system before
verifying that shared storage can support SCSI-3 persistent reservations.

Verifying Storage Supports SCSI-3 Persistent Reservations


The shared storage used with VERITAS DBE/AC for Oracle9i RAC software must support
SCSI-3 persistent reservations. SCSI-3 persistent reservations support enables the use of
I/O fencing to prevent data corruption caused by a split brain. I/O fencing is described in
the chapter on “I/O Fencing Scenarios” on page 81.
To protect the data on shared disks, each system in the cluster must be configured with
I/O fencing. You must verify that each disk array you use for shared storage, including
the coordinator disks, supports SCSI-3 persistent reservations.
While the following sections pertain to all shared storage used with VERITAS DBE/AC for
Oracle9i RAC, they also pertain to the coordinator disks.


For EMC Symmetrix 8000 Series Storage


◆ Ask your EMC representative to help you verify that the firmware on the storage unit
is set to enable SCSI-3 persistent reservations. The PR flag must be set on each EMC
volume.
◆ Verify that the same serial number for a LUN is returned on all paths to the LUN.
Refer to the vxfenadm(1M) manual page. You can use the command:
vxfenadm -i diskpath
For example:
# vxfenadm -i /dev/rdsk/c2t13d0s2
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a

For Hitachi Data Systems 99xx Series Storage


◆ The SCSI-3 persistent reservations feature requires that the disk array be running the
latest firmware (see VERITAS DBE/AC for Oracle9i RAC Release Notes).
◆ System Mode 186 must be enabled for the SCSI reservations feature. Request
assistance from HDS to verify this.
◆ Verify that the same serial number for a LUN is returned on all paths to the LUN.
Refer to the vxfenadm(1M) manual page. You can use the command:
vxfenadm -i diskpath
For example:
# vxfenadm -i /dev/rdsk/c2t0d2s2
Vendor id : HITACHI
Product id : OPEN-3 -SUN
Revision : 0117
Serial Number : 0401EB6F0002


Verifying Shared Storage Arrays


To verify that the shared storage arrays support SCSI-3 persistent reservations and I/O
Fencing, use the vxfentsthdw utility. Test at least one disk in each shared disk array. We
recommend you test as many disks as possible in each array.

Note Only supported storage devices can be used with I/O fencing of shared storage.
Currently, supported devices include: EMC Symmetrix 8000 Series or Hitachi Data
System Series 99xx. Other disks, even if they pass the following test procedure, are
not supported. Please see the VERITAS DBE/AC for Oracle9i RAC Release Notes for
additional information about supported shared storage.

The utility, which you can run from one system in the cluster, tests the storage by setting
SCSI-3 registrations on the disk you specify, verifying the registrations on the disk, and
removing the registrations from the disk. See the chapter on “I/O Fencing Scenarios” on
page 81 for information on I/O fencing and a description of how it works. Refer also to
the vxfenadm(1M) manual page.

Caution The utility vxfentsthdw overwrites and destroys existing data on the disks.

Using the vxfentsthdw Utility


Assume that you need to check the shared device that, in the example, both systems know
as /dev/rdsk/c4t8d0s2. (It’s also possible that each system would use a different
name for the same physical device.)

1. Make sure system-to-system communication is set up. See “Setting Up


System-to-System Communication” on page 25.

2. On one system, start the utility:


# cd /opt/VRTSvcs/rac/bin
# ./vxfentsthdw
The utility begins by providing an overview of its function and behavior. It warns you
that the tests it performs overwrite any data on the disks you check:

******** WARNING!!!!!!!! ********


THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to contnue : [y/n] (default: n)


y
Enter the first node of the cluster:
galaxy


Enter the second node of the cluster:


nebula

3. Enter the name of the disk you are checking. For each node, the disk may be known
by the same name, as in our example.
Enter the disk name to be checked for SCSI-3 PGR on node galaxy in
the format: /dev/rdsk/cxtxdxsx
/dev/rdsk/c4t8d0s2

Enter the disk name to be checked for SCSI-3 PGR on node nebula in
the format: /dev/rdsk/cxtxdxsx
Make sure it’s the same disk as seen by nodes galaxy and nebula
/dev/rdsk/c4t8d0s2
Note that the disk names, whether or not they are identical, must refer to the same
physical disk, or the testing terminates without success.

4. The utility starts to perform the check and report its activities. For example:
Keys registered for disk /dev/rdsk/c4t8d0s2 on node galaxy

Verifying registrations of keys for disk /dev/rdsk/c4t8d0s2
on node galaxy: passed

Reads from disk /dev/rdsk/c4t8d0s2 successful on node galaxy

Reads from disk /dev/rdsk/c4t8d0s2 successful on node nebula

Writes from disk /dev/rdsk/c4t8d0s2 successful on node galaxy

Writes from disk /dev/rdsk/c4t8d0s2 successful on node nebula


.
.
.

5. For a disk that is ready to be configured for I/O Fencing on both systems, the utility
reports success. For example:
The disk /dev/rdsk/c4t8d0s2 is ready to be configured for I/O
Fencing on node galaxy
The disk /dev/rdsk/c4t8d0s2 is ready to be configured for I/O
Fencing on node nebula

6. Run the vxfentsthdw utility for each disk you intend to verify.


Setting Up Coordinator Disks


I/O Fencing requires coordinator disks configured in a disk group that each of the
systems in the cluster can access. The use of coordinator disks enables the vxfen driver to
resolve potential split brain conditions and prevent data corruption. See the chapter “I/O
Fencing Scenarios” on page 81 for a discussion of I/O fencing and the role of coordinator
disks.
A coordinator disk is not used for data storage, and so may be configured as the smallest
possible LUN on a disk array to avoid wasting space.

Requirements for Coordinator Disks


Coordinator disks have the following requirements:
✔ There must be at least three coordinator disks and the total number of disks must be
an odd number. This ensures a majority of disks can be achieved during I/O fencing.
✔ Each of the coordinator disks must use a physically separate disk or LUN.
✔ Each of the coordinator disks should be on a different disk array, if possible.
✔ Coordinator disks in a disk array should use hardware-based mirroring.
✔ The coordinator disks must support SCSI-3 persistent reservations. See “Verifying
Storage Supports SCSI-3 Persistent Reservations” on page 40.

Configuring a Disk Group Containing Coordinator Disks


1. From one system, create a disk group named: vxfencoorddg. This group must
contain an odd number of disks/LUNs and a minimum of three disks. The
disks/LUNs must support SCSI-3 persistent reservations. It is recommended you use
the smallest size disks/LUNs, so that space for data is not wasted. The disk group
must be accessible to all cluster systems.
Refer to the VERITAS Volume Manager Administrator’s Guide for instructions on
creating disk groups.
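For example, assuming three small LUNs that support SCSI-3 PR and are seen by all
cluster systems as c1t1d0, c2t1d0, and c3t1d0 (hypothetical device names), the disk
group might be created as follows:
# vxdisksetup -i c1t1d0
# vxdisksetup -i c2t1d0
# vxdisksetup -i c3t1d0
# vxdg init vxfencoorddg c1t1d0s2 c2t1d0s2 c3t1d0s2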

2. Deport the disk group:


# vxdg deport vxfencoorddg

3. Import the disk group with the -t option so that it is not automatically imported
when the systems are rebooted:
# vxdg -t import vxfencoorddg


4. Deport the disk group. Leaving the disk group deported, rather than imported,
prevents the coordinator disks from being used for other purposes.
# vxdg deport vxfencoorddg

5. On all systems, enter the command:


# echo "vxfencoorddg" > /etc/vxfendg
No spaces should appear between the quotes in the "vxfencoorddg" text.
This command creates the file /etc/vxfendg, which includes the name of the
coordinator disk group. Based on the /etc/vxfendg file, the rc script creates the file
/etc/vxfentab for use by the vxfen driver.

Starting I/O Fencing


To start the I/O fencing driver, issue the following command on each system:
# /etc/init.d/vxfen start
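To confirm that the fencing driver has started on a system, you can check GAB
membership for port b, which is used by I/O fencing (the generation number shown is
illustrative; see “Running gabconfig -a After Installation” on page 46):
# /sbin/gabconfig -a | grep "Port b"
Port b gen g8ty0002 membership 01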

The Contents of the /etc/vxfentab File


On each system, the disks used as coordinator disks are listed in the file /etc/vxfentab.
The same disks may be listed using different names on each system. An example
/etc/vxfentab file on one system resembles:
/dev/rdsk/c1t1d0s2
/dev/rdsk/c2t1d0s2
/dev/rdsk/c3t1d0s2
The rc startup script automatically creates /etc/vxfentab by reading the disks
contained in the disk group listed in /etc/vxfendg, then invokes the vxfenconfig
command. This command configures the vxfen driver to start and use the coordinator
disks listed in /etc/vxfentab.

Replacing Failed Coordinator Disks


At the present time, it is not possible to replace a failed coordinator disk without
rebooting the cluster.


Running gabconfig -a After Installation


After you have installed the VERITAS DBE/AC for Oracle9i RAC packages, run vxinstall,
and rebooted both systems, you can verify the installation by running the command:
# /sbin/gabconfig -a
For example:
galaxy# /sbin/gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 4a1c0001 membership 01
Port b gen g8ty0002 membership 01
Port d gen 40100001 membership 01
Port f gen f1990002 membership 01
Port h gen d8850002 membership 01
Port o gen f1100002 membership 01
Port q gen 28d10002 membership 01
Port v gen 1fc60002 membership 01
Port w gen 15ba0002 membership 01
The output of the gabconfig -a command displays which cluster systems have
membership with the modules that have been installed and configured thus far in the
installation. The first line indicates that both systems (0 and 1) have membership with the
GAB utility, which uses “Port a.” The ports listed, including port a, are configured for the
following functions:

Port   Function

b      I/O fencing
d      ODM (Oracle Disk Manager)
f      CFS (Cluster File System)
h      VCS (VERITAS Cluster Server: high availability daemon)
o      VCSMM driver
q      QuickLog daemon
v      CVM (Cluster Volume Manager)
w      vxconfigd (module for CVM)


Example VCS Configuration File After Installation


To verify the installation, you can examine the VCS configuration file, main.cf, in the
directory /etc/VRTSvcs/conf/config. Also, see an example, “Example main.cf file”
on page 48. The main.cf file is created during the installation of VERITAS DBE/AC for
Oracle9i RAC software. Note the following about the VCS configuration:
◆ The “include” statements list types files for VCS (types.cf), CFS (CFSTypes.cf),
CVM (CVMTypes.cf), and the Oracle enterprise agent (OracleTypes.cf). These
files are located in /etc/VRTSvcs/conf/config and they define the agents that
control the resources in the cluster.
- The VCS types include all agents bundled with VCS. Refer to the VERITAS
Bundled Agents Reference Guide for information about all VCS agents.
- The CFS types include the CFSMount, CFSQlogckd, and CFSfsckd.
The CFSMount agent mounts and unmounts the shared volume file systems.
The CFSQlogckd and CFSfsckd are types defined for cluster file system
daemons and do not require user configuration.
- The CVM types include the CVMCluster and CVMVolDg types.
The CVMCluster resource, which is automatically configured during installation,
imports the shared disk groups and defines how systems in the cluster are to
communicate volume state information. Refer to “CVMCluster, CVMVolDg, and
CFSMount Agents” on page 115.
The CVMVolDg resource imports disk groups and sets activation modes. See
“Modifying the CVM Service Group in the main.cf File” on page 73.
- The Oracle enterprise agent types file includes definitions for the Oracle agent
and the Sqlnet agent. The Oracle agent monitors the resources for an Oracle
database, and the Sqlnet resource manages the resources for the listener process.
◆ The service group “cvm” is added. It includes definitions for monitoring the CFS and
CVM resources. The CVMCluster agent resource definition indicates how the systems
in the cluster are to communicate volume state information. Later, after the
installation of Oracle, the cvm group must be modified to include Sqlnet, IP, and NIC
resources required for the listener process.
◆ The cvm group has the parallel attribute set to 1, which means that the resources are
set to run in parallel on each system in the system list.


Example main.cf file


include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"

cluster vcs_racluster (
UserNames = { admin = "cDRpdxPmHpzS." }
Administrators = { admin }
HacliUserLevel = COMMANDROOT
CounterInterval = 5
)

system galaxy (
)

system nebula (
)

group cvm (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 0
Parallel = 1
AutoStartList = { galaxy, nebula }
)

CFSQlogckd qlogckd (
Critical = 0
)

CFSfsckd vxfsckd (
)

CVMCluster cvm_clus (
Critical = 0
CVMClustName = vcs_racluster
CVMNodeId = { galaxy = 0, nebula = 1 }
CVMTransport = gab
CVMTimeout = 200
)

vxfsckd requires qlogckd

// resource dependency tree


//
// group cvm


// {
// CVMCluster cvm_clus
// CFSfsckd vxfsckd
// {
// CFSQlogckd qlogckd
// }
// }

Chapter 3. Installing Oracle9i Software
This chapter describes how to install Oracle9i software. You can install Oracle9i on shared
storage or locally on each node. However, the shared storage installation is featured in this
guide. The configuration of VCS service groups described in Chapter 4 and the example
main.cf file shown in Appendix A are based on using shared storage to install Oracle9i.

Oracle9i Installation Tasks


Installing Oracle9i in a VERITAS DBE/AC for Oracle9i RAC environment involves the
following tasks:
◆ Creating shared disk group and volume for the Oracle9i SRVM component
◆ Installing Oracle9i, Release 1 or Release 2
You can install Oracle9i on either shared disks or a local disk.
◆ Linking VERITAS ODM libraries to Oracle

Prerequisites for Installing Oracle9i


Before installing Oracle and creating a database, do the following:

1. Edit the file /etc/system and set the shared memory parameter. Refer to the
Oracle9i Installation Guide.
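For example, shared memory and semaphore entries in /etc/system often resemble the
following (the values shown are illustrative only; use the values the Oracle9i Installation
Guide specifies for your configuration, and note that a reboot is required for changes to
/etc/system to take effect):
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024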

2. Verify that the disk arrays used for shared storage support SCSI-3 persistent
reservations and I/O fencing. Refer to “Verifying Storage Supports SCSI-3 Persistent
Reservations” on page 40.
The shared storage used with VERITAS DBE/AC for Oracle9i RAC must support
SCSI-3 persistent reservations. SCSI-3 persistent reservations support enables the use
of I/O fencing to prevent data corruption caused by a split brain. I/O fencing is
described in the chapter on “I/O Fencing Scenarios” on page 81.


Creating Shared Disk Groups and Volumes - General


The following paragraphs highlight general information that applies to creating shared
disk groups. Refer to the VERITAS Volume Manager Administrator’s Guide for full
documentation about creating and managing shared disk groups.

Determining Information About a Disk Group


You can display information about a specific disk group by entering the command:
# vxdg list disk_group
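For example, listing the shared disk group created later in this chapter produces output
resembling the following (abbreviated and illustrative); the flags field indicates whether
the disk group is shared:
# vxdg list orasrv_dg
Group:     orasrv_dg
dgid:      1025479209.1025.galaxy
import-id: 32768.1025
flags:     shared
.
.
.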

Setting the Connectivity Policy on a Shared Disk Group


To set the connectivity policy for a disk group as “local,” use the following command from
the master node:
# vxedit set diskdetpolicy=local shared_disk_group
The command takes effect on all systems in the cluster. The local connectivity policy
means that when a disk fails, it is detached only from the system detecting the failure;
the disk is detached cluster-wide only when all systems report the failure.

Enabling Write Access to the Volumes in the Disk Group


At the time VCS is installed, the installation utility creates and edits the configuration file
/etc/default/vxdg on each system such that the default activation mode of shared
disk groups is set to “off” when the cluster starts. This setting prevents uncontrolled
access to the shared storage. Later, after you create shared disk groups and volumes for
the Oracle9i binaries and database and configure them in the Oracle or CVM service
groups, VCS automatically controls access to the volumes using the CVMVolDg agent.
However, so that you can now create databases on the shared volumes, you must enable
write access to them. You can use the following commands:

1. On the CVM master node, enter:


# vxdg -s import shared_disk_group
# vxvol -g shared_disk_group startall
# vxdg -g shared_disk_group set activation=sw

2. On the slave node, enter:


# vxdg -g shared_disk_group set activation=sw


Refer to the description of disk group activation modes in the VERITAS Volume Manager
Administrator’s Guide for more information (see the chapter on “Cluster Functionality”).

Note For VERITAS DBE/AC for Oracle9i RAC, release 3.5, a shared disk group is not
automatically configured for I/O fencing when it is created. A newly created disk
group must be deported and re-imported to be configured with I/O fencing. You
can use vxdg commands to accomplish this; refer to VERITAS Volume Manager 3.5
User’s Guide - VERITAS Enterprise Administrator.

Creating Shared Disk Group and Volume for SRVM


On the shared storage device, you must create a shared raw volume for use by the Oracle
SRVM component, srvconfig. Oracle requires the SRVM component when the Oracle
database is set up on shared storage. This volume requires at least 200 megabytes.
For example, to create the volume srvm_vol of 300 megabytes in a shared disk group
orasrv_dg on disk c2t3d1, do the following:

1. From the master node, create the shared disk group on the shared disk c2t3d1:
# vxdg -s init orasrv_dg c2t3d1

2. Create the volume in the shared disk group:


# vxassist -g orasrv_dg make srvm_vol 300M
Later, when you install the Oracle9i software, you must provide the name of this
volume. For example, you would enter /dev/vx/rdsk/orasrv_dg/srvm_vol.

3. Set the activation mode to shared-write (sw):


# vxdg -g orasrv_dg set activation=sw

4. On the other node, enter:


# vxdg -g orasrv_dg set activation=sw
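As a quick sanity check before running the Oracle installer, you can verify on each node that the raw device node for the volume exists (an illustrative command):
# ls -lL /dev/vx/rdsk/orasrv_dg/srvm_vol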


Installing Oracle9i Release 1 Software


You can install Oracle9i Release 1 on a shared disk or on each node’s local hard disk.

Installing Oracle9i Release 1 on Shared Disk


The following procedure describes installing Oracle9i binaries on a cluster file system. If
you intend to install Oracle9i binaries on each system locally, refer to “Alternate Method -
Installing Oracle9i Release 1 Locally” on page 58.

Note If you are installing Oracle9i Release 1 Patch software, you must first install Oracle9i
Release 1 software. Then use the procedure in “Installing Oracle9i Release 1 Patch
Software” on page 61.

1. Log in as root user.

2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin. Do this on both nodes.

3. On one node, create a shared disk group:


# vxdg -s init orabinvol_dg c2t3d1s2

4. Create the volume in the shared group:


# vxassist -g orabinvol_dg make orabinvol 5000M
For the Oracle9i Release 1 binaries, make the volume 5,000 MB.

5. Set the activation mode on one system:

# vxdg -g orabinvol_dg set activation=sw

6. On the other node, enter:


# vxdg -g orabinvol_dg set activation=sw

7. On one node, create a VxFS file system on the shared volume on which to install the
Oracle9i binaries. For example, create the file system on orabinvol:
# mkfs -F vxfs /dev/vx/rdsk/orabinvol_dg/orabinvol


8. On both systems, create the mount point for the file system:
# mkdir /oracle

9. On both systems, mount the file system, using the device file for the block device:
# mount -F vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle

10. On both systems, create a local group and a local user for Oracle. For example, create
the group dba and the user oracle. Be sure to assign the same user ID and group ID
for the user on each system.

11. Set the home directory for the oracle user as /oracle.
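For example, the group and user described in step 10 and step 11 might be created as follows on each system. The numeric IDs shown are placeholders; the only requirement is that they match across the systems:
# groupadd -g 100 dba
# useradd -u 100 -g dba -d /oracle oracle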

12. On both systems, assign ownership of the Oracle directory to oracle:


# chown oracle:dba /oracle/*

13. On one system, create the directory where you intend to copy the contents of the
Oracle9i software CDs. A total of 2 gigabytes is required.
# mkdir /install
# cd /install

14. Copy all the files from the three Oracle9i CDs. For example:

a. Insert the first of the three Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must
mount the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom

b. Copy the files to their appropriate locations.


# cp -R Disk1 /install

c. Repeat step a and step b for the remaining Oracle9i software CDs, copying the
files from the second CD into /install/Disk2 and the files from the third CD
into /install/Disk3.


15. Run the VERITAS DBE/AC for Oracle9i RAC script 9iinstall:
# /opt/VRTSvcs/rac/lib/9iinstall
The script prompts you to specify whether the Oracle you are installing is 32-bit or
64-bit.
Enter Oracle bits 32 or 64
64
Enter the directory location where you copied the contents of the Oracle9i installation
CDs. For this example, you would enter /install:
Please enter Oracle Installation copy location.
/install
The script modifies some of the copied Oracle9i files to be compatible with VERITAS
DBE/AC for Oracle9i.

16. Log in as oracle.

17. On one system, create a directory for the installation of the Oracle9i binaries. For
example:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT

18. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.

19. Set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY
If you use the C Shell (csh or tcsh):
$ setenv DISPLAY host:0.0

20. Run the Oracle9i utility runInstaller. The utility is located in the directory where
you copied Oracle9i software from Disk1.
$ /install/Disk1/runInstaller
- As the utility starts up, be sure to select the installation option: Software Only.
Refer to the Oracle9i Installation Guide for additional information about running
the utility.
- When the Node Selection dialog box appears, select only the local node.


Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 1” on page 98.

- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.

21. Oracle requires the global services daemon (gsd) to be running before any server
utilities can be started. When installation completes, run the gsd.sh script in the
background on both systems as user oracle. For example, where $ORACLE_HOME
equals /oracle/VRT, enter:
$ /oracle/VRT/bin/gsd.sh &

22. Log in as root.

23. Copy /var/opt/oracle from the installation node to the other node:
# rcp -r /var/opt/oracle sysb:/var/opt


Alternate Method - Installing Oracle9i Release 1 Locally


The following procedure describes installing Oracle9i binaries on the local hard disk on
each system. If you intend to install Oracle9i binaries on a shared disk, refer to “Installing
Oracle9i Release 1 on Shared Disk” on page 54.

1. Log in as root user on the first system.

2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin.

3. On one node, create a local disk group:


# vxdg init or_dg c0d1t1s4

4. Create the volume in the disk group:


# vxassist -g or_dg make or_vol 5000M
For the Oracle9i Release 1 binaries, make the volume 5,000 MB.

5. Create a VxFS file system on or_vol on which to install the Oracle9i binaries. For example:
# mkfs -F vxfs /dev/vx/rdsk/or_dg/or_vol

6. Create the mount point for the file system:


# mkdir /oracle

7. Mount the file system, using the device file for the block device:
# mount -F vxfs /dev/vx/dsk/or_dg/or_vol /oracle

8. Edit the /etc/vfstab file, list the new file system, and specify “yes” for the “mount
at boot” choice. For example:
#device to mount          device to fsck             mount point  FS type  fsck pass  mount at boot  mount options
#
/dev/vx/dsk/or_dg/or_vol  /dev/vx/rdsk/or_dg/or_vol  /oracle      vxfs     1          yes            -

9. Create a local group and a local user for Oracle. For example, create the group dba
and the user oracle. Be sure to assign the same user ID and group ID for the user on
each system.

10. Set the home directory for the oracle user as /oracle.


11. Assign ownership of the Oracle directory to oracle:


# chown oracle:dba /oracle/*

12. Repeat step 1 through step 11 on the other system.

13. On one system, create the directory where you intend to copy the contents of the
Oracle9i software CDs. A total of 2 gigabytes is required.
# mkdir /install
# cd /install

14. From one system, copy all the files from the three Oracle9i CDs. For example:

a. Insert the first of the three Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must
mount the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom

b. Copy the files to their appropriate locations.


# cp -R Disk1 /install

c. Repeat step a and step b for the remaining Oracle9i software CDs, copying the
files from the second CD into /install/Disk2 and the files from the third CD
into /install/Disk3.

15. Run the VERITAS DBE/AC for Oracle9i RAC script 9iinstall:
# /opt/VRTSvcs/rac/lib/9iinstall
The script prompts you to specify the Oracle you are installing as 64-bit or 32-bit.
Enter Oracle bits 32 or 64
64
Enter the directory location where you copied the contents of the Oracle9i installation
CDs. For this example, you would enter /install:
Please enter Oracle Installation copy location.
/install


The script modifies some of the copied Oracle9i files to be compatible with VERITAS
DBE/AC for Oracle9i.

16. Log in as oracle.

17. On both systems, as user oracle, create a directory to install the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT

18. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.

19. Set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY
If you use the C Shell (csh or tcsh):
$ setenv DISPLAY host:0.0

20. Run the Oracle9i utility runInstaller. The utility is located in the directory where
you copied Oracle9i software from Disk1.
$ /install/Disk1/runInstaller
- As the utility starts, select the installation option: Software Only. Refer to the
Oracle9i Installation Guide for additional information about running the utility.
- When the Node Selection dialog box appears, select both nodes for installation.

Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 1” on page 98.

- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.

21. Oracle requires the global services daemon (gsd) to be running before any server
utilities can be started. When installation completes, run the gsd.sh script in the
background on both systems as user oracle. For example, where $ORACLE_HOME
equals /oracle/VRT, enter:
$ /oracle/VRT/bin/gsd.sh &


Installing Oracle9i Release 1 Patch Software


To install Oracle9i Patch 2 or Patch 3 software, you must first have installed Oracle9i
Release 1 software. Before you install the Oracle patch, review the Patch Set Notes that
accompany Oracle9i Data Server 9.0.1 Patch Set for Sun SPARC Solaris for additional
instructions on installing the patch set and performing the post-installation actions.

1. Log in as superuser.

2. Add the directory path to the jar utility in the PATH environment variable. Typically
this is /usr/bin.

3. On one system, create the directory where you intend to copy the Oracle9i Patch
software.
# mkdir /install/patch
# cd /install/patch

4. Copy all the files included with the downloaded Oracle9i patch software to
/install/patch. When you uncompress and untar the downloaded ZIP file, the
software is placed in a directory Disk1.

5. Log in as superuser; you may log in from another window.

6. Depending on the level of patch you are installing, run the applicable VERITAS
DBE/AC for Oracle9i RAC script.
For Oracle9i Patch 2, enter:
# /opt/VRTSvcs/rac/lib/9iP2install
For Oracle9i Patch 3, enter:
# /opt/VRTSvcs/rac/lib/9iP3install

7. The script prompts you to specify whether the Oracle you are installing is 32-bit or
64-bit.
Enter Oracle bits 32 or 64
64

8. The script prompts you for the location of the copied Oracle9i Patch files. For this
example, you would enter the directory /install/patch:
Please enter Oracle patch Installation copy location.
/install/patch
As the script proceeds, it modifies some of the copied Oracle9i Patch files to be
compatible with VERITAS DBE/AC for Oracle9i.


9. Log in as oracle.

10. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.

11. Set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY
If you use the C Shell (csh or tcsh):
$ setenv DISPLAY host:0.0

12. Run the Oracle9i utility runInstaller:


$ $ORACLE_BASE/oui/install/runInstaller

13. As the runInstaller utility starts up, be sure to:


- Select product.jar from the directory /install/patch/Disk1/stage1.
- Select both nodes for installation when the Node Selection dialog box appears.

14. Permit the installation to continue.

15. To perform post install actions, refer to the Patch Set Notes that accompany the patch.


Installing Oracle9i Release 2 Software


You can install Oracle9i Release 2 on a shared disk or on each node’s local hard disk.

Installing Oracle9i Release 2 on Shared Disk


The following procedure describes installing the Oracle9i Release 2 Software in a RAC
environment. The procedure installs the software on shared storage:

1. Log in as root user.

2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin. Do this on both nodes.

3. On the master node, create a shared disk group:


# vxdg init orabinvol_dg c2t3d1s2

4. Create the volume in the shared group:


# vxassist -g orabinvol_dg make orabinvol 5000M
For the Oracle9i Release 2 binaries, make the volume 5,000 MB.

5. Deport and import the disk group and set the activation:
# vxdg deport orabinvol_dg
# vxdg -s import orabinvol_dg
# vxvol -g orabinvol_dg startall
# vxdg -g orabinvol_dg set activation=sw

6. On the other node, enter:


# vxdg -g orabinvol_dg set activation=sw

7. On the master node, create a VxFS file system on the shared volume on which to
install the Oracle9i binaries. For example, create the file system on orabinvol:
# mkfs -F vxfs /dev/vx/rdsk/orabinvol_dg/orabinvol

8. On both systems, create the mount point for the file system:
# mkdir /oracle

9. On both systems, mount the file system, using the device file for the block device:
# mount -F vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle


10. On both systems, create a local group and a local user for Oracle. For example, create
the group dba and the user oracle. Be sure to assign the same user ID and group ID
for the user on each system.

11. On both systems, set the home directory for the oracle user as /oracle.

12. On both systems, assign ownership of the Oracle directory to oracle:


# chown oracle:dba /oracle/*

13. On one system, enter the following commands, using the appropriate example below
to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so

14. On the first system, insert Disk1 of the Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must mount
the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom

15. Log in as oracle on both systems.

16. On the first system, edit the file .rhosts to provide the other system access to the
local system during the installation. Place a “+” character in the first line of the file.
Note that you can remove this permission after installation is complete.

17. Repeat step 16 on the other system.

18. On both systems, create a directory for the installation of the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT


19. On the first system, set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY
If you use the C Shell (csh or tcsh):
$ setenv DISPLAY host:0.0

20. On the first system, run the Oracle9i utility runInstaller.


$ /cdrom/Disk1/runInstaller
- As the utility starts up, be sure to select the installation option: Software Only.
Refer to the Oracle9i Installation Guide for additional information about running
the utility.
- When the Node Selection dialog box appears, select both nodes for installation.

Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 2” on page 99.

- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.
- When you are prompted for other Oracle9i disks, refer to step 14 if necessary. You
may need to log in as root to manually mount the CDs and log back in as oracle
to continue.

21. On each system, copy the relevant VCSIPC library to $ORACLE_HOME/lib.


For 32 bit Oracle:
$ cp /opt/ORCLcluster/lib/9iR2/libskgxp92_32.so
$ORACLE_HOME/lib/libskgxp9.so
For 64 bit Oracle:
$ cp /opt/ORCLcluster/lib/9iR2/libskgxp92_64.so
$ORACLE_HOME/lib/libskgxp9.so


Alternate Method - Installing Oracle9i Release 2 Locally


Use this procedure to install Oracle9i Release 2 on each system locally in a VERITAS
DBE/AC for Oracle9i RAC environment.

1. Log in as root user on one system.

2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin.

3. On one node, create a local disk group:


# vxdg init or_dg c0d1t1s4

4. Create the volume in the disk group:


# vxassist -g or_dg make or_vol 5000M
For the Oracle9i Release 2 binaries, make the volume 5,000 MB.

5. Create a VxFS file system on or_vol on which to install the Oracle9i binaries. For example:
# mkfs -F vxfs /dev/vx/rdsk/or_dg/or_vol

6. Create the mount point for the file system:


# mkdir /oracle

7. Mount the file system, using the device file for the block device:
# mount -F vxfs /dev/vx/dsk/or_dg/or_vol /oracle

8. Edit the /etc/vfstab file, list the new file system, and specify “yes” for the “mount
at boot” choice. For example:
#device to mount          device to fsck             mount point  FS type  fsck pass  mount at boot  mount options
#
/dev/vx/dsk/or_dg/or_vol  /dev/vx/rdsk/or_dg/or_vol  /oracle      vxfs     1          yes            -

9. Create a local group and a local user for Oracle. For example, create the group dba
and the user oracle. Be sure to assign the same user ID and group ID for the user on
each system.

10. Set the home directory for the oracle user as /oracle.


11. Assign ownership of the Oracle directory to oracle:


# chown oracle:dba /oracle/*

12. Repeat step 1 through step 11 on the other system.

13. On one system, enter the following commands, using the appropriate example below
to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so

14. On the first system, insert Disk1 of the Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must mount
the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom

15. Log in as oracle on both systems.

16. On the first system, edit the file .rhosts to provide the other system access to the
local system during the installation. Place a “+” character in the first line of the file.
Note that you can remove this permission after installation is complete.

17. Repeat step 16 on the other system.

18. On both systems, create a directory for the installation of the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT

19. On the first system, set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY


If you use the C Shell (csh or tcsh):


$ setenv DISPLAY host:0.0

20. On the first system, run the Oracle9i utility runInstaller.


$ /cdrom/Disk1/runInstaller
- As the utility starts up, be sure to select the installation option: Software Only.
Refer to the Oracle9i Installation Guide for additional information about running the
utility.
- When the Node Selection dialog box appears, select both nodes for installation.

Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 2” on page 99.

- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.
- When you are prompted for other Oracle9i disks, refer to step 14 if necessary. You
may need to log in as root to manually mount the CDs and log back in as oracle
to continue.

21. On each system, copy the relevant VCSIPC library to $ORACLE_HOME/lib.


For 32 bit Oracle:
$ cp /opt/ORCLcluster/lib/9iR2/libskgxp92_32.so
$ORACLE_HOME/lib/libskgxp9.so
For 64 bit Oracle:
$ cp /opt/ORCLcluster/lib/9iR2/libskgxp92_64.so
$ORACLE_HOME/lib/libskgxp9.so


Removing Temporary rsh Access Permissions


When Oracle9i installation is complete, remove the temporary rsh access permissions you
have set for the systems in the cluster. For example, if you added a “+” character in the
first line of the .rhosts file, remove that line.

Linking VERITAS ODM Libraries to Oracle


After installing Oracle, create a link to the VERITAS ODM libraries on both systems.

1. Log in as the user oracle.

2. Rename the file libodm9.so:


$ cd $ORACLE_HOME/lib
$ mv libodm9.so libodm9.so.old

3. Depending on whether you are using the 32-bit or the 64-bit Oracle version, link the
libraries using one of the following commands:
For 32-bit versions:
$ ln -s /usr/lib/libodm.so libodm9.so
For 64-bit versions:
$ ln -s /usr/lib/sparcv9/libodm.so libodm9.so

4. Repeat step 1 through step 3 on the other system.
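On each system, you can verify the link with a simple listing; the target shown applies to the 64-bit case, and the listing should report a symbolic link to the VERITAS library:
$ ls -l $ORACLE_HOME/lib/libodm9.so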


Later, after starting Oracle instances, you can confirm that Oracle is using the VERITAS
ODM libraries by examining the Oracle alert file, alert_$ORACLE_SID.log. Look for
the line that reads:
Oracle instance running with ODM: VERITAS 3.5 ODM Library, Version 1.1
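For example, after changing to the directory that holds the alert log (its location depends on your background_dump_dest initialization parameter), a quick check might be:
$ grep -i odm alert_$ORACLE_SID.log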

Creating Databases
At this time, you can create an Oracle database on shared storage. Use your own tools or
refer to “Creating Starter Databases” on page 133 to create a starter database.

Chapter 4. Configuring VCS Service Groups for Oracle
This chapter describes how to modify the VCS configuration file, main.cf, to define the
service groups for CVM and for Oracle databases in a VCS configuration within a RAC
environment.

Checking cluster_database Flag in Oracle9i Parameter File


On each system, confirm that the “cluster_database” flag is set in the Oracle9i
parameter file $ORACLE_HOME/dbs/init$ORACLE_SID.ora. This flag enables the
Oracle instances to run in parallel. Verify the file contains the line:
cluster_database = true
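For example, a quick check on each system, assuming the default parameter file location, might be:
$ grep cluster_database $ORACLE_HOME/dbs/init$ORACLE_SID.ora
cluster_database = true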

Configuring CVM and Oracle Groups in main.cf File


During the installation of DBE/AC for Oracle9i RAC, a rudimentary CVM service group is
automatically defined in the VCS configuration file, main.cf. This CVM service group
includes the CVMCluster resource, which enables Oracle RAC communications between
the nodes of the cluster, and the CFSQlogckd and CFSfsckd daemons that control the
transmission of file system data within the cluster. Refer to the “Example VCS
Configuration File After Installation” on page 47.
After installing Oracle9i and creating a database, you can modify the CVM service group
to include the Oracle listener process, the IP and NIC resources used by the listener
process, and the CFSMount and CVMVolDg resources. The NIC and IP resources are used
by the listener process to communicate with clients via the public network. The
CFSMount and CVMVolDg resources are for the Oracle binaries installed on shared
storage. The modification of the CVM service group is discussed in “Modifying the CVM
Service Group in the main.cf File” on page 73.
Then you can add the Oracle database service group. This group typically consists of an
Oracle database created on shared storage, and the CFSMount and CVMVolDg resources
used by the database. There may be multiple Oracle database service groups, one for each
database. See “Adding Oracle Resource Groups in the main.cf File” on page 76.


VCS Service Groups for RAC: Dependencies Chart


The following illustration shows the dependencies among the resources in a typical RAC
environment. The illustration shows two Oracle service groups and one CVM service
group. The dependencies among the resources within a group are shown as well as the
dependency that each Oracle group has on the CVM group. This chapter describes how to
specify the resources.

[Dependency chart. Two Oracle service groups, oradb1_grp and oradb2_grp, each depend on
the cvm service group. Within each Oracle group, the Oracle resource (VRT_db in
oradb1_grp, rac_db in oradb2_grp) requires a CFSMount resource (oradb1_mnt, oradb2_mnt),
which in turn requires a CVMVolDg resource (oradb1_voldg, oradb2_voldg). Within the cvm
group, the Sqlnet resource LISTENER requires the IP resource listener_ip, which requires
the NIC resource listener_hme0; LISTENER also requires the CFSMount resource orabin_mnt,
which requires both the CFSfsckd resource vxfsckd (which requires the CFSQlogckd
resource qlogckd) and the CVMVolDg resource orabin_voldg, which requires the CVMCluster
resource cvm_clus.]


Setting Attributes as Local for the CVM and Oracle Groups


The CVM and Oracle service groups must be configured as “parallel,” with certain
attributes of key resources defined locally, that is, to reflect the specific system on which
they run. The attributes that must be defined locally are described in a table; see
“Attributes of CVM and Oracle Groups that Must be Defined as Local” on page 78.

Modifying the CVM Service Group in the main.cf File


The cvm service group is created during the installation of DBE/AC for Oracle9i RAC.
After installation, the main.cf file resembles the example shown in “Example VCS
Configuration File After Installation” on page 47. Because Oracle9i has not yet been
installed at that point, the cvm service group includes only resources for the CFSQlogckd
and CFSfsckd daemons and the CVMCluster resource.

Adding Sqlnet, NIC, IP, CVMVolDg, and CFSMount Resources


You must modify the cvm service group to add the Sqlnet, NIC, IP, CVMVolDg, and
CFSMount resources to the configuration. You can refer to “Sample main.cf File” on
page 109 to see a complete example of how a cvm group is configured.
When configuring these resources, use the following procedure (key lines of the
configuration file are shown in bold font):

1. Make sure the cvm group has the group Parallel attribute set to 1. Typically this is
already done during installation.
.
group cvm (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 0
Parallel = 1
AutoStartList = { galaxy, nebula }
)
.
.


2. Define the NIC and IP resources. The VCS bundled NIC and IP agents are described
in VERITAS Cluster Server Bundled Agents Reference Guide. The device name and the IP
addresses are required by the listener for public network communication. Note that
for the IP resource, the Address attribute is localized for each node (see “Attributes
of CVM and Oracle Groups that Must be Defined as Local” on page 78).
.
NIC listener_hme0 (
Device = hme0
NetworkType = ether
)

IP listener_ip (
Device = hme0
Address @galaxy = "192.2.40.21"
Address @nebula = "192.2.40.22"
)
.

3. Define the Sqlnet resource. The Sqlnet listener agent is described in detail in the
VERITAS Cluster Server Enterprise Agent for Oracle, Installation and Configuration Guide.
Note that the Listener attribute is localized for each node.
.
Sqlnet LISTENER (
Owner = oracle
Home = "/oracle/orahome"
TnsAdmin = "/oracle/orahome/network/admin"
MonScript = "./bin/Sqlnet/LsnrTest.pl"
Listener @galaxy = LISTENER_a
Listener @nebula = LISTENER_b
EnvFile = "/opt/VRTSvcs/bin/Sqlnet/envfile"
)
.

4. You must configure the CVMVolDg and CFSMount resources in the cvm group for the
Oracle binaries installed on shared storage. Refer to the appendix “CVMCluster,
CVMVolDg, and CFSMount Agents” on page 115 for description of CVMVolDg and
CFSMount agents.
.
CVMVolDg orabin_voldg (
CVMDiskGroup = orabindg
CVMVolume = { "orabinvol", "srvmvol" }
CVMActivation = sw
)


CFSMount orabin_mnt (
Critical = 0
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
Primary = galaxy
)
.
.

5. Define the dependencies of resources in the group. The dependencies are specified
such that the Sqlnet resource requires the IP resource that, in turn, depends on the
NIC resource. The Sqlnet resource also requires the CFSMount resource. The
CFSMount resource requires the daemons, vxfsckd and qlogckd, used by the cluster
file system. The CFSMount resource also depends on the CVMVolDg resource, which,
in turn, requires the CVMCluster resource for communications within the cluster. The
“VCS Service Groups for RAC: Dependencies Chart” on page 72 visually shows the
dependencies specified in the following statements:
.
.
vxfsckd requires qlogckd
orabin_voldg requires cvm_clus
orabin_mnt requires vxfsckd
orabin_mnt requires orabin_voldg
listener_ip requires listener_hme0
LISTENER requires listener_ip
LISTENER requires orabin_mnt
.
.


Adding Oracle Resource Groups in the main.cf File


For a complete description of the VCS Oracle enterprise agent, refer to the document,
VERITAS Cluster Server Enterprise Agent for Oracle, Installation and Configuration Guide. That
document includes instructions for configuring the Oracle and Sqlnet agents.

Note The VCS Enterprise Agent for Oracle version 2.0.1 is installed when you run
installDBAC. When you refer to the VERITAS Cluster Server Enterprise Agent for
Oracle Installation and Configuration Guide, ignore the steps described in the section
“Installing the Agent Software.”

1. Using the “Sample main.cf File” on page 109 as an example, add a service group to
contain the resources for an Oracle database. For example, add the group
oradb1_grp. Make sure you assign the Parallel attribute a value of 1.
.
group oradb1_grp (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 1
Parallel = 1
AutoStartList = { galaxy, nebula }
)
.
.

2. Create the CVMVolDg and CFSMount resource definitions. See “CVMCluster, CVMVolDg,
and CFSMount Agents” on page 115 for a description of these agents and their attributes.
.
.
CVMVolDg oradb1_voldg (
CVMDiskGroup = oradb1dg
CVMVolume = { "oradb1vol" }
CVMActivation = sw
)

CFSMount oradb1_mnt (
MountPoint = "/oradb1"
BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
Primary = galaxy

)
.
.


3. Define the Oracle database resource. Refer to the VERITAS Cluster Server Enterprise
Agent for Oracle, Installation and Configuration Guide for information on the VCS
enterprise agent for Oracle. Note that the Oracle attributes Sid, Pfile, and Table
must be set locally, that is, they must be defined for each cluster system.
.
Oracle VRT_db (
Sid @galaxy = VRT1
Sid @nebula = VRT2
Owner = oracle
Home = "/oracle/orahome"
Pfile @galaxy = "/oracle/orahome/dbs/initVRT1.ora"
Pfile @nebula = "/oracle/orahome/dbs/initVRT2.ora"
User = scott
Pword = tiger
Table @galaxy = vcstable_galaxy
Table @nebula = vcstable_nebula
MonScript = "./bin/Oracle/SqlTest.pl"
AutoEndBkup = 1
EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
)
.

4. Define the dependencies for the Oracle service group. Note that the Oracle database
group is specified to require the cvm group, and that the required dependency is
defined as “online local firm,” meaning that the cvm group must be online and
remain online on a system before the Oracle group can come online on the same
system. Refer to the VERITAS Cluster Server User’s Guide for a description of group
dependencies.
Refer to “VCS Service Groups for RAC: Dependencies Chart” on page 72.
.
.
requires group cvm online local firm
oradb1_mnt requires oradb1_voldg
VRT_db requires oradb1_mnt
.
.
See the “Sample main.cf File” on page 109 for a complete example. You can also find the
complete file in /etc/VRTSvcs/conf/sample_rac/main.cf.

Additional RAC Processes Monitored by the VCS Oracle Agent


For shallow monitoring, the VCS Oracle agent monitors the ora_lmon and ora_lmd
processes in addition to the ora_dbw, ora_smon, ora_pmon, and ora_lgwr processes.


Attributes of CVM and Oracle Groups that Must be Defined as Local


The following table lists attributes that must be defined as local for the CVM and Oracle
service groups (note that each attribute has string-scalar as the type and dimension).

Resource  Attribute  Definition

IP        Address    The virtual IP address (not the base IP address) associated with
                     the interface. For example:
                         Address @sysa = "192.2.40.21"
                         Address @sysb = "192.2.40.22"

Sqlnet    Listener   The name of the listener. For example:
                         Listener @sysa = LISTENER_a
                         Listener @sysb = LISTENER_b

Oracle    Sid        The variable $ORACLE_SID represents the Oracle system ID. For
                     example, if the SIDs for two systems, sysa and sysb, are VRT1
                     and VRT2 respectively, their definitions would be:
                         Sid @sysa = VRT1
                         Sid @sysb = VRT2

Oracle    Pfile      The parameter file: $ORACLE_HOME/dbs/init$ORACLE_SID.ora.
                     For example:
                         Pfile @sysa = "/oracle/VRT/dbs/initVRT1.ora"
                         Pfile @sysb = "/oracle/VRT/dbs/initVRT2.ora"

Oracle    Table      The table used for in-depth monitoring by User/PWord on each
                     cluster node. For example:
                         Table @sysa = vcstable_sysa
                         Table @sysb = vcstable_sysb
                     Using the same table on all Oracle instances is not recommended;
                     it generates IPC traffic and could cause conflicts between the
                     Oracle recovery processes and the agent monitoring processes in
                     accessing the table.
                     Note: Table is required only if in-depth monitoring is used. If
                     the PWord varies by RAC instance, it must also be defined as
                     local.

If other attributes for the Oracle resource differ for various RAC instances, define them
locally as well. These may include the Oracle resource attributes User and PWord, the
CVMVolDg resource attribute CVMActivation, and others.


Modifying the VCS Configuration


For additional information and instructions on modifying the VCS configuration, refer to
the VERITAS Cluster Server User’s Guide.

Location of VCS and Oracle Agent Log Files


On all cluster nodes, look at the following log files for any errors or status messages:
/var/VRTSvcs/log/engine_A.log
/var/VRTSvcs/log/Oracle_A.log
/var/VRTSvcs/log/Sqlnet_A.log
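For example, to watch engine activity as service groups come online, an illustrative command is:
# tail -f /var/VRTSvcs/log/engine_A.log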

Chapter 5. I/O Fencing Scenarios
I/O Fencing of Shared Storage
When two systems have access to the data on shared storage, the integrity of the data
depends on the systems communicating with each other so that each is aware when the
other is writing data. Usually this communication occurs in the form of heartbeats
through the private networks between the systems. If the private links are lost, or even if
one of the systems is hung or too busy to send or receive heartbeats, each system could be
unaware of the other’s activities with respect to writing data. This is a split brain condition
and can lead to data corruption.
The I/O fencing capability of the DBE/AC, managed by VERITAS Volume Manager,
prevents data corruption in the event of a split brain condition by using SCSI-3 persistent
reservations for disks. This allows a set of systems to have registrations with the disk and
a write-exclusive registrants-only reservation with the disk containing the data. This
means that only these systems can read and write to the disk, while any other system can
only read the disk. The I/O fencing feature fences out a system that no longer sends
heartbeats to the other system by preventing it from writing data to the disk.
The VxVM manages all shared storage subject to I/O fencing. It assigns the keys that
systems use for registrations and reservations for the disks—including all paths—in the
specified disk groups. The vxfen driver is aware of which systems have registrations and
reservations with specific disks.
To protect the data on shared disks, each system in the cluster must be configured to use
I/O fencing.

The Role of Coordinator Disks


The vxfen driver, which is installed on each system, uses coordinator disks to resolve
potential split brain conditions. If a split brain condition occurs, systems “race” for the
coordinator disks to gain control of them. The winning system remains in the cluster. The
driver ejects systems that lose the race and fences them off from the shared storage.
Because they lose their reservations for any data disks, ejected systems cannot write to
them and corrupt the data. If a system realizes that it has been ejected, it removes itself
from the cluster. See “Setting Up Coordinator Disks” on page 44.


How I/O Fencing Works in Different Event Scenarios


The following scenarios describe how I/O fencing functions to prevent data corruption in
different failure events. For each event, corrective operator actions are indicated.

Event: Both private networks fail.
Node A: Races for a majority of the coordinator disks. If Node A wins the race, it
ejects Node B from the shared disks and continues.
Node B: Races for a majority of the coordinator disks. If Node B loses the race, it
removes itself from the cluster.
Operator action: When Node B is ejected from the cluster, repair the private networks
before attempting to bring Node B back.

Event: Both private networks function again after the event above.
Node A: Continues to work.
Node B: Has crashed. It cannot start the database since it is unable to write to the
data disks.
Operator action: Reboot Node B after the private networks are restored.

Event: One private network fails.
Node A: Prints a message about an IOFENCE on the console but continues.
Node B: Prints a message about an IOFENCE on the console but continues.
Operator action: Repair the private network. After the network is repaired, both nodes
automatically use it.

Event: Node A hangs.
Node A: Is extremely busy for some reason or is in the kernel debugger. When Node A is
no longer hung or in the kernel debugger, any queued writes to the data disks fail
because Node A is ejected. When Node A receives the message from GAB about being
ejected, it removes itself from the cluster.
Node B: Loses heartbeats with Node A and races for a majority of the coordinator disks.
Node B wins the race and ejects Node A from the shared data disks.
Operator action: Verify that the private networks function and reboot Node A.

Event: Nodes A and B and the private networks lose power. The coordinator and data
disks retain power. Power returns to the nodes and they reboot, but the private
networks still have no power.
Node A: Reboots and the I/O fencing driver (vxfen) detects that Node B is registered
with the coordinator disks. The driver does not see Node B listed as a member of the
cluster because the private networks are down, so the I/O fencing device driver
prevents Node A from joining the cluster. The Node A console displays:
Potentially a preexisting split brain. Dropping out of the cluster.
Refer to the user documentation for steps required to clear
preexisting split brain.
Node B: Reboots and the I/O fencing driver (vxfen) detects that Node A is registered
with the coordinator disks. The driver does not see Node A listed as a member of the
cluster because the private networks are down, so the I/O fencing device driver
prevents Node B from joining the cluster. The Node B console displays the same message.
Operator action: Refer to the section in the Troubleshooting chapter for instructions
on resolving the preexisting split brain condition.

Event: Node A crashes while Node B is down. Node B comes up and Node A is still down.
Node A: Is crashed.
Node B: Reboots and detects that Node A is registered with the coordinator disks. The
driver does not see Node A listed as a member of the cluster. The I/O fencing device
driver prints a message on the console:
Potentially a preexisting split brain. Dropping out of the cluster.
Refer to the user documentation for steps required to clear
preexisting split brain.
Operator action: Refer to the section in the Troubleshooting chapter for instructions
on resolving the preexisting split brain condition.

Event: The disk array containing two of the three coordinator disks is powered off.
Node A: Continues to operate as long as no nodes leave the cluster.
Node B: Continues to operate as long as no nodes leave the cluster.

Event: Node B leaves the cluster while the disk array is still powered off.
Node A: Races for a majority of the coordinator disks. Node A fails because only one of
the three coordinator disks is available, so it removes itself from the cluster.
Node B: Leaves the cluster.
Operator action: Power on the failed disk array and restart the I/O fencing driver to
enable Node A to register with all coordinator disks.


The vxfenadm Utility


Administrators can use the vxfenadm command to troubleshoot and test fencing
configurations. The command’s options for use by administrators are:
-g - read and display keys
-i - read SCSI inquiry information from device
-m - register with disks
-n - make a reservation with disks
-p - remove registrations made by other systems
-r - read reservations
-x - remove registrations

Registration Key Formatting


The key defined by VxVM associated with a disk group consists of seven bytes maximum.
This key becomes unique among the systems when the VxVM prefixes it with the ID of
the system. The key used for I/O fencing, therefore, consists of eight bytes.

Byte:      0        1        2        3        4        5        6        7
Contents:  Node ID  VxVM-    VxVM-    VxVM-    VxVM-    VxVM-    VxVM-    VxVM-
                    defined  defined  defined  defined  defined  defined  defined

The keys currently assigned to disks can be displayed by using the vxfenadm command.
For example, from the system with node ID 1, display the key for the disk
/dev/rdsk/c2t1d0s2 by entering:
# vxfenadm -g /dev/rdsk/c2t1d0s2
Reading SCSI Registration Keys...
Device Name: /dev/rdsk/c2t1d0s2
Total Number of Keys: 1
key[0]:
Key Value [Numeric Format]: 65,80,71,82,48,48,48,48
Key Value [Character Format]: APGR0000
The -g option of vxfenadm displays all eight bytes of a key value in two formats. In the
numeric format, the first byte, representing the node ID, contains the system ID plus 65.
The remaining bytes contain the ASCII values of the letters of the key, in this case,
“PGR0000.” In the character format, node ID 0 is expressed as “A”; node ID 1 would be “B.”

Chapter 6. Uninstalling DBE/AC for Oracle9i RAC
When removing the VERITAS DBE/AC for Oracle9i RAC software from the systems in the
cluster, use the sequence summarized below. Some of the activities require that you refer
to other documents, namely, the VERITAS Volume Manager Installation Guide, the VERITAS
File System Installation Guide, and the Oracle9i Installation Guide.

1. On each system, take the Oracle and Sqlnet resources offline.

2. Optionally, remove the database.

3. Optionally, uninstall the Oracle9i binaries. If Oracle9i is installed on a cluster
   file system, remove the binaries once; if it is installed locally, remove the
   binaries on each system.

4. On each system, take the Oracle service groups offline.

5. On each system in turn: stop VCS, unmount the VxFS file systems, unconfigure the
   CFS-related drivers, and unconfigure the I/O fencing driver, vxfen. Then remove the
   optional packages (in the order shown):

   pkgrm VRTSfspro VRTSvmpro VRTSobgui VRTSob VRTSfsdoc VRTSvmdoc VRTSvmman

   and remove the required packages (in the order shown):

   pkgrm VRTScavf VRTSodm VRTSgms VRTSglm VRTSvxfs

   Finally, remove VxVM. If the root disk group is encapsulated, refer to the VERITAS
   Volume Manager Installation Guide; otherwise, enter pkgrm VRTSvxvm.

6. Remove VCS from the cluster by running uninstallvcs, and optionally remove the
   license and configuration files from each system.

These steps are described in the sections that follow.

Offlining the Oracle and Sqlnet Resources


Offline the Oracle and Sqlnet resources on each node. For example:
# hares -offline VRT -sys galaxy
# hares -offline rac -sys galaxy
# hares -offline VRT -sys nebula
# hares -offline rac -sys nebula
# hares -offline LISTENER -sys galaxy
# hares -offline LISTENER -sys nebula
These commands stop the Oracle instances running on the specified systems.
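You can then confirm that each resource is offline on both systems; for example, assuming the resource names used above:
# hares -state VRT
# hares -state LISTENER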

Removing the Oracle Database (optional)


You can remove the Oracle database after safely relocating the data as necessary.

Uninstalling Oracle9i (optional)


To uninstall Oracle9i, use the runInstaller utility. You must run the utility on each
system if Oracle9i is installed locally. If Oracle9i is installed on a cluster file system, you
need only run the utility once.

1. On one system, log in as oracle, if necessary.

2. Set the DISPLAY variable.


If you use the Bourne Shell (sh or ksh):
$ DISPLAY=host:0.0;export DISPLAY
If you use the C Shell (csh or tcsh):
$ setenv DISPLAY host:0.0

3. Run the Oracle9i utility runInstaller.


$ cd $ORACLE_BASE/oui/install
$ runInstaller
As the utility starts up, be sure to select the option to uninstall the Oracle9i software.
Refer to the Oracle9i Installation Guide for additional information about running the
utility.

4. If necessary, remove Oracle9i from the other system.


Offlining the Oracle Service Groups


Offline the Oracle service groups on each system:
# hagrp -offline oragrp -sys galaxy
# hagrp -offline oradb_grp -sys galaxy
# hagrp -offline oragrp -sys nebula
# hagrp -offline oradb_grp -sys nebula
These commands take the Oracle resource groups, including all their configured
resources, offline.

Stopping VCS
On one node, stop VCS by entering:
# hastop -local
This command removes membership for ports h (VCS), v (CVM), and w (the vxconfigd
daemon). Ports b (I/O fencing), d (ODM), f (CFS), q (QIO), and o (VCSMM) still have
membership. Use the /sbin/gabconfig -a command to verify this.
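The output at this point resembles the following sketch; the generation numbers are placeholders and will differ on your systems:
# /sbin/gabconfig -a
GAB Port Memberships
===============================================================
Port b gen 1bbf0005 membership 01
Port d gen 1bbf000a membership 01
Port f gen 1bbf000f membership 01
Port o gen 1bbf0001 membership 01
Port q gen 1bbf0012 membership 01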
You can stop VCS on the other node later. Refer to the flow diagram at the beginning of
this chapter.

Unmount Other VxFS File Systems


Other VxFS file systems, that is, those not under VCS control, must be unmounted.

1. Determine file systems to be unmounted. You can check the /etc/mnttab file. For
example, enter:
# cat /etc/mnttab | grep -i vxfs
The output shows each line of /etc/mnttab that contains an entry for a VxFS file
system.

2. By specifying its mount point, unmount each of the vxfs file systems listed in the
output:
# umount mount_point


Unconfiguring the CFS Drivers


To unconfigure the CFS drivers, run the following commands:
# cd /opt/VRTSvcs/rac
# ./uload_drv
The uload_drv script unconfigures the CFS related drivers, which use ports f, q, and
d. You can use the /sbin/gabconfig -a command to verify they are no longer
configured.

Unconfiguring the I/O Fencing Driver


On each system, unconfigure the I/O fencing driver, vxfen, by running the command:
# /etc/init.d/vxfen stop
Use the /sbin/gabconfig -a command to verify port b is no longer configured.

Removing Optional CFS and CVM Packages


From each system, remove the optional CFS and CVM packages.

1. Log in as root user on one of the systems.

2. Remove any optional packages from the system; use the order shown in the following
example:
# pkgrm VRTSfspro VRTSvmpro VRTSobgui VRTSob VRTSfsdoc
VRTSvmdoc VRTSvmman

3. Repeat step 1 and step 2 on the other system to remove any of the optional packages.


Removing Required CFS and CVM Packages


From each system, remove the required CFS and CVM packages.

1. Log in as root user on one of the systems.

2. Remove the required packages from the system; use the order shown in the following
example:
# pkgrm VRTScavf VRTSodm VRTSgms VRTSglm VRTSvxfs

3. Repeat step 1 and step 2 on the other system to remove the required packages.

Remove VxVM Package


If the root file system resides in an encapsulated disk group, refer to the VERITAS Volume
Manager Installation Guide for instructions on how to unconfigure VxVM.
Otherwise, you can use the following command on each system:
# pkgrm VRTSvxvm

Uninstalling VCS
Use the uninstallvcs utility to remove all packages installed by the installvcs
utility, which include VRTSperl, VRTSvcs, VRTSgab, VRTSllt, VRTSvcsag,
VRTSvcsmg, VRTSvcsdc, VRTSdbac, and VRTSvcsor. The uninstallvcs utility also
unconfigures the drivers associated with VCS and VERITAS DBE/AC for Oracle9i RAC.
Before removing the packages from any system in the cluster, shut down applications such
as the Java Console or any VCS enterprise agents that depend on VCS. The
uninstallvcs script supplied with VERITAS DBE/AC for Oracle9i RAC does not
uninstall VCS enterprise agents other than the VCS Oracle enterprise agent. See the
documentation for a specific enterprise agent for instructions on removing it.

Note Do not attempt to uninstall VCS if you are running another VERITAS application
that uses the VCS modules GAB or LLT. Refer to the documentation supplied with
that application for instructions on how to shut down the application before
uninstalling VCS or any of its utilities.


1. Log in as root user, and at the system prompt, enter the following commands:
galaxy# cd /opt/VRTSvcs/install
galaxy# ./uninstallvcs

2. The script begins by discovering any VCS configuration files present on the system
and reporting cluster information from them:
VCS configuration files exist on this system with the following
information:

Cluster Name: vcs_racluster


Cluster ID Number: 7
Systems: galaxy nebula
Service Groups: ClusterService groupA groupB

Do you want to uninstall VCS from all of the systems in this


cluster?(Y)
The default is “Y.” If you answer “N,” the script prompts you to enter the names of
the systems in the cluster from which you want to uninstall VCS.
Verifying communication with nebula......... rsh available

Communication check completed successfully

3. The script determines the current version of the VCS packages it finds to remove and
indicates whether dependencies for them exist. If a package has dependencies, the
dependent packages are listed and you are prompted to remove them.
Checking current installation on galaxy:

Checking product mode ............ VCS version 3.5 installed


Checking VRTSvcsor ................. version 2.0.1 installed
Checking VRTSvcsor dependencies ....................... none
Checking VRTSvcsag ................... version 3.5 installed
Checking VRTSvcsag dependencies ....................... none
Checking VRTSvcsmg ................... version 3.5 installed
Checking VRTSvcsmg dependencies ....................... none
Checking VRTSvcs ..................... version 3.5 installed
Checking VRTSvcs dependencies .................... VRTSvcsor

Do you want to uninstall VRTSvcsor which is dependent on


package VRTSvcs? (Y)

Checking VRTSgab ..................... version 3.5 installed


Checking VRTSgab dependencies .......
.
.


.
Checking current installation on nebula:

Checking product mode ............ VCS version 3.5 installed


Checking VRTSvcsor ................... version 3.5 installed
Checking VRTSvcsor dependencies ....................... none
Checking VRTSvcsag ................... version 3.5 installed
Checking VRTSvcsag dependencies ....................... none
Checking VRTSvcsmg ................... version 3.5 installed
Checking VRTSvcsmg dependencies ....................... none
Checking VRTSvcs ..................... version 3.5 installed
Checking .....
.

4. The script prompts you to remove existing VCS configuration files and the VCS
license key. If you decide not to remove the configuration files, they remain in place.
You must remove them or rename them before reinstalling VCS.

Do you want to remove the VCS configuration files and license


key? (N) y
Do you want to remove the VCS logs and configuration file
copies? (N) y

Installation check completed successfully

5. The script proceeds to terminate VCS processes running on the cluster systems.
Terminating VCS processes on galaxy:

Checking VCS .................................... running


Checking ClusterService group ...........ONLINE on galaxy
Offlining ClusterService Group on galaxy............ Done
Stopping VCS on all systems ........................ Done
Checking CmdServer .............................. running
Killing CmdServer .................................. Done
.
.
Terminating VCS processes on nebula:

Checking VCS .................................not running


Checking CmdServer .............................. running
Killing CmdServer .................................. Done
.
.
.
VCS process termination completed successfully


6. The script proceeds to remove VCS packages from the cluster systems after checking
dependencies among them.
Uninstalling VCS on galaxy:

Removing VRTSvcsw package .......................... Done


Removing VRTSvcsmn package ......................... Done
Removing VRTSvcsmg package ......................... Done
Removing VRTSvcsdc package ......................... Done
.
.
.
Removing VCS configuration files

Uninstalling VCS on nebula:


Removing VRTSvcsw package .......................... Done
.
.
.
VCS uninstallation completed successfully

Removing License Files


1. To see what license key files you have installed on a system, enter:
# /sbin/vxlicrep
The output lists the license keys and information about their respective products.

2. Go to the directory containing the license key files and list them. Enter:
# cd /etc/vx/licenses/lic
# ls -a

3. Using the output from step 1, identify and delete unwanted key files listed in step 2.
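
For example, if the vxlicrep output from step 1 shows that a key stored in a file named key_file.vxlic (a hypothetical name; actual key file names vary) belongs to a product you no longer use, remove that file:
# cd /etc/vx/licenses/lic
# rm key_file.vxlic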

Remove Other Configuration Files (Optional)


You can remove the following configuration files:
/etc/vcsmmtab
/etc/vxfentab
/etc/vxfendg

Troubleshooting 7
Running Scripts for Engineering Support Analysis
You can use a set of three scripts that gather information about the configuration and
status of your cluster and its various modules. The scripts also identify package
information, debugging messages, console messages, and information about disk groups
and volumes. You can forward the output of each of these scripts to VERITAS customer
support, who can analyze the information and assist you in solving any problems.

getdbac
This script gathers information about the VERITAS DBE/AC for Oracle9i RAC modules.
Enter the following command on each system:
# /opt/VRTSvcs/bin/getdbac -local
The file /tmp/vcsopslog.time_stamp.sys_name.tar.Z contains the script’s output.

getcomms
This script gathers information about the GAB and LLT modules. On each system, enter:
# /opt/VRTSgab/getcomms -local
The file /tmp/commslog.time_stamp.tar contains the script’s output.

hagetcf
This script gathers information about the VCS cluster and the status of resources. To run
this script, enter the following command on each system:
# /opt/VRTSvcs/bin/hagetcf
The output from this script is placed in a tar file, /tmp/vcsconf.sys_name.tar, on each
cluster system.

Troubleshooting Topics
The following troubleshooting topics have headings that indicate likely symptoms or that
indicate procedures required for a solution.

Error When Starting an Oracle Instance


If the VCSMM driver (the membership module) is not configured, an error is displayed on
starting the Oracle instance that resembles:
ORA-29702: error Occurred in Cluster Group Operation
To start the driver, enter the following command:
# /sbin/vcsmmconfig -c
The command included in the /etc/vcsmmtab file enables the VCSMM driver to be
started at system boot.
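
Before restarting the instance, you can also verify that the VCSMM module is loaded in the kernel. This check is a sketch that reuses the modinfo technique shown for the LMX module in “Tunable Kernel Driver Parameters”:
# /usr/sbin/modinfo | grep -i vcsmm
If no output appears, the module is not loaded; run /sbin/vcsmmconfig -c to configure it.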

Missing Dialog Box During Installation of Oracle9i Release 1


During installation of Oracle9i Release 1 using the runInstaller utility, if you choose
the Enterprise Edition or Custom Install (with RAC option), a dialog box prompting you
about the installation nodes should appear.
If the dialog box fails to appear, run the following command to verify that the correct
version of the lsnodes command is installed:
/opt/VRTSvcs/rac/lib/install_files/lsnodes
If it does not display the nodes in the cluster, the RAC installation does not succeed,
and Oracle9i is installed as a single instance. Take the following corrective actions:

1. Make sure that the VCSMM driver is running on both the nodes and start it if it is not:
# /sbin/vcsmmconfig -c

2. Make sure you have started with a clean system. It is possible to have a version of the
lsnodes command from another vendor. Delete the contents of directories
oraInventory and $ORACLE_HOME, and the directory where you copied the
Oracle9i files from the Oracle installation CDs.

3. Restart the installation.


Missing Dialog Box During Installation of Oracle9i Release 2


During installation of Oracle9i Release 2 using the runInstaller utility, if you choose
the Enterprise Edition or Custom Install (with RAC option), a dialog box prompting you
about the installation nodes should appear.
If the dialog box fails to appear, exit the runInstaller utility, and do the following:

1. On the first system, enter the following commands, using the appropriate example
below to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so

2. Start the VCSMM driver on both the nodes by entering:


# /sbin/vcsmmconfig -c

3. Restart the runInstaller utility.

Instance Numbers Must be Unique (Error Code 205)


If you encounter error code 205 when the skgxnreg function fails (look in the Oracle
trace files to find the error returned), make sure there is a unique instance number
specified in the $ORACLE_HOME/dbs/init${ORACLE_SID}.ora file on each system.
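
For example, using the instance names from the sample configuration in Appendix A, the relevant line in each init file might resemble the following (instance_number is a standard Oracle initialization parameter; the values shown are illustrative):
On the first system, in $ORACLE_HOME/dbs/initVRT1.ora:
instance_number = 1
On the second system, in $ORACLE_HOME/dbs/initVRT2.ora:
instance_number = 2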

ORACLE_SID Must be Unique (Error Code 304)


If you encounter error code 304 when the skgxnreg function fails (look in the Oracle
trace file to find the error returned), make sure that the ORACLE_SID environment
variable specified during Oracle startup is unique on each system in your cluster. Also,
make sure that the SID attribute for the Oracle resource in the main.cf is specified
locally and is unique.
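
For example, the sample configuration in Appendix A localizes the Sid attribute for each system:
Oracle VRT_db (
Sid @sysa = VRT1
Sid @sysb = VRT2
.
.
)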


Oracle Log Files Show Shutdown Called Even When Not Shut Down Manually
The Oracle enterprise agent calls shutdown if monitoring of the Oracle/Sqlnet resources
fails for some reason. On all cluster nodes, look at the following VCS and Oracle agent log
files for any errors or status:
/var/VRTSvcs/log/engine_A.log
/var/VRTSvcs/log/Oracle_A.log

File System Configured Incorrectly for ODM Shuts Down Oracle
Linking Oracle9i with the VERITAS ODM libraries provides the best file system
performance. See “Linking VERITAS ODM Libraries to Oracle” on page 69 for
instructions on creating the link and confirming that Oracle uses the libraries. Shared file
systems in RAC clusters without ODM Libraries linked to Oracle9i may exhibit slow
performance and are not supported.
If ODM cannot find the resources it needs to provide support for cluster file systems, it
does not allow Oracle to identify cluster files and causes Oracle to fail at startup. Run the
following command:
# cat /dev/odm/cluster
cluster status: enabled
If the status is “enabled,” ODM is supporting cluster files. Any other cluster status
indicates that ODM is not supporting cluster files. Other possible values include:

pending ODM cannot yet communicate with its peers, but anticipates being able to
eventually.

failed ODM cluster support has failed to initialize properly. Check console logs.

disabled ODM is not supporting cluster files. If you think it should, check:
- /dev/odm mount options in /etc/vfstab. If the “nocluster” option is
being used, it can force the “disabled” cluster support state.
- Make sure the VRTSgms (group messaging service) package is installed.
Run the following command:
# /opt/VRTSvcs/bin/chk_dbac_pkgs
The utility reports any missing required packages.

If /dev/odm is not mounted, no status can be reported.
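
To check whether /dev/odm is mounted, you can inspect the mount table; for example:
# mount | grep odm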


VCSIPC Errors in Oracle Trace/Log Files


If you see any VCSIPC errors in the Oracle trace/log files, check /var/adm/messages
for any LMX error messages. If you see messages that contain any of the following:
. . . out of buffers
. . . out of ports
. . . no minors available
Refer to “Tunable Kernel Driver Parameters” on page 123.
If you see any VCSIPC warning messages in Oracle trace/log files that resemble:
connection invalid
or,
Reporting communication error with node
check whether the Oracle Real Application Cluster instance on the other system is still
running or has been restarted. The warning message indicates that the VCSIPC/LMX
connection is no longer valid.

Shared Disk Group Cannot be Imported


If you see a message resembling:
vxvm:vxconfigd:ERROR:vold_pgr_register(/dev/vx/rdmp/disk_name):
local_node_id<0
Please make sure that CVM and vxfen are configured and operating correctly

This message is displayed when CVM cannot retrieve the node ID of the local system
from the vxfen driver. This usually happens when port b is not configured. Verify that
the vxfen driver is configured by checking the GAB ports with the command:
# /sbin/gabconfig -a
Port b must exist on the local system.
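
On a healthy two-node cluster, the output resembles the following sketch (the generation numbers and membership values vary by cluster; this is the general format, not verbatim output from this release):
GAB Port Memberships
===============================================================
Port a gen 4a1c0001 membership 01
Port b gen 4a1c0002 membership 01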


vxfentsthdw Fails When SCSI TEST UNIT READY Command Fails
If you see a message resembling:
Issuing SCSI TEST UNIT READY to disk reserved by other node FAILED.
Contact the storage provider to have the hardware configuration
fixed.
The disk array does not support returning success for a SCSI TEST UNIT READY
command when another host has the disk reserved using SCSI-3 persistent reservations.
This happens with Hitachi Data Systems 99XX arrays if bit 186 of the system mode option
is not enabled.

CVMVolDg Does Not Go Online Even Though CVMCluster is Online
When the CVMCluster resource goes online, the shared disk groups are automatically
imported. If the disk group import fails for some reason, the CVMVolDg resources fault.
Clearing and offlining the CVMVolDg type resources does not fix the problem.
Workaround (a command example follows these steps):

1. Fix the problem causing the import of the shared disk group to fail.

2. Offline the service group containing the resource of type CVMVolDg as well as the
service group containing the CVMCluster resource type.

3. Bring the service group containing the CVMCluster resource online.

4. Bring the service group containing the CVMVolDg resource online.
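
For example, using the service group names from the sample configuration in Appendix A (substitute your own group and system names), the workaround steps correspond to commands resembling:
# hagrp -offline oradb1_grp -sys sysa
# hagrp -offline cvm -sys sysa
# hagrp -online cvm -sys sysa
# hagrp -online oradb1_grp -sys sysa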

Restoring Communication Between Host and Disks After Cable Disconnection
If a fiber cable is inadvertently disconnected between the host and a disk, you can restore
communication between the host and the disk without rebooting by doing the following:

1. Reconnect the cable.

2. Use the format command to verify that the host sees the disks. It may take a few
minutes before the host is capable of seeing the disk.


3. Issue the following vxdctl command to force the VxVM configuration daemon
vxconfigd to rescan the disks:
# vxdctl enable
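
You can then confirm that VxVM sees the disk again; for example:
# vxdisk list
The reconnected disk should be listed with an online status.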

Adding a System to an Existing Two-Node Cluster Causes an Error
If the local system attempts to join an existing two-node cluster, an error is generated that
resembles:
vxfenconfig: ERROR: The maximum cluster size is two nodes.
This error is generated as a result of a check to limit cluster size to two systems. The
following messages are also sent to the console:
<date,time> <system> vxfen: WARNING: Cluster size maximum is two.
<date,time> <system> Dropping out of cluster.
<date,time> <system>
<date,time> <system> gab: GAB:20032: Port b closed.

Removing Existing Keys From Disks


To remove the registration and reservation keys created by another node from a disk, use
the following procedure:

1. Create a file to contain the access names of the disks:


# vi /tmp/disklist
For example:
/dev/rdsk/c1t0d11s2

2. Read the existing keys:


# vxfenadm -g all -f /tmp/disklist
The output from this command displays the key:
Device Name: /dev/rdsk/c1t0d11s2
Total Number Of Keys: 1
key[0]:
Key Value [Numeric Format]: 65,49,45,45,45,45,45,45
Key Value [Character Format]: A1------


3. If you know on which node the key was created, log in to that node and enter the
following command:
# vxfenadm -x -kA1 -f /tmp/disklist
The key is removed.

4. If you do not know on which node the key was created, follow step 5 through step 7
to remove the key.

5. Register a second key “A2” temporarily with the disk:


# vxfenadm -m -kA2 -f /tmp/disklist
Registration completed for disk path /dev/rdsk/c1t0d11s2

6. Remove the first key from the disk by preempting it with the second key:
# vxfenadm -p -kA2 -f /tmp/disklist -vA1
key: A2------ prempted the key: A1------ on disk
/dev/rdsk/c1t0d11s2

7. Remove the temporary key assigned in step 5.


# vxfenadm -x -kA2 -f /tmp/disklist
Deleted the key : [A2------] from device /dev/rdsk/c1t0d11s2
No registration keys exist for the disk.


System Panics to Prevent Potential Data Corruption


When a system experiences a split brain condition and is ejected from the cluster, it panics
and displays the following console message:
VXFEN:vxfen_plat_panic: Local cluster node ejected from cluster to
prevent potential data corruption.

How vxfen Driver Checks for Pre-existing Split Brain Condition


The vxfen driver functions to prevent an ejected node from rejoining the cluster after the
failure of the private network links and before the private network links are repaired.
For example, suppose the cluster of system 1 and system 2 is functioning normally when
the private network links are broken. Also suppose system 1 is the ejected system. When
system 1 reboots before the private network links are restored, its membership
configuration does not show system 2; however, when it attempts to register with the
coordinator disks, it discovers system 2 is registered with them. Given this conflicting
information about system 2, system 1 does not join the cluster and returns an error from
vxfenconfig that resembles:
vxfenconfig: ERROR: There exists the potential for a preexisting
split-brain. The coordinator disks list no nodes which are in the
current membership. However, they also list nodes which are not
in the current membership.

I/O Fencing Disabled!

Also, the following information is displayed on the console:


<date> <system name> vxfen: WARNING: Potentially a preexisting
<date> <system name> split-brain.
<date> <system name> Dropping out of cluster.
<date> <system name> Refer to user documentation for steps
<date> <system name> required to clear preexisting split-brain.
<date> <system name>
<date> <system name> I/O Fencing DISABLED!
<date> <system name>
<date> <system name> gab: GAB:20032: Port b closed
However, the same error can occur when the private network links are working and both
systems go down, system 1 reboots, and system 2 fails to come back up. From system 1's
view of the cluster, system 2 may still have the registrations on the coordinator disks.


Case 1: System 2 Up, System 1 Ejected (Actual Potential Split Brain)


Determine whether system 1 is up. If it is up and running, shut it down and repair the
private network links to remove the split brain condition. Reboot system 1.

Case 2: System 2 Down, System 1 Ejected (Apparent Potential Split Brain)

1. Physically verify that system 2 is down.

2. Verify the systems currently registered with the coordinator disks. Use the following
command:
# vxfenadm -g all -f /etc/vxfentab
The output of this command identifies the keys registered with the coordinator disks.
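
The output resembles the display shown in step 2 of “Removing Existing Keys From Disks”; for example (the key values shown are illustrative):
Device Name: /dev/rdsk/c1t0d11s2
Total Number Of Keys: 1
key[0]:
Key Value [Numeric Format]: 65,49,45,45,45,45,45,45
Key Value [Character Format]: A1------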

3. Clear the keys on the coordinator disks as well as the data disks using the command
/opt/VRTSvcs/rac/bin/vxfenclearpre. See “Using vxfenclearpre Command
to Clear Keys After Split Brain” on page 107.

4. Make any necessary repairs to system 2 and reboot.


Using vxfenclearpre Command to Clear Keys After Split Brain


When you have encountered a split brain condition, use the vxfenclearpre command
to remove SCSI-3 registrations and reservations on the coordinator disks as well as on the
data disks in all shared disk groups.

1. Shut down all other systems in the cluster that have access to the shared storage. This
prevents data corruption.

2. Start the script:


# cd /opt/VRTSvcs/rac/bin
# ./vxfenclearpre

3. Read the script’s introduction and warning. Then, you can choose to let the script run.
Do you still want to continue: [y/n] (default : n)
y

Note Informational messages resembling the following may appear on the console of one
of the nodes in the cluster when a node is ejected from a disk/LUN:

<date> <system name> scsi: WARNING: /sbus@3,0/lpfs@0,0/sd@0,1(sd91):


<date> <system name> Error for Command: <undecoded cmd 0x5f> Error Level:
Informational
<date> <system name> scsi: Requested Block: 0 Error Block 0
<date> <system name> scsi: Vendor: <vendor> Serial Number: 0400759B006E
<date> <system name> scsi: Sense Key: Unit Attention
<date> <system name> scsi: ASC: 0x2a (<vendor unique code 0x2a>), ASCQ: 0x4,
FRU: 0x0

These informational messages may be ignored.

Cleaning up the coordinator disks...

Cleaning up the data disks for all shared disk groups...

Successfully removed SCSI-3 persistent registration and


reservations from the coordinator disks as well as the shared
data disks.

Reboot the server to proceed with normal cluster startup...


#

4. Reboot all systems in the cluster.

Sample main.cf File A
A sample VCS configuration file, /etc/VRTSvcs/conf/sample_rac/main.cf, is
available online. It is also shown here for offline reference.

/etc/VRTSvcs/conf/sample_rac/main.cf
// %W% %G% %U% - %Q% #
//ident "@(#)vcsops:%M% %I%"
//
// Copyright (c) 2000-2002 VERITAS Software Corporation. All Rights
// Reserved.
//
// THIS SOFTWARE CONTAINS CONFIDENTIAL INFORMATION AND
// TRADE SECRETS OF VERITAS SOFTWARE. USE, DISCLOSURE,
// OR REPRODUCTION IS PROHIBITED WITHOUT THE PRIOR
// EXPRESS WRITTEN PERMISSION OF VERITAS SOFTWARE CORPORATION
//
// RESTRICTED RIGHTS LEGEND
// USE, DUPLICATION OR DISCLOSURE BY THE U.S. GOVERNMENT IS
// SUBJECT TO RESTRICTIONS SET FORTH IN
// THE VERITAS SOFTWARE CORPORATION LICENSE AGREEMENT AND AS
// PROVIDED IN DFARS 227.7202-1(a) and
// 227.7202-3(a) (1998), and FAR 12.212, as applicable.
// VERITAS Software Corporation
// 350 Ellis Street,
// Mountain View, CA 94043.
// UNPUBLISHED -- RIGHTS RESERVED UNDER THE COPYRIGHT
// LAWS OF THE UNITED STATES. USE OF A COPYRIGHT NOTICE
// IS PRECAUTIONARY ONLY AND DOES NOT IMPLY PUBLICATION
// OR DISCLOSURE.
//

include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"


cluster vcs

system sysa

system sysb

group cvm (
SystemList = { sysa = 0, sysb = 1 }
AutoFailOver = 0
Parallel = 1
AutoStartList = { sysa, sysb }
)

CVMCluster cvm_clus (
Critical = 0
CVMClustName = vcs
CVMNodeId = { sysa = 0, sysb = 1 }
CVMTransport = gab
CVMTimeout = 200
)

CFSQlogckd qlogckd (
Critical = 0
)

CFSfsckd vxfsckd (
)

CVMVolDg orabin_voldg (
CVMDiskGroup = orabindg
CVMVolume = { "orabinvol", "srvmvol" }
CVMActivation = sw
)

CFSMount orabin_mnt (
Critical = 0
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
Primary = sysa
)

NIC listener_hme0 (
Device = hme0
NetworkType = ether
)


IP listener_ip (
Device = hme0
Address @sysa = "192.2.40.21"
Address @sysb = "192.2.40.22"
)

Sqlnet LISTENER (
Owner = oracle
Home = "/oracle/orahome"
TnsAdmin = "/oracle/orahome/network/admin"
MonScript = "./bin/Sqlnet/LsnrTest.pl"
Listener @sysa = LISTENER_a
Listener @sysb = LISTENER_b
EnvFile = "/opt/VRTSvcs/bin/Sqlnet/envfile"
)

vxfsckd requires qlogckd


orabin_voldg requires cvm_clus
orabin_mnt requires vxfsckd
orabin_mnt requires orabin_voldg
listener_ip requires listener_hme0
LISTENER requires listener_ip
LISTENER requires orabin_mnt

// resource dependency tree


//
// group cvm
// {
// Sqlnet LISTENER
// {
// CFSMount orabin_mnt
// {
// CFSfsckd vxfsckd
// {
// CFSQlogckd qlogckd
// }
// CVMVolDg orabin_voldg
// {
// CVMCluster cvm_clus
// }
// }
// IP listener_ip
// {
// NIC listener_hme0
// }
// }
// }


group oradb1_grp (
SystemList = { sysa = 0, sysb = 1 }
AutoFailOver = 1
Parallel = 1
AutoStartList = { sysa, sysb }
)

CVMVolDg oradb1_voldg (
CVMDiskGroup = oradb1dg
CVMVolume = { "oradb1vol" }
CVMActivation = sw
)

CFSMount oradb1_mnt (
MountPoint = "/oradb1"
BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
)

Oracle VRT_db (
Sid @sysa = VRT1
Sid @sysb = VRT2
Owner = oracle
Home = "/oracle/orahome"
Pfile @sysa = "/oracle/orahome/dbs/initVRT1.ora"
Pfile @sysb = "/oracle/orahome/dbs/initVRT2.ora"
User = scott
Pword = tiger
Table @sysa = vcstable_sysa
Table @sysb = vcstable_sysb
MonScript = "./bin/Oracle/SqlTest.pl"
AutoEndBkup = 1
EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
)

requires group cvm online local firm


oradb1_mnt requires oradb1_voldg
VRT_db requires oradb1_mnt


// resource dependency tree


//
// group oradb1_grp
// {
// Oracle VRT_db
// {
// CFSMount oradb1_mnt
// {
// CVMVolDg oradb1_voldg
// }
// }
// }

group oradb2_grp (
SystemList = { sysa = 1, sysb = 0 }
AutoFailOver = 1
Parallel = 1
AutoStartList = { sysb, sysa }
)

CVMVolDg oradb2_voldg (
CVMDiskGroup = oradbdg2
CVMVolume = { "oradb2vol" }
CVMActivation = sw
)

CFSMount oradb2_mnt (
MountPoint = "/oradb2"
BlockDevice = "/dev/vx/dsk/oradbdg2/oradb2vol"
Primary = sysb
)

Oracle rac_db (
Sid @sysa = rac1
Sid @sysb = rac2
Owner = oracle
Home = "/oracle/orahome"
Pfile @sysa = "/oracle/orahome/dbs/initrac1.ora"
Pfile @sysb = "/oracle/orahome/dbs/initrac2.ora"
User = scott
Pword = tiger
Table @sysa = vcstable_sysa
Table @sysb = vcstable_sysb
MonScript = "./bin/Oracle/SqlTest.pl"
AutoEndBkup = 1
EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
)


requires group cvm online local firm


oradb2_mnt requires oradb2_voldg
rac_db requires oradb2_mnt

// resource dependency tree


//
// group oradb2_grp
// {
// Oracle rac_db
// {
// CFSMount oradb2_mnt
// {
// CVMVolDg oradb2_voldg
// }
// }
// }

CVMCluster, CVMVolDg, and CFSMount Agents B
This appendix describes the entry points and the attributes of the CVMCluster,
CVMVolDg, and CFSMount agents. Use this information to make necessary changes to
the configuration.

CVMCluster Agent
The CVMCluster resource is configured automatically during installation. The
CVMCluster agent controls system membership on the cluster port associated with VxVM
in a cluster.
The following table describes the entry points used by the CVMCluster agent.

Entry Point Description

Online Joins a system to the CVM cluster port.

Offline Removes a system from the CVM cluster port.

Monitor Monitors the system’s CVM cluster membership state.


CVMCluster Agent Type

Attribute Dimension Description

CVMClustName string-scalar Name of the cluster.

CVMNodeId{} string-association An associative list. The first part names the system and
the second part contains the system’s LLT ID number.

CVMTransport string-scalar Specifies the cluster messaging mechanism.
Default = gab

CVMTimeout integer-scalar Timeout in seconds used for CVM cluster reconfiguration.
Default = 200

CVMCluster Agent Type Definition


The following type definition is included in the file, CVMTypes.cf. Note that the
CVMNodeAddr, PortConfigd, and PortKmsgd attributes are not used in a DBE/AC
environment because GAB, the required cluster communication messaging mechanism,
does not use them.

type CVMCluster (
static int NumThreads = 1
static int OnlineRetryLimit = 2
static int OnlineTimeout = 400
static str ArgList[] = { CVMTransport, CVMClustName,
CVMNodeAddr, CVMNodeId, PortConfigd, PortKmsgd,
CVMTimeout }
NameRule = ""
str CVMClustName
str CVMNodeAddr{}
str CVMNodeId{}
str CVMTransport
int PortConfigd
int PortKmsgd
int CVMTimeout
)


CVMCluster Agent Sample Configuration


The following is an example definition for the CVMCluster service group. See also
Appendix A “Sample main.cf File” on page 109 for a more extensive main.cf example
that includes the CVMCluster resource.
CVMCluster cvm_clus (
Critical = 0
CVMClustName = RACCluster1
CVMNodeId = { galaxy = 0, nebula = 1 }
CVMTransport = gab
CVMTimeout = 200
)

Configuring the CVMVolDg and CFSMount Resources


The CVMVolDg agent represents and controls CVM disk groups and the CVM volumes
within the disk groups. Because of the global nature of the CVM disk groups and the
CVM volumes, they are imported only once on the CVM master node.
Configure the CVMVolDg agent for each disk group used by an Oracle service group. A
disk group must be configured to only one Oracle service group. If cluster file systems are
used for the database, configure the CFSMount agent for each volume in the disk group.

CVMVolDg Agent, Entry Points


The following table describes the entry points used by the CVMVolDg agent.

Entry Point Description

Online If the system is the MASTER and the disk group is not imported, it
imports the disk group and starts all the volumes in the shared disk
group. It then sets the disk group activation mode to shared-write
as long as the CVMActivation attribute is set to sw. The activation
mode can be set on both slave and master systems.

Offline Sets the disk group activation mode to off so that all the volumes in
disk group are invalid.

Monitor Monitors the specified critical volumes in the disk group. The volumes
to be monitored are specified by the CVMVolume attribute. At least one
volume in a disk group must be specified.

Clean Sets the disk group activation mode to off so that all the volumes in
disk group are invalid (same as the Offline entry point).


CVMVolDg Agent Type, Attribute Descriptions


The following table describes the user-modifiable attributes of the CVMVolDg resource
type.

Attribute Dimension Description

CVMDiskGroup string-scalar Names the disk group.

CVMVolume string-keylist Lists the critical volumes in the disk group. At least one volume
in the disk group must be specified.

CVMActivation string-scalar Sets the activation mode for the disk group.
Default = sw

CVMVolDg Agent Type Definition


The CVMVolDg type definition is included in the CVMTypes.cf file, installed by the
installDBAC utility.
type CVMVolDg (
static str ArgList[] = { CVMDiskGroup, CVMVolume, CVMActivation }
NameRule = ""
str CVMDiskGroup
keylist CVMVolume[]
str CVMActivation
)

Sample CVMVolDg Agent Configuration


Each Oracle service group requires a CVMVolDg resource type to be defined. Refer to
“/etc/VRTSvcs/conf/sample_rac/main.cf” on page 109 to see CVMVolDg defined in a
more extensive example.
CVMVolDg ora_voldg (
CVMDiskGroup = oradatadg
CVMVolume = { oradata1, oradata2 }
CVMActivation = sw
)


CFSMount Agent, Entry Points


The CFSMount agent brings online, takes offline, and monitors a cluster file system mount
point. The agent executable is /opt/VRTSvcs/bin/CFSMount/CFSMountAgent. The
CFSMount type definition is in the file /etc/VRTSvcs/conf/config/CFSTypes.cf.

Entry Point Description

Online Mounts a block device in cluster mode.

Offline Unmounts the file system, forcing unmount if necessary, and sets
primary to secondary if necessary.

Monitor Determines if the file system is mounted. Checks mount status using
the fsclustadm command.

Clean A null operation for a cluster file system mount.

CFSMount Agent Type, Attribute Descriptions


The table lists user-modifiable attributes of the CFSMount Agent resource type.

Attribute Dimension Description

MountPoint string-scalar Directory for the mount point.

BlockDevice string-scalar Block device for the mount point.

MountOpt string-scalar Options for the mount command. To create a valid MountOpt
attribute string:
- Use the VxFS type-specific options only.
- Do not use the -o flag to specify the VxFS-specific options.
- Do not use the -F vxfs file system type option.
- The cluster option is not required.
- Specify options in comma-separated list as in these examples:
ro
ro,cluster
blkclear,mincache=closesync

Primary string-scalar INFORMATION ONLY. Stores primary node name for a VxFS file
system. Primary is automatically modified when an unmounted
file system is mounted or another node becomes the primary.


CFSMount Agent Type Definition


The CFSMount agent type definition is included in the CFSTypes.cf file, installed by the
installDBAC utility.
type CFSMount (
static keylist RegList = { MountOpt, Policy, NodeList }
static int FaultOnMonitorTimeouts = 1
static int OnlineWaitLimit = 0
static str ArgList[] = { MountPoint, BlockDevice,
MountOpt }
NameRule = resource.MountPoint
str MountPoint
str MountType
str BlockDevice
str MountOpt
str Primary
keylist NodeList
keylist Policy
)

Sample CFSMount Agent Configuration


Each Oracle service group requires a CFSMount resource type to be defined. Refer to
“/etc/VRTSvcs/conf/sample_rac/main.cf” on page 109 to see CFSMount defined in a
more extensive example.
CFSMount ora_mount (
MountPoint = "/oradata"
BlockDevice = "/dev/vx/dsk/oradatadg/oradatavol1"
Primary = nebula
)

Tunable Kernel Driver Parameters C
LMX Tunable Parameters
Edit the file /kernel/drv/lmx.conf to change the values of the LMX driver tunable
global parameters. The following table describes the LMX driver tunable parameters.

LMX Parameter Description

lltport Specifies the LLT port with which LMX registers and receives data. The port can
be an unused port between 0 and 31; dedicated ports such as 0, 7, 31 and others cannot
be used. Use the command /sbin/lltstat -p to determine the ports in use.
Default = 10. Maximum = N/A.

contexts (minors) Specifies the maximum number of contexts system-wide. Each Oracle
process typically has two LMX contexts. “Contexts” and “minors” are used
interchangeably in the documentation; “context” is an Oracle-specific term and should
be used to specify the value in the lmx.conf file.
Default = 8192. Maximum = 65535.

ports Specifies the number of communication endpoints for transferring messages from
the sender to the receiver in a uni-directional manner.
Default = 4096. Maximum = 65535.

buffers Specifies the number of addressable regions in memory to which LMX data can
be copied.
Default = 4096. Maximum = 65535.

update Enables asynchronous I/O. A value of zero (0) disables asynchronous I/O.
Default = 1 [enable]. Maximum = N/A.

msgbuf Specifies the size of the LMX message buffer in kilobytes (KB).
Default = 32. Maximum = 64.


Example: Configuring LMX Parameters


If you see the message “no minors available” on one node, you can edit the file
/kernel/drv/lmx.conf and add a configuration parameter increasing the value for
the maximum number of contexts. (While the term “minors” is reported in the error
message, the term “contexts” must be used in the configuration file.) Be aware that
increasing the number of contexts on a system has some impact on the resources of that
system.
In the following example, configuring contexts=16384 allows a maximum of 8192
Oracle processes (8192 * 2 = 16384). Note that double-quotes are not used to specify an
integer value.
#
# LMX configuration file
#
name="lmx" parent="pseudo" contexts=16384 instance=0;
For the changes to take effect, either reboot the system, or reconfigure the LMX module
using the following steps:

1. Shut down all Oracle service groups on the system:


# hagrp -offline oragrp -sys galaxy

2. Stop all Oracle client processes on the system, such as sqlplus and svrmgrl.

3. Unconfigure the LMX module:


# /sbin/lmxconfig -U

4. Determine the LMX module ID:


# /usr/sbin/modinfo | grep -i lmx
The module ID is the number in the first column of the output.
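
For example, the relevant line of output might resemble the following sketch (the module ID, addresses, and description vary by system and release):
93 102d2000 4bb8 109 1 lmx (LMX 3.5)
In this example, the module ID is 93.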

5. Unload the LMX module, using the module ID you determined:


# /usr/sbin/modunload -i module_ID

6. Configure the LMX module:


# /sbin/lmxconfig -c

7. Bring the service groups back online:


# hagrp -online oragrp -sys galaxy


vxfen Tunable Parameters


On each node, edit the file /kernel/drv/vxfen.conf to change the values of the vxfen
driver tunable global parameters. You must reboot the system to put changes into effect.

vxfen Parameter Description

dbg_log_size Size of the kernel log buffer in bytes. The minimum value is 32768 bytes.
Default = 32768. Maximum = 131072.

For example, to change the size of the kernel log buffer to 65536 bytes, edit the file and
add the configuration parameter dbg_log_size=65536:
#
# vxfen configuration file
#
name="vxfen" parent="pseudo" dbg_log_size=65536 instance=0;

Error Messages D
LMX Error Messages, Critical
The following table lists LMX kernel module error messages. These messages report
critical errors seen when the system runs out of memory, when LMX is unable to
communicate with LLT, or when you are unable to load or unload LMX. Refer to
“Running Scripts for Engineering Support Analysis” on page 97 for information on how
to gather information about your systems and configuration that VERITAS support
personnel can use to assist you.

Message ID LMX Message

00001 lmxload packet header size incorrect (number)

00002 lmxload invalid lmx_llt_port number

00003 lmxload context memory alloc failed

00004 lmxload port memory alloc failed

00005 lmxload buffer memory alloc failed

00006 lmxload node memory alloc failed

00007 lmxload msgbuf memory alloc failed

00008 lmxload tmp msgbuf memory alloc failed

00009 lmxunload node number conngrp not NULL

00010 lmxopen return, minor non-zero

00011 lmxopen return, no minors available

00012 lmxconnect lmxlltopen(1) err= number


00013 lmxconnect new connection memory alloc failed

00014 lmxconnect kernel request memory alloc failed

00015 lmxconnect mblk memory alloc failed

00016 lmxconnect conn group memory alloc failed

00017 lmxlltfini: LLT unregister failed err = number

00018 lmxload contexts number > number, max contexts = system limit = number

00019 lmxload ports number > number, max ports = system limit = number

00020 lmxload buffers number > number, max buffers = system limit = number

00021 lmxload msgbuf number > number, max msgbuf size = system limit = number

LMX Error Messages, Non-Critical


The following table contains LMX error messages that may be displayed during runtime.
Refer to “Running Scripts for Engineering Support Analysis” on page 97 for information
on how to gather information about your systems and configuration that VERITAS
support personnel can use to assist you.
If you encounter errors while running your Oracle application due to the display of these
messages, you may use the lmxconfig command to turn off their display. For example,
to disable the display of the messages:
# /sbin/lmxconfig -e 0
To re-enable the display of the messages, you can enter:
# /sbin/lmxconfig -e 1

Message ID LMX Message

06001 lmxreqlink duplicate kreq= 0xaddress, req= 0xaddress

06002 lmxreqlink duplicate ureq= 0xaddress kr1= 0xaddress, kr2= 0xaddress req type = number


06003 lmxrequnlink not found kreq= 0xaddress from= number

06004 lmxrequnlink_l not found kreq= 0xaddress from= number

06101 lmxpollreq not in doneq CONN kreq= 0xaddress

06201 lmxnewcontext lltinit fail err= number

06202 lmxnewcontext lltregister fail err= number

06301 lmxrecvport port not found unode= number node= number ctx= number

06302 lmxrecvport port not found (no port) ctx= number

06303 lmxrecvport port not found ugen= number gen= number ctx= number

06304 lmxrecvport dup request detected

06401 lmxinitport out of ports

06501 lmxsendport lltsend node= number err= number

06601 lmxinitbuf out of buffers

06602 lmxinitbuf fail ctx= number ret= number

06701 lmxsendbuf lltsend node= number err= number

06801 lmxconfig insufficient privilege, uid= number

06901 lmxlltnodestat: LLT getnodeinfo failed err= number

VxVM Errors Related to I/O Fencing

Message:
vold_pgr_register(disk_path): failed to open the vxfen device. Please make sure
that the vxfen driver is installed and configured
Explanation:
The vxfen driver has not been configured. Follow the instructions in “Setting Up
Coordinator Disks” on page 44 to set up coordinator disks and start I/O fencing. Then
clear the faulted resources and online the service groups.

Message:
vold_pgr_register(disk_path): Probably incompatible vxfen driver.
Explanation:
Incompatible versions of VxVM and the vxfen driver are installed on the system.
Install the proper version of DBE/AC.

VXFEN Driver Error Messages

Message:
VXFEN: Unable to register with coordinator disk with serial number: xxxx
Explanation:
This message appears when the vxfen driver is unable to register with one of the
coordinator disks. The serial number of the coordinator disk that failed is printed.

Message:
VXFEN: Unable to register with a majority of the coordinator disks. Dropping out
of cluster.
Explanation:
This message appears when the vxfen driver is unable to register with a majority of
the coordinator disks. The problems with the coordinator disks must be cleared before
fencing can be enabled. This message is preceded with the message “VXFEN: Unable to
register with coordinator disk with serial number xxxx.”

VXFEN Driver Informational Message


date and time VXFEN:00021: Starting to eject leaving node(s) from data disks.

date and time VXFEN:00022: Completed ejection of leaving node(s) from data disks.
These messages are for information only. They show how long it takes the data disks to be
fenced for nodes that have left the cluster.

Informational Messages When Node is Ejected


Informational messages resembling the following may appear on the console of one of the
nodes in the cluster when a node is ejected from a disk/LUN:
<date> <system name> scsi: WARNING: /sbus@3,0/lpfs@0,0/sd@0,1(sd91):
<date> <system name> Error for Command: <undecoded cmd 0x5f> Error
Level: Informational
<date> <system name> scsi: Requested Block: 0 Error Block 0
<date> <system name> scsi: Vendor: <vendor> Serial Number:
0400759B006E
<date> <system name> scsi: Sense Key: Unit Attention
<date> <system name> scsi: ASC: 0x2a (<vendor unique code 0x2a>), ASCQ:
0x4, FRU: 0x0
These informational messages may be ignored.

Creating Starter Databases E
The optional procedures in this appendix describe suggested methods for creating a
starter Oracle9i database on shared storage. They are provided in case you are not creating
your own database using tools you may have available.
You can create the database on raw VxVM volumes or on VxFS shared file systems. Two
methods to create the database are described. You can use either dbca (Oracle9i Database
Creation Assistant) or VERITAS provided scripts.
For Oracle9i Release 1 and Release 2, you can use dbca to create a starter database in raw
volumes, or you can use VERITAS scripts to create a database in either raw volumes or a
VxFS file system.

Using Oracle dbca to Create a Starter Database


If you intend to use the Oracle9i Database Configuration Assistant (dbca) utility to create
a starter database in raw VxVM volumes, you must prepare the shared storage for the
database tablespaces.
If you intend to create the database in a VxFS file system, you must use the VERITAS
scripts instead of dbca. See “Using VERITAS Scripts to Create a Starter Database” on
page 137.


Preparing Tablespaces for Use With dbca


When creating an Oracle general purpose starter database, you must prepare a
configuration file that the database creation utilities can use to create the tablespaces. The
tablespace name and size requirements are described in the following table. Note that the
sizes are recommended, not minimum.

Tablespace Requirements

Tablespace Recommended Size


system1 1 gigabyte (GB)
spfile1 10 megabytes (MB)
users1 1 GB
temp1 1 GB
undotbs1 700 MB
undotbs2 700 MB
example1 350 MB
cwmlite1 200 MB
indx1 150 MB
tools1 50 MB
drsys1 200 MB
control1 200 MB
control2 200 MB
redo1_1 200 MB
redo1_2 200 MB
redo2_1 200 MB
redo2_2 200 MB

The total recommended size required for these tablespaces is 6.56 GB.


Creating Shared Raw Volumes for Tablespaces


The procedure described in this section applies if you plan to use the Oracle dbca utility
to create a starter database on raw volumes.

1. Log in as root user.

2. On the master node, create a shared disk group:


# vxdg init ora_dg c2t3d1s2

3. Create a volume in the shared group for each of the tablespaces listed in the previous
table, “Tablespace Requirements” on page 134:
# vxassist -g ora_dg make VRT_system1 1000M
# vxassist -g ora_dg make VRT_spfile1 10M
.
.

4. Enable write access to the shared disk group. On the master node, enter:
# vxdg deport ora_dg
# vxdg -s import ora_dg
# vxvol -g ora_dg startall
# vxdg -g ora_dg set activation=sw

5. On the slave node, enter:


# vxdg -g ora_dg set activation=sw
Refer to the description of disk group activation modes in the VERITAS Volume
Manager Administrator’s Guide for more information (see the chapter on “Cluster
Functionality”).

6. Define the access mode and permissions for the volumes storing the Oracle data. For
each volume listed in $ORACLE_HOME/raw_config, use the vxedit(1M) command:
vxedit -g disk_group set group=group user=user mode=660 volume
For example:
# vxedit -g ora_dg set group=dba user=oracle mode=660 VRT_system1

In this example, VRT_system1 is the name of one of the volumes. Repeat the
command to define access mode and permissions for each volume in the ora_dg.
You can now create the database. See “Creating the Oracle Starter Database Using
Oracle9i dbca” on page 136.


Creating the Oracle Starter Database Using Oracle9i dbca


As Oracle user, start the Oracle9i Database Configuration Assistant (dbca) utility on the
master node to create a general purpose database. The dbca utility is a graphical user
interface and requires setting of the DISPLAY environment variable to run. Refer to the
Oracle9i Installation Guide.

Note If you find the dbca utility is unable to create a database, you can create the starter
database using the VERITAS scripts described in the next section.


Using VERITAS Scripts to Create a Starter Database


You may use the VERITAS scripts to create a starter database in either raw VxVM volumes
(see below) or a VERITAS cluster file system (see “Using VERITAS Scripts to Create
Database on Cluster File System” on page 139).

Using VERITAS Scripts to Create Database on Raw Volumes


When you create a starter database on raw volumes, you can manually create the Oracle
database using the scripts in the /opt/VRTSvcs/rac/db_scripts/raw directory. A
README file in that directory guides you through the following procedure, which assumes
you use sh or ksh. If you use another shell, make sure the syntax is appropriate.

1. Create a disk group on shared storage in which to create the database. For example:
# vxdg init raw_db_dg c2t3d1s2

2. Deport and import the disk group, start the volume, and enable write access to the
shared disk group. On the master node, enter:
# vxdg deport raw_db_dg
# vxdg -s import raw_db_dg
# vxvol -g raw_db_dg startall
# vxdg -g raw_db_dg set activation=sw

3. On the slave node, enter:


# vxdg -g raw_db_dg set activation=sw
Refer to the description of disk group activation modes in the VERITAS Volume
Manager Administrator’s Guide for more information (see the chapter on “Cluster
Functionality”).

4. Log in as user oracle.

5. Create a directory in which to copy the database creation scripts from the
/opt/VRTSvcs/rac/db_scripts/raw directory. Do not edit the original files in
the original directory because creating the database replaces the variables in the
scripts. For example:
$ mkdir db_cr_raw
$ cp /opt/VRTSvcs/rac/db_scripts/raw/* db_cr_raw

6. Read the file README that is copied with the database creation scripts. This procedure
is described in the README.


7. Edit the file conf, and set all variables as they apply to your environment:
$ vi conf

8. Log in as root user.

9. As root user, source the conf file:


# . conf

10. Run the script cr_vols. This script creates volumes for tablespaces in the disk group
you created in step 1.

11. Log in as user oracle.

12. As user oracle, source the conf file:


$ . conf

13. Set the ORACLE_BASE and ORACLE_HOME environment variables.

14. Run the convert command. This command uses the values of the environment
variables set in the conf file to modify the database creation scripts. Enter:
$ convert

15. Examine the script runall.sh and use comment symbols to prevent the use of any
database options you don’t want. The following example shows the runall.sh script
edited such that demo schemas are not created:
$ vi runall.sh

# This is a sample database creation script.


# It is assumed that the script is run from Bourne or K shell.
# Comment out those lines which you don’t want to be executed,
# e.g., if demo schemas are not required, comment out the
# corresponding line.

ORACLE_SID=${INSTANCE_1}
export ORACLE_SID

# Create the directory structure for Oracle logs


mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/create
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/bdump
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/cdump
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/udump

# Add this entry in the oratab: SEA1:${ORACLE_HOME}:Y


${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDB.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDBFiles.sql


${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDBCatalog.sql


${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/JServer.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/ordinst.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/spatial.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/cwmlite.sql
#${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/demoSchemas.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateClustDBViews.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/postDBCreation.sql
exit

16. Check the permissions for runall.sh. It requires execute permission for the user.

17. Run the command to create the starter database:


$ runall.sh

Using VERITAS Scripts to Create Database on Cluster File System
When creating a starter database in a cluster file system, manually create the Oracle
database using the scripts in the directory /opt/VRTSvcs/rac/db_scripts/fs. A
README file in that directory guides you through the following procedure, which assumes
you use sh or ksh. If you use another shell, make sure the syntax is appropriate.

1. Create a disk group on shared storage in which to create the database. For example:
# vxdg init racfs_db c2t2d1s1

2. Log in as user oracle.

3. Create a directory and copy the contents of /opt/VRTSvcs/rac/db_scripts/fs
to it. Don’t edit the original files in the original directory because creating the database
replaces the variables in the scripts. For example:
$ mkdir db_cr_fs
$ cp /opt/VRTSvcs/rac/db_scripts/fs/* db_cr_fs

4. Read the file README that is copied with the database creation scripts. This procedure
is described in the README.

5. Edit the file conf, and set all the variables as they apply to your environment:
$ vi conf

6. Log in as root user.


7. As root user, source the conf file:


# . conf

8. Run the script prep_fs. This script creates and mounts a file system with the
required options and settings based on the contents of the conf file.

9. Log in as user oracle.

10. As user oracle, source the conf file:


$ . conf

11. Set the ORACLE_BASE and ORACLE_HOME environment variables.

12. Run the convert command. This command uses the values of the environment
variables set in the previous step to modify the database creation scripts. Enter:
$ convert

13. Examine the script runall.sh and use comment symbols to prevent the use of any
database options you don’t want:
$ vi runall.sh
The following example shows the runall.sh script edited such that demo schemas
are not created:
# This is a sample database creation script.
# It is assumed that the script is run from Bourne or K shell.
# Comment out those lines which you don’t want to be executed,
# e.g., if demo schemas are not required, comment out the
# corresponding line.

ORACLE_SID=${INSTANCE_1}
export ORACLE_SID

# Create the directory structure for Oracle logs


mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/create
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/bdump
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/cdump
mkdir -p ${ORACLE_BASE}/admin/${DB_NAME}/udump

# Add this entry in the oratab: SEA1:${ORACLE_HOME}:Y


${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDB.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDBFiles.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateDBCatalog.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/JServer.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/ordinst.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/spatial.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/cwmlite.sql


#${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/demoSchemas.sql


${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/CreateClustDBViews.sql
${ORACLE_HOME}/bin/sqlplus /nolog @${SCRIPT_DIR}/postDBCreation.sql
exit

14. Check the permissions for runall.sh. It requires execute permission for the user.

15. Run the command to create the starter database:


$ runall.sh

Index

Symbols
.rhosts
  editing to enable rsh 25, 56, 60, 62, 64, 67
  editing to remove rsh permissions 69
/etc/default/vxdg 52
/opt/VRTSvcs/bin/chk_dbac_pkgs 39
/sbin/lmxconfig 124
/sbin/vcsmmconfig, starting VCSMM 98

Numerics
9iinstall script 56, 59
9iP2install script 61
9iP3install script 61

A
Activation mode, setting 53, 135, 137
Agents
  CFSMount 120
  CVMCluster 116
  CVMVolDg 118
  Oracle. See Oracle enterprise agent
Attributes
  CFSMount agent 120
  CVMCluster agent 116
  CVMVolDg agent 119
  Oracle agent, setting as local 73
  Sqlnet agent, setting as local 73

C
CFS (Cluster File System)
  extension of VxFS 9
  used with ODM 9
CFSMount agent 120
  definition 118
  sample configuration 121
  type definition 119
CFSTypes.cf file 120
Cluster Manager (CM) 3
Cluster membership 3, 12
Commands
  /sbin/lltstat -p (listing ports in use) 123
  convert 138
  format (verify disks) 103
  lmxconfig -c (configure LMX) 124
  lmxconfig -U (unconfigure LMX) 124
  vxassist 53
  vxdctl enable (scan disks) 103
  vxdg list (disk group information) 52
  vxedit (set shared volume mode) 135
  vxfenclearpre 107
Communications
  data stack illustration 5
  LLT and GAB overview 6
  provided by GAB 7
Configuration files
  LMX tunable parameters 123
  sample main.cf file 109
  VXFEN tunable parameters 125
Connectivity policy of disk groups 52
Contexts
  also known as minors 123
  LMX tunable parameter 123
convert command 138
Coordinator disks
  concept of 14
  description 81
  setting up 44
cr_vols script 138
CVM (Cluster Volume Manager)
  architecture 8
  extension of VxVM 8
CVM service group
  configuring in main.cf 73
  created after installation 47
CVMCluster agent
  description 115
  sample configuration 117
  type definition 116
CVMTypes.cf file 116, 119
CVMVolDg agent
  description 118
  type definition 119

D
Data corruption
  preventing with I/O fencing 13
  system panics to prevent 105
Database
  using dbca to create 133
  using scripts to create 137
Dependencies among service groups 72
Disk group
  creating for SRVM 53
  displaying information about 52
  general guidelines 52
  importing 53, 135, 137
  setting activation mode 53, 135, 137
  setting the connectivity policy 52

E
Ejected systems
  error messages displayed 131
  losing access to shared storage 81
  recovering from ejection 105
EMC Symmetrix 8000 series storage
  PR flag requirement 41
Enterprise agent for Oracle
  See Oracle enterprise agent
Environment variables
  MANPATH 24
  PATH variable 24
Error messages
  LMX messages 127
  running vxfenclearpre command 107
  VXFEN messages 130
  VxVM, related to I/O fencing 130
  when node is ejected 131

F
Fencing. See I/O fencing
Files
  .rhosts 25
  /etc/default/vxdg 52
Flowchart, RAC installation overview 20
format command 103

G
GAB
  part of communications package 5
  port memberships 46
getcomms (troubleshooting script) 97
getdbac (troubleshooting script) 97

H
hagetcf (troubleshooting script) 97
Hitachi Data Systems
  System Mode 186 must be set 41
Hitachi Data Systems storage
  99xx Series supported 41

I
I/O fencing
  components 14
  description 81
  overview 12
  scenarios for I/O fencing 82
  starting 45
  stopping 91
Importing disk groups 53, 135, 137
Inter-Process Communication (IPC) 3

K
Kernel driver parameters 123
Keys
  registration keys, formatting of 85
  removing registration keys 103

L
License keys
  adding during installation 26
  obtaining xiv
Listener process
  configuring as resource in main.cf 74
  shown in configuration 11
LLT 6
LMX
  configuring with lmxconfig -c 124
  error messages 127
  tunable driver parameters 123
  unconfiguring 124
Local, defining attributes as 78
Log files
  location of Oracle agent log files 79
  location of VCS log files 79
LUNs
  using for coordinator disks 44
  verifying serial numbers for paths to 41

M
main.cf
  example 109
  example after installation 48
MANPATH environment variable 24
Manual pages, setting MANPATH 24
Minors
  also known as contexts 123
  appearing in LMX error messages 101
  increasing maximum number of 124

O
ODM (Oracle Disk Manager)
  described 9
  linking Oracle9i to VERITAS libraries 69
Oracle enterprise agent
  configuring for RAC 73
  documentation 76
  location of log files 79
Oracle instance 2
Oracle service group
  configuring in main.cf 76
  dependencies among resources 72
Oracle9i
  confirming parallel flag 71
  installation tasks 51
  installing Release 1 locally 58
  installing Release 1 on shared disk 54
  installing Release 1 Patch 61
  installing Release 2 locally 66
  installing Release 2 on shared disks 63
  prerequisites for installing 51
  runInstaller utility 60, 62

P
Packages
  for CVM and CFS installation 34
  for VCS installation 32
  removing 91
  verifying installation of 39
Parallel attribute, setting for groups 73
Parallel flag, confirming in Oracle 71
Parameters
  LMX tunable driver parameters 123
  shared memory parameter 51
  vxfen tunable driver parameters 125
Patches
  required for installation 22
  verifying installation of 39
PATH environment variable, setting 24
Persistent reservations 41, 81
Ports, GAB
  displaying 46
  membership, list of 46
  removing membership for 90
Ports, tunable kernel parameter 123
prep_fs script 140

R
RAC
  architecture 1
  defined 1
  installation flowchart 20
Registration key
  displaying with vxfenadm 85
  formatting of 85
Registrations
  for I/O fencing 81
  key formatting 85
Reservations
  description 81
  description and use 13
  SCSI-3 persistent (PR) 18
rsh permissions
  for inter-system communication 25
  removing 69
runall.sh script 138
runInstaller, Oracle9i utility 60, 62

S
Scripts
  9iinstall 56, 59
  9iP2install, 9iP3install 61
  cr_vols 138
  prep_fs 140
  runall.sh 138
  using to create Database 137
SCSI-3 persistent reservations
  described 13
  EMC Symmetrix support 41
  requirement for I/O fencing 40
  verifying EMC Symmetrix support 41
  verifying that storage supports 40, 51
Service groups
  CVM 72
  dependencies among 72
Split brain
  described 12
  removing risks associated with 13
Sqlnet agent, setting local attributes 73
Sqlnet resource, configuring in main.cf 74
srvconfig (Oracle SRVM) component 53, 57
SRVM, creating disk group for 53
Storage
  setting up shared in cluster 18
  supported for DBE/AC 41
  testing for I/O fencing 42

T
Tablespaces
  table of tablespace names and sizes 134
Troubleshooting scripts 97

U
Uninstalling DBE/AC for Oracle9i RAC 92
uninstallvcs utility 92

V
VCS
  architecture 10
  location of log files 79
  uninstalling previous version 22
VCS membership module 12
VCSIPC
  copying library into place 65
  errors in trace/log files 101
VCSMM module 12
vcsmmconfig command 98
VERITAS ODM libraries, linking 69
vxassist command 53
vxdctl command 103
vxfen command 45, 91
vxfen driver
  kernel tunable parameters 125
  starting 45
  unconfiguring 91
VXFEN error messages 130
vxfenadm command
  options for administrators 85
  using -i to read LUN info 41
vxfenclearpre command
  error messages 107
  running 107
vxfentab file, created by rc script 45
vxfentsthdw utility, using to test disks 42
VxFS
  removing previous version 22
  Solaris 8 patches for 22
VxVM
  CVM is extension of 8
  removing 92
  Solaris 8 patches for 22
