July 2002
N08843F
Disclaimer
The information contained in this publication is subject to change without notice.
VERITAS Software Corporation makes no warranty of any kind with regard to this
manual, including, but not limited to, the implied warranties of merchantability and
fitness for a particular purpose. VERITAS Software Corporation shall not be liable for
errors contained herein or for incidental or consequential damages in connection with the
furnishing, performance, or use of this manual.
Copyright
Copyright © 2002 VERITAS Software Corporation. All rights reserved. VERITAS,
VERITAS Software, the VERITAS logo, and all other VERITAS product names and slogans
are trademarks of VERITAS Software Corporation in the USA and/or other countries.
Other product names and/or slogans mentioned herein may be trademarks or registered
trademarks of their respective companies.
VERITAS Software Corporation
350 Ellis St.
Mountain View, CA 94043
Phone 650–527–8000
Fax 650–527–2908
www.veritas.com
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
How This Guide is Organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
VERITAS Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS Volume Manager Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS File System Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
VERITAS Cluster Server Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
VERITAS Cluster Server Oracle Enterprise Agent . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Man Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Oracle Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Obtaining License Keys for DBE/AC for Oracle9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Using the VERITAS vLicense™ Web Site to Obtain a License Key . . . . . . . . . . xiv
Faxing the License Key Request Form to Obtain a License Key . . . . . . . . . . . . xiv
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Traffic Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Heartbeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Group Membership Services/Atomic Broadcast (GAB) . . . . . . . . . . . . . . . . . . . . . . . 6
Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Cluster Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Cluster Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
CVM Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Cluster File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
CFS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
CFS Usage in DBE/AC for RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Oracle Disk Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
ODM Clustering Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
VERITAS Cluster Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
VCS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
DBE/AC Service Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
DBE/AC OSD Layer Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Inter-Process Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Understanding Split Brain and the Need for I/O Fencing . . . . . . . . . . . . . . . . . . . . 13
SCSI-3 Persistent Reservations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
I/O Fencing Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Data Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
I/O Fencing Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
iv VERITAS Database Edition/AC for Oracle9i RAC Installation and Configuration Guide
Phase One: Setting up and Configuring the Hardware . . . . . . . . . . . . . . . . . . . . 18
Phase Two: Installing DBE/AC and Configuring its Components . . . . . . . . . . 19
Phase Three: Installing Oracle9i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Phase Four: Creating the Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Phase Five: Setting up VCS to Manage RAC Resources . . . . . . . . . . . . . . . . . . . . 20
Installation and Configuration Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Prerequisites for Installing Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Preparing for Using the installDBAC Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
General Information Requested by Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Web-based ClusterManager Information Requested by Script . . . . . . . . . . . . . . 23
SMTP/SNMP (Optional) Requested by Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Installing DBE/AC Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting the PATH Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting the MANPATH Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Setting Up System-to-System Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Mounting the VERITAS DBE/AC for Oracle9i RAC CD . . . . . . . . . . . . . . . . . . . . . . . 25
Running the VERITAS DBE/AC Install Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Verifying Systems for Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Configuring the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Configuring Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Configuring SMTP Email Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Configuring SNMP Trap Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Installing VCS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Configuring VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Starting VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Installing Base CVM and CFS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Configuring CVM and CFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Completing Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Verifying All Prerequisite Packages Are Installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Configuring VxVM Using vxinstall and Rebooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Verifying Storage Supports SCSI-3 Persistent Reservations . . . . . . . . . . . . . . . . . . . . . . 40
For EMC Symmetrix 8000 Series Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
For Hitachi Data Systems 99xx Series Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Verifying Shared Storage Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Using the vxfentsthdw Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Setting Up Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Requirements for Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Configuring a Disk Group Containing Coordinator Disks . . . . . . . . . . . . . . . . . . . . 44
Starting I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
The Contents of the /etc/vxfentab File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Replacing Failed Coordinator Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Running gabconfig -a After Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Example VCS Configuration File After Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Example main.cf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Removing Temporary rsh Access Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Linking VERITAS ODM Libraries to Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Creating Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Unconfiguring the CFS Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Unconfiguring the I/O Fencing Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Removing Optional CFS and CVM Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Removing Required CFS and CVM Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Remove VxVM Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Uninstalling VCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Removing License Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Remove Other Configuration Files (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Chapter 7. Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Running Scripts for Engineering Support Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
getdbac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
getcomms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
hagetcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Troubleshooting Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Error When Starting an Oracle Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Missing Dialog Box During Installation of Oracle9i Release 1 . . . . . . . . . . . . . . . . . 98
Missing Dialog Box During Installation of Oracle9i Release 2 . . . . . . . . . . . . . . . . . 99
Instance Numbers Must be Unique (Error Code 205) . . . . . . . . . . . . . . . . . . . . . . . . 99
ORACLE_SID Must be Unique (Error Code 304) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Oracle Log Files Show Shutdown Called Even When Not Shutdown Manually 100
File System Configured Incorrectly for ODM Shuts Down Oracle . . . . . . . . . . . . 100
VCSIPC Errors in Oracle Trace/Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Shared Disk Group Cannot be Imported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
vxfentsthdw Fails When SCSI TEST UNIT READY Command Fails . . . . . . . . . . 102
CVMVolDg Does Not Go Online Even Though CVMCluster is Online . . . . . . . . 102
Restoring Communication Between Host and Disks After Cable Disconnection 102
Adding a System to an Existing Two-Node Cluster Causes Error . . . . . . . . . . . . 103
Removing Existing Keys From Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
System Panics to Prevent Potential Data Corruption . . . . . . . . . . . . . . . . . . . . . . . 105
How vxfen Driver Checks for Pre-existing Split Brain Condition . . . . . . . . . . 105
Case 1: System 2 Up, System 1 Ejected (Actual Potential Split Brain) . . . . . . . 106
Case 2: System 2 Down, System 1 Ejected (Apparent Potential Split Brain) . . 106
Using vxfenclearpre Command to Clear Keys After Split Brain . . . . . . . . . . . . . . . 107
VxVM Errors Related to I/O Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
VXFEN Driver Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
VXFEN Driver Informational Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Informational Messages When Node is Ejected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Preface
The VERITAS Database Edition/Advanced Cluster™ for Oracle9i RAC software is an
integrated set of software products. It enables administrators of Oracle Real Application
Clusters (RAC) to operate a database in an environment of cluster systems running
VERITAS Cluster Server™ (VCS) and the cluster features of VERITAS Volume Manager™
and VERITAS File System™, also known as CVM and CFS, respectively.
The VERITAS Database Edition/AC for Oracle9i RAC Installation and Configuration Guide is
intended for system administrators responsible for configuring and maintaining Oracle
Real Application Clusters running on a VCS cluster with VxVM disk management.
This guide assumes that the administrator has a:
◆ Basic understanding of system and database administration
◆ Working knowledge of the Solaris operating system
◆ Working knowledge of Oracle databases
VERITAS Documentation
The following documents provide important information on the software components
that support CVM, VxFS, and VCS. They are provided as PDF files on the VERITAS
Storage Solutions 3.5 CDs.
Man Pages
See “Setting the MANPATH Variable” on page 24 to enable access to manual pages.
Oracle Documentation
The following documents, not shipped with VERITAS Database Edition/AC for Oracle9i
RAC, provide necessary related information:
◆ Oracle9i Installation Guide
◆ Oracle9i Parallel Server Setup and Configuration Guide
◆ Notes for the latest Oracle9i Data Server patch set
Obtaining License Keys for DBE/AC for Oracle9i RAC
You can obtain your license key most efficiently using the VERITAS vLicense web site.
The License Key Request Form (LKRF) has all the information needed to establish a User
Account on vLicense and generate your license key. The LKRF is a one-page insert
included with the CD in your product package. You must have this form to obtain a
software license key for your VERITAS product.
Note Do not discard the License Key Request Form. If you have lost or do not have the form
for any reason, email license@veritas.com.
The License Key Request Form contains information unique to your VERITAS software
purchase. To obtain your software license key, you need the following information shown
on the form:
◆ Your VERITAS customer number
◆ Your order number
◆ Your serial number
Follow the appropriate instructions on the vLicense web site to obtain your license key
depending on whether you are a new or previous user of vLicense:
Conventions
Typeface   Usage
courier    computer output, files, attribute names, device names, and directories
italic     variables

Symbol     Usage
%          C shell prompt
Technical Support
For assistance with any VERITAS product, contact Technical Support:
U.S. and Canada: call 1-800-342-0652.
Europe, the Middle East, or Asia: visit the Technical Support Web site at
http://support.veritas.com for a list of each country’s contact information.
Software updates, Tech Notes, product alerts, and hardware compatibility lists, are also
available from http://support.veritas.com.
To learn more about VERITAS and its products, visit http://support.veritas.com.
Overview: DBE/AC for Oracle9i RAC
What is RAC?
Real Application Clusters (RAC) is a parallel database environment that takes advantage
of the processing power of multiple, interconnected computers. A cluster comprises two
or more computers, also known as nodes or servers. In RAC environments, all nodes
concurrently run Oracle instances and execute transactions against the same database.
RAC coordinates each node’s access to the shared data to provide consistency and
integrity. Each node adds its processing power to the cluster as a whole and can increase
overall throughput or performance.
RAC serves as an important component of a robust high availability solution. A properly
configured Real Application Clusters environment can tolerate failures with minimal
downtime and interruption to users. With clients accessing the same database on multiple
nodes, failure of a node does not completely interrupt access, as clients accessing the
surviving nodes continue to operate. Clients attached to the failed node simply reconnect
to a surviving node and resume access. Recovery after failure in a RAC environment is far
quicker than with a failover database, because another instance is already up and running.
Recovery is simply a matter of applying outstanding redo log entries from the failed node.
RAC Architecture
At the highest level, RAC is multiple Oracle instances, accessing a single Oracle database
and carrying out simultaneous transactions. An Oracle database is the physical data
stored in tablespaces on disk. An Oracle instance represents the software processes
necessary to access and manipulate the database. In traditional environments, only one
instance accesses a database at a specific time. With Oracle RAC, multiple instances
communicate to coordinate access to a physical database, greatly enhancing scalability
and availability.
[Diagram: RAC architecture — OCI clients connect through listeners to the 9i RAC instance on each node; the instances communicate over a high-speed interconnect and, under VCS control, access the shared database (datafiles and control files, index and temp files, online redo logs, and archive logs) through ODM, CFS, and CVM.]
Oracle Instance
The Oracle instance consists of a set of processes and shared memory space that provide
access to the physical database. The instance consists of server processes acting on behalf
of clients to read data into shared memory as well as make modifications of it, and
includes background processes to write changed data out to disk.
In an Oracle9i RAC environment, multiple instances access the same database. This
requires significant coordination between the instances to keep each instance’s view of the
data consistent.
DBE/AC Overview
Database Edition/Advanced Cluster provides a complete I/O and communications stack
to support Oracle9i RAC. It also provides monitoring and management of instance
startup and shutdown. The following section describes the overall data and
communications flow of the DBE/AC stack.
[Diagram: DBE/AC data stack — on each server, the Oracle9i RAC instance processes (LGWR, ARCH, CKPT, DBWR, and Server) perform disk I/O to the redo log files, archive storage, and data and control files through ODM, CFS, and CVM.]
The diagram above details the overall data flow from an instance running on a server to
the shared storage. The various Oracle processes making up an instance (such as DB
Writer, Log Writer, Checkpoint, Archiver, and Server) read and write data to storage via
the I/O stack shown in the diagram. Oracle communicates via the Oracle Disk Manager
(ODM) interface to the VERITAS Cluster File System (CFS), which in turn accesses the
storage via the VERITAS Cluster Volume Manager (CVM). Each of these components in
the data stack is described in this chapter.
[Diagram: DBE/AC communications stack — on each node, the RAC instance, ODM, CFS, and the VCS core communicate with their peers through GAB and LLT, carrying cluster state, datafile management, and file system metadata traffic.]
The diagram above shows the data stack as well as the communications stack. Each of the
components in the data stack requires communications with its peer on other systems to
properly function. RAC instances must communicate to coordinate protection of data
blocks in the database. ODM processes must communicate to coordinate data file
protection and access across the cluster. CFS coordinates metadata updates for file
systems, and finally CVM must coordinate the status of logical volumes and distribution
of volume metadata across the cluster. VERITAS Cluster Server (VCS) controls starting
and stopping of components in the DBE/AC stack and provides monitoring and
notification on failure. VCS must communicate status of its resources on each node in the
cluster. For the entire system to work, each layer must reliably communicate.
The diagram also shows Low Latency Transport (LLT) and the Group Membership
Services/Atomic Broadcast (GAB), which make up the communications package central
to the operation of DBE/AC. During an operational steady state, the only significant
traffic through LLT and GAB is due to Lock Management and Cache Fusion, while the
traffic for the other data is relatively sparse.
DBE/AC Communications
DBE/AC communications consist of LLT and GAB.
IPC
RAC Inter-Process Communications (IPC) uses the VCSIPC shared library for interprocess
communication. In turn, VCSIPC uses LMX, an LLT multiplexer, to provide fast data
transfer between Oracle processes on different cluster nodes. IPC leverages all of LLT’s
features.
Traffic Distribution
LLT distributes (load balances) inter-node communication across all available private
network links. All cluster communications are evenly distributed across as many as eight
network links for performance and fault resilience. On failure of a link, traffic is redirected
to remaining links.
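The distribution scheme can be pictured with a small model. The following Python sketch is illustrative only — it is not LLT's implementation, and the link names are hypothetical: messages rotate across healthy links, and a failed link's traffic moves to the survivors.

```python
# Illustrative sketch (not LLT's actual code): distributing messages
# across available private links and redirecting traffic on link failure.

class LinkBalancer:
    def __init__(self, links):
        self.links = list(links)      # private network links (LLT supports up to 8)
        self.up = set(links)          # links currently considered healthy
        self.next = 0

    def send(self, message):
        """Pick the next healthy link round-robin and 'send' on it."""
        healthy = [l for l in self.links if l in self.up]
        if not healthy:
            raise RuntimeError("no private network links available")
        link = healthy[self.next % len(healthy)]
        self.next += 1
        return (link, message)

    def fail(self, link):
        """On link failure, remaining traffic goes to the surviving links."""
        self.up.discard(link)

balancer = LinkBalancer(["qfe0", "qfe1"])   # hypothetical interface names
assert balancer.send("m1")[0] == "qfe0"
assert balancer.send("m2")[0] == "qfe1"
balancer.fail("qfe1")
assert balancer.send("m3")[0] == "qfe0"     # redirected to the surviving link
```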
Heartbeat
LLT is responsible for sending and receiving heartbeat traffic over network links. This
heartbeat is used by the Group Membership Services function of GAB to determine
cluster membership.
Cluster Membership
A distributed system such as DBE/AC requires all nodes to be aware of which nodes are
currently participating in the cluster. Nodes can leave or join the cluster because of
shutting down, starting up, rebooting, powering off, or faulting.
Cluster membership is determined using LLT heartbeats. When systems no longer receive
heartbeats from a peer for a predetermined interval, then a protocol is initiated to exclude
the peer from the current membership. When systems start receiving heartbeats from a
peer that is not part of the membership, then a protocol is initiated for enabling it to join
the current membership.
The new membership is consistently delivered to all nodes and actions specific to each
module are initiated. For example, following a node fault, the Cluster Volume Manager
must initiate volume recovery, and Cluster File System must do a fast parallel file system
check.
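The membership logic described above can be modeled in a few lines. This Python sketch is purely illustrative — the 16-second peer timeout is an assumed value for the example, not LLT's actual setting: a peer silent beyond the timeout is excluded, and a peer that resumes heartbeating rejoins.

```python
# Illustrative model (not the actual LLT/GAB code): deriving cluster
# membership from per-peer heartbeat arrival times.

PEER_TIMEOUT = 16.0   # assumed value for this example only

class Membership:
    def __init__(self):
        self.last_heartbeat = {}   # node -> time of last heartbeat seen
        self.members = set()

    def heartbeat(self, node, now):
        self.last_heartbeat[node] = now
        if node not in self.members:
            self.members.add(node)         # join protocol would run here

    def tick(self, now):
        for node in list(self.members):
            if now - self.last_heartbeat[node] > PEER_TIMEOUT:
                self.members.remove(node)  # exclusion protocol would run here

m = Membership()
m.heartbeat("sys1", 0.0)
m.heartbeat("sys2", 0.0)
m.tick(10.0)
assert m.members == {"sys1", "sys2"}
m.heartbeat("sys1", 10.0)
m.tick(20.0)                # sys2 silent for 20s > timeout: excluded
assert m.members == {"sys1"}
```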
Cluster Communications
GAB’s second function is to provide reliable cluster communications, used by many
DBE/AC modules. GAB provides guaranteed delivery of point-to-point messages and
broadcast messages to all nodes. Point-to-point messaging uses a send and acknowledgment. Atomic
Broadcast ensures all systems within the cluster receive all messages. If a failure occurs
while transmitting a broadcast message, GAB’s atomicity ensures that, upon recovery, all
systems have the same information.
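A toy model can make the atomicity guarantee concrete. The following Python sketch is not GAB itself; it only shows the property just described — a broadcast interrupted by a sender failure is completed during recovery, so all members end with the same information.

```python
# Toy model (not GAB) of the atomicity guarantee: a broadcast either reaches
# every member, or recovery after a mid-broadcast failure brings all members
# to the same state.

class AtomicBroadcast:
    def __init__(self, members):
        self.delivered = {m: [] for m in members}   # per-member message log

    def broadcast(self, msg, fail_after=None):
        """Deliver msg to members; optionally simulate sender failure."""
        pending = list(self.delivered)
        got = []
        for i, member in enumerate(pending):
            if fail_after is not None and i >= fail_after:
                break                       # sender died mid-broadcast
            self.delivered[member].append(msg)
            got.append(member)
        if got and len(got) < len(pending):
            # recovery: a surviving receiver re-delivers so that every
            # member ends up with the same information
            for member in pending:
                if member not in got:
                    self.delivered[member].append(msg)

bus = AtomicBroadcast(["sys1", "sys2", "sys3"])
bus.broadcast("cfg-update", fail_after=1)   # fails after reaching sys1 only
assert all(log == ["cfg-update"] for log in bus.delivered.values())
```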
[Diagram: GAB messaging — servers exchange cluster membership/state, datafile management, file system metadata, and volume management messages over redundant network interfaces (NICs).]

Cluster Volume Manager
CVM Architecture
Cluster Volume Manager is designed with a master/slave architecture. One node in the
cluster acts as the configuration master for logical volume management, and all others are
slaves. Any node can take over as master if the existing master fails. The CVM master is
established on a per cluster basis.
Since CVM is an extension of VxVM, it operates in a very similar fashion. The volume
manager configuration daemon, vxconfigd, maintains the configuration of logical
volumes. Any changes to volumes are handled by vxconfigd, which updates the
operating system at the kernel level when the new volume state is determined. For
example, if a mirror of a volume fails, it is detached from the volume and the error is
passed off to vxconfigd, which then determines the proper course of action, updates the
new volume layout, and informs the kernel of a new volume layout. CVM extends this
behavior across multiple nodes in a master-slave architecture. Changes to a volume are
propagated to the master vxconfigd. The vxconfigd process on the master pushes these
changes out to slave vxconfigd processes, each of which in turn updates the local kernel.
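The master/slave propagation described above can be sketched as follows. This Python model is illustrative only — node names and volume states are hypothetical: a volume change reaches the master, which pushes the new configuration to each slave, and every node then updates its local kernel.

```python
# Simplified sketch (not vxconfigd's real protocol) of CVM's master/slave
# configuration flow.

class Node:
    def __init__(self, name):
        self.name = name
        self.kernel_config = {}

    def update_kernel(self, config):
        self.kernel_config = dict(config)   # local kernel sees the new layout

class CvmCluster:
    def __init__(self, nodes):
        self.master, *self.slaves = nodes   # one master, the rest are slaves
        self.config = {}

    def change_volume(self, volume, layout):
        # a change on any node is propagated to the master vxconfigd first
        self.config[volume] = layout
        self.master.update_kernel(self.config)
        for slave in self.slaves:           # master pushes to slave vxconfigd
            slave.update_kernel(self.config)

nodes = [Node("sys1"), Node("sys2")]
cluster = CvmCluster(nodes)
cluster.change_volume("oradata", "mirror-detached")
assert nodes[0].kernel_config == nodes[1].kernel_config == {"oradata": "mirror-detached"}
```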
CVM does not impose any write locking between nodes. Each node is free to update any
area of the storage. All data integrity is the responsibility of the upper application. From
an application perspective, logical volumes are accessed identically on a stand-alone
system as on a CVM system.
CVM imposes a “Uniform Shared Storage” model. All systems must be connected to the
same disk sets for a given disk group. Any system unable to see the entire set of physical
disks seen by other nodes for a given disk group cannot import the disk group. If a node loses
contact with a specific disk, it is excluded from participating in the use of that disk.
CVM uses GAB and LLT for transport of all its configuration data.
Cluster File System
CFS Architecture
CFS is designed with a master/slave architecture. Though any node can initiate an
operation to create, delete, or resize data, the actual operation is carried out by the master
node. Since CFS is an extension of VxFS, it operates in a very similar fashion. As with
VxFS, it caches metadata and data in memory (typically called buffer cache or vnode
cache). A distributed locking mechanism, called the Global Lock Manager (GLM), is used
for metadata and cache coherency across the multiple nodes. GLM provides a way to
ensure all nodes have a consistent view of the file system. When any node wishes to read
data, it requests a shared lock. If another node wishes to write to the same area of the file
system, it must request an exclusive lock. The GLM revokes all shared locks before
granting the exclusive lock and informs the reading nodes that their data is no longer
valid. If a node requests a shared lock for data that is exclusively held by another node, in
addition to revoking the exclusive lock, GLM sends the data over the cluster interconnects
to the requesting node.
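The shared/exclusive locking behavior can be illustrated with a small model. This Python sketch is not the real GLM; it only demonstrates the rules just described — an exclusive request revokes and invalidates shared holders, and a shared request against an exclusive holder ships the holder's cached data to the requester.

```python
# Illustrative model (not the real GLM) of CFS lock coherency.

class GlmLock:
    def __init__(self):
        self.shared = set()     # nodes holding a shared (read) lock
        self.exclusive = None   # node holding the exclusive (write) lock
        self.cache = {}         # node -> cached view of the data

    def acquire_shared(self, node, data):
        if self.exclusive and self.exclusive != node:
            # revoke the exclusive lock; the holder ships its (newest)
            # data over the interconnect to the requesting node
            data = self.cache[self.exclusive]
            self.exclusive = None
        self.shared.add(node)
        self.cache[node] = data
        return data

    def acquire_exclusive(self, node, data):
        # revoke every shared lock; readers' cached data is now invalid
        for reader in self.shared:
            self.cache.pop(reader, None)
        self.shared.clear()
        self.exclusive = node
        self.cache[node] = data

lock = GlmLock()
lock.acquire_shared("sys1", "v1")
lock.acquire_exclusive("sys2", "v2")     # sys1's cached "v1" is invalidated
assert "sys1" not in lock.cache
assert lock.acquire_shared("sys1", None) == "v2"   # data shipped from sys2
```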
Oracle Disk Manager

ODM provides easy-to-manage file systems, including support for resizing data files
while in use. ODM improves manageability by allowing Oracle to directly invoke
operations such as resizing data files while the database is online.
VCS Architecture
VCS communicates the status of resources running on each system to all systems in the
cluster. The High Availability Daemon, or “HAD,” is the main VCS daemon running on
each system. HAD collects all information about resources running on the local system,
forwarding it to all other systems in the cluster. It also receives information from all other
cluster members to update its own view of the cluster.
Each type of resource supported in a cluster is associated with an agent. An agent is an
installed program designed to control a particular resource type. Each system runs
necessary agents to monitor and manage resources configured to run on that node. The
agents communicate with HAD on the node. HAD distributes its view of resources on the
local node to other nodes in the cluster using GAB and LLT.
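The flow of status information can be sketched schematically. The following Python model is illustrative, not the real VCS implementation — in practice the forwarding happens through GAB and LLT rather than direct calls: an agent reports a resource's status to its local HAD, which distributes it so every node holds the same view of the cluster.

```python
# Schematic sketch (not real VCS code) of agent -> HAD -> cluster status flow.

class Had:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.view = {}                 # (node, resource) -> status

    def agent_report(self, resource, status):
        """An agent monitoring a local resource reports to its HAD."""
        self.view[(self.name, resource)] = status
        for peer in self.cluster:      # forwarded to all peers (via GAB/LLT)
            if peer is not self:
                peer.view[(self.name, resource)] = status

cluster = []
sys1, sys2 = Had("sys1", cluster), Had("sys2", cluster)
cluster.extend([sys1, sys2])
sys1.agent_report("oracle-db", "ONLINE")
assert sys2.view[("sys1", "oracle-db")] == "ONLINE"   # same view everywhere
```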
10 VERITAS Database Edition/AC for Oracle9i RAC Installation and Configuration Guide
DBE/AC Service Groups

[Diagram: Oracle service group — Oracle database, IP, CFSMount, CFSfsckd, and CVMVolDg resources.]

DBE/AC OSD Layer Support
Cluster Membership
Oracle provides an API for providing membership information to RAC. This is known as
skgxn (system kernel generic interface node membership). Oracle RAC expects to make
specific skgxn calls and get membership information required. In DBE/AC this is
implemented as a library linked to Oracle when 9i RAC is installed. The skgxn library in
turn makes ioctl calls to a kernel module for membership information.
This module is known as VCSMM (VCS membership module). Oracle uses the linked-in
skgxn library to communicate with VCSMM, which obtains membership information
from GAB.
Inter-Process Communications
In order to coordinate access to a single database by multiple instances, Oracle uses
extensive communications between nodes and instances. Oracle uses Inter-Process
Communications (IPC) for locking traffic and Cache Fusion.
DBE/AC uses LLT to support IPC in a cluster and leverages its high performance and
fault resilient capabilities.
Oracle has a defined API for Inter-Process Communication that isolates Oracle from the
underlying transport mechanism. Oracle communicates between processes on instances,
and does not have to know how the data is moved between systems. The Oracle API for
IPC is referred to as System Kernel Generic Interface Inter-Process Communications
(skgxp).
DBE/AC provides a library linked with Oracle at install time to implement the skgxp
functionality. This library communicates with the LLT Multiplexer (LMX) via ioctl calls.
The LMX module is a kernel module designed to receive communications from the skgxp
library and pass them to the correct process on the correct instance on other nodes. The
module “multiplexes” communications between multiple related processes into a single
multithreaded LLT port between systems. LMX leverages all features of LLT, including
load balancing and fault resilience.
I/O Fencing
I/O fencing is a new feature of DBE/AC, designed to guarantee data integrity even when
faulty cluster communications cause a split-brain condition.
ejects another node from the membership. Nodes not in the membership cannot issue this
command. Once a node is ejected, it cannot in turn eject another. This means ejecting is
final and “atomic”.
In the DBE/AC implementation, a node registers the same key for all paths to the device.
This means that a single preempt and abort command ejects a node from all paths to the
storage device.
There are several important concepts to summarize here:
◆ Only a registered node can eject another.
◆ Since a node registers the same key down each path, ejecting a single key blocks all
I/O paths from the node.
◆ Once a node is ejected, it has no key registered and it cannot eject others.
The SCSI-3 PR specification simply describes the method to control access to disks with
the registration and reservation mechanism. The method to determine who can register
with a disk and when a registered member should eject another node is implementation
specific. The following paragraphs describe DBE/AC I/O fencing concepts and
implementation.
Data Disks
Data disks are standard disk devices used for data storage. These can be physical disks or
RAID Logical Units (LUNs). These disks must support SCSI-3 PR. Data disks are
incorporated in standard VxVM/CVM disk groups. In operation, CVM is responsible for
fencing data disks on a disk group basis. Because VxVM handles I/O fencing, related
operations are automatic: disks added to a disk group are automatically fenced, as are
newly discovered paths to a device.
Coordinator Disks
Coordinator disks are special purpose disks in a DBE/AC environment. Coordinator
disks are three standard disks, or LUNs, that are set aside for use by I/O fencing during
cluster reconfiguration.
The coordinator disks act as a global lock device during a cluster reconfiguration. This
lock mechanism is used to determine who gets to fence off data drives from other nodes.
From a high level, a system must eject a peer from the coordinator disks before it can fence
the peer from the data drives. This concept of racing for control of the coordinator disks to
gain the capability to fence data disks is key to understanding the split brain prevention
capability of fencing.
Coordinator disks cannot be used for any other purpose in the DBE/AC configuration.
The user may not store data on these disks, or include the disks in a disk group used by
user data. The coordinator disks can be any three disks that support SCSI-3 PR. VERITAS
typically recommends the smallest possible LUNs for coordinator use.
Chapter 2. Installing the Component Packages
Installation Overview
The following few paragraphs introduce what’s involved in installing and configuring a
DBE/AC cluster with Oracle9i RAC, and describe a completed installation.
[Figure: Example DBE/AC hardware configuration. Clients connect over the LAN to Server A and Server B, which are linked by a private network; a SAN switch connects both servers to the disk arrays and the coordinator disks.]
Note Oracle9i Release 1 requires a special procedure prior to installing the software.
Carefully follow the procedure “Installing Oracle9i Release 1 Software” on page 54.
[Flowchart: On each system, run vxinstall (see the VxVM System Administrator's Guide), then reboot; repeat for each remaining system. Continued on next page.]
Installation and Configuration Flowchart
[Flowchart, continued: Create /etc/vxfendg on each system. Create shared disk groups and volumes for Oracle (see Chapter 3 and the VxVM System Administrator's Guide). Edit the Oracle service group (see Chapter 4 and the VCS Enterprise Agent for Oracle Installation and Configuration Guide).]
✔ Verify each local system has sufficient free disk space:
- /: 450 MB
- /tmp: 100 MB for the duration of the installation
- For the installation of Oracle9i, each local system requires approximately 4.6
gigabytes. For the installation of Oracle9i Release 1, a portion of the disk space,
approximately 2 GB, is required for copies of the Oracle9i installation disks.
✔ Remove any versions of VCS and unmount the file systems (VxFS) that currently run
on the systems where you intend to install VERITAS DBE/AC for Oracle9i RAC. The
VERITAS DBE/AC for Oracle9i RAC product requires the versions of VCS, VxVM, and
VxFS contained on the CD. Refer to the installation guide corresponding to each
product for uninstallation instructions. If you are running versions of VCS, VxVM, or
VxFS, uninstall VCS first and upgrade to release 3.5 using the installDBAC utility.
✔ Obtain and install the Sun SPARCstorage Array (SUNWsan) package on each node.
✔ Obtain and install the Solaris 8 patches required by VxVM on each node; they
include 108528-14, 108827-19, 109529-06, 110722-01, and 111413-06.
✔ Obtain and install the Solaris 8 patches required by VxFS on each node: 108528-02 and
108901-03.
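The patch checks above can be automated. The following sketch scans a `showrev -p` listing for the required VxVM patch IDs; a sample listing is embedded here, and on a real node you would capture the output of `showrev -p` instead:

```shell
# Sketch: scan a `showrev -p` listing for the required VxVM patch IDs.
# The listing below is a sample; on a node, use: listing=$(showrev -p)
required="108528 108827 109529 110722 111413"
listing='Patch: 108528-14 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu
Patch: 108827-19 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsr'
missing=""
for p in $required; do
  # Match "Patch: <id>-" so a different patch with a shared prefix cannot match.
  echo "$listing" | grep -q "Patch: $p-" || missing="$missing $p"
done
echo "missing patches:$missing"
```

In the sample listing only the first two patches are present, so the remaining three are reported as missing.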
Preparing for Using the installDBAC Script
Note The installation of VCS using the installDBAC installation utility is essentially the
same as that described in the VERITAS Cluster Server Installation Guide. The VCS
Installation Guide contains more extensive details about VCS installation.
Note For the root user, do not define paths to a cluster file system in the
LD_LIBRARY_PATH variable. For example, define ORACLE_HOME/lib in
LD_LIBRARY_PATH for the user oracle only.
The path defined to /opt/VRTSob/bin is optional; it is required only if you install the
optional package, VERITAS Enterprise Administrator.
Installing DBE/AC Packages
Note Remove the remote rsh access permissions when they are no longer required for
installation and disk verification. See “Removing Temporary rsh Access
Permissions” on page 69.
3. At the prompt, enter the names of the two systems for installation:
Enter the names of the systems on which VCS is to be installed
separated by spaces (example: system1 system2): galaxy nebula
Analyzing . . .
4. The utility begins by checking that the systems are ready for installation. On the
system where you are running the utility, the utility verifies that:
b. The required Solaris patches and the Solaris SUNWsan package are installed
Note You can ignore warnings about recommended optional patches that are not present
5. Before installing any packages, the utility checks whether the VERITAS license
package, VRTSvlic, is present on the installation system.
- If the VRTSvlic package is not present, the installation utility installs it.
- If the installed VRTSvlic package is not the current version, the utility prompts
you to upgrade to the current version. The utility exits if you decline to upgrade.
6. The utility repeats the checks performed in step 4 and step 5 on the second system.
7. The utility checks for a VERITAS DBE/AC license key on the first system. If it cannot
find a key, it prompts you to enter one:
You do not have a Database Edition for Advanced Cluster license
installed on galaxy.
8. When you are prompted to continue VERITAS Cluster Server installation, press
Return to accept the default (Y) or enter “Y.”
You can skip VERITAS Cluster server installation, if you need to
install only VERITAS Volume manager and VERITAS Filesystem
10. The VCS portion of the installation utility verifies the license status of each node.
VCS licensing verification:
11. When you are ready to start installation of VCS, enter “Y” when prompted:
Are you ready to start the Cluster installation now? (Y) Y
12. For each system, the VCS portion of the utility checks if any of the packages to be
installed are present. If any are found, the utility continues if there are no conflicts or
exits if packages must be manually uninstalled.
13. For each system, the utility checks for the necessary file system space.
14. For each system, the utility checks whether any DBE/AC processes are running. Any
running processes are stopped.
15. The script highlights the information required to proceed with installation of VCS:
To configure VCS the following is required:
Note When the installDBAC installation script shows a single response within
parentheses, it is the default response. Press Return to choose the default response.
16. After you enter the cluster ID number, the program discovers and lists all NICs on the
first system:
Discovering NICs on galaxy: ..... discovered hme0 qfe0 qfe1
qfe2 qfe3
Enter the NIC for the first private network heartbeat link on
galaxy: (hme0 qfe0 qfe1 qfe2 qfe3) qfe0
Enter the NIC for the second private network heartbeat link on
galaxy: (hme0 qfe1 qfe2 qfe3) qfe1
Are you using the same NICs for private heartbeat links on all
systems? (Y)
If you answer “N,” the program prompts for the NICs of each system.
17. The installation program asks you to verify the cluster information:
Cluster information verification:
18. The installation program describes the information required to configure Cluster
Manager (Web Console):
The following information is required to configure the Cluster
Manager:
19. In reply to the prompts that follow, confirm that you want to use the discovered
public NIC on the first system. Also indicate whether all systems use the same public
NIC. Press Return to choose the default responses.
Active NIC devices discovered on galaxy: hme0
Enter the NIC for the Cluster Manager (Web Console) to use on
galaxy: (hme0)
Is hme0 the public NIC used for all systems (Y)
21. The installation program asks you to verify the Cluster Manager information:
Cluster Manager (Web Console) verification:
NIC: hme0
IP: 10.180.88.199
Netmask: 255.0.0.0
22. The installation program describes the information required to configure the SMTP
notification feature of VCS:
The following information is required to configure SMTP
notification:
23. Respond to the prompts and provide information to configure SMTP notification.
Enter the domain based address of the SMTP server (example:
smtp.yourcompany.com) smtp.xyzstar.com
Enter the email address of the SMTP recipient ozzie@xyzstar.com
Enter the minimum severity of events for which mail should be
sent to ozzie@xyzstar.com: [I=Information, W=Warning, E=Error,
S=SevereError] w
Would you like to add another SMTP recipient? (N) y
Enter the email address of the SMTP recipient harriet@xyzstar.com
Enter the minimum severity of events for which mail should be
sent to harriet@xyzstar.com: [I=Information, W=Warning,
E=Error, S=SevereError] e
Would you like to add another SMTP recipient? (N)
25. The installation program describes the information required to configure the SNMP
notification feature of VCS:
The following information is required to configure SNMP
notification:
26. Respond to the prompts and provide information to configure SNMP notification.
Enter the SNMP trap daemon port (162)
Please enter the SNMP console system name saturn
Enter the minimum severity of events for which SNMP traps should
be sent to saturn: [I=Information, W=Warning, E=Error,
S=SevereError] e
Would you like to add another SNMP console? (N) y
Please enter the SNMP console system name jupiter
Enter the minimum severity of events for which SNMP traps should
be sent to jupiter: [I=Information, W=Warning, E=Error,
S=SevereError] s
Would you like to add another SNMP console? (N)
28. After you have verified that the information you have entered is correct, the
installation program begins installing the packages on the first system:
Installing VCS on galaxy:
Configuring VCS
29. The installation program continues by creating configuration files and copying them
to each system:
Configuring DBED/AC ..................................... Done
Copying DBED/AC configuration files to galaxy ........... Done
Copying DBED/AC configuration files to nebula ........... Done
Starting VCS
30. You can now start VCS and its components on each system:
Do you want to start the cluster components now? (Y)
.
.
.
31. When the utility completes the installation of VCS, it reports on:
- The creation of backup configuration files (named with the extension
“init.name_of_cluster”). For example, the file /etc/llttab in the installation
example has a backup file named /etc/llttab.init.vcs_racluster.
- The URL and the login information for Cluster Manager (Web Console), if you
chose to configure it. Typically this resembles:
http://10.180.88.199:8181/vcs
You can access the Web Console using the User Name: “admin” and the
Password: “password”. Use the /opt/VRTSvcs/bin/hauser command to add
new users.
- The location of a VCS installation report, which is named:
/var/VRTSvcs/installvcsReport.name_of_cluster
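The backup-naming convention described above can be sketched as follows, using the values from the installation example (purely illustrative):

```shell
# Sketch of the backup-name convention: original path + ".init." + cluster name.
cluster=vcs_racluster
orig=/etc/llttab
backup="${orig}.init.${cluster}"
echo "$backup"
```

For /etc/llttab in a cluster named vcs_racluster, this produces the backup name /etc/llttab.init.vcs_racluster, matching the report described above.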
33. After the script installs and starts VCS on each cluster system, it prompts you about
installing the base VERITAS DBE/AC packages, which include those for the Cluster
Volume Manager (CVM) and Cluster File System (CFS) components.
The installDBAC script installs required packages followed by the optional
packages that you choose. The base packages are installed first on the first system
entered at the start of installation.
Installing on galaxy machine.
VRTSvxvm is required
VRTSvxfs is required
VRTSglm is required
VRTSgms is required
VRTSodm is required
VRTScavf is required
VRTSvmman is Optional
VRTSvmdoc is Optional
VRTSfsdoc is Optional
VRTSob is Optional
VRTSvmpro is Optional
VRTSfspro is Optional
VRTSobgui is Optional
Continue? [y,n,?,q] y
34. Installation of the packages listed proceeds when you answer “y.” First, the required
packages are installed. After each package is installed, press Return to install the next
required package.
Checking existing package installation ...
Installing VRTSvxvm.
Installation of VRTSvxvm................................Done
Installing VRTSvxfs.
.
.
.
35. When prompted about installing the optional packages, you can choose which
packages you want to install. For example:
.
.
.
Installing VRTSvmman.
system VRTSvmman VERITAS Volume Manager, Manual Pages
This package is optional. Install? [y,n,?,q] y
Installation of VRTSvmman................................Done
.
.
.
VRTSobgui
VRTSfspro
VRTSvmpro
VRTSob
VRTSfsdoc
VRTSvmdoc
VRTSvmman
VRTScavf
VRTSodm
VRTSgms
VRTSglm
VRTSvxfs
VRTSvxvm
VRTSvxvm is required
VRTSvxfs is required
VRTSglm is required
VRTSgms is required
VRTSodm is required
VRTScavf is required
VRTSvmman is Optional
VRTSvmdoc is Optional
VRTSfsdoc is Optional
VRTSob is Optional
VRTSvmpro is Optional
VRTSfspro is Optional
VRTSobgui is Optional
Continue? [y,n,?,q] y
38. The install script prompts you about configuring Cluster Volume Manager and
Cluster File System:
Successfully installed Foundation products on nebula
Would you like to configure Cluster Volume Manager and Cluster
File System [Y/N](Y) y
39. You can choose a different “timeout” value, if necessary, or press Return to accept the
default:
Enter the timeout value in seconds. This timeout will be
used by CVM in the protocol executed during cluster
reconfiguration. Choose the default value, if you do not wish
to enter any specific value.
40. When additional cluster information is displayed, “gab” is shown as the configured
transport protocol, meaning GAB is to provide communications between systems in
the cluster. For VERITAS DBE/AC for Oracle9i RAC, GAB is required.
------- Following is the summary of the information: ------
Cluster : vcs_racluster
Nodes : galaxy nebula
Transport : gab
-----------------------------------------------------------
Hit RETURN to continue.
41. The configuration of CVM begins. When CVM has been added, you receive a message
resembling:
You will now be prompted to enter whether you want to start
cvm on the systems in the cluster. If you choose to do so,
then attempt will be made to start the cvm on all the systems
in the cluster. It will not be started on the systems
which do not have cluster manager running.
========================================================
Cluster File System Configuration is in progress...
cfscluster: CFS Cluster Configured Successfully
Verifying All Prerequisite Packages Are Installed
Completing Installation
42. After reporting that the product is successfully installed, the utility tells you that you
need to reboot all systems in the cluster:
You must reboot all the system in the cluster.
Note Do not reboot the systems at this time. You must first run the vxinstall utility on
each system before rebooting.
Verifying Storage Supports SCSI-3 Persistent Reservations
Note Only supported storage devices can be used with I/O fencing of shared storage.
Currently, supported devices include: EMC Symmetrix 8000 Series or Hitachi Data
System Series 99xx. Other disks, even if they pass the following test procedure, are
not supported. Please see the VERITAS DBE/AC for Oracle9i RAC Release Notes for
additional information about supported shared storage.
The utility, which you can run from one system in the cluster, tests the storage by setting
SCSI-3 registrations on the disk you specify, verifying the registrations on the disk, and
removing the registrations from the disk. See the chapter on “I/O Fencing Scenarios” on
page 81 for information on I/O fencing and a description of how it works. Refer also to
the vxfenadm(1M) manual page.
Caution The utility vxfentsthdw overwrites and destroys existing data on the disks.
3. Enter the name of the disk you are checking. For each node, the disk may be known
by the same name, as in our example.
Enter the disk name to be checked for SCSI-3 PGR on node galaxy in
the format: /dev/rdsk/cxtxdxsx
/dev/rdsk/c4t8d0s2
Enter the disk name to be checked for SCSI-3 PGR on node nebula in
the format: /dev/rdsk/cxtxdxsx
Make sure it’s the same disk as seen by nodes galaxy and nebula
/dev/rdsk/c4t8d0s2
Note Whether or not the disk names are identical, they must refer to the same
physical disk, or the testing terminates without success.
4. The utility starts to perform the check and report its activities. For example:
Keys registered for disk /dev/rdsk/c4t8d0s2 on node galaxy
5. For a disk that is ready to be configured for I/O Fencing on both systems, the utility
reports success. For example:
The disk /dev/rdsk/c4t8d0s2 is ready to be configured for I/O
Fencing on node galaxy
The disk /dev/rdsk/c4t8d0s2 is ready to be configured for I/O
Fencing on node nebula
6. Run the vxfentsthdw utility for each disk you intend to verify.
3. Import the disk group with the -t option so that it is not automatically imported
when the systems are rebooted:
# vxdg -t import vxfencoorddg
Setting Up Coordinator Disks
4. Deport the disk group. Leaving the disk group deported prevents the coordinator
disks from being used for other purposes.
# vxdg deport vxfencoorddg
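The coordinator disk group sequence above can be summarized in a dry-run sketch. The disk names are hypothetical, and `VXDG` defaults to `echo vxdg` so the commands are only printed; on a real master node you would set VXDG=vxdg:

```shell
# Dry-run sketch of the coordinator disk group steps (disk names hypothetical).
# VXDG defaults to `echo vxdg` so nothing is changed here.
VXDG=${VXDG:-echo vxdg}
out_init=$($VXDG init vxfencoorddg c1t1d0 c2t1d0 c3t1d0)  # create the group
out_import=$($VXDG -t import vxfencoorddg)                # -t: no auto-import on reboot
out_deport=$($VXDG deport vxfencoorddg)                   # leave deported to reserve the disks
printf '%s\n%s\n%s\n' "$out_init" "$out_import" "$out_deport"
```

Run as-is, the sketch prints the three vxdg commands rather than executing them.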
Port Function
b I/O fencing
o VCSMM driver
q QuickLog daemon
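To confirm that the drivers behind the ports in the table above are up, you can scan `gabconfig -a` output for the port letters. The sketch below embeds a sample listing; the gen numbers and membership values are illustrative:

```shell
# Sketch: scan sample `gabconfig -a` output for the fencing (b), VCSMM (o),
# and QuickLog (q) ports. On a node: gab_out=$(gabconfig -a)
gab_out='GAB Port Memberships
Port a gen a36e0003 membership 01
Port b gen a36e0006 membership 01
Port o gen a36e0008 membership 01'
up=""
for port in b o q; do
  echo "$gab_out" | grep -q "^Port $port " && up="$up$port"
done
echo "ports up: $up"
```

In the sample output, ports b and o are up and port q is not, so the sketch reports "bo".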
Example VCS Configuration File After Installation
cluster vcs_racluster (
UserNames = { admin = "cDRpdxPmHpzS." }
Administrators = { admin }
HacliUserLevel = COMMANDROOT
CounterInterval = 5
)
system galaxy (
)
system nebula (
)
group cvm (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 0
Parallel = 1
AutoStartList = { galaxy, nebula }
)
CFSQlogckd qlogckd (
Critical = 0
)
CFSfsckd vxfsckd (
)
CVMCluster cvm_clus (
Critical = 0
CVMClustName = vcs_racluster
CVMNodeId = { galaxy = 0, nebula = 1 }
CVMTransport = gab
CVMTimeout = 200
)
// {
// CVMCluster cvm_clus
// CFSfsckd vxfsckd
// {
// CFSQlogckd qlogckd
// }
// }
Chapter 3. Installing Oracle9i Software
This chapter describes how to install Oracle9i software. You can install Oracle9i on shared
storage or locally on each node. However, the shared storage installation is featured in this
guide. The configuration of VCS service groups described in Chapter 4 and the example
main.cf file shown in Appendix A are based on using shared storage to install Oracle9i.
1. Edit the file /etc/system and set the shared memory parameter. Refer to the
Oracle9i Installation Guide.
2. Verify that the disk arrays used for shared storage support SCSI-3 persistent
reservations and I/O fencing. Refer to “Verifying Storage Supports SCSI-3 Persistent
Reservations” on page 40.
The shared storage used with VERITAS DBE/AC for Oracle9i RAC must support
SCSI-3 persistent reservations. SCSI-3 persistent reservations support enables the use
of I/O fencing to prevent data corruption caused by a split brain. I/O fencing is
described in the chapter on “I/O Fencing Scenarios” on page 81.
Creating Shared Disk Groups and Volumes - General
Creating Shared Disk Group and Volume for SRVM
Refer to the description of disk group activation modes in the VERITAS Volume Manager
Administrator’s Guide for more information (see the chapter on “Cluster Functionality”).
Note For VERITAS DBE/AC for Oracle9i RAC, release 3.5, a shared disk group is not
automatically configured for I/O fencing when it is created. A newly created disk
group must be deported and re-imported to be configured with I/O fencing. You
can use vxdg commands to accomplish this; refer to VERITAS Volume Manager 3.5
User’s Guide - VERITAS Enterprise Administrator.
1. From the master node, create the shared disk group on the shared disk c2t3d1:
# vxdg -s init orasrv_dg c2t3d1
Note If you are installing Oracle9i Release 1 Patch software, you must first install Oracle9i
Release 1 software. Then use the procedure in “Installing Oracle9i Release 1 Patch
Software” on page 61.
2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin. Do this on both nodes.
7. On one node, create a VxFS file system on the shared volume on which to install the
Oracle9i binaries. For example, create the file system on orabinvol:
# mkfs -F vxfs /dev/vx/rdsk/orabinvol_dg/orabinvol
Installing Oracle9i Release 1 Software
8. On both systems, create the mount point for the file system:
# mkdir /oracle
9. On both systems, mount the file system, using the device file for the block device:
# mount -F vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle
10. On both systems, create a local group and a local user for Oracle. For example, create
the group dba and the user oracle. Be sure to assign the same user ID and group ID
for the user on each system.
11. Set the home directory for the oracle user as /oracle.
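Steps 10 and 11 can be sketched as a dry run. The UID and GID values are examples only; what matters is that they match across systems. `RUN` defaults to `echo` so nothing is changed here:

```shell
# Dry-run sketch of creating the dba group and oracle user on each system.
# UID/GID values are examples; use the same values on every node.
RUN=${RUN:-echo}    # set RUN= (empty) on a real node, as root, to execute
out_grp=$($RUN groupadd -g 100 dba)
out_usr=$($RUN useradd -u 1001 -g dba -d /oracle -s /bin/ksh oracle)
printf '%s\n%s\n' "$out_grp" "$out_usr"
```

Run as-is, the sketch prints the groupadd and useradd commands rather than executing them.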
13. On one system, create the directory where you intend to copy the contents of the
Oracle9i software CDs. A total of 2 gigabytes is required.
# mkdir /install
# cd /install
14. Copy all the files from the three Oracle9i CDs. For example:
a. Insert the first of the three Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must
mount the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom
c. Repeat step a and step b for the remaining Oracle9i software CDs, copying the
files from the second CD into /install/Disk2 and the files from the third CD
into /install/Disk3.
15. Run the VERITAS DBE/AC for Oracle9i RAC script 9iinstall:
# /opt/VRTSvcs/rac/lib/9iinstall
The script prompts you to specify whether the Oracle you are installing is 32-bit or
64-bit.
Enter Oracle bits 32 or 64
64
Enter the directory location where you copied the contents of the Oracle9i installation
CDs. For this example, you would enter /install:
Please enter Oracle Installation copy location.
/install
The script modifies some of the copied Oracle9i files to be compatible with VERITAS
DBE/AC for Oracle9i.
17. On one system, create a directory for the installation of the Oracle9i binaries. For
example:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT
18. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.
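A minimal sketch of the .rhosts edit in step 18, using a temporary file rather than the real $HOME/.rhosts. A lone “+” on the first line trusts all hosts, which is why the permission should be removed after installation:

```shell
# Sketch of placing "+" on the first line of .rhosts (using a temp file here).
f=$(mktemp)
echo '+' > "$f"
first_line=$(head -1 "$f")
echo "first line: $first_line"
rm -f "$f"
```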
20. Run the Oracle9i utility runInstaller. The utility is located in the directory where
you copied Oracle9i software from Disk1.
$ /install/Disk1/runInstaller
- As the utility starts up, be sure to select the installation option: Software Only.
Refer to the Oracle9i Installation Guide for additional information about running
the utility.
- When the Node Selection dialog box appears, select only the local node.
Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 1” on page 98.
- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.
21. Oracle requires the global services daemon (gsd) to be running before any server
utilities can be started. When installation completes, run the gsd.sh script in the
background on both systems as user oracle. For example, where $ORACLE_HOME
equals /oracle/VRT, enter:
$ /oracle/VRT/bin/gsd.sh &
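One way to confirm gsd stayed up after step 21 is to scan process output. This sketch embeds a sample `ps` line; on a node you would pipe real `ps -ef` output:

```shell
# Sketch: check process output for gsd.sh. On a node: ps_out=$(ps -ef)
ps_out='  oracle  1234     1  0 10:02:11 ?        0:01 /bin/sh /oracle/VRT/bin/gsd.sh'
if echo "$ps_out" | grep -q 'gsd\.sh'; then
  gsd_status="gsd running"
else
  gsd_status="gsd NOT running"
fi
echo "$gsd_status"
```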
23. Copy /var/opt/oracle from the installation node to the other node:
# rcp -r /var/opt/oracle sysb:/var/opt
2. Add the directory path to the jar utility in the PATH environment variable. Typically,
this is /usr/bin.
5. Create a VxFS file system on the volume where you will install the Oracle9i
binaries. For example:
# mkfs -F vxfs /dev/vx/rdsk/or_dg/or_vol
7. Mount the file system, using the device file for the block device:
# mount -F vxfs /dev/vx/dsk/or_dg/or_vol /oracle
8. Edit the /etc/vfstab file, list the new file system, and specify “yes” for the “mount
at boot” choice. For example:
#device                   device                     mount    FS    fsck  mount    mount
#to mount                 to fsck                    point    type  pass  at boot  options
#
.
.
/dev/vx/dsk/or_dg/or_vol  /dev/vx/rdsk/or_dg/or_vol  /oracle  vxfs  1     yes      -
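The new vfstab entry can be sanity-checked by field position, since the third field is the mount point and the fourth is the file system type. This sketch embeds the entry as a string; on a node you would read /etc/vfstab itself:

```shell
# Sketch: verify the /oracle vfstab entry by field position.
# On a node: entry=$(grep '/oracle' /etc/vfstab)
entry='/dev/vx/dsk/or_dg/or_vol /dev/vx/rdsk/or_dg/or_vol /oracle vxfs 1 yes -'
check=$(echo "$entry" | awk '$3 == "/oracle" && $4 == "vxfs" { print "ok" }')
echo "vfstab entry: ${check:-BAD}"
```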
9. Create a local group and a local user for Oracle. For example, create the group dba
and the user oracle. Be sure to assign the same user ID and group ID for the user on
each system.
10. Set the home directory for the oracle user as /oracle.
13. On one system, create the directory where you intend to copy the contents of the
Oracle9i software CDs. A total of 2 gigabytes is required.
# mkdir /install
# cd /install
14. From one system, copy all the files from the three Oracle9i CDs. For example:
a. Insert the first of the three Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must
mount the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom
c. Repeat step a and step b for the remaining Oracle9i software CDs, copying the
files from the second CD into /install/Disk2 and the files from the third CD
into /install/Disk3.
15. Run the VERITAS DBE/AC for Oracle9i RAC script 9iinstall:
# /opt/VRTSvcs/rac/lib/9iinstall
The script prompts you to specify the Oracle you are installing as 64-bit or 32-bit.
Enter Oracle bits 32 or 64
64
Enter the directory location where you copied the contents of the Oracle9i installation
CDs. For this example, you would enter /install:
Please enter Oracle Installation copy location.
/install
The script modifies some of the copied Oracle9i files to be compatible with VERITAS
DBE/AC for Oracle9i.
17. On both systems, as user oracle, create a directory to install the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT
18. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.
20. Run the Oracle9i utility runInstaller. The utility is located in the directory where
you copied Oracle9i software from Disk1.
$ /install/Disk1/runInstaller
- As the utility starts, select the installation option: Software Only. Refer to the
Oracle9i Installation Guide for additional information about running the utility.
- When the Node Selection dialog box appears, select both nodes for installation.
Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 1” on page 98.
- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, we would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See
“Creating Shared Disk Group and Volume for SRVM” on page 53.
21. Oracle requires the global services daemon (gsd) to be running before any server
utilities can be started. When installation completes, run the gsd.sh script in the
background on both systems as user oracle. For example, where $ORACLE_HOME
equals /oracle/VRT, enter:
$ /oracle/VRT/bin/gsd.sh &
Installing Oracle9i Release 1 Patch Software
1. Log in as superuser.
2. Add the directory containing the jar utility to the PATH environment variable.
Typically this is /usr/bin.
3. On one system, create the directory where you intend to copy the Oracle9i Patch
software.
# mkdir /install/patch
# cd /install/patch
4. Copy all the files included with the downloaded Oracle9i patch software to
/install/patch. When you uncompress and untar the downloaded archive, the
software is placed in a directory named Disk1.
6. Depending on the level of patch you are installing, run the applicable VERITAS
DBE/AC for Oracle9i RAC script.
For Oracle9i Patch 2, enter:
# /opt/VRTSvcs/rac/lib/9iP2install
For Oracle9i Patch 3, enter:
# /opt/VRTSvcs/rac/lib/9iP3install
7. The script prompts you to specify whether the Oracle you are installing is 32-bit or
64-bit.
Enter Oracle bits 32 or 64
64
8. The script prompts you for the location of the copied Oracle9i Patch files. For this
example, you would enter the directory /install/patch:
Please enter Oracle patch Installation copy location.
/install/patch
As the script proceeds, it modifies some of the copied Oracle9i Patch files to be
compatible with VERITAS DBE/AC for Oracle9i.
9. Log in as oracle.
10. On both systems, edit the file .rhosts to provide the other system access to the local
system during the installation. Place a “+” character in the first line of the file. Note
that you can remove this permission after installation is complete.
15. To perform post install actions, refer to the Patch Set Notes that accompany the patch.
Installing Oracle9i Release 2 Software
2. Add the directory containing the jar utility to the PATH environment variable.
Typically, this is /usr/bin. Do this on both nodes.
5. Deport and import the disk group and set the activation:
# vxdg deport orabinvol_dg
# vxdg -s import orabinvol_dg
# vxvol -g orabinvol_dg startall
# vxdg -g orabinvol_dg set activation=sw
7. On the master node, create a VxFS file system on the shared volume on which to
install the Oracle9i binaries. For example, create the file system on orabinvol:
# mkfs -F vxfs /dev/vx/rdsk/orabinvol_dg/orabinvol
8. On both systems, create the mount point for the file system:
# mkdir /oracle
9. On both systems, mount the file system, using the device file for the block device:
# mount -F vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle
10. On both systems, create a local group and a local user for Oracle. For example, create
the group dba and the user oracle. Be sure to assign the same user ID and group ID
for the user on each system.
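A minimal sketch of this step, assuming example IDs 54321 and 54322 (any IDs unused on both nodes will do); run the same commands on each system:

```shell
# Create the dba group and the oracle user with identical numeric IDs on
# each system. The IDs below are examples, not values from this guide.
groupadd -g 54321 dba
useradd -u 54322 -g 54321 -d /oracle oracle
```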
11. On both systems, set the home directory for the oracle user as /oracle.
13. On one system, enter the following commands, using the appropriate example below
to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so
14. On the first system, insert Disk1 of the Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must mount
the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom
16. On the first system, edit the file .rhosts to provide the other system access to the
local system during the installation. Place a “+” character in the first line of the file.
Note that you can remove this permission after installation is complete.
18. On both systems, create a directory for the installation of the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT
Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 2” on page 99.
- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, you would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See “Creating
Shared Disk Group and Volume for SRVM” on page 53.
- When you are prompted for other Oracle9i disks, refer to step 14 if necessary. You
may need to log in as root to manually mount the CDs and log back in as oracle
to continue.
2. Add the directory containing the jar utility to the PATH environment variable.
Typically, this is /usr/bin.
5. Create a VxFS file system on orabinvol to install the Oracle9i binaries. For example:
# mkfs -F vxfs /dev/vx/rdsk/or_dg/or_vol
7. Mount the file system, using the device file for the block device:
# mount -F vxfs /dev/vx/dsk/or_dg/or_vol /oracle
8. Edit the /etc/vfstab file, list the new file system, and specify “yes” for the “mount
at boot” choice. For example:
#device                   device                     mount    FS    fsck  mount    mount
#to mount                 to fsck                    point    type  pass  at boot  options
#
.
.
/dev/vx/dsk/or_dg/or_vol  /dev/vx/rdsk/or_dg/or_vol  /oracle  vxfs  1     yes      -
9. Create a local group and a local user for Oracle. For example, create the group dba
and the user oracle. Be sure to assign the same user ID and group ID for the user on
each system.
10. Set the home directory for the oracle user as /oracle.
13. On one system, enter the following commands, using the appropriate example below
to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so
14. On the first system, insert Disk1 of the Oracle9i disks in the CD-ROM drive.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must mount
the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
In this example, /dev/dsk/c0t6d0s2 is the name for the CD drive.
# cd /cdrom
16. On the first system, edit the file .rhosts to provide the other system access to the
local system during the installation. Place a “+” character in the first line of the file.
Note that you can remove this permission after installation is complete.
18. On both systems, create a directory for the installation of the Oracle9i binaries:
$ mkdir VRT
$ export ORACLE_HOME=/oracle/VRT
Note If the Node Selection dialog box does not appear, refer to “Missing Dialog Box
During Installation of Oracle9i Release 2” on page 99.
- As the installer runs, it prompts you to provide the name of the shared volume
you created previously for the Oracle SRVM component, srvconfig. In this
example, you would enter /dev/vx/rdsk/orasrv_dg/srvm_vol. See “Creating
Shared Disk Group and Volume for SRVM” on page 53.
- When you are prompted for other Oracle9i disks, refer to step 14 if necessary. You
may need to log in as root to manually mount the CDs and log back in as oracle
to continue.
Removing Temporary rsh Access Permissions
3. Depending on whether you are using the 32-bit or the 64-bit Oracle version, link the
libraries using one of the following commands:
For 32-bit versions:
$ ln -s /usr/lib/libodm.so libodm9.so
For 64-bit versions:
$ ln -s /usr/lib/sparcv9/libodm.so libodm9.so
Creating Databases
At this time, you can create an Oracle database on shared storage. Use your own tools or
refer to “Creating Starter Databases” on page 133 to create a starter database.
Configuring VCS Service Groups for Oracle 4
This chapter describes how to modify the VCS configuration file, main.cf, to define the
service groups for CVM and for Oracle databases in a VCS configuration within a RAC
environment.
Configuring CVM and Oracle Groups in main.cf File
[Dependency chart: Two parallel Oracle database groups, oradb1_grp and
oradb2_grp. Each contains an Oracle resource (VRT_db and rac_db, respectively)
on top of a CFSMount resource (oradb1_mnt, oradb2_mnt), which in turn depends
on a CVMVolDg resource (oradb1_voldg, oradb2_voldg). Both database groups
depend on the cvm group, which contains the Sqlnet resource LISTENER over the
CFSMount resource orabin_mnt and the IP resource listener_ip, with the
CFSQlogckd resource qlogckd and the CVMCluster resource cvm_clus at the base.]
1. Make sure the cvm group has its Parallel attribute set to 1. Typically, this is
already done during installation.
.
group cvm (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 0
Parallel = 1
AutoStartList = { galaxy, nebula }
)
.
.
2. Define the NIC and IP resources. The VCS bundled NIC and IP agents are described
in VERITAS Cluster Server Bundled Agents Reference Guide. The device name and the IP
addresses are required by the listener for public network communication. Note that
for the IP resource, the Address attribute is localized for each node (see “Attributes
of CVM and Oracle Groups that Must be Defined as Local” on page 78).
.
NIC listener_hme0 (
Device = hme0
NetworkType = ether
)
IP listener_ip (
Device = hme0
Address @galaxy = "192.2.40.21"
Address @nebula = "192.2.40.22"
)
.
3. Define the Sqlnet resource. The Sqlnet listener agent is described in detail in the
VERITAS Cluster Server Enterprise Agent for Oracle, Installation and Configuration Guide.
Note that the Listener attribute is localized for each node.
.
Sqlnet LISTENER (
Owner = oracle
Home = "/oracle/orahome"
TnsAdmin = "/oracle/orahome/network/admin"
MonScript = "./bin/Sqlnet/LsnrTest.pl"
Listener @galaxy = LISTENER_a
Listener @nebula = LISTENER_b
EnvFile = "/opt/VRTSvcs/bin/Sqlnet/envfile"
)
.
4. You must configure the CVMVolDg and CFSMount resources in the cvm group for the
Oracle binaries installed on shared storage. Refer to the appendix “CVMCluster,
CVMVolDg, and CFSMount Agents” on page 115 for a description of the CVMVolDg
and CFSMount agents.
.
CVMVolDg orabin_voldg (
CVMDiskGroup = orabindg
CVMVolume = { "orabinvol", "srvmvol" }
CVMActivation = sw
)
CFSMount orabin_mnt (
Critical = 0
MountPoint = "/oracle"
BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
Primary = galaxy
)
.
.
5. Define the dependencies of resources in the group. The dependencies are specified
such that the Sqlnet resource requires the IP resource that, in turn, depends on the
NIC resource. The Sqlnet resource also requires the CFSMount resource. The
CFSMount resource requires the daemons, vxfsckd and qlogckd, used by the cluster
file system. The CFSMount resource also depends on the CVMVolDg resource, which,
in turn, requires the CVMCluster resource for communications within the cluster. The
“VCS Service Groups for RAC: Dependencies Chart” on page 72 visually shows the
dependencies specified in the following statements:
.
.
vxfsckd requires qlogckd
orabin_voldg requires cvm_clus
orabin_mnt requires vxfsckd
orabin_mnt requires orabin_voldg
listener_ip requires listener_hme0
LISTENER requires listener_ip
LISTENER requires orabin_mnt
.
.
Note The VCS Enterprise Agent for Oracle version 2.0.1 is installed when you run
installDBAC. When you refer to the VERITAS Cluster Server Enterprise Agent for
Oracle Installation and Configuration Guide, ignore the steps described in the section
“Installing the Agent Software.”
1. Using the “Sample main.cf File” on page 109 as an example, add a service group to
contain the resources for an Oracle database. For example, add the group
oradb1_grp. Make sure you assign the Parallel attribute a value of 1.
.
group oradb1_grp (
SystemList = { galaxy = 0, nebula = 1 }
AutoFailOver = 1
Parallel = 1
AutoStartList = { galaxy, nebula }
)
.
.
CFSMount oradb1_mnt (
MountPoint = "/oradb1"
BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
Primary = galaxy
)
.
.
3. Define the Oracle database resource. Refer to the VERITAS Cluster Server Enterprise
Agent for Oracle, Installation and Configuration Guide for information on the VCS
enterprise agent for Oracle. Note that the Oracle attributes Sid, Pfile, and Table
must be defined locally, that is, for each cluster system.
.
Oracle VRT_db (
Sid @galaxy = VRT1
Sid @nebula = VRT2
Owner = oracle
Home = "/oracle/orahome"
Pfile @galaxy = "/oracle/orahome/dbs/initVRT1.ora"
Pfile @nebula = "/oracle/orahome/dbs/initVRT2.ora"
User = scott
Pword = tiger
Table @galaxy = vcstable_galaxy
Table @nebula = vcstable_nebula
MonScript = "./bin/Oracle/SqlTest.pl"
AutoEndBkup = 1
EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
)
.
4. Define the dependencies for the Oracle service group. Note that the Oracle database
group is specified to require the cvm group, and that the required dependency is
defined as “online local firm,” meaning that the cvm group must be online and
remain online on a system before the Oracle group can come online on the same
system. Refer to the VERITAS Cluster Server User’s Guide for a description of group
dependencies.
Refer to “VCS Service Groups for RAC: Dependencies Chart” on page 72.
.
.
requires group cvm online local firm
oradb1_mnt requires oradb1_voldg
VRT_db requires oradb1_mnt
.
.
See the “Sample main.cf File” on page 109 for a complete example. You can also find the
complete file in /etc/VRTSvcs/conf/sample_rac/main.cf.
IP Address: The virtual IP address (not the base IP address) associated with the
interface. For example:
Address @sysa = "192.2.40.21"
Address @sysb = "192.2.40.22"
Oracle Sid: The variable $ORACLE_SID represents the Oracle system ID. For
example, if the SIDs for two systems, sysa and sysb, are VRT1 and
VRT2 respectively, their definitions would be:
Sid @sysa = VRT1
Sid @sysb = VRT2
Oracle Table: The table used for in-depth monitoring by User/PWord on each cluster
node. For example:
Table @sysa = vcstable_sysa
Table @sysb = vcstable_sysb
Using the same table on all Oracle instances is not recommended. Using
the same table generates IPC traffic and could cause conflicts between
the Oracle recovery processes and the agent monitoring processes in
accessing the table.
Note Table is only required if in-depth monitoring is used. If the
PWord varies by RAC instance, it must also be defined as local.
If other attributes for the Oracle resource differ for various RAC instances, define them
locally as well. These other attributes may include the Oracle resource attributes User,
PWord, the CVMVolDg resource attribute CVMActivation, and others.
I/O Fencing Scenarios 5
I/O Fencing of Shared Storage
When two systems have access to the data on shared storage, the integrity of the data
depends on the systems communicating with each other so that each is aware when the
other is writing data. Usually this communication occurs in the form of heartbeats
through the private networks between the systems. If the private links are lost, or even if
one of the systems is hung or too busy to send or receive heartbeats, each system could be
unaware of the other’s activities with respect to writing data. This is a split brain condition
and can lead to data corruption.
The I/O fencing capability of the DBE/AC, managed by VERITAS Volume Manager,
prevents data corruption in the event of a split brain condition by using SCSI-3 persistent
reservations for disks. This allows a set of systems to have registrations with the disk and
a write-exclusive registrants-only reservation with the disk containing the data. This
means that only these systems can read and write to the disk, while any other system can
only read the disk. The I/O fencing feature fences out a system that no longer sends
heartbeats to the other system by preventing it from writing data to the disk.
VxVM manages all shared storage subject to I/O fencing. It assigns the keys that
systems use for registrations and reservations for the disks—including all paths—in the
specified disk groups. The vxfen driver is aware of which systems have registrations and
reservations with specific disks.
To protect the data on shared disks, each system in the cluster must be configured to use
I/O fencing.
Event: Both private networks fail.
Node A: Races for a majority of the coordinator disks. If Node A wins the
race for the coordinator disks, Node A ejects Node B from the shared disks
and continues.
Node B: Races for a majority of the coordinator disks. If Node B loses the
race for the coordinator disks, Node B removes itself from the cluster.
Operator action: When Node B is ejected from the cluster, repair the private
networks before attempting to bring Node B back.

Event: Both private networks function again after the event above.
Node A: Continues to work.
Node B: Has crashed. It cannot start the database since it is unable to
write to the data disks.
Operator action: Reboot Node B after the private networks are restored.

Event: One private network fails.
Node A: Prints a message about an IOFENCE on the console but continues.
Node B: Prints a message about an IOFENCE on the console but continues.
Operator action: Repair the private network. After the network is repaired,
both nodes automatically use it.

Event: Nodes A and B and the private networks lose power. The coordinator
and data disks retain power. Power returns to the nodes and they reboot,
but the private networks still have no power.
Node A: Reboots and the I/O fencing driver (vxfen) detects that Node B is
registered with the coordinator disks. The driver does not see Node B
listed as a member of the cluster because the private networks are down.
This causes the I/O fencing device driver to prevent Node A from joining
the cluster. The Node A console displays:
Potentially a preexisting split brain. Dropping out of the
cluster. Refer to the user documentation for steps required
to clear preexisting split brain.
Node B: Reboots and the I/O fencing driver (vxfen) detects that Node A is
registered with the coordinator disks. The driver does not see Node A
listed as a member of the cluster because the private networks are down.
This causes the I/O fencing device driver to prevent Node B from joining
the cluster. The Node B console displays the same message.
Operator action: Refer to the section in the Troubleshooting chapter for
instructions on resolving the preexisting split brain condition.

Event: Node A crashes while Node B is down. Node B comes up and Node A is
still down.
Node A: Is crashed.
Node B: Reboots and detects that Node A is registered with the coordinator
disks. The driver does not see Node A listed as a member of the cluster.
The I/O fencing device driver prints this message on the console:
Potentially a preexisting split brain. Dropping out of the
cluster. Refer to the user documentation for steps required
to clear preexisting split brain.
Operator action: Refer to the section in the Troubleshooting chapter for
instructions on resolving the preexisting split brain condition.

Event: Node B leaves the cluster and the disk array is still powered off.
Node A: Races for a majority of the coordinator disks. Node A fails because
only one of the three coordinator disks is available. Node A removes itself
from the cluster.
Node B: Leaves the cluster.
Operator action: Power on the failed disk array and restart the I/O fencing
driver to enable Node A to register with all the coordinator disks.
The keys currently assigned to disks can be displayed by using the vxfenadm command.
For example, from the system with node ID 1, display the key for the disk
/dev/rdsk/c2t1d0s2 by entering:
# vxfenadm -g /dev/rdsk/c2t1d0s2
Reading SCSI Registration Keys...
Device Name: /dev/rdsk/c2t1d0s2
Total Number of Keys: 1
key[0]:
Key Value [Numeric Format]: 65,80,71,82,48,48,48,48
Key Value [Character Format]: APGR0000
The -g option of vxfenadm displays all eight bytes of a key value in two formats. In the
numeric format, the first byte, representing the Node ID, contains the system ID plus 65.
The remaining bytes contain the ASCII values of the letters of the key, in this case,
“PGR0000.” In the character format line, the node ID 0 is expressed as “A”; node
ID 1 would be “B.”
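The byte layout described above can be reproduced with a short shell sketch; the key body “PGR0000” is taken from the example output:

```shell
# Build the 8-byte registration key as described above: the first byte is
# the node ID plus 65 (node 0 -> "A", node 1 -> "B"); the remaining bytes
# are the literal key body.
node_id=0
key_body="PGR0000"
first_byte=$(printf "\\$(printf '%03o' $((node_id + 65)))")
key="${first_byte}${key_body}"
echo "$key"      # APGR0000 for node ID 0
```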
Uninstalling DBE/AC for Oracle9i RAC 6
When removing the VERITAS DBE/AC for Oracle9i RAC software from the systems in the
cluster, use the sequence shown in the flowchart below. Some of the activities require that
you refer to other documents, namely, the VERITAS Volume Manager Installation Guide, the
VERITAS File System Installation Guide, and the Oracle9i Installation Guide.
[Flowchart: On each system in turn, remove the database (optional), then
remove the Oracle binaries. Next, on each system, stop VCS and remove VxVM:
if the root disk group is encapsulated, refer to the VxVM Installation Guide;
otherwise, enter pkgrm VRTSvxvm. When this has been repeated on every system,
the removal is done.]
Offlining the Oracle and Sqlnet Resources
Stopping VCS
On one node, stop VCS by entering:
# hastop -local
This command removes membership for ports h (VCS), v (CVM), and w (the vxconfigd
daemon). Ports b (I/O fencing), d (ODM), f (CFS), q (QIO), and o (VCSMM) still have
membership. Use the /sbin/gabconfig -a command to verify this.
You can stop VCS on the other node later. Refer to the flow diagram at the beginning of
this chapter.
1. Determine file systems to be unmounted. You can check the /etc/mnttab file. For
example, enter:
# cat /etc/mnttab | grep -i vxfs
The output shows each line of /etc/mnttab that contains an entry for a vxfs
file system.
2. By specifying its mount point, unmount each of the vxfs file systems listed in the
output:
# umount mount_point
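The two steps above can be combined into a small loop. This sketch substitutes a sample mnttab fragment for /etc/mnttab so it can run anywhere; on a live system, point it at the real file and drop the echo:

```shell
# Pick out the vxfs mount points from mnttab-format input (device, mount
# point, FS type, options, time) and unmount each one. A sample fragment
# stands in for /etc/mnttab here.
MNTTAB=/tmp/mnttab.sample
cat > "$MNTTAB" <<'EOF'
/dev/vx/dsk/orabindg/orabinvol /oracle vxfs suid,rw 1025000000
/dev/dsk/c0t0d0s0 / ufs rw 1025000000
EOF
vxfs_mounts=$(awk '$3 == "vxfs" { print $2 }' "$MNTTAB")
for mount_point in $vxfs_mounts; do
    echo umount "$mount_point"   # drop 'echo' to really unmount
done
```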
Unconfiguring the CFS Drivers
2. Remove any optional packages from the system; use the order shown in the following
example:
# pkgrm VRTSfspro VRTSvmpro VRTSobgui VRTSob VRTSfsdoc VRTSvmdoc VRTSvmman
3. Repeat step 1 and step 2 on the other system to remove any of the optional packages.
2. Remove the required packages from the system; use the order shown in the following
example:
# pkgrm VRTScavf VRTSodm VRTSgms VRTSglm VRTSvxfs
3. Repeat step 1 and step 2 on the other system to remove the required packages.
Uninstalling VCS
Use the uninstallvcs utility to remove all packages installed by the installvcs
utility, which include VRTSperl, VRTSvcs, VRTSgab, VRTSllt, VRTSvcsag,
VRTSvcsmg, VRTSvcsdc, VRTSdbac, and VRTSvcsor. The uninstallvcs utility also
unconfigures the drivers associated with VCS and VERITAS DBE/AC for Oracle9i RAC.
Before removing the packages from any system in the cluster, shut down applications such
as the Java Console or any VCS enterprise agents that depend on VCS. The
uninstallvcs script supplied with VERITAS DBE/AC for Oracle9i RAC does not
uninstall VCS enterprise agents other than the VCS Oracle enterprise agent. See the
documentation for a specific enterprise agent for instructions on removing it.
Note Do not attempt to uninstall VCS if you are running another VERITAS application
that uses the VCS modules GAB or LLT. Refer to the documentation supplied with
that application for instructions on how to shut down the application before
uninstalling VCS or any of its utilities.
1. Log in as root user, and at the system prompt, enter the following commands:
galaxy# cd /opt/VRTSvcs/install
galaxy# ./uninstallvcs
2. The script begins by discovering any VCS configuration files present on the system
and reporting cluster information from them:
VCS configuration files exist on this system with the following
information:
3. The script determines the current version of the VCS packages it finds to remove and
indicates whether dependencies for them exist. If a package has dependencies, the
dependent packages are listed and you are prompted to remove them.
Checking current installation on galaxy:
.
Checking current installation on nebula:
4. The script prompts you to remove existing VCS configuration files and the VCS
license key. If you decide not to remove the configuration files, they remain in place.
You must remove them or rename them before reinstalling VCS.
5. The script proceeds to terminate VCS processes running on the cluster systems.
Terminating VCS processes on galaxy:
Removing License Files
6. The script proceeds to remove VCS packages from the cluster systems after checking
dependencies among them.
Uninstalling VCS on galaxy:
2. Go to the directory containing the license key files and list them. Enter:
# cd /etc/vx/licenses/lic
# ls -a
3. Using the output from step 1, identify and delete the unwanted key files listed in
step 2.
Troubleshooting 7
Running Scripts for Engineering Support Analysis
You can use a set of three scripts that gather information about the configuration and
status of your cluster and its various modules. The scripts also identify package
information, debugging messages, console messages, and information about disk groups
and volumes. You can forward the output of each of these scripts to VERITAS customer
support, who can analyze the information and assist you in solving any problems.
getdbac
This script gathers information about the VERITAS DBE/AC for Oracle9i RAC modules.
Enter the following command on each system:
# /opt/VRTSvcs/bin/getdbac -local
The file /tmp/vcsopslog.time_stamp.sys_name.tar.Z contains the script’s output.
getcomms
This script gathers information about the GAB and LLT modules. On each system, enter:
# /opt/VRTSgab/getcomms -local
The file /tmp/commslog.time_stamp.tar contains the script’s output.
hagetcf
This script gathers information about the VCS cluster and the status of resources. To run
this script, enter the following command on each system:
# /opt/VRTSvcs/bin/hagetcf
The output from this script is placed in a tar file, /tmp/vcsconf.sys_name.tar, on
each cluster system.
Troubleshooting Topics
The following troubleshooting topics have headings that indicate likely symptoms or
the procedures required for a solution.
1. Make sure that the VCSMM driver is running on both nodes and start it if it is not:
# /sbin/vcsmmconfig -c
2. Make sure you have started with a clean system. It is possible to have a version of the
lsnodes command from another vendor. Delete the contents of directories
oraInventory and $ORACLE_HOME, and the directory where you copied the
Oracle9i files from the Oracle installation CDs.
1. On the first system, enter the following commands, using the appropriate example
below to copy the VERITAS CM library based on the version of Oracle9i:
# cd /opt/ORCLcluster/lib/9iR2
If the version of Oracle9i is 32-bit, enter:
# cp libskgxn2_32.so ../libskgxn2.so
If the version of Oracle9i is 64-bit, enter:
# cp libskgxn2_64.so ../libskgxn2.so
pending: ODM cannot yet communicate with its peers, but anticipates being able
to eventually.
failed: ODM cluster support has failed to initialize properly. Check console logs.
disabled: ODM is not supporting cluster files. If you think it should, check:
- The /dev/odm mount options in /etc/vfstab. If the “nocluster” option is
being used, it can force the “disabled” cluster support state.
- That the VRTSgms (group messaging service) package is installed.
Run the following command:
# /opt/VRTSvcs/bin/chk_dbac_pkgs
The utility reports any missing required packages.
This message is displayed when CVM cannot retrieve the node ID of the local system
from the vxfen driver. This usually happens when port b is not configured. Verify that
the vxfen driver is configured by checking the GAB ports with the command:
# /sbin/gabconfig -a
Port b must exist on the local system.
1. Fix the problem causing the import of the shared disk group to fail.
2. Offline the service group containing the resource of type CVMVolDg as well as the
service group containing the CVMCluster resource type.
2. Use the format command to verify that the host sees the disks. It may take a few
minutes before the host is capable of seeing the disk.
3. Issue the following vxdctl command to force the VxVM configuration daemon
vxconfigd to rescan the disks:
# vxdctl enable
3. If you know on which node the key was created, log in to that node and enter the
following command:
# vxfenadm -x -kA1 -f /tmp/disklist
The key is removed.
4. If you do not know on which node the key was created, follow step 5 through step 7
to remove the key.
6. Remove the first key from the disk by preempting it with the second key:
# vxfenadm -p -kA2 -f /tmp/disklist -vA1
key: A2------ prempted the key: A1------ on disk
/dev/rdsk/c1t0d11s2
2. Verify the systems currently registered with the coordinator disks. Use the following
command:
# vxfenadm -g all -f /etc/vxfentab
The output of this command identifies the keys registered with the coordinator disks.
3. Clear the keys on the coordinator disks as well as the data disks using the command
/opt/VRTSvcs/rac/bin/vxfenclearpre. See “Using vxfenclearpre Command
to Clear Keys After Split Brain” on page 107.
1. Shut down all other systems in the cluster that have access to the shared storage. This
prevents data corruption.
3. Read the script’s introduction and warning. Then, you can choose to let the script run.
Do you still want to continue: [y/n] (default : n)
y
Note Informational messages resembling the following may appear on the console of one
of the nodes in the cluster when a node is ejected from a disk/LUN:
Sample main.cf File A
A sample VCS configuration file, /etc/VRTSvcs/conf/sample_rac/main.cf, is
available online. It is also shown here for offline reference.
/etc/VRTSvcs/conf/sample_rac/main.cf
// %W% %G% %U% - %Q% #
//ident "@(#)vcsops:%M% %I%"
//
// Copyright (c) 2000-2002 VERITAS Software Corporation. All Rights
// Reserved.
//
// THIS SOFTWARE CONTAINS CONFIDENTIAL INFORMATION AND
// TRADE SECRETS OF VERITAS SOFTWARE. USE, DISCLOSURE,
// OR REPRODUCTION IS PROHIBITED WITHOUT THE PRIOR
// EXPRESS WRITTEN PERMISSION OF VERITAS SOFTWARE CORPORATION
//
// RESTRICTED RIGHTS LEGEND
// USE, DUPLICATION OR DISCLOSURE BY THE U.S. GOVERNMENT IS
// SUBJECT TO RESTRICTIONS SET FORTH IN
// THE VERITAS SOFTWARE CORPORATION LICENSE AGREEMENT AND AS
// PROVIDED IN DFARS 227.7202-1(a) and
// 227.7202-3(a) (1998), and FAR 12.212, as applicable.
// VERITAS Software Corporation
// 350 Ellis Street,
// Mountain View, CA 94043.
// UNPUBLISHED -- RIGHTS RESERVED UNDER THE COPYRIGHT
// LAWS OF THE UNITED STATES. USE OF A COPYRIGHT NOTICE
// IS PRECAUTIONARY ONLY AND DOES NOT IMPLY PUBLICATION
// OR DISCLOSURE.
//
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"
cluster vcs

system sysa
system sysb

group cvm (
    SystemList = { sysa = 0, sysb = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { sysa, sysb }
    )

CVMCluster cvm_clus (
    Critical = 0
    CVMClustName = vcs
    CVMNodeId = { sysa = 0, sysb = 1 }
    CVMTransport = gab
    CVMTimeout = 200
    )

CFSQlogckd qlogckd (
    Critical = 0
    )

CFSfsckd vxfsckd (
    )

CVMVolDg orabin_voldg (
    CVMDiskGroup = orabindg
    CVMVolume = { "orabinvol", "srvmvol" }
    CVMActivation = sw
    )

CFSMount orabin_mnt (
    Critical = 0
    MountPoint = "/oracle"
    BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
    Primary = sysa
    )

NIC listener_hme0 (
    Device = hme0
    NetworkType = ether
    )

IP listener_ip (
    Device = hme0
    Address @sysa = "192.2.40.21"
    Address @sysb = "192.2.40.22"
    )

Sqlnet LISTENER (
    Owner = oracle
    Home = "/oracle/orahome"
    TnsAdmin = "/oracle/orahome/network/admin"
    MonScript = "./bin/Sqlnet/LsnrTest.pl"
    Listener @sysa = LISTENER_a
    Listener @sysb = LISTENER_b
    EnvFile = "/opt/VRTSvcs/bin/Sqlnet/envfile"
    )

group oradb1_grp (
    SystemList = { sysa = 0, sysb = 1 }
    AutoFailOver = 1
    Parallel = 1
    AutoStartList = { sysa, sysb }
    )

CVMVolDg oradb1_voldg (
    CVMDiskGroup = oradb1dg
    CVMVolume = { "oradb1vol" }
    CVMActivation = sw
    )

CFSMount oradb1_mnt (
    MountPoint = "/oradb1"
    BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
    )

Oracle VRT_db (
    Sid @sysa = VRT1
    Sid @sysb = VRT2
    Owner = oracle
    Home = "/oracle/orahome"
    Pfile @sysa = "/oracle/orahome/dbs/initVRT1.ora"
    Pfile @sysb = "/oracle/orahome/dbs/initVRT2.ora"
    User = scott
    Pword = tiger
    Table @sysa = vcstable_sysa
    Table @sysb = vcstable_sysb
    MonScript = "./bin/Oracle/SqlTest.pl"
    AutoEndBkup = 1
    EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
    )

group oradb2_grp (
    SystemList = { sysa = 1, sysb = 0 }
    AutoFailOver = 1
    Parallel = 1
    AutoStartList = { sysb, sysa }
    )

CVMVolDg oradb2_voldg (
    CVMDiskGroup = oradbdg2
    CVMVolume = { "oradb2vol" }
    CVMActivation = sw
    )

CFSMount oradb2_mnt (
    MountPoint = "/oradb2"
    BlockDevice = "/dev/vx/dsk/oradbdg2/oradb2vol"
    Primary = sysb
    )

Oracle rac_db (
    Sid @sysa = rac1
    Sid @sysb = rac2
    Owner = oracle
    Home = "/oracle/orahome"
    Pfile @sysa = "/oracle/orahome/dbs/initrac1.ora"
    Pfile @sysb = "/oracle/orahome/dbs/initrac2.ora"
    User = scott
    Pword = tiger
    Table @sysa = vcstable_sysa
    Table @sysb = vcstable_sysb
    MonScript = "./bin/Oracle/SqlTest.pl"
    AutoEndBkup = 1
    EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
    )
114 VERITAS Database Edition/AC for Oracle9i RAC Installation and Configuration Guide
Appendix B: CVMCluster, CVMVolDg, and CFSMount Agents
This appendix describes the entry points and the attributes of the CVMCluster,
CVMVolDg, and CFSMount agents. Use this information to make any necessary changes
to the configuration.
CVMCluster Agent
The CVMCluster resource is configured automatically during installation. The
CVMCluster agent controls system membership on the cluster port associated with VxVM
in a cluster.
The following table describes the entry points used by the CVMCluster agent.
CVMNodeId{} (string-association): An associative list; the first part names the
system, and the second part contains the system's LLT ID number.
type CVMCluster (
    static int NumThreads = 1
    static int OnlineRetryLimit = 2
    static int OnlineTimeout = 400
    static str ArgList[] = { CVMTransport, CVMClustName,
        CVMNodeAddr, CVMNodeId, PortConfigd, PortKmsgd,
        CVMTimeout }
    NameRule = ""
    str CVMClustName
    str CVMNodeAddr{}
    str CVMNodeId{}
    str CVMTransport
    int PortConfigd
    int PortKmsgd
    int CVMTimeout
)
Online: If the system is the MASTER and the disk group is not imported, this entry
point imports the disk group and starts all the volumes in the shared disk group. It
then sets the disk group activation mode to shared-write as long as the
CVMActivation attribute is set to sw. The activation mode can be set on both slave
and master systems.

Offline: Sets the disk group activation mode to off so that all the volumes in the
disk group are invalid.

Monitor: Monitors the specified critical volumes in the disk group. The volumes to
be monitored are specified by the CVMVolList attribute. At least one volume in a
disk group must be specified.

Clean: Sets the disk group activation mode to off so that all the volumes in the
disk group are invalid (same as the Offline entry point).
Configuring the CVMVolDg and CFSMount Resources
CVMVolume (string-keylist): Lists the critical volumes in the disk group. At least
one volume in the disk group must be specified.

CVMActivation (string-scalar): Sets the activation mode for the disk group.
Default = sw.
Offline: Unmounts the file system, forcing unmount if necessary, and sets the
primary to secondary if necessary.

Monitor: Determines whether the file system is mounted. Checks mount status using
the fsclustadm command.

MountOpt (string-scalar): Options for the mount command. To create a valid MountOpt
attribute string:
- Use the VxFS type-specific options only.
- Do not use the -o flag to specify the VxFS-specific options.
- Do not use the -F vxfs file system type option.
- The cluster option is not required.
- Specify options in a comma-separated list, as in these examples:
  ro
  ro,cluster
  blkclear,mincache=closesync

Primary (string-scalar): INFORMATION ONLY. Stores the primary node name for a VxFS
file system. Primary is automatically modified when an unmounted file system is
mounted or another node becomes the primary.
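For illustration, the following sketch shows how a MountOpt string might appear in a CFSMount resource definition. The resource syntax follows the sample main.cf in Appendix A; the MountOpt value shown is only an example of a valid option string, not a recommendation:

```
CFSMount oradb1_mnt (
    MountPoint = "/oradb1"
    BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
    MountOpt = "blkclear,mincache=closesync"
    )
```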
Appendix C: Tunable Kernel Driver Parameters
LMX Tunable Parameters
Edit the file /kernel/drv/lmx.conf to change the values of the LMX driver tunable
global parameters. The following table describes the LMX driver tunable parameters.
lltport: Specifies the LLT port with which LMX registers and receives data. The
port can be any unused port between 0 and 31; dedicated ports, such as 0, 7, and
31, cannot be used. Use the command /sbin/lltstat -p to determine the ports in use.
Default: 10.

update: Enables asynchronous I/O. A value of zero (0) disables asynchronous I/O.
Default: 1 (enabled).
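By analogy with the vxfen configuration file shown under "vxfen Tunable Parameters," a tunable setting in /kernel/drv/lmx.conf would be added as a driver property. The fragment below is a hypothetical sketch (the property placement mirrors the vxfen example; verify the exact line in your installed lmx.conf before editing):

```
#
# lmx configuration file (hypothetical example: disable asynchronous I/O)
#
name="lmx" parent="pseudo" update=0 instance=0;
```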
2. Stop all Oracle client processes on the system, such as sqlplus and svrmgrl.
vxfen Tunable Parameters
dbg_log_size: Size of the kernel log buffer in bytes. The minimum value is 32768
bytes; the default is 131072.
For example, to change the size of the kernel log buffer to 65536 bytes, edit the file and
add the configuration parameter dbg_log_size=65536:
#
# vxfen configuration file
#
name="vxfen" parent="pseudo" dbg_log_size=65536 instance=0;
Appendix D: Error Messages
LMX Error Messages, Critical
The following table lists LMX kernel module error messages. These messages report
critical errors seen when the system runs out of memory, when LMX is unable to
communicate with LLT, or when you are unable to load or unload LMX. Refer to
“Running Scripts for Engineering Support Analysis” on page 97 for information on how
to gather information about your systems and configuration that VERITAS support
personnel can use to assist you.
LMX Error Messages, Non-Critical
00018 lmxload contexts number > number, max contexts = system limit = number
00019 lmxload ports number > number, max ports = system limit = number
00020 lmxload buffers number > number, max buffers = system limit = number
00021 lmxload msgbuf number > number, max msgbuf size = system limit = number
06002 lmxreqlink duplicate ureq= 0xaddress kr1= 0xaddress, kr2= 0xaddress req type = number
06301 lmxrecvport port not found unode= number node= number ctx= number
06303 lmxrecvport port not found ugen= number gen= number ctx= number
Message: vold_pgr_register(disk_path): failed to open the vxfen device. Please make
sure that the vxfen driver is installed and configured
Explanation: The vxfen driver has not been configured. Follow the instructions in
"Setting Up Coordinator Disks" on page 44 to set up coordinator disks and start I/O
fencing. Then clear the faulted resources and online the service groups.

Message: VXFEN: Unable to register with coordinator disk with serial number: xxxx
Explanation: This message appears when the vxfen driver is unable to register with
one of the coordinator disks. The serial number of the coordinator disk that failed
is printed.

Message: VXFEN: Unable to register with a majority of the coordinator disks.
Dropping out of cluster.
Explanation: This message appears when the vxfen driver is unable to register with
a majority of the coordinator disks. The problems with the coordinator disks must
be cleared before fencing can be enabled. This message is preceded by the message
"VXFEN: Unable to register with coordinator disk with serial number xxxx."
Appendix E: Creating Starter Databases
The optional procedures in this appendix describe suggested methods for creating a
starter Oracle9i database on shared storage. They are provided in case you are not
creating your own database with tools you already have available.

You can create the database on raw VxVM volumes or on VxFS shared file systems,
using either of two methods: dbca (the Oracle9i Database Creation Assistant) or
scripts provided by VERITAS.

For Oracle9i Release 1 and Release 2, you can use dbca to create a starter database
in raw volumes, or you can use the VERITAS scripts to create a database in either
raw volumes or a VxFS file system.
Using Oracle dbca to Create a Starter Database
Tablespace Requirements
The total recommended size required for these tablespaces is 6.56 GB.
3. Create a volume in the shared group for each of the tablespaces listed in the previous
table, “Tablespace Requirements” on page 134:
# vxassist -g ora_dg make VRT_system1 1000M
# vxassist -g ora_dg make VRT_spfile1 10M
.
.
4. Enable write access to the shared disk group. On the master node, enter:
# vxdg deport ora_dg
# vxdg -s import ora_dg
# vxvol -g ora_dg startall
# vxdg -g ora_dg set activation=sw
6. Define the access mode and permissions for the volumes storing the Oracle data. For
each volume listed in $ORACLE_HOME/raw_config, use the vxedit(1M) command:
vxedit -g disk_group set group=group user=user mode=660 volume
For example:
# vxedit -g ora_dg set group=dba user=oracle mode=660 VRT_system1
In this example, VRT_system1 is the name of one of the volumes. Repeat the
command to define the access mode and permissions for each volume in the ora_dg
disk group.
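The repetition in step 6 can be scripted. The loop below is a sketch that only prints the vxedit command for each volume name; pipe the output to sh once it looks correct. The raw_config format assumed here (one volume name per line) is an illustration only; adjust the parsing to match the actual contents of $ORACLE_HOME/raw_config:

```shell
# Build a sample raw_config-style file (one volume name per line).
cat > raw_config <<'EOF'
VRT_system1
VRT_spfile1
EOF

# Print (not run) a vxedit command for each listed volume.
while read vol; do
  echo "vxedit -g ora_dg set group=dba user=oracle mode=660 $vol"
done < raw_config
```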
You can now create the database. See “Creating the Oracle Starter Database Using
Oracle9i dbca” on page 136.
Note If you find the dbca utility is unable to create a database, you can create the starter
database using the VERITAS scripts described in the next section.
Using VERITAS Scripts to Create a Starter Database
1. Create a disk group on shared storage in which to create the database. For example:
# vxdg init raw_db_dg c2t3d1s2
2. Deport and import the disk group, start the volume, and enable write access to the
shared disk group. On the master node, enter:
# vxdg deport raw_db_dg
# vxdg -s import raw_db_dg
# vxvol -g raw_db_dg startall
# vxdg -g raw_db_dg set activation=sw
5. Create a directory into which to copy the database creation scripts from the
/opt/VRTSvcs/rac/db_scripts/raw directory. Do not edit the files in the original
directory, because creating the database replaces the variables in the scripts.
For example:
$ mkdir db_cr_raw
$ cp /opt/VRTSvcs/rac/db_scripts/raw/* db_cr_raw
6. Read the file README that is copied with the database creation scripts. This procedure
is described in the README.
7. Edit the file conf, and set all the variables as they apply to your environment:
$ vi conf
10. Run the script cr_vols. This script creates volumes for tablespaces in the disk group
you created in step 1.
14. Run the convert command. This command uses the values of the environment
variables set in the conf file to modify the database creation scripts. Enter:
$ convert
15. Examine the script runall.sh and use comment symbols to prevent the use of any
database options you don't want. The following example shows the runall.sh script
edited so that demo schemas are not created:
$ vi runall.sh
ORACLE_SID=${INSTANCE_1}
export ORACLE_SID
16. Check the permissions for runall.sh. It requires execute permission for the user.
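A quick way to check and, if needed, add the execute bit is shown below. This is a generic sketch run from the directory containing runall.sh (the touch line merely stands in for the script generated by the earlier steps):

```shell
# Ensure runall.sh is executable by its owner before running it.
touch runall.sh                      # stand-in for the generated script
[ -x runall.sh ] || chmod u+x runall.sh
test -x runall.sh
```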
1. Create a disk group on shared storage in which to create the database. For example:
# vxdg init racfs_db c2d2t1s1
4. Read the file README that is copied with the database creation scripts. This procedure
is described in the README.
5. Edit the file conf, and set all the variables as they apply to your environment:
$ vi conf
8. Run the script prep_fs. This script creates and mounts a file system with the
required options and settings based on the contents of the conf file.
12. Run the convert command. This command uses the values of the environment
variables set in the previous step to modify the database creation scripts. Enter:
$ convert
13. Examine the script runall.sh and use comment symbols to prevent the use of any
database options you don’t want:
$ vi runall.sh
The following example shows the runall.sh script edited such that demo schemas
are not created:
# This is a sample database creation script.
# It is assumed that the script is run from Bourne or K shell.
# Comment out those lines which you don’t want to be executed,
# e.g., if demo schemas are not required, comment out the
# corresponding line.
ORACLE_SID=${INSTANCE_1}
export ORACLE_SID
14. Check the permissions for runall.sh. It requires execute permission for the user.
Index
Symbols
.rhosts
    editing to enable rsh 25, 56, 60, 62, 64, 67
    editing to remove rsh permissions 69
/etc/default/vxdg 52
/opt/VRTSvcs/bin/chk_dbac_pkgs 39
/sbin/lmxconfig 124
/sbin/vcsmmconfig, starting VCSMM 98

Numerics
9iinstall script 56, 59
9iP2install script 61
9iP3install script 61

A
Activation mode, setting 53, 135, 137
Agents
    CFSMount 120
    CVMCluster 116
    CVMVolDg 118
    Oracle. See Oracle enterprise agent
Attributes
    CFSMount agent 120
    CVMCluster agent 116
    CVMVolDg agent 119
    Oracle agent, setting as local 73
    Sqlnet agent, setting as local 73

C
CFS (Cluster File System)
    extension of VxFS 9
    used with ODM 9
CFSMount agent 120
    definition 118
    sample configuration 121
    type definition 119
CFSTypes.cf file 120
Cluster Manager (CM) 3
Cluster membership 3, 12
Commands
    /sbin/lltstat -p (listing ports in use) 123
    convert 138
    format (verify disks) 103
    lmxconfig -c (configure LMX) 124
    lmxconfig -U (unconfigure LMX) 124
    vxassist 53
    vxdctl enable (scan disks) 103
    vxdg list (disk group information) 52
    vxedit (set shared volume mode) 135
    vxfenclearpre 107
Communications
    data stack illustration 5
    LLT and GAB overview 6
    provided by GAB 7
Configuration files
    LMX tunable parameters 123
    sample main.cf file 109
    VXFEN tunable parameters 125
Connectivity policy of disk groups 52
Contexts
    also known as minors 123
    LMX tunable parameter 123
convert command 138
Coordinator disks
    concept of 14
    description 81
    setting up 44
cr_vols script 138
CVM (Cluster Volume Manager)
    architecture 8
    extension of VxVM 8
CVM service group
    configuring in main.cf 73
    created after installation 47
CVMCluster agent
    description 115
    sample configuration 117
    type definition 116
CVMTypes.cf file 116, 119
CVMVolDg agent
    description 118
    type definition 119

D
Data corruption
    preventing with I/O fencing 13
    system panics to prevent 105
Database
    using dbca to create 133
    using scripts to create 137
Dependencies among service groups 72
Disk group
    creating for SRVM 53
    displaying information about 52
    general guidelines 52
    importing 53, 135, 137
    setting activation mode 53, 135, 137
    setting the connectivity policy 52

E
Ejected systems
    error messages displayed 131
    losing access to shared storage 81
    recovering from ejection 105
EMC Symmetrix 8000 series storage
    PR flag requirement 41
Enterprise agent for Oracle
    See Oracle enterprise agent
Environment variables
    MANPATH 24
    PATH variable 24
Error messages
    LMX messages 127
    running vxfenclearpre command 107
    VXFEN messages 130
    VxVM, related to I/O fencing 130
    when node is ejected 131

F
Fencing. See I/O fencing
Files
    .rhosts 25
    /etc/default/vxdg 52
Flowchart, RAC installation overview 20
format command 103

G
GAB
    part of communications package 5
    port memberships 46
getcomms (troubleshooting script) 97
getdbac (troubleshooting script) 97

H
hagetcf (troubleshooting script) 97
Hitachi Data Systems
    System Mode 186 must be set 41
Hitachi Data Systems storage
    99xx Series supported 41

I
I/O fencing
    components 14
    description 81
    overview 12
    scenarios for I/O fencing 82
    starting 45
    stopping 91
Importing disk groups 53, 135, 137
Inter-Process Communication (IPC) 3

K
Kernel driver parameters 123
Keys
    registration keys, formatting of 85
    removing registration keys 103

L
License keys
    adding during installation 26
    obtaining xiv
Listener process
    configuring as resource in main.cf 74
    shown in configuration 11
LLT 6
LMX
    configuring with lmxconfig -c 124
    error messages 127
    tunable driver parameters 123
    unconfiguring 124
Local, defining attributes as 78
Log files
    location of Oracle agent log files 79
    location of VCS log files 79
LUNs
    using for coordinator disks 44
    verifying serial numbers for paths to 41

M
main.cf
    example 109
    example after installation 48
MANPATH environment variable 24
Manual pages, setting MANPATH 24
Minors
    also known as contexts 123
    appearing in LMX error messages 101
    increasing maximum number of 124

O
ODM (Oracle Disk Manager)
    described 9
    linking Oracle9i to VERITAS libraries 69
Oracle enterprise agent
    configuring for RAC 73
    documentation 76
    location of log files 79
Oracle instance 2
Oracle service group
    configuring in main.cf 76
    dependencies among resources 72
Oracle9i
    confirming parallel flag 71
    installation tasks 51
    installing Release 1 locally 58
    installing Release 1 on shared disk 54
    installing Release 1 Patch 61
    installing Release 2 locally 66
    installing Release 2 on shared disks 63
    prerequisites for installing 51
    runInstaller utility 60, 62

P
Packages
    for CVM and CFS installation 34
    for VCS installation 32
    removing 91
    verifying installation of 39
Parallel attribute, setting for groups 73
Parallel flag, confirming in Oracle 71
Parameters
    LMX tunable driver parameters 123
    shared memory parameter 51
    vxfen tunable driver parameters 125
Patches
    required for installation 22
    verifying installation of 39
PATH environment variable, setting 24
Persistent reservations 41, 81
Ports, GAB
    displaying 46
    membership, list of 46
    removing membership for 90
Ports, tunable kernel parameter 123
prep_fs script 140

R
RAC
    architecture 1
    defined 1
    installation flowchart 20
Registration key
    displaying with vxfenadm 85
    formatting of 85
Registrations
    for I/O fencing 81
    key formatting 85
Reservations
    description 81
    description and use 13
    SCSI-3 persistent (PR) 18
rsh permissions
    for inter-system communication 25
    removing 69
runall.sh script 138
runInstaller, Oracle 9i utility 60, 62

S
Scripts
    9iinstall 56, 59
    9iP2install, 9iP3install 61
    cr_vols 138
    prep_fs 140
    runall.sh 138
    using to create Database 137
SCSI-3 persistent reservations
    described 13
    EMC Symmetrix support 41
    requirement for I/O fencing 40
    verifying EMC Symmetrix support 41
    verifying that storage supports 40, 51
Service groups
    CVM 72
    dependencies among 72
Split brain
    described 12
    removing risks associated with 13
Sqlnet agent, setting local attributes 73
Sqlnet resource, configuring in main.cf 74
srvconfig (Oracle SRVM) component 53, 57
SRVM, creating disk group for 53
Storage
    setting up shared in cluster 18
    supported for DBE/AC 41
    testing for I/O fencing 42

T
Tablespaces
    table of tablespace names and sizes 134
Troubleshooting scripts 97

U
Uninstalling DBE/AC for Oracle9i RAC 92
uninstallvcs utility 92

V
VCS
    architecture 10
    location of log files 79
    uninstalling previous version 22
VCS membership module 12
VCSIPC
    copying library into place 65
    errors in trace/log files 101
VCSMM module 12
vcsmmconfig command 98
VERITAS ODM libraries, linking 69
vxassist command 53
vxdctl command 103
vxfen command 45, 91
vxfen driver
    kernel tunable parameters 125
    starting 45
    unconfiguring 91
VXFEN error messages 130
vxfenadm command
    options for administrators 85
    using -i to read LUN info 41
vxfenclearpre command
    error messages 107
    running 107
vxfentab file, created by rc script 45
vxfentsthdw utility, using to test disks 42
VxFS
    removing previous version 22
    Solaris 8 patches for 22
VxVM
    CVM is extension of 8
    removing 92
    Solaris 8 patches for 22