
Source (with thanks): http://www.iselfschooling.com/mc4articles/RACinstallation.htm

How to install Oracle RAC on Sun Cluster?



Step-By-Step Installation of RAC on Sun Cluster v3


This document will provide the reader with step-by-step instructions on how to install
a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database
on Sun Cluster v3. For additional explanation or information on any of these steps,
please see the references listed at the end of this document.

1. Configuring the Cluster Hardware


1.1 Minimal Hardware List / System Requirements
For a two-node cluster, the following is a minimum recommended hardware list.

1.1.1 Hardware

For Sun servers, Sun or third-party storage products, cluster interconnects, public networks, switch options, and memory, swap, and CPU requirements, consult the operating system or hardware vendor. Sun's interconnect, RSM, is now supported in v9.2.0.2 with patch number 2454989.

System disk partitions:
/globaldevices - a 100 Mb file system that will be used by the scinstall(1M) utility for global devices.
Volume manager - a 10 Mb partition for volume manager use on a slice at the end of the disk (slice 7). If your cluster uses VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM.

As with any other system running the Solaris Operating System (SPARC) environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system. The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.
root (/) - The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. For best results, you need to configure ample additional space and inode capacity for the creation of both block special devices and character special devices used by VxVM software, especially if a large number of shared disks are in the cluster. Therefore, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system.
/var - The Sun Cluster software occupies a negligible amount
of space in the /var file system at installation time. However,
you need to set aside ample space for log files. Also, more
messages might be logged on a clustered node than would be
found on a typical standalone server. Therefore, allow at least
100 Mbytes for the /var file system.
/usr - Sun Cluster software occupies less than 25 Mbytes of
space in the /usr file system. VxVM software require less than
15 Mbytes.
/opt - Sun Cluster framework software uses less than 2 Mbytes
in the /opt file system. However, each Sun Cluster data service
might use between 1 Mbyte and 5 Mbytes. VxVM software
can use over 40 Mbytes if all of its packages and tools are
installed. In addition, most database and applications software
is installed in the /opt file system. If you use Sun Management
Center software to monitor the cluster, you need an additional
25 Mbytes of space on each node to support the Sun
Management Center agent and Sun Cluster module packages.
An example system disk layout is as follows:

Slice  Contents         Allocation (Mbytes)  Description
0      /                1168                 441 Mbytes for Solaris Operating System (SPARC)
                                             environment software.
                                             100 Mbytes extra for root (/).
                                             100 Mbytes extra for /var.
                                             25 Mbytes for Sun Cluster software.
                                             55 Mbytes for volume manager software.
                                             1 Mbyte for Sun Cluster HA for NFS software.
                                             25 Mbytes for the Sun Management Center agent
                                             and Sun Cluster module agent packages.
                                             421 Mbytes (the remaining free space on the
                                             disk) for possible future use by database and
                                             application software.
1      swap             750                  Minimum size when physical memory is less than
                                             750 Mbytes.
2      overlap          2028                 The entire disk.
3      /globaldevices   100                  The Sun Cluster software later assigns this
                                             slice a different mount point and mounts it as
                                             a cluster file system.
4      unused           -                    Available as a free slice for encapsulating the
                                             root disk under VxVM.
5      unused           -
6      unused           -
7      volume manager   10                   Used by VxVM for installation after you free
                                             the slice.
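To confirm that a boot disk's slice layout matches a plan like this, the standard Solaris tools can be used; the device name c0t0d0 below is only an example:

# prtvtoc /dev/rdsk/c0t0d0s2     # print the VTOC (slice table) for the boot disk
# df -k /globaldevices           # confirm the /globaldevices file system exists and check its size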

1.1.2 Software

For Solaris Operating System (SPARC), Sun Cluster, Volume Manager, and File System support, consult the operating system vendor. Sun Cluster provides scalable services with Global File Systems (GFS) based around the Proxy File System (PXFS). PXFS makes file access location-transparent and is Sun's implementation of a cluster file system. Currently, Sun does not support anything relating to GFS for RAC. Check with Sun for updates on this status.

1.1.3 Patches
The Sun Cluster nodes might require patches in the following areas:

Solaris Operating System (SPARC) Environment patches
Storage Array interface firmware patches
Storage Array disk drive firmware patches
Veritas Volume Manager patches

Some patches, such as those for Veritas Volume Manager, cannot be installed until after the volume management software installation is completed. Before installing any patches, always do the following:
make sure all cluster nodes have the same patch levels
do not install any firmware-related patches without qualified assistance
always obtain the most current patch information
read all patch README notes carefully.

Specific Solaris Operating System (SPARC) patches may be required, and it is recommended that the latest Solaris Operating System (SPARC) release, Sun's recommended patch clusters, and Sun Cluster updates are applied. Current Sun Cluster updates include release 11/00, update one 07/01, update two 12/01, and update three 05/02. To determine which patches have been installed, enter the following command:

$ showrev -p
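For example, to check whether a specific patch (such as the RSM patch 2454989 mentioned earlier) is already present, the output can be filtered; the patch number here is only illustrative:

$ showrev -p | grep 2454989      # look for one specific patch
$ patchadd -p | wc -l            # alternative listing; rough count of installed patches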

For the latest Sun Cluster 3.0 required patches see SunSolve document id
24617 Sun Cluster 3.0 Early Notifier.
1.2 Installing Sun StorEdge Disk Arrays
Follow the procedures for an initial installation of StorEdge disk enclosures or arrays prior to installing the Solaris Operating System (SPARC) operating environment and Sun Cluster software. Perform this procedure in conjunction with the procedures in the Sun Cluster 3.0 Software Installation Guide and your server hardware manual. Multihost storage in clusters uses the multi-initiator capability of the Small Computer System Interface (SCSI) specification. For conceptual information on multi-initiator capability, see the Sun Cluster 3.0 Concepts document.

1.3 Installing Cluster Interconnect and Public Network Hardware


The following procedures are needed for installing cluster hardware during an initial
cluster installation, before Sun Cluster software is installed. Separate procedures need
to be followed for installing Ethernet-based interconnect hardware, PCI-SCI-based
interconnect hardware, and public network hardware (see Sun's current installation
notes).
If not already installed, install host adapters in your cluster nodes. For the
procedure on installing host adapters, see the documentation that shipped with
your host adapters and node hardware. Install the transport cables (and
optionally, transport junctions), depending on how many nodes are in your
cluster:
A cluster with only two nodes can use a point-to-point connection, requiring
no cluster transport junctions. Use a point-to-point (crossover) Ethernet cable
if you are connecting 100BaseT or TPE ports of a node directly to ports on
another node. Gigabit Ethernet uses the standard fiber optic cable for both
point-to-point and switch configurations.

Note: If you use a transport junction in a two-node cluster, you can add additional nodes to the cluster without bringing the cluster offline to reconfigure the transport path.

A cluster with more than two nodes requires two cluster transport junctions.
These transport junctions are Ethernet-based switches (customer-supplied).
You install the cluster software and configure the interconnect after you have installed
all other hardware.

2. Creating a Cluster
2.1 Sun Cluster Software Installation
The Sun Cluster v3 host system (node) installation process is completed in several major steps. The general process is:
repartition boot disks to meet Sun Cluster v3 requirements
install the Solaris Operating System (SPARC) Environment software
configure the cluster host systems environment
install Solaris 8 Operating System (SPARC) Environment patches
install hardware-related patches
install Sun Cluster v3 on the first cluster node
install Sun Cluster v3 on the remaining nodes
install any Sun Cluster patches and updates
perform postinstallation checks and configuration
You can use two methods to install the Sun Cluster v3 software on the cluster nodes:
interactive installation using the scinstall installation interface
automatic JumpStart installation (requires a pre-existing Solaris Operating System (SPARC) JumpStart server)

This note assumes an interactive installation of Sun Cluster v3 with update 2. The Sun Cluster installation program, scinstall, is located on the Sun Cluster v3 CD in the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory. When you start the program without any options, it prompts you for cluster configuration information that is stored for use later in the process. Although the Sun Cluster software can be installed on all nodes in parallel, you can complete the installation on the first node and then run scinstall on all other nodes in parallel. The additional nodes get some basic information from the first, or sponsoring, node that was configured.

2.2 Form a One-Node Cluster

As root:
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
# ./scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Establish a new cluster using this machine as the
first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this
install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** Establishing a New Cluster ***
...
Do you want to continue (yes/no) [yes]? yes
When prompted whether to continue to install Sun Cluster software packages, type
yes.
>>> Software Package Installation <<<
Installation of the Sun Cluster framework software
packages will take a few minutes to complete.
Is it okay to continue (yes/no) [yes]? yes
** Installing SunCluster 3.0 **
SUNWscr.....done
...Hit ENTER to continue:
After all packages are installed, press Return to continue to the next screen.
Specify the cluster name.
>>> Cluster Name <<<
...

What is the name of the cluster you want to establish? clustername
Run the preinstallation check.
>>> Check <<<
This step runs sccheck(1M) to verify that certain basic
hardware and
software pre-configuration requirements have been met. If
sccheck(1M)
detects potential problems with configuring this machine
as a cluster
node, a list of warnings is printed.
Hit ENTER to continue:
Specify the names of the other nodes that will become part of this cluster.
>>> Cluster Nodes <<<
...
Node name: node2
Node name (Ctrl-D to finish): <Control-D>
This is the complete list of nodes:
...
Is it correct (yes/no) [yes]?
Specify whether to use data encryption standard (DES) authentication.
By default, Sun Cluster software permits a node to connect to the cluster only if the
node is physically connected to the private interconnect and if the node name was
specified. However, the node actually communicates with the sponsoring node over
the public network, since the private interconnect is not yet fully configured. DES
authentication provides an additional level of security at installation time by enabling
the sponsoring node to more reliably authenticate nodes that attempt to contact it to
update the cluster configuration.
If you choose to use DES authentication for additional security, you must configure
all necessary encryption keys before any node can join the cluster. See
the keyserv(1M) and publickey(4) man pages for details.
>>> Authenticating Requests to Add Nodes <<<
...
Do you need to use DES authentication (yes/no) [no]?
Specify the private network address and netmask.
>>> Network Address for the Cluster Transport <<<
...
Is it okay to accept the default network address (yes/no)
[yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
Note: You cannot change the private network address after the cluster is successfully
formed.
Specify whether the cluster uses transport junctions.
If this is a two-node cluster, specify whether you intend to use transport junctions.
>>> Point-to-Point Cables <<<
...

Does this two-node cluster use transport junctions (yes/no) [yes]?
Tip - You can specify that the cluster uses transport junctions, regardless of whether
the nodes are directly connected to each other. If you specify that the cluster uses
transport junctions, you can more easily add new nodes to the cluster in the future.
If this cluster has three or more nodes, you must use transport junctions. Press Return
to continue to the next screen.
>>> Point-to-Point Cables <<<
...
Since this is not a two-node cluster, you will be asked
to configure two transport junctions.
Hit ENTER to continue:
Does this cluster use transport junctions?
If yes, specify names for the transport junctions. You can use the default names
switchN or create your own names.
>>> Cluster Transport Junctions <<<
...
What is the name of the first junction in the cluster
[switch1]?
What is the name of the second junction in the cluster
[switch2]?
Specify the first cluster interconnect transport adapter.
Type help to list all transport adapters available to the node.
>>> Cluster Transport Adapters and Cables <<<
...
What is the name of the first cluster transport adapter
(help) [adapter]?
Name of the junction to which "adapter" is connected
[switch1]?
Use the default port name for the "adapter" connection
(yes/no) [yes]?
Hit ENTER to continue:
Note: If your configuration uses SCI adapters, do not accept the default when you are
prompted for the adapter connection (the port name). Instead, provide the port name
(0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically
cabled. The following example shows the prompts and responses for declining the
default port name and specifying the Dolphin switch port name 0.
Use the default port name for the "adapter" connection
(yes/no) [yes]? no
What is the name of the port you want to use? 0
Choose the second cluster interconnect transport adapter.
Type help to list all transport adapters available to the node.
What is the name of the second cluster transport adapter
(help) [adapter]?
You can configure up to two adapters by using the scinstall command. You can
configure additional adapters after Sun Cluster software is installed by using
the scsetup utility.
If your cluster uses transport junctions, specify the name of the second transport
junction and its port.

Name of the junction to which "adapter" is connected [switch2]?
Use the default port name for the "adapter" connection
(yes/no) [yes]?
Hit ENTER to continue:
Note: If your configuration uses SCI adapters, do not accept the default when you are
prompted for the adapter port name. Instead, provide the port name (0, 1, 2, or 3)
found on the Dolphin switch itself, to which the node is physically cabled. The
following example shows the prompts and responses for declining the default port
name and specifying the Dolphin switch port name 0.
Use the default port name for the "adapter" connection
(yes/no) [yes]? no
What is the name of the port you want to use? 0
Specify the global devices file system name.
>>> Global Devices File System <<<
...
The default is to use /globaldevices.
Is it okay to use this default (yes/no) [yes]?
Do you have any Sun Cluster software patches to install?
>>> Automatic Reboot <<<
...
Do you want scinstall to reboot for you (yes/no) [yes]?
Accept or decline the generated scinstall command. The scinstall command generated
from your input is displayed for confirmation.
>>> Confirmation <<<
Your responses indicate the following options to
scinstall:
scinstall -ik \
...
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?
If you accept the command and continue the installation, scinstall processing
continues. Sun Cluster installation output is logged in
the /var/cluster/logs/install/scinstall.log.pid file, where pid is
the process ID number of the scinstall instance.
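If you want to watch the installation progress from another terminal, this log can be followed as it is written; the pid suffix will differ on your system:

# ls -lt /var/cluster/logs/install/                       # find the newest scinstall log
# tail -f /var/cluster/logs/install/scinstall.log.<pid>   # follow it as scinstall runs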
After scinstall returns you to the Main Menu, you can rerun menu option 1 and
provide different answers. Your previous session answers display as the defaults.
Install any Sun Cluster software patches. See the Sun Cluster 3.0 Release Notes for
the location of patches and installation instructions. Reboot the node to establish the
cluster. If you rebooted the node after you installed patches, you do not need to reboot
the node a second time.
The first node reboot after Sun Cluster software installation forms the cluster and establishes this node as the first-installed node of the cluster. During the final installation process, the scinstall utility performs the following operations on the first cluster node:
installs cluster software packages
disables routing on the node (touch /etc/notrouter)
creates an installation log (/var/cluster/logs/install)
reboots the node
creates the Disk ID devices during the reboot
You can then install additional nodes in the cluster.

2.3 Installing Additional Nodes


After you complete the Sun Cluster software installation on the first node, you can
run scinstall in parallel on all remaining cluster nodes. The additional nodes are
placed in install mode so they do not have a quorum vote. Only the first node has a
quorum vote.
As the installation on each new node completes, each node reboots and comes up in
install mode without a quorum vote. If you reboot the first node at this point, all the
other nodes would panic because they cannot obtain a quorum. You can, however,
reboot the second or later nodes freely. They should come up and join the cluster
without errors.
Cluster nodes remain in install mode until you use the scsetup command to reset
the install mode.
You must perform postinstallation configuration to take the nodes out of install mode
and also to establish quorum disk(s).
Ensure that the first-installed node is successfully installed with Sun Cluster
software and that the cluster is established.
If you are adding a new node to an existing, fully installed cluster, ensure that
you have performed the following tasks.
Prepare the cluster to accept a new node.
Install Solaris Operating System (SPARC) software on the new node.
Become superuser on the cluster node to install.
Start the scinstall utility.
# ./scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Establish a new cluster using this machine as
the first node
* 2) Add this machine as a node in an established
cluster
3) Configure a cluster to be JumpStarted from this
install server
4) Add support for new data services to this cluster
node
5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 2

*** Adding a Node to an Established Cluster ***


...
Do you want to continue (yes/no) [yes]? yes
When prompted whether to continue to install Sun Cluster software packages,
type yes.
>>> Software Installation <<<
Installation of the Sun Cluster framework software
packages will only
take a few minutes to complete.
Is it okay to continue (yes/no) [yes]? yes

** Installing SunCluster 3.0 **


SUNWscr.....done
...Hit ENTER to continue:
Specify the name of any existing cluster node, referred to as the sponsoring
node.
>>> Sponsoring Node <<<
...
What is the name of the sponsoring node? node1
>>> Cluster Name <<<
...
What is the name of the cluster you want to join?
clustername
>>> Check <<<
This step runs sccheck(1M) to verify that certain
basic hardware and software pre-configuration
requirements have been met. If sccheck(1M) detects
potential problems with configuring this machine as
a cluster node, a list of warnings is printed.

Hit ENTER to continue:


Specify whether to use autodiscovery to configure the cluster transport.
>>> Autodiscovery of Cluster Transport <<<
If you are using ethernet adapters as your cluster
transport adapters, autodiscovery is the best method
for configuring the cluster transport.
Do you want to use autodiscovery (yes/no) [yes]?
...
The following connections were discovered:
node1:adapter switch node2:adapter
node1:adapter switch node2:adapter

Is it okay to add these connections to the configuration (yes/no) [yes]?
Specify whether this is a two-node cluster.
>>> Point-to-Point Cables <<<
...
Is this a two-node cluster (yes/no) [yes]?
Does this two-node cluster use transport junctions
(yes/no) [yes]?
Did you specify that the cluster will use transport junctions? If yes, specify
the transport junctions.
>>> Cluster Transport Junctions <<<
...
What is the name of the first junction in the
cluster [switch1]?
What is the name of the second junction in the
cluster [switch2]?
Specify the first cluster interconnect transport adapter.
>>> Cluster Transport Adapters and Cables <<<
...
What is the name of the first cluster transport
adapter (help)?adapter
Specify what the first transport adapter connects to. If the transport adapter
uses a transport junction, specify the name of the junction and its port.
Name of the junction to which "adapter" is connected
[switch1]?
...
Use the default port name for the "adapter"
connection (yes/no) [yes]?
OR
Name of adapter on "node1" to which "adapter" is
connected? adapter
Specify the second cluster interconnect transport adapter.
What is the name of the second cluster transport
adapter (help)?adapter
Specify what the second transport adapter connects to. If the transport adapter
uses a transport junction, specify the name of the junction and its port.
Name of the junction to which "adapter" is connected
[switch2]?
Use the default port name for the "adapter"
connection (yes/no) [yes]?
Hit ENTER to continue:
OR
Name of adapter on "node1" to which "adapter" is
connected? adapter
Specify the global devices file system name.
>>> Global Devices File System <<<

...
The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?


Do you have any Sun Cluster software patches to install? If not:
>>> Automatic Reboot <<<
...
Do you want scinstall to reboot for you (yes/no)
[yes]?
>>> Confirmation <<<
Your responses indicate the following options to
scinstall:

scinstall -i \
...
Are these the options you want to use (yes/no)
[yes]?
Do you want to continue with the install (yes/no)
[yes]?
Install any Sun Cluster software patches.
Reboot the node to establish the cluster unless you rebooted the node after
you installed patches.
Do not reboot or shut down the first-installed node while any other nodes are
being installed, even if you use another node in the cluster as the sponsoring
node. Until quorum votes are assigned to the cluster nodes and cluster install
mode is disabled, the first-installed node, which established the cluster, is the
only node that has a quorum vote. If the cluster is still in install mode, you will
cause a system panic because of lost quorum if you reboot or shut down the
first-installed node. Cluster nodes remain in install mode until the first time
you run the scsetup(1M) command, during the procedure PostInstallation
Configuration.

2.4 Post Installation Configuration

Post-installation can include a number of tasks such as installing a volume manager and/or database software. There are other tasks that must be completed first:
taking the cluster nodes out of install mode
defining quorum disks
Before a new cluster can operate normally, the install mode attribute must be reset on all nodes. You can do this in a single step using the scsetup utility. This utility is a menu-driven interface that prompts for quorum device information the first time it is run on a new cluster installation. Once the quorum device is defined, the install mode attribute is reset on all nodes. Use the scconf command as follows to disable or enable install mode:
scconf -c -q reset (reset install mode)
scconf -c -q installmode (enable install mode)

# /usr/cluster/bin/scsetup
>>> Initial Cluster Setup <<<
This program has detected that the cluster "installmode"
attribute is set ...
Please do not proceed if any additional nodes have yet to
join the cluster.
Is it okay to continue (yes/no) [yes]? yes
Which global device do you want to use (d<N>) ? dx
Is it okay to proceed with the update (yes/no) [yes]? yes
scconf -a -q globaldev=dx
Do you want to add another quorum disk (yes/no) ? no
Is it okay to reset "installmode" (yes/no) [yes] ? yes
scconf -c -q reset
Cluster initialization is complete.
Although it appears that the scsetup utility uses two simple scconf commands to
define the quorum device and reset install mode, the process is more complex.
The scsetup utility performs numerous verification checks for you. It is
recommended that you do not use scconf manually to perform these functions.

2.5 Post-Installation Verification

When you have completed the Sun Cluster software installation on all nodes, verify the following information:
DID device configuration
General CCR configuration information
Each attached system sees the same DID devices but might use a different logical path to access them. You can verify the DID device configuration with the scdidadm command. The following scdidadm output demonstrates how a DID device can have a different logical path from each connected node.
# scdidadm -L
The list on each node should be the same. Output resembles the following.
1 phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
2 phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3 phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3 phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
...
The scstat utility displays the current status of various cluster components. You
can use it to display the following information:
the cluster name and node names
names and status of cluster members
status of resource groups and related resources
cluster interconnect status
The following scstat command option displays the cluster membership and
quorum vote information.
# /usr/cluster/bin/scstat -q

Cluster configuration information is stored in the CCR on each node. You should
verify that the basic CCR values are correct. The scconf -p command displays
general cluster information along with detailed information about each node in the
cluster.
$ /usr/cluster/bin/scconf -p

2.6 Basic Cluster Administration


Checking Status Using the scstat Command
Without any options, the scstat command displays general information for all
cluster nodes. You can use options to restrict the status information to a particular type
of information and/or to a particular node.
The following command displays the cluster transport status for a single node with gigabit ethernet:
$ /usr/cluster/bin/scstat -W -h <node1>
-- Cluster Transport Paths --
                    Endpoint       Endpoint       Status
                    --------       --------       ------
  Transport path:   <node2>:ge1    <node1>:ge1    Path online
  Transport path:   <node2>:ge0    <node1>:ge0    Path online
Checking Status Using the sccheck Command
The sccheck command verifies that all of the basic global device structure is correct on all nodes. Run the sccheck command after installing and configuring a cluster, as well as after performing any administration procedures that might result in changes to the devices, volume manager, or Sun Cluster configuration.
You can run the command without any options or direct it to a single node. You can run it from any active cluster member. There is no output from the command unless errors are encountered. Typical sccheck command variations follow (as root):
# /usr/cluster/bin/sccheck
# /usr/cluster/bin/sccheck -h <node 1>
Checking Status Using the scinstall Command
During the Sun Cluster software installation, the scinstall utility is copied to the /usr/cluster/bin directory. You can run the scinstall utility with options that display the Sun Cluster revision and/or the names and revisions of installed packages. The displayed information is for the local node only. A typical scinstall status output follows:
$ /usr/cluster/bin/scinstall -pv
SunCluster 3.0
SUNWscr: 3.0.0,REV=2000.10.01.01.00
SUNWscdev: 3.0.0,REV=2000.10.01.01.00
SUNWscu: 3.0.0,REV=2000.10.01.01.00
SUNWscman: 3.0.0,REV=2000.10.01.01.00
SUNWscsal: 3.0.0,REV=2000.10.01.01.00
SUNWscsam: 3.0.0,REV=2000.10.01.01.00
SUNWscvm: 3.0.0,REV=2000.10.01.01.00
SUNWmdm: 4.2.1,REV=2000.08.08.10.01

Starting & Stopping Cluster Nodes


The Sun Cluster software starts automatically during a system boot operation. Use
the init command to shut down a single node. You use
the scshutdown command to shut down all nodes in the cluster.
Before shutting down a node, you should switch resource groups to the next preferred node and then run init 0 on the node.
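A minimal sketch of shutting down a single node this way, assuming the node to be shut down is named node1 (evacuate its resource groups first):

# scswitch -S -h node1     # move all resource and device groups off node1
# shutdown -g0 -y -i0      # then take node1 down cleanly (equivalent to init 0)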
You can shut down the entire cluster with the scshutdown command from any active cluster node. A typical cluster shutdown example follows:
# /usr/cluster/bin/scshutdown -y -g 30
Broadcast Message from root on <node 1> ...
The cluster <cluster> will be shutdown in 30 seconds
....
The system is down.
syncing file systems... done
Program terminated
ok
Log Files for Sun Cluster
The log files for Sun Cluster are stored
in /var/cluster/logs and /var/cluster/logs/install for
installation. Both Solaris Operating System (SPARC) and Sun Cluster software write
error messages to the /var/adm/messages file, which over time can fill the /var file
system. If a cluster node's /var file system fills up, Sun Cluster might not be able
to restart on that node. Additionally, you might not be able to log in to the node.
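Because a full /var can prevent Sun Cluster from restarting, it is worth checking the space on each node periodically; a simple check might be:

# df -k /var                                    # free space in the /var file system
# du -sk /var/adm/messages /var/cluster/logs    # how much the main log locations consume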

2.7 Installing a Volume Manager

It is now necessary to install volume management software. Sun Cluster v3 supports two products: Sun's Solaris Volume Manager software (Solstice DiskSuite software) and VERITAS Volume Manager (VxVM). VxVM v3.0.4+ (32-bit RAC) or v3.1.1+ (64-bit RAC) is needed to provide shared disk access and distributed mirroring.
Although Sun's Solstice DiskSuite (SDS), integrated into Solaris Operating System (SPARC) 9 onwards as Solaris Volume Manager (SVM), is supported by Sun Cluster v3, neither SDS nor SVM supports cluster-wide volume management; they work only on a per-node basis. Hence these products cannot be used for RAC.

3.0 Preparing for the installation of RAC

The Real Application Clusters installation process includes four major tasks.
1. Install the operating system-dependent (OSD) clusterware.
2. Configure the shared disks and UNIX preinstallation tasks.
3. Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
4. Create and configure your database.
3.1 Install the operating system-dependent (OSD) clusterware

You must configure RAC to use the shared disk architecture of Sun Cluster. In this
configuration, a single database is shared among multiple instances of RAC that
access the database concurrently. Conflicting access to the same data is controlled by
means of the Oracle UNIX Distributed Lock Manager (UDLM). If a process or a node
crashes, the UDLM is reconfigured to recover from the failure. In the event of a node
failure in an RAC environment, you can configure Oracle clients to reconnect to the
surviving server without the use of the IP failover used by Sun Cluster failover data
services.
The Sun Cluster install CDs contain the required SC udlm package:
Package SUNWudlm: Sun Cluster Support for Oracle Parallel Server UDLM, (opt), on Sun Cluster v3
To install, use the pkgadd command:
# pkgadd -d . SUNWudlm
Once installed, Oracle's interface with this, the Oracle UDLM, can be installed.
To install Sun Cluster Support for RAC with VxVM, the following Sun Cluster 3 Agents data services packages need to be installed as superuser (see Sun's Sun Cluster 3 Data Services Installation and Configuration Guide):
# pkgadd -d . SUNWscucm SUNWudlmr SUNWcvmr SUNWcvm
(SUNWudlm will also need to be included unless already installed from the step above.)
Before rebooting the nodes, you must ensure that you have correctly installed and
configured the Oracle UDLM software.
The Oracle Unix Distributed Lock Manager (ORCLudlm also known as the Oracle
Node Monitor) must be installed. This may be referred to in the Oracle documentation
as the "Parallel Server Patch". To check version information on any previously
installed dlm package:
$ pkginfo -l ORCLudlm |grep PSTAMP
OR
$ pkginfo -l ORCLudlm |grep VERSION
You must apply the following steps to all cluster nodes. The Oracle udlm can be found
on Disk1 of the Oracle9i server installation CD-ROM, in the
directory opspatch or racpatch in later versions. A version of the Oracle udlm
may also be found on the Sun Cluster CD set but check the Oracle release for the
latest applicable version. The informational
files README.udlm & release_notes.334x are located in this directory with
version and install information. This is the Oracle udlm package for 7.X.X or later on
Solaris Operating System (SPARC) and requires any previous versions to be removed
prior to installation.
Shutdown all existing clients of Oracle Unix Distributed Lock Manager
(including all Oracle Parallel Server/RAC instances).
Become super user.
Reboot the cluster node in non-cluster mode (replace <node name> with your cluster node name):
# scswitch -S -h <node name>
# shutdown -g 0 -y
... wait for the ok prompt
ok boot -x

Unpack the file ORCLudlm.tar.Z into a directory:

cd <CD-ROM mount>/opspatch    # (or racpatch in later versions)
cp ORCLudlm.tar.Z /tmp
cd /tmp
uncompress ORCLudlm.tar.Z
tar xvf ORCLudlm.tar

Install the patch by adding the package as root:

cd /tmp
pkgadd -d . ORCLudlm

The udlm configuration files in SC2.X and SC3.0 are the following:
SC2.X: /etc/opt/SUNWcluster/conf/<default_cluster_name>.ora_cdb
SC3.0: /etc/opt/SUNWcluster/conf/udlm.conf
The udlm log files in SC2.X and SC3.0 are the following:
SC2.X: /var/opt/SUNWcluster/dlm_<node_name>/logs/dlm.log
SC3.0: /var/cluster/ucmm/dlm_<node_name>/logs/dlm.log
pkgadd will copy a template file, <configuration_file_name>.template,
to /etc/opt/SUNWcluster/conf.

Now that udlm (also referred to as the "Cluster Membership Monitor") is installed, you can start it up by rebooting the cluster node in cluster mode:

# shutdown -g 0 -y -i 6

3.2 Configure the shared disks and UNIX preinstallation tasks

3.2.1 Configure the shared disks
Real Application Clusters requires that each instance be able to access a set of unformatted devices on a shared disk subsystem. These shared disks are also referred to as raw devices. If your platform supports an Oracle-certified cluster file system, however, you can store the files that Real Application Clusters requires directly on the cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to
update the control file, server parameter file, each datafile, and each redo log file. All
instances in the cluster share these files.
The Oracle instances in the RAC configuration write information to raw devices
defined for:
The control file
The spfile.ora
Each datafile
Each ONLINE redo log file
Server Manager (SRVM) configuration information
It is therefore necessary to define raw devices for each of these categories of file. This
normally means striping data across a large number of disks in a RAID 0+1
configuration.
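As an illustration only (not part of the Oracle or Sun procedure), a shared VxVM disk group and one mirrored-striped raw volume might be created roughly as follows; the disk group name oracle_dg, the disk names, and the volume size are placeholders, and the disks are assumed to have already been brought under VxVM control:

# vxdg -s init oracle_dg disk01=c1t1d0 disk02=c1t2d0     # -s creates a cluster-shared disk group
# vxassist -g oracle_dg make db_name_raw_system_400m 400m layout=mirror-stripe
# vxprint -g oracle_dg                                   # verify the volume was created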
The Oracle Database Configuration Assistant (DBCA) will create a seed database
expecting the following configuration:-

Raw Volume                        File Size  Sample File Name
SYSTEM tablespace                  400 Mb    db_name_raw_system_400m
USERS tablespace                   120 Mb    db_name_raw_users_120m
TEMP tablespace                    100 Mb    db_name_raw_temp_100m
UNDOTBS tablespace per instance    312 Mb    db_name_raw_undotbsx_312m
CWMLITE tablespace                 100 Mb    db_name_raw_cwmlite_100m
EXAMPLE                            160 Mb    db_name_raw_example_160m
OEMREPO                             20 Mb    db_name_raw_oemrepo_20m
INDX tablespace                     70 Mb    db_name_raw_indx_70m
TOOLS tablespace                    12 Mb    db_name_raw_tools_12m
DRSYS tablespace                    90 Mb    db_name_raw_drsys_90m
First control file                 110 Mb    db_name_raw_controlfile1_110m
Second control file                110 Mb    db_name_raw_controlfile2_110m
Two ONLINE redo log files          120 Mb    db_name_thread_lognumber_120m
per instance (x2)
spfile.ora                           5 Mb    db_name_raw_spfile_5m
srvmconfig                         100 Mb    db_name_raw_srvmconf_100m
Note: Automatic Undo Management requires an undo tablespace per instance; therefore, you would require a minimum of 2 such tablespaces as described above. By following the naming convention described in the table above, raw partitions are identified with the database and the raw volume type (the data contained in the raw volume). Raw volume size is also identified using this method.
Note: In the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread.
On the node from which you run the Oracle Universal Installer, create an ASCII file
identifying the raw volume objects as shown above. The DBCA requires that these
objects exist during installation and database creation. When creating the ASCII file
content for the objects, name them using the format:
database_object=raw_device_file_path
When you create the ASCII file, separate the database objects from the paths with equals (=) signs as shown in the example below:
system=/dev/vx/rdsk/oracle_dg/db_name_raw_system_400m
spfile=/dev/vx/rdsk/oracle_dg/db_name_raw_spfile_5m
users=/dev/vx/rdsk/oracle_dg/db_name_raw_users_120m
temp=/dev/vx/rdsk/oracle_dg/db_name_raw_temp_100m
undotbs1=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs1_312m
undotbs2=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs2_312m
example=/dev/vx/rdsk/oracle_dg/db_name_raw_example_160m
cwmlite=/dev/vx/rdsk/oracle_dg/db_name_raw_cwmlite_100m
indx=/dev/vx/rdsk/oracle_dg/db_name_raw_indx_70m
tools=/dev/vx/rdsk/oracle_dg/db_name_raw_tools_12m
drsys=/dev/vx/rdsk/oracle_dg/db_name_raw_drsys_90m
control1=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile1_110m
control2=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile2_110m
redo1_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log11_120m
redo1_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log12_120m
redo2_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log21_120m
redo2_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log22_120m

You must specify that Oracle should use this file to determine the raw device volume
names by setting the following environment variable where filename is the name of
the ASCII file that contains the entries shown in the example above:
setenv DBCA_RAW_CONFIG filename
or
export DBCA_RAW_CONFIG=filename
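The raw volumes themselves must also be accessible to the oracle account. A sketch of one way to do this with VxVM, assuming the disk group and volume names used in the example above:

# vxedit -g oracle_dg set user=oracle group=dba mode=660 db_name_raw_system_400m
$ ls -lL /dev/vx/rdsk/oracle_dg      # confirm the oracle user can read and write the devices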

3.2.2 UNIX Preinstallation Steps


After configuring the raw volumes, perform the following steps prior to installation as
root user:
Add the Oracle user
Make sure you have an osdba group defined in the /etc/group file on all nodes of your cluster. If you designate an osdba group name and group number, and an osoper group, during installation, these group names must be identical on all nodes of your UNIX cluster that will be part of the Real Application Clusters database. The default UNIX group name for the osdba and osoper groups is dba. A typical entry would therefore look like the following:
dba::101:oracle
oinstall::102:root,oracle
Create an oracle account on each node so that the account:
Is a member of the osdba group
Is used only to install and update Oracle software
Has write permissions on remote directories

A typical command would look like the following:

# useradd -c "Oracle software owner" -G dba,oinstall -u 101 -m \
  -d /export/home/oracle -s /bin/ksh oracle

Create a mount point directory on each node to serve as the top of your
Oracle software directory structure so that:
The name of the mount point on each node is identical to that on the
initial node
The oracle account has read, write, and execute privileges
On the node from which you will run the Oracle Universal Installer, set up user equivalence by adding entries for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account, or to the /etc/hosts.equiv file.
As the oracle account user, check for user equivalence for the oracle account by performing a remote login (rlogin) to each node in the cluster.
If you are prompted for a password, you have not given the oracle account the same attributes on all nodes. You must correct this because the Oracle Universal Installer cannot use the rcp command to copy Oracle products to the remote node's directories without user equivalence.
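A minimal sketch of what this might look like, assuming two nodes named node1 and node2 and the oracle home directory created earlier:

$ cat >> /export/home/oracle/.rhosts <<EOF
node1 oracle
node2 oracle
EOF
$ rlogin -l oracle node2     # should open a shell without prompting for a password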
System Kernel Parameters
Verify operating system kernel parameters are set to appropriate levels: The
file /etc/system is read by the operating system kernel at boot time. Check this
file for appropriate values for the following parameters.

Kernel Parameter  Setting      Purpose
SHMMAX            4294967295   Maximum allowable size of one shared memory segment (4 Gb).
SHMMIN            1            Minimum allowable size of a single shared memory segment.
SHMMNI            100          Maximum number of shared memory segments in the entire system.
SHMSEG            10           Maximum number of shared memory segments one process can attach.
SEMMNI            1024         Maximum number of semaphore sets in the entire system.
SEMMSL            100          Minimum recommended value. SEMMSL should be 10 plus the largest
                               PROCESSES parameter of any Oracle database on the system.
SEMMNS            1024         Maximum semaphores on the system. This setting is a minimum
                               recommended value. SEMMNS should be set to the sum of the
                               PROCESSES parameter for each Oracle database, add the largest
                               one twice, plus an additional 10 for each database.
SEMOPM            100          Maximum number of operations per semop call.
SEMVMX            32767        Maximum value of a semaphore.
(swap space)      750 MB       Two to four times your system's physical memory size.
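For reference, a sketch of the /etc/system entries that would implement the settings in the table above (values are the minimums from the table; a reboot is required for changes to take effect, and the exact tunable names should be confirmed against your Solaris release):

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767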
Establish system environment variables
Set a local bin directory in the user's PATH, such as /usr/local/bin,
or /opt/bin. It is necessary to have execute permissions on this directory.
Set the DISPLAY variable to point to the IP address or name, X server, and screen of the system from which you will run the OUI.
Set a temporary directory path for TMPDIR with at least 20 Mb of free space
to which the OUI has write permission.
Establish Oracle environment variables: Set the following Oracle environment variables:

Environment Variable  Suggested value
ORACLE_BASE           e.g. /u01/app/oracle
ORACLE_HOME           e.g. /u01/app/oracle/product/9201
ORACLE_TERM           xterm
NLS_LANG              AMERICAN_AMERICA.UTF8, for example
ORA_NLS33             $ORACLE_HOME/ocommon/nls/admin/data
PATH                  Should contain $ORACLE_HOME/bin
CLASSPATH             $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
Create the directory /var/opt/oracle and set ownership to the oracle
user.
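A minimal sketch of how the oracle account's profile might set these (paths and values follow the suggestions above and should be adjusted to your own layout):

ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/9201
ORACLE_TERM=xterm
NLS_LANG=AMERICAN_AMERICA.UTF8
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
TMPDIR=/tmp
export ORACLE_BASE ORACLE_HOME ORACLE_TERM NLS_LANG ORA_NLS33 PATH TMPDIR

As root, the /var/opt/oracle directory mentioned above can be created with:

# mkdir -p /var/opt/oracle
# chown oracle:dba /var/opt/oracle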

Verify the existence of the file /opt/SUNWcluster/bin/lkmgr. This is used by the OUI to indicate that the installation is being performed on a cluster.
Note: There is a script InstallPrep.sh available which may be downloaded and run prior to the installation of Oracle Real Application Clusters. This script verifies that the system is configured correctly according to the Installation Guide. The output of the script will report any further tasks that need to be performed before successfully installing the Oracle 9.x Data Server (RDBMS). This script performs the following verifications:
ORACLE_HOME Directory Verification
UNIX User/umask Verification
UNIX Group Verification
Memory/Swap Verification
TMP Space Verification
Real Application Cluster Option Verification
Unix Kernel Verification
. ./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle
Software? y or n
y
Enter the unix group that will be used during the
installation
Default: dba
dba
Enter Location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0.1
Your Operating System is SunOS
Gathering information... Please wait
Checking unix user ...
user test passed
Checking unix umask ...
umask test passed
Checking unix group ...
Unix Group test passed
Checking Memory & Swap...
Memory test passed
/tmp test passed
Checking for a cluster...
SunOS Cluster test
3.x has been detected
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters... Please wait
Running Kernel Parameter Report...
Check the report for Kernel parameter verification

Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before
attempting to install the Oracle Database Software

3.3 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer (OUI) to install the Oracle Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on multiple CD-ROM disks and the Real Application Clusters software is part of this distribution. During the installation process it is necessary to switch between the CD-ROMs; OUI will manage the switching between CDs.
To install the Oracle software, perform the following:
Login as the oracle user
$ /<cdrom_mount_point>/runInstaller
At the OUI Welcome screen, click Next.
A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. The Oracle Inventory definition can be found in the file /var/opt/oracle/oraInst.loc. Click OK.
Verify the UNIX group name of the user who controls the installation of the Oracle9i software. If an instruction to run /tmp/orainstRoot.sh appears, the pre-installation steps were not completed successfully. Typically, the /var/opt/oracle directory does not exist or is not writeable by oracle. Run /tmp/orainstRoot.sh to correct this, forcing Oracle Inventory files, and others, to be written to the ORACLE_HOME directory. Once again, this screen only appears the first time Oracle9i products are installed on the system. Click Next.
The File Location window will appear. Do NOT change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next.
Select the Products to install. In this example, select the Oracle9i Server then
click Next.
Select the installation type. Choose the Enterprise Edition option. The
selection on this screen refers to the installation operation, not the database
configuration. The next screen allows for a customized database configuration
to be chosen. Click Next.
Select the configuration type. In this example you choose the Advanced
Configuration as this option provides a database that you can customize, and
configures the selected server products. Select Customized and click Next.
Select the other nodes on to which the Oracle RDBMS software will be
installed. It is not necessary to select the node on which the OUI is currently
running. Click Next.
Identify the raw partition into which the Oracle9i Real Application Clusters (RAC) configuration information will be written. It is recommended that this raw partition is a minimum of 100 MB in size.
An option to Upgrade or Migrate an existing database is presented.
Do NOT select the radio button. The Oracle Migration utility is not able to
upgrade a RAC database, and will error if selected to do so.

The Summary screen will be presented. Confirm that the RAC database software will be installed and then click Install.
Once Install is selected, the OUI will install the Oracle RAC software on to the local node, and then copy the software to the other nodes selected earlier. This will take some time. During the installation process, the OUI does not display messages indicating that components are being installed on other nodes - I/O activity may be the only indication that the process is continuing.

3.4 Create a RAC Database using the Oracle Database Configuration Assistant

The Oracle Database Configuration Assistant (DBCA) will create a database for you. The DBCA creates your database using the optimal flexible architecture (OFA). This means the DBCA creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:
Verify that you correctly configured the shared disks for each tablespace (for non-cluster file system platforms)
Create the database
Configure the Oracle network services
Start the database instances and listeners
Oracle Corporation recommends that you use the DBCA to create your database. This
is because the DBCA preconfigured databases optimize your environment to take
advantage of Oracle9i features such as the server parameter file and automatic undo
management. The DBCA also enables you to define arbitrary tablespaces as part of
the database creation process. So even if you have datafile requirements that differ
from those offered in one of the DBCA templates, use the DBCA. You can also
execute user-specified scripts as part of the database creation process.
The DBCA and the Oracle Net Configuration Assistant (NETCA) also accurately
configure your Real Application Clusters environment for various Oracle high
availability features and cluster administration tools.
Note: Prior to running the DBCA it may be necessary to run the NETCA tool or to manually set up your network files. To run the NETCA tool, execute the command netca from the $ORACLE_HOME/bin directory. This will configure the necessary listener names and protocol addresses, client naming methods, Net service names, and Directory server usage. Also, it is recommended that the Global Services Daemon (GSD) is started on all nodes prior to running DBCA. To run the GSD, execute the command gsd from the $ORACLE_HOME/bin directory.
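A sketch of this preliminary step on each node (for 9.0.1; from 9.2 onwards gsdctl start is used instead, as shown in section 4.0):

$ cd $ORACLE_HOME/bin
$ ./gsd          # start the Global Services Daemon
$ ./netca        # configure listeners, naming methods, and net service names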
DBCA will launch as part of the installation process, but can be run manually by executing the command dbca from the $ORACLE_HOME/bin directory on UNIX platforms. The RAC Welcome Page displays. Choose the Oracle Cluster Database option and select Next.
The Operations page is displayed. Choose the option Create a Database and click Next.
The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. If nodes are missing from the Node Selection then perform clusterware diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output. Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not properly installed. Resolve the problem and then restart the DBCA.
The Database Templates page is displayed. The templates other than New
Database include datafiles. Choose New Database and then click Next.
The Show Details button provides information on the database template
selected.
DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com, while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance number. For example, MYDB would become MYDB1, MYDB2 for instances 1 and 2 respectively.
The Database Options page is displayed. Select the options you wish to
configure and then choose Next. Note: If you did not choose New Database
from the Database Template page, you will not see this screen.
The Additional database Configurations button displays additional database
features. Make sure both are checked and click OK.
Select the connection options desired from the Database Connection Options
page. Note: If you did not choose New Database from the Database Template
page, you will not see this screen. Click Next.
DBCA now displays the Initialization Parameters page. This page comprises a number of Tab fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the Initialization Parameters filename and location. Then click Next.
The option Create persistent initialization parameter file is selected by
default. If you have a cluster file system, then enter a file system name,
otherwise a raw device name for the location of the server parameter file
(spfile) must be entered. Then click Next.
The button File Location Variables displays variable information.
Click OK.
The button All Initialization Parameters displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates, through the Included (Y/N) check box, whether each is to be included in the spfile to be created. Instance-specific parameters have an instance value in the instance column. Complete entries in the All Initialization Parameters page and select Close. Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
DBCA now displays the Database Storage window. This page allows you to enter file names for each tablespace in your database.
The file names are displayed in the Datafiles folder, but are entered by selecting the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any names displayed here can be changed. A configuration file can be used (pointed to by the environment variable DBCA_RAW_CONFIG). Complete the database storage information and click Next.
The Database Creation Options page is displayed. Ensure that the
option Create Database is checked and click Finish.
The DBCA Summary window is displayed. Review this information and
then click OK.
Once the Summary screen is closed using the OK option, DBCA begins to
create the database according to the values specified.
A new database now exists. It can be accessed via Oracle SQL*PLUS or other
applications designed to work with an Oracle RAC database.

4.0 Administering Real Application Clusters Instances


Oracle Corporation recommends that you use SRVCTL to administer your Real
Application Clusters database environment. SRVCTL manages configuration
information that is used by several Oracle tools. For example, Oracle Enterprise
Manager and the Intelligent Agent use the configuration information that SRVCTL
generates to discover and monitor nodes in your cluster. Before using SRVCTL,
ensure that your Global Services Daemon (GSD) is running after you configure your
database. To use SRVCTL, you must have already created the configuration
information for the database that you want to administer. You must have done this
either by using the Oracle Database Configuration Assistant (DBCA), or by using
the srvctl add command as described below.
If this is the first Oracle9i database created on this cluster, then you must initialize the clusterwide SRVM configuration. Firstly, create or edit the file /var/opt/oracle/srvConfig.loc and add the entry srvconfig_loc=path_name, where the path name is a small cluster-shared raw volume, e.g.
$ vi /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/vx/rdsk/datadg/rac_srvconfig_10m
Then execute the following command to initialize this raw volume (Note: This cannot be run while the gsd is running. Prior to 9i Release 2 you will need to kill the .../jre/1.1.8/bin/... process to stop the gsd from running. From 9i Release 2 use the gsdctl stop command):
$ srvconfig -init
The first time you use the SRVCTL Utility to create the configuration, start the Global
Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster's
configuration information. Then execute the srvctl add command so that Real
Application Clusters knows what instances belong to your cluster using the following
syntax:-

For Oracle RAC v9.0.1:-

$ gsd
Successfully started the daemon on the local node.
$ srvctl add db -p db_name -o oracle_home
Then for each instance enter the command:

$ srvctl add instance -p db_name -i sid -n node
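For example, using the database, node, and instance names that appear in the output below (the ORACLE_HOME path is illustrative only):

$ srvctl add db -p racdb1 -o /u01/app/oracle/product/9.2.0.1
$ srvctl add instance -p racdb1 -i racinst1 -n racnode1
$ srvctl add instance -p racdb1 -i racinst2 -n racnode2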


To display the configuration details for, for example, databases racdb1/2, on nodes racnode1/2 with instances racinst1/2, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1
Examples of starting and stopping RAC follow:
$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1
$ srvctl stop -p racdb2
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1
$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2
$ srvctl stop -p racdb1 -s inst
PRKO-2035 : Instance is already stopped on node: racnode2
Instance successfully stopped on node: racnode1

For Oracle RAC v9.2.0+:
$ gsdctl start
Successfully started the daemon on the local node.
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]
Then for each instance enter the command:
$ srvctl add instance -d db_name -i sid -n node
To display the configuration details for, for example, databases racdb1/2, on nodes racnode1/2 with instances racinst1/2, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1

$ srvctl status database -d racdb1


Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
Examples of starting and stopping RAC follow:
$ srvctl start database -d racdb2
$ srvctl stop database -d racdb2
$ srvctl stop instance -d racdb1 -i racinst2
$ srvctl start instance -d racdb1 -i racinst2
$ gsdctl stat
GSD is running on local node
$ gsdctl stop
For further information on srvctl and gsdctl see the Oracle9i Real Application
Clusters Administration manual.
