Each node has its own redo log files and UNDO tablespace, but the other nodes must be able
to access them (and the shared control file) in order to recover that node in the event of a
system failure.
The biggest difference between Oracle RAC and OPS is the addition of Cache Fusion. With
OPS, a request for data from one node to another required the data to be written to disk first;
only then could the requesting node read that data. With Cache Fusion, data is passed along a
high-speed interconnect using a sophisticated locking algorithm.
Not all clustering solutions use shared storage. Some vendors use an approach known as a
"Federated Cluster", in which data is spread across several machines rather than shared by all.
With Oracle RAC 10g, however, multiple nodes use the same set of disks for storing data.
With Oracle RAC 10g, the data files, redo log files, control files, and archived log files reside
on shared storage on raw-disk devices, a NAS, ASM, or on a clustered file system. Oracle's
approach to clustering leverages the collective processing power of all the nodes in the cluster
and at the same time provides failover security.
Pre-configured Oracle RAC 10g solutions are available from vendors such as Dell, IBM, and
HP for production environments. This article, however, focuses on putting together your own
Oracle RAC 10g environment for development and testing by using Solaris servers and a
low-cost shared disk solution: iSCSI.
Oracle Database Files

RAC Node Name  Instance Name  Database Name  $ORACLE_BASE     File System / Volume Manager for DB Files
solaris1       orcl1          orcl           /u01/app/oracle  ASM
Download the Solaris 10 appliance from:
http://developers.sun.com/solaris/downloads/solaris_apps/index.jsp
Configure Virtual Machine with Network and Virtual Disks
To create and configure the first virtual machine, you will add virtual hardware devices such
as disks and processors. Before proceeding with the install, create the Windows folders to
house the virtual machines and the shared storage.
D:\>mkdir vm\rac\rac1
D:\>mkdir vm\rac\rac2
D:\>mkdir vm\rac\sharedstorage
Double-click on the VMware Server icon on your desktop to bring up the application:
1. Press CTRL-N to create a new virtual machine.
2. New Virtual Machine Wizard: Click on Next.
3. Select the Appropriate Configuration:
   a. Virtual machine configuration: Select Custom.
4. Select a Guest Operating System:
   a. Guest operating system: Select Linux.
   b. Version: Select Red Hat Enterprise Linux 4.
5. Name the Virtual Machine:
   a. Virtual machine name: Enter “rac1.”
   b. Location: Enter “d:\vm\rac\rac1.”
6. Set Access Rights:
   a. Access rights: Select Make this virtual machine private.
7. Startup/Shutdown Options:
   a. Virtual machine account: Select User that powers on the virtual machine.
8. Processor Configuration:
   a. Processors: Select One.
9. Memory for the Virtual Machine:
   a. Memory: Select 1024MB.
10. Network Type:
    a. Network connection: Select Use bridged networking.
11. Select I/O Adapter Types:
    a. I/O adapter types: Select LSI Logic.
12. Select a Disk:
    a. Disk: Select Create a new virtual disk.
13. Select a Disk Type:
    a. Virtual Disk Type: Select SCSI (Recommended).
14. Specify Disk Capacity:
    a. Disk capacity: Enter “0.2GB.”
    b. Deselect Allocate all disk space now. To save space, you do not have to allocate all
       the disk space now.
15. Specify Disk File:
    a. Disk file: Enter “ocr.vmdk”
    b. Click on Finish.
Repeat steps 16 to 24 to create four virtual SCSI hard disks - ocfs2disk.vmdk (512MB),
asmdisk1.vmdk (3GB), asmdisk2.vmdk (3GB), and asmdisk3.vmdk (2GB).
16. VMware Server Console: Click on Edit virtual machine settings.
17. Virtual Machine Settings: Click on Add.
18. Add Hardware Wizard: Click on Next.
19. Hardware Type:
    a. Hardware types: Select Hard Disk.
20. Select a Disk:
    a. Disk: Select Create a new virtual disk.
21. Select a Disk Type:
    a. Virtual Disk Type: Select SCSI (Recommended).
22. Specify Disk Capacity:
    a. Disk capacity: Enter “0.5GB.”
    b. Select Allocate all disk space now. You do not have to allocate all the disk space if
       you want to save space; for performance reasons, however, you will pre-allocate all of
       the disk space for each of the virtual shared disks. If the shared disks were to grow
       rapidly, especially during Oracle database creation or when the database is under
       heavy DML activity, the virtual machines may hang intermittently for brief periods
       or, on rare occasions, crash.
23. Specify Disk File:
    a. Disk file: Enter “”
    b. Click on Advanced.
24. Add Hardware Wizard:
    a. Virtual device node: Select SCSI 1:0.
    b. Mode: Select Independent, Persistent for all shared disks.
    c. Click on Finish.
• ocr.vmdk -> 0.2 GB
• vot.vmdk -> 0.2 GB
• asm1.vmdk -> 2 GB
• asm2.vmdk -> 2 GB
• asm3.vmdk -> 2 GB
Start VMware.
Network
• Run sys-unconfig (answer y at the prompt); after the system reboots, configure the network
for the two Ethernet interfaces with their IPs and gateways:
vmxnet0 -> hostname: solaris1, IP: 172.16.16.27, netmask: 255.255.255.0, gateway: 172.16.16.1
vmxnet1 -> hostname: solaris1-priv, IP: 192.168.2.111, netmask: 255.255.255.0
After adding the virtual disks, restart VMware.
Tell Solaris to look for new devices when booting
Select the Solaris entry in the GRUB menu that you want to boot and enter e to edit it.
Select the "kernel /platform" line and enter e again to edit it.
Add a space followed by -r to the end of the kernel line:
kernel /platform/i86pc/multiboot -r
Press the Enter key to accept the change, then press b to boot.
OPTIONAL: Another way to force Solaris to rebuild its device tree while the system is
running is to create an empty file and reboot Solaris:
# touch /reconfigure
Formatting Disks
# format
Searching for disks...done
FORMAT MENU
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
format> partition
Please run fdisk first
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
PARTITION MENU:
0 - change '0' partition
1 - change '1' partition
2 - change '2' partition
3 - change '3' partition
4 - change '4' partition
5 - change '5' partition
6 - change '6' partition
7 - change '7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write the partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 156 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> disk 2
selecting c2t2d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 60 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> disk 3
selecting c2t3d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 156 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> disk 4
selecting c2t4d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 11472 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> disk 5
selecting c2t5d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 11472 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> disk 6
selecting c2t6d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 12745 + 2 (reserved cylinders)
partition> 1
Part Tag Flag Cylinders Size Blocks
1 unassigned wm 0 0 (0/0/0) 0
partition> quit
format> quit
We will create the dba group and the oracle user account along with all appropriate
directories.
# mkdir -p /u01/app
# groupadd -g 115 dba
# useradd -u 175 -g 115 -d /u01/app/oracle -m -s /usr/bin/bash -c "Oracle
Software Owner" oracle
# chown -R oracle:dba /u01
# passwd oracle
# su - oracle
When setting the Oracle environment variables for each Oracle RAC node, be sure to assign
each RAC node a unique Oracle SID. For this example, we used:
• solaris1: ORACLE_SID=orcl1
• solaris2: ORACLE_SID=orcl2
After creating the oracle user account on both nodes, ensure that the environment is set up
correctly by using the following .bash_profile. (Note that the .bash_profile will not exist on
Solaris; you will have to create it.)
# .bash_profile
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
# Each RAC node must have a unique ORACLE_SID. (i.e. orcl1, orcl2,...)
export ORACLE_SID=orcl1
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/local/bin:/usr/sfw/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export TEMP=/tmp
export TMPDIR=/tmp
This section focuses on configuring both Oracle RAC Solaris servers - getting each one
prepared for the Oracle RAC 10g installation. A lot of the information in the kernel
parameters section is referenced from Oracle ACE Howard J. Rogers' excellent Web site,
Dizwell Informatics.
The devices we will be using for the various components of this article (e.g., the OCR and the
voting disk) must have the appropriate ownership and permissions set on them before we can
proceed to the installation stage. We will set the ownership and permissions using the
chown and chmod commands as follows (this must be done as the root user):
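The command sequence depends on the device slices you created in the format session; the paths below are placeholders for illustration, not the article's actual slice names. A minimal sketch that prints the chown/chmod commands for review before running them as root:

```shell
#!/bin/sh
# Placeholder raw-device slices for the OCR, voting disk, and ASM disks;
# substitute the slices you actually created with format/fdisk.
DEVICES="/dev/rdsk/c2t1d0s1 /dev/rdsk/c2t2d0s1 /dev/rdsk/c2t3d0s1"

# Print the ownership and permission commands so they can be reviewed,
# then executed as the root user.
for dev in $DEVICES; do
  echo "chown oracle:dba $dev"
  echo "chmod 660 $dev"
done
```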
The Oracle Universal Installer and configuration assistants (such as NETCA) will look for the
SSH binaries in the wrong location on Solaris. Even if the SSH binaries are included in your
path when you start these programs, they will still look for the binaries in the wrong location.
On Solaris, the SSH binaries are located in the /usr/bin directory by default. The Oracle
Universal Installer will throw an error stating that it cannot find the ssh or scp binaries. For
this article, my workaround was to simply create a symbolic link in the /usr/local/bin directory
for these binaries. This workaround was quick and easy to implement and worked perfectly.
# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp
During an Oracle Clusterware installation, the Oracle Universal Installer uses SSH to perform
remote operations on other nodes. During the installation, hidden files on the system (for
example, .bashrc or .cshrc) will cause installation errors if they contain stty commands.
To avoid this problem, you must modify these files to suppress all output on STDERR, as in
the following examples:
• Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
  stty intr ^C
fi
• C shell:
test -t 0
if ($status == 0) then
  stty intr ^C
endif
In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 method
of editing the /etc/system file is deprecated. In Solaris 10, kernel parameters are instead set
through the resource control facility, which does not require the system to be rebooted for the
changes to take effect. Start by creating a new resource project for oracle as the root user:
# projadd oracle
Kernel parameters are merely attributes of a resource project so new kernel parameter values
can be established by modifying the attributes of a project. First we need to make sure that the
oracle user we created earlier knows to use the new oracle project for its resource limits.
This is accomplished by editing the /etc/user_attr file to look like this:
#
# Copyright (c) 2003 by Sun Microsystems, Inc. All rights reserved.
#
# /etc/user_attr
#
# user attributes. see user_attr(4)
#
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
#
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=Web Console
Management,All;lock_after_retries=no
oracle::::project=oracle
This assigns the oracle user to the new resource project called oracle whenever he/she logs
on. Log on as the oracle user to test it out.
# su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ id -p
uid=175(oracle) gid=115(dba) projid=100(oracle)
$
The last output here indicates that the oracle user has been assigned to project number 100,
the oracle project.
To check the resource parameter values that have been assigned to the project, issue the
following command:
$ prctl -n project.max-shm-memory -i project oracle
This indicates that the maximum size of shared memory segments that the oracle user can
create is 510MB. Unless that number is in the gigabyte range, it will need to be changed; the
value recommended by Oracle is 4GB.
At this point, leave the oracle user logged in to the original terminal session and start a brand
new session as the root user. A resource project's settings can only be modified dynamically if
there is at least one user logged in that is actually assigned to that project.
As the root user in the new session, issue the following command:
# prctl -n project.max-shm-memory -r -v 4gb -i project oracle
After issuing this command, switch back to the oracle user's session and re-issue the earlier
command:
$ prctl -n project.max-shm-memory -i project oracle
Now we can see that the value for the maximum size of shared memory segments is 4GB.
This sets a maximum size for the shared memory segment; it does not mean that the Oracle
instance will actually use 4GB.
This procedure sets the correct value for the max-shm-memory kernel parameter dynamically,
but if the server is rebooted, the new value would be lost. To make the value permanent
across reboots, issue the following command in the root user's session:
# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle
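The project.max-shm-memory attribute is stored in bytes; if you would rather pass a raw byte count than a scaled value such as 4gb, it can be computed in the shell:

```shell
# 4GB expressed in bytes, suitable as a project.max-shm-memory value
bytes=$((4 * 1024 * 1024 * 1024))
echo "$bytes"   # 4294967296
```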
There are other kernel parameters that the Oracle documentation instructs you to check and
modify if necessary. However, if you are following along with this article and have just
installed Solaris 10, these parameters will be set to acceptable values by default, so there is
no need to alter them.
For more information on resource projects in Solaris 10, see the System Administration
Guide: Solaris Containers-Resource Management and Solaris Zones. This is available from
http://docs.sun.com.
**** Shutdown and copy solaris1 to solaris2 ****
After the second machine reboots, change the /etc/hosts file so that solaris2 is the loghost,
then:
• Change /etc/hostname.vmxnet0 to solaris2
• Change /etc/hostname.vmxnet1 to solaris2-priv
• Change /etc/nodename to solaris2
• Run svcadm restart network/physical
Reboot.
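The renaming steps above can be sketched as a small script. The interface and file names are the ones used in this article; as a precaution, the sketch stages the files under a preview directory so you can inspect them before copying them into / as root on the clone:

```shell
#!/bin/sh
# Stage the identity files for the cloned node under a preview directory;
# copy them into / (as root) on the clone once they look right.
ROOT=${ROOT:-./solaris2-preview}
NEW=solaris2

mkdir -p "$ROOT/etc"
echo "$NEW"        > "$ROOT/etc/hostname.vmxnet0"   # public interface
echo "${NEW}-priv" > "$ROOT/etc/hostname.vmxnet1"   # private interconnect
echo "$NEW"        > "$ROOT/etc/nodename"

# After the files are in place on the clone, restart networking as root:
#   svcadm restart network/physical
```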
Before you can install and use Oracle RAC, you must configure either secure shell (SSH) or
remote shell (RSH) for the oracle user account on both of the Oracle RAC nodes in the
cluster. The goal here is to set up user equivalence for the oracle user account. User
equivalence enables the oracle user account to access all other nodes in the cluster without the
need for a password. This can be configured using either SSH or RSH, with SSH being the
preferred method.
In this article, we will just discuss the secure shell method for establishing user equivalency.
For steps on how to use the remote shell method, please see this section of Jeffrey Hunter's
original article.
The first step in configuring SSH is to create RSA key pairs on both Oracle RAC nodes in the
cluster. The command to do this will create a public and private key for the RSA algorithm.
The content of the RSA public key will then need to be copied into an authorized key file
which is then distributed to both of the Oracle RAC nodes in the cluster.
Use the following steps to create the RSA key pair. Please note that these steps will need to be
completed on both Oracle RAC nodes in the cluster:
1. Log on as the oracle user account.
2. If necessary, create the .ssh directory in the oracle user's home directory and set the
correct permissions on it:
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following command to generate an RSA key pair:
$ ssh-keygen -t rsa
At the prompts, accept the default location for the key files and enter a pass phrase.
This command will write the public key to the ~/.ssh/id_rsa.pub file and the private
key to the ~/.ssh/id_rsa file. Note that you should never distribute the private key to
anyone.
4. Repeat the above steps for each Oracle RAC node in the cluster.
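For scripting or testing, the key pair can also be generated non-interactively. This is a sketch assuming OpenSSH; it writes into a scratch directory and uses an empty pass phrase for brevity, whereas the article's interactive flow uses a real pass phrase (loaded later with ssh-agent):

```shell
#!/bin/sh
# Generate an RSA key pair non-interactively into a scratch directory.
# -N "" sets an empty pass phrase (illustration only); -q keeps it quiet.
dir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$dir/id_rsa" -q

ls "$dir"   # id_rsa and id_rsa.pub
```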
Now that both Oracle RAC nodes contain a public and private key pair for RSA, you will
need to create an authorized key file on one of the nodes. An authorized key file is nothing
more than a single file that contains a copy of everyone's (every node's) RSA public key.
Once the authorized key file contains all of the public keys, it is then distributed to all other
nodes in the RAC cluster.
Complete the following steps on one of the nodes in the cluster to create and then distribute
the authorized key file. For the purpose of this article, I am using solaris1.
The following example is being run from solaris1 and assumes a 2-node cluster,
with nodes solaris1 and solaris2:
Note: The first time you use SSH to connect to a node from a particular system, you
may see a message warning that the authenticity of the host cannot be established and
asking whether you want to continue connecting. Enter yes at the prompt to continue.
You should not see this message again when you connect from this system to the same
node.
3. At this point, we have the content of the RSA public key from every node in the
cluster in the authorized key file (~/.ssh/authorized_keys) on solaris1. We now
need to copy it to the remaining nodes in the cluster. In our two-node cluster example,
the only remaining node is solaris2. Use the scp command to copy the authorized key
file to all remaining nodes in the cluster:
$ scp ~/.ssh/authorized_keys solaris2:.ssh/authorized_keys
Note: If you see any other messages or text apart from the host name, the Oracle
installation can fail. Make any changes required to ensure that only the host name is
displayed when you enter these commands. You should ensure that any part of a login
script that generates output or asks questions is modified so that it acts only when the
shell is an interactive shell.
When you run the OUI, it will need to run the secure shell tool commands (ssh and scp)
without being prompted for a password. Even though SSH is configured on both Oracle RAC
nodes in the cluster, using the secure shell tool commands will still prompt for a password.
Before running the OUI, you need to enable user equivalence for the terminal session you
plan to run the OUI from. For the purpose of this article, all Oracle installations will be
performed from solaris1.
User equivalence will need to be enabled on any new terminal shell session before attempting
to run the OUI. If you log out and log back in to the node you will be performing the Oracle
installation from, you must enable user equivalence for the terminal session as this is not done
by default.
To enable user equivalence for the current terminal shell session, perform the following steps:
1. Log on to the node where you want to run the OUI from (solaris1) as the oracle user.
# su - oracle
2. Enter the following commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
At the prompts, enter the pass phrase for each key that you generated.
3. If SSH is configured correctly, you will be able to use the ssh and scp commands
without being prompted for a password or pass phrase from this terminal session:
$ ssh solaris1 "date;hostname"
$ ssh solaris2 "date;hostname"
Note: The commands above should display the date set on both Oracle RAC nodes
along with the hostname. If any of the nodes prompt for a password or pass phrase,
verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys.
Also, if you see any other messages or text apart from the date and hostname, the
Oracle installation can fail. Make any changes required to ensure that only the date
and hostname are displayed when you enter these commands. You should ensure that
any part of a login script that generates output or asks questions is modified so that it
acts only when the shell is an interactive shell.
4. The Oracle Universal Installer is a GUI interface and requires the use of an X server.
From the terminal session enabled for user equivalence (the node you will be
performing the Oracle installations from), set the environment variable DISPLAY to a
valid X Windows display:
Bourne, Korn, and Bash shells:
$ DISPLAY=<Any X-Windows Host>:0.0
$ export DISPLAY
C shell:
$ setenv DISPLAY <Any X-Windows Host>:0.0
5. You must run the Oracle Universal Installer from this terminal session or remember to
repeat the steps to enable user equivalence (steps 2, 3, and 4 from this section) before
you start the OUI from a different terminal session.
The next logical step is to install Oracle Clusterware Release 2 (10.2.0.2.0) and Oracle
Database 10g Release 2 (10.2.0.2.0). For this article, I chose not to install the Companion CD
as Jeffrey Hunter did in his article. This is a matter of choice - if you want to install the
companion CD, by all means do.
In this section, we will be downloading and extracting the required software from Oracle to
only one of the Solaris nodes in the RAC cluster - namely solaris1. This is the machine where
I will be performing all of the Oracle installs from. The Oracle installer will copy the required
software packages to all other nodes in the RAC configuration using the user equivalency we
setup in the section "Configure RAC Nodes for Remote Access".
Login to the node that you will be performing all of the Oracle installations from (solaris1) as
the oracle user account. In this example, I will be downloading the required Oracle software
to solaris1 and saving them to /u01/app/oracle/orainstall.
First, download the Oracle Clusterware Release 2 software for Solaris 10 x86.
• 10202_clusterware_solx86.zip
Next, download the Oracle Database 10g Release 2 software for Solaris 10 x86.
• 10202_database_solx86.zip
As the oracle user account, extract the two packages you downloaded to a temporary
directory. In this example, I will use /u01/app/oracle/orainstall.
# su - oracle
$ cd ~oracle/orainstall
$ unzip 10202_clusterware_solx86.zip
Then extract the Oracle10g Database software:
$ cd ~oracle/orainstall
$ unzip 10202_database_solx86.zip
The following packages must be installed on each server before you can continue:
SUNWlibms
SUNWtoo
SUNWi1cs
SUNWi15cs
SUNWxwfnt
SUNWxwplt
SUNWmfrun
SUNWxwplr
SUNWxwdv
SUNWgcc
SUNWbtool
SUNWi1of
SUNWhea
SUNWlibm
SUNWsprot
SUNWuiu8
To check whether any of these required packages are installed on your system, use the
pkginfo -i package_name command as follows:
# pkginfo -i SUNWlibms
system SUNWlibms Math & Microtasking Libraries (Usr)
#
If you need to install any of the above packages, use the pkgadd -d package_name
command.
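Rather than checking the 16 packages one at a time, the checks can be generated with a short loop; this sketch simply prints the pkginfo commands so you can run them (or pipe the output to sh) on each server:

```shell
#!/bin/sh
# Build the list of pkginfo checks for the packages required by the
# Oracle install, then print them for review.
checks=$(for pkg in SUNWlibms SUNWtoo SUNWi1cs SUNWi15cs SUNWxwfnt \
                    SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWgcc \
                    SUNWbtool SUNWi1of SUNWhea SUNWlibm SUNWsprot \
                    SUNWuiu8
do
  printf 'pkginfo -i %s\n' "$pkg"
done)
printf '%s\n' "$checks"
```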
You are now ready to install the "cluster" part of the environment - the Oracle Clusterware. In
a previous section, you downloaded and extracted the install files for Oracle Clusterware to
solaris1 in the directory /u01/app/oracle/orainstall/clusterware. This is the only node from
which you need to perform the install.
During the installation of Oracle Clusterware, you will be asked which nodes you want to
configure in the RAC cluster. Once the actual installation starts, it will copy the required
software to all nodes using the remote access we configured in the section "Configure RAC
Nodes for Remote Access".
After installing Oracle Clusterware, the Oracle Universal Installer (OUI) used to install the
Oracle10g database software (next section) will automatically recognize these nodes. Like the
Oracle Clusterware install you will be performing in this section, the Oracle Database 10g
software only needs to be run from one node. The OUI will copy the software packages to all
nodes configured in the RAC cluster.
The material contained in this section is taken from material created by Kevin Closson
(http://kevinclosson.wordpress.com); much more information is available in the PolyServe
white paper entitled Third-Party Cluster Platforms and Oracle Real Application Clusters on
Linux.
In its simplest form, any software that performs any function on more than one interconnected
computer system can be called clusterware. In the context of Oracle RAC, clusterware takes
the form of two shared libraries that provide node membership services and internode
communication functionality when dynamically linked with Oracle executables.
• libskgxn.so: This library contains the routines used by the Oracle server to maintain
node membership services. In short, these routines let Oracle know what nodes are in
the cluster. Likewise, if Oracle wants to evict (fence) a node, it will call a routine in
this library.
• libskgxp.so: This library contains the Oracle server routines used for communication
between instances (e.g. Cache Fusion CR sends, lock converts, etc.).
Before starting the OUI, you should first verify you are logged on to the server you will be
running the installer from (i.e., solaris1), then run the xhost command as root from the
console to allow X server connections. Next, log in as the oracle user account. If you are
using a remote client to connect to the node performing the installation (SSH/Telnet to
solaris1 from a workstation configured with an X server), you will need to set the DISPLAY
variable to point to your local workstation. Finally, verify remote access/user equivalence to
all nodes in the cluster.
# hostname
solaris1
# xhost +
access control disabled, clients can connect from any host
Login as the oracle User Account and Set DISPLAY (if necessary)
# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY
Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you
will be running the OUI from against all other Solaris servers in the cluster without being
prompted for a password.
When using the secure shell method (which is what we are using in this article), user
equivalence will need to be enabled on any new terminal shell session before attempting to run
the OUI. To enable user equivalence for the current terminal shell session, perform the
following steps, remembering to enter the pass phrase for the RSA key that you generated
when prompted:
Installing Clusterware
$ cd ~oracle
$ /u01/app/oracle/orainstall/clusterware/runInstaller
Execute Configuration Scripts: Within a new console window on each Oracle RAC node in
the cluster (starting with the node you are performing the install from), stay logged in as the
"root" user account. Navigate to the /u01/app/oracle/product/crs directory and locate the
root.sh file for each node in the cluster. Run the root.sh file ON ALL NODES in the RAC
cluster ONE AT A TIME, starting with the node you are performing the install from.
You will receive several warnings while running the root.sh script on all nodes. These
warnings can be safely ignored.
Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog
window after running the root.sh script on both nodes.
End of Installation: At the end of the installation, exit from the OUI.
Verify Oracle Clusterware Installation
After the installation of Oracle Clusterware, we can run through several tests to verify the
install was successful. Run the following commands on both nodes in the RAC cluster:
$ /u01/app/oracle/product/crs/bin/olsnodes -n
solaris1
solaris2
$ ls -l /etc/init.d/init.*[d,s]
-r-xr-xr-x 1 root root 2236 Jan 26 18:49 init.crs
-r-xr-xr-x 1 root root 4850 Jan 26 18:49 init.crsd
-r-xr-xr-x 1 root root 41163 Jan 26 18:49 init.cssd
-r-xr-xr-x 1 root root 3190 Jan 26 18:49 init.evmd
After successfully installing the Oracle Clusterware software, the next step is to install Oracle
Database 10g Release 2 (10.2.0.2.0) with RAC. For the purpose of this article, you will opt
not to create a database when installing the software. You will, instead, create the database
using the Database Configuration Assistant (DBCA) after the install.
Like the Oracle Clusterware install in the previous section, the Oracle 10g database software
only needs to be run from one node. The OUI will copy the software packages to all nodes
configured in the RAC cluster.
As discussed in the previous section, the terminal shell environment needs to be configured
for remote access and user equivalence to all nodes in the cluster before running the Oracle
Universal Installer. Note that you can utilize the same terminal shell session used in the
previous section, in which case you do not have to take any of the actions described below
with regard to setting up remote access and the DISPLAY variable.
# hostname
solaris1
# xhost +
access control disabled, clients can connect from any host
Login as the oracle User Account and Set DISPLAY (if necessary)
# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY
Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you
will be running the OUI from against all other Solaris servers in the cluster without being
prompted for a password.
When using the secure shell method (which is what we are using in this article), user
equivalence will need to be enabled on any new terminal shell session before attempting to run
the OUI. To enable user equivalence for the current terminal shell session, perform the
following steps, remembering to enter the pass phrase for the RSA key that you generated
when prompted:
Install the Oracle Database 10g Release 2 software with the following:
$ cd ~oracle
$ /u01/app/oracle/orainstall/database/runInstaller -ignoreSysPrereqs
DBCA requires the Oracle TNS listener process to be configured and running on all nodes in
the RAC cluster before it can create the clustered database. The process of creating the TNS
listener only needs to be performed on one node in the cluster. All changes will be made and
replicated to all nodes in the cluster. On one of the nodes (I will be using solaris1) bring up
the NETCA and run through the process of creating a new TNS listener process and also
configure the node for local access.
As discussed in the previous section, the terminal shell environment needs to be configured
for remote access and user equivalence to all nodes in the cluster before running the Network
Configuration Assistant (NETCA). Note that you can utilize the same terminal shell session
used in the previous section, in which case you do not have to take any of the actions
described below with regard to setting up remote access and the DISPLAY variable.
Login as the oracle User Account and Set DISPLAY (if necessary)
# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY
Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you
will be running the OUI from against all other Solaris servers in the cluster without being
prompted for a password.
When using the secure shell method (which is what we are using in this article), user
equivalence will need to be enabled in any new terminal shell session before attempting to
run the OUI. To enable user equivalence for the current terminal shell session, perform the
following steps, remembering to enter the pass phrase for the RSA key that was generated
when prompted:
$ netca
The following table walks you through the process of creating a new Oracle listener for our
RAC environment.
The Oracle TNS listener process should now be running on all nodes in the RAC cluster.
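A quick way to confirm this is to look for the listener process on each node. The listener name shown in the comment (LISTENER_SOLARIS1) is what NETCA typically generates for a node named solaris1; this is an assumption, so substitute the name from your own configuration:

```shell
# On each node, confirm the node name and that a TNS listener process is up.
hostname
ps -ef | grep -v grep | grep lsnr | awk '{print $NF}'
# Expected output is the listener name, e.g. LISTENER_SOLARIS1 on solaris1.
```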
Login as the oracle User Account and Set DISPLAY (if necessary)
# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY
Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you
will be running the OUI from against all other Solaris servers in the cluster without being
prompted for a password.
When using the secure shell method (which is what we are using in this article), user
equivalence will need to be enabled in any new terminal shell session before attempting to
run the OUI. To enable user equivalence for the current terminal shell session, perform the
following steps, remembering to enter the pass phrase for the RSA key that was generated
when prompted:
I used itconvergence.com for the database domain. You may use any domain.
Keep in mind that this domain does not have to be a valid DNS domain.
Management Option: Leave the default options here, which is to "Configure the Database
with Enterprise Manager / Use Database Control for Database Management."

Database Credentials: I selected to Use the Same Password for All Accounts. Enter the
password (twice) and make sure the password does not start with a digit.

Storage Options: For this guide, we will select to use Automatic Storage Management
(ASM).

Create ASM Instance: Supply the SYS password to use for the new ASM instance. Also,
starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs
to be on a shared disk. You will need to modify the default entry for "Create server
parameter file (SPFILE)" to reside on the RAW partition as follows: /dev/rdsk/c2t3d0s1.
All other options can stay at their defaults. You will then be prompted with a dialog box
asking if you want to create and start the ASM instance. Select the OK button to
acknowledge this dialog. The DBCA will now create and start the ASM instance on all
nodes in the RAC cluster.

ASM Disk Groups: To start, click the Create New button. This will bring up the "Create
Disk Group" window with the three partitions we created earlier. For the first "Disk Group
Name", I used the string "ORCL_DATA". Select the first two RAW partitions (in my case
c2t4d0s1 and c2t5d0s1) in the "Select Member Disks" window. Keep the "Redundancy"
setting at "Normal". After verifying all values in this window are correct, click the [OK]
button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk
Group Creation process is finished, you will be returned to the "ASM Disk Groups"
window.
Click the Create New button again. For the second "Disk Group Name", I used the string
"FLASH_RECOVERY_AREA". Select the last RAW partition (c2t6d0s1) in the "Select
Member Disks" window. Keep the "Redundancy" setting at "External". After verifying all
values in this window are correct, click the [OK] button. This will present the "ASM Disk
Group Creation" dialog.
When the ASM Disk Group Creation process is finished, you will be returned to the
"ASM Disk Groups" window with two disk groups created and selected. Select only one of
the disk groups by using the checkbox next to the newly created Disk Group Name
"ORCL_DATA" (ensure that the disk group for "FLASH_RECOVERY_AREA" is not
selected) and click [Next] to continue.

Database File Locations: I selected to use the default, which is to use Oracle Managed
Files: Database Area: +ORCL_DATA

Recovery Configuration: Check the option for "Specify Flash Recovery Area". For the
Flash Recovery Area, click the [Browse] button and select the disk group name
"+FLASH_RECOVERY_AREA". My disk group has a size of about 100GB. I used a
Flash Recovery Area size of 90GB (92160 MB).

Database Content: I left all of the Database Components (and destination tablespaces) set
to their default values.

Database Services: For this test configuration, click Add, and enter orcltest as the
"Service Name." Leave both instances set to Preferred and for the "TAF Policy" select
"Basic".

Initialization Parameters: Change any parameters for your environment. I left them all at
their default settings.

Database Storage: Change any parameters for your environment. I left them all at their
default settings.

Creation Options: Keep the default option Create Database selected and click Finish to
start the database creation process. Click OK on the "Summary" screen.

End of Database Creation: At the end of the database creation, exit from the DBCA.
When exiting the DBCA, another dialog will come up indicating that it is starting all
Oracle instances and the HA service "orcltest". This may take several minutes to
complete. When finished, all windows and dialog boxes will disappear.
When DBCA has completed, you will have a fully functional Oracle RAC cluster running.
The first step is to stop the Oracle instance. When the instance (and related services) is down,
then bring down the ASM instance. Finally, shut down the node applications (Virtual IP,
GSD, TNS Listener, and ONS).
$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n solaris1
$ srvctl stop nodeapps -n solaris1
The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS).
When the node applications are successfully started, then bring up the ASM instance. Finally,
bring up the Oracle instance (and related services) and the Enterprise Manager Database
console.
$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n solaris1
$ srvctl start asm -n solaris1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole
This section provides several srvctl commands and SQL queries to validate your Oracle
RAC 10g configuration.
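For example, a few status checks along these lines can confirm the cluster is healthy (the database, instance, and node names follow the orcl/orcl1/solaris1 naming used throughout this article; adjust them for your environment):

```shell
# Check the status of the clustered database and its supporting services.
srvctl status database -d orcl          # lists all running instances
srvctl status instance -d orcl -i orcl1 # one specific instance
srvctl status asm -n solaris1           # ASM instance on this node
srvctl status nodeapps -n solaris1      # VIP, GSD, listener, and ONS

# Query the cluster-wide instance view from any node; every instance
# should report an OPEN status. (The backslash keeps the shell from
# expanding $instance inside the here-document.)
sqlplus -S "/ as sysdba" <<EOF
SELECT inst_id, instance_name, status FROM gv\$instance;
EOF
```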