Table of Contents

Rac10gR2OnHPUX
 1. *Introduction
     1.1. *What you need to know
 2. *Prepare the cluster nodes for Oracle RAC
 3. *Prepare the shared storage for Oracle RAC
 4. Oracle Clusterware Installation and Configuration
 5. Oracle Clusterware Patching
     5.1. Patch Oracle Clusterware to 10.2.0.4
 6. Oracle ASM Home Software Install
 7. Oracle ASM Software Home Patching
 8. Oracle ASM Listener Creation
     8.1. Create Node specific network listeners
     8.2. Deleting and Adding Listeners
 9. Oracle ASM Instance and diskgroup Creation
 10. Oracle RAC Database Home Software Install
 11. Oracle RAC Software Home Patching
 12. Oracle RAC Database Creation
     12.1. Creating the RAC Database
     12.2. Oracle Interconnect Configuration
     12.3. Public Network Redundancy Configuration

Rac10gR2OnHPUX

1. *Introduction

1.1. *What you need to know


This guide details the tasks required for installing Oracle Real Application Clusters (10.2.0.2) on the HP-UX
(11.17) platform. The technologies configured as part of this deployment are as follows:

• Oracle Real Application Clusters


• Oracle ASM over SLVM (Shared Logical Volumes)
• HP Serviceguard extensions for RAC

A graphical depiction of this is shown below (picture taken from HP/Oracle CTC presentation):

The high level overview of the tasks (from an Oracle perspective) is as follows:

• Install Oracle Clusterware


• Patch Oracle Clusterware
• Install ASM
• Patch ASM
• Install RAC
• Patch RAC
• Create RAC Database

It should be noted that HP Serviceguard extensions for RAC is not mandatory for deploying RAC 10gR2. Nor
is it necessary to deploy ASM over SLVM. This document does not include information with respect to the
building of the HP Serviceguard cluster nor the configuration of SLVM. For those interested in obtaining that
information, please refer to the HP/Oracle CTC site at: http://hporaclectc.com/

2. *Prepare the cluster nodes for Oracle RAC

3. *Prepare the shared storage for Oracle RAC


The shared storage will be used to place the following files:

• Oracle Clusterware

♦ Oracle Cluster Registry (OCR)


♦ Voting Disk
• Oracle Automatic Storage Management (ASM) Files

♦ Oracle Database files

Since this particular deployment uses ASM over SLVM, the storage configuration is typically performed by a system
administrator. That is to say, the SA configures the LUNs required for Oracle Clusterware and ASM. These are
presented as logical volumes that will be made available for the storage of the files mentioned above. Generally
the entire physical disk is used for a logical volume, which means there is a 1-1 mapping of logical volumes to
disks. A volume group is then created, which acts as a logical container for logical volumes; a volume group can
therefore consist of many logical volumes, which in turn map to physical disks. So, for example, you can have a
volume group for the Oracle Clusterware files that consists of 2 logical volumes (assuming external redundancy) -
one for the OCR file and the other for the voting disk. In addition, you can have another volume group consisting
of the logical volumes that store the Oracle database files, to be used by ASM. The management and control of
these volume groups falls under HP's SLVM. Note, however, that this is not the only way to deploy ASM; it was
required for this particular deployment because HP Serviceguard extensions for RAC was also being used. One can
instead do a standalone Oracle deployment that uses neither a third-party cluster manager nor SLVM.

I will briefly cover the details of this deployment below, starting off with the following components:

• Oracle Cluster Registry (OCR)


• Oracle Voting Disk

These two components reside on shared logical volumes under the control of HP’s SLVM. They are part of
the following volume group:

vg_rac_crs (/dev/vg_rac_crs)

The shared logical volumes within that group are:

rlv_rac_crs_ocr  -> OCR Shared Logical Volume (/dev/vg_rac_crs/rlv_rac_crs_ocr)
rlv_rac_crs_vote -> Voting Disk Shared Logical Volume (/dev/vg_rac_crs/rlv_rac_crs_vote)
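
For illustration, below is a minimal sketch of how a system administrator might carve out these shared logical volumes with HP-UX LVM. The disk device paths, sizes and group-file minor number are assumptions, and the SLVM shared-mode activation and Serviceguard cluster configuration steps are deliberately omitted (they are outside the scope of this document):

# mkdir /dev/vg_rac_crs
# mknod /dev/vg_rac_crs/group c 64 0x060000      -> LVM group file; minor number is an assumption
# pvcreate -f /dev/rdsk/c4t0d0                   -> physical disk paths are assumptions
# vgcreate /dev/vg_rac_crs /dev/dsk/c4t0d0
# lvcreate -L 512 -n lv_rac_crs_ocr vg_rac_crs   -> raw device becomes /dev/vg_rac_crs/rlv_rac_crs_ocr
# lvcreate -L 512 -n lv_rac_crs_vote vg_rac_crs  -> raw device becomes /dev/vg_rac_crs/rlv_rac_crs_vote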

Similarly the oracle database datafiles are part of the volume group:

vg_rac_data

The details of this volume group are shown in the table below:

Node Name Size GB Volume Group Name Volume Name LUN's


Shared 50 Storage for CRM Database Datafiles
ARC_CRM 100GB For Oracle Archivelogs (CRM)
DATA_FSYS 100GB Storage for FSYS Database Datafiles
ARC_FSYS 50GB For Oracle Archivelogs (FSYS)

4. Oracle Clusterware Installation and Configuration


With the operating system prerequisites met, we begin by installing the Oracle Clusterware.

Determine the device used for the cdrom:

#ioscan -fnC

Mount the media for oracle clusterware (runInstaller is present in the second dvdrom disk)

# mount -F cdfs -o rr /dev/dsk/c1t2d0 /SD_CDROM

Enable access to your xsession, by issuing the xhost command

#xhost +

Become the oracle user

#su - oracle

Set the display to your xwindow client

$export DISPLAY=172.25.3.25:0.0

Invoke the Oracle Universal Installer as the oracle user (the command below runs the installer with tracing
enabled, to assist with troubleshooting should any issues arise during the installation).

$/SD_CDROM/clusterware/runInstaller -J-DTRACING.ENABLED=true -J-DTRACING.LEVEL=

This is shown below:

• Notes

♦ Begin the clusterware installation by starting the installer


♦ /SD_CDROM is the mount point for the cdrom drive

After this you will be presented with the welcome screen as seen below:

• Notes

♦ The welcome screen for the clusterware installation


♦ Notice the additional information on the terminal screen - a result of enabling tracing for
runInstaller

Click Next to be presented with the 'Specify Home Details' screen (shown below). Here you will specify the
ORACLE_HOME location for Oracle Clusterware. You will also give this home a name, which will serve as a
reference for future installations/patching.

Click next to proceed and you should be presented with the 'Specify Cluster Configuration' screen (below).
At this point, you must select all nodes that will participate in the RAC configuration. You will also specify
the private hostname alias (used by the cluster interconnect) and the virtual hostname alias used for client
connections. These aliases should be defined on all hosts, typically in the /etc/hosts file of the cluster
nodes. In addition to this, you must provide a name for this cluster installation.

Clicking next will take you to the 'Specify Network Interface Usage' screen (shown below). Here you specify
which interfaces will be used for the public network and which will be used for the private network.

Next you will be asked to provide the location for where the oracle cluster registry (OCR) will be stored. This
should be a shared device accessible by all nodes in the cluster (shown below).

• Notes

♦ Specify the location for OCR device


♦ rlv_rac_crs_ocr is a shared logical volume that exists in the vg_rac_crs volume group
• Actions

♦ Location for OCR device is /dev/vg_rac_crs/rlv_rac_crs_ocr

Finally the location for the voting disk must be specified (see below):

• Notes

♦ Specify the location for the voting disk


♦ The rlv_rac_crs_vote file/disk resides in the vg_rac_crs volume group
• Actions

♦ Location for voting disk is /dev/vg_rac_crs/rlv_rac_crs_vote

Once this information is specified you will be presented with the summary screen, which shows the details of
the installation to be performed; clicking next will start the installation. Eventually a dialog box will pop up
asking you to run the root.sh and orainstRoot.sh scripts as the root user on the RAC nodes (shown below):

• Notes

♦ After the remote operations stage completes a dialog box pops up requesting the execution of
various scripts on the nodes in the cluster

Below are the details of running these scripts on both nodes of the cluster - aquadb0 and aquadb1:

From aquadb1:

orainstRoot.sh execution - aquadb1:

# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/var/opt/oracle/oraInst.loc)
Changing permissions of /home/oracle/oraInventory to 775.
Changing groupname of /home/oracle/oraInventory to dba.
The execution of the script is complete

From aquadb0:

root.sh execution - aquadb0:

# ./root.sh
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up

Setting the permissions on OCR backup directory


Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node : node 1: aquadb0 aquadb0_priv aquadb0 node 2: aquadb1 aquadb1_priv aquad

From aquadb1:

root.sh execution - aquadb1:

# ./root.sh
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up

Setting the permissions on OCR backup directory


Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node : node 1: aquadb0 aquadb0_priv aquadb0 node 2: aquadb1 aquadb1_priv aquadb

The message highlighted above is the result of a bug and requires that the Virtual IP Configuration Assistant
(vipca) be run as the root user from the last node - aquadb1 in our case.

To proceed with running the Virtual IP Configuration Assistant, a new terminal is opened and the DISPLAY
variable is set, as this is a graphical tool (see below):

• Notes

♦ vipca is run from the CRS_HOME (Clusterware ORACLE_HOME)


♦ vipca is a graphical tool and the first screen/step displayed is the welcome screen
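
For reference, a hedged sketch of how vipca might be launched in that new root session; the CRS_HOME path is an assumption based on values shown later in this document, and the DISPLAY value is the same X client used earlier:

# export DISPLAY=172.25.3.25:0.0
# cd /app1/oracrs/product/102/bin
# ./vipca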

Since the virtual IPs are configured on the public interface, we must select all public interfaces when
prompted at Step 1 of 2. In our case, lan0 is the public interface and hence is selected (shown below):

• Notes

♦ The next step requires selecting the public interface that will be used while configuring the
vip.
• Actions

♦ In this particular case, lan0, is the public interface


♦ After selecting the public interface, click next to proceed

The next step requires details to be provided about the public node names, the virtual host names (IP Alias
Name) and the ip addresses of the nodes that are part of the RAC cluster.

• Notes

♦ Ensure the correct information is shown with respect to virtual host information, make the
necessary changes by clicking the appropriate field.
♦ Once done, click next to proceed

Finally the summary screen is displayed:

• Notes

♦ Click finish to proceed with configuring the vip's

After this, various configuration assistants are run; their purpose is to configure the 'nodeapps' component of
RAC 10g.

These assistants should complete successfully, as displayed below:

Clicking OK results in a summary page displaying the configuration results.

You may click exit here, and then 'ok' on the screen that displays the pop up box for 'execute configuration
scripts'. This will result in the OUI spawning the remaining configuration assistants, of which the Oracle
Cluster Verification Utility (cvu) may fail (it did in my case) - this can safely be ignored.

At this point click next to proceed and you will be presented with the end of installation screen (below). You
may then proceed by clicking exit - this completes the installation of the Oracle Clusterware software.
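
As a quick sanity check at this point, the following commands (run from the Clusterware home, where ORACLE_HOME points to the clusterware installation) can confirm the stack is up on both nodes - a hedged example, with output omitted:

$ $ORACLE_HOME/bin/olsnodes -n        -> list the cluster member nodes and their node numbers
$ $ORACLE_HOME/bin/crsctl check crs   -> confirm the CSS, CRS and EVM daemons are healthy
$ $ORACLE_HOME/bin/crs_stat -t        -> tabular view of the registered resources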

5. Oracle Clusterware Patching

5.1. Patch Oracle Clusterware to 10.2.0.4


The previous section detailed the installation of Oracle Clusterware - the version installed at this point is
10.2.0.1. This section deals with applying the Oracle 10.2.0.4 patch set to this clusterware installation. The
first step in the application of this patch set requires that the Oracle Clusterware is shut down on all nodes
in the RAC configuration; this can be seen in the two screen shots below. The root user is responsible for
starting and stopping the Oracle Clusterware by invoking the following command:

#$ORACLE_HOME/bin/crsctl stop crs

Where ORACLE_HOME is the location pointed to for the Oracle Clusterware installation.

• Actions

♦ Stopping CRS on first node - aquadb0

• Actions

♦ Stopping CRS on second node - aquadb1

Now as before, we will invoke the runInstaller utility from the patchset location, from the node aquadb0. The
patch set location is the directory where the 10.2.0.4 patchset has been extracted. After this we will provide
the location of ORACLE_HOME of the oracle clusterware, so that the correct ORACLE_HOME is patched.

The Oracle Universal Installer should detect the clusterware that is in place, and a screen should be presented
that displays the nodes that are part of the RAC configuration.

• Notes

♦ Both nodes in the RAC Cluster are displayed

You may click next to proceed; you will eventually reach the summary screen, which details the actions to be
performed, after which clicking next commences the patchset application. You may encounter an error that pops
up with a message stating PRKC-1073; you can safely ignore this as it is a bug.

• Notes

♦ Bug encountered during 'remote operations' stage. PRKC-1073 returned.


• Actions

♦ Ignore the error and continue

Finally the end of installation screen will appear, requiring you to run the root102.sh script as root user on all
nodes in the cluster.
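
Below is a hedged sketch of that final step together with a follow-up version check. The CRS_HOME path and the location of root102.sh are assumptions, and the patchset readme remains the authoritative reference for the exact procedure:

# cd /app1/oracrs/product/102/install
# ./root102.sh                                                   -> run as root, one node at a time
# /app1/oracrs/product/102/bin/crsctl query crs softwareversion
# /app1/oracrs/product/102/bin/crsctl query crs activeversion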

6. Oracle ASM Home Software Install


Once Oracle Clusterware has been installed, the next step is the installation of Oracle Automatic Storage
Management (ASM). This part of the installation, like before, requires the Oracle media and is run from one
node - in our case aquadb0. The Oracle Universal Installer (OUI) will install the binaries for ASM on aquadb0
and then copy them over to aquadb1. Initially runInstaller is invoked from the database directory of the first
DVD, and screens similar to those presented in 'installing oracle clusterware' will be encountered.

These have been left out here and only the screens specific to ASM are shown. You will eventually reach a
screen titled 'Select Configuration Option' (shown below). Here the second option is selected, which is to
configure Automatic Storage Management. A SYS password is also specified for the ASM instances that will be
created.

Once this is done, the 'Configure Automatic Storage Management' screen is displayed. Here we will create only
one diskgroup (the others will be created after the successful installation of ASM). The disks to be used by
ASM have been configured as shared logical volumes, each under the control of HP's SLVM. Each logical volume
consists of one physical disk, which means each logical volume will be 50GB in size (since each physical disk
is 50GB). These logical volumes are part of the volume group vg_rac_data. In order for ASM to find these disks,
we need to change the disk discovery path (shown below); the string used here is:

/dev/vg_rac_data/rlv*

• Notes

♦ Need to change discovery string used by ASM to search for volumes

• Actions

♦ Discovery string changed to /dev/vg_rac_data/rlv*


♦ This results in all logical volumes in the vg_rac_data volume group being displayed.
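
One prerequisite worth noting before the volumes can be selected: the raw logical volume devices must be readable and writable by the oracle user on every node. A hedged example of the kind of check a system administrator might perform (the dba group name is an assumption):

# ls -l /dev/vg_rac_data/rlv*              -> verify the raw devices exist on this node
# chown oracle:dba /dev/vg_rac_data/rlv*   -> the ASM instance owner must be able to open them
# chmod 660 /dev/vg_rac_data/rlv*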

After this we will be able to see all disks that can be selected to be part of an ASM diskgroup. The first
diskgroup we will create will be called data_crm, and will consist of logical volumes rlv*01-08. Once this is
done, we will click next to proceed and will eventually be presented with the summary page as seen in previous
sections. Click next to begin the installation of Oracle ASM 10.2.0.1. Towards the end of the installation, a
dialog box will pop up asking for the configuration script root.sh to be run on each node (shown below).

As before, root.sh is run on aquadb0 and aquadb1. Once this is done, clicking ok will result in the
configuration assistants screen being presented.

A successful installation will lead to the end of installation screen being displayed.

7. Oracle ASM Software Home Patching
Once ASM has been installed the next step in the process is to apply the 10.2.0.4 patchset to the ASM
ORACLE_HOME. Before doing this, the ASM installation would have started the ASM instances on each
node (aquadb0, aquadb1). This can be confirmed by running the command 'crs_stat -t' as the oracle user
(shown below). Prior to running the command the environment file crs.env is sourced to set the required
environment variables.
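
A minimal sketch of that check, assuming crs.env lives in the oracle user's home directory and sets ORACLE_HOME and PATH for the Clusterware home:

$ cd
$ . ./crs.env
$ crs_stat -t      -> confirm the ASM instances and nodeapps are ONLINE before stopping them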

The commands required to shutdown the asm instances are as follows:

ASM instance shutdown:

$ srvctl stop asm -n aquadb1    -> stop the ASM instance on aquadb1
$ srvctl stop asm -n aquadb0    -> stop the ASM instance on aquadb0

In addition to that the nodeapps components must also be stopped as follows:

Nodeapps shutdown:

$ srvctl stop nodeapps -n aquadb1    -> stop nodeapps on aquadb1
$ srvctl stop nodeapps -n aquadb0    -> stop nodeapps on aquadb0

Once this is done, the OUI is invoked in the same way as it was for patching the Oracle Clusterware. We are
required to specify our ASM ORACLE_HOME, as seen below.

The next screen presented will show the nodes that will be patched – basically the nodes that are part of the
cluster.

We are then presented with the summary screen, as shown in previous installations, after which the patching
commences, as shown below:

As seen before, a pop up box will appear asking for the root.sh script to be run on each node.

Once the script is run, we can start up nodeapps and ASM, as follows:

Nodeapps & ASM startup:

$ srvctl start nodeapps -n aquadb0


$ srvctl start nodeapps -n aquadb1
$ srvctl start asm -n aquadb0
$ srvctl start asm -n aquadb1

To verify that all services are up, the following command can be issued:

crs_stat output:

$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE aquadb0
ora....B0.lsnr application ONLINE ONLINE aquadb0
ora....db0.gsd application ONLINE ONLINE aquadb0
ora....db0.ons application ONLINE ONLINE aquadb0
ora....db0.vip application ONLINE ONLINE aquadb0
ora....SM2.asm application ONLINE ONLINE aquadb1
ora....B1.lsnr application ONLINE ONLINE aquadb1
ora....db1.gsd application ONLINE ONLINE aquadb1
ora....db1.ons application ONLINE ONLINE aquadb1
ora....db1.vip application ONLINE ONLINE aquadb1

The above shows that both nodeapps and asm have been started successfully.
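
As an additional hedged check, one can connect to the local ASM instance and confirm that it now reports the 10.2.0.4 banner (the +ASM1 instance name is inferred from the crs_stat output above):

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> select banner from v$version;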

8. Oracle ASM Listener Creation


The next step in the deployment of RAC is the creation of network listeners on each node of the cluster. In a
typical RAC deployment using ASM, there will be three ORACLE_HOMEs:

• Oracle Clusterware
• Oracle ASM
• Oracle RDBMS

The network listeners will run out of the ASM ORACLE_HOME and will be created using the netca (network
configuration assistant) utility. The steps required here are essentially the same as when deploying a
single-instance Oracle database; the only real difference is selecting the 'Cluster Configuration' radio
button. The next section goes through this process.

8.1. Create Node specific network listeners


This is a filler section until the content is available. The section below can be used in the meantime, as it
discusses the addition of listeners after removing them.

8.2. Deleting and Adding Listeners


There appears to be a bug (not documented at the time of this writing) whereby issuing 'srvctl modify' and
specifying the CRS_HOME on 10.2.0.2 results in the network listeners not starting. In my case the following
command had been issued:

# srvctl modify nodeapps -n aquadb0 -o /app1/oracrs/product/102 -A aquadb0-vip/

The result of this requires a look into the output of the 'crs_stat -p' command, specifically for the network
listener resources:

NAME=ora.aquadb0.LISTENER_AQUADB0.lsnr
NAME=ora.aquadb0.LISTENER_AQUADB1.lsnr

Since we have a two node RAC, we have two listeners, with each node name appended to the end of the listener
name preceded by an '_'. The above two resources have a parameter called ACTION_SCRIPT; after running the
srvctl command the value of this parameter looks like:

ACTION_SCRIPT=/app1/oracrs/product/102/bin/racgwrap -> Incorrect value: pointing to CRS Home; should be ASM Home

Prior to running ‘srvctl modify’ the value was:

ACTION_SCRIPT=/app2/oraasm/product/102/bin -> Correct value: pointing to ASM Home

The difference seen is that the second result (before running 'srvctl modify') shows the racgwrap script
pointing to the ASM_HOME - this is correct, as the listeners run from this home. Also, on trying to start
nodeapps, we would find that the network listener would not start. However, if one logs in as the oracle user,
sets the environment to the ASM_HOME and then starts the listeners manually as follows, they start without
issue:

lsnrctl start listener_aquadb0
lsnrctl start listener_aquadb1

Further analysis showed the following in the 'ora.aquadb0.LISTENER_AQUADB0.lsnr.log' log file located
under $CRS_HOME/racg directory

Listener log:

2007-04-11 09:39:58.332: [ RACG][1] [17817][1][ora.aquadb0.LISTENER_AQUADB0.

As can be seen the lsnrctl binary is being called out of the CRS_HOME – this binary is not present there, nor
should it be run from there as the network listeners are configured to run out of the ASM_HOME. It appears
that the run of ‘srvctl modify’ has resulted in the incorrect HOME being used for the network listeners.
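
A hedged way to confirm which home a listener resource will use is to query its profile directly; the output line shown is illustrative of the incorrect value described above:

$ crs_stat -p ora.aquadb0.LISTENER_AQUADB0.lsnr | grep ACTION_SCRIPT
ACTION_SCRIPT=/app1/oracrs/product/102/bin/racgwrap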

The solution to the listener problem is to delete the existing listeners and recreate them using the netca
(network configuration assistant) utility, run from the ASM_HOME of our primary node aquadb0. What follows are
the steps to achieve this:

$ . ./asm.env
$ export DISPLAY=client_pc_where_xsession_is_run_from
$ netca

On invoking netca it is important to select ‘Cluster Configuration’ before proceeding. This is seen below:

Click next to proceed.

Next a node selection screen appears, select the nodes that are part of the cluster and from where the listeners
are running. Click next when done.

Next select the task to be performed, in our case we will choose to delete the listener(s).

Next as shown above, choose to delete the listener LISTENER.

Once the listener has been deleted a message will be printed on the xterminal that was used to invoke netca
informing that the listeners on both nodes aquadb0 and aquadb1 have been deleted. Once this has been done,
start a new session of netca from where the new listener will be created. Ensure that cluster configuration is
selected as shown earlier. Next ensure that all nodes are selected, then click next to select to add a listener
from the options presented. Finally accept the default name as LISTENER. Once this is done, your xterminal
window will show messages that the listener has been created on nodes aquadb0 and aquadb1, as shown
below.

• Notes

♦ Notice that first there is a message for the deletion of each listener
♦ During our second run of netca - to add the listeners - all nodes where the listeners are created
are displayed on the xterm.

9. Oracle ASM Instance and diskgroup Creation
Prior to creating the Oracle database(s), ASM diskgroups must be created. The requirements laid out here were
as follows:

Diskgroup Name   Size     Purpose
DATA_CRM         400GB    Storage for CRM Database Datafiles
ARC_CRM          100GB    For Oracle Archivelogs (CRM)
DATA_FSYS        100GB    Storage for FSYS Database Datafiles
ARC_FSYS         50GB     For Oracle Archivelogs (FSYS)

In order to configure diskgroups, we will use the Oracle Database Configuration Assistant (DBCA). This will
be invoked as the oracle user from the oracle rdbms ORACLE_HOME, as follows:

$ cd
$ . ./db.env
$ dbca

We will be presented with the DBCA welcome page; select the option for Real Application Clusters to proceed.

Next we are presented with the Step 1 of 4 screen; here we will select the last option, 'Configure Automatic
Storage Management', as we plan to create disk groups.

Next, in Step 2 of 4, we will select the nodes that are part of this configuration

On selecting the nodes we will be prompted for the ASM sys password that we provided earlier during our
installation of ASM.

We are then taken to the ASM Diskgroups page. Here we will see that DATA_CRM is present - we created this
diskgroup during our installation of ASM.

We will now create the remaining diskgroups as per the requirements shown in the table at the beginning of this
chapter. The first group, ARC_CRM, is created by specifying logical volumes:

• rlv_rac_data09
• rlv_rac_data10

Information on these logical volumes can be found in chapter 3 - Prepare the shared storage for Oracle RAC:

• Notes

♦ Notice all disks discovered by ASM are displayed - this is by virtue of the discovery string we used
earlier while configuring ASM
♦ Select the disks/logical volumes that will be part of the disk group ARC_CRM
• Actions

♦ We check the boxes for volumes (rlv_rac_data09 and rlv_rac_data10)


♦ For redundancy we will select external

Once the disks are selected we will click 'ok' and be presented with a pop up box indicating that the diskgroup
creation is underway.

Once the disk group is created we will see a new disk group named ARC_CRM under the ASM Disk Groups screen.

We now proceed by creating the other disk groups. The following logical volumes are members of the following
disk groups:

DATA_FSYS - rlv_rac_data_11, rlv_rac_data_12


ARC_FSYS - rlv_rac_data_13

Shown below is an image of when all diskgroups have been successfully created:
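
At this point a hedged verification can also be done from one of the ASM instances, confirming that all four diskgroups are mounted with the expected sizes (the instance name is inferred as before):

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;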

10. Oracle RAC Database Home Software Install


We now proceed with installing the Oracle RDBMS (RAC) binaries into a new ORACLE_HOME, separate
from the one used for the Oracle Clusterware and Oracle ASM. The OUI is invoked from the installation
media, and we are then presented with the ‘Specify Home Details’ screen, shown below. Here we will provide
a new Oracle Home Name and a new Path on our servers for its location.

The screens presented here are similar to those seen during the installation of ASM; the difference is in the
options that are selected. So when presented with the 'Select Configuration Option' screen, we will choose to
perform a software-only installation.

Following this screen is the summary screen (as seen earlier) after which the installation starts. After
successful installation we will be prompted to run the root.sh script. This will be run as before on all nodes in
the cluster. This will complete the installation of the RDBMS (RAC) binaries.

11. Oracle RAC Software Home Patching


Once RAC has been installed, the next step in the process is to apply the 10.2.0.4 patchset to the RAC
ORACLE_HOME. The OUI is invoked in the same way as it was for patching the Oracle Clusterware. We are required
to specify our RAC (RDBMS) ORACLE_HOME, as seen below.

The next screen presented will show the nodes that will be patched – basically the nodes that are part of the
cluster.

We are then presented with the summary screen, as shown in previous installations, after which the patching
commences, as shown below:

As seen before, a pop up box will appear asking for the root.sh script to be run on each node.
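
A small hedged check after the patchset completes: sourcing the RDBMS environment file used later in this document and asking SQL*Plus for its release should now report 10.2.0.4:

$ cd
$ . ./db.env
$ sqlplus -V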

12. Oracle RAC Database Creation

12.1. Creating the RAC Database


This section to be filled in, once available.

12.2. Oracle Interconnect Configuration


Due to a change in the subnet of the private network (lan2) used by oracle, the following messages were seen
logged in the alert log files of the various database instances:

Alert log output:

Interface type 1 lan2 172.25.0.0 configured from OCR for use as a cluster inter
WARNING 172.25.0.0 could not be translated to a network address error 1
Interface type 1 lan0 172.25.3.0 configured from OCR for use as a public interf
WARNING: No cluster interconnect has been specified. Depending on
the communication driver configured Oracle cluster traffic
may be directed to the public interface of this machine.
Oracle recommends that RAC clustered databases be configured
with a private interconnect for enhanced security and
performance.

This occurred because there was a change of the network subnet mask, which took place at the o/s level and
was not changed in the OCR – where this information is also stored. This can be seen by issuing the following
command:

# $ORACLE_HOME/bin/oifcfg getif
lan2 172.25.0.0 global cluster_interconnect
lan0 172.25.3.0 global public

lan2 is our private network for the Oracle interconnect; the subnet initially assigned to it was 172.25.0.0,
whereas it should be 172.25.7.0. To remedy this, the following needs to be done (after shutting down all
services dependent on, and including, nodeapps):

(Deletes private network information from the OCR).


# $ORACLE_HOME/bin/oifcfg delif -global lan2

(Verifies private network information has been removed).


# $ORACLE_HOME/bin/oifcfg getif
lan0 172.25.3.0 global public

(Configures new private network with correct subnet, run as root user)
# $ORACLE_HOME/bin/oifcfg setif -global lan2/172.25.7.0:cluster_interconnect

(Verifies the addition of the new information)


# $ORACLE_HOME/bin/oifcfg getif
lan2 172.25.7.0 global cluster_interconnect
lan0 172.25.3.0 global public

Once this is done, oracle clusterware must be restarted. After which the dependent services can be brought
back up.
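
A hedged sketch of that restart sequence, assuming the same two nodes and that the database is registered with CRS (the database name is a placeholder; ORACLE_HOME below refers to the Clusterware home):

# $ORACLE_HOME/bin/crsctl stop crs        -> as root, on each node
# $ORACLE_HOME/bin/crsctl start crs       -> as root, on each node

$ srvctl start nodeapps -n aquadb0        -> as oracle, once the clusterware stack is up
$ srvctl start nodeapps -n aquadb1
$ srvctl start asm -n aquadb0
$ srvctl start asm -n aquadb1
$ srvctl start database -d <db_name>      -> database name is site-specific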

Further verification can be seen by connecting to both database instances and issuing the following sql:

SQL> select * from v$cluster_interconnects;

NAME            IP_ADDRESS       IS_ SOURCE
--------------- ---------------- --- -------------------------------
lan2            172.25.7.5       NO  Oracle Cluster Repository

SQL> select * from v$cluster_interconnects;

NAME            IP_ADDRESS       IS_ SOURCE
--------------- ---------------- --- -------------------------------
lan2            172.25.7.6       NO  Oracle Cluster Repository

12.3. Public Network Redundancy Configuration
Redundancy for the public network is configured in such a way that failures in lan0 (primary network
interface) will result in the lan3 (standby network interface) taking over. A failure in lan3 will result in lan4
(second standby network interface) taking over and finally a failure in lan4 will result in the VIP being
relocated to a surviving node. It is important to note that the monitoring, or rather the fail over of the public
network interface is managed by HP's Serviceguard Clusterware stack.

In order to implement this, the following two things need to be done:

1. Application of patch: 4699597 (only for HP-UX implementations)


2. Updating the OCR so that it knows that lan3 and lan4 are possible interfaces for the public network,
more so the VIP, as this is the entry point for clients.

The steps to apply patch 4699597 are documented in its readme; this section discusses the use of srvctl to
update the configuration in the OCR to include lan3 and lan4 as standby LANs.

First all services dependant on and including nodeapps are stopped. What follows next is the series of
commands issued to perform the update at the OCR.

# srvctl modify nodeapps -n aquadb0 -o /app1/oracrs/product/102 -A aquadb0-vip/


# srvctl modify nodeapps -n aquadb1 -o /app1/oracrs/product/102 -A aquadb1-vip/

A few things to note about the above commands (a reconstructed example follows this list):

• The -o option requires the value of the ORACLE_HOME for the CRS.
• The primary interface is specified first - lan0
• Each additional lan is separated by the pipe symbol '|'. This | symbol must be preceded by the backslash
'\' which acts as an escape character for the pipe symbol
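
Putting these notes together, the complete command for aquadb0 would look roughly like the following. The VIP name, IP address and netmask are taken from the 'srvctl config nodeapps' output shown further below, so treat this as an illustrative reconstruction rather than a verbatim transcript:

# srvctl modify nodeapps -n aquadb0 -o /app1/oracrs/product/102 -A aquadb0-vip/172.25.3.16/255.255.255.0/lan0\|lan3\|lan4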

To verify the above changes have taken place, the following commands are executed (as root user, from the
CRS_HOME):

#srvctl config nodeapps -n aquadb0 -a


VIP exists.:/aquadb0-vip/172.25.3.16/255.255.255.0/lan0:lan3:lan4

#srvctl config nodeapps -n aquadb1 -a


VIP exists.:/aquadb1-vip/172.25.3.17/255.255.255.0/lan0:lan3:lan4

For further verification, the crs_stat command can have its output redirected to a file as follows:

crs_stat -p > /tmp/file_name.txt

This file will contain a list of all Oracle managed resources. Since we have modified the VIP properties, we
look at the resources for the VIPs, which have the following names:

NAME=ora.aquadb0.vip
NAME=ora.aquadb1.vip

Within these two resources, the following parameter shows the interface definition:

USR_ORA_IF=lan0|lan3|lan4

Prior to the update this initially had the value:

USR_ORA_IF=lan0

Now the VIP knows it must run first on lan0, and then on lan3 or lan4, after which a relocation to another node
must take place.

An important note is that after the application of patch 4699597, the VIP now runs on the logical interface as
lan0:801 - prior to the patch it would run on lan0:1.

To test that this works, the following can be conducted:

• Pull public cable for lan0, on aquadb0


• Pull public cable for lan3, on aquadb0
• Pull public cable for lan4, on aquadb0

We will investigate this by first inspecting the VIP log file for node aquadb0; the name of this log file is
ora.aquadb0.vip.log, located under $CRS_HOME/racg.

Prior to investigating the log file it is important to note that the checking of the VIP is determined by the
parameter CHECK_INTERVAL. By default this parameter is set to 60 seconds, meaning the interface the VIP resides
on is checked every 60 seconds. Below is a snippet from the above log file that shows such a check once the
cable from lan0 is removed (the '->' annotations are commentary and not part of the log file):

VIP log:


2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: Fri Apr 13 2
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Calling getifbyip
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] getifbyip: started for 172.25.3.16
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Completed getifbyip
2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: lan0:801
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Completed with initial interface test
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Broadcast = 172.25.3.255
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] checkIf: start for if=lan0
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] chec
2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: kIf: detecte
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] checkIf: interface lan0 is a MC/Servicegu
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Performing CRS_STAT testing

Within seconds we will now see that the VIP has failed over to the standby interface lan3:

VIP first fail over:
2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: Fri Apr 13 2
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] Interface tests
Fri Apr 13 22:47:32 PST 2007 [ 8945 ] checkIf: start for if=lan3 -> Next availa
Fri Apr 13 22:47:33 PST 2007 [ 8945 ] checkIf: detected MC/ServiceGu
2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: ard Local Sw
Fri Apr 13 22:47:33 PST 2007 [ 8945 ] checkIf: interface lan3 is a MC/Servicegu
Fri Apr 13 22:47:33 PST 2007 [ 8945 ] getnextli: started for if=lan3
Fri Apr 13 22:47:33 PST 2007 [ 8945
2007-04-13 22:47:33.229: [ RACG][1] [8943][1][ora.aquadb0.vip]: ] listif: s
Fri Apr 13 22:47:33 PST 2007 [ 8945 ] listif: completed with lan0
lan1
lan2
lan3
lan4

Fri Apr 13 22:47:33 PST 2007 [ 8945 ] getnextli: completed with nextli=lan3:80
Fri Apr 13 22:47:33 PST 2007 [ 8945 ] Success exit 1

Finally the cable for lan3 is unplugged and the following is observed:

VIP second fail over:
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: Fri Apr 13
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] Calling getifbyip
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] getifbyip: started for 172.25.3.16
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] Completed getifb
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: yip lan3:80
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] Completed with initial interface test
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] Broadcast = 172.25.3.255
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] checkIf: start for if=lan3
Fri Apr 13 22:52:36 PST 2007 [ 1721
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: 4 ] checkIf
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] checkIf: interface lan3 is a MC/Serviceg
Fri Apr 13 22:52:36 PST 2007 [ 17214 ] Performing CRS_STAT testing
F
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: ri Apr 13 2
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] start part: get default gw
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] defaultgw: started
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] defaultgw: completed with
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: 172.25.3.2
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] Completed second gateway test
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] Interface tests
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] checkIf: start for if=lan0 -> First lan0
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] checkIf: det
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: ected MC/Se
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] checkIf: interface lan0 is a MC/Serviceg
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] checkIf: start for if=lan4 -> lan4 is no
Fri Apr 13 22:52:3
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: 7 PST 2007
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] checkIf: interface lan4 is a MC/Serviceg
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] getnextli: st
2007-04-13 22:52:37.619: [ RACG][1] [17212][1][ora.aquadb0.vip]: arted for i
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] listif: starting
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] listif: completed with lan0
lan1
lan2
lan3
lan4
Fri Apr 13 22:52:37 PST 2007 [ 17214 ] getnextli: completed with nextli=lan4:8
Fri Apr 13

Finally we proceed by pulling the cable for lan4:

VIP - no public network:


2007-04-13 22:54:19.289: [ RACG][1] [18910][1][ora.aquadb0.vip]: Fri Apr 13
Fri Apr 13 22:53:38 PST 2007 [ 18923 ] Calling getifbyip
Fri Apr 13 22:53:38 PST 2007 [ 18923 ] getifbyip: started for 172.25.3.16
Fri Apr 13 22:53:38 PST 2007 [ 18923 ] Completed getifb
2007-04-13 22:54:19.289: [ RACG][1] [18910][1][ora.aquadb0.vip]: yip lan4:80
Fri Apr 13 22:53:38 PST 2007 [ 18923 ] Completed with initial interface test

Fri Apr 13 22:53:38 PST 2007 [ 18923 ] Broadcast = 172.25.3.255
Fri Apr 13 22:53:38 PST 2007 [ 18923 ] checkIf: start for if=lan4 -> Check lan4
Fri Apr 13 22:53:51 PST 2007 [ 1892
2007-04-13 22:54:19.289: [ RACG][1] [18910][1][ora.aquadb0.vip]: 3 ] checkIf
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] checkIf: interface lan4 is a MC/Serviceg
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] Performing CRS_STAT testing
F
2007-04-13 22:54:19.289: [ RACG][1] [18910][1][ora.aquadb0.vip]: ri Apr 13 2
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] start part: get default gw
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] defaultgw: started
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] defaultgw: completed with
2007-04-13 22:54:19.289: [ RACG][1] [18910][1][ora.aquadb0.vip]: 172.25.3.2
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] Completed second gateway test
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] Interface tests -> Begin interface check
Fri Apr 13 22:53:51 PST 2007 [ 18923 ] checkIf: start for if=lan0 -> Start Chec
Fri Apr 13 22:54:04 PST 2007 [ 18923 ] checkIf: det
2007-04-13 22:54:19.290: [ RACG][1] [18910][1][ora.aquadb0.vip]: ected MC/Se
Fri Apr 13 22:54:04 PST 2007 [ 18923 ] checkIf: interface lan0 is a MC/Serviceg
Fri Apr 13 22:54:04 PST 2007 [ 18923 ] checkIf: start for if=lan3 -> Proceed wi
Fri Apr 13 22:54:1
2007-04-13 22:54:19.290: [ RACG][1] [18910][1][ora.aquadb0.vip]: 7 PST 2007
Fri Apr 13 22:54:17 PST 2007 [ 18923 ] checkIf: interface lan3 is a MC/Serviceg
Fri Apr 13 22:54:17 PST 2007 [ 18923 ] DEBUG: FAIL_
2007-04-13 22:54:19.290: [ RACG][1] [18910][1][ora.aquadb0.vip]: WHEN_ALL_LI
Invalid parameters, or failed to bring up VIP (host=aquadb0)

Finally when all interfaces are unavailable the VIP will relocate to a surviving node, in our case aquadb1. On
aquadb1, under the CRS_HOME/racg directory there is a VIP log file for aquadb0, called:
ora.aquadb0.vip.log. This contains the following entries:

VIP relocation to available node:
2007-04-13 22:55:11.745: [ RACG][1] [24512][1][ora.aquadb0.vip]: Fri Apr 13
Fri Apr 13 22:54:52 PST 2007 [ 24523 ] Calling getifbyip
Fri Apr 13 22:54:52 PST 2007 [ 24523 ] getifbyip: started for 172.25.3.16
Fri Apr 13 22:54:52 PST 2007 [ 24523 ] Completed getifb
2007-04-13 22:55:11.745: [ RACG][1] [24512][1][ora.aquadb0.vip]: yip
Fri Apr 13 22:54:52 PST 2007 [ 24523 ] switched to standby : start/check operat
Fri Apr 13 22:54:56 PST 2007 [ 24523 ] Completed with initial interface test
Fri Apr 13 22:54:56 PST 2007 [ 24523 ] Broadcast = 172.25.3.255
Fri Apr 13 22:54:56 PST 20
2007-04-13 22:55:11.746: [ RACG][1] [24512][1][ora.aquadb0.vip]: 07 [ 24523
Fri Apr 13 22:54:56 PST 2007 [ 24523 ] checkIf: start for if=lan0
Fri Apr 13 22:55:09 PST 2007 [ 24523 ] checkIf: detected MC/ServiceGuard Local
Fri Apr 13 22:55:09 PST 2007 [ 24523 ] checkIf: interface la
2007-04-13 22:55:11.746: [ RACG][1] [24512][1][ora.aquadb0.vip]: n0 is a MC
Fri Apr 13 22:55:10 PST 2007 [ 24523 ] getnextli: started for if=lan0
Fri Apr 13 22:55:10 PST 2007 [ 24523 ] listif: starting
Fri Apr 13 22:55:10 PST 2007 [ 24523 ] listif: completed with lan0

2007-04-13 22:55:11.746: [ RACG][1] [24512][1][ora.aquadb0.vip]: lan0:801 ->


lan1
lan2
lan3
lan4
Fri Apr 13 22:55:10 PST 2007 [ 24523 ] getnextli: completed with nextli=lan0:8

Fri Apr 13 22:55:10 PST 2007 [ 24523 ] Success exit 1

This concludes the check of the redundancy for the public network.
