
Oracle 10g RAC Installation/Configuration Steps on Solaris

Steps to install Oracle 10g RAC Software on Solaris

Pre-Install Tasks

1) Ensure that external shared disks are allocated (preferably RAW) for
storing Oracle Clusterware files (OCR and voting) and DB files

2) Ensure IPs are set up properly on each node of cluster


• One private IP address for each node to serve as the private interconnect
– must be defined in /etc/hosts on each node – will be used during the
Clusterware install
• One public IP address for each node to be used as the Virtual IP (VIP) for
client connections and connection failover – should have an entry in
/etc/hosts, but SHOULD NOT BE CONFIGURED/UP (test by pinging) – CRS will
manage the VIP after the install
• One fixed public host name/IP address for each node
• Ensure that the network interface names are the same on all nodes of
the RAC
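
For illustration only, the /etc/hosts entries on each node might follow a
layout like the one below (hostnames and addresses here are placeholders,
not the values used in this install):

# Public
192.0.2.11      racnode1
192.0.2.12      racnode2
# Private interconnect
10.0.0.11       racnode1-priv
10.0.0.12       racnode2-priv
# Virtual IPs - defined here but NOT plumbed/up before the Clusterware install
192.0.2.21      racnode1-vip
192.0.2.22      racnode2-vip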

3) Ensure all necessary IDs/groups created


• oinstall group
• dba group
• oracle UNIX ID with oinstall as primary group and dba as secondary group
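
A minimal sketch of creating the groups and ID on Solaris (the home
directory and shell below are assumptions; keep UIDs/GIDs identical on all
nodes and adjust to site standards):

# as root on EACH node
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba -m -d /export/home/oracle -s /bin/ksh oracle
passwd oracle
id -a oracle      # verify primary group oinstall and secondary group dba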

4) Ensure the OS is configured properly according to Oracle’s install guide, e.g. verify that the required packages are installed:


pkginfo -i SUNWarc

5) Ensure all OS patches are in place – refer to Oracle’s install guide

6) Check Kernel Parameters – refer to Oracle’s install guide

7) Check UDP Parameters


udp_recv_hiwat and udp_xmit_hiwat must be at least 65536 bytes.
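
A sketch of checking and raising these values with ndd (run as root; ndd
settings do not persist across a reboot, so they also need to go into a
system startup/ndd configuration script):

ndd -get /dev/udp udp_xmit_hiwat
ndd -get /dev/udp udp_recv_hiwat
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536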

• If using Sun Cluster, you must install the UDLM patch provided by
Oracle on each node that is part of your current cluster installation. You
must install the UDLM patch before you install Oracle Clusterware. Oracle
Database 10g release 10.2.0.3 requires asynchronous UDLM 3.3.4.8.
pkginfo -l ORCLudlm | grep VERSION   – to determine the currently installed version

8) Create RSA and DSA keys on each node for BOTH oracle IDs, and set up the
RSA and DSA public keys in the authorized_keys file in the .ssh directory on
every node (a sketch follows the NOTE below).

NOTE: Ensure no banners are displayed when doing ssh with the oracle ID, or the
install will fail. The banner can be put back after the install is done.
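
A minimal sketch of the key setup for the oracle ID (paths are the usual
OpenSSH defaults; merge the public keys from every node into every node's
authorized_keys):

# as oracle on EACH node
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
ssh-keygen -t dsa -N "" -f $HOME/.ssh/id_dsa
cat $HOME/.ssh/id_rsa.pub $HOME/.ssh/id_dsa.pub >> $HOME/.ssh/authorized_keys
# append the other node's public keys into authorized_keys as well, then:
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys
# test in every direction (including each node to itself) so no prompts remain
ssh othernode date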

9) Ensure DBAs have access to run all necessary scripts as ROOT during
install and CRS management.
Example:
(root) /usr/bin/su - oracle
(root) /etc/init.d/init.crs
(root) /etc/init.d/init.crsd
(root) /etc/init.d/init.cssd
(root) /usr/bin/ls -l /var/core
(root) /usr/bin/ls -ltr /var/core
(root) /usr/bin/view /var/cron/syslog.log
(root) /usr/bin/view /var/cron/log
(root) /u001/app/oracle/product/oraInventory/orainstRoot.sh
(root) /u001/app/oracle/product/rdbms/10.2.0/root.sh
(root) /u001/app/oracle/product/crs/10.2.0/root.sh
(root) /u001/app/oracle/product/asm/10.2.0/root.sh
(root) /u001/app/oracle/product/crs/10.2.0/bin/crsctl
(root) /u001/app/oracle/product/crs/10.2.0/bin/vipca
(root) /u001/app/oracle/product/crs/10.2.0/install/root102.sh
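
One way to grant this access is with sudo; a hypothetical /etc/sudoers
fragment is sketched below (paths follow the example list above, and policy
details such as NOPASSWD are a site decision):

Cmnd_Alias ORACRS = /etc/init.d/init.crs, /etc/init.d/init.crsd, \
                    /etc/init.d/init.cssd, \
                    /u001/app/oracle/product/crs/10.2.0/bin/crsctl, \
                    /u001/app/oracle/product/crs/10.2.0/bin/vipca
oracle  ALL = (root) ORACRS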

10) Set up oracle .profile so environment is correct


ORACLE_OWNER=oracle
export ORACLE_OWNER
ORACLE_SID=xxxx
export ORACLE_SID
ORACLE_BASE=/u001/app/oracle/product
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/rdbms/10.2.0
export ORACLE_HOME
ORA_RDBMS_HOME=$ORACLE_BASE/rdbms/10.2.0
export ORA_RDBMS_HOME
ORA_CRS_HOME=$ORACLE_BASE/crs/10.2.0
export ORA_CRS_HOME
ORA_ASM_HOME=$ORACLE_BASE/asm/10.2.0
export ORA_ASM_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
PATH=.:/usr/bin:/usr/sbin:/etc:/usr/ccs/bin:/usr/openwin/bin:/usr/local/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$ORA_ASM_HOME/bin:$PATH
export PATH

11) Prior to running cluvfy, ensure that ssh and scp reside in
/usr/local/bin; if they do not, have the SAs create links in /usr/local/bin to
their actual location.
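
A sketch of the links, assuming the binaries live in /usr/bin (verify the
actual locations first with: which ssh; which scp):

# as root on each node
ln -s /usr/bin/ssh /usr/local/bin/ssh
ln -s /usr/bin/scp /usr/local/bin/scp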

12) Create links for the CRS Registry (crsocr) and Voting disk (crsvote)
pointing to the raw devices. These links must be set up on each node of the
RAC in exactly the same location (a sketch follows below).
Run ls -lL to ensure the underlying privileges on the files resemble the
following:
the crsocr raw device must be owned by root:oinstall with permissions of
640, and crsvote must be owned by oracle:oinstall with permissions of 660.
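
A sketch of the link and permission setup (the raw device names and the
/u001/app/oracle/crs_files location below are placeholders; substitute the
devices actually allocated):

# as root on EACH node
ln -s /dev/rdsk/cXtYdZs1 /u001/app/oracle/crs_files/crsocr
ln -s /dev/rdsk/cXtYdZs3 /u001/app/oracle/crs_files/crsvote
chown root:oinstall /dev/rdsk/cXtYdZs1
chmod 640 /dev/rdsk/cXtYdZs1
chown oracle:oinstall /dev/rdsk/cXtYdZs3
chmod 660 /dev/rdsk/cXtYdZs3
ls -lL /u001/app/oracle/crs_files/crsocr /u001/app/oracle/crs_files/crsvote   # -L shows the underlying device ownership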

13) Run CRS Verify to do verification


runcluvfy.sh stage -pre crsinst -n node_list   – ensures install prereqs are ok
runcluvfy.sh comp nodecon -n node_list   – ensures node connectivity is ok
runcluvfy.sh stage -post hwos -n node_list -verbose   – ensures hardware/OS setup is ok
runcluvfy.sh comp sys -n node_list -p crs -osdba osdba_group -orainv orainv_group -verbose   – ensures OS prereqs are ok
• Ignore the error on Solaris 10 if there is a warning that SUNWsprox is
missing – it is not needed
• Ignore a UDLM failure if the installed version is greater than
what is required – 3.3.4.8 (220 has UDLM Version Dev Release
03/09/06, 64bit 3.3.4.9 reentrant, async libskgxn2.so)
• If you receive an error about not finding suitable VIPs, refer to
MetaLink Note 338924.1
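
For example, using the node names from this cluster, the pre-Clusterware
checks could be run as follows (the staging directory prefix is a
placeholder):

cd /<staging_dir>/Disk1/cluvfy
./runcluvfy.sh stage -pre crsinst -n ustsmdbcvtc017,ustsmdbcvtc018 -verbose
./runcluvfy.sh comp nodecon -n ustsmdbcvtc017,ustsmdbcvtc018 -verbose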

Installation Steps

1) Install CRS 10.2.0.1 into a separate ORACLE_HOME – this only needs to be done
on one node, as the GUI will automatically install it on all nodes
a) Set ORACLE_HOME=ORA_CRS_HOME
b) download 10gr2_cluster_sol.cpio from Oracle site (example using
Solaris file name)
c) cpio -idmv < 10gr2_cluster_sol.cpio
d) run CVU command:
…./Disk1/cluvfy/runcluvfy.sh stage -pre crsinst -n node_list
e) determine following attributes for each node in cluster:
cluster name = crscognos
public node name = ustsmdbcvtc017
ustsmdbcvtc018
private node name = ustsmdbcvtc017-dbint1
ustsmdbcvtc018-dbint1
virtual node name (public) = ustsmdbcvtc017-oravip
ustsmdbcvtc018-oravip

f) Choose cluster name – crscognos

g) Choose the IP addresses associated with each node (hit ADD to include the
2nd node):
1) Public IP/node name for the node
2) VIP/node name for CRS
3) Private IP/node name used for the interconnect

h) The next screen is the Network Usage screen.

Answer the network usage questions about subnets – ALL subnets should be set to
DO NOT USE, except the subnets for the Public Hostname and Private Interconnect.

Run ifconfig -a on the server to confirm you have the proper interface names
for the IPs that have been assigned as the public hostname and private
interconnect.

i) Specify location of OCR with external redundancy

j) Specify location of Voting disk with external redundancy

k) Run the following scripts as root on each NODE of the cluster:
1) $ORACLE_BASE/oraInventory/orainstRoot.sh
2) $ORA_CRS_HOME/root.sh
• Upon execution of root.sh on the last node you might get the
warning message “Public Interfaces should be used to configure
Virtual IPs”. In that case, run the $ORA_CRS_HOME/bin/vipca
command as the root user on the last node. Ensure the DISPLAY
variable is set on that node, as vipca is displayed on a GUI
screen. Configure the Virtual IP addresses correctly, otherwise
one of the cluster services will not run. Ensure you specify the
correct IP address, subnet mask and server name.

l) Verify CRS install


crsctl check crs   – should return ‘healthy’ for all 3 components (CSS, CRS, EVM)
crs_stat -t   – should show ONLINE for gsd/ons/vip on all nodes
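
On a healthy 10.2 cluster the check typically comes back along these lines
(exact wording can vary slightly by patch level):

crsctl check crs
  CSS appears healthy
  CRS appears healthy
  EVM appears healthy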

2) Install 10.2.0.1 ASM


a) Set ORACLE_HOME=ORA_ASM_HOME
b) using same software as RDBMS, kick off ./runInstaller
c) choose Enterprise Edition
d) specify ASM Home Name (OraASM10g) and Home Path
e) make sure all nodes are selected under “Specify Hardware
Cluster Installation Mode”
f) Select “Configure Automatic Storage Management” and
enter in ASM SYS password
g) Start install
h) Cancel out of the Oracle Net Configuration Assistant screen (netca)
i) Cancel out of the Oracle Database Configuration Assistant
(dbca)
j) Run root.sh on both nodes as root

3) Install 10.2.0.1 RDBMS


a) Set ORACLE_HOME=ORA_RDBMS_HOME
b) gunzip *.gz file
c) cpio -idmv < *.cpio
d) run following CVU command:
……/Disk1/cluvfy/runcluvfy.sh stage -pre dbinst -n node_list -r 10gR2 -osdba
osdba_group -verbose
e) Shut down all services/instances/listeners/nodeapps in the Oracle
home on any node that might be accessing a database:
srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
srvctl stop instance -d db_name -i inst_name
srvctl stop listener -n node [-l listenername]
srvctl stop nodeapps -n node
f) ./runInstaller
g) Choose Enterprise Edition
h) Specify RDBMS Home Name and Path
i) Make sure all nodes are selected under “Specify Hardware Cluster
Installation Mode”
j) Install Software Only
k) Install
l) Run root.sh on both nodes as root

4) Upgrade to CRS 10.2.0.4 - p6810189_10204_SOLARIS-64.zip


a) Set $ORACLE_HOME to $ORA_CRS_HOME
b) Unzip *.zip
c) Shut down all services/instances/nodeapps in the Oracle home on
all nodes that might be accessing a database. Shut down ASM as well:
srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
srvctl stop instance -d db_name -i inst_name
srvctl stop listener -n node [-l listenername]
srvctl stop asm -n node
srvctl stop nodeapps -n node
d) Shut down CRS as the root user:
$ORA_CRS_HOME/bin/crsctl stop crs
e) If using Sun Cluster 3.x - may need to upgrade UDLM patch - look
at Disk1/racpatch/README.udlm - Oracle Database 10g release 10.2.0.4 requires
UDLM asynchronous 3.3.4.9. UDLM with asynchronous libskgxn2.so is included
with this patch set.
f) ./Disk1/runInstaller
Ensure it is pointing to the proper Oracle home
Click Next when on the Select Nodes screen
Run $ORACLE_HOME/root.sh as root on each node (not sure if you
will be prompted during the CRS install)
g) Optional – run $ORACLE_HOME/install/changePerm.sh
h) As root, shut down the clusterware on EACH NODE of the cluster:
$ORA_CRS_HOME/bin/crsctl stop crs
i) As root, start the Oracle Clusterware on EACH NODE of the cluster:
$ORA_CRS_HOME/install/root102.sh
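
To confirm the upgrade took effect cluster-wide, the clusterware versions can
be queried; both should report 10.2.0.4 once root102.sh has completed on every
node:

$ORA_CRS_HOME/bin/crsctl query crs softwareversion
$ORA_CRS_HOME/bin/crsctl query crs activeversion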

5) Upgrade to ASM 10.2.0.4 - p6810189_10204_SOLARIS-64.zip


a) Set $ORACLE_HOME to $ORA_ASM_HOME
b) Unzip *.zip
c) Shut down all services/instances/nodeapps in the Oracle home on all
nodes that might be accessing a database. Shut down ASM as well:
srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
srvctl stop instance -d db_name -i inst_name
srvctl stop listener -n node [-l listenername]
srvctl stop asm -n node
srvctl stop nodeapps -n node

d) Shut down CRS as the root user:


$ORA_CRS_HOME/bin/crsctl stop crs
e) ./Disk1/runInstaller
Ensure it is pointing to the proper Oracle home
Click Next when on the Select Nodes screen
Run $ORACLE_HOME/root.sh when prompted
f) Optional – run $ORACLE_HOME/install/changePerm.sh
g) As root, shut down the clusterware: $ORA_CRS_HOME/bin/crsctl stop crs
h) As root, start the Oracle Clusterware on the patched node:
/etc/init.d/init.crs start

6) Upgrade RDBMS software ONLY (no DBs) to 10.2.0.4 - p6810189_10204_SOLARIS-64.zip

a) Set $ORACLE_HOME to $ORA_RDBMS_HOME
b) Unzip *.zip
c) Shut down all services/instances/nodeapps in the Oracle home on
all nodes that might be accessing a database. Shut down ASM as well:
srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
srvctl stop instance -d db_name -i inst_name
srvctl stop listener -n node [-l listenername]
srvctl stop asm -n node
srvctl stop nodeapps -n node

d) Shut down CRS as the root user:


$ORA_CRS_HOME/bin/crsctl stop crs
e) ./Disk1/runInstaller
Ensure it is pointing to the proper Oracle home
Click Next when on the Select Nodes screen
Run $ORACLE_HOME/root.sh when prompted
f) Optional – run $ORACLE_HOME/install/changePerm.sh
g) As root, shut down the clusterware: $ORA_CRS_HOME/bin/crsctl stop crs
h) As root, start the Oracle Clusterware on the patched node:
/etc/init.d/init.crs start

7) Apply the latest OPatch (6880880) to all 3 Oracle homes

8) Apply CRS Bundle Patch #3 for 10.2.0.4 (7715304) to the CRS, ASM and RDBMS
Oracle homes
Reference Doc ID 405820.1 for known issues

a) If using Sun Cluster 3.x, you may need to upgrade the UDLM patch –
look at Disk1/racpatch/README.udlm. Oracle Database 10g release 10.2.0.4
requires asynchronous UDLM 3.3.4.9. UDLM with the asynchronous libskgxn2.so is
included with this patch set.
b) Shut down all services/instances/nodeapps in the Oracle home
on all nodes that might be accessing a database. Shut down ASM as well:
srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
srvctl stop instance -d db_name -i inst_name
srvctl stop listener -n node [-l listenername]
srvctl stop asm -n node
srvctl stop nodeapps -n node

c) Shut down CRS as the root user:


$ORA_CRS_HOME/bin/crsctl stop crs

Perform steps d-n on each node


d) Verify Oracle Inventory
opatch lsinventory -detail -oh $ORA_CRS_HOME
opatch lsinventory -detail -oh $ORA_ASM_HOME
opatch lsinventory -detail -oh $ORA_RDBMS_HOME
e) Unzip 7715304.zip
f) Run the following as root:
…/custom/scripts/prerootpatch.sh -crshome $ORA_CRS_HOME -crsuser <username>
g) Run the following as the CRS user:
../custom/scripts/prepatch.sh -crshome $ORA_CRS_HOME
h) Run the following as the RDBMS user:
../custom/server/7715304/custom/scripts/prepatch.sh -dbhome $ORA_RDBMS_HOME
../custom/server/7715304/custom/scripts/prepatch.sh -dbhome $ORA_ASM_HOME
i) Run the following as the CRS user:
opatch napply -local -oh $ORA_CRS_HOME -id 7715304,7421126
j) Run the following as the RDBMS user:
opatch napply …/7715304/custom/server/ -local -oh $ORA_RDBMS_HOME -id 7715304,7421126
opatch napply …/7715304/custom/server/ -local -oh $ORA_ASM_HOME -id 7715304,7421126

Known Issues for 10.2.0.4 CRS Bundle Patch #3:


ALL POSTPATCH.SH scripts (steps k – m)
When running ./postpatch.sh -crshome $ORA_CRS_HOME the following warning is seen:
On HP-UX: selected locales are not available
On Solaris: couldn't set locale correctly

WORKAROUND:
Change "LC_ALL=en_US.UTF-8" to "LC_ALL=C" in postpatch.sh and rerun it.

k) Run the following as the CRS user:


…/custom/scripts/postpatch.sh -crshome $ORA_CRS_HOME

l) Run the following as RDBMS user:


…/custom/server/7715304/custom/scripts/postpatch.sh -dbhome $ORA_RDBMS_HOME
…/custom/server/7715304/custom/scripts/postpatch.sh -dbhome $ORA_ASM_HOME

m) Run the following as root:


…/custom/scripts/postrootpatch.sh -crshome $ORA_CRS_HOME
n) Confirm installation of patch
opatch lsinventory -detail -oh $ORA_CRS_HOME
opatch lsinventory -detail -oh $ORA_RDBMS_HOME
opatch lsinventory -detail -oh $ORA_ASM_HOME

9) Apply latest CPU Patch - Solaris 64 bit = 7592346

On ALL NODES, apply it to the following:


a) ASM – set ORACLE_HOME=ORA_ASM_HOME
b) RDBMS – set ORACLE_HOME=ORA_RDBMS_HOME

Post-Installation Steps
1. Configure Listener

a) Specify ASM home as Oracle home


b) Type netca and the GUI screen will open
c) Select “Cluster Configuration” on the type of network
configuration screen
d) Select all the nodes to configure and then click Next
e) Select Listener Configuration and then click Next
f) Select “Add” and then click Next
g) Select TCP and then click Next
h) Type the listener name, e.g. LISTENER
i) Select the port number
j) Start the listener
At this point the listener is configured on all the nodes.
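
A quick way to confirm: check the listener resources registered with CRS and
the listener itself (the LISTENER_<NODENAME> naming is the netca default and
may differ if a custom name was typed above):

$ORA_CRS_HOME/bin/crs_stat -t | grep lsnr
lsnrctl status LISTENER_USTSMDBCVTC017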

2. Configure ASM

a) Specify ASM home as Oracle home


b) Type dbca and the GUI screen will appear
c) Select “Oracle Real Application Clusters database” and then
click Next
d) Select “Configure Automatic Storage Management” as the type
of operation you want to perform and then click Next
e) Select all the nodes and then click Next
f) Specify the SYS password, ensure the PFILE path is correct
and then click Next
g) A dialog window will appear stating “DBCA will now create and
start the ASM instance. After the ASM instance is started you
can create the disk groups for database storage.”
h) Create 2 disk groups, one for DATA and another for FRA
i) Click the Create New button, select the diskgroup path, then
select External redundancy if using raw devices (or Normal or
High depending on the storage). This is for FRA.
j) Repeat the above steps for the DATA group
k) Once done, click the Finish button; at this point it will
mount the disk groups that you have created
l) If it is successful then ASM configuration is done
m) Next you have to create the database.
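
To verify ASM is registered and the disk groups are mounted (node names per
this cluster; the SQL check assumes the environment points at the ASM home):

srvctl status asm -n ustsmdbcvtc017
srvctl status asm -n ustsmdbcvtc018
# from an ASM instance:
# sqlplus / as sysdba
# SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;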

3. Configure the Database – should be done through dbca so the database can be
added to the cluster properly

a) Specify the RDBMS home as the Oracle home
b) Type dbca and GUI screen will open
c) The Welcome screen opens; select ‘Oracle Real
Application Clusters database’ and then click
‘Next’
d) Select ‘Create a Database’ and click ‘Next’.
e) Click ‘Select All’ button and ‘Next’
f) Select ‘Custom Database’ button and ‘Next’
g) Enter ‘CGNSP’ in ‘Global Database Name:’ field
h) Enter ‘CGNSP’ in ‘SID Prefix’ field and click
‘Next’
i) Uncheck ‘Configure the Database with Enterprise
Manager’ and click ‘Next’
j) Select ‘Use the Same Password for All Accounts’,
enter the appropriate password and click
‘Next’
k) Select ‘ASM’ and click ‘Next’
l) Specify ASM SYS password and click ‘OK’
m) Select DATA disk group to use as storage for the
database and click ‘Next’
n) Select ‘Use Common Location for All Database
Files’ and ensure +DATA is in Database Files
Location, then click ‘Next’
o) Choose ‘Specify Flash Recovery Area’ field, enter
‘+FRA’ in ‘Flash Recovery Area:’ field and set
10000 MB in ‘Flash Recovery Area Size:’ field and
click ‘Next’
p) Ensure ‘Enable Archiving’ is not checked (archiving
will be enabled later)
q) Click ‘Next’ and continue to the next page
r) Click ‘Next’ and continue to the next page
s) Click ‘Next’ and continue to the next page
t) On Memory Tab, ensure Typical is checked.
u) On Sizing Tab leave defaults.
v) On Character Sets Tab, choose UTF8 for DB
character set and leave other defaults.
w) On Connection Mode Tab, leave default of Dedicated
Server Mode
x) Click ‘Next’
y) Click the ‘Controlfile’ entry on the left side and
modify the file locations for the controlfiles,
ensuring they are in +DATA, with one in +FRA
z) Click the ‘Datafiles’ entry on the left and edit the
necessary entries under the ‘File Directory’ column,
ensuring they are in +DATA
aa) Click ‘Redo Log Groups’ and ensure the redo log
members are in +DATA
bb) Click ‘Next’
cc) Select ‘Create Database’ and ‘Generate Database
Creation Scripts’, then click ‘Finish’
dd) Click ‘OK’ when provided the summary
ee) Click ‘OK’ when told the script was generated
successfully
ff) A progress window will display the current operation
and status. When it completes, click ‘Exit’ to
continue. A final message will be displayed listing
the names of the cluster database instances that have
been started.
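
A sketch of the post-creation checks, using the CGNSP name from above (run as
oracle with ORACLE_HOME set to the RDBMS home):

srvctl status database -d CGNSP
srvctl config database -d CGNSP
$ORA_CRS_HOME/bin/crs_stat -t    # database/instance resources should show ONLINE on all nodes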
