Server Information
In this class each student is provided with three machines. We will perform the two-node RAC installation first. We will use the third node for the node addition and removal classes.
RPM Requirement
64-bit (x86_64) installations:
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
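A quick way to check this list is to query each package with rpm. A minimal sketch, with versions trimmed (presence only; compare versions against the list above yourself):

```shell
# Check which required packages are installed (names only, versions trimmed).
# Run on every node. On a non-RPM system every package is reported as missing.
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel make pdksh \
         sysstat unixODBC unixODBC-devel; do
  rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done
```

Install any package reported as missing before starting the installer; the installer's own prerequisite check will also flag them.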
Software Requirements
The following software is installed on your lab machine:
1) Putty
2) Xmanager
From home you will need:
1) Putty
2) NX Client for Windows (please refer to the nxclient setup mail for setting it up)

Additional Information
SCAN cluster name : scan-cluster.wysheid
$ nslookup scan-cluster.wysheid   -- should return 3 IP addresses
Cluster name : scan-cluster
Software location : /wysheid on each node
The SCAN cluster name will be provided separately for each installation. By convention, we use the # symbol for commands to be executed as the root user, and the $ symbol for commands to be executed as the grid or oracle user.

Prerequisite Check
1) Minimum 10 GB of free space is required on all nodes of the cluster.
# df -h
2) Minimum 1.5 GB of RAM is required.
# free -m
3) Make sure the /etc/hosts file contains the information about all the nodes involved in the installation. You can edit the /etc/hosts file as the root user.
You must have an entry similar to this in each machine's /etc/hosts file:
10.17.57.121 wysheid11gr21 wysheid11gr21.wysheid
4) Verify that /etc/resolv.conf points to the DNS server. You should have an entry similar to this:
nameserver 10.17.57.155
Log in as the root user and edit the file if the nameserver entry is missing or incorrect.
5) Make sure you are able to resolve the scan cluster name given to you.
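The checks above can be sketched as one script, run on every node. The thresholds follow the requirements above; the nameserver check only verifies that an entry exists, not that it points at the right DNS server:

```shell
#!/bin/sh
# Prerequisite check sketch: free space on /, total RAM, and resolv.conf entry.
REQ_DISK_MB=10240   # 10 GB
REQ_RAM_MB=1536     # 1.5 GB

disk_mb=$(df -Pm / | awk 'NR==2 {print $4}')     # MB available on /
ram_mb=$(free -m | awk '/^Mem:/ {print $2}')     # total MB of RAM

[ "$disk_mb" -ge "$REQ_DISK_MB" ] || echo "WARN: only ${disk_mb} MB free on /"
[ "$ram_mb"  -ge "$REQ_RAM_MB"  ] || echo "WARN: only ${ram_mb} MB of RAM"
grep -q '^nameserver' /etc/resolv.conf || echo "WARN: no nameserver entry in /etc/resolv.conf"
echo "prerequisite checks done"
```

Also confirm that nslookup of the SCAN name returns three IP addresses before continuing.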
Common Terminology
$GRID_HOME : the location at which the Grid Infrastructure is installed.
$ORACLE_HOME : the location at which the RDBMS software is installed.
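These variables can be set in each user's shell profile. A minimal sketch, assuming the install locations used later in this guide (put the GRID_HOME line in the grid user's profile and the ORACLE_HOME lines in the oracle user's profile):

```shell
# Hypothetical profile entries; paths match the locations chosen later in this guide.
export GRID_HOME=/wysheid/grid_home
export ORACLE_HOME=/wysheid/db_home1
export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH
echo "$ORACLE_HOME"
```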
Installation Steps
User creation. These steps have to be performed on both nodes of the cluster.
# groupadd -g 500 oinstall
# groupadd -g 501 dba
# groupadd -g 503 asmadmin
# groupadd -g 504 asmdba
# groupadd -g 505 asmoper

# useradd -u 501 -g oinstall -G asmadmin,asmoper,asmdba grid
# useradd -u 500 -g oinstall -G dba,asmdba oracle
Create the required directories, including the oracle base, and grant the required permissions (this has to be done on all nodes of the cluster).
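The exact commands are not spelled out above; a minimal sketch, assuming the oracle base, grid home, and inventory paths used later in this guide (BASE points at a scratch directory here so the sketch runs outside the lab; use /wysheid on the lab machines, as root):

```shell
# Hypothetical directory layout: oracle base, grid home, and inventory.
BASE=${BASE:-/tmp/wysheid-demo}                       # use BASE=/wysheid in the lab
mkdir -p "$BASE/11.2.0" "$BASE/grid_home" "$BASE/orainventory"
chown -R grid:oinstall "$BASE" 2>/dev/null || true    # grid user may not exist outside the lab
chmod -R 775 "$BASE"
ls "$BASE"
```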
# /usr/sbin/oracleasm configure -i
When prompted for the ASM owner, provide: grid
When prompted for the ASM group, provide: asmadmin
#/usr/sbin/oracleasm init
Check the status of the oracleasm libraries. The status should be ON before proceeding to the next steps.
#/usr/sbin/oracleasm status
Create ASM disks for the OCR/voting disk and the database (this has to be done from any one of the nodes). The following partitions are used:
o /dev/sdb1
o /dev/sdc1
You can confirm the above with the following command, which displays the partition names and sizes:
# fdisk -l
Create the ASM disks as follows. This has to be done from one of the nodes.
# /usr/sbin/oracleasm createdisk VOTE1 /dev/sdb1   -- OCR/voting disk
# /usr/sbin/oracleasm createdisk DATA1 /dev/sdc1   -- database disk
Scan and list the disks (do this from both nodes):
# /usr/sbin/oracleasm scandisks
# /usr/sbin/oracleasm listdisks   -- this should list all the disks created above
We need to perform the installation from the first node. The installer will extend the installation to the other nodes as well.
$ log in as the grid user
$ cd /wysheid/grid   -- assuming this is the directory where the grid software is unzipped
$ set the DISPLAY as explained in the lab; make sure it is working with the xclock command
$ start the installation as follows
(i) $ ./runInstaller
(ii) Select the option "Install grid infrastructure for a cluster"
(iii) Select the ADVANCED option
(iv) Select the language as ENGLISH
(v) Uncheck the GNS option
(vi) Provide the cluster name as scan-cluster, the SCAN name as scan-cluster.wysheid, and the SCAN port as 1521
(vii) On the Configure Node page, the details of the node from which the installation was started are available by default. Click Add and provide the public name and VIP name for the second node as specified in the /etc/hosts file.
Click on SSH Connectivity and set up SSH. Test it and make sure it is working fine. You have to provide the password of the grid user to set up SSH.
(viii) On the network configuration page, select eth1 as the private interconnect
(ix) On the Configure Storage page, select ASM
(x) On the Create ASM Disk Group page, give the disk group name as CRS, select the redundancy as external, and select ORCL:VOTE1 as the disk
(xi) On the Specify ASM Password page, give a common password for all users
(xii) Do not use IPMI
(xiii) Configure the operating system privileged groups as follows:
ASM instance administrator : asmadmin
ASM database administrator : asmdba
ASM operator : asmoper
Please pay special attention to this page, else the installation may fail.
(i) Set the installation locations as follows:
Oracle base = /wysheid/11.2.0
Oracle home = /wysheid/grid_home   -- this is your grid infrastructure home
(ii) Specify the inventory location as /wysheid/orainventory
(iii) Run the prerequisite checks and fix the errors. Use the fixup scripts if necessary.
(iv) Click on FINISH to start the installation.
(v) Run orainstRoot.sh and root.sh on both nodes as specified on the GUI. Please be careful at this step; you have to do it in order, i.e. complete orainstRoot.sh on node1 and then node2, then run root.sh on node1 and then node2.
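The required order can be sketched as a dry run that only echoes the commands (the script paths are assumptions based on the inventory and grid home locations chosen above; the node names are illustrative):

```shell
# Dry run: print the root scripts in the order they must be executed.
# Finish the first script on all nodes before starting the second.
for script in /wysheid/orainventory/orainstRoot.sh /wysheid/grid_home/root.sh; do
  for node in node1 node2; do
    echo "run on $node: $script"
  done
done
```

Run each printed command as root on the named node, waiting for it to finish before moving to the next.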
$ log in as the grid user to the first node
$ . oraenv   -- give +ASM1 as the SID name
$ crsctl check cluster -all
If the installation is successful, we can see the following background processes online on both nodes.
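For reference, a successful crsctl check cluster -all typically reports each node like this (node names are illustrative; the CRS message numbers are the usual 11gR2 ones):

```text
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
```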
To avoid the bug which fails to display the disk group for the oracle user while running DBCA, perform the following:
Log in as the grid user to the first node and set the DISPLAY.
$ . oraenv   (note there is a space after the dot; when prompted, give +ASM1)
$ asmca
On the GUI, select the create new disk group option. Give the name DATA for the disk group. Choose external redundancy. Choose the disk ORCL:DATA1. Select create disk group and make sure that the disk group is mounted on both nodes.
RDBMS Installation
$ log in as the oracle user
$ cd /wysheid/database   -- assuming this is the location where the RDBMS software is unzipped
$ ./runInstaller
Choose the RAC installation.
Choose the "Install software only" option.
Choose Enterprise Edition.
Choose the install locations as follows:
ORACLE_HOME = /wysheid/db_home1
ORACLE_BASE = /wysheid/11.2.0
Choose the privileged operating system groups as follows:
SYSDBA group : dba
SYSOPER group : dba
Complete the prerequisite check. Run the fixup script if required. Click FINISH to complete the installation.
Database Creation
$ log in as the oracle user on node1 and set the DISPLAY
$ cd $ORACLE_HOME/bin
$ ./dbca
Choose RAC database. Choose custom database.
Storage option: ASM. Choose a common location for all datafiles; choose the disk group +DATA. Choose the same disk group for the Flash Recovery Area as well. Set the memory to 40%. Complete the database creation.
$ log in as the grid user on the first node
$ . oraenv   -- give the instance name as +ASM1
$ crs_stat -t
If the installation is successful, we can see all the resources online except GSD and OC4J.