http://www.iselfschooling.com/mc4articles/RACinstallation.htm
1.1.1 Hardware
Slice  Allocation       Size (in Mbytes)
0      / (root)         1168
1      swap             750
2      overlap          2028
3      /globaldevices   100
4      unused           -
5      unused           -
6      unused           -
7      volume manager   10
1.1.2 Software
1.1.3 Patches
The Sun Cluster nodes might require patches in several areas. To list the patches currently installed on a node, run:
$ showrev -p
For the latest Sun Cluster 3.0 required patches, see SunSolve document ID 24617, Sun Cluster 3.0 Early Notifier.
1.2 Installing Sun StorEdge Disk Arrays
Follow the procedures for an initial installation of StorEdge disk enclosures or
arrays before installing the Solaris Operating System (SPARC) operating
environment and the Sun Cluster software. Perform this procedure in conjunction with the
procedures in the Sun Cluster 3.0 Software Installation Guide and your server
hardware manual. Multihost storage in clusters uses the multi-initiator capability of
the SCSI specification.
A cluster with more than two nodes requires two cluster transport junctions.
These transport junctions are Ethernet-based switches (customer-supplied).
You install the cluster software and configure the interconnect after you have installed
all other hardware.
2. Creating a Cluster
2.1 Sun Cluster Software Installation
The Sun Cluster v3 host system (node) installation process is completed in several
major steps. The general process is:
repartition the boot disks to meet Sun Cluster v3 requirements
install the Solaris Operating System (SPARC) Environment software
configure the cluster host systems environment
install Solaris 8 Operating System (SPARC) Environment patches
install hardware-related patches
install Sun Cluster v3 on the first cluster node
install Sun Cluster v3 on the remaining nodes
install any Sun Cluster patches and updates
perform postinstallation checks and configuration
You can use two methods to install the Sun Cluster v3 software on the cluster nodes:
interactive installation using the scinstall installation interface
automatic JumpStart installation (requires a pre-existing Solaris Operating
System (SPARC) JumpStart server)
This note assumes an interactive installation of Sun Cluster v3 update 2. The Sun
Cluster installation program, scinstall, is located on the Sun Cluster v3 CD in the
/cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory. When
you start the program without any options, it prompts you for cluster configuration
information that is stored for use later in the process. Although the Sun Cluster
software can be installed on all nodes in parallel, you can complete the installation on
the first node and then run scinstall on all other nodes in parallel. The additional
nodes get some basic information from the first, or sponsoring, node that was already
configured.
...
The default is to use /globaldevices.
scinstall -i \
...
Are these the options you want to use (yes/no)
[yes]?
Do you want to continue with the install (yes/no)
[yes]?
Install any Sun Cluster software patches.
Reboot the node to establish the cluster unless you rebooted the node after
you installed patches.
Do not reboot or shut down the first-installed node while any other nodes are
being installed, even if you use another node in the cluster as the sponsoring
node. Until quorum votes are assigned to the cluster nodes and cluster install
mode is disabled, the first-installed node, which established the cluster, is the
only node that has a quorum vote. If the cluster is still in install mode, you will
cause a system panic because of lost quorum if you reboot or shut down the
first-installed node. Cluster nodes remain in install mode until the first time
you run the scsetup(1M) command, during the procedure PostInstallation
Configuration.
# /usr/cluster/bin/scsetup
>>> Initial Cluster Setup <<<
This program has detected that the cluster "installmode"
attribute is set ...
Please do not proceed if any additional nodes have yet to
join the cluster.
Is it okay to continue (yes/no) [yes]? yes
Which global device do you want to use (d<N>) ? dx
Is it okay to proceed with the update (yes/no) [yes]? yes
scconf -a -q globaldev=dx
Do you want to add another quorum disk (yes/no) ? no
Is it okay to reset "installmode" (yes/no) [yes] ? yes
scconf -c -q reset
Cluster initialization is complete.
Although it appears that the scsetup utility uses two simple scconf commands to
define the quorum device and reset install mode, the process is more complex.
The scsetup utility performs numerous verification checks for you. It is
recommended that you do not use scconf manually to perform these functions.
Cluster configuration information is stored in the CCR on each node. You should
verify that the basic CCR values are correct. The scconf -p command displays
general cluster information along with detailed information about each node in the
cluster.
$ /usr/cluster/bin/scconf -p
You must configure RAC to use the shared disk architecture of Sun Cluster. In this
configuration, a single database is shared among multiple instances of RAC that
access the database concurrently. Conflicting access to the same data is controlled by
means of the Oracle UNIX Distributed Lock Manager (UDLM). If a process or a node
crashes, the UDLM is reconfigured to recover from the failure. In the event of a node
failure in an RAC environment, you can configure Oracle clients to reconnect to the
surviving server without the use of the IP failover used by Sun Cluster failover data
services.
The Sun Cluster install CDs contain the required SC UDLM package:

Package SUNWudlm: Sun Cluster Support for Oracle Parallel Server UDLM, (opt) on Sun Cluster v3

To install it, use the pkgadd command:
# pkgadd -d . SUNWudlm
Once SUNWudlm is installed, Oracle's interface to it, the Oracle UDLM, can be installed.
To install Sun Cluster Support for RAC with VxVM, install the following Sun Cluster 3
Agents data services packages as superuser (see Sun's Sun Cluster 3 Data Services
Installation and Configuration Guide):
# pkgadd -d . SUNWscucm SUNWudlmr SUNWcvmr SUNWcvm
(SUNWudlm will also need to be included unless it was already installed in the step above.)
Before rebooting the nodes, you must ensure that you have correctly installed and
configured the Oracle UDLM software.
The Oracle Unix Distributed Lock Manager (ORCLudlm also known as the Oracle
Node Monitor) must be installed. This may be referred to in the Oracle documentation
as the "Parallel Server Patch". To check version information on any previously
installed dlm package:
$ pkginfo -l ORCLudlm | grep PSTAMP
OR
$ pkginfo -l ORCLudlm | grep VERSION
You must apply the following steps to all cluster nodes. The Oracle udlm can be found
on Disk1 of the Oracle9i server installation CD-ROM, in the
directory opspatch or racpatch in later versions. A version of the Oracle udlm
may also be found on the Sun Cluster CD set but check the Oracle release for the
latest applicable version. The informational
files README.udlm & release_notes.334x are located in this directory with
version and install information. This is the Oracle udlm package for 7.X.X or later on
Solaris Operating System (SPARC) and requires any previous versions to be removed
prior to installation.
Shut down all existing clients of the Oracle Unix Distributed Lock Manager
(including all Oracle Parallel Server/RAC instances).
Become superuser.
Reboot the cluster node in non-cluster mode (replace <node name> with your
cluster node name):
# scswitch -S -h <node name>
# shutdown -g 0 -y
... wait for the ok prompt
ok boot -x
Then install the package:
# cd /tmp
# pkgadd -d . ORCLudlm
The udlm configuration files in SC2.X and SC3.0 are the following:
SC2.X: /etc/opt/SUNWcluster/conf/<default_cluster_name>.ora_cdb
SC3.0: /etc/opt/SUNWcluster/conf/udlm.conf
The udlm log files in SC2.X and SC3.0 are the following:
SC2.X: /var/opt/SUNWcluster/dlm_<node_name>/logs/dlm.log
SC3.0: /var/cluster/ucmm/dlm_<node_name>/logs/dlm.log
pkgadd will copy a template file, <configuration_file_name>.template,
to /etc/opt/SUNWcluster/conf.
# shutdown -g 0 -y -i 6
Raw Volume          File Size   File
SYSTEM tablespace   400 Mb      db_name_raw_system_400m
USERS tablespace    120 Mb      db_name_raw_users_120m
TEMP tablespace     100 Mb      db_name_raw_temp_100m
You must specify that Oracle should use this file to determine the raw device volume
names by setting the following environment variable where filename is the name of
the ASCII file that contains the entries shown in the example above:
setenv DBCA_RAW_CONFIG filename
or
export DBCA_RAW_CONFIG=filename
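A minimal sketch of such an ASCII mapping file follows, where each line maps a database object to its raw device path. The disk group name (oracle_dg), database name (racdb), and file path /tmp/racdb_raw.conf are all hypothetical; substitute your own values.

```shell
# Hypothetical example: the disk group oracle_dg, database name racdb,
# and file path /tmp/racdb_raw.conf are assumptions, not fixed names.
cat > /tmp/racdb_raw.conf <<'EOF'
system=/dev/vx/rdsk/oracle_dg/racdb_raw_system_400m
users=/dev/vx/rdsk/oracle_dg/racdb_raw_users_120m
temp=/dev/vx/rdsk/oracle_dg/racdb_raw_temp_100m
EOF

# Point DBCA at the mapping file (Bourne/Korn shell):
DBCA_RAW_CONFIG=/tmp/racdb_raw.conf
export DBCA_RAW_CONFIG
```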
Create a mount point directory on each node to serve as the top of your
Oracle software directory structure so that:
The name of the mount point on each node is identical to that on the
initial node
The oracle account has read, write, and execute privileges
On the node from which you will run the Oracle Universal Installer, set up
user equivalence by adding entries for all nodes in the cluster, including the
local node, to the .rhosts file of the oracle account, or to
the /etc/hosts.equiv file.
As the oracle user, check user equivalence for the oracle account by
performing a remote login (rlogin) to each node in the cluster.
If you are prompted for a password, you have not given the oracle account the
same attributes on all nodes. You must correct this, because the Oracle
Universal Installer cannot use the rcp command to copy Oracle products to
the remote nodes' directories without user equivalence.
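For example, with two hypothetical cluster nodes named racnode1 and racnode2, the oracle account's .rhosts file on each node would contain one entry per node:

```
racnode1 oracle
racnode2 oracle
```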
System Kernel Parameters
Verify operating system kernel parameters are set to appropriate levels: The
file /etc/system is read by the operating system kernel at boot time. Check this
file for appropriate values for the following parameters.
Kernel Parameter  Setting     Purpose
SHMMAX            4294967295  Maximum allowable size of one shared memory
                              segment (4 Gb).
SHMMIN            1           Minimum allowable size of a single shared
                              memory segment.
SHMMNI            100         Maximum number of shared memory segments
                              in the entire system.
SHMSEG            10          Maximum number of shared memory segments
                              one process can attach.
SEMMNI            1024        Maximum number of semaphore sets in the
                              entire system.
SEMMSL            100         Minimum recommended value. SEMMSL
                              should be 10 plus the largest PROCESSES
                              parameter of any Oracle database on the
                              system.
SEMMNS            1024        Maximum semaphores on the system. This
                              setting is a minimum recommended value.
                              SEMMNS should be set to the sum of the
                              PROCESSES parameter for each Oracle
                              database, add the largest one twice, plus add an
                              additional 10 for each database.
SEMOPM            100         Maximum number of operations per semop call.
SEMVMX            32767       Maximum value of a semaphore.
(swap space)      750 MB      Two to four times your system's physical
                              memory size.
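As a sketch, the settings above correspond to /etc/system entries like the following (adjust SEMMSL and SEMMNS for the databases on your system; a reboot is required for changes to take effect):

```
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
```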
Establish system environment variables
Set a local bin directory in the user's PATH, such as /usr/local/bin,
or /opt/bin. It is necessary to have execute permissions on this directory.
Set the DISPLAY variable to point to the IP address or name, X server, and
screen of the system from which you will run the OUI.
Set a temporary directory path for TMPDIR with at least 20 Mb of free space
to which the OUI has write permission.
Establish Oracle environment variables: Set the following Oracle environment
variables:
Environment Variable  Suggested Value
ORACLE_BASE           e.g. /u01/app/oracle
ORACLE_HOME           e.g. /u01/app/oracle/product/9201
ORACLE_TERM           xterm
NLS_LANG              AMERICAN_AMERICA.UTF8, for example
ORA_NLS33             $ORACLE_HOME/ocommon/nls/admin/data
PATH                  Should contain $ORACLE_HOME/bin
CLASSPATH             $ORACLE_HOME/JRE:$ORACLE_HOME/jlib: \
                      $ORACLE_HOME/rdbms/jlib: \
                      $ORACLE_HOME/network/jlib
Create the directory /var/opt/oracle and set ownership to the oracle
user.
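The variables above can be set in the oracle user's profile. A minimal sketch (Bourne/Korn shell) using the suggested values follows; all paths are assumptions to adjust for your installation:

```shell
# Hypothetical values matching the suggested settings above; adjust for your site.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/9201
ORACLE_TERM=xterm
NLS_LANG=AMERICAN_AMERICA.UTF8
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export ORACLE_BASE ORACLE_HOME ORACLE_TERM NLS_LANG ORA_NLS33 PATH CLASSPATH
```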
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before
attempting to install the Oracle Database Software
3.3 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle
Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on
multiple CD-ROM disks, and the Real Application Clusters software is part of this
distribution. During the installation process it is necessary to switch between the
CD-ROMs; the OUI manages the switching between CDs.
To install the Oracle software, perform the following:
Log in as the oracle user and start the installer:
$ /<cdrom_mount_point>/runInstaller
At the OUI Welcome screen, click Next.
A prompt will appear for the Inventory Location (if this is the first time that
OUI has been run on this system). This is the base directory into which OUI
will install files. The Oracle Inventory definition can be found in the
file /var/opt/oracle/oraInst.loc. Click OK.
Verify the UNIX group name of the user who controls the installation of the
Oracle9i software. If an instruction to
run /tmp/orainstRoot.sh appears, the pre-installation steps were not
completed successfully. Typically, the /var/opt/oracle directory does
not exist or is not writable by oracle. Run /tmp/orainstRoot.sh to
correct this, forcing Oracle Inventory files, and others, to be written to
the ORACLE_HOME directory. Once again, this screen only appears the first
time Oracle9i products are installed on the system. Click Next.
The File Location window will appear. Do NOT change the Source field. The
Destination field defaults to the ORACLE_HOME environment variable.
Click Next.
Select the products to install. In this example, select the Oracle9i Server,
then click Next.
Select the installation type. Choose the Enterprise Edition option. The
selection on this screen refers to the installation operation, not the database
configuration. The next screen allows for a customized database configuration
to be chosen. Click Next.
Select the configuration type. In this example, choose the Advanced
Configuration, as this option provides a database that you can customize and
configures the selected server products. Select Customized and click Next.
Select the other nodes onto which the Oracle RDBMS software will be
installed. It is not necessary to select the node on which the OUI is currently
running. Click Next.
Identify the raw partition into which the Oracle9i Real Application Clusters
(RAC) configuration information will be written. It is recommended that this
raw partition be a minimum of 100 MB in size.
An option to Upgrade or Migrate an existing database is presented.
Do NOT select the radio button. The Oracle Migration utility is not able to
upgrade a RAC database and will fail if asked to do so.
The Summary screen will be presented. Confirm that the RAC database
software will be installed, then click Install. The OUI will install the
Oracle RAC software onto the local node and then copy the software to the
other nodes selected earlier. This will take some time. During the
installation process, the OUI does not display messages indicating that
components are being installed on other nodes; I/O activity may be the only
indication that the process is continuing.
$ gsd
Successfully started the daemon on the local node.
$ srvctl add db -p db_name -o oracle_home
Then, for each instance, enter the command:
$ srvctl add instance -p db_name -i instance_name -n node_name
To verify the configuration:
$ srvctl config
racdb1
$ srvctl config -p racdb1
racinst1
racinst2