
RAC Deployment Workshop

Copyright 2007, Oracle. All rights reserved.

Objectives
After completing this workshop, you should be able to:
Install and configure iSCSI storage on both client clusters and Openfiler servers
Install and configure Oracle Clusterware and Real Application Clusters (RAC) on more than two clustered nodes
Use Oracle Clusterware to protect a single-instance database
Convert a single-instance database to RAC
Extend Oracle Clusterware and RAC to more than two nodes
Create a RAC primary-physical/logical standby database environment
Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion
2

Copyright 2007, Oracle. All rights reserved.

Assumptions

You are familiar with:
Linux (all examples and labs are Linux-related)
Oracle Clusterware
Oracle Real Application Clusters
Data Guard

Copyright 2007, Oracle. All rights reserved.

Workshop Format and Challenge


Start with Workshop I

Instructor presents workshop

Instructor comments on workshop using viewlet or lab

Students do workshop using: 1: Viewlets 2: Labs document 3: Solution scripts


Go to next workshop

Copyright 2007, Oracle. All rights reserved.

Workshop Flow Overview


Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Six-node cluster (optional)

Copyright 2007, Oracle. All rights reserved.

Hardware Organization
(Diagram) Two Openfiler iSCSI servers, each with a 160 GB SCSI disk, are reached over the public network (ETH1 on each node). Four groups of client nodes, Group1 through Group4, each consist of Node a, Node b, and Node c; within each group, ETH0 connects the nodes to a switched private interconnect.

Copyright 2007, Oracle. All rights reserved.

Openfiler Storage Organization


(Diagram) Each Openfiler server has one 160 GB disk. Openfiler 1 hosts volume groups cg3 and cg4; Openfiler 2 hosts cg1 and cg2. Every volume group contains the same four logical volumes:
ocr (/dev/sda1): 1 GB
vote (/dev/sda2): 1.5 GB
asm (/dev/sda3): 34 GB
ocfs (/dev/sda4): 36 GB

Copyright 2007, Oracle. All rights reserved.

Cluster Storage Organization


(Diagram) Node a, Node b, and Node c each have a 160 GB local disk containing:
/stage: p4547817_10202_LINUX.zip, /DGpatch, /ocfs2
/stage/10gR2: /rdbms/clusterware, /rdbms/database
/home/oracle: priminfo & stdbinfo, /solutions
/u01: /app/oracle/oraInventory, /crs1020, /app/oracle/product/10.2.0/rac, /app/oracle/product/10.2.0/sgl
The shared storage is presented under /dev/mapper as:
ocr1 (265 MB), ocr2 (512 MB)
vote1 (265 MB), vote2 (512 MB), vote3 (512 MB)
asm1, asm2 (2 GB each); asm5, asm6, asm7, asm8 (7.5 GB each)
ocfs2 (36 GB)

Copyright 2007, Oracle. All rights reserved.

Group Formation and Naming Conventions

Three students per group
Volume groups: cg[1|2|3|4]
Names must be unique within a classroom:
Cluster name: XY#CLUST[1|2|3|4]
Database name: XY#RDB[A|B|C|D]
Standby database name: XY#SDB[A|C]
Example: Atlanta, Buckhead office, Room 9:
AB9CLUST1, AB9CLUST2, AB9CLUST3, AB9CLUST4
AB9RDBA, AB9RDBB, AB9RDBC, AB9RDBD
AB9SDBA, AB9SDBC

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

10

Copyright 2007, Oracle. All rights reserved.

Workshop I: Configuring Storage

1. Openfiler volume setup 2. iSCSI client setup 3. fdisk client setup 4. Multipathing client setup 5. Raw and udev client setup

11

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Flow diagram) On the filer: physical volume, then volume group, then logical volumes (ocr, vote, asm, ocfs2), exported over iSCSI to a defined list of allowed nodes. On the cluster nodes: start iSCSI and discover the volumes as block devices (/dev/sdw, /dev/sdx, /dev/sdy, /dev/sdz); determine the volumes-to-devices mapping and partition the devices with fdisk (on one node only); start multipath, which presents the devices as /dev/mapper/mpath0, mpath0p1, mpath0p2, mpath1, mpath1p1, mpath1p2, mpath1p3; determine and define the wwids-to-volume mapping, then restart multipath to obtain persistent names such as /dev/mapper/ocr, ocr1, ocr2, vote, vote1, vote2, and vote3.

12

Copyright 2007, Oracle. All rights reserved.

Enterprise Network Storage


NAS devices utilize a client-server architecture to share file systems. Most NAS solutions support one or more of the following file access protocols:
NFS Version 3 SMB/CIFS HTTP/WebDAV

SAN storage appears like locally attached SCSI disks to nodes using the storage. The main difference between NAS and SAN is that:
SAN devices transfer data in disk blocks NAS devices operate at the file level

AoE enables ATA disks to be remotely accessed.


13

Copyright 2007, Oracle. All rights reserved.

Openfiler: Your Classroom Storage Solution

15

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

16

Copyright 2007, Oracle. All rights reserved.

Creating Physical Volumes

17

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

18

Copyright 2007, Oracle. All rights reserved.

Creating Volume Groups

19

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

20

Copyright 2007, Oracle. All rights reserved.

Creating Logical Volumes

21

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

22

Copyright 2007, Oracle. All rights reserved.

Initializing the Storage


To initialize the storage, disable and re-enable the iSCSI target service from the management interface, or execute service iscsi-target restart as root:
[root@ed-dnfiler06b ~]# service iscsi-target restart

View the contents of the /etc/ietd.conf file:


[root@ed-dnfiler06b ~]# cat /etc/ietd.conf
Target iqn.2006-01.com.oracle.us:cg1.ocr
        Lun 0 Path=/dev/cg3/ocr,Type=fileio
...

Edit the initiators.deny and initiators.allow files:


[root@ed-dnfiler06b ~]# cat /etc/initiators.deny
iqn.2006-01.com.oracle.us:cg3.ocr ALL
iqn.2006-01.com.oracle.us:cg3.vote ALL
...

23

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

24

Copyright 2007, Oracle. All rights reserved.

Accessing the Shared Storage

Ensure that the iscsi-initiator-tools RPM is loaded:


[root@ed-otraclin10a ~]# rpm -qa|grep iscsi

Edit the /etc/iscsi.conf file to add discovery entry:


[root@ed-otraclin11b ~]# vi /etc/iscsi.conf
DiscoveryAddress=ed-dnfiler06b.us.oracle.com

Make sure the iscsi service is started on system boot:


[root@ed-otraclin10a ~]# chkconfig --add iscsi
[root@ed-otraclin10a ~]# chkconfig iscsi on

Start the iscsi service:


[root@ed-otraclin11b ~]# service iscsi start

25

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

26

Copyright 2007, Oracle. All rights reserved.

Accessing the Shared Storage

Check to see that the volumes are accessible with iscsi-ls and dmesg.
[root@ed-otraclin10a ~]# iscsi-ls
*************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-3(02-May-2006)
*************************************************************
TARGET NAME    : iqn.2006-01.com.oracle.us:cg1.ocr
TARGET ALIAS   :
HOST ID        : 24
BUS ID         : 0
TARGET ID      : 0
TARGET ADDRESS : 10.156.49.151:3260,1
SESSION STATUS : ESTABLISHED AT Thu Nov 23 10:07:20 EST 2006
SESSION ID     : ISID 00023d000001 TSIH 600
*************************************************************

27

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

29

Copyright 2007, Oracle. All rights reserved.

Partitioning the iSCSI Disk

Use the fdisk utility to create iSCSI slices within the iSCSI volumes.

These device names are not persistent across reboots.
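The partitioning itself is an interactive fdisk session; a minimal sketch, run as root on one node only (the device name /dev/sdw is an assumption, and the slice sizes come from slide 43):

[root@ed-otraclin10a ~]# fdisk /dev/sdw      # n (new), p (primary), partition number, size (for example +256M), then w (write)
[root@ed-otraclin10a ~]# partprobe /dev/sdw  # re-read the partition table without rebooting
[root@ed-otraclin10a ~]# fdisk -l /dev/sdw   # verify the new slices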

30

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

31

Copyright 2007, Oracle. All rights reserved.

Udev Basics

Udev simplifies device management for cold and hot plug devices. Udev uses hot plug events sent by the kernel whenever a device is added or removed from the system.
Details about newly added devices are exported to /sys. Udev manages device entries in /dev by monitoring /sys.

Udev is a standard package in RHEL 4. The primary benefits Udev provides for Oracle RAC environments are persistent:
Disk device naming
Device ownership and permissions

32

Copyright 2007, Oracle. All rights reserved.

Udev Configuration

Udev behavior is controlled by /etc/udev/udev.conf. Important parameters include the following:
udev_root sets the location where udev creates device nodes (/dev is the default).
default_mode controls the permissions of device nodes.
default_owner sets the user ID of device files.
default_group sets the group ID of device files.
udev_rules sets the directory for udev rules files (/etc/udev/udev.rules is the default).
udev_permissions sets the directory for permissions files (/etc/udev/udev.permissions is the default).
33

Copyright 2007, Oracle. All rights reserved.

Udev Rules Parameters

Common parameters for NAME, SYMLINK, and PROGRAM:
%n is the kernel number; for sda2 it would be 2.
%k is the kernel name for the device, for example sda.
%M is the kernel major number for the device.
%m is the kernel minor number for the device.
%b is the bus ID for the device.
%p is the path for the device.
%c is the string returned by the external program defined by PROGRAM.
%s{filename} is the content of a sysfs (/sys) attribute.
(An example rule using these parameters follows.)
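As an illustration only (not a rule from the workshop), a rule in the same style as the 40-multipath.rules example shown later could combine these parameters; the helper script path and symlink directory here are assumptions:

# /etc/udev/rules.d/60-iscsi-example.rules (hypothetical)
# For any sd* device, run an external helper with the kernel name (%k); if it
# prints something, keep the kernel name and add a symlink built from its output (%c).
KERNEL="sd*", PROGRAM="/usr/local/bin/volname.sh %k", RESULT="?*", \
    NAME="%k", SYMLINK="iscsi/%c"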

34

Copyright 2007, Oracle. All rights reserved.

Multipathing and Device Mapper

Multipathing tools aggregate a device's independent paths into a single logical path. Multipathing is an important aspect of high-availability configurations. RHEL4 incorporates a tool called Device Mapper (DM) to manage multipathed devices. DM depends on the following packages:
device-mapper
udev
device-mapper-multipath

The /etc/init.d/multipathd start command will initialize Device Mapper.


35

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

36

Copyright 2007, Oracle. All rights reserved.

Configuring Multipath

multipaths {
    multipath {
        wwid                 14f70656e66696c000000000001000000d54
        alias                ocr
        path_grouping_policy multibus
        path_checker         readsector0
        path_selector        "round-robin 0"
        failback             manual
        no_path_retry        5
    }
    ...
}

37

Copyright 2007, Oracle. All rights reserved.

Device Mapper Devices


DM devices are created as /dev/dm-n.
DM only maps whole drives. If a drive has multiple partitions, the device mapping of each partition is handled by kpartx.
# cat /etc/udev/rules.d/40-multipath.rules
KERNEL="dm-[0-9]*", PROGRAM="/sbin/mpath_get_name %M %m", \
    RESULT="?*", NAME="%k", SYMLINK="mpath/%c"
KERNEL="dm-[0-9]*", PROGRAM="/sbin/kpartx_get_name %M %m", \
    RESULT="?*", NAME="%k", SYMLINK="mpath/%c"

If the device is partitioned, the partitions will appear as:


/dev/mapper/mpathNpN /dev/mapper/<alias>pN /dev/mapper/<WWID>pN

OCR and voting disks should use /dev/dm-N or /dev/mapper/<alias>pN path formats.
38

Copyright 2007, Oracle. All rights reserved.

Storage Configuration Summary

Copyright 2007, Oracle. All rights reserved.

Persistent Storage Flow


(Persistent storage flow diagram repeated; see the description on slide 12.)

40

Copyright 2007, Oracle. All rights reserved.

Openfiler Storage Goal

As a class, create four volume groups called:


CG1 CG2 CG3 CG4

Within each group, create logical volumes called:


ocr vote asm ocfs2

One volume group will be used per cluster group.

41

Copyright 2007, Oracle. All rights reserved.

Configuring Openfiler Storage

1. In the Openfiler Storage Control Center:
Ensure that the iSCSI target service is enabled.
Create a physical volume partition: /dev/sdan (75.5 GB).
Create a new volume group (cgx) using that physical volume.
Create iSCSI logical volumes inside the volume group:
ocr: 1000 MB
vote: 1500 MB
asm: 34000 MB
ocfs2: 36000 MB
(A command-line sketch of step 1 follows this list.)
2. Edit the /etc/initiators.deny and /etc/initiators.allow files to restrict access.
3. Execute service iscsi-target status|restart.
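The same volumes can also be created from the Openfiler shell; a minimal LVM sketch of step 1, assuming the new partition is /dev/sda5 and the group is cg1 (the workshop itself uses the web GUI):

pvcreate /dev/sda5                 # initialize the partition as an LVM physical volume
vgcreate cg1 /dev/sda5             # create the volume group
lvcreate -L 1000M  -n ocr   cg1    # 1000 MB logical volume for OCR
lvcreate -L 1500M  -n vote  cg1    # 1500 MB for the voting disks
lvcreate -L 34000M -n asm   cg1    # 34 GB for ASM
lvcreate -L 36000M -n ocfs2 cg1    # 36 GB for OCFS2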
42

Copyright 2007, Oracle. All rights reserved.

Configuring Cluster Storage: iSCSI + fdisk


1. Check /etc/iscsi.conf:
DiscoveryAddress=<filer name>
2. Check /etc/hosts to make sure your filer is there.
3. Execute service iscsi restart, then iscsi-ls.
4. Ensure that iSCSI is started on boot: chkconfig --add iscsi; chkconfig iscsi on.
5. Determine which logical volumes are attached to your block devices: /var/log/messages and iscsi-ls.
6. Use fdisk to partition each block device (on one node only):
Two slices for OCR: 256 MB, 512 MB
Three slices for voting: 256 MB, 512 MB, 512 MB
Six slices for ASM: 2 x 2000 MB (primary), 4 x 7500 MB (extended)
OCFS2 uses the whole slice.

43
Copyright 2007, Oracle. All rights reserved.

Configuring Cluster Storage: Multipathing


1. Comment out the blacklist section in /etc/multipath.conf.
2. Execute service multipathd start and chkconfig multipathd on.
3. Reboot.
4. Determine the list of wwids associated with your logical volumes from /var/lib/multipath/bindings and multipath -v3.
5. Edit /etc/multipath.conf to add the wwids and aliases in the multipaths section: ocr, vote, asm, and ocfs2. (A verification sketch follows.)
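A minimal verification sketch for steps 4 and 5 (the restart step is an assumption, not the workshop's exact solution):

cat /var/lib/multipath/bindings     # wwid-to-mpathN bindings discovered so far
multipath -v3 | more                # detailed discovery output, including wwids
service multipathd restart          # re-read /etc/multipath.conf after adding aliases
multipath -ll                       # confirm the ocr, vote, asm, and ocfs2 aliases appear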
(Flow: logical volume on the filer, then block device, mpath device, mapper device, and finally raw device / persistent name on the cluster node.)

44

Copyright 2007, Oracle. All rights reserved.

Configuring Cluster Storage: Permissions

1. Associate the /dev/mapper devices with /dev/raw/raw[1-5] for OCR and voting in /etc/sysconfig/rawdevices. (This is not strictly necessary in 10gR2; see the OUI bug.)
2. service rawdevices restart
3. Edit /etc/udev/permissions.d/40rac.permissions:
raw/raw[1-2]:root:oinstall:660
raw/raw[3-5]:oracle:oinstall:660
4. Edit /etc/rc.local:
chown oracle:dba /dev/mapper/asm*
chmod 660 /dev/mapper/asm*
5. Reboot. (A sketch of the rawdevices mapping follows.)
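A sketch of the mapping referenced in steps 1 and 2 (the raw-device numbering is an assumption; adjust to your own layout):

# /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/mapper/ocr1
/dev/raw/raw2 /dev/mapper/ocr2
/dev/raw/raw3 /dev/mapper/vote1
/dev/raw/raw4 /dev/mapper/vote2
/dev/raw/raw5 /dev/mapper/vote3

service rawdevices restart
raw -qa    # verify the bindings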
45

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

46

Copyright 2007, Oracle. All rights reserved.

Workshop II: Install Clusterware and ASM


1. Install Oracle Clusterware locally on the first and second nodes only.
2. Install the database software locally on the first and second nodes.
3. Configure ASM with DATA and FRA disk groups.
You do not use ASMLib (see the viewlet for installation information).
(Diagram) Node a and Node b each run Oracle Clusterware 10.2.0.1 and Oracle RAC/ASM 10.2.0.1 (CRS with +ASM1 and +ASM2), sharing the DATA and FRA disk groups.

47

Copyright 2007, Oracle. All rights reserved.

Installing Oracle Clusterware

1. Use the provided solution script to set up ssh on all three nodes.
2. Check your interfaces and storage devices on all three nodes:
ifconfig; ls -al /dev/mapper; raw -qa; ls -al /dev/raw
In 10gR2, although you can use block devices to store the OCR and voting disks, OUI does not accept them.
3. Run OUI from /stage/10gR2/rdbms/clusterware:
Inventory = /u01/app/oracle/oraInventory
Home = /u01/crs1020
OCR and voting disks: /dev/raw/raw1,2,3,4,5
VIPCA needs to be executed manually.

48
Copyright 2007, Oracle. All rights reserved.

Installing Oracle RAC Software and ASM

1. Run OUI from /stage/10gR2/rdbms/database:


Home = /u01/app/oracle/product/10.2.0/rac Software installation only First two nodes

2. Run dbca from /u01/app/oracle/product/10.2.0/rac/bin:


Export ORACLE_HOME.
Use the first two nodes.
Create two disk groups used later:
DATA: /dev/mapper/asm1 and asm2
FRA: /dev/mapper/asm5, asm6, asm7, and asm8
dbca automatically creates the listeners and ASM instances, using the initialization parameter file:
$ORACLE_HOME/dbs/init+ASM.ora
49

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

50

Copyright 2007, Oracle. All rights reserved.

Workshop III: Set Up Single-Instance Protection

1. Install single-instance database software on the first and second nodes only.
2. Create a single-instance database on the first node.
3. Protect it against both instance and node failure using Oracle Clusterware.
4. There are three possible starting scenarios:
No software installed
Single-instance database running on nonclustered ASM
Single-instance database running on a local file system

51

Copyright 2007, Oracle. All rights reserved.

Installing Single-Instance Database Software

1. Run OUI from /stage/10gR2/rdbms/database:


Home = /u01/app/oracle/product/10.2.0/sgl Software install only To be done on the first and second nodes

2. Do the same on your second node:


You could parallelize the work.

52

Copyright 2007, Oracle. All rights reserved.

Creating Single-Instance Database

Run dbca from /u01/app/oracle/product/10.2.0/sgl:


Store your database and Flash Recovery Area on ASM: DATA and FRA disk groups. Use sample schemas.

You use shared storage to protect against node failures.

53

Copyright 2007, Oracle. All rights reserved.

Protecting the Single-Instance Database by Using Oracle Clusterware


1. Copy init.ora to the second node (the spfile is on ASM).
2. On the second node:
Create a password file.
Create the $ORACLE_HOME/admin tree for your database.
3. Create an action script for your database: start/check/stop.
4. Store it on both nodes in /u01/crs1020/crs/public.
5. Create the resource profile for the database: ci=30, ra=3 (sudo).
6. Register the DB with Oracle Clusterware (sudo).
7. Set the DB resource permissions (sudo). (A sketch of steps 5 through 7 follows.)
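A minimal sketch of steps 5 through 7 using the 10gR2 crs_* utilities (the resource name rdba.db and the exact permission settings are assumptions; ci and ra correspond to the check interval and restart attempts mentioned in step 5). Run the profile, register, and setperm commands as root via sudo:

/u01/crs1020/bin/crs_profile -create rdba.db -t application \
    -a /u01/crs1020/crs/public/action_db.pl -o ci=30,ra=3
/u01/crs1020/bin/crs_register rdba.db
/u01/crs1020/bin/crs_setperm rdba.db -o root
/u01/crs1020/bin/crs_setperm rdba.db -u user:oracle:r-x
/u01/crs1020/bin/crs_start rdba.db    # as the oracle user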

54

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

RDBA

+ASM1

+ASM2

55

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

RDBA

+ASM1

+ASM2

56

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

+ASM1

+ASM2

57

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

RDBA

+ASM1

+ASM2

58

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

RDBA

+ASM1

+ASM2

59

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node a

Node b

Oracle Clusterware Oracle RAC/ASM Oracle Single-instance

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

RDBA

+ASM1

+ASM2

60

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

+ASM2

61

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

+ASM2

RDBA

62

Copyright 2007, Oracle. All rights reserved.

Protection Flow Diagrams

Node b

Oracle Clusterware Oracle RAC/ASM Oracle single-instance

+ASM2

RDBA

63

Copyright 2007, Oracle. All rights reserved.

Resource Action Script


#!/usr/bin/perl
# Copyright (c) 2002, 2006, Oracle. All rights reserved.
#
# action_db.pl
#
# This perl script is the action script to start / stop / check
# the Oracle instance in a cold failover configuration.
#
# NAME
#   action_db.pl
#
# DESCRIPTION
#
# NOTES
#
# Usage:
#   rknapp 05/22/06 - Creation

# Environment settings; please modify and adapt these
$ORA_CRS_HOME    = "/u01/crs1020";
$CRS_HOME_BIN    = "/u01/crs1020/bin";
$CRS_HOME_SCRIPT = "/u01/crs1020/crs/public";
$ORACLE_HOME_BIN = "/u01/app/oracle/product/10.2.0/sgldb_1/bin";
$ORACLE_HOME     = "/u01/app/oracle/product/10.2.0/sgldb_1";
$ORA_SID         = "OL8RDBA";
$ORA_USER        = "oracle";

64

Copyright 2007, Oracle. All rights reserved.

Resource Action Script

if ($#ARGV != 0) {
    print "usage: start stop check required \n";
    exit;
}


$command = $ARGV[0];

# Database start / stop / check

# Start database
if ($command eq "start") {
    system ("
su - $ORA_USER <<EOF
export ORACLE_SID=$ORA_SID
export ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME_BIN/sqlplus /nolog
connect / as sysdba
startup
quit
EOF");
}

65

Copyright 2007, Oracle. All rights reserved.

Resource Action Script

# Stop database
if ($command eq "stop") {
    system ("
su - $ORA_USER <<EOF
export ORACLE_SID=$ORA_SID
export ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME_BIN/sqlplus /nolog
connect / as sysdba
shutdown immediate
quit
EOF");
}

# Check database
if ($command eq "check") {
    check();
}

66

Copyright 2007, Oracle. All rights reserved.

Resource Action Script

sub check {
    my ($check_proc, $process) = @_;
    $process = "ora_pmon_$ORA_SID";
    $check_proc = qx(ps -aef | grep ora_pmon_$ORA_SID | grep -v grep | awk '{print \$8}');
    chomp($check_proc);
    if ($process eq $check_proc) {
        exit 0;
    } else {
        exit 1;
    }
}

67

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

68

Copyright 2007, Oracle. All rights reserved.

Workshop IV: Single-Instance to RAC Conversion

1. Use dbca from single-instance home to create a database template including files. 2. Propagate template files from single-instance home to RAC home. 3. Use dbca from single-instance home to remove the existing database. 4. Use dbca from RAC home to create a new database with the same name by using the new template.

69

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

70

Copyright 2007, Oracle. All rights reserved.

Workshop V: Cluster Extension to Three Nodes

Extend your cluster database to the third node of your group: 1. Use addNode.sh to add Oracle Clusterware to the third node. 2. Add ONS configuration of your third node to the OCR. 3. Use addNode.sh to add RAC to the third node. 4. Use dbca to extend your database to the third node.

71

Copyright 2007, Oracle. All rights reserved.

Required Steps to Add a Node to a RAC Cluster


1. Install and configure the OS and hardware for the new node.
2. Add Oracle Clusterware to the new node.
3. Configure ONS for the new node.
[4. Add the ASM home to the new node.]
5. Add the RAC home to the new node.
[6. Add a listener to the new node.]
7. Add a database instance to the new node.
(Diagram: each node's ORACLE_HOME and CRS home, plus the shared storage.)
72

Copyright 2007, Oracle. All rights reserved.

Checking Prerequisites Before Oracle Clusterware Installation

73

Copyright 2007, Oracle. All rights reserved.

Adding Oracle Clusterware to the New Node


Execute <Oracle Clusterware home>/oui/bin/addNode.sh.
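The workshop drives this through the OUI screens shown on the following slides; the same addition can also be scripted in silent mode, roughly as follows (the node and VIP names are assumptions):

cd /u01/crs1020/oui/bin
./addNode.sh -silent \
    "CLUSTER_NEW_NODES={node-c}" \
    "CLUSTER_NEW_PRIVATE_NODE_NAMES={node-c-priv}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node-c-vip}"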

74

Copyright 2007, Oracle. All rights reserved.

Adding Oracle Clusterware to the New Node

75

Copyright 2007, Oracle. All rights reserved.

Adding Oracle Clusterware to the New Node

76

Copyright 2007, Oracle. All rights reserved.

Configuring the New ONS

Use the racgons add_config command to add the new node's ONS configuration information to the OCR.
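For example (the node name and ONS remote port are assumptions; 6200 is the remote port commonly used in 10gR2):

/u01/crs1020/bin/racgons add_config node-c:6200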

77

Copyright 2007, Oracle. All rights reserved.

Adding ASM Home to the New Node (Optional)

78

Copyright 2007, Oracle. All rights reserved.

Adding RAC Home to the New Node

79

Copyright 2007, Oracle. All rights reserved.

Adding a Listener to the New Node (Optional)

80

Copyright 2007, Oracle. All rights reserved.

Adding a Database Instance to the New Node

81

Copyright 2007, Oracle. All rights reserved.

Adding a Database Instance to the New Node

82

Copyright 2007, Oracle. All rights reserved.

Adding a Database Instance to the New Node

83

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

84

Copyright 2007, Oracle. All rights reserved.

RAC and Data Guard Architecture


Primary instance A Standby receiving instance C
ARCn ARCn Primary database

LGWR

RFS
Flash Recovery Area

Online Standby redo redo files files LGWR RFS

Flash Recovery Area ARCn

Standby database Apply ARCn

Primary instance B
85

Standby apply instance D


Copyright 2007, Oracle. All rights reserved.

Workshop VI: Creating a RAC Logical Standby


1. Form super groups by pairing two groups together:
The first group works on the primary database (three nodes).
The second group works on the standby database (two nodes).
Stop the standby database.

2. Install OCFS2 on the second cluster. 3. Create the physical standby database:
The second group uses OCFS2 storage for the standby.

4. Convert your physical standby to a logical standby.


(Diagram) Two super groups: Groups 1 and 2 together, and Groups 3 and 4 together. In each super group, one cluster hosts the primary RAC database on ASM (instances PI1, PI2, PI3) and the other hosts the standby RAC database on OCFS2 (instances SI1, SI2).

87

Copyright 2007, Oracle. All rights reserved.

Installing OCFS2 on the Second Cluster

1. On both nodes, install the OCFS2 RPMs (/stage/ocfs2):


rpm -Uvh ocfs2-tools-1.2.2-1.i386.rpm
rpm -Uvh ocfs2-2.6.9-42.ELsmp-1.2.3-1.i686.rpm
rpm -Uvh ocfs2console-1.2.2-1.i386.rpm

2. Run the OCFS2 console (ocfs2console):


Configure nodes (both nodes of your cluster). Propagate configuration. From first node only: Format /dev/dm-1.

Cluster size set to 128K, Block size set to 4K

3. Run /etc/init.d/o2cb configure. 4. Edit /etc/fstab to add:


/dev/mapper/ocfs2p1 /ocfs2 ocfs2 _netdev,datavolume,nointr 0 0

5. mount /ocfs2
88

Copyright 2007, Oracle. All rights reserved.

Creating the Physical Standby Database I

1. Create the /opt/standbydb/stage directory on one node at both the primary and standby sites.
2. Change your redo log group configuration on the primary DB to use 10 MB redo logs.
3. Put the primary DB in ARCHIVELOG mode and FORCE LOGGING mode.
4. Back up the primary DB pfile to the stage directory.
5. Back up the primary DB plus archive logs to the stage directory.
6. Back up the primary DB control file for standby to the stage directory.
7. Back up the primary network files to the stage directory.
8. Copy the stage directory to the first node of your standby site. (An RMAN sketch of steps 5 and 6 follows.)
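A minimal RMAN sketch of steps 5 and 6 (the format strings and file names are assumptions, not the workshop's solution script):

rman target /
RMAN> BACKUP DATABASE FORMAT '/opt/standbydb/stage/db_%U'
      PLUS ARCHIVELOG FORMAT '/opt/standbydb/stage/arc_%U';
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY
      FORMAT '/opt/standbydb/stage/stby_control.ctl';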
89

Copyright 2007, Oracle. All rights reserved.

Creating the Physical Standby Database II


[ 1. Manually configure listeners on standby site. ] [ 2. Use NETCA to register listeners as CRS resources. ]
3. 4. 5. 6. 7. 8. 9. Create password files on standby site. Create DB directories on /ocfs2 to mimic ASM. Create Spfiles on standby site. Create /admin/DB directories on standby site. RMAN: duplicate target database for standby Add standby redo log files: three groups per thread. Issue alter database recover managed standby database
current logfile disconnect

using

10. Register standby DB/instances Clusterware resources. 11. Configure primary init parameters. 12. Add standby redo logs files on primary site. 13. Check that propagation is working.
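A SQL sketch of steps 8 and 9 on the standby (the 10 MB size follows the earlier slide; file locations are left to the database defaults, and you would repeat the ADD statements to get three groups per thread):

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 10M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 10M;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     USING CURRENT LOGFILE DISCONNECT;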
90

Copyright 2007, Oracle. All rights reserved.

Standby Initialization Parameter: Example


*.audit_file_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/adump'
*.background_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.control_files='+DATA/ol8rdba/controlfile/current.261.607490653','+FRA/ol8rdba/controlfile/current.256.607490655'
*.core_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/cdump'
*.db_create_file_dest='/ocfs2/STANDBY/DATA'
*.db_name='OL8RDBA'
*.db_recovery_file_dest='/ocfs2/STANDBY/FRA'
*.db_recovery_file_dest_size=33554432000
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OL8SDBAXDB)'
OL8SDBA1.instance_number=1
OL8SDBA2.instance_number=2
OL8SDBA3.instance_number=3
*.remote_listener='LISTENERS_OL8SDBA'
*.remote_login_passwordfile='exclusive'
OL8SDBA1.thread=1
OL8SDBA2.thread=2
OL8SDBA3.thread=3
*.undo_management='AUTO'
OL8SDBA1.undo_tablespace='UNDOTBS1'
OL8SDBA2.undo_tablespace='UNDOTBS2'
OL8SDBA3.undo_tablespace='UNDOTBS3'
*.user_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/udump'

91

Copyright 2007, Oracle. All rights reserved.

Standby Initialization Parameter: Example

*.log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='service=OL8RDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8RDBA'
*.db_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.log_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.standby_file_management=auto
*.fal_server='OL8RDBA'
*.fal_client='OL8SDBA'
*.service_names='OL8SDBA'
*.db_unique_name=OL8SDBA

92

Copyright 2007, Oracle. All rights reserved.

Primary Initialization Parameter: Example

log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
log_archive_dest_2='service=OL8SDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8SDBA'
db_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
log_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
standby_file_management=auto
fal_server='OL8SDBA'
fal_client='OL8RDBA'

93

Copyright 2007, Oracle. All rights reserved.

Converting Physical Standby to Logical Standby


1. Set the primary site to MAXIMIZE PERFORMANCE.
2. Stop standby recovery on the standby site.
3. Build a logical standby dictionary on the primary site.
4. Create +FRA/logical_arch to hold the standby archive logs.
5. Configure initialization parameters for logical standby on both sites.
6. Shut down the second standby instance.
7. Start up mount exclusive the first standby instance, then:
alter database recover to logical standby DB
startup mount force
alter database open resetlogs
8. Start up the second instance.
9. alter database start logical standby apply immediate
10. Check propagation.
94

Copyright 2007, Oracle. All rights reserved.

Logical Standby Initialization Parameter: Example


Primary site:
standby_archive_dest='+FRA/logical_arch/'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_1=enable
log_archive_dest_2='SERVICE=OL8SDBA VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) LGWR SYNC AFFIRM DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_2=enable
log_archive_dest_3='LOCATION=+FRA/logical_arch/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_3=enable
log_archive_dest_10=''
parallel_max_servers=9

Standby site:
standby_archive_dest='/ocfs2/logical_arch/'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_1=enable
log_archive_dest_2='SERVICE=OL8RDBA VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) LGWR SYNC AFFIRM DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_2=enable
log_archive_dest_3='LOCATION=/ocfs2/logical_arch/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_3=enable
log_archive_dest_10=''
parallel_max_servers=9

95

Copyright 2007, Oracle. All rights reserved.

Storage setup

Install Clusterware and ASM

Set up single-instance protection

Single-instance to RAC conversion

Cluster extension to three nodes

Create a logical standby DB

Rolling upgrade

96

Copyright 2007, Oracle. All rights reserved.

Patches and the RAC Environment


(Diagram) Three nodes, ex0043, ex0044, and ex0045, each with an Oracle home at /u01/app/oracle/product/db_1. Apply a patchset to /u01/app/oracle/product/db_1 on all nodes.


97

Copyright 2007, Oracle. All rights reserved.

Inventory List Locks

The OUI employs a timed lock on the inventory list stored on a node. The lock prevents an installation from changing a list being used concurrently by another installation. If a conflict is detected, the second installation is suspended and the following message appears:

"Unable to acquire a writer lock on nodes ex0044. Restart the install after verifying that there is no OUI session on any of the selected nodes."

98

Copyright 2007, Oracle. All rights reserved.

OPatch Support for RAC: Overview


OPatch supports four different methods:
All-node patch: Stop all / patch all / start all
Minimize downtime: Stop and patch all nodes but one, stop the last node, start the patched nodes, then patch and start the last node
Rolling patch: Stop / patch / start one node at a time
Local patch: Stop / patch / start only one node
How OPatch selects which method to use:
if (user specifies -local or -local_node)
    patching mechanism = Local
else if (user specifies -minimize_downtime)
    patching mechanism = Minimize Downtime
else if (patch is a rolling patch)
    patching mechanism = Rolling
else
    patching mechanism = All-node
(A rolling-apply sketch follows.)
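For a rolling patch, the per-node loop looks roughly like this (the database, instance, and patch staging directory names are assumptions):

# Repeat on each node in turn; OPatch prompts before you move on to the next node.
srvctl stop instance -d RDBA -i RDBA1
cd /stage/rolling_patch && $ORACLE_HOME/OPatch/opatch apply
srvctl start instance -d RDBA -i RDBA1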

99

Copyright 2007, Oracle. All rights reserved.

Rolling Patch Upgrade Using RAC


(Diagram) Rolling patch upgrade using RAC, used for Oracle patch upgrades, operating system upgrades, and hardware upgrades: (1) initial RAC configuration with clients on both nodes; (2) move all clients to one node and patch the other; (3) move the clients to the patched node and patch the remaining one; (4) upgrade complete, clients on both nodes again.

100

Copyright 2007, Oracle. All rights reserved.

Downloading and Installing Patch Updates

101

Copyright 2007, Oracle. All rights reserved.

Downloading and Installing Patch Updates

102

Copyright 2007, Oracle. All rights reserved.

Rolling Release Upgrade Using SQL Apply


(Diagram) Rolling release upgrade using SQL Apply, used for patch set upgrades, major release upgrades, and cluster software and hardware upgrades: (1) initial SQL Apply setup, both sites at version n, logs shipping; (2) logs queue while the standby site is upgraded to version n+1; (3) logs ship again and the configuration runs mixed (n and n+1) to test; (4) switch over, then upgrade the new standby (the old primary) to version n+1.

103

Copyright 2007, Oracle. All rights reserved.

Workshop VII: Rolling Upgrade

1. Both groups: Perform a rolling upgrade of Oracle Clusterware to 10.2.0.2 (one instance at a time). 2. Upgrade your logical standby database to 10.2.0.2. 3. Switch over. 4. Upgrade your new logical standby database to 10.2.0.2. 5. Switch back.

104

Copyright 2007, Oracle. All rights reserved.

Oracle Clusterware Rolling Upgrade: Initial Status


(Diagram) Primary cluster: Node a, Node b, and Node c each run Oracle Clusterware 10.2.0.1 and Oracle RAC/ASM 10.2.0.1 (RDBA1/+ASM1, RDBA2/+ASM2, RDBA3/+ASM3). Standby cluster: Node a and Node b each run Oracle Clusterware 10.2.0.1 and Oracle RAC/ASM 10.2.0.1 (SDBA1, SDBA2).

105

Copyright 2007, Oracle. All rights reserved.

Oracle Clusterware Rolling Upgrade: Primary Site


1. Run unzip p4547817_10202_LINUX.zip. 2. Run runInstaller from the Disk1 directory:
Choose your Oracle Clusterware home installation. Choose all three nodes.

3. Repeat on each node, one after the other:


crsctl stop crs
/u01/crs1020/install/root102.sh
(Diagram) Node by node: stop CRS (crsctl stop crs), run root102.sh, and restart; the node's Oracle Clusterware moves from 10.2.0.1 to 10.2.0.2 while Oracle RAC/ASM remains at 10.2.0.1.

106

Copyright 2007, Oracle. All rights reserved.

Oracle Clusterware Rolling Upgrade: Standby Site

1. Stop the logical standby apply engine.
2. Optional: Stop the logical standby database.
3. Unzip p4547817_10202_LINUX.zip.
4. Run runInstaller from the Disk1 directory:
Choose your Oracle Clusterware home installation.
Choose both nodes.
5. Repeat on each node, one after the other:
crsctl stop crs
/u01/crs1020/install/root102.sh

Note: There is no need to restart the logical standby database. 6. Restart the logical standby apply engine.
107

Copyright 2007, Oracle. All rights reserved.

Oracle Clusterware Rolling Upgrade: Final Status


(Diagram) Primary cluster: Node a, Node b, and Node c now run Oracle Clusterware 10.2.0.2 with Oracle RAC/ASM still at 10.2.0.1 (RDBA1/+ASM1, RDBA2/+ASM2, RDBA3/+ASM3). Standby cluster: Node a and Node b run Oracle Clusterware 10.2.0.2 with Oracle RAC/ASM 10.2.0.1 (SDBA1, SDBA2).

108

Copyright 2007, Oracle. All rights reserved.

Standby Database Upgrade


(Diagram) Standby database upgrade:
1. Initial state: logs ship, both sites at 10.2.0.1.
2. Stop the logical standby apply engine and the standby DB; logs queue on the primary.
3. While logs queue, on the standby site: apply patch 5287523 (10.2.0.1), upgrade the database home to 10.2.0.2, execute catupgrd.sql and utlrp.sql, then apply patch 5287523 (10.2.0.2).
4. Create a database link to the standby using the SYSTEM account, then start the standby DB and the logical standby apply engine; logs ship again (primary at 10.2.0.1, standby at 10.2.0.2).
A SQL*Plus sketch of the catalog upgrade in step 3 follows.
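A minimal SQL*Plus sketch of the catalog upgrade inside step 3 above, run from the upgraded 10.2.0.2 home on one standby instance (the restart sequence is an assumption; follow the patch set README for the authoritative steps):

sqlplus "/ as sysdba"
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql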

109

Copyright 2007, Oracle. All rights reserved.

Switching Over
(Diagram) Switching over: start the standby DB and the logical standby apply engine (logs ship); stop all but one instance on each site and disable their threads; log off users and switch the primary over to a logical standby while the standby switches over to the primary role and users log on (logs queue during the transition); then re-enable the disabled threads and restart the corresponding instances on both sites.

110

Copyright 2007, Oracle. All rights reserved.

Old Primary Database Upgrade


(Diagram) Old primary database upgrade (steps 9 to 12): stop the old primary, now standby, DB (logs queue); apply patch 5287523 (10.2.0.1), upgrade the database home to 10.2.0.2, execute catupgrd.sql and utlrp.sql, and apply patch 5287523 (10.2.0.2); then restart the standby DB and the logical standby apply engine; logs ship with both sites at 10.2.0.2.

111

Copyright 2007, Oracle. All rights reserved.

Switching Back
(Diagram) Switching back (steps 13 to 16): logs ship; stop all but one instance on each site and disable their threads; prepare to switch over to logical standby on one site and to primary on the other; commit the switchover on both sites, restart the logical standby apply engine, and re-enable and restart the stopped instances and threads; logs ship with both sites at 10.2.0.2.

112

Copyright 2007, Oracle. All rights reserved.

Using a Test Environment

The most common cause of downtime is change. Test your changes on a separate test cluster before changing your production environment.

Production cluster

Test cluster

RAC database

RAC database

113

Copyright 2007, Oracle. All rights reserved.

Optional Workshop: Six-Node Cluster


1. Use the same groups as the previous workshop. 2. Reconfigure OCFS2 on all six nodes:
cg1.ocfs2 visible from six nodes cg3.ocfs2 visible from six nodes

3. Install Oracle Clusterware on all six nodes. 4. Install RAC and Database on all six nodes using OCFS2 storage.
(Diagram) Groups 1 and 2 form one six-node cluster and Groups 3 and 4 another; each cluster runs a RAC database with six instances (I1 through I6) on /ocfs2/big.

114

Copyright 2007, Oracle. All rights reserved.

Summary

In this workshop, you should have learned how to:
Install and configure iSCSI storage on both client clusters and Openfiler servers
Install and configure Oracle Clusterware and RAC on more than two clustered nodes
Use Oracle Clusterware to protect a single-instance database
Convert a single-instance database to RAC
Extend Oracle Clusterware and RAC to more than two nodes
Create a RAC primary-physical/logical standby database environment
Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion
115

Copyright 2007, Oracle. All rights reserved.
