Objectives
After completing this workshop, you should be able to:
- Install and configure iSCSI storage on both client clusters and Openfiler servers
- Install and configure Oracle Clusterware and Real Application Clusters (RAC) on more than two clustered nodes
- Use Oracle Clusterware to protect a single-instance database
- Convert a single-instance database to RAC
- Extend Oracle Clusterware and RAC to more than two nodes
- Create a RAC primary-physical/logical standby database environment
- Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion
Assumptions
You are familiar with:
- Linux (all examples and labs are Linux-based)
- Oracle Clusterware
- Oracle Real Application Clusters
- Data Guard
Hardware Organization
[Figure: Classroom hardware layout. Four groups (Group1-Group4), each with three nodes (a, b, c). Each node has two interfaces: ETH0 on the private interconnect switch and ETH1 on the public network. Two Openfiler servers, each with a local 160 GB SCSI disk, provide shared iSCSI storage. Each group uses one volume group (cg1-cg4) built on partitions /dev/sda1-/dev/sda4 (ocr 1 GB, vote 1.5 GB, asm 34 GB, ocfs 36 GB), exposed on the clients under /dev/mapper as ocr1 (265 MB), ocr2 (512 MB), vote1 (265 MB), vote2 (512 MB), vote3 (512 MB), and ocfs2 (36 GB).]
Three students per group. Volume groups: cg[1|2|3|4]. Names must be unique within a classroom:
- Cluster name: XY#CLUST[1|2|3|4]
- Database name: XY#RDB[A|B|C|D]
- Standby database name: XY#SDB[A|C]
Example: Atlanta in the Buckhead office in Room 9:
- AB9CLUST1, AB9CLUST2, AB9CLUST3, AB9CLUST4
- AB9RDBA, AB9RDBB, AB9RDBC, AB9RDBD
- AB9SDBA, AB9SDBC
[Road map: Storage setup ... Rolling upgrade]
1. Openfiler volume setup
2. iSCSI client setup
3. fdisk client setup
4. Multipathing client setup
5. Raw and udev client setup
[Figure: Openfiler storage flow — physical volume → volume group → logical volumes (ocr, vote, asm, ocfs2); the clients restart multipath to pick up changes.]
SAN storage appears as locally attached SCSI disks to the nodes using the storage. The main difference between NAS and SAN is that:
- SAN devices transfer data in disk blocks.
- NAS devices operate at the file level.
[Figure: the cluster nodes see persistent names /dev/mapper/ocr, /dev/mapper/ocr1, /dev/mapper/ocr2, /dev/mapper/vote, /dev/mapper/vote1, /dev/mapper/vote2, and /dev/mapper/vote3. The Openfiler defines the list of allowed nodes; the clients start iSCSI and start multipath.]
Check to see that the volumes are accessible with iscsi-ls and dmesg.
[root@ed-otraclin10a ~]# iscsi-ls
*************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-3(02-May-2006)
*************************************************************
TARGET NAME     : iqn.2006-01.com.oracle.us:cg1.ocr
TARGET ALIAS    :
HOST ID         : 24
BUS ID          : 0
TARGET ID       : 0
TARGET ADDRESS  : 10.156.49.151:3260,1
SESSION STATUS  : ESTABLISHED AT Thu Nov 23 10:07:20 EST 2006
SESSION ID      : ISID 00023d000001 TSIH 600
*************************************************************
Use the fdisk utility to create iSCSI slices within the iSCSI volumes.
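As a sketch, the dialog below creates a single primary partition spanning an iSCSI volume (the device name /dev/sdb is illustrative; check dmesg or iscsi-ls output for the actual name on your node):

[root@ed-otraclin10a ~]# fdisk /dev/sdb
Command (m for help): n          (new partition)
Command action: p                (primary)
Partition number (1-4): 1
First cylinder: <Enter>          (accept default)
Last cylinder: <Enter>           (use the whole volume)
Command (m for help): w          (write the table and exit)
[root@ed-otraclin10a ~]# partprobe /dev/sdb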
Udev Basics
Udev simplifies device management for cold-plug and hot-plug devices. Udev uses hotplug events sent by the kernel whenever a device is added to or removed from the system.
Details about newly added devices are exported to /sys. Udev manages the device entries in /dev by monitoring /sys.
Udev is a standard package in RHEL4. The primary benefits Udev provides for Oracle RAC environments are persistent:
- Disk device naming
- Device ownership and permissions
Udev Configuration
Udev behavior is controlled by /etc/udev/udev.conf. Important parameters include the following:
- udev_root sets the location where udev creates device nodes (/dev is the default).
- default_mode controls the permissions of device nodes.
- default_owner sets the user ID of files.
- default_group sets the group ID of files.
- udev_rules sets the directory for udev rules files (/etc/udev/udev.rules is the default).
- udev_permissions sets the directory for permissions files (/etc/udev/udev.permissions is the default).
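A minimal /etc/udev/udev.conf illustrating these parameters might look like the following (the values shown are typical defaults, not required settings):

udev_root="/dev/"                              (where device nodes are created)
udev_rules="/etc/udev/udev.rules"              (rules file or directory)
udev_permissions="/etc/udev/udev.permissions"  (permissions file or directory)
default_mode="0600"                            (mode when no rule matches)
default_owner="root"                           (default user ID)
default_group="root"                           (default group ID)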
Common parameters for NAME, SYMLINK, and PROGRAM:
- %n is the kernel number; for sda2, %n is 2.
- %k is the kernel name for the device, for example sda.
- %M is the kernel major number for the device.
- %m is the kernel minor number for the device.
- %b is the bus ID for the device.
- %p is the path for the device.
- %c is the string returned by the external program defined by PROGRAM.
- %s{filename} is the content of a sysfs (/sys) attribute.
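As an illustration only (the rule below is hypothetical, not part of the course setup), a rule can give a known disk's partitions stable symlinks by matching on the ID that PROGRAM prints; ownership and permissions stay in the udev.permissions file described earlier. RHEL4-era udev uses "=" for matches; newer udev versions use "==":

BUS="scsi", KERNEL="sd*", PROGRAM="/sbin/scsi_id -g -s /block/%k", \
    RESULT="<wwid-of-your-disk>", NAME="%k", SYMLINK="asmdisk%n"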
Multipathing tools aggregate a device's independent paths into a single logical path. Multipathing is an important aspect of high-availability configurations. RHEL4 incorporates a tool called Device Mapper (DM) to manage multipathed devices. DM depends on the following packages:
- device-mapper
- udev
- device-mapper-multipath
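A quick way to confirm that all three packages are installed (reported versions will vary):

[root@ed-otraclin10a ~]# rpm -q device-mapper udev device-mapper-multipath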
Configuring Multipath
multipaths {
    multipath {
        wwid                 14f70656e66696c000000000001000000d54
        alias                ocr
        path_grouping_policy multibus
        path_checker         readsector0
        path_selector        "round-robin 0"
        failback             manual
        no_path_retry        5
    }
    ...
}
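After editing /etc/multipath.conf, reload the configuration and verify the aliased map (a typical check; output abridged):

[root@ed-otraclin10a ~]# service multipathd restart
[root@ed-otraclin10a ~]# multipath -ll ocr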
OCR and voting disks should use /dev/dm-N or /dev/mapper/<alias>pN path formats.
Openfiler volume setup:
[Figure: logical volumes created on the Openfiler.]
2. Edit the /etc/initiators.deny and /etc/initiators.allow files to restrict access (example below).
3. Execute service iscsi-target status|restart.
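Both files use a hosts.allow-style layout: an iSCSI target name followed by the initiators it applies to. A sketch, assuming the cg1.ocr target from earlier and illustrative node addresses:

/etc/initiators.deny    (deny everyone by default)
iqn.2006-01.com.oracle.us:cg1.ocr ALL

/etc/initiators.allow   (then allow only your group's nodes)
iqn.2006-01.com.oracle.us:cg1.ocr 10.156.49.101, 10.156.49.102, 10.156.49.103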
iSCSI client setup:
2. Check /etc/hosts to make sure your filer is there.
3. Execute service iscsi restart, then iscsi-ls.
4. Ensure that iSCSI is started on boot: chkconfig --add iscsi and chkconfig iscsi on.
5. Determine which logical volumes are attached to your block devices: /var/log/messages and iscsi-ls (see the sketch after this list).
6. Use fdisk to partition each block device (one node):
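A condensed sketch of steps 3-5 on one node (device names depend on discovery order):

[root@ed-otraclin10a ~]# service iscsi restart
[root@ed-otraclin10a ~]# iscsi-ls                    (list targets and sessions)
[root@ed-otraclin10a ~]# chkconfig --add iscsi
[root@ed-otraclin10a ~]# chkconfig iscsi on          (start iSCSI on boot)
[root@ed-otraclin10a ~]# grep -i "attached" /var/log/messages   (map volumes to sdX devices)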
- Two slices for OCR: 256 MB, 512 MB
- Three slices for voting: 256 MB, 512 MB, 512 MB
- Six slices for ASM: 2 x 2000 MB (primary), 4 x 7500 MB (extended)
- OCFS2 uses a whole slice.
[Figure: device naming chain — logical volume → block device → mpath device → mapper device → raw device.]
Raw and udev client setup:
1. Associate /dev/mapper devices to /dev/raw/raw[1-5] for OCR and voting in /etc/sysconfig/rawdevices. (This is not strictly necessary in 10gR2; see the OUI bug.)
2. service rawdevices restart
3. Edit /etc/udev/permissions.d/40rac.permissions:
   raw/raw[1-2]:root:oinstall:660
   raw/raw[3-5]:oracle:oinstall:660
4. Edit /etc/rc.local:
   chown oracle:dba /dev/mapper/asm*
   chmod 660 /dev/mapper/asm*
5. reboot
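Step 1 amounts to entries like the following in /etc/sysconfig/rawdevices (a sketch; the pairings follow the OCR/voting layout used in this workshop):

/dev/raw/raw1    /dev/mapper/ocr1
/dev/raw/raw2    /dev/mapper/ocr2
/dev/raw/raw3    /dev/mapper/vote1
/dev/raw/raw4    /dev/mapper/vote2
/dev/raw/raw5    /dev/mapper/vote3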
[Figure: ASM disk groups DATA and FRA.]
1. Use the provided solution script to set up ssh on all three nodes.
2. Check your interfaces and storage devices on all three nodes:
   ifconfig; ls -al /dev/mapper; raw -qa; ls -al /dev/raw
   In 10gR2, although you can use block devices to store OCR and voting disks, OUI does not accept them.
Inventory = /u01/app/oracle/oraInventory
Home = /u01/crs1020
OCR and voting disks: /dev/raw/raw1,2,3,4,5
VIPCA needs to be executed manually.
dbca automatically creates the listeners and ASM instances. Use initialization parameter files:
$ORACLE_HOME/dbs/init+ASM.ora
1. Install single-instance database software on the first and second nodes only.
2. Create a single-instance database on the first node.
3. Protect it against both instance and node failure by using Oracle Clusterware.
4. Three possible starting scenarios:
   - No software installed
   - Single-instance database running on nonclustered ASM
   - Single-instance database running on a local file system
3. Create an action script for your database: start/check/stop.
4. Store it on both nodes: /u01/crs1020/crs/public.
5. Create the application profile for resource db: ci=30, ra=3 (sudo); steps 5-7 are sketched below.
6. Register the DB with Oracle Clusterware (sudo).
7. Set DB permissions (sudo).
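A minimal sketch of steps 5-7 using the 10gR2 crs_* utilities (the resource name db follows the slide; the action-script name act_db.pl is hypothetical, and exact flags may vary by patch level):

sudo /u01/crs1020/bin/crs_profile -create db -t application \
     -a /u01/crs1020/crs/public/act_db.pl -o ci=30,ra=3     (profile: check every 30 s, 3 restarts)
sudo /u01/crs1020/bin/crs_register db                       (register with Oracle Clusterware)
sudo /u01/crs1020/bin/crs_setperm db -o root                (root owns the resource)
sudo /u01/crs1020/bin/crs_setperm db -u user:oracle:r-x     (let oracle start/stop it)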
[Figure sequence: Oracle Clusterware protecting single-instance database RDBA. Nodes a and b run +ASM1 and +ASM2; RDBA runs on node a. If the RDBA instance fails, Oracle Clusterware restarts it on node a; if node a fails, Oracle Clusterware fails RDBA over to node b, where it runs against +ASM2.]
# Environment settings; please modify and adapt
$ORA_CRS_HOME    = "/u01/crs1020";
$CRS_HOME_BIN    = "/u01/crs1020/bin";
$CRS_HOME_SCRIPT = "/u01/crs1020/crs/public";
$ORACLE_HOME_BIN = "/u01/app/oracle/product/10.2.0/sgldb_1/bin";
$ORACLE_HOME     = "/u01/app/oracle/product/10.2.0/sgldb_1";
$ORA_SID         = "OL8RDBA";
$ORA_USER        = "oracle";
# Stop database
if ($command eq "stop") {
    system("su - $ORA_USER <<EOF
export ORACLE_SID=$ORA_SID
export ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME_BIN/sqlplus /nolog
connect / as sysdba
shutdown immediate
quit
EOF");
}

# Check database
if ($command eq "check") {
    check();
}
sub check {
    my ($check_proc, $process) = @_;
    $process    = "ora_pmon_$ORA_SID";
    $check_proc = qx(ps -aef | grep ora_pmon_$ORA_SID | grep -v grep | awk '{print \$8}');
    chomp($check_proc);
    if ($process eq $check_proc) {
        exit 0;
    } else {
        exit 1;
    }
}
1. Use dbca from the single-instance home to create a database template including files.
2. Propagate the template files from the single-instance home to the RAC home.
3. Use dbca from the single-instance home to remove the existing database.
4. Use dbca from the RAC home to create a new database with the same name by using the new template (steps 1 and 4 are sketched below).
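As a sketch only, steps 1 and 4 can be scripted with dbca in silent mode (the template name my_template is hypothetical; verify the exact flags against dbca -help for your release):

dbca -silent -createCloneTemplate -sourceDB OL8RDBA -templateName my_template     (step 1)
dbca -silent -createDatabase -templateName my_template.dbc \
     -gdbName OL8RDBA -sid OL8RDBA                                                (step 4)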
Extend your cluster database to the third node of your group:
1. Use addNode.sh to add Oracle Clusterware to the third node (sketched below).
2. Add the ONS configuration of your third node to the OCR.
3. Use addNode.sh to add RAC to the third node.
4. Use dbca to extend your database to the third node.
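A hedged sketch of step 1 in silent mode (the node names are illustrative; the variable names are those addNode.sh expects for the 10gR2 Clusterware home):

cd /u01/crs1020/oui/bin          (run from an existing node as the oracle user)
./addNode.sh -silent \
  "CLUSTER_NEW_NODES={node3}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={node3-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"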
Use the racgons add_config command to add the new node's ONS configuration information to the OCR.
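For example (the node name and remote ONS port are illustrative; 6200 is the commonly used default):

/u01/crs1020/bin/racgons add_config node3:6200    (run from an existing node)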
[Figure: Data Guard redo transport — LGWR on each primary instance (for example, primary instance B) ships redo to the RFS process on the standby; archived logs land in the Flash Recovery Area.]
2. Install OCFS2 on the second cluster.
3. Create the physical standby database:
   The second group uses OCFS2 storage for the standby.
5. mount /ocfs2
1. Create the /opt/standbydb/stage directory on one node on both the primary and standby sites.
2. Change your redo log group configuration on the primary DB to use 10 MB redo logs.
3. Put the primary DB in ARCHIVELOG mode and FORCE LOGGING mode.
4. Back up the primary DB PFILE to the stage directory.
5. Back up the primary DB plus archive logs to the stage directory.
6. Back up the primary DB control file for standby to the stage directory (steps 5 and 6 are sketched with RMAN after this list).
7. Back up the primary network files to the stage directory.
8. Copy the stage directory to the first node of your standby site.
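A minimal RMAN sketch of steps 5 and 6 (the format strings are illustrative):

RMAN> BACKUP DATABASE FORMAT '/opt/standbydb/stage/%U';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/opt/standbydb/stage/arch_%U';
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/opt/standbydb/stage/stby_ctl_%U';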
10. Register the standby DB and instances as Clusterware resources.
11. Configure primary init parameters.
12. Add standby redo log files on the primary site.
13. Check that propagation is working.
*.log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='service=OL8RDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8RDBA'
*.db_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.log_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.standby_file_management=auto
*.fal_server='OL8RDBA'
*.fal_client='OL8SDBA'
*.service_names='OL8SDBA'
*.db_unique_name=OL8SDBA
log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
log_archive_dest_2='service=OL8SDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8SDBA'
db_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
log_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
standby_file_management=auto
fal_server='OL8SDBA'
fal_client='OL8RDBA'
8. Start up the second instance.
9. ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
10. Check propagation (see the query below).
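One way to check propagation (step 10) is to query the logical standby progress view on the standby; APPLIED_SCN should keep climbing toward NEWEST_SCN while logs ship:

SQL> SELECT applied_scn, newest_scn FROM dba_logstdby_progress;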
Primary site:
standby_archive_dest='/ocfs2/logical_arch/'

Standby site:
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_1=enable
log_archive_dest_2='SERVICE=OL8RDBA VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) LGWR SYNC AFFIRM DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_2=enable
log_archive_dest_3='LOCATION=/ocfs2/logical_arch/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_3=enable
log_archive_dest_10=''
parallel_max_servers=9
The OUI employs a timed lock on the inventory list stored on a node. The lock prevents an installation from changing a list being used concurrently by another installation. If a conflict is detected, the second installation is suspended and the following message appears:
"Unable to acquire a writer lock on nodes ex0044. Restart the install after verifying that there is no OUI session on any of the selected nodes."
[Figure sequence: rolling patch — clients connect to node A while node B is stopped and patched; clients then move to node B while node A is patched; when both nodes run the new version, the upgrade is complete.]
1. Both groups: Perform a rolling upgrade of Oracle Clusterware to 10.2.0.2 (one node at a time).
2. Upgrade your logical standby database to 10.2.0.2.
3. Switch over.
4. Upgrade your new logical standby database to 10.2.0.2.
5. Switch back.
[Figure: rolling Oracle Clusterware upgrade — on node a, stop the stack (database instance and +ASM1), run root102.sh as root, and restart; then repeat on node b.]
1. Stop the logical standby apply engine.
2. Optional: Stop the logical standby database.
3. Unzip p4547817_10202_LINUX.zip.
4. Run runInstaller from the Disk1 directory:
   - Choose your Oracle Clusterware home installation.
   - Choose both nodes.
5. Run root102.sh on each node, one node at a time (see the sketch below).
   Note: There is no need to restart the logical standby database.
6. Restart the logical standby apply engine.
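A hedged sketch of the per-node root step in step 5 (stop the local database and ASM instances first; the exact preliminaries are in the patch-set README):

/u01/crs1020/bin/crsctl stop crs          (stop the local Clusterware stack, as root)
/u01/crs1020/install/root102.sh           (apply the 10.2.0.2 patch on this node)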
[Figure sequence: upgrading the logical standby (steps 1-4). Stop the logical standby apply engine and the standby DB; redo logs queue on the primary (both sites at 10.2.0.1). On the standby: 1) apply patch 5287523 (10.2.0.1), 2) upgrade the database home to 10.2.0.2, 3) execute catupgrd.sql and utlrp.sql, 4) apply patch 5287523 (10.2.0.2). Create a database link to the standby using the SYSTEM account, then start the standby DB and the logical standby apply engine so the queued logs ship.]
Switching Over
[Figure sequence: switching over (steps 5-8). Start the standby DB and the logical standby apply engine; logs ship from the 10.2.0.1 primary to the 10.2.0.2 standby. During the switchover, logs queue while the roles reverse and clients move to the new primary.]
[Figure sequence: upgrading the new logical standby, that is, the old primary (steps 9-12). Logs queue while it is upgraded: 1) apply patch 5287523 (10.2.0.1), 2) upgrade the database home to 10.2.0.2, 3) execute catupgrd.sql and utlrp.sql, 4) apply patch 5287523 (10.2.0.2). Logs then ship again, with both sites at 10.2.0.2.]
Switching Back
[Figure sequence: switching back (steps 13-16). With both sites at 10.2.0.2 and logs shipping, reverse the roles again: commit to switch over to the logical standby, restart the logical standby apply engine, and restart/enable the second instance and thread. Logs ship normally with both sites at 10.2.0.2.]
The most common cause of downtime is change. Test your changes on a separate test cluster before changing your production environment.
[Figure: identical production and test clusters, each running a RAC database.]
3. Install Oracle Clusterware on all six nodes.
4. Install RAC and the database on all six nodes using OCFS2 storage.
[Figure: Groups 1 and 2 form one six-node cluster running instances I1-I6 of a RAC DB on /ocfs2/big; Groups 3 and 4 form the other.]
Summary
In this workshop, you should have learned how to:
- Install and configure iSCSI storage on both client clusters and Openfiler servers
- Install and configure Oracle Clusterware and RAC on more than two clustered nodes
- Use Oracle Clusterware to protect a single-instance database
- Convert a single-instance database to RAC
- Extend Oracle Clusterware and RAC to more than two nodes
- Create a RAC primary-physical/logical standby database environment
- Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion