
Oracle Maximum Availability Architecture

Ricardo Portilho Proni


ricardo@nervinformatica.com.br

This work is licensed under the
Creative Commons Attribution-NoDerivs 3.0 Brazil license.
To view a copy of this license, visit
http://creativecommons.org/licenses/by-nd/3.0/br/.

1
High Availability

2
Characteristics of High Availability
• Reliability
• Recoverability
• Timely error detection
• Continuous operation

3
High Availability Analysis
• Business impact analysis
• Cost of downtime
• Recovery time objective (RTO)
• Recovery point objective (RPO)
• Manageability goal
• Total cost of ownership (TCO)
• Return on investment (ROI)
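As a rough, hypothetical illustration, an availability target translates directly into a yearly downtime budget:
99.9% ("three nines") -> about 8.8 hours of downtime per year
99.99% ("four nines") -> about 53 minutes of downtime per year
99.999% ("five nines") -> about 5.3 minutes of downtime per year
A Tier 1 system with a 1-hour RTO and a 5-minute RPO must therefore be restored within 1 hour and may lose at most the last 5 minutes of committed transactions.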

4
Systems and High Availability
• Tier 1 (Billing, Sales)
• Tier 2 (Purchasing, Inventory)
• Tier 3 (BI, Development)

5
Costs and High Availability
• Maximum tolerated downtime.
• Maximum tolerated outage frequency.
• Easily measurable costs (lost sales, idle employees, contractual penalties)
• Hard-to-measure costs (lawsuits)
• Non-measurable costs (negative publicity, angry customers)

6
Deploying High Availability

7
Oracle Solutions for High Availability
• Fast-Start Fault Recovery
• Oracle Restart
• Oracle Real Application Clusters and Oracle Clusterware
• Oracle RAC One Node
• Oracle Data Guard
• Oracle GoldenGate / Oracle Streams
• Oracle Flashback Technology
• Oracle Automatic Storage Management
• Fast Recovery Area
• Recovery Manager
• Data Recovery Advisor
• Oracle Secure Backup
• Oracle Security Features
• LogMiner
• Oracle Exadata Storage Server Software (Exadata Cell)
• Oracle Exadata Database Machine
• Oracle Database File System (DBFS)
• Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
• Client Failover
• Automatic Block Repair
• Corruption Prevention, Detection, and Repair

8
Planned Downtime
• Operating system and hardware upgrades -> Oracle RAC
• Oracle Database patches -> Oracle RAC
• Oracle Grid Infrastructure upgrades and patches -> Oracle RAC
• Storage Migration -> Oracle ASM
• Migrating to Exadata Storage -> Oracle MAA best practices
• Upgrading Exadata Storage -> Exadata Patch Manager
• Migrating a single-instance database to Oracle RAC -> Oracle Grid Infrastructure
• Migrating to Oracle ASM -> Oracle Data Guard
• Migrating a single-instance database to Oracle RAC -> Oracle Data Guard
• Patch set and database upgrades -> Oracle Data Guard using SQL Apply
• Oracle interim patches, Oracle clusterware upgrades and patches, Oracle ASM
upgrades, Operating System and Hardware Upgrades -> Oracle Data Guard Standby-First
Patch Apply
• Migration across Windows and Linux -> Oracle Data Guard
• Platform migration across the same endian format platforms -> Transportable database
• Platform migration across different endian format platforms -> Transportable tablespace
• Patch set and database upgrades, platform migration, rolling upgrades, and when
different character sets are required -> Oracle GoldenGate and Oracle Streams
• Application upgrades -> Online Application Maintenance and Upgrades

9
Unplanned Downtime
• Site Failures -> Oracle Data Guard
• Site Failures -> Oracle GoldenGate and Oracle Streams
• Site Failures -> Recovery Manager
• Computer Failures -> Oracle Real Application Clusters and Oracle Clusterware
• Computer Failures -> Oracle RAC One Node
• Computer Failures -> Fast-Start Fault Recovery
• Computer Failures -> Oracle Data Guard
• Computer Failures -> Oracle GoldenGate and Oracle Streams
• Storage Failures -> Oracle Automatic Storage Management
• Storage Failures -> Oracle Data Guard
• Storage Failures -> RMAN with Fast Recovery Area and Oracle Secure Backup
• Storage Failures -> Oracle GoldenGate and Oracle Streams
• Data Corruption -> Oracle Exadata Storage Server Software (Exadata Cell) and Oracle ASM
• Data Corruption -> Corruption Prevention, Detection, and Repair
• Data Corruption -> Data Recovery Advisor and RMAN with Fast Recovery Area
• Data Corruption -> Oracle Data Guard
• Data Corruption -> Oracle GoldenGate and Oracle Streams
• Human Errors -> Oracle Security Features
• Human Errors -> Oracle Flashback Technology
• Human Errors -> LogMiner
• Lost writes -> Oracle Data Guard, RMAN, DB_LOST_WRITE_PROTECT
• Lost writes -> Oracle Data Guard Oracle Exadata Storage Server Software (Exadata Cell)
• Hangs or slowdowns -> Oracle Database and Oracle Enterprise Manager

10
More information
High Availability Overview:
http://docs.oracle.com/database/121/HAOVW/toc.htm

High Availability Best Practices:
http://docs.oracle.com/database/121/HABPT/toc.htm

11
Scenario 1: Oracle RAC + Oracle Data Guard
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Data Guard Physical Standby: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
Oracle Client: observer-rac01 / observer-rac02 / observer-rac03 / observer-rac04

12
Scenario 2: Oracle RAC + Oracle GoldenGate
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Database: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

13
Scenario 3: Oracle RAC Extended
Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle RAC: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
NFS: nerv15

14
Lab 1 – OEL 6 Installation
Hands On !

15
Lab 1.1: OEL 6 Installation
On machines nerv01, nerv02, and nerv11, install OEL.

- 1st screen: Install or upgrade an existing system
- 2nd screen: Skip
- 3rd screen: Next
- 4th screen: English (English), Next
- 5th screen: Brazilian ABNT2, Next
- 6th screen: Basic Storage Devices, Next
- 7th screen: Fresh Installation, Next
- 8th screen: nerv01.localdomain, Next
- 9th screen: America/Sao Paulo, Next
- 10th screen: Nerv2015, Nerv2015, Next
- 11th screen: Create Custom Layout, Next

16
Lab 1.2: OEL 6 Installation
- 12th screen: Create the partitions as below, then Next:
sda1 1024 MB /boot
sda2 100000 MB /
sda3 20000 MB /home
sda5 16384 MB swap
sda6 10000 MB /var
sda7 10000 MB /tmp
sda8 Remaining space /u01

- 13th screen: Format
- 14th screen: Write changes to disk
- 15th screen: Next
- 16th screen: Minimal
- 17th screen: Reboot
- Remove the DVD.

17
Lab 2 – OEL 6 Configuration
Hands On !

18
Lab 2.1 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, configure the network interfaces.

19
Lab 2.2 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, update the operating system and install the prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-12cR1-preinstall
# yum -y install oracleasm-support
# yum -y install unzip wget iscsi-initiator-utils java-1.7.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686

# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm
# rpm -ivh oracleasmlib-2.0.12-1.el6.x86_64.rpm

On machines nerv01, nerv02, and nerv11, remove the DNS 8.8.8.8 from the eth0 network interface.

On machines nerv01, nerv02, and nerv11, change the following line in the /etc/fstab file.
tmpfs /dev/shm tmpfs defaults,size=4g 0 0
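To apply the new /dev/shm size without waiting for the reboot, the filesystem can be remounted and checked:
# mount -o remount /dev/shm
# df -h /dev/shm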

20
Lab 2.3 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, APPEND TO THE END of the /etc/hosts file:
# Public
192.168.0.101 nerv01.localdomain nerv01
192.168.0.102 nerv02.localdomain nerv02
192.168.0.121 nerv11.localdomain nerv11
# Private
192.168.1.101 nerv01-priv.localdomain nerv01-priv
192.168.1.102 nerv02-priv.localdomain nerv02-priv
192.168.1.121 nerv11-priv.localdomain nerv11-priv
# Virtual
192.168.0.111 nerv01-vip.localdomain nerv01-vip
192.168.0.112 nerv02-vip.localdomain nerv02-vip
192.168.0.131 nerv11-vip.localdomain nerv11-vip
# Storage
192.168.0.201 nerv09.localdomain nerv09
192.168.0.202 nerv10.localdomain nerv10
# Client
192.168.0.191 observer-rac01.localdomain observer-rac01
192.168.0.195 nerv15.localdomain nerv15

21
Lab 2.4 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle (Set the oracle user's password to: Nerv2015)
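To confirm the oracle user now belongs to all of the groups created above:
# id oracle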

22
Lab 2.5 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, change SELinux from "enforcing" to "permissive".
# vi /etc/selinux/config

On machines nerv01, nerv02, and nerv11, disable the firewall.


# chkconfig iptables off
# chkconfig ip6tables off

On machines nerv01, nerv02, and nerv11, disable NTP.


# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot
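SELinux can also be switched to permissive immediately, without waiting for the reboot (the config file change above is still required for persistence):
# setenforce 0
# getenforce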

23
Lab 2.6 – OEL 6 Configuration
On machines nerv01 and nerv02, as the oracle user, APPEND AT THE END of the /home/oracle/.bash_profile file the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

24
Lab 2.7 – OEL 6 Configuration
On machine nerv11, as the oracle user, APPEND AT THE END of the /home/oracle/.bash_profile file the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv11.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/oracle/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

25
Lab 3 – Storage
Hands On !

26
Lab 3.1 – Storage
On machines nerv09 and nerv10, create 3 partitions of 5 GB and 4 of 10 GB.

On machines nerv09 and nerv10, configure the iSCSI server.


# yum -y install scsi-target-utils
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
backing-store /dev/sda5
initiator-address 192.168.0.101
initiator-address 192.168.0.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
backing-store /dev/sda6
initiator-address 192.168.0.101
initiator-address 192.168.0.102
</target>
...

# service tgtd start


# chkconfig tgtd on
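To confirm the targets were exported with the expected backing stores and ACLs:
# tgtadm --lld iscsi --mode target --op show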

27
Lab 3.2 – Storage
On machines nerv01, nerv02, and nerv11, enable the iSCSI Initiator service.
# chkconfig iscsid on

On machines nerv01, nerv02, and nerv11, discover the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l

On machines nerv01, nerv02, and nerv11, keep ONLY the new disks in the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-07
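After editing the file, restarting the initiator and listing the active sessions confirms that every LUN is logged in:
# service iscsi restart
# iscsiadm -m session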

28
Lab 3.3 – Storage
On machines nerv01, nerv02, and nerv11, verify that the disks were added.
# fdisk -l

On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdb
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...

On machine nerv02, run detection of the new disks.
# partprobe /dev/sdb
...
# fdisk -l

29
Lab 3.4 – Storage
On machines nerv01, nerv02, and nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
asmadmin <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status

On machines nerv01 and nerv11, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK04 /dev/sde1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdh1

On machine nerv02, run detection of the created disks.
# /etc/init.d/oracleasm scandisks

30
Lab 3.5 – Storage
On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07

On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle asmadmin 8, 33 Mar 18 08:25 DISK01
brw-rw----. 1 oracle asmadmin 8, 65 Mar 18 08:26 DISK02
brw-rw----. 1 oracle asmadmin 8, 81 Mar 18 08:26 DISK03
brw-rw----. 1 oracle asmadmin 8, 49 Mar 18 08:26 DISK04
brw-rw----. 1 oracle asmadmin 8, 97 Mar 18 08:26 DISK05
brw-rw----. 1 oracle asmadmin 8, 113 Mar 18 08:26 DISK06
brw-rw----. 1 oracle asmadmin 8, 17 Mar 18 08:26 DISK07

31
Lab 4 – Grid Infrastructure
Hands On !

32
Lab 4.1 – Grid Infrastructure
On machine nerv01, as the oracle user, unzip and run the Grid Infrastructure installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_grid_1of2.zip
$ unzip -q linuxamd64_12102_grid_2of2.zip

On machines nerv01 and nerv02, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm

On machine nerv01, start the Grid Infrastructure installation.
$ cd grid
$ ./runInstaller

33

Labs 4.2 – 4.31 – Grid Infrastructure
(Slides 34–63: installer screenshots only; follow the graphical installer screens.)
Lab 4.32 – Grid Infrastructure
On machine nerv11, as the oracle user, unzip and run the Grid Infrastructure installer.
[oracle@nerv01 ~]$ ssh -CX oracle@nerv11
[oracle@nerv11 ~]$ unzip -q linuxamd64_12102_grid_1of2.zip
[oracle@nerv11 ~]$ unzip -q linuxamd64_12102_grid_2of2.zip

On machine nerv11, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm

On machine nerv11, start the Grid Infrastructure installation.
$ cd grid
$ ./runInstaller

64

Labs 4.33 – 4.48 – Grid Infrastructure
(Slides 65–80: installer screenshots only.)
Lab 5 – Oracle Database Software
Hands On !

81
Lab 5.1 – Oracle Database Software
On machine nerv01, as the oracle user, unzip and run the Oracle Database Software installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_database_1of2.zip
$ unzip -q linuxamd64_12102_database_2of2.zip
$ cd database
$ ./runInstaller

82

Labs 5.2 – 5.16 – Oracle Database Software
(Slides 83–97: installer screenshots only.)
Lab 5.17 – Oracle Database Software
On machine nerv11, as the oracle user, unzip and run the Oracle Database Software installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_database_1of2.zip
$ unzip -q linuxamd64_12102_database_2of2.zip
$ cd database
$ ./runInstaller

98

Labs 5.18 – 5.29 – Oracle Database Software
(Slides 99–110: installer screenshots only.)
Lab 6 – ASM
Hands On !

111
Lab 6.1 – ASM
On machine nerv01, configure the remaining ASM Disk Groups.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04',
'ORCL:DISK05';
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06',
'ORCL:DISK07';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
$ srvctl start diskgroup -g DATA -n nerv02
$ srvctl enable diskgroup -g DATA -n nerv02
$ srvctl start diskgroup -g FRA -n nerv02
$ srvctl enable diskgroup -g FRA -n nerv02
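A quick check that both Disk Groups are mounted with the expected size:
SQL> SELECT NAME, STATE, TYPE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;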

112
Lab 6.2 – ASM
On machine nerv11, configure the remaining ASM Disk Groups.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04',
'ORCL:DISK05';
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06',
'ORCL:DISK07';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';

113
Lab 7 – Oracle Database
Hands On !

114
Lab 7.1 – Oracle Database
On machine nerv01, run DBCA.

115

Labs 7.2 – 7.15 – Oracle Database
(Slides 116–129: DBCA screenshots only.)
Lab 8 – RAC + Data Guard
Hands On !

130
Lab 8.1 – Data Guard
On machines nerv01, nerv02, and nerv11, set the ORACLE_HOME tnsnames.ora as below.
PROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL)
)
)

DR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = nerv11.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DR)
)
)
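On each machine, both aliases can be validated before continuing:
$ tnsping PROD
$ tnsping DR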

131
Lab 8.2 – Data Guard
On machines nerv01, nerv02, and nerv11, append the lines below to the end of the $GRID_HOME/network/admin/listener.ora file.
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=ORCL_DGMGRL)
(ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1)
(SID_NAME=ORCL1)
)
)

On machines nerv01, nerv02, and nerv11, test the new LISTENER configuration.
$ export ORACLE_HOME=$GRID_HOME
$ $GRID_HOME/bin/lsnrctl status
$ srvctl stop listener
$ srvctl start listener
$ $GRID_HOME/bin/lsnrctl status

132
Lab 8.3 – Data Guard
On machine nerv01, enable the Data Guard prerequisites.
$ export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
$ export ORACLE_SID=ORCL1
$ srvctl stop database -d ORCL
$ srvctl start instance -d ORCL -i ORCL1 -o mount
$ sqlplus / AS SYSDBA
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
$ srvctl start instance -d ORCL -i ORCL2

On machine nerv01, change the SNAPSHOT CONTROLFILE location.
$ rman target /
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf.f';
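To confirm the new location was recorded:
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;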

133
Lab 8.4 – Data Guard
On machine nerv01, create a STANDBY CONTROLFILE.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stb.ctl';

On machine nerv01, take a backup of the database and the archived logs.
RMAN> BACKUP DATABASE FORMAT '/home/oracle/Backup_Banco_%U.rman';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/home/oracle/Backup_Archives_%U.rman';

On machine nerv01, copy the PASSWORD FILE to machine nerv11.
ASMCMD [+] > ls -l DATA/ORCL/PASSWORD/
ASMCMD [+] > pwcopy DATA/ORCL/PASSWORD/pwdorcl.123 /home/oracle/orapwORCL
$ scp /home/oracle/orapwORCL nerv11:$ORACLE_HOME/dbs/orapwORCL

On machine nerv01, create a PFILE and copy it to machine nerv11.
SQL> CREATE PFILE='/home/oracle/initORCL.ora' FROM SPFILE;
$ scp /home/oracle/initORCL.ora nerv11:$ORACLE_HOME/dbs/initORCL.ora

On machine nerv01, copy the STANDBY CONTROLFILE to machine nerv11.
$ scp /home/oracle/stb.ctl nerv11:/home/oracle/

On machine nerv01, copy the BACKUP files to machine nerv11.
$ scp /home/oracle/Backup_*.rman nerv11:/home/oracle/

134
Lab 8.5 – Data Guard
On machine nerv11, remove the following lines from the initORCL.ora file.
ORCL1.*
ORCL2.*
*.cluster_database=true

On machine nerv11, add the following line to the initORCL.ora file.
*.undo_tablespace='UNDOTBS1'

On machine nerv11, create the ADUMP directory referenced in initORCL.ora.
$ mkdir -p /u01/app/oracle/admin/ORCL/adump

On machine nerv11, create an SPFILE from the initORCL.ora file.
SQL> CREATE SPFILE FROM PFILE;
SQL> STARTUP NOMOUNT;

135
Lab 8.6 – Data Guard
On machine nerv11, change the DB_UNIQUE_NAME parameter.
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='DR' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;

On machine nerv11, restore the CONTROLFILE.
RMAN> RESTORE CONTROLFILE FROM '/home/oracle/stb.ctl';
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

On machine nerv11, fix the RMAN metadata.
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED BACKUP;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
RMAN> CATALOG START WITH '/home/oracle/Backup';

On machine nerv11, restore the database.
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
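After recovery completes, a simple sanity query on the standby shows which archived logs were applied:
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;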

136
Lab 8.7 – Data Guard
On machine nerv11, enable the Data Guard prerequisites.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FLASHBACK ON;

On machine nerv11, add the database to the Grid.
$ srvctl add database -d ORCL -oraclehome /u01/app/oracle/product/12.1.0.2/db_1
$ srvctl start database -d ORCL
$ srvctl modify database -db ORCL -pwfile /u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwORCL

Test PASSWORD FILE authentication between the three machines.
[oracle@nerv01 ~]$ sqlplus SYS/Nerv2015@DR AS SYSDBA
[oracle@nerv02 ~]$ sqlplus SYS/Nerv2015@DR AS SYSDBA
[oracle@nerv11 ~]$ sqlplus SYS/Nerv2015@PROD AS SYSDBA

On machines nerv01 and nerv11, configure the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 = '+FRA/DR1.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 = '+FRA/DR2.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;

137
Lab 8.8 – Data Guard
On machine nerv01, create the Data Guard Broker configuration.
$ dgmgrl SYS/Nerv2015@PROD
DGMGRL> CREATE CONFIGURATION 'DRSolution' AS PRIMARY DATABASE IS ORCL
CONNECT IDENTIFIER IS PROD;

On machine nerv01, add machine nerv11 to the configuration.
DGMGRL> ADD DATABASE DR AS CONNECT IDENTIFIER IS DR;

On the three machines, follow the Alert Logs.

On machine nerv01, enable the configuration.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;

138
Lab 8.9 – Data Guard
On machine nerv11, create STANDBY LOGFILEs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
SESSION;

On machine nerv01, create STANDBY LOGFILEs.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
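On either side, the standby redo logs just created can be listed to confirm groups and threads:
SQL> SELECT GROUP#, THREAD#, BYTES, STATUS FROM V$STANDBY_LOG;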

139
Lab 8.10 – Data Guard
On machine nerv01, view the details of a database.
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'ArchiveLagTarget'=600;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'StandbyFileManagement'=AUTO;
DGMGRL> SHOW DATABASE ORCL 'ArchiveLagTarget';
DGMGRL> SHOW DATABASE ORCL 'StandbyFileManagement';

On machine nerv01, change the Protection Mode.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

On machine nerv01, verify that the Protection Mode was changed.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> SHOW DATABASE VERBOSE DR;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL1" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL2" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL" ON DATABASE DR;

140
Lab 8.11 – Data Guard
On machine nerv01, perform a SWITCHOVER to machine nerv11, following the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO DR;

On machine nerv11, perform a SWITCHBACK to machine nerv01, following the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO ORCL;

141
Lab 8.12 – Data Guard
Shut down machines nerv01 and nerv02.

Perform a FAILOVER to machine nerv11.


$ dgmgrl SYS/Nerv2015@DR
DGMGRL> FAILOVER TO DR;
DGMGRL> SHOW CONFIGURATION;

Power machines nerv01 and nerv02 back on, and on machine nerv11, run the REINSTATE.
DGMGRL> REINSTATE DATABASE ORCL;

On machine nerv11, perform the SWITCHOVER.
DGMGRL> SWITCHOVER TO ORCL;

142
Lab 9 – Fast-Start Failover
Hands On !

143
Lights out administration

144
Lab 9.1: Fast-Start Failover
On machine nerv01, configure Fast-Start Failover.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 600;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverAutoReinstate = TRUE;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = TRUE;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY FastStartFailoverTarget=DR;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;
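The failover readiness of each database can also be checked with the 12c DGMGRL VALIDATE command:
DGMGRL> VALIDATE DATABASE ORCL;
DGMGRL> VALIDATE DATABASE DR;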

145
Lab 9.2: Fast-Start Failover
On machine observer-rac01, start the observer.
$ dgmgrl -logfile /home/oracle/observer.log SYS/Nerv2015@DR
DGMGRL> START OBSERVER;

Shut down machines nerv01 and nerv02, and wait for the FAILOVER.

Power machines nerv01 and nerv02 back on.

Wait for the REINSTATE.

Perform the SWITCHOVER.

146
Lab 9.3: Fast-Start Failover
On machine nerv01, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv11, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv01, start the two Services.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLTP
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP

On machine nerv01, generate some Archived Redo Logs and wait for them to replicate to DR.

On machine nerv01, stop the OLAP Service.


$GRID_HOME/bin/srvctl stop service -d ORCL -s OLAP

On machine nerv11, start the OLAP Service.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP

147
Lab 9.4: Fast-Start Failover
On machine observer-rac01, add these two entries to tnsnames.ora, and test the connection after another Failover.

OLTP_RAC01 =
(DESCRIPTION=
(LOAD_BALANCE=OFF)
(FAILOVER=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=OLTP))
)

OLAP_RAC01 =
(DESCRIPTION=
(LOAD_BALANCE=OFF)
(FAILOVER=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=OLAP))
)
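A quick connection test (using the SYS credentials from the previous labs) shows which instance is currently serving each Service:
$ sqlplus SYS/Nerv2015@OLTP_RAC01 AS SYSDBA
SQL> SELECT INSTANCE_NAME, HOST_NAME FROM V$INSTANCE;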

148
Lab 10 – ACFS
Hands On !

149
ACFS Advantages
• Mirroring
• Striping
• Replication
• Snapshots
• High Availability

150
ACFS Disadvantages
• Recent product (11gR2)
• Complex configuration
• Kernel dependency
• Dependency on Grid Infrastructure components
• Still not supported on UEK R3 without a patch

Bug ID 16318126
Oracle ASM Cluster File System (ACFS) is currently not supported for use with UEK R3.
http://docs.oracle.com/cd/E37670_01/E51472/E51472.pdf

151
Lab 10.1: NFS
On machine nerv09, create 1 directory.
# mkdir /shared_ogg

On machine nerv09, add to the /etc/exports file:
/shared_ogg *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machine nerv09, install and restart the NFS Server:
# yum -y install nfs-utils
# service rpcbind start; service nfs start; chkconfig rpcbind on; chkconfig nfs on

On machines nerv01 and nerv02, add the line below to the /etc/fstab file.
nerv09:/shared_ogg /u01/shared_ogg nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0

On machine nerv01, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
# mkdir /u01/shared_ogg/rac01
# chown -R oracle:oinstall /u01/shared_ogg/rac01

On machine nerv02, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg

152
Lab 11 – Unidirectional GoldenGate
Hands On !

153
Lab 11.1: Unidirectional GoldenGate
On machine nerv11, create a new database.
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
-gdbName BI -sid BI \
-sysPassword Nerv2015 -systemPassword Nerv2015 \
-storageType ASM -asmsnmpPassword Nerv2015 \
-diskGroupName DATA -recoveryAreaDestination FRA \
-nodelist nerv11 \
-characterSet WE8ISO8859P15 -listeners LISTENER \
-memoryPercentage 20 -sampleSchema true -emConfiguration NONE \
-continueOnNonFatalErrors false

On machine nerv11, put the BI database in ARCHIVELOG mode.
$ export ORACLE_SID=BI
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE OPEN;

154
Lab 11.2: GoldenGate Installation
On machines nerv01 and nerv11, start the GoldenGate installation.
[oracle@nerv01 ~]$ unzip -q fbo_ggs_Linux_x64_shiphome.zip
[oracle@nerv01 ~]$ cd fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@nerv01 Disk1]$ ./runInstaller

155
Lab 11.3: GoldenGate Installation

156
Lab 11.4: GoldenGate Installation
On machine nerv01, install into /u01/shared_ogg/rac01
On machine nerv11, install into /u01/app/oracle/product/12.1.0.2/ogg

157
Lab 11.5: GoldenGate Installation

158
Lab 11.6: GoldenGate Installation

159
Lab 11.7: Unidirectional GoldenGate
On machine nerv01, verify that the MANAGER is running.
$ cd /u01/shared_ogg/rac01
$ ./ggsci
GGSCI> info all

On machine nerv11, verify that the MANAGER is running.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ ./ggsci
GGSCI> info all

160
Lab 11.8: Unidirectional GoldenGate
On machine nerv01, enable the GoldenGate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;

On machine nerv01, create the GoldenGate user.
$ cd /u01/shared_ogg/rac01/
$ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2015 DEFAULT TABLESPACE OGG
TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
@marker_setup.sql
OGG <enter>
@ddl_setup.sql
OGG <enter>
@role_setup.sql
OGG <enter>
@ddl_enable.sql

161
Lab 11.9: Unidirectional GoldenGate
On machine nerv11, enable the GoldenGate prerequisites.
$ export ORACLE_SID=BI
$ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;

On machine nerv11, create the GoldenGate user.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2015 DEFAULT TABLESPACE OGG
TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
@marker_setup.sql
OGG <enter>
@ddl_setup.sql
OGG <enter>
@role_setup.sql
OGG <enter>
@ddl_enable.sql

162
Lab 11.10: Unidirectional GoldenGate
On machine nerv01, add the EXTRACT process.
GGSCI> add extract ext1, tranlog, THREADS 2, begin now
GGSCI> add exttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt, extract ext1

On machine nerv01, edit the EXTRACT process parameter file.
GGSCI> edit params ext1
extract ext1
userid OGG@ORCL, password Nerv2015
rmthost nerv11, mgrport 7809
rmttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2015
ddl include mapped objname SCOTT.*;
table SCOTT.*;

163
Lab 11.11: Unidirectional GoldenGate
On machine nerv11, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv11, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2015 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv11, add the REPLICAT process.
GGSCI> add replicat rep1, exttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv11, edit the REPLICAT process parameter file.
GGSCI> edit params rep1
replicat rep1
ASSUMETARGETDEFS
userid OGG@BI, password Nerv2015
discardfile /u01/app/oracle/product/12.1.0.2/ogg/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;

164
Lab 11.12: Unidirectional GoldenGate
On machines nerv01, nerv02, and nerv11, add the ASM entry to the ORACLE_HOME tnsnames.ora.
ASM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = nerv01.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = +ASM)
)
)

On machines nerv01 and nerv11, unlock the SCOTT user.
SQL> ALTER USER SCOTT IDENTIFIED BY TIGER ACCOUNT UNLOCK;

165
Lab 11.13: Unidirectional GoldenGate
On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/12.1.0.2/ogg/ggserr.log

On machine nerv01, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext1
GGSCI> info all

On machine nerv11, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep1
GGSCI> info all

Test replication of SCOTT's data from machine nerv01 to nerv11.

Test replication of SCOTT's data from machine nerv02 to nerv11.
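A minimal smoke test (the row values are hypothetical; DEPT ships with the SCOTT sample schema):
[oracle@nerv01 ~]$ sqlplus SCOTT/TIGER@PROD
SQL> INSERT INTO DEPT VALUES (50, 'MAA', 'SAO PAULO');
SQL> COMMIT;
[oracle@nerv11 ~]$ export ORACLE_SID=BI
[oracle@nerv11 ~]$ sqlplus SCOTT/TIGER
SQL> SELECT * FROM DEPT;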

166
Lab 12 – Bidirectional GoldenGate
Hands On !

167
Lab 12.1: Bidirectional GoldenGate
On machine nerv11, verify that the MANAGER and the REPLICAT are running.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ ./ggsci
GGSCI> info all

On machine nerv11, add the EXTRACT process.
GGSCI> add extract ext2, tranlog, THREADS 1, begin now
GGSCI> add exttrail /u01/shared_ogg/rac01/dirdat/lt, extract ext2

On machine nerv11, edit the EXTRACT process parameter file.
GGSCI> edit params ext2
extract ext2
userid OGG@BI, password Nerv2015
rmthost nerv01-vip, mgrport 7809
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2015
rmttrail /u01/shared_ogg/rac01/dirdat/lt
ddl include mapped objname SCOTT.*;
table SCOTT.*;

168
Lab 12.2: Bidirectional GoldenGate
On machine nerv01, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv01, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2015 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv01, add the REPLICAT process.
GGSCI> add replicat rep2, exttrail /u01/shared_ogg/rac01/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv01, edit the REPLICAT process parameter file.
GGSCI> edit params rep2
replicat rep2
ASSUMETARGETDEFS
userid OGG@ORCL, password Nerv2015
discardfile /u01/shared_ogg/rac01/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;

169
Lab 12.3: Bidirectional GoldenGate
On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/12.1.0.2/ogg/ggserr.log

On machine nerv11, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext2
GGSCI> info all

On machine nerv01, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep2
GGSCI> info all

Test replication of SCOTT's data from machine nerv01 to nerv11.

Test replication of SCOTT's data from machine nerv02 to nerv11.

Test replication of SCOTT's data from machine nerv11 to nerv01.

170
Lab 13 – GoldenGate High Availability
Hands On !

171
Lab 13.1: GoldenGate HA
On machine nerv01, create a VIP and a Resource for GoldenGate.
# /u01/app/12.1.0.2/grid/bin/appvipcfg create -network=1 -ip=192.168.0.141 -vipname=rac01-ogg-vip -user=root

# /u01/app/12.1.0.2/grid/bin/crsctl start resource rac01-ogg-vip -n nerv01

# vi /u01/shared_ogg/rac01/ogg_action.sh

# chmod +x /u01/shared_ogg/rac01/ogg_action.sh

# chown oracle:oinstall /u01/shared_ogg/rac01/ogg_action.sh

# /u01/app/12.1.0.2/grid/bin/crsctl add resource ogg -type cluster_resource -attr
"ACTION_SCRIPT=/u01/shared_ogg/rac01/ogg_action.sh, CHECK_INTERVAL=30,
START_DEPENDENCIES='hard(rac01-ogg-vip,ora.orcl.db) pullup(rac01-ogg-vip)',
STOP_DEPENDENCIES='hard(rac01-ogg-vip)'"

# /u01/app/12.1.0.2/grid/bin/crsctl setperm resource rac01-ogg-vip -u user:oracle:r-x

# /u01/app/12.1.0.2/grid/bin/crsctl setperm resource ogg -o oracle
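The ogg_action.sh script itself is not shown on the slide; a minimal sketch of the usual start/stop/check pattern (only the paths come from this lab, everything else is an assumption):
#!/bin/bash
# Minimal Clusterware action script sketch for GoldenGate (assumed layout).
OGG_HOME=/u01/shared_ogg/rac01
case "$1" in
start)
  cd $OGG_HOME && ./ggsci <<EOF
start mgr
EOF
  exit 0 ;;
stop|clean)
  cd $OGG_HOME && ./ggsci <<EOF
stop er *
stop mgr !
EOF
  exit 0 ;;
check)
  # Treat the resource as ONLINE if the manager process is running (heuristic).
  pgrep -f "$OGG_HOME.*mgr" > /dev/null && exit 0
  exit 1 ;;
esac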

172
Lab 13.2: GoldenGate HA
On machine nerv01, verify and start the GoldenGate Resource.
$ /u01/shared_ogg/rac01/ogg_action.sh stop
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl start res ogg
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl stop res ogg
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl start res ogg
$ $GRID_HOME/bin/crsctl status res ogg

Reboot machine nerv01, and verify that GoldenGate is started on machine nerv02.

173
Lab 14 – RAC Extended
Hands On !

174
Lab 14.0: RAC Extended
On machine nerv01, disable Data Guard replication.
DGMGRL> DISABLE FAST_START FAILOVER;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
DGMGRL> REMOVE CONFIGURATION;
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;

On machine nerv01, remove the SERVICEs used for Data Guard replication.
$ /u01/app/12.1.0.2/grid/bin/srvctl stop service -d ORCL -s OLTP
$ /u01/app/12.1.0.2/grid/bin/srvctl stop service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl disable service -d ORCL -s OLTP
$ /u01/app/12.1.0.2/grid/bin/srvctl disable service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl remove service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl remove service -d ORCL -s OLTP

On machine nerv01, disable GoldenGate replication.
# /u01/app/12.1.0.2/grid/bin/crsctl stop resource ogg
# /u01/app/12.1.0.2/grid/bin/crsctl delete resource ogg
# /u01/app/12.1.0.2/grid/bin/crsctl stop resource rac01-ogg-vip
# /u01/app/12.1.0.2/grid/bin/appvipcfg delete -vipname=rac01-ogg-vip

175
Lab 14.1: RAC Extended
On machine nerv11, run Labs 1 and 2 again.

On machine nerv10, erase the contents of the iSCSI disks.
# dd if=/dev/zero of=/dev/sda5 bs=512 count=10000
...

On machine nerv09, change the iSCSI Server to allow machine nerv11 access to the disks.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
backing-store /dev/sda5
initiator-address 192.168.0.101
initiator-address 192.168.0.102
initiator-address 192.168.0.121
</target>
...
# service tgtd restart

Also on machine nerv10, change the iSCSI Server to allow machines nerv01 and nerv02 access to the disks.

176
Lab 14.2: RAC Extended
On machines nerv01, nerv02, and nerv11, discover the disks exported by the Storage.
# chkconfig iscsid on
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202 -l

On machines nerv01, nerv02, and nerv11, add the new disks to the /etc/iscsi/initiatorname.iscsi file.
...
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-07

177
Lab 14.3: RAC Extended
On machines nerv01, nerv02, and nerv11, verify that the disks were configured locally.
# fdisk -l

On machine nerv01, partition the new disks.
# fdisk /dev/sdi (and sdj, sdk, sdl, sdm, sdn, sdo)
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...

178
Lab 14.4: RAC Extended
On machines nerv02 and nerv11, run detection of the new disks.
# partprobe /dev/sdi
# partprobe /dev/sdj
# partprobe /dev/sdk
# partprobe /dev/sdl
# partprobe /dev/sdm
# partprobe /dev/sdn
# partprobe /dev/sdo

On machine nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
asmadmin <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status

179
Lab 14.5: RAC Extended
On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK08 /dev/sdi1
# /etc/init.d/oracleasm createdisk DISK09 /dev/sdj1
# /etc/init.d/oracleasm createdisk DISK10 /dev/sdk1
# /etc/init.d/oracleasm createdisk DISK11 /dev/sdl1
# /etc/init.d/oracleasm createdisk DISK12 /dev/sdm1
# /etc/init.d/oracleasm createdisk DISK13 /dev/sdn1
# /etc/init.d/oracleasm createdisk DISK14 /dev/sdo1

On machines nerv02 and nerv11, run detection of the created disks.
# /etc/init.d/oracleasm scandisks

On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK08
...

On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
...

180
Lab 14.6: RAC Extended
On machines nerv01, nerv02, and nerv11, remove the /home/oracle/.ssh directory.
$ rm -rf .ssh

On machine nerv01, reconfigure passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ ssh oracle@nerv11 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

181
Lab 14.7: RAC Extended
On machine nerv02, reconfigure passwordless SSH.
[oracle@nerv02 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

On machine nerv11, reconfigure passwordless SSH.
[oracle@nerv11 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

182
Lab 14.8: RAC Extended
On machine nerv01, run the Grid installation onto machine nerv11.
$ cd $GRID_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv11}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={nerv11-vip}"

On machine nerv11, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/12.1.0.2/grid/root.sh

On machine nerv01, run the Oracle installation onto machine nerv11.
$ cd $ORACLE_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv11}"

On machine nerv11, as the root user, run the script below.
# /u01/app/oracle/product/12.1.0.2/db_1/root.sh

On machine nerv01, add the instance.
$ $GRID_HOME/bin/srvctl add instance -d ORCL -i ORCL3 -n nerv11

183
Lab 14.9: RAC Extended
On machine nerv01, complete the node addition.
SQL> ALTER SYSTEM SET INSTANCE_NUMBER=3 SID='ORCL3' SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> CREATE UNDO TABLESPACE UNDOTBS3;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS3 SID='ORCL3' SCOPE=SPFILE;

$ $GRID_HOME/bin/srvctl start instance -d ORCL -i ORCL3
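The third instance can then be confirmed:
$ $GRID_HOME/bin/srvctl status database -d ORCL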

184
Lab 14.10: RAC Extended
On machine nerv01, prepare for creating the new FAILGROUPs.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 11;
SQL> ALTER DISKGROUP CONFIG REBALANCE POWER 11;
SQL> ALTER DISKGROUP DATA REBALANCE POWER 11;
SQL> ALTER DISKGROUP FRA REBALANCE POWER 11;

185
Lab 14.11: RAC Extended
On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK08';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK09';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK10';
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK01;
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK02;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK01';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK02';
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK03;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK03';
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

186
Lab 14.12: RAC Extended
On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK11';
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK12';
SQL> ALTER DISKGROUP DATA DROP DISK DISK04;
SQL> ALTER DISKGROUP DATA DROP DISK DISK05;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK04';
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK05';
SQL> ALTER DISKGROUP DATA DROP DISK DISK11;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK11';

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

187
Lab 14.13: RAC Extended
On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK13';
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK14';
SQL> ALTER DISKGROUP FRA DROP DISK DISK06;
SQL> ALTER DISKGROUP FRA DROP DISK DISK07;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK06';
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK07';
SQL> ALTER DISKGROUP FRA DROP DISK DISK13;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK13';

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

188
Lab 14.14: RAC Extended
On machine nerv01, select the preferred FAILGROUPs for reads.
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM1';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM2';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPB' SCOPE=BOTH SID='+ASM3';
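Which disks each instance now prefers for reads can be verified per instance:
SQL> SELECT PATH, FAILGROUP, PREFERRED_READ FROM V$ASM_DISK WHERE GROUP_NUMBER <> 0;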

189
Lab 15 – RAC Extended Quorum
Hands On !

190
Lab 15.1: RAC Extended Quorum
On machine nerv15, create 1 directory.

On machine nerv15, add to the /etc/exports file:
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machine nerv15, start the NFS Server:
# service nfs start
# chkconfig nfs on

191
Lab 15.2: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, add the line below to the /etc/fstab file.
nerv15:/shared_config /u01/shared_config15 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0

On machines nerv01, nerv02, and nerv11, run:
# mkdir /u01/shared_config15
# mount /u01/shared_config15

On machine nerv01, run:
# mkdir /u01/shared_config15/rac01
# chown -R oracle:oinstall /u01/shared_config15/rac01

192
Lab 15.3: RAC Extended Quorum
On machines nerv09 and nerv10, create 1 partition of 1 GB, without formatting it.

On machines nerv09 and nerv10, add the disk to the iSCSI server.


# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-08>
backing-store /dev/sda33
initiator-address 192.168.0.101
initiator-address 192.168.0.102
initiator-address 192.168.0.121
...

# service tgtd restart

193
Lab 15.4: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, discover the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202 -l

On machines nerv01, nerv02, and nerv11, add the new disk to the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-08
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-08

194
Lab 15.5: RAC Extended Quorum
On machines nerv01, nerv02, and nerv11, verify that the disks were configured locally.
# fdisk -l

On machine nerv01, partition the new disks.
# fdisk /dev/sdp
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>

# fdisk /dev/sdq
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>

195
Lab 15.6: RAC Extended Quorum
On machines nerv02 and nerv11, run detection of the new disks.
# partprobe /dev/sdp
# partprobe /dev/sdq

On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK15 /dev/sdp1
# /etc/init.d/oracleasm createdisk DISK16 /dev/sdq1

On machines nerv02 and nerv11, run detection of the created disks.
# /etc/init.d/oracleasm scandisks

On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK15
# /etc/init.d/oracleasm querydisk -v -p DISK16

On machines nerv01, nerv02, and nerv11, verify that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
...

196
Lab 15.8: RAC Extended Quorum
On machine nerv01, create the Disk Group (with FAILGROUPs) for the Voting Disk.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> CREATE DISKGROUP VD NORMAL REDUNDANCY
FAILGROUP FG1 DISK 'ORCL:DISK15'
FAILGROUP FG2 DISK 'ORCL:DISK16'
ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';

On machine nerv01, create a file for the Voting Disk.
# dd if=/dev/zero of=/u01/shared_config15/rac01/asm01 bs=10M count=58

On machine nerv01, change the permissions of the disk file.
# chown -R oracle:oinstall /u01/shared_config15/rac01/

On machine nerv01, change the disk discovery path (asm_diskstring).
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET asm_diskstring='ORCL:*', '/u01/shared_config15/rac01/*' SID='*';

197
Lab 15.9: RAC Extended Quorum
On machine nerv01, add the QUORUM FAILGROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER DISKGROUP VD ADD QUORUM FAILGROUP FG3 DISK
'/u01/shared_config15/rac01/asm01';

On machine nerv01, enable the new Disk Group for the other machines.
$GRID_HOME/bin/srvctl start diskgroup -g VD -n nerv02
$GRID_HOME/bin/srvctl enable diskgroup -g VD -n nerv02
$GRID_HOME/bin/srvctl start diskgroup -g VD -n nerv11
$GRID_HOME/bin/srvctl enable diskgroup -g VD -n nerv11

198
Lab 15.10: RAC Extended Quorum
On machine nerv01, move the OCR to the new DISKGROUP.
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +VD
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +CONFIG
# /u01/app/12.1.0.2/grid/bin/ocrcheck

On machine nerv01, move the Voting Disk to the new DISKGROUP.
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +VD
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk

199
Lab 15.11: RAC Extended Quorum
On machine nerv01, configure the allowed DOWNTIME window (disk_repair_time).
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'disk_repair_time' = '30m';

Shut down the Production Storage, and test that the Production and DR sites keep working.

200
