
Oracle Real Application Clusters

Oracle RAC Overview

Oracle Real Application Clusters (Oracle RAC) is a shared-cache clustered database
architecture that uses Oracle Grid Infrastructure to enable the sharing of server and
storage resources. Failover to surviving nodes is automatic and near-instantaneous, which
gives Oracle RAC an extremely high degree of scalability, availability, and performance.

Originally focused on providing improved database services, Oracle RAC has evolved
over the years and is now based on a comprehensive high availability (HA) stack that can
be used as the foundation of a database cloud system as well as a shared infrastructure
that can ensure high availability, scalability, flexibility and agility for any application in your
data center.

1) ARCHITECTURE

1.1) WHAT IS IT

Oracle RAC allows multiple computers to run Oracle RDBMS software simultaneously
while accessing a single database, thus providing a clustered database.

In a non-RAC Oracle database, a single instance accesses a single database. The
database consists of a collection of data files, control files, and redo logs located on disk.
The instance comprises the collection of Oracle-related memory and operating system
processes that run on a computer system.

In an Oracle RAC environment, two or more computers (each with an instance)
concurrently access a single database. This allows an application or user to connect to
either computer and have access to a single coordinated set of data.

1.2) ARCHITECTURAL OVERVIEW

THE ORACLE 10G RAC ARCHITECTURE

From the point of view of the installation, the main architecture of the RAC environment
includes the following:

* Nodes or Servers

* Private Interconnect

* Vendor Supplied Cluster Manager or Cluster Software (Optional)

* Oracle provided Cluster Ready Services

* Shared Storage Subsystem



* Raw Partitions or Cluster File System or Network Attached Storage (NAS) or Automatic
Storage Management (ASM)

* Public Network Connection

* Oracle Database software with RAC option

* Filesystem

1.2.1) Nodes or Hosts

The nodes or servers are the main platforms on which the Oracle RAC database is
installed. Cluster nodes can range from a high-end, powerful Sun Fire 15K down to a low-end
Linux server, or from a mainframe-grade IBM zSeries server to the emerging blade-server
technologies such as IBM BladeCenter or Egenera. First, the appropriate operating system
needs to be installed on the nodes. It is also important to choose the appropriate number of
nodes when setting up the node operating environment.

1.2.2) Private Interconnect

The private interconnect is the physical construct that allows inter-node communication. It
can be a simple crossover cable using UDP, or a proprietary interconnect with a
specialized proprietary communications protocol. When setting up more than two nodes, a
switch is usually needed. The interconnect provides the maximum performance for RAC, which
relies on inter-process communication between the instances for its Cache Fusion implementation.

1.2.3) Clusterware

Oracle Clusterware is software that enables servers to operate together as if they were one
server. Each server looks like any standalone server.

Creating a cluster involves installing the cluster software on all nodes in the proposed
cluster and checking the configuration. The necessary tests need to be performed
to verify the validity of the cluster. At the same time, the software that controls
the private interconnect is also installed and configured. With the availability of the Oracle-provided
Cluster Ready Services (CRS), one can achieve a uniform and standard cluster
platform. CRS is more than just cluster software; it extends the high availability
services in the cluster.
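As a sketch of such a validity check (the node names node1 and node2 here are only placeholders), the Cluster Verification Utility can be run after the clusterware installation:

$ cluvfy stage -post crsinst -n node1,node2 -verbose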

1.2.4) Shared Storage

The storage system provides an external common disk system accessible by all nodes of
the cluster. The connection from the nodes to the disk subsystem is usually through a
Fibre Channel switch or a SCSI connection. Once the storage volumes are presented to the hosts in
the cluster, usually with the help of a logical volume manager, one can create volumes
of suitable size for use in the RAC database. With the introduction of the ASM methodology,
the shared storage structures can be managed very easily. Once the disk groups are
created from the available disk devices, the ASM instance on each node in the
cluster provides the shared storage resources used to create the database files. The
preparation of storage structures has been covered extensively in Chapter 5, Preparing
Shared Storage.
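As a minimal sketch of this step (the disk group name and device paths below are hypothetical), a disk group is created from the ASM instance and the database then references it by name:

SQL> create diskgroup DATA normal redundancy disk '/dev/raw/raw1', '/dev/raw/raw2';

Database files can then be placed in the disk group by referring to it as '+DATA'.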

1.2.5) Public Network

The clustered servers or hosts need to have public network connectivity so that client
machines in the network can access the resources on the RAC system.

1.2.6) Virtual IP Address for CRS

The Oracle 10g release supports the concept of a Service, which can be assigned a Virtual IP
(VIP) address that floats among the specified nodes. A virtual IP address (VIP or VIPA) is an
IP address that is not tied to a specific computer or network interface card (NIC). Incoming
packets are sent to the VIP address and redirected to physical network interfaces. VIPs are
mostly used for connection redundancy: a VIP address may still be available if a computer or
NIC fails, because an alternative computer or NIC answers connections to it. By creating
virtual IP addresses and virtual host names, applications gain a degree of transparency in
their connection to the RAC database service.

1.2.7) File systems

A regular single-instance database has three basic types of files: database software and
dump files; datafiles, spfile, control files and log files, often referred to as "database files";
and, if RMAN is used, recovery files. A RAC database has an additional type of
file referred to as "CRS files": the Oracle Cluster Registry (OCR) and the
voting disk. An Oracle RAC database can have up to 100 instances.

Depending on your platform, you can use the following file storage options for Oracle RAC:

■ ASM, which Oracle recommends

■ Oracle Cluster File System (OCFS), which is available for Linux and Windows
platforms, or a third-party cluster file system that is certified for Oracle RAC

■ A network file system

■ Raw devices

Oracle RAC databases differ architecturally from single-instance Oracle databases in that
each Oracle RAC database instance also has the following (see the query sketch after this list):

■ At least one additional thread of redo for each instance

■ An instance-specific undo tablespace
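A quick way to see these per-instance structures (a sketch, assuming a running two-instance RAC database) is to query the data dictionary:

SQL> select thread#, instance, status from v$thread;
SQL> select inst_id, value from gv$parameter where name = 'undo_tablespace';

Each instance reports its own redo thread and its own undo tablespace.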

1.3) ARCHITECTURAL COMPONENTS

1.3.1) Private interconnect components

Oracle recommends that you configure a redundant interconnect to prevent the
interconnect from being a single point of failure. Oracle also recommends that you use
User Datagram Protocol (UDP) over Gigabit Ethernet for your cluster interconnect.
Crossover cables are not supported for use with Oracle Clusterware or Oracle RAC
databases. All nodes in an Oracle RAC environment must connect to a Local Area
Network (LAN) to enable users and applications to access the database. Users can
access an Oracle RAC database using a client-server configuration or through one or
more middle tiers, with or without connection pooling. Users can be DBAs, developers,
application users, or power users such as data miners who create their own searches.
Most public networks typically use TCP/IP, but you can use any supported
hardware and software combination. Oracle RAC database instances can be accessed
through a database’s defined default IP address and through VIP addresses.

In addition to the node’s host name and IP address, you must also assign a virtual
host name and an IP address to each node. The virtual host name or VIP should be used to
connect to the database instance. For example, you might enter the virtual host name
CRM in the address list of the tnsnames.ora file, as in the sketch below.
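A sketch of such an entry follows; the VIP host names node1-vip and node2-vip are only placeholders for illustration:

CRM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CRM)
    )
  )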

1.3.2) ORACLE CLUSTERWARE COMPONENTS

Oracle Clusterware requires two components: a voting disk, which records node membership
information, and the Oracle Cluster Registry (OCR), which records cluster configuration
information. Oracle RAC uses the voting disk to determine which instances are members of
the cluster. The OCR maintains cluster configuration information as well as configuration
information about any cluster database within the cluster, and it also manages information
about processes that Oracle Clusterware controls. Both the voting disk and the OCR must
reside on shared storage. Oracle Clusterware can multiplex the OCR and the voting disks,
and Oracle recommends that you use this feature to ensure cluster high availability.
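Both components can be checked from the command line; a brief sketch (output omitted):

$ crsctl query css votedisk
$ ocrcheck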

The following list describes the functions of some of the major Oracle Clusterware
components:

■ Cluster Synchronization Services (CSS)—Manages the cluster configuration
by controlling which nodes are members of the cluster and by notifying members when a
node joins or leaves the cluster. If you are using third-party clusterware, then the css
process interfaces with your clusterware to manage node membership information.

■ Cluster Ready Services (CRS)—The primary program for managing high
availability operations within a cluster. Anything that the crs process manages is known as
a cluster resource, which could be a database, an instance, a service, a listener, a virtual
IP (VIP) address, an application process, and so on. The crs process manages cluster
resources based on the resource’s configuration information that is stored in the OCR.
This includes start, stop, monitor and failover operations. The crs process generates
events when a resource status changes. When you have installed Oracle RAC, crs
monitors the Oracle instance, listener, and so on, and automatically restarts these
components when a failure occurs. By default, the crs process makes five attempts to
restart a resource and then makes no further restart attempts if the resource does not
restart.

■ Event Management (EVM)—A background process that publishes events that crs
creates.

■ Oracle Notification Service (ONS)—A publish-and-subscribe service for communicating
Fast Application Notification (FAN) events. FAN is a notification mechanism that Oracle RAC
uses to notify other processes about configuration and service-level information, including
service status changes such as UP or DOWN events.

■ RACG—Extends clusterware to support Oracle-specific requirements and complex
resources. Runs server callout scripts when FAN events occur.

■ Process Monitor Daemon (OPROCD)—This background process is locked in memory
to monitor the cluster and provide I/O fencing. OPROCD performs its check, stops running,
and if the wake-up is beyond the expected time, OPROCD resets the processor and
reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node.
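To see what Oracle Clusterware is managing on a 10g cluster, a quick status check (output omitted) might look like the sketch below. crsctl check crs confirms that the CSS, CRS, and EVM daemons are healthy, while crs_stat -t lists each registered resource (instances, listeners, VIPs, ONS, and so on) with its target and current state:

$ crsctl check crs
$ crs_stat -t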

What is a standby database?

Standby - Oracle standby database Solution within your reach

A standby database is a shadow copy of an operational or primary database on a remote or
local* server. It protects against data loss and loss of availability of the primary database. It is
synchronised with the primary database on a continual basis and can be used for disaster
recovery, backups, replication, analysis, a shadow environment, high availability and reporting, to
name a few applications.

A standby database is far superior to a normal backup because it is instantly available in the event
of a disaster or failure. Restoring a backup takes time, and during the restore the system is not
available. The restore may also cause too much impact on other systems, and the hardware may
not be available for the restore, causing further delay.

With a standby database there is nothing to restore in the event of a failure, as the standby
database is always available, up to date and ready to take over. It is possible to switch
applications over to the standby database in a matter of minutes to allow business continuity.

Overview of a standby database:

Exact copy of the main database in a remote or local* location
Is held up to date by applying changes from the main database
In the event of a disaster, the standby database becomes active
Users (or applications) are transferred to the standby database to continue operation

Oracle Data Guard Concept

Oracle Data Guard is one of the most effective and comprehensive data availability, data
protection and disaster recovery solutions available today for enterprise data.
Oracle Data Guard is the management, monitoring, and automation software infrastructure that
creates, maintains, and monitors one or more standby databases to protect enterprise data from
failures, disasters, errors, and corruptions.
Data Guard maintains these standby databases as transactionally consistent copies of the
production database. These standby databases can be located at remote disaster recovery sites
thousands of miles away from the production data center, or they may be located in the same
city, same campus, or even in the same building. If the production database becomes unavailable
because of a planned or an unplanned outage, Data Guard can switch any standby database to
the production role, thus minimizing the downtime associated with the outage, and preventing
any data loss.

RAC Support:

It is now possible to use the Data Guard Broker, and the Broker’s Command Line Interface
(DGMGRL), as well as Enterprise Manager, to create and manage Data Guard configurations that
contain RAC primary and RAC standby databases. In Oracle9i, such administration is possible only
through SQL*Plus. In Data Guard 10g, the Data Guard Broker interfaces with Oracle Clusterware such
that it has control over critical operations during specific Data Guard state transitions, such as
switchovers, failovers, protection mode changes and state changes.

Simplified Browser-based Interface


Administration of a Data Guard configuration can be done through the new streamlined
browser-based HTML interface of Enterprise Manager that enables complete standby database
lifecycle management. The focus of such streamlined administration is on:

Ease of use.
Management based on best practices.
Pre-built integration with other HA features.

Data Guard Benefits


Disaster recovery and high availability
Data Guard provides an efficient and comprehensive disaster recovery and high availability
solution. Automatic failover and easy-to-manage switchover capabilities allow quick role
reversals between primary and standby databases, minimizing the downtime of the primary
database for planned and unplanned outages.

Complete data protection


A standby database also provides an effective safeguard against data corruptions and user
errors. Storage level physical corruptions on the primary database do not propagate to the
standby database. Similarly, logical corruptions or user errors that cause the primary database to
be permanently damaged can be resolved. Finally, the redo data is validated at the time it is
received at the standby database and further when applied to the standby database.

Efficient utilization of system resources


A physical standby database can be used for backups and read-only reporting, thereby reducing
the primary database workload and saving valuable CPU and I/O cycles. In Oracle Database 10g
Release 2, a physical standby database can also be easily converted back and forth between
being a physical standby database and an open read/write database. A logical standby database
allows its tables to be simultaneously available for read-only access while they are updated from
the primary database. A logical standby database also allows users to perform data manipulation
operations on tables that are not updated from the primary database. Finally, additional indexes
and materialized views can be created in the logical standby database for better reporting
performance.

Flexibility in data protection to balance availability against performance requirements

Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance
modes to help enterprises balance data availability against system performance requirements.

Protection from communication failures

If network connectivity is lost between the primary and one or more standby databases, redo data
cannot be sent from the primary to those standby databases. Once connectivity is re-established,
the missing redo data is automatically detected by Data Guard and the necessary archive logs are
automatically transmitted to the standby databases. The standby databases are resynchronized
with the primary database, with no manual intervention by the administrator.

Centralized and simple management

Data Guard Broker automates the management and monitoring tasks across the multiple databases
in a Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the
Broker’s own specialized command-line interface (DGMGRL) to take advantage of this integrated
management framework, as in the sketch below.
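As a sketch of that command-line interface (assuming a Broker configuration has already been created, and borrowing the orcl/orcls names used in the walkthrough later in this paper), an administrator might run:

$ dgmgrl sys/password@orcl
DGMGRL> show configuration;
DGMGRL> show database verbose 'orcls';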

Integrated with the Oracle database

Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no
extra cost.

How to Create a RAC Standby Database


This paper describes how to create a RAC physical standby from a RAC primary.
The Setup
For this paper, I have a two-node Oracle RAC database with the following details:
Hosts: prim01 & prim02
OS: Oracle Linux 7.2
Grid Infrastructure: Oracle GI 12.1.0.2
Oracle RDBMS: Oracle EE 12.1.0.2
Instances: orcl1 & orcl2 (db name is ‘orcl’)

The RAC primary database is already up and running.


The standby system will be another two-node RAC database. Currently, the nodes are up and
running. Grid Infrastructure has been installed. The RDBMS software has been installed, but
there is no standby database running. That's the point of this paper after all. The following are
details for the standby cluster:
Hosts: stdby01 & stdby02
OS: Oracle Linux 7.2
Grid Infrastructure: Oracle GI 12.1.0.2
Oracle RDBMS: Oracle EE 12.1.0.2
Instances: orcls1 & orcls2 (db name will be ‘orcls’)

The Basics

Just like creating a single-instance standby database, the basic steps are as follows:
 Prepare the primary to support a physical standby
 Create the standby database
 Start the standby and apply recovery

If you’ve read the previous document, these steps are the same. There is one additional step for
this document.

 Convert the standby to Oracle RAC


The way I create a RAC standby is to first create it as a single-instance database. I then convert
the single-instance standby database to Oracle RAC as a final step. You can use Enterprise
Manager or the DBCA to convert the database to RAC, but I perform the process manually.

Prepare the Primary

The first major step in creating a physical standby is to prepare the primary database. Oracle
transmits changes from the primary to the physical standby database by transporting its redo
stream from one to the other. Preparing the primary means we need to ensure redo records are
always in the redo stream, that the primary knows where to ship the redo, and that the redo
stream is archived. In other words, the steps to prepare the primary are:
 Enable Forced Logging
 Set Initialization Parameters
 Enable Archive Logging

So why do we need to enable forced logging? Many, many versions ago, Oracle introduced the
ability to define a table as NOLOGGING. Performance tuning specialists loved this feature
because bulk data loads to a table could avoid the overhead of writing redo records to the Log
Buffer and subsequently to the Online Redo Logs (ORLs). Bulk loads were faster. As with just
about anything in database systems, there is a tradeoff. That increased speed came with a
penalty: one could not recover the results of a bulk data load. The DBA needed to either back up
the database after the bulk load or have the ability to re-load the data. For many, the tradeoff
was worth it. Unfortunately for physical standby databases, there is no tradeoff. Remember that
the physical standby database must be an exact copy of the primary. We cannot have an exact
copy if some of the transactions are never in the redo stream. We could ensure that no tables are
defined as NOLOGGING, but what if a developer slips one past the DBA? Any DML on that
NOLOGGING table would break the physical standby. We need a mechanism to force all DML to
be logged to the redo stream no matter what. This entire paragraph took much longer to write
than it takes to enable force logging. The DBA just needs to issue the following:

SQL ORCL> alter database force logging;
Database altered.

SQL ORCL> select force_logging from v$database;

Next, we need to define some initialization parameters. Some of these may already be defined
in your system, but we’ll review them here just in case. This is the list of parameters I need to
ensure are defined properly in the primary, and what they are used for:

 DB_NAME – both the primary and the physical standby database will have the same database
name. After all, the standby is an exact copy of the primary and its name needs to be the same.
 DB_UNIQUE_NAME – While both the primary and the standby have the same database name,
each has its own unique name. In the primary, DB_UNIQUE_NAME equals DB_NAME. In the standby,
DB_UNIQUE_NAME does not equal DB_NAME. The DB_UNIQUE_NAME will match the
ORACLE_SID for that environment.
 LOG_ARCHIVE_FORMAT – Because we have to turn on archive logging, we need to specify the
file format of the archived redo logs.
 LOG_ARCHIVE_DEST_1 – The location on the primary where the ORLs are archived. This is called
the archive log destination.
 LOG_ARCHIVE_DEST_2 – The location of the standby database.
 REMOTE_LOGIN_PASSWORDFILE – We need a password file because the primary will sign on to
the standby remotely as SYSDBA.

I used a text editor to modify the PFILE’s contents as follows

*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/orcl/control01.ctl','/u01/app/oracle/oradata/orcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='orcl'
*.diagnostic_dest='/u01/app/oracle'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.log_archive_dest_1='LOCATION="/u01/app/oracle/oradata/arch"'
*.log_archive_dest_2='SERVICE="orcls", ASYNC NOAFFIRM'
*.log_archive_format=%t_%s_%r.arc
*.open_cursors=300
*.pga_aggregate_target=500m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2000m
*.undo_tablespace='UNDOTBS1'

Most of the parameters above were defined by the DBCA when the database was created. The
archive-related parameters and the password file parameter are the important ones for supporting
the standby configuration. LOG_ARCHIVE_DEST_1 points to a location on disk for the archive log
destination. LOG_ARCHIVE_DEST_2 points to an alias we will add in our TNSNAMES.ORA file soon.
We also specified ASYNC NOAFFIRM because in this example we will configure for maximum
performance, not zero data loss.
We’ve nearly completed the first two subtasks in this section. Our initialization parameter
changes are not complete until we create the SPFILE from this text file and start up the instance.
Since this requires downtime, we will also use that same downtime window to configure archive
logging. This downtime window should be brief. We will shut down the instance, create the
SPFILE with our parameter changes, start the instance in MOUNT mode, and enable archive logging:

SQL ORCL> shutdown immediate
SQL ORCL> create spfile from pfile='/home/oracle/pfile.txt';
SQL ORCL> startup mount
ORACLE instance started.
SQL ORCL> alter database archivelog;
SQL ORCL> alter database open;
Database altered.

One thing we haven’t done yet is to configure TNSNAMES.ORA. Remember that
LOG_ARCHIVE_DEST_2 says to use the service ORCLS, so we need a TNS alias for this service. In
our $ORACLE_HOME/network/admin/tnsnames.ora configuration file, the following entry is
placed:

ORCLS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stdby)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcls)
    )
  )

Before we leave this section, it is important that the database administrator verifies the
configuration. First, we verify archive logging is working correctly.

SQL ORCL> archive log list

SQL ORCL> alter system switch logfile;


SQL ORCL> !ls -l /u01/app/oracle/oradata/arch
-rw-r-----. 1 oracle oinstall 3663360 Jul 2 08:20 1_227_909527034.arc
-rw-r-----. 1 oracle oinstall 12386816 Jul 2 08:25 1_228_909527034.arc
-rw-r-----. 1 oracle oinstall 5632 Jul 2 08:25 1_229_909527034.arc

The log files are in the archive destination as expected. The current log was sequence #229.
When the log switch was forced, we can see that the archiver created the archived redo log for that
sequence number. We can also examine this activity in the database’s alert log.
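As an additional cross-check, the same information can be confirmed from the data dictionary; for example, to verify that sequence 229 was archived:

SQL ORCL> select thread#, sequence#, name from v$archived_log where sequence# = 229;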

Create the Standby Database


This is the second major task to complete. In this section, we will create a backup of the primary
and use it to create the standby database. We need to create a special standby control file. A
password file, a parameter file, and a similar TNS alias will complete the setup. The sub tasks are
outlined below.
 Create a backup of the primary.
 Create a standby controlfile.
 Copy the password file.
 Create a parameter file.
 Create a TNSNAMES.ORA file

Remember that a physical standby database is an exact copy of the primary database, so we
need to duplicate the primary. There are many methods to do so. I’m simply going to bring down
the primary database and copy its datafiles to a backup location. This works well for me
because this is a small database and it’s only a testbed, so I can handle the downtime. For
larger databases, or ones with little or no downtime window, use the RMAN DUPLICATE
functionality, sketched below.
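For reference, here is a minimal sketch of that approach; it assumes the standby instance has already been started NOMOUNT with a basic parameter file, the password file is in place, and the ORCL and ORCLS TNS aliases resolve from the standby host:

$ rman target sys/password@orcl auxiliary sys/password@orcls
RMAN> duplicate target database for standby from active database nofilenamecheck;

That duplicates the primary over the network without taking it down; the manual copy I use below is simply easier to follow on a small testbed.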

All of my database’s files for this testbed database reside in one directory so my life is easy here.
I will determine the file location, shutdown the database, copy the files to the backup location,
and then start the primary instance again.

SQL ORCL> connect / as sysdba



Connected.
SQL ORCL> select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------
/u01/app/oracle/oradata/orcl/system01.dbf
/u01/app/oracle/oradata/orcl/sysaux01.dbf
/u01/app/oracle/oradata/orcl/users01.dbf
/u01/app/oracle/oradata/orcl/undotbs01.dbf

Then I’ll perform an orderly shutdown of the database. Because the primary is Oracle RAC, I need
to do this with the srvctl utility. After I shutdown the instances, I’ll create a directory to hold my
backup, copy the files to that directory, then bring the primary back up.

[oracle@prim01 ~]$ srvctl stop database -d orcl -o immediate


[oracle@prim01 ~]$ mkdir /u01/app/oracle/oradata/orcl/bkup
[oracle@prim01 ~]$ cp /u01/app/oracle/oradata/orcl/* /u01/app/oracle/oradata/orcl/bkup/.

cp: omitting directory ‘/u01/app/oracle/oradata/orcl/bkup’


[oracle@prim01 ~]$ srvctl start database -d orcl

On one of the nodes, I will copy the backup files to shared storage on the standby cluster.
[oracle@prim01 ~]$ scp /u01/app/oracle/oradata/orcl/bkup/*
oracle@stdby01:/u01/app/oracle/oradata/orcl/.
control01.ctl 100% 18MB 18.1MB/s 00:01
control02.ctl 100% 18MB 18.1MB/s 00:01
orapworcl 100% 7680 7.5KB/s 00:00
redo01.log 100% 50MB 16.7MB/s 00:03
redo02.log 100% 50MB 16.7MB/s 00:03
redo03.log 100% 50MB 16.7MB/s 00:03
redo04.log 100% 50MB 16.7MB/s 00:03
spfileorcl.ora 100% 4608 4.5KB/s 00:00
sysaux01.dbf 100% 1200MB 17.9MB/s 01:07
system01.dbf 100% 700MB 30.4MB/s 00:23
temp01.dbf 100% 25MB 25.0MB/s 00:01
undotbs01.dbf 100% 415MB 29.6MB/s 00:14
undotbs02.dbf 100% 200MB 25.0MB/s 00:08

users01.dbf 100% 5128KB 5.0MB/s 00:00

Next, I need to create a standby controlfile and copy it to the standby’s shared storage.
SQL> alter database create standby controlfile as '/home/oracle/control.stdby';
Database altered.

SQL> !scp /home/oracle/control.stdby oracle@stdby01:/u01/app/oracle/oradata/orcl/.


control.stdby 100% 18MB 9.0MB/s 00:02

If you were paying attention to the file transfers above, you may have noticed that when I copied
the backup to the standby’s shared storage, I copied the password file as well.

Remember from the previous document that we need to remove the control files on the standby
and replace them with the standby control file. On one node of the standby cluster, in the shared
storage:

[oracle@stdby01 orcl]$ rm control01.ctl control02.ctl


[oracle@stdby01 orcl]$ cp control.stdby control01.ctl
[oracle@stdby01 orcl]$ mv control.stdby control02.ctl

Next I need to prepare a PFILE that I’m going to use to startup the standby database. Here is
what I have in my standby’s PFILE:

*.audit_file_dest='/u01/app/oracle/admin/orcls/adump'
*.cluster_database=true
*.compatible='12.1.0'
*.control_files='/u01/app/oracle/oradata/orcl/control01.ctl','/u01/app/oracle/oradata/orcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='orcls'
orcls2.instance_number=2
orcls1.instance_number=1
orcls.instance_number=1
*.log_archive_dest_1='LOCATION="/u01/app/oracle/oradata/arch"'
*.log_archive_dest_2='SERVICE="orcl", ASYNC NOAFFIRM'
*.log_archive_format=%t_%s_%r.arc
*.memory_target=2048M
*.remote_login_passwordfile='exclusive'
orcls2.thread=2
orcls1.thread=1
orcls.thread=1

orcls1.undo_tablespace='UNDOTBS1'
orcls2.undo_tablespace='UNDOTBS2'
orcls.undo_tablespace='UNDOTBS1'

Before I go further, I should explain a few things about the parameter file above. The
standby-specific changes are pretty simple and are identical to what we changed in the document
where I created a single-instance physical standby database. The instance-specific parameters are
largely there to support the RAC database. For example, the RAC standby will have two threads.

orcls2.thread=2
orcls1.thread=1

But on a temporary basis, I will start this as a single-instance database, so I will need a
single-instance version of this parameter, i.e. a parameter for an instance with no instance number
associated with it.

orcls.thread=1

You will notice that, similarly, for the other instance-specific parameters I have defined values for
instances 1 and 2 and for the single instance.

Next I need to precreate the audit dump directory. On both nodes, I do the following:

[oracle@stdby01 oracle]$ cd /u01/app/oracle/admin


[oracle@stdby01 admin]$ mkdir orcls
[oracle@stdby01 admin]$ cd orcls

[oracle@stdby01 orcls]$ mkdir adump

I’m close to starting up the standby database for the first time. I have a few minor tasks left.
Remember that we copied over the password file and it is on shared storage? I need to rename
that file to match the convention for this standby database.

[oracle@stdby01 ~]$ cd /u01/app/oracle/oradata/orcl


[oracle@stdby01 orcl]$ mv orapworcl orapworcls

Next I’ll set my environment variables.

[oracle@stdby01 dbs]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2


[oracle@stdby01 dbs]$ export ORACLE_SID=orcls
[oracle@stdby01 dbs]$ export PATH=$ORACLE_HOME/bin:$PATH

The last thing for me to do is to create a TNS alias. If you look back at when I created the parameter
file for the standby, I specified that LOG_ARCHIVE_DEST_2 will point back to the primary, so I need
a TNS alias for that database. Should we switch over, this standby will then be able to
transport redo to what is by then the new standby.
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

Start Standby and Recovery

We’re ready to start our standby, albeit in single-instance mode. I have my PFILE on some
directory on disk and the SPFILE on shared storage. Since neither is in $ORACLE_HOME/dbs, I will
need to explicitly point to one of them when I start the instance. After starting the instance, I’ll
start managed recovery

SQL> connect / as sysdba


Connected to an idle instance.
SQL> startup mount pfile='/home/oracle/pfile.txt';
ORACLE instance started.

Total System Global Area 2147483648 bytes

Fixed Size 2926472 bytes


Variable Size 1392511096 bytes
Database Buffers 738197504 bytes
Redo Buffers 13848576 bytes
Database mounted.

SQL> alter database recover managed standby database disconnect from session;
Database altered.

The standby is now up and running. Let’s verify that both instances of the primary are
transporting redo to the standby.

On the first instance of the primary, I will get the current log sequence number and then force a
log switch.

SQL> archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/arch
Oldest online log sequence 149
Next log sequence to archive 150
Current log sequence 150

SQL> alter system switch logfile;

System altered.
I’ll do similarly for the second instance of the primary.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled

Archive destination /u01/app/oracle/oradata/arch


Oldest online log sequence 135
Next log sequence to archive 136
Current log sequence 136
SQL> alter system switch logfile;
System altered.
If I look in the archive log destination of the standby, I can see the files are there for both
threads.

[oracle@stdby01 arch]$ ls -l
total 19980
-r--r-----. 1 oracle oinstall 3838464 Nov 10 14:53 1_149_925053972.arc
-r--r-----. 1 oracle oinstall 2225152 Nov 10 14:53 1_150_925053972.arc
-r--r-----. 1 oracle oinstall 2831360 Nov 10 14:53 2_135_925053972.arc
-r--r-----. 1 oracle oinstall 2596352 Nov 10 14:53 2_136_925053972.arc

Instance 1 was on log sequence number 150 and instance 2 was on sequence 136. We can see
both logs arrived safely at the standby destination. If we look in the standby database’s alert log,
we can see media recovery has applied the contents as well.

Thu Nov 10 15:17:14 2016


Media Recovery Log /u01/app/oracle/oradata/arch/1_150_925053972.arc
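Another quick check, sketched here, is to query the recovery processes on the standby; the MRP0 process should report the thread and sequence it is currently applying:

SQL> select process, status, thread#, sequence# from v$managed_standby;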

At this point, we now have a single-instance physical standby database up and running. Much of
it was the same as we saw in the earlier document. Next we’ll convert our standby database to
Oracle RAC.

Convert Standby to RAC

There seems to be some mysticism around how to convert an Oracle database to RAC when it’s really
quite simple. The basic steps are:
Prepare the SPFILE and password file on shared storage.
Register the database and its instances with Grid Infrastructure.
Start the database and its instances.

When it’s boiled down to just a few steps, it doesn’t look so daunting. So let’s get started. First, I’ll
shut down the single-instance standby we have running.

SQL> shutdown abort


ORACLE instance shut down.
Next, remember that in our PFILE, we had some instance-specific parameters like the following:

orcls2.instance_number=2
orcls1.instance_number=1
orcls.instance_number=1

We have a parameter value for ORCLS1, ORCLS2 and our temporary ORCLS. Remove the ORCLS
parameters from the PFILE for the following parameters:

INSTANCE_NUMBER
THREAD
UNDO_TABLESPACE

We won’t have an instance named ORCLS running any more, only instances with numbers at the
end for the RAC database. Next, I’ll create the SPFILE from this PFILE and store it on shared
storage.

SQL> create spfile='/u01/app/oracle/oradata/orcl/spfileorcls.ora' from pfile='/home/oracle/pfile.txt';
File created.

I should have both the SPFILE and the password file on shared storage now. If you remember,
when I copied the primary database over to shared storage, it included the password file which
was then renamed. So both the SPFILE and password file are ready to go.

If you remember, I had the TNS alias on the primary nodes pointing to just one node of the
standby.

ORCLS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stdby01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcls)
    )
  )
Now I need to change the TNS alias on the primary nodes to point to the SCAN listener of the
standby cluster.

ORCLS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcls)
    )
  )
At this point, the config is all set. We now need to add the standby to Grid Infrastructure. First,
we’ll add the database. I did that with this command:

[oracle@stdby01 ~]$ srvctl add database -db orcls -oraclehome


/u01/app/oracle/product/12.1.0.2 -dbtype RAC -spfile
/u01/app/oracle/oradata/orcl/spfileorcls.ora -pwfile /u01/app/oracle/oradata/orcl/orapworcls
-role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE

I’ll walk through those parameters in the ‘srvctl add database’ command.

-db            This should match the DB_UNIQUE_NAME parameter; our standby database’s name.
-oraclehome    The location of the ORACLE_HOME directory.
-dbtype        We are adding a RAC database, not a single instance.
-spfile        The location of the SPFILE on shared storage.
-pwfile        The location of the password file on shared storage.
-role          What role this database serves. In our work here, this database is a physical standby.
-startoption   How do you want Grid Infrastructure to start the database? Because it’s a standby, we want the equivalent of STARTUP MOUNT.
-stopoption    How do you want GI to stop the instances? I’m specifying the equivalent of SHUTDOWN IMMEDIATE.
Now that the database is registered with Grid Infrastructure, we need to register the instances as
follows:

[oracle@stdby01 ~]$ srvctl add instance -db orcls -instance orcls1 -node stdby01

[oracle@stdby01 ~]$ srvctl add instance -db orcls -instance orcls2 -node stdby02
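As a hedged sanity check before starting anything, we can ask Grid Infrastructure what it now knows about the database; the output should echo the SPFILE, password file, role, and start option we registered:

[oracle@stdby01 ~]$ srvctl config database -db orcls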
That’s not too difficult. All that’s left is to start the RAC version of our standby database.
[oracle@stdby01 ~]$ srvctl start database -db orcls -startoption MOUNT
[oracle@stdby01 ~]$ srvctl status database -db orcls
Instance orcls1 is running on node stdby01
Instance orcls2 is running on node stdby02

Success! Our standby is now running as an Oracle RAC database. Let’s verify.
SQL> select inst_id,instance_name,status from gv$instance;
INST_ID INSTANCE_NAME STATUS

---------- ---------------- ------------

2 orcls2 MOUNTED

1 orcls1 MOUNTED

All we need to do is to start managed recovery on one node.


SQL> alter database recover managed standby database disconnect from session;
Database altered.
If you attempt to start managed recovery on the other node when MRP is running already, you
will receive an error.
SQL> alter database recover managed standby database disconnect from session;
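In my environment this second attempt fails with ORA-01153 (an incompatible media recovery is active), which simply indicates that MRP is already running elsewhere in the cluster and can be safely ignored.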

Conclusion
Hopefully this document helps you get your RAC physical standby databases up and
running. It’s not that much more difficult than a single-instance standby. Most of the
steps are the same. At the end, we just convert a single-instance standby to a RAC
standby.
