
Configuring Oracle Data Guard for Azure

This tutorial demonstrates how to set up and implement Oracle Data Guard in an Azure Virtual Machines
environment for high availability and disaster recovery. The tutorial focuses on one-way replication for
non-RAC Oracle databases.

Oracle Data Guard supports data protection and disaster recovery for Oracle Database. It is a simple, high-
performance, drop-in solution for disaster recovery, data protection, and high availability for the entire Oracle
database.

This tutorial assumes that you already have theoretical and practical knowledge of Oracle Database high
availability and disaster recovery concepts. For information, see the Oracle website and the Oracle Data
Guard Concepts and Administration guide.

In addition, the tutorial assumes that you have already implemented the following prerequisites:

 You’ve already reviewed the High Availability and Disaster Recovery Considerations section in the Oracle
Virtual Machine images - Miscellaneous Considerations topic. Azure currently supports standalone Oracle
Database instances, but not Oracle Real Application Clusters (Oracle RAC).
 You have created two Virtual Machines (VMs) in Azure using the same platform-provided Oracle Enterprise
Edition image. Make sure the Virtual Machines are in the same cloud service and in the same Virtual
Network to ensure they can access each other over their persistent private IP addresses. Additionally, we
recommend placing the VMs in the same availability set so that Azure places them into separate fault
domains and upgrade domains. Oracle Data Guard is available only with Oracle Database Enterprise Edition.
Each machine must have at least 2 GB of memory and 5 GB of disk space. For the most up-to-date
information on the platform-provided VM sizes, see Virtual Machine Sizes for Azure. If you need additional
disk volume for your VMs, you can attach additional disks. For information, see How to Attach a Data Disk
to a Virtual Machine.
 You’ve set the Virtual Machine names as “Machine1” for the primary VM and “Machine2” for the standby
VM at the Azure classic portal.
 You’ve set the ORACLE_HOME environment variable to point to the same Oracle root installation path on
the primary and standby Virtual Machines, such as C:\OracleDatabase\product\11.2.0\dbhome_1\database.
 You log on to your Windows server as a member of the Administrators group or a member of
the ORA_DBA group.

In this tutorial, you will:

Implement the physical standby database environment

1. Create a primary database


2. Prepare the primary database for standby database creation
a. Enable forced logging
b. Create a password file
c. Configure a standby redo log
d. Enable Archiving
e. Set primary database initialization parameters

Create a physical standby database

1. Prepare an initialization parameter file for standby database


2. Configure the listener and tnsnames to support the database on primary and standby machines
a. Configure listener.ora on both servers to hold entries for both databases
b. To hold entries for both primary and standby databases, configure tnsnames.ora on the primary and
standby Virtual Machines.
c. Start the listener and check tnsping on both Virtual Machines to both services.
3. Start up the standby instance in nomount state
4. Use RMAN to clone the database and to create a standby database
5. Start the physical standby database in managed recovery mode
6. Verify the physical standby database
Important

This tutorial has been set up and tested against the following hardware and software configuration:

                   Primary Database                        Standby Database

Oracle Release     Oracle 11g Enterprise (11.2.0.4.0)      Oracle 11g Enterprise (11.2.0.4.0)

Machine Name       Machine1                                Machine2

Operating System   Windows 2008 R2                         Windows 2008 R2

Oracle SID         TEST                                    TEST_STBY

Memory             Min 2 GB                                Min 2 GB

Disk Space         Min 5 GB                                Min 5 GB

For subsequent releases of Oracle Database and Oracle Data Guard, there might be additional changes that
you need to implement. For the most up-to-date version-specific information, see the Data Guard and Oracle
Database documentation on the Oracle website.

Implement the physical standby database environment

1. Create a primary database

 Create a primary database “TEST” on the primary Virtual Machine. For information, see Creating and
Configuring an Oracle Database.
 To see the name of your database, connect to it as the SYS user with the SYSDBA role at the
SQL*Plus command prompt and run the following statement:

SQL> select name from v$database;

The result is displayed as follows:

NAME

---------

TEST

 Next, query the names of the database files from the dba_data_files system view:

SQL> select file_name from dba_data_files;

FILE_NAME

-------------------------------------------------------------------------------

C:\<YourLocalFolder>\TEST\USERS01.DBF

C:\<YourLocalFolder>\TEST\UNDOTBS01.DBF

C:\<YourLocalFolder>\TEST\SYSAUX01.DBF

C:\<YourLocalFolder>\TEST\SYSTEM01.DBF

C:\<YourLocalFolder>\TEST\EXAMPLE01.DBF

2. Prepare the primary database for standby database creation

Before creating a standby database, it’s recommended that you ensure the primary database is configured
properly. The following is a list of steps that you need to perform:

1. Enable forced logging


2. Create a password file
3. Configure a standby redo log
4. Enable Archiving
5. Set primary database initialization parameters

Enable forced logging


To implement a standby database, you need to enable 'Forced Logging' on the primary database. This option
ensures that even if a 'nologging' operation is performed, forced logging takes precedence and all operations
are logged into the redo logs. This guarantees that everything in the primary database is logged and that
replication to the standby includes all operations on the primary. To enable forced logging, run the following
ALTER DATABASE statement:

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

Create a password file

To ship and apply archived logs from the primary server to the standby server, the SYS password must be
identical on both servers. Therefore, you create a password file on the primary database and copy it to the
standby server.

Important

When using Oracle Database 12c, there is a new user, SYSDG, which you can use to administer Oracle Data
Guard. For more information, see Changes in Oracle Database 12c Release.

In addition, make sure that the ORACLE_HOME environment variable is already defined on Machine1. If not,
define it using the Environment Variables dialog box. To access this dialog box, start the System utility by
double-clicking the System icon in Control Panel; then click the Advanced tab and choose Environment
Variables. To set an environment variable, click the New button under System Variables. After setting up the
environment variables, close the existing Windows command prompt and open a new one.

Run the following command to switch to the ORACLE_HOME database directory, such as
C:\OracleDatabase\product\11.2.0\dbhome_1\database:

cd %ORACLE_HOME%\database

Then, create a password file using the password file creation utility, ORAPWD. In the same Windows command
prompt on Machine1, run the following command, setting the password value to the SYS password:

ORAPWD FILE=PWDTEST.ora PASSWORD=password FORCE=y

This command creates a password file named PWDTEST.ora in the %ORACLE_HOME%\database directory.
Copy this file to the %ORACLE_HOME%\database directory on Machine2 manually.
Configure a standby redo log

Next, you need to configure a standby redo log so that the primary database can correctly receive redo when
it becomes a standby. Pre-creating the standby redo logs here also allows them to be created automatically on
the standby. It is important to configure the standby redo logs (SRLs) with exactly the same size as the
primary database's current online redo log files.

Run the following statement at the SQL*Plus command prompt on Machine1. v$logfile is a system view that
contains information about redo log files.

SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER IS_

---------- ------- ------- ------------------------------------------------------------ ---

3 ONLINE C:\<YourLocalFolder>\TEST\REDO03.LOG NO

2 ONLINE C:\<YourLocalFolder>\TEST\REDO02.LOG NO

1 ONLINE C:\<YourLocalFolder>\TEST\REDO01.LOG NO

Next, query the v$log system view, which displays log file information from the control file.

SQL> select bytes from v$log;

BYTES

----------

52428800

52428800

52428800

Note that 52428800 is 50 megabytes.
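As a quick arithmetic check (a standalone sketch, not part of the Oracle setup), the byte values returned by v$log convert to megabytes as follows, which is why the standby redo logs are created with SIZE 50M:

```python
# The v$log values above are reported in bytes; each online redo log group
# should come out to 50 MB, matching the SIZE 50M used for the standby redo logs.
log_sizes_bytes = [52428800, 52428800, 52428800]  # from "select bytes from v$log"

sizes_mb = [size / (1024 * 1024) for size in log_sizes_bytes]  # bytes -> MB
print(sizes_mb)  # [50.0, 50.0, 50.0]
```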

Then, in the SQL*Plus window, run the following statements to add new standby redo log file groups,
specifying a number that identifies each group with the GROUP clause. Using group numbers can make
administering standby redo log file groups easier:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 'C:\<YourLocalFolder>\TEST\REDO04.LOG' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 'C:\<YourLocalFolder>\TEST\REDO05.LOG' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 'C:\<YourLocalFolder>\TEST\REDO06.LOG' SIZE 50M;
Database altered.

Next, query the v$logfile system view again to list the redo log files. This also verifies that the standby redo
log file groups were created:

SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER IS_

---------- ------- ------- --------------------------------------------- ---

3 ONLINE C:\<YourLocalFolder>\TEST\REDO03.LOG NO

2 ONLINE C:\<YourLocalFolder>\TEST\REDO02.LOG NO

1 ONLINE C:\<YourLocalFolder>\TEST\REDO01.LOG NO

4 STANDBY C:\<YourLocalFolder>\TEST\REDO04.LOG

5 STANDBY C:\<YourLocalFolder>\TEST\REDO05.LOG NO

6 STANDBY C:\<YourLocalFolder>\TEST\REDO06.LOG NO

6 rows selected.

Enable Archiving

Next, enable archiving by running the following statements to put the primary database in ARCHIVELOG
mode and enable automatic archiving. You enable archive log mode by mounting the database and then
executing the ALTER DATABASE ARCHIVELOG statement.

First, log on as SYSDBA. At the Windows command prompt, run:

sqlplus /nolog

connect / as sysdba

Then, shut down the database at the SQL*Plus command prompt:

SQL> shutdown immediate;

Database closed.

Database dismounted.
ORACLE instance shut down.

Then, execute the startup mount command to mount the database. This ensures that Oracle associates the
instance with the specified database.

SQL> startup mount;

ORACLE instance started.

Total System Global Area 1503199232 bytes

Fixed Size 2281416 bytes

Variable Size 922746936 bytes

Database Buffers 570425344 bytes

Redo Buffers 7745536 bytes

Database mounted.

Then, run:

SQL> alter database archivelog;

Database altered.

Then, run the ALTER DATABASE statement with the OPEN clause to make the database available for normal use:

SQL> alter database open;

Database altered.

Set primary database initialization parameters

To configure Data Guard, you first create and configure the standby parameters in a regular pfile (text
initialization parameter file). When the pfile is ready, you convert it to a server parameter file (SPFILE).

You control the Data Guard environment using the parameters in the INIT.ORA file. When following this
tutorial, you need to update the primary database's INIT.ORA so that it can hold either role: primary or standby.

SQL> create pfile from spfile;


File created.

Next, you need to edit the pfile to add the standby parameters. To do this, open the INITTEST.ORA file in the
%ORACLE_HOME%\database directory and append the following parameters to it. The naming convention
for your INIT.ORA file is INIT<YourDatabaseName>.ORA.

db_name='TEST'

db_unique_name='TEST'

LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST,TEST_STBY)'

LOG_ARCHIVE_DEST_1='LOCATION=C:\OracleDatabase\archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST'

LOG_ARCHIVE_DEST_2='SERVICE=TEST_STBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST_STBY'

LOG_ARCHIVE_DEST_STATE_1=ENABLE

LOG_ARCHIVE_DEST_STATE_2=ENABLE

REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

LOG_ARCHIVE_FORMAT=%t_%s_%r.arc

LOG_ARCHIVE_MAX_PROCESSES=30

# Standby role parameters --------------------------------------------------------------------

fal_server=TEST_STBY

fal_client=TEST

standby_file_management=auto

db_file_name_convert='TEST_STBY','TEST'

log_file_name_convert='TEST_STBY','TEST'

# ---------------------------------------------------------------------------------------------

The previous statement block includes three important setup items:

 LOG_ARCHIVE_CONFIG...: You define the unique database IDs of the Data Guard configuration using this parameter.
 LOG_ARCHIVE_DEST_1...: You define the local archive folder location using this parameter. We recommend
that you create a new directory for your database's archiving needs and specify the local archive location
explicitly with this parameter, rather than using Oracle's default folder %ORACLE_HOME%\database\archive.
 LOG_ARCHIVE_DEST_2 .... LGWR ASYNC...: You define an asynchronous log writer process (LGWR) to
collect transaction redo data and transmit it to the standby destination. Here, DB_UNIQUE_NAME specifies
the unique name of the database on the destination standby server.
Once the new parameter file is ready, you need to create the spfile from it.

First, shut down the database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

Next, run the startup nomount command as follows:

SQL> startup nomount pfile='c:\OracleDatabase\product\11.2.0\dbhome_1\database\initTEST.ora';

ORACLE instance started.

Total System Global Area 1503199232 bytes

Fixed Size 2281416 bytes

Variable Size 922746936 bytes

Database Buffers 570425344 bytes

Redo Buffers 7745536 bytes

Now, create an spfile:

SQL> create spfile from pfile='c:\OracleDatabase\product\11.2.0\dbhome_1\database\initTEST.ora';

File created.

Then, shut down the database:

SQL> shutdown immediate;

ORA-01507: database not mounted


Then, use the startup command to start the instance:

SQL> startup;

ORACLE instance started.

Total System Global Area 1503199232 bytes

Fixed Size 2281416 bytes

Variable Size 922746936 bytes

Database Buffers 570425344 bytes

Redo Buffers 7745536 bytes

Database mounted.

Database opened.

Create a physical standby database


This section focuses on the steps that you must perform in Machine2 to prepare the physical standby
database.

First, you need to remote desktop to Machine2 via the Azure classic portal.

Then, on the standby server (Machine2), create all the necessary folders for the standby database, such as
C:\<YourLocalFolder>\TEST. While following this tutorial, make sure that the folder structure matches the
folder structure on Machine1, so that all the necessary files (such as the control files, datafiles, redo log files,
and the udump, bdump, and cdump files) are kept in the same locations. In addition, make sure that the
ORACLE_HOME and ORACLE_BASE environment variables are defined on Machine2. If not, define them
using the Environment Variables dialog box. To access this dialog box, start the System utility by
double-clicking the System icon in Control Panel; then click the Advanced tab and choose Environment
Variables. To set an environment variable, click the New button under System Variables. After setting up the
environment variables, close the existing Windows command prompt and open a new one to see the changes.

Next, follow these steps:

1. Prepare an initialization parameter file for standby database


2. Configure the listener and tnsnames to support the database on primary and standby machines
a. Configure listener.ora on both servers to hold entries for both databases
b. Configure tnsnames.ora on the primary and standby Virtual Machines to hold entries for both primary
and standby databases
c. Start the listener and check tnsping on both Virtual Machines to both services.
3. Start up the standby instance in nomount state
4. Use RMAN to clone the database and to create a standby database
5. Start the physical standby database in managed recovery mode
6. Verify the physical standby database

1. Prepare an initialization parameter file for standby database

This section demonstrates how to prepare an initialization parameter file for the standby database. To do this,
first copy the INITTEST.ORA file from Machine1 to Machine2 manually. You should see the INITTEST.ORA
file in the %ORACLE_HOME%\database folder on both machines. Then, modify the INITTEST.ora file on
Machine2 to set it up for the standby role, as specified below:


db_name='TEST'

db_unique_name='TEST_STBY'

db_create_file_dest='c:\OracleDatabase\oradata\test_stby'

db_file_name_convert='TEST','TEST_STBY'

log_file_name_convert='TEST','TEST_STBY'

job_queue_processes=10

LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST,TEST_STBY)'

LOG_ARCHIVE_DEST_1='LOCATION=c:\OracleDatabase\TEST_STBY\archives VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST_STBY'

LOG_ARCHIVE_DEST_2='SERVICE=TEST LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST'

LOG_ARCHIVE_DEST_STATE_1='ENABLE'

LOG_ARCHIVE_DEST_STATE_2='ENABLE'

LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'

LOG_ARCHIVE_MAX_PROCESSES=30

The previous statement block includes two important setup items:

 LOG_ARCHIVE_DEST_1: You need to create the c:\OracleDatabase\TEST_STBY\archives folder on Machine2
manually.
 LOG_ARCHIVE_DEST_2: This is an optional setting. You set it because it might be needed when the primary
machine is under maintenance and the standby machine becomes the primary database.

Then, you need to start the standby instance. On the standby database server, enter the following command at
a Windows command prompt to create an Oracle instance by creating a Windows service:

oradim -NEW -SID TEST_STBY -STARTMODE MANUAL

The oradim command creates an Oracle instance but does not start it. You can find oradim in the
C:\OracleDatabase\product\11.2.0\dbhome_1\BIN directory.

Configure the listener and tnsnames to support the database on primary and standby machines
Before you create a standby database, you need to make sure that the primary and standby databases in your
configuration can talk to each other. To do this, you need to configure both the listener and TNSNames either
manually or by using the network configuration utility NETCA. This is a mandatory task when you use the
Recovery Manager utility (RMAN).

Configure listener.ora on both servers to hold entries for both databases

Remote desktop to Machine1 and edit the listener.ora file as specified below. When you edit the listener.ora
file, always make sure that the opening and closing parentheses line up in the same column. You can find the
listener.ora file in the following folder: c:\OracleDatabase\product\11.2.0\dbhome_1\NETWORK\ADMIN\.

# listener.ora Network Configuration File:
# C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = test)
      (ORACLE_HOME = C:\OracleDatabase\product\11.2.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\OracleDatabase\product\11.2.0\dbhome_1\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

Next, remote desktop to Machine2 and edit the listener.ora file as follows:

# listener.ora Network Configuration File:
# C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = test_stby)
      (ORACLE_HOME = C:\OracleDatabase\product\11.2.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\OracleDatabase\product\11.2.0\dbhome_1\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

Configure tnsnames.ora on the primary and standby Virtual Machines to hold entries for both
primary and standby databases

Remote desktop to Machine1 and edit the tnsnames.ora file as specified below. You can find the tnsnames.ora
file in the following folder: c:\OracleDatabase\product\11.2.0\dbhome_1\NETWORK\ADMIN\.

TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test)
    )
  )

TEST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test_stby)
    )
  )

Remote desktop to Machine2 and edit the tnsnames.ora file as follows:


TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test)
    )
  )

TEST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test_stby)
    )
  )

Start the listener and check tnsping on both Virtual Machines to both services.

Open a new Windows command prompt on both the primary and standby Virtual Machines, start the listener
with the lsnrctl start command if it is not already running, and then run the following statements:

C:\Users\DBAdmin>tnsping test

TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 14-NOV-2013 06:29:08

Copyright (c) 1997, 2010, Oracle. All rights reserved.

Used parameter files:

C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = test)))

OK (0 msec)

C:\Users\DBAdmin>tnsping test_stby

TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 14-NOV-2013 06:29:16

Copyright (c) 1997, 2010, Oracle. All rights reserved.

Used parameter files:

C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = test_stby)))

OK (260 msec)

Start up the standby instance in nomount state


Next, set up the environment to support the standby database on the standby Virtual Machine (Machine2).

First, copy the password file from the primary machine (Machine1) to the standby machine (Machine2)
manually. This is necessary because the SYS password must be identical on both machines.

Then, open the Windows command prompt on Machine2, and set up the environment variables to point to the
standby database as follows:

SET ORACLE_HOME=C:\OracleDatabase\product\11.2.0\dbhome_1

SET ORACLE_SID=TEST_STBY

Next, start the Standby database in nomount state and then generate an spfile.

Start the database:


SQL> shutdown immediate;

SQL> startup nomount

ORACLE instance started.

Total System Global Area 747417600 bytes

Fixed Size 2179496 bytes

Variable Size 473960024 bytes

Database Buffers 264241152 bytes

Redo Buffers 7036928 bytes

Use RMAN to clone the database and to create a standby database


You can use the Recovery Manager utility (RMAN) to duplicate the primary database and thereby create the
physical standby database.

Remote desktop to the standby Virtual Machine (Machine2) and run the RMAN utility, specifying a full
connection string for both the TARGET (primary database, Machine1) and AUXILIARY (standby database,
Machine2) instances.

Important

Do not use operating system authentication, because there is no database on the standby server machine yet.

C:\> RMAN TARGET sys/password@test AUXILIARY sys/password@test_STBY

RMAN>DUPLICATE TARGET DATABASE

FOR STANDBY

FROM ACTIVE DATABASE

DORECOVER

NOFILENAMECHECK;

Start the physical standby database in managed recovery mode


This tutorial demonstrates how to create a physical standby database. For information on creating a logical
standby database, see the Oracle documentation.

Open a SQL*Plus command prompt and enable Data Guard on the standby Virtual Machine (Machine2) as
follows:

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

When the standby database is in MOUNT mode, archived log shipping continues and the managed recovery
process keeps applying logs on the standby database. This ensures that the standby database remains up to
date with the primary database. Note that the standby database is not accessible for reporting purposes
during this time.

When you open the standby database in READ ONLY mode, archived log shipping continues, but the managed
recovery process stops. This causes the standby database to fall increasingly out of date until the managed
recovery process is resumed. You can access the standby database for reporting purposes during this time,
but the data might not reflect the latest changes.

In general, we recommend that you keep the standby database in MOUNT mode, so that its data stays up to
date in case the primary database fails. However, you can keep the standby database in READ ONLY mode for
reporting purposes, depending on your application's requirements. The following statements demonstrate
how to open the standby database in read-only mode using SQL*Plus:

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE OPEN READ ONLY;

Verify the physical standby database


This section demonstrates how to verify the high availability configuration as an administrator.

Open a SQL*Plus command prompt window and check the archived redo log on the standby Virtual Machine
(Machine2):

SQL> show parameters db_unique_name;

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------


db_unique_name string TEST_STBY

SQL> SELECT NAME FROM V$DATABASE;

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIM APPLIED

---------------- --------------- --------------- ------------

45 23-FEB-14 23-FEB-14 YES

45 23-FEB-14 23-FEB-14 NO

46 23-FEB-14 23-FEB-14 NO

46 23-FEB-14 23-FEB-14 YES

47 23-FEB-14 23-FEB-14 NO

47 23-FEB-14 23-FEB-14 NO

Open a SQL*Plus command prompt window and switch log files on the primary machine (Machine1):

SQL> alter system switch logfile;

System altered.

SQL> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination C:\OracleDatabase\archive

Oldest online log sequence 69

Next log sequence to archive 71

Current log sequence 71

Check archived redo log on the Standby Virtual Machine (Machine2):

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIM NEXT_TIM APPLIED

---------------- --------------- --------------- ------------

45 23-FEB-14 23-FEB-14 YES

46 23-FEB-14 23-FEB-14 YES

47 23-FEB-14 23-FEB-14 YES

48 23-FEB-14 23-FEB-14 YES

49 23-FEB-14 23-FEB-14 YES

50 23-FEB-14 23-FEB-14 IN-MEMORY

Check for any gap on the Standby Virtual Machine (Machine2):

SQL> SELECT * FROM V$ARCHIVE_GAP;

no rows selected.

Another verification method is to fail over to the standby database and then test whether it is possible to fail
back to the primary database. To activate the standby database as a primary database, use the following
statements:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

If you have not enabled flashback on the original primary database, we recommend that you drop the original
primary database and re-create it as a standby database.

We recommend that you enable flashback database on the primary and the standby databases. When a
failover happens, the primary database can be flashed back to the time before the failover and quickly
converted to a standby database.


Arup Nanda:

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup


Recently, we had a glitch in a Data Guard (physical standby database) setup on our infrastructure. This is not a critical
database, so the monitoring was relatively lax; that it is handled by an outsourcer does not help either. In any case, the
laxness resulted in a failure remaining undetected for quite some time, and it was eventually discovered only when the
customer complained. This standby database is usually opened for read-only access from time to time. This time, however,
the customer saw that the data was significantly out of sync with the primary and raised a red flag. Unfortunately, by that
time it had become a rather political issue.

Since the DBA in charge couldn’t resolve the problem, I was called in. In this post, I will describe the issue and how it was
resolved. In summary, there are two parts to the problem:

(1) What happened


(2) How to fix it

What Happened

Let’s look at the first question: what caused the standby to lag behind? First, I looked up the current SCNs of the primary
and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the
primary (this is an asynchronous, non-real-time apply setup). The real question is how far it lags in terms of wall-clock
time. To find out, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM

I ran the same query to get the timestamp associated with the SCN of the standby database as well (note that I ran it on the
primary database, since it would fail on the standby in mounted mode):

SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.
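The "two and a half days" figure follows directly from the two timestamps returned above; a quick standalone check:

```python
from datetime import datetime

# Timestamps returned by scn_to_timestamp for the primary and standby SCNs
primary_ts = datetime(2009, 12, 18, 8, 54, 28)   # 18-DEC-09 08.54.28 AM
standby_ts = datetime(2009, 12, 15, 19, 19, 27)  # 15-DEC-09 07.19.27 PM

lag = primary_ts - standby_ts
print(lag)                                    # 2 days, 13:35:01
print(round(lag.total_seconds() / 86400, 1))  # 2.6
```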

The next question is why it was lagging so far in the past. This is a 10.2 database, where the FAL server should
automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log)
process to fail. To get that answer, I first checked the alert log of the standby instance. I found these lines that showed the
issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700
… …
Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.

Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009


Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15th at 17:16:15, the Managed Recovery Process encountered an error while fetching the log from the primary: ORA-12514, "TNS:listener does not currently know of service requested in connect descriptor". This error usually means the TNS connect string is incorrectly specified. The primary is called DEL1, and there is a connect string named DEL1 on the standby server.

The connect string works well now; at present the standby has no trouble receiving log information from the primary. There must have been a temporary hiccup that prevented that specific archived log from reaching the standby. Even if the log was skipped because of an intermittent problem, the FAL process should have fetched it later, but that never happened. And since sequence# 700 was never applied, none of the logs received after it (701, 702, and so on) could be applied either. This is what has caused the standby to lag behind ever since.

So, the fundamental question was why FAL did not fetch archived log sequence# 700 from the primary. To answer that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.

These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory

The archived log simply was not available. The process could not see the file and couldn’t get it across to the standby site.

Upon further investigation I found that the DBA had removed archived logs to make room in the filesystem, without realizing that he had also removed the most current one, which was yet to be transmitted to the remote site. The mystery of why FAL could not fetch that log was finally solved.

Solution

Now that I knew the cause, the focus shifted to the resolution. If archived log sequence# 700 were still available on the primary, I could easily have copied it over to the standby, registered the log file, and let the managed recovery process pick it up. Unfortunately, the file was gone, and I could not simply recreate it. Until that log was applied, recovery would not move forward. So, what were my options?

One option, of course, was to recreate the standby: possible, but hardly feasible considering the time required. The other option was to apply an incremental backup of the primary taken from that SCN. That is the key: the backup must start from a specific SCN number. Since the process is not very obvious, I have described it step by step below, indicating where each action must be performed: [Standby] or [Primary].

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shut down the standby database.

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog


allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09


channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf
… …
piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1
4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to the standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback


oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary.

9. [Standby] Copy the new standby controlfile into place:

$ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;

11. [Standby] RMAN does not know about these files yet, so you must let it know through a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)


RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog


searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database


=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files


=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb
12. [Standby] Recover the database using these files:

RMAN> recover database;

Starting recover at 18-DEC-09


using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...…
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf


archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf
… …

13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile
'/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have reached the end of the available archived logs: recovery now needs sequence# 8009, which has not been generated yet.

14. [Standby] At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and the standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
They are now very close to each other: the standby has caught up.
Tips from the trenches by Rohit Gupta

When you are using Data Guard, there are several scenarios in which the physical standby can go out of sync with the primary database.

Before doing anything to correct the problem, we need to verify why the standby is not in sync with the primary. This article covers the scenario where one log is missing from the standby but, apart from that missing log, all logs are available.

Verify from v$archived_log that there is a gap in the sequence numbers. All the logs up to the gap should have APPLIED=YES, and all the sequence#s after the missing log should have APPLIED=NO. This means that because of the missing log, the MRP is not applying logs on the standby, even though logs are still being transmitted to the standby and are available there.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

So, for example, if the missing log sequence# is 400, the above query should show APPLIED=YES for everything up to sequence# 399 and APPLIED=NO for everything from 401 onward.
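The same check can be scripted: given the (sequence#, applied) rows from the query above, find the sequence numbers missing from the view. A hedged Python sketch (the input format is an assumption, e.g. rows fetched through a client driver):

```python
def find_gaps(rows):
    """rows: list of (sequence#, applied) tuples from V$ARCHIVED_LOG,
    where applied is 'YES' or 'NO'. Returns sequence numbers absent
    from the view, i.e. the logs the standby never received."""
    present = {seq for seq, _ in rows}
    return sorted(set(range(min(present), max(present) + 1)) - present)

rows = [(398, 'YES'), (399, 'YES'), (401, 'NO'), (402, 'NO')]
print(find_gaps(rows))   # [400]
```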

There are a few steps to perform when the standby is not in sync with the primary because of a log gap on the standby.

These steps are:

STEP #1: Take an incremental backup of the primary from the SCN where the standby is lagging, and apply it on the standby server.

STEP #2: If step #1 does not bring the databases back in sync, re-create the controlfile of the standby database from the primary.

STEP #3: If after step #2 you still find that logs are not being applied on the standby, check the alert log; you may need to re-register the logs with the standby database.

*******************************************************************************************

STEP#1

1. On STANDBY database query the v$database view and record the current SCN of the standby database:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
1.3945E+10

SQL> SELECT to_char(CURRENT_SCN) FROM V$DATABASE;


TO_CHAR(CURRENT_SCN)
----------------------------------------
13945141914

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
*
ERROR at line 1:
ORA-16136: Managed Standby Recovery not active

If you see the above error, it means managed recovery is already off.

You can also query the view v$managed_standby to confirm whether the MRP is running:

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

3. Connect to the primary database as the RMAN target and create an incremental backup from the current
SCN of the standby database that was recorded in step 1:

For example,

RMAN> BACKUP INCREMENTAL FROM SCN 13945141914 DATABASE
      FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

You can choose a location other than /tmp as well.
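Conceptually, an incremental-from-SCN backup captures every datafile block whose last-change SCN is higher than the SCN you supply, which is why it can roll a stuck standby forward. A toy Python model of the block selection (the data structure is purely illustrative, not how RMAN stores anything):

```python
def blocks_in_backup(block_scns, since_scn):
    """block_scns: mapping of block_id -> SCN of the block's last change.
    Returns the blocks an incremental-from-SCN backup would include."""
    return {blk for blk, scn in block_scns.items() if scn > since_scn}

# Only blocks changed after the standby's recorded SCN need to ship.
blocks = {1: 13945141000, 2: 13945141915, 3: 13945200000}
print(sorted(blocks_in_backup(blocks, 13945141914)))   # [2, 3]
```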

4. Recover the standby database using the incremental backup of the primary taken above:

Copy the backup piece of the incremental backup to a location accessible to the standby server. Then, on the standby server, catalog the backup piece without connecting to a recovery catalog:

$ rman nocatalog target /


RMAN> CATALOG BACKUPPIECE '/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1';

Now in the same session, start the recovery

RMAN> RECOVER DATABASE NOREDO;

You should see something like:


Starting recover at 2015-09-17 04:59:57
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=309 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
....
..
..
.
channel ORA_DISK_1: reading from backup piece /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 tag=FOR STANDBY
channel ORA_DISK_1: restore complete, elapsed time: 01:53:08
Finished recover at 2015-07-25 05:20:3

Delete the backup set from standby:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';


using channel ORA_DISK_1
List of Backup Pieces

BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
17713   17713   1   1   AVAILABLE   DISK        /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1

Do you really want to delete the above objects (enter YES or NO)? YES

deleted backup piece

backup piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 recid=17713 stamp=660972421

Deleted 1 objects

5. Try to start managed recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

If you get an error here, you need to go to STEP#2 for bringing standby in sync.

If no error, then using the view v$managed_standby, verify that MRP process is started and has the status
APPLYING_LOGS.

6. After this, check whether the logs are being applied on the standby or not:

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

After recovering with the incremental backup, you will no longer see the sequence#s that earlier showed APPLIED=NO, because their changes were absorbed into the incremental backup and applied to the standby during the recovery. The APPLIED column shows YES for the logs transmitted from now on, which means logs are being applied.

Check the status of the MRP process in the view v$managed_standby. The status should be APPLYING_LOGS while the available logs are being applied; once all available logs have been applied, the status should be WAITING_FOR_LOGS.

7. As another check that the primary and standby are in sync, run the following query on both databases:

SQL> select max(sequence#) from v$log_history;

The output should be the same on both.

*******************************************************************************************

STEP #2:

Since managed recovery failed even after applying the incremental backup, we need to re-create the standby's controlfile. The reason is that applying the incremental backup updated the SCNs of the datafiles but not the SCN recorded in the controlfile, so the standby database was still looking for the old file to apply.

A good MOSC note for re-creating the controlfile in such a scenario is 734862.1.

Steps to recreate the standby controlfile and start the managed recovery on standby:

1. Take the backup of controlfile from primary

rman target sys/oracle@proddb catalog rman/cat@emrep

RMAN> backup current controlfile for standby;

2. Copy the controlfile backup to the standby system (unless it is on a common NFS mount, in which case there is nothing to transfer) and restore it onto the standby database.

Shut down all instances of the standby (if the standby is RAC):

sqlplus / as sysdba
shutdown immediate
exit

Start up one instance in nomount mode:

sqlplus / as sysdba
startup nomount
exit

Restore the standby controlfile:


rman nocatalog target /
restore standby controlfile from '/tmp/o1_mf_TAG20070220T151030_.bkp';
exit

3. Start up the standby with the new controlfile:

sqlplus / as sysdba
shutdown immediate
startup mount
exit

4. Restart managed recovery in one instance (if standby is RAC) of the standby database:

sqlplus / as sysdba
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

The above statement may succeed without errors, but the MRP process will still not start. Because the controlfile was restored from the primary, it points to datafiles at the primary's location rather than the standby's. For example, if the primary's datafiles are located at '+DATA/proddb_1/DATAFILE' and the standby's at '+DATA/proddb_2/DATAFILE', the new controlfile will show the datafiles' location as '+DATA/proddb_1/DATAFILE'. This can be verified with the query "select name from v$datafile" on the standby instance. We need to rename all the datafiles to reflect the correct location.
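That verification can be scripted too: flag any datafile name that still points at the primary's path. A small Python sketch (the diskgroup paths are the example values from this scenario, not fixed names):

```python
def mislocated(datafile_names, primary_prefix='+DATA/proddb_1/'):
    """datafile_names: names from "select name from v$datafile" on the
    standby. Returns files still pointing at the primary's diskgroup,
    i.e. the ones that need renaming."""
    return [n for n in datafile_names if n.startswith(primary_prefix)]

names = [
    '+DATA/proddb_1/datafile/users.310.620229743',    # stale: primary path
    '+DATA/proddb_2/datafile/USERS.1216.648429765',   # already correct
]
print(mislocated(names))   # ['+DATA/proddb_1/datafile/users.310.620229743']
```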

There are two ways to rename the datafiles:

1. Without using RMAN

Set standby_file_management=MANUAL in the standby's parameter file, then rename each file:

ALTER DATABASE RENAME FILE '+DATA/proddb_1/datafile/users.310.620229743' TO '+DATA/proddb_2/datafile/USERS.1216.648429765';

2. Using RMAN

rman nocatalog target /

Catalog the files. The string specified should refer to the diskgroup/filesystem destination of the standby datafiles:
RMAN> catalog start with '+diskgroup/<dbname>/datafile/';

e.g.:
RMAN> catalog start with '+DATA/proddb_2/datafile/';

This will list the files found and ask whether they should all be cataloged. Review the list and answer YES if all the datafiles are properly listed.

Once that is done, commit the change to the controlfile:

RMAN> switch database to copy;

Now start managed recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Then check the processes in the view v$managed_standby; the MRP process should be there. It will also start applying all the archived logs that have been missing since the last applied log. This process might take hours.

5. As another check that the primary and standby are in sync, run the following query on both databases after all logs in v$archived_log show APPLIED=YES:

SQL> select max(sequence#) from v$log_history;

The output should be the same on both.

*******************************************************************************************

STEP #3

If, after re-creating the controlfile, you still find that logs are being transmitted but not applied on the standby, check the standby's alert log. For example, see whether you find something similar to the snippet below:

Fetching gap sequence in thread 1, gap sequence 74069-74095


Wed Sep 17 06:45:47 2015
RFS[1]: Archived Log:
'+DATA/ipwp_sac1/archivelog/2008_09_17/thread_1_seq_74093.259.665649929'
Wed Sep 17 06:45:55 2015
Fetching gap sequence in thread 1, gap sequence 74069-74092
Wed Sep 17 06:45:57 2015
RFS[1]: Archived Log:
'+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74094.258.665649947'
Wed Sep 17 06:46:16 2015
RFS[1]: Archived Log:
'+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74095.256.665649957'
Wed Sep 17 06:46:26 2015
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 74069-74092

The alert log shows that log sequence#s 74069 to 74092 may have been transmitted but not applied, and the view v$archived_log shows sequence#s starting from 74093 with APPLIED=NO.

This means that logs up to 74068 were applied as part of the incremental backup, while 74069 to 74092 were transferred to the standby server but failed to register with the standby database. Try the following steps:

1. Locate the archived logs for the sequence#s shown in the alert log (74069 to 74092 in this example). For example, +DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74069.995.665630861

2. Register each of these archived logs with the standby database:

alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74069.995.665630861';
alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74070.998.665631405';
alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74071.792.665633755';
alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74072.263.665633713';

...and so on until the last one.

3. Now check the view v$archived_log; you should finally see the logs being applied. The status of the MRP should change from ARCHIVE_LOG_GAP to APPLYING_LOGS and eventually WAITING_FOR_LOGS.
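Typing one REGISTER statement per file is tedious for a wide gap, so the statements in step 2 can be generated from a file listing. A hedged Python sketch (the full paths are taken as input because ASM OMF names cannot be derived from the sequence number alone):

```python
def register_statements(archived_log_paths):
    """archived_log_paths: full names of the gap's archived logs, e.g.
    as listed from the standby's archive destination. Returns one
    'alter database register logfile' statement per path."""
    return ["alter database register logfile '%s';" % p
            for p in archived_log_paths]

stmts = register_statements([
    '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74069.995.665630861',
    '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74070.998.665631405',
])
print('\n'.join(stmts))
```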

*******************************************************************************************
