
Oracle 11g DBA Handouts Book-3

Version: 7, Revised Edition: October 2009


Index
Oracle 11g DBA Handouts Book-3......................................................................................................................................... 1
Version: 7, Revised Edition: October 2009................................................................................................................................ 1
74. Introduction to Oracle Data Guard.................................................................................................................................... 7
74.1. Data Guard Configurations........................................................................................................................................ 7
74.1.1. Primary Database................................................................................................................................................ 7
74.1.2. Standby Databases............................................................................................................................................. 7
74.1.3. Configuration Example........................................................................................................................................ 8
75. Data Guard....................................................................................................................................................................... 9
75.1. Overview.................................................................................................................................................................... 9
75.2. Concepts.................................................................................................................................................................. 10
75.2.1. No Data Loss..................................................................................................................................................... 10
75.2.2. No Data Divergence.......................................................................................................................................... 10
75.3. Overview of Oracle Data Guard Functional Components........................................................................................10
75.3.1. Data Guard Configuration................................................................................................................................. 10
75.3.2. Role Management............................................................................................................................................. 10
75.4. Data Guard Protection Modes................................................................................................................................. 10
75.5. Data Guard Broker................................................................................................................................................... 11
75.6. Data Guard Architecture.......................................................................................................................................... 11
75.7. What's New in Oracle Data Guard 10g Release 1?.................................................................................................11
75.7.1. Real Time Apply................................................................................................................................................ 11
75.7.2. Integration with Flashback Database................................................................................................................11
75.7.3. Simplified Browser-based Interface...................................................................................................................12
75.8. What's New in Oracle Data Guard 10g Release 2?.................................................................................................12
75.8.1. Fast-Start Failover............................................................................................................................................. 12
75.8.2. Improved Redo Transmission...........................................................................................................................12
75.8.3. Easy conversion of a physical standby database to a reporting database........................................................12
75.8.4. Automatic deletion of applied archived redo log files in logical standby DBs...................................12
75.8.5. Fine-grained monitoring of Data Guard configurations.....................................................................................12
75.9. Data Guard Benefits................................................................................................................................................ 12
75.9.1. Disaster recovery and high availability..............................................................................................................12
75.9.2. Complete data protection.................................................................................................................................. 12
75.9.3. Efficient utilization of system resources............................................................................................................12
75.9.4. Flexibility in data protection to balance availability against performance requirements....................................13
75.9.5. Protection from communication failures............................................................................................................13
75.9.6. Centralized and simple management................................................................................................................13
75.9.7. Integrated with Oracle database.......................................................................................................................13
76. Data Guard Protection Modes........................................................................................................................................ 14
77. Starting Up and Shutting Down a Physical Standby Database......................................................................................16
77.1. Starting Up a Physical Standby Database...............................................................................................................16
77.2. Shutting Down a Physical Standby Database.........................................................................................................16
77.3. Opening a Physical Standby Database................................................................................................................... 16
78. Standby Database Types............................................................................................................................................... 17
78.1. A standby database can be one of these types:......................................................................................................17
78.1.1. Physical Standby Databases.............................................................................................................................17
78.2. Benefits of a Physical Standby Database................................................................................................................17
78.3. Data protection......................................................................................................................................................... 17
78.4. Reduction in primary database workload.................................................................................................................17
78.5. Performance............................................................................................................................................................ 17
78.5.1. Logical Standby Databases...............................................................................................................................17
78.6. Benefits of a Logical Standby Database:.................................................................................................................18
78.7. Protection against additional kinds of failure...........................................................................................................18
78.8. Efficient use of resources......................................................................................................................................... 18
78.9. Workload distribution............................................................................................................................................... 18
78.10. Optimized for reporting and decision support requirements..................................................................................18
78.11. Minimizing downtime on software upgrades..........................................................................................................18
78.11.1. Snapshot Standby Databases.........................................................................................................................18
78.12. Benefits of a Snapshot Standby Database............................................................................................................18
79. User Interfaces for Administering Data Guard Configurations.......................................................................................20
80. Monitoring Standby Databases...................................................................................................................... 21
80.1. Primary DB Changes That Require Manual Intervention at a Physical Standby.....................................................21
80.1.1. Adding a Datafile or Creating a Tablespace......................................................................................................21
80.1.2. Dropping Tablespaces and Deleting Datafiles..................................................................................................23
80.1.3. Using DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES.....................................24
80.1.4. Using Transportable Tablespaces with a Physical Standby Database.............................................................24
80.1.5. Renaming a Datafile in the Primary Database..................................................................................................24
80.1.6. Add or Drop a Redo Log File Group..................................................................................................................25
80.1.7. NOLOGGING or Unrecoverable Operations.....................................................................................................25
80.1.8. Refresh the Password File................................................................................................................................ 25
80.1.9. Reset the TDE Master Encryption Key..............................................................................................................26
80.2. Recovering Through the OPEN RESETLOGS Statement.......................................................................................26
80.3. Monitoring Primary, Physical Standby, and Snapshot Standby Databases............................................................26
81. Dataguard Services........................................................................................................................................................ 28
81.1. Redo Transport Services...................................................................................................................................... 28
81.2. Apply Services...................................................................................................................................................... 28
81.3. Role Transitions................................................................................................................................... 29
81.3.1. Introduction to Role Transitions.........................................................................................................29
81.3.2. Preparing for a Role Transition.........................................................................................................29
81.3.3. Choosing a Target Standby Database for a Role Transition.............................................................30
81.3.4. Switchovers....................................................................................................................................... 30
81.3.5. Failovers........................................................................................................................................... 32
81.3.6. Preparing for a Failover..................................................................................................................... 33
81.3.7. Role Transition Triggers................................................................................................................... 33
81.3.8. Role Transitions Involving Physical Standby Databases.................................................33
82. Redo Apply Services...................................................................................................................................................... 40
82.1 Introduction to Apply Services.................................................................................................................................. 40
82.2 Apply Services Configuration Options...................................................................................................................... 40
82.2.1. Specifying a Time Delay for the Application of Archived Redo Log Files..........................................................41
82.3 Applying Redo Data to Physical Standby Databases...............................................................................................42
82.3.1. Starting Redo Apply.......................................................................................................................................... 42
82.3.2. Stopping Redo Apply......................................................................................................................................... 42
82.4. Applying Redo Data to Logical Standby Databases................................................................................................42
82.4.1. Starting and Stopping SQL Apply.....................................................................................................................42
83. Redo Transport Services................................................................................................................................................ 43
83.1 Introduction to Redo Transport Services.................................................................................................................. 43
83.2 Configuring Redo Transport Services....................................................................................................................... 43
83.2.1 Redo Transport Security.................................................................................................................................... 43
83.2.2 Configuring an Oracle Database to Send Redo Data........................................................................................44
83.2.3 Configuring an Oracle Database to Receive Redo Data....................................................................................45
83.3 Monitoring Redo Transport Services........................................................................................................................ 46
83.3.1. Monitoring Redo Transport Status....................................................................................................................46
83.3.2. Monitoring Synchronous Redo Transport Response Time...............................................................................47
83.3.3. Redo Gap Detection and Resolution.................................................................................................................48
83.4. Manual Gap Resolution........................................................................................................................................... 48
83.4.1. Redo Transport Services Wait Events..............................................................................................................49
84. Logical Standby Dataguard............................................................................................................................................ 50
84.1. Introduction.............................................................................................................................................................. 50
84.2. When to choose Logical Standby database............................................................................................................50
84.3. Prerequisite Conditions for creating a Logical Standby Database:..........................................................................50
84.3.1. Improvements in Oracle Data Guard in Oracle 10gr2:......................................................................................51
85. Cloning Oracle Database............................................................................................................................................... 52
86. Oracle Data Guard Broker.............................................................................................................................................. 55
86.1 Overview of Oracle Data Guard and the Broker.......................................................................................................55
86.1.1. Data Guard Configurations and Broker Configurations.....................................................................................55
86.1.2. Oracle Data Guard Broker................................................................................................................................. 55
86.2 Benefits of Data Guard Broker.................................................................................................................................. 56
86.3. Data Guard Broker Management Model.................................................................................................................. 58
86.4. Data Guard Broker User Interfaces......................................................................................................... 59
86.4.1. Oracle Enterprise Manager............................................................................................................... 60
86.4.2. Data Guard Command-Line Interface (DGMGRL)............................................................60
86.5. Data Guard Monitor.................................................................................................................. 61
86.5.1. Data Guard Monitor (DMON) Process..............................................................................61
86.5.2. Configuration Management............................................................................................................... 62
86.5.3. Database Property Management......................................................................................62
87. What's New in Oracle Data Guard?................................................................................................................................63
88. Active Dataguard............................................................................................................................................................ 65
88.1. Traditional Data Guard............................................................................................................................................. 65
88.2. Oracle Active Data Guard........................................................................................................................................ 65
88.3. Unique Advantages of Oracle Active Data Guard...................................................................................................65
89. Snapshot Standby Databases......................................................................................................................... 67
89.1. Benefits of a Snapshot Standby Database...........................................................................................67
89.2. Using a Snapshot Standby Database..................................................................................................................67
90. Data Guard Enhancements in Oracle 11g R2..............................................................................................69
90.1. Redo Apply and SQL Apply..................................................................................................................................... 69
90.2. Redo Apply.............................................................................................................................................................. 69
90.3. SQL Apply................................................................................................................................................................ 69
90.4. Compressed Table Support in Logical Standby Databases and Oracle LogMiner..................................................70
90.5. Configurable Real-Time Query Apply Lag Limit......................................................................................................70
90.6. Support Up to 30 Standby Databases..................................................................................................... 70
90.7. Automatic Repair of Corrupt Data Blocks................................................................................................................70
90.8. Manual Repair of Corrupt Data Blocks.................................................................................................................... 70
91. Virtual Private Database (VPD)..................................................................................................................... 90
91.1. Overview.................................................................................................................................................................. 90
91.2. Policy Types............................................................................................................................................................. 90
91.3. Selective Columns................................................................................................................................................... 92
92. Online Redefinition......................................................................................................................................................... 99
92.1. Online Redefinition of a Single Partition................................................................................................................ 100
93. OEM Jobs & Events..................................................................................................................................................... 103
93.1. Overview................................................................................................................................................................ 103
93.2. Creating Jobs without Programs............................................................................................................................ 103
93.3. Associating Jobs with Programs............................................................................................................................ 104
93.4. Classes, Plans, and Windows................................................................................................................................ 105
93.5. Monitoring.............................................................................................................................................................. 106
93.6. Administration........................................................................................................................................................ 106
94. Log Miner...................................................................................................................................................................... 115
95. RMAN Tablespace Point-in-Time Recovery (TSPITR).................................................................................................123
95.1. Understanding RMAN TSPITR.............................................................................................................................. 123
95.1.1. RMAN TSPITR Concepts................................................................................................................................ 123
95.1.2. How TSPITR Works With an RMAN-Managed Auxiliary Instance..................................................................124
95.1.3. Deciding When to Use TSPITR....................................................................................................................... 124
95.1.4. Limitations of TSPITR..................................................................................................................................... 124
95.1.5. Limitations of TSPITR Without a Recovery Catalog........................................................................................125
95.2. Performing Basic RMAN TSPITR.......................................................................................................................... 125
96. DBMS Built-in Packages.............................................................................................................................................. 135
96.1. DBMS_SCHEDULER............................................................................................................................................ 135
96.1.1. Basic Features................................................................................................................................ 135
96.1.2. Scheduler Components................................................................................................................................... 135
96.1.3. Advanced Features......................................................................................................................................... 135
96.2. DBMS_REPAIR..................................................................................................................................................... 136
96.3. DBMS_OUTPUT built-in........................................................................................................................................ 138
96.4. DBMS_ALERT Built-In........................................................................................................................................... 138
96.5. DBMS_PIPE Built-in.............................................................................................................................................. 140
96.6. DBMS_SQL Built-in............................................................................................................................................... 142
96.7. DBMS_JOBS Built-In............................................................................................................................................. 143
Example:..................................................................................................................................................................... 144
97. Database Normalization............................................................................................................................................... 145
97.1. A layman’s Approach to Database Normalization.................................................................................................145
97.2. First Normal Form.................................................................................................................................................. 145
97.3. Second Normal Form............................................................................................................................................. 146
97.4. Third Normal Form................................................................................................................................................. 146
97.5. Fourth Normal Form............................................................................................................................................... 147
97.6. Fifth Normal Form.................................................................................................................................................. 147
97.7. A Summary of Normalization................................................................................................................................. 148
98. Installation of Oracle 11g on Red Hat Enterprise Linux 4.............................................................................................149
98.1. Installation Steps.................................................................................................................................................... 149
99. Upgradation to Oracle 10g........................................................................................................................................... 159
99.1. Validating the Database before Upgrade:..............................................................................................................160
99.2. Performing the Upgrade......................................................................................................................................... 161
Post Upgrade Tasks:................................................................................................................................................... 162
100. Dynamic Performance Views..................................................................................................................................... 170
100.1. Instance............................................................................................................................................................... 170
100.2. Archivelog Management Views........................................................................................................................... 170
100.3. Control File Views................................................................................................................................................ 170
100.4. Redolog File Views.............................................................................................................................................. 170
100.5. Datafile Views...................................................................................................................................................... 170
100.6. User Management Views..................................................................................................................................... 171
100.7. Multi-Threaded Server Views............................................................................................................................... 171
100.8. Backups & Recovery............................................................................................................................................ 171
100.9. Real Application Cluster Views............................................................................................................................ 172
100.10. Recovery Manager Views.................................................................................................................................. 172
100.11. RMAN Backups & Recovery Views................................................................................................................... 172
100.12. Network Views................................................................................................................................................... 172
100.13. Database Views................................................................................................................................................. 172
100.14. Calling Dynamic Views...................................................................................................................................... 173
100.15. Replication Views............................................................................................................................................... 173
100.16. Base Table Index Views.................................................................................................................................... 173
100.17. SQL Loader Views............................................................................................................................................. 173
100.18. Logminer............................................................................................................................................................ 173
100.19. NLS Views......................................................................................................................................................... 173
100.20. PL/SQL Views.................................................................................................................................................... 173
100.21. Statspack Views................................................................................................................................................. 173
100.22. Parameters Views.............................................................................................................................................. 174
100.23. Session Information Views................................................................................................................................. 174
100.24. Tablespace........................................................................................................................................................ 174
100.25. Temporary File Views........................................................................................................................................ 174
100.26. Performance Tuning Views................................................................................................................................ 174
Other Views.................................................................................................................................................................... 177
101. Data Dictionary Views................................................................................................................................................ 179
102. New Features for OLAP............................................................................................................................................. 184
102.1. The SQL Model clause........................................................................................................................................ 184
102.2. Improvements to the multidimensional OLAP engine..........................................................................................184
102.3. Asynchronous Change Data Capture.................................................................................................................. 185
102.4. Improvements to Oracle data mining................................................................................................................... 185
102.5. The SQLAccess Adviser...................................................................................................................................... 185
102.6. The Tune MView Advisor and improvements to Query Rewrite..........................................................................185
102.7. Data Pump: The replacement for import and export............................................................................................186
102.8. Improvements to storage management...............................................................................................................186
102.9. Faster full table scans.......................................................................................................................................... 187
102.10. Automatic tuning and maintenance................................................................................................................... 187
103. Real Application Clusters – RAC................................................................................................................................ 188
103.1. Overview.............................................................................................................................................................. 188
103.2. What is Oracle Database 10g RAC?................................................................................................................... 188
103.3. Real Application Clusters Architecture................................................................................................................ 188
103.3.1. Oracle Clusterware........................................................................................................................................ 189
103.3.2. Hardware Architecture................................................................................................................................... 189
103.3.3. File Systems and Volume Management........................................................................................................189
103.3.4. Virtual Internet Protocol Address (VIP).........................................................................................................189
103.3.5. Cluster Verification Utility.............................................................................................................................. 189
103.3.6. RAC on Extended Distance Clusters............................................................................................................190
103.4. RAC Benefits....................................................................................................................................................... 190
103.4.1. High Availability............................................................................................................................................. 190
103.4.2. Reliability....................................................................................................................................................... 190
103.4.3. Recoverability................................................................................................................................................ 190
103.4.4. Error Detection.............................................................................................................................................. 190
103.4.5. Continuous Operations.................................................................................................................................. 190
103.4.6. Scalability...................................................................................................................................................... 190
104. Glossary..................................................................................................................................................................... 198
105. Oracle Certification Details......................................................................................................................................... 203
105.1. What is OCP?...................................................................................................................................................... 203
105.2. What are the benefits from being certified?.........................................................................................................203
105.3. How to Emboss OCP Logo on your Resume......................................................................................................203
105.4. Oracle 7.3 DBA.................................................................................................................................................... 203
105.5. Oracle 8i DBA...................................................................................................................................................... 203
105.6. Oracle 9i DBA...................................................................................................................................................... 203
105.7. Oracle 10g DBA.................................................................................................................................................... 204
105.8. Prometric Centers for OCP Exams...................................................................................................................... 205
105.9. Website Links for Oracle OCP Dumps and Oracle FAQs...................................................................205
106. FAQs.......................................................................................................................................................................... 205
107. Common UNIX Commands........................................................................................................................................ 233
108. Important Websites for Oracle................................................................................................................................. 240

74. Introduction to Oracle Data Guard


Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard
provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to
enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby
databases as transactionally consistent copies of the production database. Then, if the production database becomes
unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the
production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup,
restoration, and cluster techniques to provide a high level of data protection and data availability. With Data Guard,
administrators can optionally improve production database performance by offloading resource-intensive backup and
reporting operations to standby systems.

74.1. Data Guard Configurations


A Data Guard configuration consists of one production database and one or more standby databases. The databases in a
Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on
where the databases are located, provided they can communicate with each other. For example, you can have a standby
database on the same system as the production database, along with two standby databases on other systems at remote
locations. You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard
broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in
Oracle Enterprise Manager.
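For example, a quick way to check which role and protection mode a database currently has is to query V$DATABASE from SQL*Plus on either database (the output values shown are only illustrative):

SQL> SELECT database_role, protection_mode FROM v$database;

DATABASE_ROLE    PROTECTION_MODE
---------------- --------------------
PRIMARY          MAXIMUM PERFORMANCE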

74.1.1. Primary Database


A Data Guard configuration contains one production database, also referred to as the primary database, that functions in
the primary role. This is the database that is accessed by most of your applications. The primary database can be either a
single-instance Oracle database or an Oracle Real Application Clusters (RAC) database.

74.1.2. Standby Databases


A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary
database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once
created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database
and then applying the redo to the standby database. Similar to a primary database, a standby database can be either a
single-instance Oracle database or an Oracle RAC database. The types of standby databases are as follows:
Physical standby database
Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the
primary database on a block-for-block basis. The database schema, including indexes, is the same. A physical standby
database is kept synchronized with the primary database through Redo Apply, which recovers the redo data received from
the primary database and applies it to the physical standby database. As of Oracle Database 11g release 1 (11.1),
a physical standby database can receive and apply redo while it is open for read-only access. A physical standby
database can therefore be used concurrently for data protection and reporting.
Logical standby database
Contains the same logical information as the production database, although the physical organization and structure of the
data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply,
which transforms the data in the redo received from the primary database into SQL statements and then executes the
SQL statements on the standby database. A logical standby database can be used for other business purposes in addition
to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting
purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets
with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and
database upgrades.
Snapshot Standby Database
A snapshot standby database is a fully updatable standby database that is created by converting a physical standby
database into a snapshot standby database. Like a physical or logical standby database, a snapshot standby database
receives and archives redo data from a primary database. Unlike a physical or logical standby database, a snapshot
standby database does not apply the redo data that it receives. The redo data received by a snapshot standby database is
not applied until the snapshot standby is converted back into a physical standby database, after first discarding any local
updates made to the snapshot standby database.
A snapshot standby database is best used in scenarios that require a temporary, updatable snapshot of a physical
standby database. Note that because redo data received by a snapshot standby database is not applied until it is
converted back into a physical standby, the time needed to perform a role transition is directly proportional to the amount
of redo data that needs to be applied.
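As a minimal sketch of the snapshot standby lifecycle (assuming Oracle Database 11g and a flash recovery area configured on the standby, which snapshot standbys require), each conversion is a single statement issued on the mounted standby with Redo Apply stopped:

SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;    -- now fully updatable for testing

-- Later, discard the local changes and resume the physical standby role:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

After converting back, restart the database in mount mode and restart Redo Apply so the accumulated redo is applied.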

74.1.3. Configuration Example


The figure below shows a typical Data Guard configuration that contains a primary database that transmits redo data to a
standby database. The standby database is remotely located from the primary database for disaster recovery and backup
operations. You can configure the standby database at the same location as the primary database. However, for disaster
recovery purposes, Oracle recommends that you configure standby databases at remote locations. In the figure, redo is
being applied out of standby redo log files to the standby database.

75. Data Guard


75.1. Overview
Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and
monitors one or more standby DBs to protect enterprise data from failures, disasters, errors, and corruptions.
Data Guard maintains these standby DBs as transactionally consistent copies of the production database. These standby
databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or
they may be located in the same city, same campus, or even in the same building. If the production database becomes
unavailable because of a planned or an unplanned outage, Data Guard can switch any standby DB to the production role,
thus minimizing the downtime associated with the outage, and preventing any data loss.
Data Guard can be used in combination with other Oracle High Availability (HA) solutions such as Real Application
Clusters (RAC) and Recovery Manager (RMAN), to provide a high level of data protection and data availability that is
unprecedented in the industry.

Figure 1-1. Hi-Level Overview of Oracle Data Guard

To implement this availability option with Oracle 7/8/8i/9i, we need to ensure that there is a second server with:
1. Similar hardware (possibly with fewer CPUs or less memory).
2. The same operating system, including patches.
3. The same Oracle version, including patches.
With Oracle 8i, all clients can be configured with a Net alias that includes a failover option targeting the standby server.
The moment the production server goes down, we can cancel managed recovery mode (MRM) on the standby server and
open the database as a regular database (no longer treating it as a hot standby). Because this database was running in
MRM mode, recovery has already been performed using the archived log files shipped from the production server, so with
the 8i option we no longer need to spend time on recovery at failover. Time-consuming work such as Crystal Reports and
statistics analysis can be offloaded to the hot standby server, and instead of burdening the primary server with backup
processing, we can take a cold backup of the hot standby by shutting that database down. A sketch of the manual failover
commands follows.
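A hedged sketch of the manual failover just described, run on the standby server (the exact ACTIVATE syntax varies slightly across the 8i/9i releases):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP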

75.2. Concepts
1. Analysis of high availability and disaster protection requirements.
2. Two loosely connected sites (for example, over Ethernet), primary and standby, combine into a single, easily managed
disaster recovery solution.
3. Uniform management solution.
4. Support for both GUI and CLI interfaces.
5. Automated creation and configuration of physical hot standby databases.
6. Data Guard's log transport ships the data, in the form of archived log files, to the standby database.
7. Failover and switchover automation.
8. Monitoring, alert, and control mechanisms.

75.2.1. No Data Loss


The log transport service does not allow a transaction to commit on the primary database until its redo is also available on
the standby database (although that redo may not yet have been applied there).

75.2.2. No Data Divergence


No data divergence extends no data loss by prohibiting modifications on the primary database whenever the connection to
the standby database is unavailable.

75.3. Overview of Oracle Data Guard Functional Components


75.3.1. Data Guard Configuration
A Data Guard configuration consists of one production (or primary) database and up to nine standby databases. The
databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no
restrictions on where the databases are located, provided that they can communicate with each other. However, for
disaster recovery, it is recommended that the standby databases are hosted at sites that are geographically separated
from the primary site.

75.3.2. Role Management


Using Data Guard, the role of a database can be switched from a primary role to a standby role and vice versa, ensuring
no data loss in the process, and minimizing downtime. There are two kinds of role transitions - a switchover and a failover.
A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for
planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and
the standby database transitions to the primary role. The transition occurs without having to re-create either database. A
failover is an irreversible transition of a standby database to the primary role. This is done only in the event of a
catastrophic failure of the primary database, which is assumed to be lost; for the old primary to be used again in the Data
Guard configuration, it must be re-instantiated as a standby of the new primary.
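For a physical standby, a switchover boils down to a pair of statements, sketched below (the WITH SESSION SHUTDOWN clause and the surrounding steps vary slightly by release):

-- On the primary:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

-- On the old standby, which becomes the new primary:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> ALTER DATABASE OPEN;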

75.4. Data Guard Protection Modes


In some situations, a business cannot afford to lose data at any cost. In other situations, some applications require
maximum database performance and can tolerate a potential loss of data. Data Guard provides three distinct modes of
data protection to satisfy these varied requirements (an example of setting a mode follows the list):
1. Maximum Protection: This mode offers the highest level of data protection. Data is synchronously transmitted to
the standby database from the primary database and transactions are not committed on the primary database
unless the redo data is available on at least one standby database configured in this mode. If the last standby
database configured in this mode becomes unavailable, processing stops on the primary database. This mode
ensures no data loss.
2. Maximum Availability: This mode is similar to the maximum protection mode, including zero data loss. However,
if a standby database becomes unavailable (for example, because of network connectivity problems), processing
continues on the primary database. When the fault is corrected, the standby database is automatically
resynchronized with the primary database.
3. Maximum Performance: This mode offers slightly less data protection on the primary database, but higher
performance than maximum availability mode. In this mode, as the primary database processes transactions,
redo data is asynchronously shipped to the standby database. The commit operation of the primary database
does not wait for the standby database to acknowledge receipt of redo data before completing write operations
on the primary database. If any standby destination becomes unavailable, processing continues on the primary
database and there is little effect on primary database performance.
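As an example, the protection mode is set on the primary database, typically while it is mounted; the higher modes also assume that standby redo logs and a synchronous (SYNC) redo transport destination are already in place:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> ALTER DATABASE OPEN;
SQL> SELECT protection_mode FROM v$database;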

75.5. Data Guard Broker


The Oracle Data Guard Broker is a distributed management framework that automates and centralizes the creation,
maintenance, and monitoring of Data Guard configurations. All management operations can be performed either through
Oracle Enterprise Manager, which uses the Broker, or through the Broker's specialized command-line interface
(DGMGRL).
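A short illustrative DGMGRL session (the connect string and the database name 'stby' are placeholders, and a broker configuration is assumed to already exist):

$ dgmgrl sys/password@prim
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'stby';
DGMGRL> SWITCHOVER TO 'stby';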

75.6. Data Guard Architecture

Figure 2-1. Data Guard Architecture

1. The primary database is the production database, from which the standby database is created.
2. A physical standby database is a replica created from a backup of the primary database.
3. The Log Transport Service (LTS) controls the automated transport of archived logs from the primary to the standby.
4. Network configuration: the primary database is connected to one or more remote standby databases over Oracle Net.
5. The Log Apply Service (LAS) applies the archived logs to the standby database.
6. The Data Guard broker is the management and monitoring tool.
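A representative initialization-parameter fragment for the log transport side of this architecture (the service name 'stby_tns' and the DB_UNIQUE_NAME values 'prim' and 'stby' are placeholders for this sketch):

LOG_ARCHIVE_CONFIG='DG_CONFIG=(prim,stby)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prim'
LOG_ARCHIVE_DEST_2='SERVICE=stby_tns ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'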

75.7. What's New in Oracle Data Guard 10g Release 1?


This section will highlight some of the key new features of Oracle Data Guard 10g Release 1.

75.7.1. Real Time Apply


With this feature, redo data can be applied on the standby database (whether by Redo Apply or SQL Apply) as soon as it
has been written to a standby redo log (SRL). Prior releases of Data Guard required this redo data to be archived at the
standby database in the form of archive logs before it could be applied. The Real Time Apply feature allows standby
databases to be closely synchronized with the primary database, enabling up-to-date and real-time reporting. This also
enables faster switchover and failover times, which in turn reduces planned and unplanned downtime for the business.
The impact of a disaster is often measured in terms of Recovery Point Objective (RPO - how much data a business can
afford to lose in the event of a disaster) and Recovery Time Objective (RTO - how much time a business can afford to be
down in the event of a disaster). With Oracle Data Guard, when Maximum Protection is used in combination with Real
Time Apply, businesses get the benefits of both zero data loss and minimal downtime in the event of a disaster, a
combination of RPO and RTO benefits that is difficult to match with any other solution available today.
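On a physical standby database, Real Time Apply is typically started with the USING CURRENT LOGFILE clause; a
minimal sketch:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2> USING CURRENT LOGFILE DISCONNECT FROM SESSION;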

75.7.2. Integration with Flashback Database


Data Guard in 10g has been integrated with the Flashback family of features to bring the Flashback feature benefits to a
Data Guard configuration. One such benefit is human error protection. In Oracle9i, administrators could configure Data
Guard with an apply delay to protect standby databases from possible logical data corruptions that occurred on the
primary database. The side effects of such delays are that any reporting done on the standby database is done on old
data, and switchover/failover is delayed because the accumulated logs have to be applied first. In Data Guard 10g, with
the Real Time Apply feature, such delayed-reporting or delayed-switchover/failover issues do not exist; if logical
corruptions do end up affecting both the primary and standby database, the administrator may decide to use Flashback
Database on both the primary and standby databases to quickly revert the databases to an earlier point in time to back
out such user errors.
Another benefit of this integration appears during failovers. In releases prior to 10g, following any failover operation, the
old primary database had to be recreated (as a new standby database) from a backup of the new primary database if the
administrator intended to bring it back into the Data Guard configuration. This can be an issue when the database sizes
are fairly large and the primary/standby databases are hundreds or thousands of miles apart.
However, in Data Guard 10g, after the primary server fault is repaired, the primary database may simply be brought up in
mounted mode, "flashed back" (using flashback database) to the SCN at which the failover occurred, and then brought
back as a standby database in the Data Guard configuration. No reinstantiation is required.
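A sketch of this reinstatement procedure (the SCN placeholder below is hypothetical; the CONVERT TO PHYSICAL
STANDBY statement shown is the 11g syntax). On the new primary, find the SCN at which the failover occurred:
SQL> SELECT STANDBY_BECAME_PRIMARY_SCN FROM V$DATABASE;
Then, on the repaired old primary:
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN <standby_became_primary_scn>;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;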

75.7.3. Simplified Browser-based Interface


Administration of a Data Guard configuration can be done through the new streamlined browser-based HTML interface of
Enterprise Manager, which enables complete standby database lifecycle management. The focus of such streamlined
administration is on:
 Ease of use.
 Management based on best practices.
 Pre-built integration with other HA features.

75.8. What's New in Oracle Data Guard 10g Release 2?


This section will highlight some of the key new features of Oracle Data Guard 10g Release 2.

75.8.1. Fast-Start Failover


This capability allows Data Guard to automatically and quickly fail over to a previously chosen, synchronized standby
database in the event of loss of the primary database, without requiring any manual steps to invoke the failover, and
without incurring any data loss. Following a fast-start failover, once the old primary database is repaired, Data Guard
automatically reinstates it as a standby database, restoring high availability to the Data Guard configuration.

75.8.2. Improved Redo Transmission


Several enhancements have been made in the redo transmission architecture to make sure redo data generated on the
primary database can be transmitted as quickly and efficiently as possible to the standby database(s).

75.8.3. Easy conversion of a physical standby database to a reporting database


A physical standby database can be activated as a primary database, opened read/write for reporting purposes, and then
flashed back to a point in the past to be easily converted back to a physical standby database. At this point, Data Guard
automatically synchronizes the standby database with the primary database. This allows the physical standby database to
be utilized for read/write reporting and cloning activities.

75.8.4. Automatic deletion of applied archived redo log files in logical standby databases
Once archived logs are applied on the logical standby database, they are automatically deleted, reducing storage
consumption on the logical standby and improving Data Guard manageability. Physical standby databases have had this
functionality since Oracle Database 10g Release 1, with the Flash Recovery Area.

75.8.5. Fine-grained monitoring of Data Guard configurations


Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of Data Guard configurations,
so that administrators may make an informed and expedient decision regarding managing this configuration.

75.9. Data Guard Benefits


75.9.1. Disaster recovery and high availability
Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Automatic failover
and easy-to-manage switchover capabilities allow quick role reversals between primary and standby databases,
minimizing the downtime of the primary database for planned and unplanned outages.

75.9.2. Complete data protection


A standby database also provides an effective safeguard against data corruptions and user errors. Storage level physical
corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors
that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated at the
time it is received at the standby database and further when applied to the standby database.

75.9.3. Efficient utilization of system resources


A physical standby database can be used for backups and read-only reporting, thereby reducing the primary database
workload and saving valuable CPU and I/O cycles. In Oracle Database 10g Release 2, a physical standby database can
also be easily converted back and forth between being a physical standby database and an open read/write database. A
logical standby database allows its tables to be simultaneously available for read-only access while they are updated from
the primary database. A logical standby database also allows users to perform data manipulation operations on tables that
are not updated from the primary database. Finally, additional indexes and materialized views can be created in the logical
standby database for better reporting performance.

75.9.4. Flexibility in data protection to balance availability against performance requirements


Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help
enterprises balance data availability against system performance requirements.

75.9.5. Protection from communication failures


If network connectivity is lost between the primary and one or more standby databases, redo data cannot be sent from the
primary to those standby databases. Once connectivity is re-established, the missing redo data is automatically detected
by Data Guard and the necessary archive logs are automatically transmitted to the standby databases. The standby
databases are resynchronized with the primary database, with no manual intervention by the administrator.
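To see whether a standby database currently has a redo gap, the V$ARCHIVE_GAP view can be queried on the standby;
for example:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;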

75.9.6. Centralized and simple management


Data Guard Broker automates the management and monitoring tasks across the multiple databases in a Data Guard
configuration. Administrators may use either Oracle Enterprise Manager or the Broker's own specialized command-line
interface (DGMGRL) to take advantage of this integrated management framework.

75.9.7. Integrated with Oracle database


Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.

76. Data Guard Protection Modes


This chapter contains the following sections:
o Data Guard Protection Modes
o Setting the Data Protection Mode of a Primary Database
Data Guard Protection Modes
In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured
data protection mode and that does not have a redo gap.
Maximum Availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of
a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written
to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo
stream to at least one synchronized standby database, it effectively switches to maximum performance mode to preserve
primary database availability and operates in that mode until it is again able to write its redo stream to a synchronized
standby database. This mode ensures that no data loss will occur if the primary database fails, but only if a second fault
does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Maximum Performance
This protection mode provides the highest level of data protection that is possible without affecting the performance of a
primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those
transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is
done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in
writing redo data to the standby database(s). This protection mode offers slightly less data protection than maximum
availability mode and has minimal impact on primary database performance.
This is the default protection mode.
Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the
redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized
standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut
down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby
database. Because this data protection mode prioritizes data protection over primary database availability, Oracle
recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum
protection mode to prevent a single standby database failure from causing the primary database to shut down.
Setting the Data Protection Mode of a Primary Database
Perform the following steps to change the data protection mode of a primary database:
Step 1   Select a data protection mode that meets your availability, performance, and data protection requirements.
Step 2   Verify that redo transport is configured to at least one standby database.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter that corresponds to the standby database
must include the redo transport attributes listed in the table below for the data protection mode that you are moving to.
If the primary database has more than one standby database, only one of those standby databases must use the redo
transport settings listed in the table. The standby database must also have a standby redo log.
Table: Required Redo Transport Attributes for Data Protection Modes

Maximum Availability     Maximum Performance     Maximum Protection
--------------------     -------------------     ------------------
AFFIRM                   NOAFFIRM                AFFIRM
SYNC                     ASYNC                   SYNC
DB_UNIQUE_NAME           DB_UNIQUE_NAME          DB_UNIQUE_NAME

Step 3   Verify that the DB_UNIQUE_NAME database initialization parameter has been set to a unique name on the
primary and standby database.
For example, if the DB_UNIQUE_NAME parameter has not been defined on either database, the following SQL
statements might be used to assign a unique name to each database.
Execute this SQL statement on the primary database:
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='CHICAGO' SCOPE=SPFILE;
Execute this SQL statement on the standby database:
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='BOSTON' SCOPE=SPFILE;
Step 4   Verify that the LOG_ARCHIVE_CONFIG database initialization parameter has been defined on the primary and
standby database and that its value includes a DG_CONFIG list that includes the DB_UNIQUE_NAME of the primary and
standby database.
For example, if the LOG_ARCHIVE_CONFIG parameter has not been defined on either database, the following SQL
statement could be executed on each database to configure the LOG_ARCHIVE_CONFIG parameter:
SQL> ALTER SYSTEM SET
2> LOG_ARCHIVE_CONFIG='DG_CONFIG=(CHICAGO,BOSTON)';
Step 5   Skip this step unless you are raising your protection mode.
Shut down the primary database and restart it in mounted mode if the protection mode is being changed to Maximum
Protection or being changed from Maximum Performance to Maximum Availability.
If the primary database is an Oracle Real Application Clusters database, shut down all of the instances and then start and mount a
single instance. For example:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
Step 6   Set the data protection mode.
Execute the following SQL statement on the primary database:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE |
PROTECTION};
Step 7   Open the primary database.
If the database was restarted in Step 5, open the database:
SQL> ALTER DATABASE OPEN;
Step 8   Confirm that the primary database is operating in the new protection mode.
Perform the following query on the primary database to confirm that it is operating in the new protection mode:
SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

77. Starting Up and Shutting Down a Physical Standby Database

This section describes how to start up and shut down a physical standby database.

77.1. Starting Up a Physical Standby Database


Use the SQL*Plus STARTUP command to start a physical standby database. The SQL*Plus STARTUP command starts,
mounts, and opens a physical standby database in read-only mode when it is invoked without any arguments. Once
mounted or opened, a physical standby database can receive redo data from the primary database.
Note: When Redo Apply is started on a physical standby database that has not yet received redo data from the primary
database, an ORA-01112 message may be returned. This indicates that Redo Apply is unable to determine the starting
sequence number for media recovery. If this occurs, manually retrieve an archived redo log file from the primary database
and register it on the standby database, or wait for redo transport to begin before starting Redo Apply.
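For example, an archived log copied manually from the primary can be registered on the standby as follows (the file name
shown is hypothetical):
SQL> ALTER DATABASE REGISTER LOGFILE '/u01/arch/1_100_123456789.arc';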

77.2. Shutting Down a Physical Standby Database


Use the SQL*Plus SHUTDOWN command to stop Redo Apply and shut down a physical standby database. Control is not
returned to the session that initiates a database shutdown until shutdown is complete. If the primary database is up and
running, defer the standby destination on the primary database and perform a log switch before shutting down the physical
standby database.
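A minimal sketch of this sequence, assuming the standby destination on the primary is LOG_ARCHIVE_DEST_2. On the
primary:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
SQL> ALTER SYSTEM SWITCH LOGFILE;
Then, on the standby:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SHUTDOWN IMMEDIATE;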

77.3. Opening a Physical Standby Database


A physical standby database can only be opened in read-only mode. An open physical standby database can continue to
receive and apply redo data from the primary database. This allows read-only transactions to be offloaded from a primary
database to a physical standby and increases the return on investment in a physical standby database. A physical
standby database instance cannot be opened if Redo Apply is active on any instance, even if one or more instances have
already been opened. If Redo Apply is active, use the following SQL statement to stop Redo Apply before attempting to
open a physical standby database instance:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Once a physical standby database has been opened, Redo Apply can be started and stopped at any time. You can
perform queries against the standby while Redo Apply is active. (This is also known as Real-Time Query.)
Note: A physical standby database cannot be open while Redo Apply is active unless the physical standby database and
its primary database are running in 11g compatibility mode.
Note:
You must issue the SET TRANSACTION READ ONLY command before performing a distributed query on a physical
standby database.
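Putting these notes together, a typical sketch for opening a physical standby read-only and then resuming Redo Apply
(Real-Time Query) is:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2> USING CURRENT LOGFILE DISCONNECT;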

78. Standby Database Types


A standby database is a transactionally consistent copy of an Oracle production database that is initially created from a
backup copy of the primary database. Once the standby database is created and configured, Data Guard automatically
maintains the standby database by transmitting primary database redo data to the standby system, where the redo data is
applied to the standby database.

78.1. A standby database can be one of these types:


1. Physical standby database
2. Logical standby database
3. Snapshot standby database.
If needed, either a physical or a logical standby database can assume the role of the primary database and take over
production processing. A Data Guard configuration can include any combination of these types of standby databases.

78.1.1. Physical Standby Databases


A physical standby database is an exact, block-for-block copy of a primary database. A physical standby is maintained as
an exact copy through a process called Redo Apply, in which redo data received from a primary database is continuously
applied to a physical standby database using the database recovery mechanisms.

78.2. Benefits of a Physical Standby Database


A physical standby database provides the following benefits:
o Disaster recovery and high availability
A physical standby database is a robust and efficient disaster recovery and high availability solution. Easy-to-manage
switchover and failover capabilities allow easy role reversals between primary and physical standby databases,
minimizing the downtime of the primary database for planned and unplanned outages.

78.3. Data protection


A physical standby database can prevent data loss, even in the face of unforeseen disasters. A physical standby database
supports all datatypes, and all DDL and DML operations that the primary database can support. It also provides a
safeguard against data corruptions and user errors. Storage level physical corruptions on the primary database will not be
propagated to a standby database. Similarly, logical corruptions or user errors that would otherwise cause data loss can
be easily resolved.

78.4. Reduction in primary database workload


Oracle Recovery Manager (RMAN) can use a physical standby database to off-load backups from a primary database,
saving valuable CPU and I/O cycles. A physical standby database can also be queried while Redo Apply is active, which
allows queries to be offloaded from the primary to a physical standby, further reducing the primary workload.

78.5. Performance
The Redo Apply technology used by a physical standby database is the most efficient mechanism for keeping a standby
database updated with changes being made at a primary database because it applies changes using low-level recovery
mechanisms which bypass all SQL level code layers.

78.5.1. Logical Standby Databases


A logical standby database is initially created as an identical copy of the primary database, but it later can be altered to
have a different structure. The logical standby database is updated by executing SQL statements. This allows users to
access the standby database for queries and reporting at any time. Thus, the logical standby database can be used
concurrently for data protection and reporting operations. Data Guard automatically applies information from the archived
redo log file or standby redo log file to the logical standby database by transforming the data in the log files into SQL
statements and then executing the SQL statements on the logical standby database. Because the logical standby
database is updated using SQL statements, it must remain open. Although the logical standby database is opened in
read/write mode, its target tables for the regenerated SQL are available only for read-only operations. While those tables
are being updated, they can be used simultaneously for other tasks such as reporting, summations, and queries.
Moreover, these tasks can be optimized by creating additional indexes and materialized views on the maintained tables. A
logical standby database has some restrictions on datatypes, types of tables, and types of DDL and DML operations.

78.6. Benefits of a Logical Standby Database:


A logical standby database is ideal for high availability (HA) while still offering disaster recovery (DR) benefits. Compared
to a physical standby database, a logical standby database provides significant additional HA benefits:

78.7. Protection against additional kinds of failure


Because logical standby analyzes the redo and reconstructs logical changes to the database, it can detect and protect
against certain kinds of hardware failure on the primary that could potentially be replicated through block level changes.
Oracle supports having both physical and logical standbys for the same primary server.

78.8. Efficient use of resources


A logical standby database is open read/write while changes on the primary are being replicated. Consequently, a logical
standby database can simultaneously be used to meet many other business requirements; for example, it can run
reporting workloads that would be problematic for the primary's throughput. It can be used to test new software releases
and some kinds of applications on a complete and accurate copy of the primary's data. It can host other applications and
additional schemas while protecting data replicated from the primary against local changes. It can be used to assess the
impact of certain kinds of physical restructuring (for example, changes to partitioning schemes). Because a logical standby
identifies user transactions and replicates only those changes while filtering out background system changes, it can
efficiently replicate only transactions of interest.

78.9. Workload distribution


Logical standby provides a simple turnkey solution for creating up-to-the-minute, consistent replicas of a primary database
that can be used for workload distribution. As the reporting workload increases, additional logical standbys can be created
with transparent load distribution without affecting the transactional throughput of the primary server.

78.10. Optimized for reporting and decision support requirements


A key benefit of logical standby is that significant auxiliary structures can be created to optimize the reporting workload;
structures that could have a prohibitive impact on the primary's transactional response time. A logical standby can have its
data physically reorganized into a different storage type with different partitioning, have many different indexes, have on-
demand refresh materialized views created and maintained, and it can be used to drive the creation of data cubes and
other OLAP data views.

78.11. Minimizing downtime on software upgrades


Logical standby can be used to greatly reduce downtime associated with applying patchsets and new software releases. A
logical standby can be upgraded to the new release and then switched over to become the active primary. This allows full
availability while the old primary is converted to a logical standby and the patchset is applied.

78.11.1. Snapshot Standby Databases


A snapshot standby database is a fully updatable standby database that is created by converting a physical standby
database into a snapshot standby database. A snapshot standby database receives and archives, but does not apply,
redo data from its primary database. Redo data received from the primary database is applied when a snapshot standby
database is converted back into a physical standby database, after discarding all local updates to the snapshot standby
database.
A snapshot standby database typically diverges from its primary database over time because redo data from the primary
database is not applied as it is received. Local updates to the snapshot standby database will cause additional
divergence. The data in the primary database is fully protected however, because a snapshot standby can be converted
back into a physical standby database at any time, and the redo data received from the primary will then be applied.

78.12. Benefits of a Snapshot Standby Database


A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection
benefits that are similar to those of a physical standby database. Snapshot standby databases are best used in scenarios
where the benefit of having a temporary, updatable snapshot of the primary database justifies additional administrative
complexity and increased time to recover from primary database failures.
The benefits of using a snapshot standby database include the following:
o It provides an exact replica of a production database for development and testing purposes, while maintaining data
protection at all times
o It can be easily refreshed to contain current production data by converting to a physical standby and resynchronizing.
o The ability to create a snapshot standby, test, resynchronize with production, and then again create a snapshot
standby and test, is a cycle that can be repeated as often as desired. The same process can be used to easily create
and regularly update a snapshot standby for reporting purposes where read/write access to data is required.
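As a sketch, the conversion cycle described above uses the following statements on the standby (11g syntax). To convert
a physical standby into a snapshot standby:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
After testing, to discard the local updates and convert back into a physical standby:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;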

79. User Interfaces for Administering Data Guard Configurations


You can use the following interfaces to configure, implement, and manage a Data Guard configuration:
Oracle Enterprise Manager (OEM)
Enterprise Manager provides a GUI interface for the Data Guard broker that automates many of the tasks involved in
creating, configuring, and monitoring a Data Guard environment.
SQL*Plus Command-line interface
Several SQL*Plus statements use the STANDBY keyword to specify operations on a standby database. Other SQL
statements do not include standby-specific syntax, but they are useful for performing operations on a standby database.
Initialization parameters
Several initialization parameters are used to define the Data Guard environment.
Data Guard broker command-line interface (DGMGRL)
The DGMGRL command-line interface is an alternative to using Oracle Enterprise Manager. The DGMGRL command-line
interface is useful if you want to use the broker to manage a Data Guard configuration from batch programs or scripts.

80. Monitoring Standby Databases


80.1. Primary DB Changes That Require Manual Intervention at a Physical Standby
Most structural changes made to a primary database are automatically propagated through redo data to a physical
standby database. The following table lists primary database structural and configuration changes that require manual
intervention at a physical standby database.

Table: Primary Database Changes That Require Manual Intervention at a Physical Standby

Change: Add a datafile or create a tablespace
Action: No action is required if the STANDBY_FILE_MANAGEMENT database initialization parameter is set to AUTO. If
this parameter is set to MANUAL, the new datafile must be copied to the physical standby database.

Change: Drop or delete a tablespace or datafile
Action: Delete the datafile from the primary and physical standby database after the redo data containing the DROP or
DELETE command is applied to the physical standby.

Change: Use transportable tablespaces
Action: Move the tablespace between the primary and the physical standby database.

Change: Rename a datafile
Action: Rename the datafile on the physical standby database.

Change: Add or drop a redo log file group
Action: Evaluate the configuration of the redo log and standby redo log on the physical standby database and adjust as
necessary.

Change: Perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause
Action: Copy the datafile containing the unlogged changes to the physical standby database.

Change: Grant or revoke administrative privileges or change the password of a user who has administrative privileges
Action: If the REMOTE_LOGIN_PASSWORDFILE initialization parameter is set to SHARED or EXCLUSIVE, replace the
password file on the physical standby database with a fresh copy of the password file from the primary database.

Change: Reset the TDE master encryption key
Action: Replace the database encryption wallet on the physical standby database with a fresh copy of the database
encryption wallet from the primary database.

Change: Change initialization parameters
Action: Evaluate whether a corresponding change must be made to the initialization parameters on the physical standby
database.

80.1.1. Adding a Datafile or Creating a Tablespace


The STANDBY_FILE_MANAGEMENT database initialization parameter controls whether the addition of a datafile to the
primary database is automatically propagated to a physical standby database.
o If the STANDBY_FILE_MANAGEMENT parameter on the physical standby database is set to AUTO, any new
datafiles created on the primary database are automatically created on the physical standby database.
o If the STANDBY_FILE_MANAGEMENT database parameter on the physical standby database is set to
MANUAL, a new datafile must be manually copied from the primary database to the physical standby database
after it is added to the primary database.
Note that if an existing datafile from another database is copied to a primary database, it must also be copied to the
standby database, and the standby control file must be re-created, regardless of the setting of the
STANDBY_FILE_MANAGEMENT parameter.
Using the STANDBY_FILE_MANAGEMENT Parameter with Raw Devices
Note: Do not use the following procedure with databases that use Oracle Managed Files. Also, if the raw device path
names are not the same on the primary and standby servers, use the DB_FILE_NAME_CONVERT database initialization
parameter to convert the path names.


When the STANDBY_FILE_MANAGEMENT parameter is set to AUTO, whenever new datafiles are added or dropped on
the primary database, corresponding changes are made in the standby database without manual intervention. This is true
as long as the standby database is using a file system. If the standby database is using raw devices for datafiles, then the
STANDBY_FILE_MANAGEMENT parameter will continue to work, but manual intervention is needed. This manual
intervention involves ensuring the raw devices exist before Redo Apply applies the redo data that will create the new
datafile.
On the primary database, create a new tablespace whose datafiles reside on a raw device. At the same time, create the
same raw device on the standby database. For example:
SQL> CREATE TABLESPACE MTS2 -
> DATAFILE '/dev/raw/raw100' SIZE 1M;
Tablespace created.

SQL> ALTER SYSTEM SWITCH LOGFILE;


System altered.

The standby database automatically adds the datafile because the raw devices exist. The standby alert log shows the
following:
Fri Apr 8 09:49:31 2005
Media Recovery Log
/u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_7_15ffgt0z_.arc
Recovery created file /dev/raw/raw100
Successfully added datafile 6 to media recovery
Datafile #6: '/dev/raw/raw100'
Media Recovery Waiting for thread 1 sequence 8 (in transit)

However, if the raw device was created on the primary system but not on the standby, then Redo Apply will stop due to
file-creation errors. For example, issue the following statements on the primary database:
SQL> CREATE TABLESPACE MTS3 -
> DATAFILE '/dev/raw/raw101' SIZE 1M;
Tablespace created.

SQL> ALTER SYSTEM SWITCH LOGFILE;


System altered.

The standby system does not have the /dev/raw/raw101 raw device created. The standby alert log shows the following
messages when recovering the archive:
Fri Apr 8 10:00:22 2005
Media Recovery Log
/u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_8_15ffjrov_.arc
File #7 added to control file as 'UNNAMED00007'.
Originally created as:
'/dev/raw/raw101'
Recovery was unable to create the file as:
'/dev/raw/raw101'
MRP0: Background Media Recovery terminated with error 1274
Fri Apr 8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 1
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Fri Apr 8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 1
Fri Apr 8 10:00:22 2005
MTS; MRP0: Background Media Recovery process shutdown
ARCH: Connecting to console port...

Recovering from Errors


To correct the problems perform the following steps:
1. Create the raw device on the standby database and assign permissions to the Oracle user.
2. Query the V$DATAFILE view. For example:
SQL> SELECT NAME FROM V$DATAFILE;

NAME
--------------------------------------------------------------------------------
/u01/MILLER/MTS/system01.dbf
/u01/MILLER/MTS/undotbs01.dbf
/u01/MILLER/MTS/sysaux01.dbf
/u01/MILLER/MTS/users01.dbf
/u01/MILLER/MTS/mts.dbf
/dev/raw/raw100
/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007

SQL> ALTER SYSTEM SET -
> STANDBY_FILE_MANAGEMENT=MANUAL;

SQL> ALTER DATABASE CREATE DATAFILE
  2 '/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007'
  3 AS
  4 '/dev/raw/raw101';

3. In the standby alert log you should see information similar to the following:
Fri Apr 8 10:09:30 2005
alter database create datafile
'/dev/raw/raw101' as '/dev/raw/raw101'

Fri Apr 8 10:09:30 2005


Completed: alter database create datafile
'/dev/raw/raw101' a

4. On the standby database, set STANDBY_FILE_MANAGEMENT to AUTO and restart Redo Apply:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;

At this point Redo Apply uses the new raw device datafile and recovery continues.

80.1.2. Dropping Tablespaces and Deleting Datafiles


When a tablespace is dropped or a datafile is deleted from a primary database, the corresponding datafile(s) must be
deleted from the physical standby database. The following example shows how to drop a tablespace:
SQL> DROP TABLESPACE tbs_4;
SQL> ALTER SYSTEM SWITCH LOGFILE;

To verify that deleted datafiles are no longer part of the database, query the V$DATAFILE view.
Delete the corresponding datafile on the standby system after the redo data that contains the previous changes is applied
to the standby database. For example:
% rm /disk1/oracle/oradata/payroll/s2tbs_4.dbf

On the primary database, after ensuring the standby database applied the redo information for the dropped tablespace,
you can remove the datafile for the tablespace. For example:
% rm /disk1/oracle/oradata/payroll/tbs_4.dbf

80.1.3. Using DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES


You can issue the SQL DROP TABLESPACE INCLUDING CONTENTS AND DATAFILES statement on the primary
database to delete the datafiles on both the primary and standby databases. To use this statement, the
STANDBY_FILE_MANAGEMENT initialization parameter must be set to AUTO. For example, to drop the tablespace at
the primary site:
SQL> DROP TABLESPACE tbs_4 -
> INCLUDING CONTENTS AND DATAFILES;
SQL> ALTER SYSTEM SWITCH LOGFILE;

80.1.4. Using Transportable Tablespaces with a Physical Standby Database


You can use the Oracle transportable tablespaces feature to move a subset of an Oracle database and plug it in to
another Oracle database, essentially moving tablespaces between the databases.
To move or copy a set of tablespaces into a primary database when a physical standby is being used, perform the
following steps:
 Generate a transportable tablespace set that consists of datafiles for the set of tablespaces being transported
and an export file containing structural information for the set of tablespaces.
 Transport the tablespace set:
o Copy the datafiles and the export file to the primary database.
o Copy the datafiles to the standby database.
 The datafiles must be copied to a directory defined by the DB_FILE_NAME_CONVERT initialization parameter. If
DB_FILE_NAME_CONVERT is not defined, then issue the ALTER DATABASE RENAME FILE statement to
modify the standby control file after the redo data containing the transportable tablespace has been applied and
Redo Apply has failed (because the expected file names were not found). The STANDBY_FILE_MANAGEMENT
initialization parameter must be set to AUTO.
 Plug in the tablespace.
 Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data will be
generated and applied at the standby site to plug the tablespace into the standby database.

80.1.5. Renaming a Datafile in the Primary Database


When you rename one or more datafiles in the primary database, the change is not propagated to the standby database.
Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent
modifications on the standby database because the modifications are not performed automatically, even if the
STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.
The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the
standby database.
o To rename the datafile in the primary database, take the tablespace offline:
SQL> ALTER TABLESPACE tbs_4 OFFLINE;

o Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to
rename the datafile on the primary system:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf
/disk1/oracle/oradata/payroll/tbs_x.dbf

o Rename the datafile in the primary database and bring the tablespace back online:
SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE
2> '/disk1/oracle/oradata/payroll/tbs_4.dbf'
3> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
SQL> ALTER TABLESPACE tbs_4 ONLINE;

o Connect to the standby database, query the V$ARCHIVED_LOG view to verify all of the archived redo log files
are applied, and then stop Redo Apply:
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# APP
--------- ---
8 YES
9 YES
10 YES
11 YES
4 rows selected.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

o Shut down the standby database:


SQL> SHUTDOWN;

o Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf
/disk1/oracle/oradata/payroll/tbs_x.dbf

o Start and mount the standby database:


SQL> STARTUP MOUNT;

o Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization
parameter must be set to MANUAL.
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';

o On the standby database, restart Redo Apply:


SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
2> DISCONNECT;

If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database
control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see
error messages similar to the following in the alert log:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'

80.1.6. Add or Drop a Redo Log File Group


The configuration of the redo log and standby redo log on a physical standby database should be reevaluated and
adjusted as necessary after adding or dropping a redo log file group on the primary database.
Take the following steps to add or drop a redo log file group or standby redo log file group on a physical standby
database (a worked sketch follows this list):
o Stop Redo Apply.
o If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
o Add or drop a log file group.
o Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their
original states.
o Restart Redo Apply.
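A minimal sketch of these steps on the standby, adding one standby redo log group (the file name and size below are
hypothetical):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
SQL> ALTER DATABASE ADD STANDBY LOGFILE
  2> ('/disk1/oracle/oradata/payroll/srl04.log') SIZE 50M;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;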

80.1.7. NOLOGGING or Unrecoverable Operations


When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, the standby database
is invalidated and may require substantial DBA administrative activity to repair. You can specify the SQL ALTER
DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING
setting. However, this statement will not repair an already invalidated database.
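For example, force logging can be enabled at the database level on the primary so that future NOLOGGING operations
cannot invalidate the standby:
SQL> ALTER DATABASE FORCE LOGGING;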

80.1.8. Refresh the Password File


If the REMOTE_LOGIN_PASSWORDFILE database initialization parameter is set to SHARED or EXCLUSIVE, the
password file on a physical standby database must be replaced with a fresh copy from the primary database after granting
or revoking administrative privileges or changing the password of a user with administrative privileges.
Failure to refresh the password file on the physical standby database may cause authentication of redo transport sessions
or connections as SYSDBA or SYSOPER to the physical standby database to fail.

80.1.9. Reset the TDE Master Encryption Key


The database encryption wallet on a physical standby database must be replaced with a fresh copy of the database
encryption wallet from the primary database whenever the TDE master encryption key is reset on the primary database.
Failure to refresh the database encryption wallet on the physical standby database will prevent access to encrypted
columns on the physical standby database that are modified after the master encryption key is reset on the primary
database.

80.2. Recovering Through the OPEN RESETLOGS Statement


Data Guard allows recovery on a physical standby database to continue after the primary database has been opened with
the RESETLOGS option. When an ALTER DATABASE OPEN RESETLOGS statement is issued on the primary database,
the incarnation of the database changes, creating a new branch of redo data.

When a physical standby database receives a new branch of redo data, Redo Apply automatically takes the new branch
of redo data. For physical standby databases, no manual intervention is required if the standby database did not apply
redo data past the new resetlogs SCN (past the start of the new branch of redo data). The following table describes how
to resynchronize the standby database with the primary database branch.

Condition: The standby database has not applied redo data past the new resetlogs SCN (past the start of the new branch
of redo data).
Result: Redo Apply automatically takes the new branch of redo.
Action: No manual intervention is necessary. The MRP automatically resynchronizes the standby database with the new
branch of redo data.

Condition: The standby database has applied redo data past the new resetlogs SCN, and Flashback Database is enabled
on the standby database.
Result: The standby database is recovered in the future of the new branch of redo data.
Action: Follow the procedure in "flashback_dg_specific_point.doc" to flash back the physical standby database, then
restart Redo Apply to continue applying redo onto the new resetlogs branch. The MRP automatically resynchronizes the
standby database with the new branch.

Condition: The standby database has applied redo data past the new resetlogs SCN, and Flashback Database is not
enabled on the standby database.
Result: The primary database has diverged from the standby on the indicated primary database branch.
Action: Re-create the physical standby database.

Condition: The standby database is missing intervening archived redo log files from the new branch of redo data.
Result: The MRP cannot continue until the missing log files are retrieved.
Action: Locate and register missing archived redo log files from each branch.

Condition: The standby database is missing archived redo log files from the end of the previous branch of redo data.
Result: The MRP cannot continue until the missing log files are retrieved.
Action: Locate and register missing archived redo log files from the previous branch.

80.3. Monitoring Primary, Physical Standby, and Snapshot Standby Databases


This section describes where to find useful information for monitoring primary and standby databases.
The following table summarizes common primary database management actions and where to find information related to
these actions.
Table: Sources of Information About Common Primary Database Management Actions

Action: Enable or disable a redo thread
Primary site information: Alert log; V$THREAD
Standby site information: Alert log

Action: Display database role, protection mode, protection level, switchover status, fast-start failover information, and so
forth
Primary site information: V$DATABASE
Standby site information: V$DATABASE

Action: Add or drop a redo log file group
Primary site information: Alert log; V$LOG; STATUS column of V$LOGFILE
Standby site information: Alert log

Action: CREATE CONTROLFILE
Primary site information: Alert log
Standby site information: Alert log

Action: Monitor Redo Apply
Primary site information: Alert log; V$ARCHIVE_DEST_STATUS
Standby site information: Alert log; V$ARCHIVED_LOG; V$LOG_HISTORY; V$MANAGED_STANDBY

Action: Change tablespace status
Primary site information: V$RECOVER_FILE; DBA_TABLESPACES
Standby site information: V$RECOVER_FILE; DBA_TABLESPACES; Alert log

Action: Add or drop a datafile or tablespace
Primary site information: DBA_DATA_FILES; Alert log
Standby site information: V$DATAFILE; Alert log

Action: Rename a datafile
Primary site information: V$DATAFILE; Alert log
Standby site information: V$DATAFILE; Alert log

Action: Unlogged or unrecoverable operations
Primary site information: V$DATAFILE; V$DATABASE
Standby site information: Alert log

Action: Monitor redo transport
Primary site information: V$ARCHIVE_DEST_STATUS; V$ARCHIVED_LOG; V$ARCHIVE_DEST; Alert log
Standby site information: V$ARCHIVED_LOG; Alert log

Action: Issue OPEN RESETLOGS or CLEAR UNARCHIVED LOGFILES statements
Primary site information: Alert log
Standby site information: Alert log

Action: Change initialization parameter
Primary site information: Alert log
Standby site information: Alert log



81. Dataguard Services


The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and
changes to the database roles:
Redo Transport Services
Control the automated transfer of redo data from the production database to one or more archival destinations.
Apply Services
Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data
can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files
as they are being filled, without requiring the redo data to be archived first at the standby database.
Role Transitions
Change the role of a database from a standby database to a primary database, or from a primary database to a standby
database using either a switchover or a failover operation.

81.1. Redo Transport Services


Redo transport services control the automated transfer of redo data from the production database to one or more archival
destinations.
Redo transport services perform the following tasks (a status query follows this list):
o Transmit redo data from the primary system to the standby systems in the configuration
o Manage the process of resolving any gaps in the archived redo log files due to a network failure
o Automatically detect missing or corrupted archived redo log files on a standby system and automatically
retrieve replacement archived redo log files from the primary database or another standby database
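The status of each redo transport destination can be checked on the primary database; for example:
SQL> SELECT DEST_ID, STATUS, DESTINATION, ERROR
  2> FROM V$ARCHIVE_DEST
  3> WHERE STATUS <> 'INACTIVE';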

81.2. Apply Services


The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply
services automatically apply the redo data on the standby database to maintain consistency with the primary database. It
also allows read-only access to the data. The main difference between physical and logical standby databases is the
manner in which apply services apply the archived redo data:
For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby
database using standard recovery techniques of an Oracle database, as shown in figure below.

Automatic Updating of a Physical Standby Database

For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into
SQL statements and then executes the generated SQL statements on the logical standby database, as shown in figure
below.
Automatic Updating of a Logical Standby Database

81.3. Role Transitions


A Data Guard configuration consists of one database that functions in the primary role and one or more databases that
function in the standby role. Typically, the role of each database does not change. However, if Data Guard is used to
maintain service in response to a primary database outage, you must initiate a role transition between the current primary
database and one standby database in the configuration. To see the current role of the databases, query the
DATABASE_ROLE column in the V$DATABASE view.
The number, location, and type of standby databases in a Data Guard configuration and the way in which redo data from
the primary database is propagated to each standby database determine the role-management options available to you in
response to a primary database outage.
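As noted above, the current role of each database can be checked from V$DATABASE; for example:
SQL> SELECT DB_UNIQUE_NAME, DATABASE_ROLE FROM V$DATABASE;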
This chapter describes how to manage role transitions in a Data Guard configuration. It contains the following topics:
o Introduction to Role Transitions.
o Role Transitions Involving Physical Standby Databases.
o Role Transitions Involving Logical Standby Databases.
The role transitions described in this chapter are invoked manually using SQL statements. You can also use the Oracle
Data Guard broker to simplify role transitions and automate failovers.

81.3.1. Introduction to Role Transitions


A database operates in one of the following mutually exclusive roles: primary or standby. Data Guard enables you to
change these roles dynamically by issuing the SQL statements described in this chapter, or by using either of the Data
Guard broker's interfaces. Oracle Data Guard supports the following role transitions:
Switchover
Allows the primary database to switch roles with one of its standby databases. There is no data loss during a switchover.
After a switchover, each database continues to participate in the Data Guard configuration with its new role.
Failover
Changes a standby database to the primary role in response to a primary database failure. If the primary database was
not operating in either maximum protection mode or maximum availability mode before the failure, some data loss may
occur. If Flashback Database is enabled on the primary database, it can be reinstated as a standby for the new primary
database once the reason for the failure is corrected.

81.3.2. Preparing for a Role Transition


Before starting any role transition, perform the following preparations:
Verify that each database is properly configured for the role that it is about to assume.
Note:
o You must define the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n parameters on each
standby database so that when a switchover or failover occurs, all standby sites continue to receive redo
data from the new primary database.
o Ensure temporary files exist on the standby database that match the temporary files on the primary
database.
o Remove any delay in applying redo that may be in effect on the standby database that will become the new
primary database.
o Before performing a switchover from an Oracle RAC primary database to a physical standby database, shut
down all but one primary database instance. Any primary database instances shut down at this time can be
started after the switchover completes.
o Before performing a switchover or a failover to an Oracle RAC physical standby database, shut down all but
one standby database instance. Any standby database instances shut down at this time can be restarted
after the role transition completes.

81.3.3. Choosing a Target Standby Database for a Role Transition


For a Data Guard configuration with multiple standby databases, there are a number of factors to consider when choosing
the target standby database for a role transition. These include the following:
o Locality of the standby database.
o The capability of the standby database (hardware specifications—such as the number of CPUs, I/O
bandwidth available, and so on).
o The time it will take to perform the role transition. This is affected by how far behind the standby database is
in terms of application of redo data, and how much flexibility you have in terms of trading off application
availability with data loss.
o Standby database type.
The type of standby chosen as the role transition target determines how other standby databases in the configuration will
behave after the role transition. If the new primary was a physical standby before the role transition, all other standby
databases in the configuration will become standbys of the new primary. If the new primary was a logical standby before
the role transition, then all other logical standbys in the configuration will become standbys of the new primary, but
physical standbys in the configuration will continue to be standbys of the old primary and will therefore not protect the new
primary. In the latter case, a future switchover or failover back to the original primary database will return all standbys to
their original role as standbys of the current primary. For the reasons described above, a physical standby is generally the
best role transition target in a configuration that contains both physical and logical standbys.
Note: A snapshot standby cannot be the target of a role transition.
Data Guard provides the V$DATAGUARD_STATS view that can be used to evaluate each standby database in terms of
the currency of the data in the standby database, and the time it will take to perform a role transition if all available redo
data is applied to the standby database. For example:

SQL> COLUMN NAME FORMAT A18
SQL> COLUMN VALUE FORMAT A16
SQL> COLUMN TIME_COMPUTED FORMAT A24
SQL> SELECT * FROM V$DATAGUARD_STATS;

NAME               VALUE            TIME_COMPUTED
------------------ ---------------- ------------------------
apply finish time  +00 00:00:02.4   15-MAY-2005 10:32:49
apply lag          +00 00:00:04     15-MAY-2005 10:32:49
transport lag      +00 00:00:00     15-MAY-2005 10:32:49
The time at which each of the statistics is computed is shown in the TIME_COMPUTED column. The
V$DATAGUARD_STATS.TIME_COMPUTED column is a timestamp taken when the metric in a V$DATAGUARD_STATS
row is computed, and it indicates the freshness of the associated metric. The output above shows that for this standby
database there is no transport lag, that apply services has not yet applied the redo generated in the last 4 seconds (apply
lag), and that it will take apply services 2.4 seconds to finish applying the unapplied redo (apply finish time). The APPLY
LAG and TRANSPORT LAG metrics are computed based on information received from the primary database, and these
metrics become stale if communication between the primary and standby databases is disrupted. An unchanging value in
the TIME_COMPUTED column for the APPLY LAG and TRANSPORT LAG metrics indicates that these metrics are not
being updated (or have become stale), possibly due to a communication fault between the primary and standby
databases.

81.3.4. Switchovers


o A switchover is typically used to reduce primary database downtime during planned outages, such as operating
system or hardware upgrades, or rolling upgrades of the Oracle database software and patch sets.
o A switchover takes place in two phases. In the first phase, the existing primary database undergoes a transition to a
standby role. In the second phase, a standby database undergoes a transition to the primary role.

o Figure 8-1 shows a two-site Data Guard configuration before the roles of the databases are switched. The primary
database is in San Francisco, and the standby database is in Boston.
This illustration shows a Data Guard configuration consisting of a primary database and a standby database.
An application is performing read/write transactions on the primary database in San Francisco, from which online redo
logs are being archived locally and over Oracle Net services to the standby database in Boston. On the Boston standby
location, the archived redo logs are being applied to the standby database, which is performing read-only transactions.
Figure 8-2 shows the Data Guard environment after the original primary database was switched over to a standby
database, but before the original standby database has become the new primary database. At this stage, the Data Guard
configuration temporarily has two standby databases.
Figure Standby Databases Before Switchover to the New Primary Database

This illustration shows a Data Guard configuration during a switchover operation. The San Francisco database (originally
the primary database) has changed to the standby role, but the Boston database has not yet changed to the primary role.
At this point in time, both the San Francisco and Boston databases are operating in the standby role.
Applications that were previously sending read/write transactions to the San Francisco database are preparing to send
read/write transactions to the Boston database. On the Boston standby database, the standby database online redo logs
and local archived redo logs are still being generated. However, no redo logs are being sent or received over the Oracle
Net network. Both of the standby databases are capable of operating in read-only mode.
Figure shows the Data Guard environment after a switchover took place. The original standby database became the new
primary database. The primary database is now in Boston, and the standby database is now in San Francisco.
Figure Data Guard Environment After Switchover
This illustration shows a Data Guard configuration after a switchover operation has occurred. The San Francisco database
(originally the primary database) is now operating as the standby database and the Boston database is now operating as
the primary database.
Preparing for a Switchover
Ensure the prerequisites listed in Section 8.1.1 are satisfied. In addition, the following prerequisites must be met for a
switchover:
For switchovers involving a physical standby database, verify that the primary database is open and that redo apply is
active on the standby database.
For switchovers involving a logical standby database, verify both the primary and standby database instances are open
and that SQL Apply is active.
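For example, one hedged way to confirm that Redo Apply is active on a physical standby database is to look for the
managed recovery process (MRP0) in the V$MANAGED_STANDBY view:

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
  2> FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

If no MRP row is returned, Redo Apply is not running on the standby.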

81.3.5. Failovers


A failover is typically used only when the primary database becomes unavailable, and there is no possibility of restoring it
to service within a reasonable period of time. The specific actions performed during a failover vary based on whether a
logical or a physical standby database is involved in the failover, the state of the Data Guard configuration at the time of
the failover, and on the specific SQL statements used to initiate the failover. Figure shows the result of a failover from a
primary database in San Francisco to a physical standby database in Boston.
Figure Failover to a Standby Database

This illustration shows a two-site Data Guard configuration after a system or software failure occurred. In this figure, the
primary site (in San Francisco) is crossed out to indicate that the site is no longer operational. The Boston site that was
originally a standby site is now operating as the new primary site. Applications that were previously sending read/write
transactions to the San Francisco site when it was the primary site are now sending all read/write transactions to the new
primary site in Boston. The Boston site is writing to online redo logs and local archived redo logs.

81.3.6. Preparing for a Failover


If possible, before performing a failover, you should transfer as much of the available and unapplied primary database
redo data as possible to the standby database. Ensure the prerequisites are satisfied. In addition, the following
prerequisites must be met for a failover:
If a standby database currently running in maximum protection mode will be involved in the failover, first place it in
maximum performance mode by issuing the following statement on the standby database:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
Then, if appropriate standby databases are available, you can reset the desired protection mode on the new primary
database after the failover completes.
This is required because you cannot fail over to a standby database that is in maximum protection mode. In addition, if a
primary database in maximum protection mode is still actively communicating with the standby database, issuing the
ALTER DATABASE statement to change the standby database from maximum protection mode to maximum performance
mode will not succeed. Because a failover removes the original primary database from the Data Guard configuration,
these features serve to protect a primary database operating in maximum protection mode from the effects of an
unintended failover.

81.3.7. Role Transition Triggers


The DB_ROLE_CHANGE system event is signaled whenever a role transition occurs. This system event is signaled
immediately if the database is open when the role transition occurs, or the next time the database is opened if it is closed
when a role transition occurs. The DB_ROLE_CHANGE system event can be used to fire a trigger that performs a set of
actions whenever a role transition occurs.
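As an illustrative sketch (the trigger name and service name below are hypothetical, not part of any standard procedure),
such a trigger might start a role-specific database service after a role transition:

SQL> CREATE OR REPLACE TRIGGER manage_svc_after_role_change
  2  AFTER DB_ROLE_CHANGE ON DATABASE
  3  BEGIN
  4    -- Start the service (hypothetical name) that applications use to
  5    -- connect to this database when it runs in the primary role.
  6    DBMS_SERVICE.START_SERVICE('orcl_rw');
  7  END;
  8  /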

81.3.8. Role Transitions Involving Physical Standby Databases


This section describes how to perform switchovers and failovers involving a physical standby database.

81.3.8.1. Switchovers Involving a Physical Standby Database


This section describes how to perform a switchover. A switchover must be initiated on the current primary database and
completed on the target standby database. The following steps describe how to perform a switchover.
Step 1. Verify it is possible to perform a switchover.
On the current primary database, query the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the
primary database to verify it is possible to perform a switchover. For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;


SWITCHOVER_STATUS
-----------------
TO STANDBY

1 row selected
The TO STANDBY value in the SWITCHOVER_STATUS column indicates that it is possible to switch the primary
database to the standby role. If the TO STANDBY value is not displayed, then verify the Data Guard configuration is
functioning correctly (for example, verify all LOG_ARCHIVE_DEST_n parameter values are specified correctly).
If the SWITCHOVER_STATUS column displays SESSIONS ACTIVE, identify and terminate the active user or SQL
sessions that might prevent the switchover. If, after performing these steps, the column still displays SESSIONS ACTIVE,
you can successfully perform a switchover by appending the WITH SESSION SHUTDOWN clause to the ALTER
DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY statement described in Step 2.
Step 2. Initiate the switchover on the primary database.
To change the current primary database to a physical standby database role, use the following SQL statement on the
primary database:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;


After this statement completes, the primary database is converted into a standby database. The current control file is
backed up to the current SQL session trace file before the switchover. This makes it possible to reconstruct a current
control file, if necessary.
Step 3. Shut down and restart the former primary instance.
Shut down the former primary instance, and restart and mount the database:

SQL> SHUTDOWN IMMEDIATE;


SQL> STARTUP MOUNT;
At this point in the switchover process, both databases are configured as standby databases (see Figure 8-2).
Step 4. Verify the switchover status in the V$DATABASE view.
After you change the primary database to the physical standby role and the switchover notification is received by the
standby databases in the configuration, you should verify if the switchover notification was processed by the target
standby database by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the target standby
database.
For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;


SWITCHOVER_STATUS
-----------------
TO_PRIMARY
1 row selected
If the value in the SWITCHOVER_STATUS column is neither TO_PRIMARY nor SESSIONS_ACTIVE, verify that redo
apply is active and that redo transport is working properly, and continue to query this view until either TO_PRIMARY or
SESSIONS_ACTIVE is displayed.
If the value in the SWITCHOVER_STATUS column is TO_PRIMARY, go to step 5.
Step 5. Switch the target physical standby database role to the primary role.
You can switch a physical standby database from the standby role to the primary role when the standby database
instance is either mounted in Redo Apply mode or open for read-only access. It must be in one of these modes so that the
primary database switchover request can be coordinated. After the standby database is in an appropriate mode, issue the
following SQL statement on the physical standby database that you want to change to the primary role:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


Step 6. Finish the transition of the standby database to the primary role.
Issue the SQL ALTER DATABASE OPEN statement to open the new primary database:

SQL> ALTER DATABASE OPEN;


Step 7. Start redo apply on the new physical standby database.
For example, issue the following statement on the new physical standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE
DISCONNECT FROM SESSION;

81.3.8.2. Failovers Involving a Physical Standby Database


This section describes how to perform failovers involving a physical standby database.
During failovers involving a physical standby database:
o In all cases, after a failover, the original primary database can no longer participate in the Data Guard configuration.
o In most cases, other logical or physical standby databases not directly participating in the failover remain in the
configuration and do not have to be shut down or restarted.
o In some cases, it might be necessary to re-create all standby databases after configuring the new primary database.
o These cases are described, where appropriate, within the failover steps below.
Note:
Oracle recommends you use only the failover steps and commands described in the following sections to perform a
failover. Do not use the ALTER DATABASE ACTIVATE STANDBY DATABASE statement to perform a failover, because
this statement may cause data loss.
Failover Steps
This section describes the steps that must be performed to transition the selected physical standby database to the
primary role. Any other physical or logical standby databases that are also part of the configuration will remain in the
configuration and will not need to be shut down or restarted.
If the target standby database was operating in maximum protection mode or maximum availability mode, no gaps in the
archived redo log files should exist, and you can proceed directly to Step 6. Otherwise, begin with Step 1 to determine if
any manual gap resolution steps must be performed.
Step 1: Identify and resolve any gaps in the archived redo log files.
To determine if there are gaps in the archived redo log files on the target standby database, query the V$ARCHIVE_GAP
view.
The V$ARCHIVE_GAP view contains the sequence numbers of the archived redo log files that are known to be missing
for each thread. The data returned reflects the highest gap only.
For example:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1            90             92
In this example, the gap comprises archived redo log files with sequences 90, 91, and 92 for thread 1. If possible, copy all
of the identified missing archived redo log files from the primary database to the target standby database and register
them. This must be done for each thread.
For example:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';


Step 2: Repeat Step 1 until all gaps are resolved.
The query executed in Step 1 displays information for the highest gap only. After resolving that gap, you must repeat Step
1 until the query returns no rows.
Step 3: Copy any other missing archived redo log files.
To determine if there are any other missing archived redo log files, query the V$ARCHIVED_LOG view on the target
standby database to obtain the highest sequence number for each thread.
For example:
SQL> SELECT UNIQUE THREAD# AS THREAD, MAX(SEQUENCE#)
  2> OVER (PARTITION BY THREAD#) AS LAST FROM V$ARCHIVED_LOG;

    THREAD       LAST
---------- ----------
         1        100
Copy to the target standby database any available archived redo log files from the primary database that contain
sequence numbers higher than the highest sequence number available on the target standby database, and register
them. This must be done for each thread.
For example:
SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
After all available archived redo log files have been registered, query the V$ARCHIVE_GAP view as described in Step 1
to verify no additional gaps were introduced in Step 3.
Note:
If, while performing Steps 1 through 3, you are not able to resolve gaps in the archived redo log files (for example,
because you do not have access to the system that hosted the failed primary database), some data loss will occur during
the failover.
Step 4: Stop Managed Recovery.
Issue the following statement to stop managed recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


Step 5: Verify that the standby database is ready to become a primary database.
Query the SWITCHOVER_STATUS column of the V$DATABASE view on the target standby database. For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;


SWITCHOVER_STATUS
-----------------
TO_PRIMARY
1 row selected
If the value in the SWITCHOVER_STATUS column is neither TO_PRIMARY nor SESSIONS_ACTIVE, verify that redo
apply is active and continue to query this view until either TO_PRIMARY or SESSIONS_ACTIVE is displayed.
If the value in the SWITCHOVER_STATUS column is TO_PRIMARY, go to step 6.
If the value in the SWITCHOVER_STATUS column is SESSIONS_ACTIVE, perform the steps described in Section A.4,
"Problems Switching Over to a Standby Database" to identify and terminate active user or SQL sessions that might
prevent a switchover from being processed. If, after performing these steps, the SWITCHOVER_STATUS column still
displays SESSIONS_ACTIVE, you can proceed to Step 6, and append the WITH SESSION SHUTDOWN clause to the
switchover statement.
Step 6: Initiate a failover on the target physical standby database.
Issue the following statement to initiate the failover:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;


Step 7: Convert the physical standby database to the primary role.
Issue the following statement to switch the physical standby database to the primary role (append the WITH SESSION
SHUTDOWN clause if SESSIONS_ACTIVE was displayed in Step 5):

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


Step 8: Finish the transition of the standby database to the primary database role.
Issue the SQL ALTER DATABASE OPEN statement to open the new primary database:

SQL> ALTER DATABASE OPEN;


Step 9: Back up the new primary database.
Back up the new primary database immediately after the failover. Performing a backup immediately is a
necessary safety measure, because you cannot recover changes made after the failover without a complete backup copy
of the database.
As a result of the failover, the original primary database can no longer participate in the Data Guard configuration, and all
other standby databases are now receiving and applying redo data from the new primary database.
Step 10: Optionally, restore the failed primary database.
After a failover, the original primary database can be converted into a physical standby database of the new primary
database, or it can be re-created as a physical standby database from a backup of the new primary database.
Once the original primary database has been converted into a standby database, a switchover can be performed to
restore it to the primary role.
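A minimal sketch of the Flashback Database approach, assuming flashback logging was enabled on the failed primary
before the failover (the SCN shown is illustrative):

-- On the new primary, determine the SCN at which it became the primary:
SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
-- On the failed primary:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN 1234567;   -- illustrative value from the query above
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
Shut down and remount the converted database, then start Redo Apply so it can begin receiving redo from the new
primary database.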

81.3.8.3. Role Transitions Involving Logical Standby Databases


This section describes how to perform switchovers and failovers involving a logical standby database.
Switchovers Involving a Logical Standby Database
When you perform a switchover that changes roles between a primary database and a logical standby database, always
initiate the switchover on the primary database and complete it on the logical standby database. These steps must be
performed in the order in which they are described or the switchover will not succeed.
Step 1: Verify it is possible to perform a switchover on the primary database.
On the current primary database, query the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the
primary database to verify it is possible to perform a switchover.
For example:
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
-----------------
TO STANDBY
1 row selected
A value of TO STANDBY or SESSIONS ACTIVE in the SWITCHOVER_STATUS column indicates that it is possible to
switch the primary database to the logical standby role. If one of these values is not displayed, then verify the Data Guard
configuration is functioning correctly (for example, verify all LOG_ARCHIVE_DEST_n parameter values are specified
correctly).
Step 2: Prepare the current primary database for the switchover.
To prepare the current primary database for a logical standby database role, issue the following SQL statement on the
primary database:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;


This statement notifies the current primary database that it will soon switch to the logical standby role and begin receiving
redo data from a new primary database. You perform this step on the primary database in preparation to receive the
LogMiner dictionary to be recorded in the redo stream of the current logical standby database, as described in step 3.
The value PREPARING SWITCHOVER is displayed in the V$DATABASE.SWITCHOVER_STATUS column if this
operation succeeds.
Step 3: Prepare the target logical standby database for the switchover.
Use the following statement to build a LogMiner dictionary on the logical standby database that is the target of the
switchover:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;


This statement also starts redo transport services on the logical standby database, which then begins transmitting its redo
data to the current primary database and to other standby databases in the Data Guard configuration. The sites receiving
redo data from this logical standby database accept the redo data but do not apply it.
Depending on the work to be done and the size of the database, the switchover can take some time to complete.
The V$DATABASE.SWITCHOVER_STATUS on the logical standby database initially shows PREPARING DICTIONARY
while the LogMiner dictionary is being recorded in the redo stream. Once this has completed successfully, the
SWITCHOVER_STATUS column shows PREPARING SWITCHOVER.
Step 4: Ensure the current primary database is ready for the future primary database's redo stream.
Before you can complete the role transition of the primary database to the logical standby role, verify the LogMiner
dictionary was received by the primary database by querying the SWITCHOVER_STATUS column of the V$DATABASE
fixed view on the primary database. Without the receipt of the LogMiner dictionary, the switchover cannot proceed,
because the current primary database will not be able to interpret the redo records sent from the future primary database.
The SWITCHOVER_STATUS column shows the progress of the switchover.
When the query returns the TO LOGICAL STANDBY value, you can proceed with Step 5. For example:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;


SWITCHOVER_STATUS
-----------------
TO LOGICAL STANDBY
1 row selected
Note:
You can cancel the switchover operation by issuing the following statements in the order shown:
Cancel switchover on the primary database:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER CANCEL;


Cancel the switchover on the logical standby database:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER CANCEL;


Step 5: Switch the primary database to the logical standby database role.
To complete the role transition of the primary database to a logical standby database, issue the following SQL statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;


This statement waits for all current transactions on the primary database to end, prevents any new users from starting
new transactions, and establishes a point in time at which the switchover will be committed.
Executing this statement will also prevent users from making any changes to the data being maintained in the logical
standby database. To ensure faster execution, ensure the primary database is in a quiet state with no update activity
before issuing the switchover statement (for example, have all users temporarily log off the primary database). You can
query the V$TRANSACTION view for information about the status of any current in-progress transactions that could delay
execution of this statement.
The primary database has now undergone a role transition to run in the standby database role.
When a primary database undergoes a role transition to a logical standby database role, you do not have to shut down
and restart the database.
Step 6: Ensure all available redo has been applied to the target logical standby database that is about to become
the new primary database.
After you complete the role transition of the primary database to the logical standby role and the switchover notification is
received by the standby databases in the configuration, you should verify the switchover notification was processed by the
target standby database by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the target
standby database. Once all available redo records are applied to the logical standby database, SQL Apply automatically
shuts down in anticipation of the expected role transition.
The SWITCHOVER_STATUS value is updated to show progress during the switchover. When the status is TO PRIMARY,
you can proceed with Step 7.
For example:
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
-----------------
TO PRIMARY
1 row selected
Step 7: Switch the target logical standby database to the primary database role.
On the logical standby database that you want to switch to the primary role, use the following SQL statement to switch the
logical standby database to the primary role:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


There is no need to shut down and restart any logical standby databases that are in the Data Guard configuration. All
other logical standbys in the configuration will become standbys of the new primary, but any physical standby databases
will remain standbys of the original primary database.
Step 8: Start SQL Apply on the new logical standby database.
On the new logical standby database, start SQL Apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;


Failovers Involving a Logical Standby Database
This section describes how to perform failovers involving a logical standby database. A failover role transition involving a
logical standby database necessitates taking corrective actions on the failed primary database and on all bystander logical
standby databases. If Flashback Database was not enabled on the failed primary database, you must re-create the
database from backups taken from the current primary database. Otherwise, you can convert a failed primary database to
be a logical standby database for the new primary database.
Depending on the protection mode for the configuration and the attributes you chose for redo transport services, it might
be possible to automatically recover all or some of the primary database modifications.
Step 1: Copy and register any missing archived redo log files to the target logical standby database slated to become the
new primary database.
Depending on the condition of the components in the configuration, you might have access to the archived redo log files
on the primary database. If so, do the following:
1. Determine if any archived redo log files are missing on the logical standby database.
2. Copy the missing log files from the primary database to the logical standby database.
3. Register the copied log files.
You can register an archived redo log file with the logical standby database by issuing the following statement. For
example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE
  2> '/disk1/oracle/dbs/log-%r_%s_%t.arc';

Database altered.
Step 2: Enable remote destinations.
If you have not previously configured role-based destinations, identify the initialization parameters that correspond to the
remote logical standby destinations for the new primary database, and manually enable archiving of redo data for each of
these destinations.
For example, to enable archiving for the remote destination defined by the LOG_ARCHIVE_DEST_2 parameter, issue the
following statement:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;


To ensure this change will persist if the new primary database is later restarted, update the appropriate text initialization
parameter file or server parameter file. In general, when the database operates in the primary role, you must enable
archiving to remote destinations, and when the database operates in the standby role, you must disable archiving to
remote destinations.
Step 3: Activate the new primary database.
Issue the following statement on the target logical standby database (that you are transitioning to the new primary role):

SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE FINISH APPLY;


This statement stops the RFS process, applies remaining redo data in the standby redo log file before the logical standby
database becomes a primary database, stops SQL Apply, and activates the database in the primary database role.
If the FINISH APPLY clause is not specified, then unapplied redo from the current standby redo log file will not be applied
before the standby database becomes the primary database.
Step 4: Recover other standby databases after a failover.
Depending on their state, bystander logical standby databases may require the corrective actions described at the
beginning of this section before they can receive and apply redo data from the new primary database.
Step 5: Back up the new primary database.
Back up the new primary database immediately after the Data Guard database failover. Immediately performing a backup
is a necessary safety measure, because you cannot recover changes made after the failover without a complete backup
copy of the database.
Step 6: Restore the failed primary database.
After a failover, the original primary database can be converted into a logical standby database of the new primary
database, or it can be re-created as a logical standby database from a backup of the new primary database.
Once the original primary database has been converted into a standby database, a switchover can be performed to
restore it to the primary role.

82. Redo Apply Services


This chapter describes how redo data is applied to a standby database. It includes the following topics:
 Introduction to Apply Services
 Apply Services Configuration Options
 Applying Redo Data to Physical Standby Databases
 Applying Redo Data to Logical Standby Databases

82.1. Introduction to Apply Services


Apply services automatically apply redo to standby databases to maintain synchronization with the primary database and
allow transactionally consistent access to the data.

By default, apply services wait for the full archived redo log file to arrive on the standby database before applying it to the
standby database. However, if you use a standby redo log, you can enable real-time apply, which allows Data Guard to
recover redo data from the current standby redo log file as it is being filled.

Apply services use the following methods to maintain physical and logical standby databases:
 Redo apply (physical standby databases only)
Uses media recovery to keep the primary and physical standby databases synchronized.
 SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL
statements against the logical standby database.

Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical
standby database are opened in read-only mode for reporting purposes (provided the database guard was set
appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL
statements are being applied.

The sections in this chapter describe Redo Apply, SQL Apply, real-time apply, and delayed apply in more detail.

82.2. Apply Services Configuration Options


This section contains the following topics:
 Using Real-Time Apply to Apply Redo Data Immediately
 Specifying a Time Delay for the Application of Archived Redo Log Files
Using Real-Time Apply to Apply Redo Data Immediately
If the real-time apply feature is enabled, apply services can apply redo data as it is received, without waiting for the current
standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files
have been applied already to the standby database by the time the failover or switchover begins.
Use the ALTER DATABASE statement to enable the real-time apply feature, as follows:
 For physical standby databases, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE statement.
 For logical standby databases, issue the ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
statement.
Standby redo log files are required to use real-time apply.
Figure shows a Data Guard configuration with a local destination and a standby destination. As the remote file server
(RFS) process writes the redo data to standby redo log files on the standby database, apply services can recover redo
from standby redo log files as they are being filled.

(Figure) Applying Redo Data to a Standby Destination Using Real-Time Apply

82.2.1. Specifying a Time Delay for the Application of Archived Redo Log Files
In some cases, you may want to create a time lag between the time when redo data is received from the primary site and
when it is applied to the standby database. You can specify a time interval (in minutes) to protect against the application of
corrupted or erroneous data to the standby database. When you set a DELAY interval, it does not delay the transport of
the redo data to the standby database. Instead, the time lag you specify begins when the redo data is completely archived
at the standby destination.

Note: If you define a delay for a destination that has real-time apply enabled, the delay is ignored.

Specifying a Time Delay


You can set a time delay on primary and standby databases using the DELAY=minutes attribute of the
LOG_ARCHIVE_DEST_n initialization parameter to delay applying archived redo log files to the standby database. By
default, there is no time delay. If you specify the DELAY attribute without specifying a value, then the default delay interval
is 30 minutes.
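For example, the following setting (the service name and DB_UNIQUE_NAME are illustrative) delays apply at the
standby destination by four hours:

LOG_ARCHIVE_DEST_2='SERVICE=boston DELAY=240
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'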

Canceling a Time Delay


You can cancel a specified delay interval as follows:
 For physical standby databases, use the NODELAY keyword of the RECOVER MANAGED STANDBY
DATABASE clause:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;

 For logical standby databases, specify the following SQL statement:


SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;

These commands result in apply services immediately beginning to apply archived redo log files to the standby database,
before the time interval expires.
Using Flashback Database as an Alternative to Setting a Time Delay

As an alternative to setting an apply delay, you can use Flashback Database to recover from the application of corrupted
or erroneous data to the standby database. Flashback Database can quickly and easily flash back a standby database to
an arbitrary point in time.
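A minimal sketch of that recovery path on a physical standby (the flashback interval is illustrative): stop Redo Apply,
flash the standby back to a point before the erroneous redo was applied, and restart apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' MINUTE);
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;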

82.3. Applying Redo Data to Physical Standby Databases


By default, the redo data is applied from archived redo log files. When performing Redo Apply, a physical standby
database can use the real-time apply feature to apply redo directly from the standby redo log files as they are being
written by the RFS process. Note that apply services cannot apply redo data to a physical standby database when it is
opened in read-only mode.
This section contains the following topics:
 Starting Redo Apply
 Stopping Redo Apply
 Monitoring Redo Apply on Physical Standby Databases

82.3.1. Starting Redo Apply


To start apply services on a physical standby database, ensure the physical standby database is started and mounted and
then start Redo Apply using the SQL ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.
You can specify that Redo Apply runs as a foreground session or as a background process, and enable it with real-time
apply.
 To start Redo Apply in the foreground, issue the following SQL statement:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

If you start a foreground session, control is not returned to the command prompt until recovery is canceled by
another session.
 To start Redo Apply in the background, include the DISCONNECT keyword on the SQL statement. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

This statement starts a detached server process and immediately returns control to the user. While the managed
recovery process is performing recovery in the background, the foreground process that issued the RECOVER
statement can continue performing other tasks. This does not disconnect the current SQL session.
 To start real-time apply, include the USING CURRENT LOGFILE clause on the SQL statement. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

82.3.2. Stopping Redo Apply


To stop Redo Apply, issue the following SQL statement in another window:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

82.4. Applying Redo Data to Logical Standby Databases


SQL Apply converts the data from the archived redo log or standby redo log into SQL statements and then executes
these SQL statements on the logical standby database. Because the logical standby database remains open, tables that
are maintained can be used simultaneously for other tasks such as reporting, summations, and queries.
This section contains the following topics:
 Starting SQL Apply
 Stopping SQL Apply on a Logical Standby Database
 Monitoring SQL Apply on Logical Standby Databases

82.4.1. Starting and Stopping SQL Apply


To start SQL Apply, start the logical standby database and issue the following statement:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
To start real-time apply on the logical standby database to immediately apply redo data from the standby redo log files on
the logical standby database, include the IMMEDIATE keyword as shown in the following statement:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Stopping SQL Apply
To stop SQL Apply, issue the following statement on the logical standby database:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

When you issue this statement, SQL Apply waits until it has committed all complete transactions that were in the process
of being applied. Thus, this command may not stop the SQL Apply processes immediately.
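If you cannot wait for in-flight transactions to complete, SQL Apply can also be stopped immediately:

SQL> ALTER DATABASE ABORT LOGICAL STANDBY APPLY;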

83. Redo Transport Services


This chapter describes how to configure and monitor Oracle redo transport services. The following topics are discussed:
 Introduction to Redo Transport Services
 Configuring Redo Transport Services
 Monitoring Redo Transport Services

83.1. Introduction to Redo Transport Services


Redo transport services performs the automated transfer of redo data between Oracle databases. The following redo
transport destinations are supported:
 Oracle Data Guard standby databases
 Archive Log repository
o This destination type is used for temporary offsite storage of archived redo log files. An archive log repository
consists of an Oracle database instance and a physical standby control file. An archive log repository does not
contain datafiles, so it cannot support role transitions.
o The procedure used to create an archive log repository is identical to the procedure used to create a physical
standby database, except for the copying of datafiles.
 Oracle Streams downstream capture databases
 Oracle Change Data Capture staging databases
An Oracle database can send redo data to up to nine redo transport destinations. Each redo transport destination is
individually configured to receive redo data via one of two redo transport modes:
Synchronous
The synchronous redo transport mode transmits redo data synchronously with respect to transaction
commitment. A transaction cannot commit until all redo generated by that transaction has been successfully sent
to every enabled redo transport destination that uses the synchronous redo transport mode. This transport mode
is used by the Maximum Protection and Maximum Availability data protection modes.
Asynchronous
The asynchronous redo transport mode transmits redo data asynchronously with respect to transaction
commitment. A transaction can commit without waiting for the redo generated by that transaction to be
successfully sent to any redo transport destination that uses the asynchronous redo transport mode.

83.2. Configuring Redo Transport Services


This section describes how to configure redo transport services. The following topics are discussed:
 Redo Transport Security
 Configuring an Oracle Database to Send Redo Data
 Configuring an Oracle Database to Receive Redo Data

83.2.1. Redo Transport Security


Redo transport uses Oracle Net sessions to transport redo data. These redo transport sessions are authenticated using
either the Secure Sockets Layer (SSL) protocol or a remote login password file.
Redo Transport Authentication Using SSL
Secure Sockets Layer (SSL) is an industry standard protocol for securing network connections. SSL uses RSA public key
cryptography and symmetric key cryptography to provide authentication, encryption, and data integrity. SSL is
automatically used for redo transport authentication between two Oracle databases if:
 The databases are members of the same Oracle Internet Directory (OID) enterprise domain and that domain
allows the use of current user database links.
 The LOG_ARCHIVE_DEST_n, FAL_SERVER, and FAL_CLIENT database initialization parameters that
correspond to the databases use Oracle Net connect descriptors configured for SSL.
 Each database has an Oracle wallet or a supported hardware security module that contains a user certificate with
a distinguished name (DN) that matches the DN in the OID entry for the database.
Redo Transport Authentication Using a Password File
If the SSL authentication requirements are not met, each database must use a remote login password file. In a Data
Guard configuration, all physical and snapshot standby databases must use a copy of the password file from the primary
database, and that copy must be refreshed whenever the SYSOPER or SYSDBA privilege is granted or revoked, and after
the password of any user with these privileges is changed.
When a password file is used for redo transport authentication, the password of the user account used for redo transport
authentication is compared between the database initiating a redo transport session and the target database. The
password must be the same at both databases to create a redo transport session.
By default, the password of the SYS user is used to authenticate redo transport sessions when a password file is used.
The REDO_TRANSPORT_USER database initialization parameter can be used to select a different user password for
redo transport authentication by setting this parameter to the name of any user who has been granted the SYSOPER
privilege. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the
same value on the redo source database and at each redo transport destination.
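A hedged sketch of such a configuration (the user name and password are hypothetical); the same steps would be
repeated at each database in the configuration:

SQL> CREATE USER redo_xport IDENTIFIED BY oracle;   -- hypothetical user and password
SQL> GRANT SYSOPER TO redo_xport;                   -- also adds the user to the password file
SQL> ALTER SYSTEM SET REDO_TRANSPORT_USER=REDO_XPORT SCOPE=BOTH;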

83.2.2. Configuring an Oracle Database to Send Redo Data


This section describes how to configure an Oracle database to send redo data to a redo transport destination.
 The LOG_ARCHIVE_DEST_n database initialization parameter (where n is an integer from 1 to 10) is used to
specify the location of a local archive redo log or to specify a redo transport destination. This section describes the
latter use of this parameter.
 There is a LOG_ARCHIVE_DEST_STATE_n database initialization parameter (where n is an integer from 1 to 10)
that corresponds to each LOG_ARCHIVE_DEST_n parameter. This parameter is used to enable or disable the
corresponding redo destination. The following table shows the valid values that can be assigned to this parameter.
Table LOG_ARCHIVE_DEST_STATE_n Initialization Parameter Values

Value      Description
ENABLE     Redo transport services can transmit redo data to this destination. This is the default.
DEFER      Redo transport services will not transmit redo data to this destination.
ALTERNATE  This destination will become enabled if communication to its associated destination fails.

 A redo transport destination is configured by setting the LOG_ARCHIVE_DEST_n parameter to a character string
that includes one or more attributes. This section briefly describes the most commonly used attributes.
 The SERVICE attribute, which is a mandatory attribute for a redo transport destination, must be the first attribute
specified in the attribute list. The SERVICE attribute is used to specify the Oracle Net service name used to
connect to a redo transport destination.
 The SYNC attribute is used to specify that the synchronous redo transport mode be used to send redo data to a
redo transport destination.
 The ASYNC attribute is used to specify that the asynchronous redo transport mode be used to send redo data to
a redo transport destination. The asynchronous redo transport mode will be used if neither the SYNC nor the
ASYNC attribute is specified.
 The NET_TIMEOUT attribute is used to specify how long the LGWR process will block waiting for an
acknowledgement that redo data has been successfully received by a destination that uses the synchronous redo
transport mode. If an acknowledgement is not received within NET_TIMEOUT seconds, the redo transport
connection is terminated and an error is logged.
 Oracle recommends that the NET_TIMEOUT attribute be specified whenever the synchronous redo transport
mode is used, so that the maximum duration of a redo source database stall caused by a redo transport fault can
be precisely controlled.
 The AFFIRM attribute is used to specify that redo received from a redo source database is not acknowledged until
it has been written to the standby redo log. The NOAFFIRM attribute is used to specify that received redo is
acknowledged without waiting for received redo to be written to the standby redo log.
 The DB_UNIQUE_NAME attribute is used to specify the DB_UNIQUE_NAME of a redo transport destination. The
DB_UNIQUE_NAME attribute must be specified if the LOG_ARCHIVE_CONFIG database initialization parameter
has been defined and its value includes a DG_CONFIG list.
 If the DB_UNIQUE_NAME attribute is specified, its value must match one of the DB_UNIQUE_NAME values in
the DG_CONFIG list. It must also match the value of the DB_UNIQUE_NAME database initialization parameter at
the redo transport destination. If either match fails, an error is logged and redo transport will not be possible to that
destination.
 The VALID_FOR attribute is used to specify when redo transport services transmits redo data to a redo transport
destination. Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at
every site in a Data Guard configuration so that redo transport services will continue to send redo data to all
standby databases after a role transition, regardless of which standby database assumes the primary role.
 The REOPEN attribute is used to specify the minimum number of seconds between automatic reconnect attempts
to a redo transport destination that is inactive because of a previous error.
 The COMPRESSION attribute is used to specify that redo data is transmitted to a redo transport destination in
compressed form when resolving redo data gaps. Redo transport compression can significantly improve redo gap
resolution time when network links with low bandwidth and high latency are used for redo transport.
The following example uses all of the LOG_ARCHIVE_DEST_n attributes described in this section. Two redo transport
destinations are defined and enabled. The first destination uses the asynchronous redo transport mode. The second
destination uses the synchronous redo transport mode with a 30-second timeout. A DB_UNIQUE_NAME has been
specified for both destinations, as has the use of compression when resolving redo gaps. If a redo transport fault occurs at
either destination, redo transport will attempt to reconnect to that destination, but not more frequently than once every 60
seconds.
DB_UNIQUE_NAME=BOSTON
LOG_ARCHIVE_CONFIG='DG_CONFIG=(BOSTON,CHICAGO,DENVER)'
LOG_ARCHIVE_DEST_2='SERVICE=CHICAGO
ASYNC
NOAFFIRM
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
REOPEN=60
COMPRESSION=ENABLE
DB_UNIQUE_NAME=CHICAGO'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_DEST_3='SERVICE=DENVER
SYNC
AFFIRM
NET_TIMEOUT=30
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
REOPEN=60
COMPRESSION=ENABLE
DB_UNIQUE_NAME=DENVER'
LOG_ARCHIVE_DEST_STATE_3='ENABLE'
Viewing Attributes With V$ARCHIVE_DEST
The V$ARCHIVE_DEST view can be queried to see the current settings and status for each redo transport destination.
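For example, the following query (showing a subset of the view's columns) lists the status and any error recorded for
each configured destination:

SQL> SELECT DEST_ID, STATUS, DESTINATION, ERROR
  2> FROM V$ARCHIVE_DEST WHERE DESTINATION IS NOT NULL;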

83.2.3. Configuring an Oracle Database to Receive Redo Data


This section describes how to configure a redo transport destination to receive and to archive redo data from a redo
source database. The following topics are discussed:
 Creating and Managing a Standby Redo Log
 Configuring Standby Redo Log Archival
Creating and Managing a Standby Redo Log
The synchronous and asynchronous redo transport modes require that a redo transport destination have a standby redo
log. A standby redo log is used to store redo received from another Oracle database. Standby redo logs are structurally
identical to redo logs, and are created and managed using the same SQL statements used to create and manage redo
logs.
Redo received from another Oracle database via redo transport is written to the current standby redo log group by an RFS
background process. When a log switch occurs on the redo source database, incoming redo is then written to the next
standby redo log group, and the previously used standby redo log group is archived by an ARCn background process.
The process of sequentially filling and then archiving redo log file groups at a redo source database is mirrored at each
redo transport destination by the sequential filling and archiving of standby redo log groups.
Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database.
For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the
standby redo log at a redo transport destination be of the same size.
The standby redo log must have at least one more redo log group than the redo log on the redo source database.
Perform the following query on a redo source database to determine the size of each log file and the number of log groups
in the redo log:

SQL> SELECT GROUP#, BYTES FROM V$LOG;


Perform the following query on a redo destination database to determine the size of each log file and the number of log
groups in the standby redo log:

SQL> SELECT GROUP#, BYTES FROM V$STANDBY_LOG;


Oracle recommends that a standby redo log be created on the primary database in a Data Guard configuration so that it is
immediately ready to receive redo data following a switchover to the standby role.
The ALTER DATABASE ADD STANDBY LOGFILE SQL statement is used to create a standby redo log and to add
standby redo log groups to an existing standby redo log.
For example, assume that the redo log on the redo source database has two redo log groups and that each of those
contain one 500 MB redo log file. In this case, the standby redo log should have at least 3 standby redo log groups to
satisfy the requirement that a standby redo log must have at least one more redo log group than the redo log at the redo
source database.
The following SQL statements might be used to create a standby redo log that is appropriate for the previous scenario:

ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog1.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog2.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE ('/oracle/dbs/slog3.rdo') SIZE 500M;
Caution: Whenever a redo log group is added to the primary database in an Oracle Data Guard configuration, a standby
redo log group must also be added to the standby redo log at each standby database in the configuration that uses the
synchronous redo transport mode. If this is not done, a primary database that is running in the maximum protection data
protection mode may shut down, and a primary database that is running in the maximum availability data protection mode
may shift to the maximum performance data protection mode.
Configuring Standby Redo Log Archival
This section describes how to configure standby redo log archival.
Standby Redo Log Archival to a Flash Recovery Area
Take the following steps to set up standby redo log archival to a flash recovery area:
1. Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to USE_DB_RECOVERY_FILE_DEST.
2. Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby
redo log archival.
The following are some sample parameter values that might be used to configure a physical standby database to archive
its standby redo log to the flash recovery area:

LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Oracle recommends the use of a flash recovery area, because it simplifies the management of archived redo log files.
Standby Redo Log Archival to a Local File System Location
Take the following steps to set up standby redo log archival to a local file system location:
1. Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to a valid pathname.
2. Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo
log archival.
The following are some sample parameter values that might be used to configure a physical standby database to archive
its standby redo log to a local file system location:

LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive


VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

83.3. Monitoring Redo Transport Services


This section discusses the following topics:
1. Monitoring Redo Transport Status
2. Monitoring Synchronous Redo Transport Response Time
3. Redo Gap Detection and Resolution
4. Redo Transport Services Wait Events

83.3.1. Monitoring Redo Transport Status


This section describes the steps used to monitor redo transport status on a redo source database.
Step 1: Determine the most recently archived redo log file.
Perform the following query on the redo source database to determine the most recently archived sequence number for
each thread:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;


Step 2: Determine the most recently archived redo log file at each redo transport destination.
Perform the following query on the redo source database to determine the most recently archived redo log file at each
redo transport destination:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#


2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION        STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID                 1           947
standby1           VALID                 1           947
The most recently archived redo log file should be the same for each destination. If it is not, a status other than VALID
may identify an error encountered during the archival operation to that destination.
Step 3: Find out if archived redo log files have been received at a redo transport destination.
A query can be performed at a redo source database to find out if an archived redo log file has been received at a
particular redo transport destination. Each destination has an ID number associated with it. You can query the DEST_ID
column of the V$ARCHIVE_DEST view on a database to identify each destination's ID number.
Assume that destination 1 points to the local archived redo log and that destination 2 points to a redo transport
destination. Perform the following query at the redo source database to find out if any log files are missing at the redo
transport destination:

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM


2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
3> LOCAL WHERE
4> LOCAL.SEQUENCE# NOT IN
5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
6> THREAD# = LOCAL.THREAD#);

   THREAD#  SEQUENCE#
---------- ----------
         1         12
         1         13
         1         14
Step 4: Trace the progression of redo transmitted to a redo transport destination.
Set the LOG_ARCHIVE_TRACE database initialization parameter at a redo source database and at each redo transport
destination to trace redo transport progress.
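For example (the trace level shown is illustrative; higher values produce progressively more detail):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=1;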

83.3.2. Monitoring Synchronous Redo Transport Response Time


 The V$REDO_DEST_RESP_HISTOGRAM view contains response time data for each redo transport destination.
This response time data is maintained for redo transport messages sent via the synchronous redo transport
mode.
 The data for each destination consists of a series of rows, with one row for each response time. To simplify record
keeping, response times are rounded up to the nearest whole second for response times less than 300 seconds.
Response times greater than 300 seconds are rounded up to 600, 1200, 2400, 4800, or 9600 seconds.
 Each row contains four columns: FREQUENCY, DURATION, DEST_ID, and TIME.
 The FREQUENCY column contains the number of times that a given response time has been observed. The
DURATION column corresponds to the response time. The DEST_ID column identifies the destination. The TIME
column contains a timestamp taken when the row was last updated.
 The response time data in this view is useful for identifying synchronous redo transport mode performance issues
that can affect transaction throughput on a redo source database. It is also useful for tuning the NET_TIMEOUT
attribute.
 The next three examples show example queries for destination 2, which corresponds to the
LOG_ARCHIVE_DEST_2 parameter. To display response time data for a different destination, simply change the
DEST_ID in the query.
Perform the following query on a redo source database to display the response time histogram for destination 2:
SQL> SELECT FREQUENCY, DURATION FROM
2> V$REDO_DEST_RESP_HISTOGRAM WHERE DEST_ID=2 AND FREQUENCY>1;
Perform the following query on a redo source database to display the fastest response time for destination 2:

SQL> SELECT min(DURATION) FROM V$REDO_DEST_RESP_HISTOGRAM
  2> WHERE DEST_ID=2 AND FREQUENCY>1;

Perform the following query on a redo source database to display the slowest response time for destination 2:

SQL> SELECT max(DURATION) FROM V$REDO_DEST_RESP_HISTOGRAM
  2> WHERE DEST_ID=2 AND FREQUENCY>1;
Note:
The highest observed response time for a destination cannot exceed the highest NET_TIMEOUT value specified for that destination, because synchronous redo transport mode sessions are terminated if a redo transport destination does not respond to a redo transport message within NET_TIMEOUT seconds.
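NET_TIMEOUT is specified as an attribute of the destination parameter; in this sketch the service name and the 30-second timeout are illustrative values:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 SYNC NET_TIMEOUT=30';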

83.3.3. Redo Gap Detection and Resolution


A redo gap occurs whenever redo transmission is interrupted. When redo transmission resumes, redo transport services automatically detect the redo gap and resolve it by sending the missing redo to the destination.
The time needed to resolve a redo gap is directly proportional to the size of the gap and inversely proportional to the effective throughput of the network link between the redo source database and the redo transport destination. Redo transport services offer two options that may reduce redo gap resolution time when low-performance network links are used:
Redo Transport Compression
The COMPRESSION attribute of the LOG_ARCHIVE_DEST_n parameter can be used to specify that redo transport
compression be used to compress the redo sent to resolve a redo gap.
Parallel Redo Transport Network Sessions
The MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n parameter can be used to specify that more
than one network session be used to send the redo needed to resolve a redo gap.
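Both are attributes of the destination parameter. A minimal sketch, with an illustrative service name and attribute values:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 COMPRESSION=ENABLE MAX_CONNECTIONS=3';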

83.4. Manual Gap Resolution


In some situations, gap resolution cannot be performed automatically and must be performed manually. For example, redo gap resolution must be performed manually on a logical standby database if the primary database is unavailable.
Perform the following query at the physical standby database to determine if there is a redo gap on a physical standby database:

SQL> SELECT * FROM V$ARCHIVE_GAP;

    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
----------- ------------- --------------
          1             7             10
The output from the previous example indicates that the physical standby database is currently missing log files from
sequence 7 to sequence 10 for thread 1.
Perform the following query on the primary database to locate the archived redo log files on the primary database
(assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND
  2> DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
Note:
This query may return consecutive sequences for a given thread. In that case, there is no actual gap, but the associated
thread was disabled and enabled within the time period of generating these two archived logs. The query also does not
identify the gap that may exist at the tail end for a given thread. For instance, if the primary database has generated
archived logs up to sequence 100 for thread 1, and the latest archived log that the logical standby database has received
for the given thread is the one associated with sequence 77, this query will not return any rows, even though a gap exists for the archived logs associated with sequences 78 to 100.
Copy these log files to the physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement. For example:

SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_9.arc';
Note:
The V$ARCHIVE_GAP view on a physical standby database only returns the gap that is currently blocking Redo Apply
from continuing. After resolving the gap, query the V$ARCHIVE_GAP view again on the physical standby database to
determine if there is another gap sequence. Repeat this process until there are no more gaps.
To determine if there is a redo gap on a logical standby database, query the DBA_LOGSTDBY_LOG view on the logical
standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files
because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only
one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at
the file shown as sequence number 6:

SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc
Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the
ALTER DATABASE REGISTER LOGICAL LOGFILE statement. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_7.arc';
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_8.arc';
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_9.arc';
Note:
A query based on the DBA_LOGSTDBY_LOG view on a logical standby database, as specified above, only returns the
gap that is currently blocking SQL Apply from continuing. After resolving the gap, query the DBA_LOGSTDBY_LOG view
again on the logical standby database to determine if there is another gap sequence. Repeat this process until there are
no more gaps.

83.4.1. Redo Transport Services Wait Events


The following table lists several of the Oracle wait events used to track redo transport wait time on a redo source database. These wait events are found in the V$SYSTEM_EVENT dynamic performance view.
Table Redo Transport Wait Events

Wait Event            Description
LNS wait on ATTACH    Total time spent waiting for redo transport sessions to be established to all ASYNC and SYNC redo transport destinations
LNS wait on SENDREQ   Total time spent waiting for redo data to be written to all ASYNC and SYNC redo transport destinations
LNS wait on DETACH    Total time spent waiting for redo transport connections to be terminated to all ASYNC and SYNC redo transport destinations
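A quick way to view these timings (a sketch; TIME_WAITED in V$SYSTEM_EVENT is reported in hundredths of a second):

SQL> SELECT EVENT, TOTAL_WAITS, TIME_WAITED
  2> FROM V$SYSTEM_EVENT WHERE EVENT LIKE 'LNS wait%';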

84. Logical Standby Dataguard


84.1. Introduction
Oracle started shipping Standby Database features with the release of Oracle 8.0.4, and they were enhanced with the help of user feedback in subsequent releases. There are several ways to set up highly available (HA) databases, including Oracle 10g features like RAC and third-party clustering products such as Veritas Cluster Server (VCS), HP ServiceGuard, and Sun Cluster (SC). These clustered systems avoid single points of failure through software and hardware redundancy: in the case of a failure, tasks being performed by the failed component are taken over by a backup component. Although such redundancy is good for high availability and scalability, it does not protect against user mistakes, data corruption, and other disasters that may destroy the database itself. That is where the Oracle 10g Data Guard and Standby Database features protect your mission-critical databases.
If you are using Oracle Enterprise Edition, both the Data Guard broker and the Standby Database features are included at no extra cost. The term "Data Guard" is synonymous with Standby Database in many ways, as Oracle renamed the Standby Database feature to "Oracle Data Guard" in release 9.0.1.
DBAs have the option to set up two different types of standby databases: a physical standby database and a logical standby database. Physical standby databases are physically identical to the primary database, meaning all objects in the primary database are the same as in the standby database. Logical standby databases are logically identical to the primary database, although the physical organization and structure of the data can be different. A physical standby is the traditional standby database, identical to the primary on a block-for-block basis; it is updated by performing media recovery (imagine a DBA sitting in the office, recovering the database constantly). A logical standby database is updated using SQL statements. The advantage of a logical standby database is that it can be used for recovery and reporting simultaneously; it can serve a disaster recovery project while data warehouse users use it for reporting.

84.2. When to choose Logical Standby database


Reporting: Synchronization of the logical standby database with the primary database is done using LogMiner technology, which transforms standard archived redo logs into SQL statements and applies them to the logical standby database. Therefore, the logical standby database must remain open, and the tables that are maintained can be used simultaneously for reporting.
System Resources: Besides the efficient utilization of system resources, reporting tasks, summations, and queries can be optimized by creating additional indexes and materialized views, since the primary and logical standby databases can have different physical layouts while still providing switchover and failover protection for the primary database.

84.3. Prerequisite Conditions for creating a Logical Standby Database:


1.  Determine if the primary database contains tables and datatypes that are not supported by a logical standby database. If the primary database contains unsupported tables, log apply services will exclude them when applying changes to the logical standby database.
SQL> SELECT * FROM DBA_LOGSTDBY_UNSUPPORTED;
OWNER   TABLE_NAME           COLUMN_NAME   DATA_TYPE
------- -------------------- ------------- ---------------------------
WMSYS   WM$UDTRIG_INFO       TRIG_CODE     LONG
WMSYS   WM$VERSIONED_TABLES  UNDO_CODE     WM$ED_UNDO_CODE_TABLE_TYPE
2 rows selected.
2.  To maintain data in a logical standby database, SQL Apply must be able to identify the columns that uniquely identify each row that has been updated in the primary database. Tables that do not have a primary key or a non-null unique index can be identified by querying the DBA_LOGSTDBY_NOT_UNIQUE view; supplemental logging provides row identification for such tables.
SQL> SELECT OWNER, TABLE_NAME, BAD_COLUMN FROM DBA_LOGSTDBY_NOT_UNIQUE;
OWNER    TABLE_NAME  B
-------- ----------- -
VCSUSER  VCS         N
A BAD_COLUMN value of 'N' indicates that the table contains enough column information to maintain the table in the logical standby database, whereas 'Y' indicates that a table column is defined using an unbounded data type, such as LONG.
Add a primary key to the tables that do not have one to improve performance. If the table has a primary key or a unique index with a non-null column, the amount of information added to the redo log is minimal.
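If the application already guarantees uniqueness, a RELY constraint avoids the cost of validating the key; the schema, table, and column names in this sketch are hypothetical:

SQL> ALTER TABLE scott.emp ADD PRIMARY KEY (empno) RELY DISABLE;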
3.  Ensure that the primary database is in ARCHIVELOG mode.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/app/oracle/admin/myDB/arch
Oldest online log sequence 345
Next log sequence to archive 347
Current log sequence 347
4.  Ensure supplemental logging is enabled and log parallelism is enabled on the primary database. Supplemental logging
must be enabled because the logical standby database cannot use archived redo logs that contain both supplemental log
data and no supplemental log data.

SQL> select supplemental_log_data_pk,supplemental_log_data_ui from v$database;

SUP SUP
--- ---
YES YES
If supplemental logging is not enabled, execute the following:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY,UNIQUE INDEX) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
If log parallelism is not enabled, execute the following:

SQL> ALTER SYSTEM SET LOG_PARALLELISM=1 SCOPE=BOTH;


Start Resource Manager if you plan to create a logical standby database using a hot backup. If you do not have a resource manager plan, you can use one of the system-defined plans and restart the primary database to make sure it is using the defined plan.

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN=SYSTEM_PLAN SCOPE=BOTH;


SQL> SHUTDOWN
SQL> STARTUP

84.3.1. Improvements in Oracle Data Guard in Oracle 10g Release 2:


Automatic deletion of applied archive logs: Once archived logs from the primary database are applied to a logical standby database, they are deleted automatically without DBA intervention. This makes it easier to maintain both primary and logical standby databases. Physical standby databases have had this functionality since Oracle 10g Release 1, via the Flash Recovery Area option.
No downtime required: The primary database no longer needs to be shut down or put in a QUIESCING state, as the logical standby database can be created from a hot backup of the primary database, just like a physical standby database.
Online upgrades: A lot of DBAs have dreamed about this for a long time: just like IBM's DB2 or Microsoft SQL Server, the DBA is no longer required to shut down the primary database to upgrade, starting from Oracle 10g Release 2 with the Data Guard option. First, upgrade the logical standby database to the next release, test and validate the upgrade, do a role reversal by switching over to the upgraded database, and then finally upgrade the old primary database.
New datatypes supported: I always used to hesitate whenever I thought of logical standby databases, as some of my databases never met the prerequisite conditions. In 10g Release 2, Oracle supports most datatypes, such as NCLOB, LONG, LONG RAW, BINARY_FLOAT, BINARY_DOUBLE, and IOTs.
Conclusion: SQL Apply with a logical standby database is a viable option for customers who need to implement a disaster recovery or maximum/high availability solution and use the same resources for reporting and decision support operations. Success in creating a logical standby database depends a lot on how the tasks are executed and on the version being used. Before starting the creation of a logical standby database, it is very important to make sure that all initialization parameters are set correctly, that all the steps are followed in the correct order, and that the appropriate parameters are used. If everything is done properly, you should be able to do a clean configuration of the logical standby database on the first go.

85. Cloning Oracle Database


This chapter tells us how to create a clone Oracle database instance called TEST from the current PPRD database. It lists the steps for this procedure along with comments for each command.
The datafiles are in the /u03/oradata/TEST and /u03/oradata/PPRD directories respectively for this example. The commands marked with an asterisk (*) are only needed the first time we do this clone, and are not needed for subsequent re-cloning. (Note: "$" means we're at the UNIX prompt; "SQL>" means we're in SQL*Plus as a DBA user ID.)
WARNING: This cloning procedure may not work with some of the Oracle tools (such as the RMAN recovery catalog, because RMAN uses an internal database ID instead of just the Oracle SID to identify the database). So, if we want to use those tools, we will need to find some other way to clone our Oracle database.

Login as user oracle


$ . oraenv
Set the oracle SID to the name of the instance to copy (PPRD here).
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Generates a textual control file to edit for the new instance later.
SQL> SELECT value FROM V$PARAMETER WHERE name LIKE '%user_dump%';
Shows the directory containing the generated textual control file.
SQL> SELECT name FROM V$DATAFILE;
Shows the names of the datafiles to copy to the new database instance's directory(s).
SQL> SELECT name FROM V$CONTROLFILE;
Shows the names of the control files to copy to the new directory(s).
SQL> SELECT member FROM V$LOGFILE;
Shows the names of the redo log files to copy to the new directory(s).
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
Shuts down the current instance to copy (PPRD here).
$ cd /u03/oradata
* $ mkdir TEST
$ cd TEST
$ cp -p /u03/oradata/PPRD/* .
$ ls *PPRD* | sed "s/\(.*\)PPRD\(.*\)/mv \1PPRD\2 \1TEST\2/" >rename.shl
$ sh rename.shl
These commands copy PPRD's datafiles to a new TEST directory and rename the files in the TEST directory to match the
Oracle SID name. Similar groups of commands will be used for other existing PPRD directories with control files and redo
log files. Also, create a directory for TEST's archive logs, if we want archiving turned on for TEST, but don't copy any of
PPRD's archive logs into it (since we'll need to do a RESETLOGS on the startup of the new instance anyway).
* $ cd $ORACLE_HOME/dbs
* $ cp initPPRD.ora initTEST.ora
* $ vi initTEST.ora
Create the init.ora file for TEST and change all references to PPRD into TEST (vi command is ":1,$s/PPRD/TEST/g").
Also, if the new TEST's files are on a different disk volume, change the volume names, as required, to match where we
copied the files.
* $ mkdir /u00/oracle/admin/TEST
* $ mkdir /u00/oracle/admin/TEST/bdump
* $ mkdir /u00/oracle/admin/TEST/cdump
* $ mkdir /u00/oracle/admin/TEST/udump
These commands create the dump directories for TEST that were given in the initTEST.ora file
$ cd /u03/oradata/TEST
$ ls -ltr /u00/oracle/admin/PPRD/udump
$ cp /u00/oracle/admin/PPRD/udump/ora_28570.trc ctrl.sql
These commands find the textual control file created at the beginning (should be the last .trc file listed) and copy it to
ctrl.sql.
$ vi ctrl.sql
Remove all the lines before the STARTUP NOMOUNT command (usually, the first 20 (or 22) lines, for which the vi
command is ":1,20d"). Edit it to match the new TEST datafile names and Oracle SID name.
If the directories are similar, this could just mean changing all of the PPRD references into TEST (vi command is
":1,$s/PPRD/TEST/g"). Also, change NORESETLOGS to RESETLOGS in the create controlfile command, resulting in the
following line (use NOARCHIVELOG if we don't want archiving turned on in the new instance):
CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS ARCHIVELOG
Comment out the RECOVER DATABASE command (put # in front of it), and change the last line to:
ALTER DATABASE OPEN RESETLOGS;
* $ vi /etc/oratab
Add a line for the new instance, usually by copying the last line (vi command is "G:.co.") and changing the name in that
copied line to the 4-character name of the new instance (vi command here is "RTEST<esc>"). Also, change the last
character in that copied line to Y to have the database start up automatically during dbstart, or N for only manual startups.
(Adding: TEST:/u00/oracle/product/v901:Y)
$ . oraenv
Set the oracle SID to the name of the new instance (TEST here).
SQL> connect / as sysdba
SQL> @ctrl.sql
Creates the new control files pointing to the new datafile and log file locations, and opens the new TEST database.

Note: If we get "ORA-1161 error: database name in file header does not match given name" when we try to run the
CREATE CONTROLFILE, try this instead: don't copy the control files to the new directories (or, delete the control files
from the new directories), and, edit ctrl.sql to take out the REUSE option in the CREATE CONTROLFILE command, then,
try rerunning ctrl.sql.
SQL> SELECT * FROM GLOBAL_NAME;
Shows the original global name, such as PPRD.WORLD
SQL> UPDATE GLOBAL_NAME SET GLOBAL_NAME = 'TEST.WORLD';
Changes the global name so that doing a "create database link" to access a remote database doesn't give us a "loopback" error. (Only do this if needed, since we don't know if this "world" change would adversely affect anything else in Oracle.)
$ . oraenv
Set the Oracle SID to the name of the original instance (PPRD here).
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
Restarts that original instance.
* $ lsnrctl status
Shows the pathname of the Listener Parameter File (listener.ora).
* $ vi /u00/oracle/product/v901/network/admin/listener.ora
Edit the listener.ora file to copy the PPRD lines and change the copy to match TEST (don't change any of the spacing!),
giving:
(SID_DESC=
(SID_NAME=TEST)
(ORACLE_HOME=/u00/oracle/product/v901)
)
* $ lsnrctl stop
* $ lsnrctl start
The TEST instance has now been added to the SQL*NET Listener. On the client network (such as Novell), we will need to
edit our tnsnames.ora file in the orawin\network\admin directory to copy the PPRD instance's lines and change the copy to
match TEST, similar to:
unix_test =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(Host = myhost.domain.com)
(Port = 1521)
)
)
(CONNECT_DATA = (SID = TEST)
)
)
* $ su - jobsub
* $ vi start_jobsub.shl
These lines log into jobsub (if we are allowed to use "su"; otherwise, just login to jobsub, etc.) and edit the jobsub startup
script to include the TEST instance (copy PPRD's lines and change to match TEST):
ORACLE_SID=TEST; export ORACLE_SID; . oraenv
echo "=== Starting jobsubmission for $ORACLE_SID....... "
nohup sh $BANNER_LINKS/gurjobs.shl > gurjobsTEST.out 2>&1 &
$ su - jobsub
$ kill -9 -1
These lines kill the current jobsub processes for all instances.
$ su - jobsub
$ start_jobsub.shl
Starts jobsub for all instances, including for TEST. (Or, instead of killing jobsub and restarting it, we could have just
entered ". oraenv" to set the TEST instance, and the "nohup" line to start jobsub for it.)
If we want to change the internal database ID of the cloned copy so that we can use utilities such as RMAN, which require unique database IDs, we can use the Oracle 9i "nid" (new ID) utility to generate a new database ID, as shown below. Note that we haven't tried this yet, but we can give it a try if we need to use RMAN on a copied-datafile clone.
$ . oraenv
Set the oracle SID to the name of the new instance (TEST here).
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> HOST
$ nid SYS/<syspassword>
Answer the prompt with Y
$ exit
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> EXIT

86. Oracle Data Guard Broker


This chapter describes the Oracle Data Guard broker, its architecture and components, and how it automates the creation,
control, and monitoring of a Data Guard configuration. It contains the following topics:
1. Overview of Oracle Data Guard and the Broker
2. Benefits of Data Guard Broker
3. Data Guard Broker Management Model
4. Data Guard Broker Components
5. Data Guard Broker User Interfaces
6. Data Guard Monitor
See Oracle Data Guard Concepts and Administration for the definition of a Data Guard configuration and for complete
information about Oracle Data Guard concepts and terminology.

86.1 Overview of Oracle Data Guard and the Broker


Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard
provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to
enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby
databases as transactionally consistent copies of the primary database. If the primary database becomes unavailable
because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus
minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, recovery, and
cluster techniques, as well as the Flashback Database feature to provide a high level of data protection and data
availability.

86.1.1. Data Guard Configurations and Broker Configurations


A Data Guard configuration consists of one primary database and up to nine standby databases. The databases in a Data
Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on
where the databases are located as long as they can communicate with each other. For example, you can have a standby
database on the same system as the primary database, along with two standby databases on another system.
The Data Guard broker logically groups these primary and standby databases into a broker configuration that allows the
broker to manage and monitor them together as an integrated unit. You can manage a broker configuration using either
the Oracle Enterprise Manager graphical user interface or the Data Guard command-line interface.

86.1.2. Oracle Data Guard Broker


The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation,
maintenance, and monitoring of Data Guard configurations. The following list describes some of the operations the broker
automates and simplifies:
o Creating Data Guard configurations that incorporate a primary database, a new or existing (physical, logical, or
snapshot) standby database, redo transport services, and log apply services, where any of the databases could
be Oracle Real Application Clusters (RAC) databases.
o Adding additional new or existing (physical, snapshot, logical, RAC or non-RAC) standby databases to an
existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the
same configuration.
o Managing an entire Data Guard configuration, including all databases, redo transport services, and log apply
services, through a client connection to any database in the configuration.
o Managing the protection mode for the broker configuration.
o Invoking switchover or failover with a single command to initiate and control complex role changes across all
databases in the configuration.
o Configuring failover to occur automatically upon loss of the primary database, increasing availability without
manual intervention.
o Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the
redo apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing,
and performance tools.
You can perform all management operations locally or remotely through the broker's easy-to-use interfaces: the Data
Guard management pages in Oracle Enterprise Manager, which is the broker's graphical user interface (GUI), and the
Data Guard command-line interface called DGMGRL.
These interfaces simplify the configuration and management of a Data Guard configuration. The table below provides a comparison of configuration management using the broker's interfaces and using SQL*Plus.
Table Configuration Management With and Without the Broker

General
With the Broker: Provides primary and standby database management as one unified configuration.
Without the Broker: You must manage the primary and standby databases separately.

Standby Database Creation
With the Broker: Provides the Enterprise Manager wizards that automate and simplify the steps required to create a configuration with an Oracle database on each site, including creating the standby control file, online redo log files, datafiles, and server parameter files.
Without the Broker: You must manually:
 Copy the database files to the standby database.
 Create a control file on the standby database.
 Create server parameter or initialization parameter files on the standby database.

Configuration and Management
With the Broker: Enables you to configure and manage multiple databases from a single location and automatically unifies all of the databases in the broker configuration.
Without the Broker: You must manually:
 Set up redo transport services and log apply services on each database in the configuration.
 Manage the primary database and standby databases individually.

Control
With the Broker:
 Automatically sets up redo transport services and log apply services, simplifying management of these services, especially in an Oracle RAC environment.
 Simplifies switchovers and failovers by allowing you to invoke them through a single command.
 Automates failover by allowing the broker to determine if failover is necessary and to initiate failover to a specified target standby database, with no need for DBA intervention and with either no loss of data or with a configurable amount of data loss.
 Integrates Cluster Ready Services (CRS) and instance management over database role transitions.
 Provides mouse-driven database state changes and a unified presentation of configuration and database status.
 Provides mouse-driven property changes.
Without the Broker: You must manually:
 Use multiple SQL*Plus statements to manage the database.
 Coordinate sequences of multiple commands across multiple database sites to execute switchover and failover operations.
 Coordinate sequences of multiple commands to manage services and instances during role transitions.

Monitoring
With the Broker:
 Provides continuous monitoring of the configuration health, database health, and other runtime parameters.
 Provides a unified updated status and detailed reports.
 Provides integration with Oracle Enterprise Manager events.
Without the Broker: You must manually:
 Monitor the status and runtime parameters using fixed views on each database; there is no unified view of status for all of the databases in the configuration.
 Provide a custom method for monitoring Oracle Enterprise Manager events.

86.2 Benefits of Data Guard Broker


The broker's interfaces improve usability and centralize management and monitoring of the Data Guard configuration.
Available as a feature of the Enterprise Edition and Personal Edition of the Oracle database, the broker is also integrated
with the Oracle database and Oracle Enterprise Manager. These broker attributes result in the following benefits:
Disaster protection:  By automating many of the manual tasks required to configure and monitor a Data Guard
configuration, the broker enhances the high availability, data protection, and disaster protection capabilities that are
inherent in Oracle Data Guard. Access is possible through a client to any system in the Data Guard configuration,
eliminating any single point of failure. If the primary database fails, the broker automates the process for any one of the
standby databases to replace the primary database and take over production processing. The database availability that
Data Guard provides makes it easier to protect your data.
Higher availability and scalability with Oracle Real Application Clusters (RAC) Databases: While Oracle Data Guard broker
enhances disaster protection by maintaining transactionally consistent copies of the primary database, Data Guard,
configured with Oracle high availability solutions such as Oracle Real Application Clusters (RAC) databases, further
enhances the availability and scalability of any given copy of that database. The intrasite high availability of an Oracle
RAC database complements the intersite protection that is provided by Data Guard broker.
Consider that you have a cluster system hosting a primary Oracle RAC database comprised of multiple instances sharing
access to that database. Further consider that an unplanned failure has occurred. From a Data Guard broker perspective,
the primary database remains available as long as at least one instance of the clustered database continues to be
available for transporting redo data to the standby databases. Oracle Clusterware manages the availability of instances of
an Oracle RAC database. Cluster Ready Services (CRS), a subset of Oracle Clusterware, works to rapidly recover failed
instances to keep the primary database available. If CRS is unable to recover a failed instance, the broker continues to
run automatically with one less instance. If the last instance of the primary database fails, the broker provides a way to fail
over to a specified standby database. If the last instance of the primary database fails, and fast-start failover is enabled,
the broker can continue to provide high availability by automatically failing over to a pre-determined standby database.
The broker is integrated with CRS so that database role changes occur smoothly and seamlessly. This is especially
apparent in the case of a planned role switchover (for example, when a physical standby database is directed to take over
the primary role while the former primary database assumes the role of standby). The broker and CRS work together to
temporarily suspend service availability on the primary database, accomplish the actual role change for both databases
during which CRS works with the broker to properly restart the instances as necessary, and then start services defined on
the new primary database. The broker manages the underlying Data Guard configuration and its database roles while
CRS manages service availability that depends upon those roles. Applications that rely on CRS for managing service
availability will see only a temporary suspension of service as the role change occurs in the Data Guard configuration.
Note that while CRS helps to maintain the availability of the individual instances of an Oracle RAC database, the broker
coordinates actions that maintain one or more physical or logical copies of the database across multiple geographically
dispersed locations to provide disaster protection. Together, the broker and Oracle Clusterware provide a strong
foundation for Oracle's high-availability architecture.
Automated creation of a Data Guard configuration: The broker helps you to logically define and create a Data Guard
configuration consisting of a primary database and (physical or logical, snapshot, RAC or non-RAC) standby databases.
The broker automatically communicates between the databases in a Data Guard configuration using Oracle Net Services.
The databases can be local or remote, connected by a LAN or geographically dispersed over a WAN.
Oracle Enterprise Manager provides a wizard that automates the complex tasks involved in creating a broker
configuration, including:
o Adding an existing standby database, or a new standby database created from existing backups taken
through Enterprise Manager
o Configuring the standby control file, server parameter file, and datafiles
o Initializing communication with the standby databases
o Creating standby redo log files
o Enabling Flashback Database if you plan to use fast-start failover
Although DGMGRL cannot automatically create a new standby database, you can use DGMGRL commands to configure and monitor an existing standby database, including those created using Enterprise Manager.
Easy configuration of additional standby databases: After you create a Data Guard configuration consisting of a primary and a standby database, you can add up to eight new or existing physical, snapshot, or logical standby databases to each Data Guard configuration. Oracle Enterprise Manager provides an Add Standby Database wizard to guide you through the process of adding more databases. It also makes all Oracle Net Services configuration changes necessary to support redo transport services and log apply services across the configuration.
Simplified, centralized, and extended management: You can issue commands to manage many aspects of the broker configuration. These include:
o Simplify the management of all components of the configuration, including the primary and standby
databases, redo transport services, and log apply services.
o Coordinate database state transitions and update database properties dynamically with the broker recording
the changes in a broker configuration file that includes profiles of all the databases in the configuration. The
broker propagates the changes to all databases in the configuration and their server parameter files.
o Simplify the control of the configuration protection modes (to maximize protection, to maximize availability, or
to maximize performance).
o Invoke the Enterprise Manager verify operation to ensure that redo transport services and log apply services
are configured and functioning properly.
Simplified switchover and failover operations: The broker simplifies switchovers and failovers by allowing you to invoke
them using a single key click in Oracle Enterprise Manager or a single command at the DGMGRL command-line interface
(referred to in this documentation as manual failover). For lights-out administration, you can enable fast-start failover to
allow the broker to determine if a failover is necessary and to initiate the failover to a pre-specified target standby
database automatically, with no need for DBA intervention. Fast-start failover can be configured to occur with no data loss
or with a configurable amount of data loss.
Fast-start failover allows you to increase availability with less need for manual intervention, thereby reducing management
costs. Manual failover gives you control over exactly when a failover occurs and to which target standby database.
Regardless of the method you choose, the broker coordinates the role transition on all databases in the configuration.
Once failover is complete, the broker posts the DB_DOWN event to notify applications that the new primary is available.
Note that you can use the DBMS_DG PL/SQL package to enable an application to initiate a fast-start failover when it
encounters specific conditions.
Only one command is required to initiate complex role changes for switchover or failover operations across all databases
in the configuration. The broker automates switchover and failover to a specified standby database in the broker
configuration. Enterprise Manager enables you to select a new primary database from a set of viable standby databases
(enabled and running, with normal status). The DGMGRL SWITCHOVER and FAILOVER commands only require you to
specify the target standby database before automatically initiating and completing the many steps in switchover or failover
operations across the multiple databases in the configuration.
Built-in monitoring and alert and control mechanisms: The broker provides built-in validation that monitors the health of all
of the databases in the configuration. From any system in the configuration connected to any database, you can capture
diagnostic information and detect obvious and subtle problems quickly with centralized monitoring, testing, and
performance tools. Both Enterprise Manager and DGMGRL retrieve a complete configuration view of the progress of redo
transport services on the primary database and the progress of Redo Apply or SQL Apply on the standby database.
The ability to monitor local and remote databases and respond to events is significantly enhanced by the broker's health check mechanism and tight integration with the Oracle Enterprise Manager event management system.
Transparent to applications: Use of the broker is possible for any database because the broker works transparently with applications; no application code changes are required to accommodate a configuration that you manage with the broker.

86.3. Data Guard Broker Management Model


The broker simplifies the management of a Data Guard environment by performing operations on the following logical
objects:
1. Configuration of databases
2. A single database
The broker supports one or more Data Guard configurations, each of which includes a profile for the primary database and
each standby database. A supported broker configuration consists of:
o A configuration object, which is a named collection of database profiles. A database profile is a description of a
database object including its current state, current status, and properties. The configuration object profiles one
primary database and its standby databases that can include a mix of physical, snapshot, and logical standby
databases. The databases of a given configuration are typically distributed across multiple host systems.
o Database objects, corresponding to primary or standby databases. The broker uses a database object's profile to
manage and control the state of a single database on a given system. The database object may be comprised of
one or more instance objects if this is an Oracle RAC database.
o Instance objects. The broker treats a database as a collection of one or more named instances. The broker
automatically discovers the instances and associates them with their database.

Figure Relationship of Objects Managed by the Data Guard Broker


86.4. Data Guard Broker Components
The Oracle Data Guard broker consists of the following components:
1. Oracle Enterprise Manager
2. Data Guard Command-Line Interface (DGMGRL)
3. Data Guard Monitor
Oracle Enterprise Manager and the Data Guard command-line interface (DGMGRL) are the broker client interfaces that
help you define and manage a configuration consisting of a collection of primary and standby databases. DGMGRL also
includes commands to create an observer, a process that facilitates fast-start failover. Section 86.5 describes these interfaces in more detail.
The Data Guard monitor is the broker server-side component that is integrated with the Oracle database. Data Guard
monitor is composed of several processes, including the DMON process, and broker configuration files that allow you to
control the databases of that configuration, modify their behavior at runtime, monitor the overall health of the configuration,
and provide notification of other operational characteristics. Section 86.6 describes the Data Guard monitor in more detail.
Figure Oracle Data Guard Broker


86.5. Data Guard Broker User Interfaces


You can use either of the broker's user interfaces to create a broker configuration and to control and monitor the
configuration. The following sections describe the broker's user interfaces:
1. Oracle Enterprise Manager
2. Data Guard Command-Line Interface (DGMGRL)

86.5.1. Oracle Enterprise Manager


Oracle Enterprise Manager works with the Data Guard monitor to automate and simplify the management of a Data Guard
configuration.
With Enterprise Manager, the complex operations of creating and managing standby databases are simplified through
Data Guard management pages and wizards, including:
o An Add Standby Database wizard that helps you to create a broker configuration, if one does not already
exist, having a primary database and a local or remote standby database. The wizard can create a physical,
snapshot, or logical standby database or import an existing physical, snapshot, or logical (RAC or non-RAC)
standby database. If the wizard creates a physical, snapshot, or logical standby database, the wizard also
automates the creation of the standby control file, server parameter file, online and standby redo log files,
and the standby datafiles.
o A switchover operation that helps you switch roles between the primary database and a standby database.
o A failover operation that changes one of the standby databases to the role of a primary database.
o Performance tools and graphs that help you monitor and tune redo transport services and log apply services.
o Property pages that allow you to set database properties on any database and, if applicable, the settings are
immediately propagated to all other databases and server parameter files in the configuration.
o Event reporting through e-mail.

In addition, it makes all Oracle Net Services configuration changes necessary to support redo transport services and log
apply services.

86.5.2. Data Guard Command-Line Interface (DGMGRL)


The Data Guard command-line interface (DGMGRL) enables you to control and monitor a Data Guard configuration from
the DGMGRL prompt or within scripts. You can perform most of the activities required to manage and monitor the
databases in the configuration using DGMGRL commands. DGMGRL also includes commands to create an observer
process that continuously monitors the primary and target standby databases and evaluates whether failover is necessary,
and then initiates a fast-start failover when conditions warrant.

Table DGMGRL Commands

Command      Description
ADD          Adds a standby database to the broker configuration
CONNECT      Connects to an Oracle instance
CONVERT      Converts a database between a physical standby database and a snapshot standby database
CREATE       Creates a broker configuration
DISABLE      Disables a configuration, a database, fast-start failover, or a fast-start failover condition
EDIT         Edits a configuration, database, or instance
ENABLE       Enables a configuration, a database, fast-start failover, or a fast-start failover condition
EXIT         Exits the program
FAILOVER     Changes a standby database to be the primary database
HELP         Displays description and syntax for individual commands
QUIT         Exits the program
REINSTATE    Changes a database marked for reinstatement into a viable standby database
REM          Comment to be ignored by DGMGRL
REMOVE       Removes a configuration, database, or instance
SHOW         Displays information about a configuration, database, instance, or fast-start failover
SHUTDOWN     Shuts down a currently running Oracle instance
START        Starts the fast-start failover observer
STARTUP      Starts an Oracle database instance
STOP         Stops the fast-start failover observer
SWITCHOVER   Switches roles between the primary database and a standby database
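As an illustration, a minimal broker configuration might be built from the DGMGRL prompt as follows; the database names, connect identifiers, and credentials are hypothetical:

DGMGRL> CONNECT sys/oracle@prmy
DGMGRL> CREATE CONFIGURATION 'DRSolution' AS PRIMARY DATABASE IS 'prmy' CONNECT IDENTIFIER IS prmy;
DGMGRL> ADD DATABASE 'stby' AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;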

86.6. Data Guard Monitor


The configuration, control, and monitoring functions of the broker are implemented by server-side software and
configuration files that are maintained for each database that the broker manages. The software is called the Data Guard
monitor.
The following sections describe how the Data Guard monitor interacts with the Oracle database and with remote Data
Guard monitors to manage the broker configuration.

86.6.1. Data Guard Monitor (DMON) Process


The Data Guard monitor process (DMON) is an Oracle background process that runs for every database instance that is
managed by the broker. When you start the Data Guard broker, a DMON process is created. Whether you use Oracle
Enterprise Manager or DGMGRL to manage a database, the DMON process is the server-side component that interacts
with the local database and the DMON processes of the other databases to perform the requested function. The DMON
process is also responsible for monitoring the health of the broker configuration and for ensuring that every database has
a consistent description of the configuration.
Figure shows the broker's DMON process as one of several background processes that constitute an instance of the
Oracle database. Each database instance shown in the figure has its own DMON process.
Figure Databases with Broker (DMON) Processes. The zigzag arrow in the center of the figure represents the two-way Oracle Net Services communication channel that exists between the DMON processes of two databases in the same broker configuration.

This two-way communication channel is used to pass requests between databases and to monitor the health of all of the
databases in the broker configuration.

86.6.2. Configuration Management


The broker's DMON process persistently maintains profiles about all database objects in the broker configuration in a
binary configuration file. A copy of this file is maintained by the DMON process for each of the databases that belong to
the broker configuration. If it is an Oracle RAC database, each database's copy of the file is shared by all instances of the
database.
This configuration file contains profiles that describe the states and properties of the databases in the configuration. For
example, the file records the databases that are part of the configuration, the roles and properties of each of the
databases, and the state of each database in the configuration.
The configuration data is managed transparently by the DMON process to ensure that the configuration information is kept
consistent across all of the databases. The broker uses the data in the configuration file to configure and start the
databases, control each database's behavior, and provide information to DGMGRL and Oracle Enterprise Manager.
Whenever you add databases to a broker configuration, or make a change to an existing database's properties, each
DMON process records the new information in its copy of the configuration file.

86.6.3. Database Property Management


Associated with each database are various properties that the DMON process uses to control the database's behavior.
The properties are recorded in the configuration file as a part of the database's object profile that is stored there. Many
database properties are used to control database initialization parameters related to the Data Guard environment.
To ensure that the broker can update the values of parameters in both the database itself and in the configuration file, you
must use a server parameter file to control static and dynamic initialization parameters. The use of a server parameter file
gives the broker a mechanism that allows it to reconcile property values selected by the database administrator (DBA)
when using the broker with any related initialization parameter values recorded in the server parameter file.
When you set values for database properties in the broker configuration, the broker records the change in the
configuration file and propagates the change to all of the databases in the Data Guard configuration.
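For example, a redo transport property might be changed through DGMGRL, with the broker recording the new value in the configuration file and adjusting the corresponding initialization parameter; the database name and value here are illustrative:

DGMGRL> EDIT DATABASE 'stby' SET PROPERTY 'LogXptMode'='SYNC';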
Note: The broker supports both the default and nondefault server parameter file filenames. If you use a nondefault server
parameter filename, the initialization parameter file must include the complete filename and location of the server
parameter file. If this is an Oracle RAC database, there must be one nondefault server parameter file for all instances.

87. What's New in Oracle Data Guard?


This chapter describes the new features added to Oracle Data Guard in release 11.1. The features and enhancements described in this chapter were added to Oracle Data Guard in 11g Release 1 (11.1). The new features are described under the following main areas:
 New Features Common to Redo Apply and SQL Apply
 New Features Specific to Redo Apply and Physical Standby Databases
 New Features Specific to SQL Apply and Logical Standby Databases
New Features Common to Redo Apply and SQL Apply
The following enhancements to Oracle Data Guard in 11g Release 1 (11.1) improve ease-of-use, manageability,
performance, and include innovations that improve disaster-recovery capabilities:
 Compression of redo traffic over the network in a Data Guard configuration
This feature improves redo transport performance when resolving redo gaps by compressing redo before it is
transmitted over the network.
 Redo transport response time histogram
The V$REDO_DEST_RESP_HISTOGRAM dynamic performance view contains a histogram of response times for
each SYNC redo transport destination. The data in this view can be used to assist in the determination of an
appropriate value for the LOG_ARCHIVE_DEST_n NET_TIMEOUT attribute.
 Faster role transitions
 Strong authentication for redo transport network sessions
Redo transport network sessions can now be authenticated using SSL. This provides strong authentication and
makes the use of remote login password files optional in a Data Guard configuration.
 Simplified Data Guard management interface
The SQL statements and initialization parameters used to manage a Data Guard configuration have been
simplified through the deprecation of redundant SQL clauses and initialization parameters.
 Enhancements around DB_UNIQUE_NAME
You can now find the DB_UNIQUE_NAME of the primary database from the standby database by querying the
new PRIMARY_DB_UNIQUE_NAME column in the V$DATABASE view. Also, Oracle Data Guard release 11g
ensures each database's DB_UNIQUE_NAME is different. After upgrading to 11g, any databases with the same
DB_UNIQUE_NAME will not be able to communicate with each other.
 Use of physical standby database for rolling upgrades
A physical standby database can now take advantage of the rolling upgrade feature provided by a logical
standby. Through the use of the new KEEP IDENTITY clause option to the SQL ALTER DATABASE RECOVER
TO LOGICAL STANDBY statement, a physical standby database can be temporarily converted into a logical
standby database for the rolling upgrade, and then reverted back to the original configuration of a primary database and a physical standby database when the upgrade is done (see the sketch after this list).
 Heterogeneous Data Guard Configuration
This feature allows a mix of Linux and Windows primary and standby databases in the same Data Guard
configuration.
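As referenced in the rolling-upgrade item above, the temporary conversion is a single statement issued on the physical standby:

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;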
New Features Specific to Redo Apply and Physical Standby Databases
The following list summarizes the new features that are specific to Redo Apply and physical standby databases in Oracle
Database 11g Release 1 (11.1):
 Real-time query capability of physical standby
This feature makes it possible to query a physical standby database while Redo Apply is active.
 Snapshot standby
A snapshot standby database is a new type of updatable standby database that provides full data protection for a primary database.
 Lost-write detection using a physical standby
A "lost write" is a serious form of data corruption that can adversely impact a database. It occurs when an I/O
subsystem acknowledges the completion of a block write in the database, while in fact the write did not occur in
the persistent storage. This feature allows a physical standby database to detect lost writes to a primary or
physical standby database.
 Improved integration with RMAN
A number of enhancements in RMAN help to simplify backup and recovery operations across all primary and physical standby databases when using a catalog. Also, you can use the RMAN DUPLICATE command to create a physical standby database over the network without a need for pre-existing database backups.
New Features Specific to SQL Apply and Logical Standby Databases
The following list summarizes the new features for SQL Apply and logical standby databases in Oracle Database 11g
Release 1 (11.1):
 Support for additional object datatypes and PL/SQL package support:
o XML stored as CLOB
o DBMS_RLS (row level security, or Virtual Private Database)
o DBMS_FGA
 Support for Transparent Data Encryption (TDE)
Data Guard SQL Apply can be used to provide data protection for the primary database with Transparent Data Encryption enabled. This allows a logical standby database to provide data protection for applications with advanced security requirements.
 Dynamic setting of Data Guard SQL Apply parameters
You can now configure specific SQL Apply parameters without requiring SQL Apply to be restarted. Using the
DBMS_LOGSTDBY.APPLY_SET package, you can dynamically set initialization parameters, thus improving the
manageability, uptime, and automation of a logical standby configuration.
In addition, the APPLY_SET and APPLY_UNSET subprograms include two new parameters:
LOG_AUTO_DEL_RETENTION_TARGET and EVENT_LOG_DEST.
 Enhanced RAC switchover support for logical standby databases
When switching over to a logical standby database where either the primary database or the standby database is
using Oracle RAC, the SWITCHOVER command can be used without having to shut down any instance, either at
the primary or at the logical standby database.
 Enhanced DDL handling in Oracle Data Guard SQL Apply
SQL Apply will execute parallel DDLs in parallel (based on availability of parallel servers).
 Use of the PL/SQL DBMS_SCHEDULER package to create Scheduler jobs on a standby database
Scheduler jobs can be created on a standby database using the PL/SQL DBMS_SCHEDULER package and can
be associated with an appropriate database role so that they run when intended (for example, when the database
is the primary, the standby, or both); a sketch follows this list.
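A minimal sketch of the dynamic APPLY_SET usage, run on the logical standby (the parameter name MAX_SGA and the value are illustrative):
begin
   -- Raise the memory available to SQL Apply without restarting it
   dbms_logstdby.apply_set('MAX_SGA', 512);
end;
/
And a sketch of a role-aware Scheduler job; the job name and the stored procedure gen_standby_report are hypothetical:
begin
   dbms_scheduler.create_job (
      job_name   => 'STBY_REPORT',
      job_type   => 'STORED_PROCEDURE',
      job_action => 'gen_standby_report',   -- hypothetical procedure
      enabled    => false
   );
   -- Run this job only while the database is in the logical standby role
   dbms_scheduler.set_attribute('STBY_REPORT', 'database_role', 'LOGICAL STANDBY');
   dbms_scheduler.enable('STBY_REPORT');
end;
/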
88. Active Data Guard
88.1. Traditional Data Guard
The most popular use of Oracle Data Guard is to synchronize a physical standby database with its production counterpart
for data protection and high availability. Prior to Oracle Database 11g, physical standby databases typically operated in
continuous Redo Apply mode (that is, continuously applying changes from the production database) to ensure that a
database failover could be accomplished within seconds of an outage at the production site. However, Redo Apply had to
be stopped to enable read access to a Data Guard 10g standby database, resulting in a replica with stale data and
extending the time required to complete a failover operation.
88.2. Oracle Active Data Guard
Oracle Active Data Guard enables a physical standby database to be open for read-only access (for reporting, simple or
complex queries, sorting, web-based access, and so on) while changes from the production database are being applied to
it. All queries reading from the physical replica execute in real time and return current results. This means any operation
that requires up-to-date read-only access can be offloaded to the replica, enhancing and protecting the performance of the
production database. This capability makes it possible for Active Data Guard to be deployed for a wide variety of business
applications. Examples include:
1. Telecommunications: Technician access to service schedules, customer inquiries to check status of service
requests.
2. Healthcare: Fast access to up-to-date medical records.
3. Finance and Administration: Ad-hoc queries and reports
4. Transportation: Package tracking queries, schedule status
5. Web-business: Catalog browsing, order status, scale-out using reader farms
Active Data Guard also provides support for RMAN block-change tracking on the standby database, enabling very fast
incremental backups to be offloaded from the production database to the standby. Such incremental backups can be
orders of magnitude faster than backups taken on physical standby databases in earlier releases.
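As a sketch of how real-time query is typically enabled on an 11g physical standby (assuming the Active Data Guard option is licensed; the DISCONNECT clause is optional):
-- On the physical standby: stop Redo Apply, open read-only, then restart apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;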
88.3. Unique Advantages of Oracle Active Data Guard
Active Data Guard is an evolution of Data Guard technology, providing unique performance advantages while leveraging
all other enhancements included in Oracle Data Guard 11g. For example, any Data Guard 11g physical standby database
can be easily converted to a Snapshot Standby. A Snapshot Standby is open read-write and is ideally suited as a test
system, able to process transactions independent of the primary database. A Snapshot Standby maintains protection by
continuing to receive data from the production database, archiving it for later use. When tests are complete, a single
command discards changes made while open read-write and quickly resynchronizes the standby database with the
primary.
It is easy to see how all of these capabilities build upon each other, and because they all use the same common
infrastructure, they generate significant dividends for Oracle customers. The same replica maintained by Oracle Active
Data Guard to improve the quality of service and performance of the primary database can be used during non-peak
hours as a test database using Snapshot Standby, and at all times can also serve as a disaster recovery solution. So
rather than maintaining multiple replicas on costly redundant storage, using different technologies to address different
requirements, Oracle Active Data Guard provides a common infrastructure and a single replica with one management
interface to achieve the same objectives. Easier, less expensive, more functional: Oracle Active Data Guard is hard to
beat.
89. Snapshot Standby Databases
A snapshot standby database is a fully updatable standby database that is created by converting a physical standby
database into a snapshot standby database. A snapshot standby database receives and archives, but does not apply,
redo data from its primary database. Redo data received from the primary database is applied when a snapshot standby
database is converted back into a physical standby database, after discarding all local updates to the snapshot standby
database.
A snapshot standby database typically diverges from its primary database over time because redo data from the primary
database is not applied as it is received. Local updates to the snapshot standby database cause additional
divergence. The data in the primary database is fully protected, however, because a snapshot standby can be converted
back into a physical standby database at any time, and the redo data received from the primary will then be applied.
89.1. Benefits of a Snapshot Standby Database
A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection
benefits that are similar to those of a physical standby database. Snapshot standby databases are best used in scenarios
where the benefit of having a temporary, updatable snapshot of the primary database justifies additional administrative
complexity and increased time to recover from primary database failures.
The benefits of using a snapshot standby database include the following:
 It provides an exact replica of a production database for development and testing purposes, while
maintaining data protection at all times
 It can be easily refreshed to contain current production data by converting to a physical standby and
resynchronizing
The ability to create a snapshot standby, test, resynchronize with production, and then again create a snapshot standby
and test, is a cycle that can be repeated as often as desired. The same process can be used to easily create and regularly
update a snapshot standby for reporting purposes where read/write access to data is required.
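The conversion cycle itself is a pair of SQL statements, sketched below; the database must be mounted for each conversion:
-- Convert the physical standby into a fully updatable snapshot standby
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ... open read-write, run tests, then discard all updates and resynchronize
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;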
89.2. Using a Snapshot Standby Database
Once a physical standby database has been converted into a snapshot standby database, it can be opened in read-write
mode and it is fully updatable. A snapshot standby database continues to receive and archive redo data from the primary
database, and this redo data will be automatically applied when the snapshot standby database is converted back into a
physical standby database. A snapshot standby database has the following characteristics:
 Redo data gap detection and resolution works just as it does on a physical standby database.
 If the primary database moves to a new database branch (for example, because of a Flashback Database
operation or an OPEN RESETLOGS), the snapshot standby database will continue accepting redo from the new
database branch.
 A snapshot standby database cannot be the target of a switchover or failover. A snapshot standby
database must first be converted back into a physical standby database before performing a role
transition to it.
 After a switchover or failover between the primary database and one of the physical or logical standby
databases in a configuration, the snapshot standby database can receive redo data from the new
primary database after the role transition.
 A snapshot standby database cannot be the only standby database in a Maximum Protection Data
Guard configuration.
Figure: Data Guard Scenario
90. Data Guard Enhancements in Oracle 11g Release 2
The following sections describe new features in this release that provide improvements in Oracle Data Guard.
90.1. Redo Apply and SQL Apply
 A Data Guard configuration can now consist of a primary database and up to 30 standby databases.
 The FAL_CLIENT database initialization parameter is no longer required.
 The default archive destination used by the Oracle Automatic Storage Management (Oracle ASM)
feature and the fast recovery area feature has changed from LOG_ARCHIVE_DEST_10 to
LOG_ARCHIVE_DEST_1.
 Redo transport compression is no longer limited to compressing redo data only when a redo gap is
being resolved. When compression is enabled for a destination, all redo data sent to that destination
is compressed.
 The new ALTER SYSTEM FLUSH REDO SQL statement can be used at failover time to flush unsent
redo from a mounted primary database to a standby database, thereby allowing a zero-data-loss
failover to be performed even if the primary database is not running in a zero-data-loss protection
mode (see the sketch below).
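A minimal sketch, assuming the old primary can still be mounted and the target standby's DB_UNIQUE_NAME is boston (an illustrative name):
-- On the mounted (not open) primary, push any unsent redo to the standby
ALTER SYSTEM FLUSH REDO TO boston;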
90.2. Redo Apply
 You can configure apply lag tolerance in a real-time query environment by using the new
STANDBY_MAX_DATA_DELAY parameter.
 You can use the new ALTER SESSION SYNC WITH PRIMARY SQL statement to ensure that a
suitably configured physical standby database is synchronized with the primary database as of the
time the statement is issued.
 The V$DATAGUARD_STATS view has been enhanced to provide a greater degree of accuracy in many of its
columns, including apply lag and transport lag.
 You can view a histogram of apply lag values on the physical standby by querying the new
V$STANDBY_EVENT_HISTOGRAM view (see the sketch after this list).
 A corrupted data block in a primary database can be automatically replaced with an uncorrupted copy
of that block from a physical standby database that is operating in real-time query mode. A corrupted
block in a physical standby database can also be automatically replaced with an uncorrupted copy of
the block from the primary database.
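A minimal sketch of the apply-lag histogram query mentioned above, run on the physical standby:
-- Show how often each apply-lag value has been observed
SELECT * FROM V$STANDBY_EVENT_HISTOGRAM
 WHERE NAME = 'apply lag' AND COUNT > 0;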
90.3. SQL Apply
 Logical standby databases and the LogMiner utility support tables with basic table compression,
OLTP table compression, and hybrid columnar compression.
 Logical standby and the LogMiner utility support tables with SecureFile LOB columns. Compression
and encryption operations on SecureFile LOB columns are also supported. (De-duplication operations
and fragment-based operations are not supported.)
 Changes made in the context of XA global transactions on an Oracle RAC primary database are
replicated on a logical standby database.
 Online redefinition performed at the primary database using the DBMS_REDEFINITION PL/SQL
package is transparently replicated on a logical standby database.
 Logical Standby supports the use of editions at the primary database, including the use of edition-
based redefinition to upgrade applications with minimal downtime.
 Logical standby databases support Streams Capture. This allows you to offload processing from the
primary database in one-way information propagation configurations and make the logical standby the
hub that propagates information to multiple databases. Streams Capture can also propagate changes
that are local to the logical standby database.
90.4. Compressed Table Support in Logical Standby Databases and Oracle LogMiner
Compressed tables (that is, tables with compression that support both OLTP and direct load operations) are supported in
logical standby databases and Oracle LogMiner.
With support for this additional storage attribute, logical standby databases can now provide data protection and reporting
benefits for a wider range of tables.
90.5. Configurable Real-Time Query Apply Lag Limit
A physical standby database can be open for read-only access while redo apply is active only if the Oracle Active Data
Guard option is enabled. This capability is known as real-time query.
The new STANDBY_MAX_DATA_DELAY session parameter can be used to specify a session-specific apply lag
tolerance, measured in seconds, for queries issued by non-administrative users to a physical standby database that is in
real-time query mode.
This capability allows queries to be safely offloaded from the primary database to a physical standby database, because it
is possible to detect whether the standby database has become unacceptably stale, as the sketch below shows.
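A minimal sketch, run in a user session on the real-time query standby (the 30-second tolerance is illustrative):
-- Fail queries in this session if the standby's apply lag exceeds 30 seconds
ALTER SESSION SET STANDBY_MAX_DATA_DELAY = 30;
-- Alternatively, wait until the standby has caught up as of this moment
ALTER SESSION SYNC WITH PRIMARY;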
90.6. Support for Up to 30 Standby Databases
The number of standby databases that a primary database can support is increased from 9 to 30 in this release.
The capability to create 30 standby databases, combined with the functionality of the Oracle Active Data Guard option,
allows the creation of reader farms that can be used to offload large scale read-only workloads from a production
database.
90.7. Automatic Repair of Corrupt Data Blocks
A physical standby database operating in real-time query mode can also be used to repair corrupt data blocks in a primary
database. If possible, any corrupt data block encountered when a primary database is accessed is automatically replaced
with an uncorrupted copy of that block from a physical standby database operating in real-time query mode. Note that for
this to work, the standby database must be synchronized with the primary database.
If a corrupt data block is discovered on a physical standby database, the server attempts to automatically repair the
corruption by obtaining a copy of the block from the primary database if the following database initialization parameters
are configured on the standby database:
 The LOG_ARCHIVE_CONFIG parameter is configured with a DG_CONFIG list, and a LOG_ARCHIVE_DEST_n
parameter is configured for the primary database
 The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary
database
If automatic repair is not possible, an ORA-1578 error is returned.
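A sketch of the relevant standby-side settings; the database and service names chicago (primary) and boston (standby) are illustrative:
-- On the standby: declare the configuration and the path back to the primary
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(chicago,boston)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
   'SERVICE=chicago ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago';
ALTER SYSTEM SET FAL_SERVER = 'chicago';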
90.8. Manual Repair of Corrupt Data Blocks
The RMAN RECOVER BLOCK command is used to manually repair a corrupted data block. This command searches
several locations for an uncorrupted copy of the data block. By default, one of the locations is any available physical
standby database operating in real-time query mode. The EXCLUDE STANDBY option of the RMAN RECOVER BLOCK
command can be used to exclude physical standby databases as a source for replacement blocks.
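A minimal sketch; the file and block numbers are illustrative, and the second form shows the EXCLUDE STANDBY option described above:
RMAN> RECOVER DATAFILE 8 BLOCK 13;
RMAN> RECOVER DATAFILE 8 BLOCK 13 EXCLUDE STANDBY;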
91. Virtual Private Database (VPD)
91.1. Overview
Virtual Private Database (VPD), also known as Fine-Grained Access Control, provides powerful row-level security
capabilities. Introduced in Oracle8i, it has become widely popular and can be found in a variety of applications ranging
from education software to financial services.
VPD works by transparently modifying requests for data to present a partial view of the tables to users, based on a set
of defined criteria. At runtime, predicates are appended to all queries to filter out any rows the user is not supposed to
see. For example, if the user is supposed to see only the accounts of account manager SCOTT, the VPD setup automatically
rewrites the query:
SQL> SELECT * FROM accounts;
to:
SELECT * FROM accounts WHERE am_name = 'SCOTT';
The DBA sets a security policy on the table ACCOUNTS. The policy has an associated function, called the policy function,
which returns the string where am_name = 'SCOTT'; this string is applied as a predicate.
91.2. Policy Types
The repeated parsing necessary to generate the predicate is overhead that you can trim in some situations. For example,
in most real life cases the predicate is not as static as where am_name = 'SCOTT'; it's probably more dynamic based on
who the user is, the authority level of the user, which account manager she reports to, and so on. The string created and
returned by the policy function may become very dynamic, and to guarantee the outcome, Oracle must re-execute the
policy function every time, wasting resources and reducing performance. This type of policy, where the predicate can
potentially be very different each time it is executed, is known as a "dynamic" policy, and has been available in Oracle9i
Database and prior releases.
In addition to retaining the dynamic policy, Oracle Database 10g introduces several new types of policies based on how the
predicate is constructed, providing better controls for improving performance: context_sensitive, shared_context_sensitive,
shared_static, and static. Now, let's see what each policy type means and how to use it in appropriate situations.
Dynamic Policy. To retain backward compatibility, the default policy type in 10g is "dynamic", just as it was in Oracle9i. In
this case, the policy function is re-evaluated each time the table is accessed, for each row and for every user. Let's
examine the policy predicate closely:
where am_name = 'SCOTT'
Ignoring the WHERE keyword, the predicate has two distinct parts: the portion before the equality operator (am_name) and
the one after it ('SCOTT'). In most cases, the part after is more like a variable, in that it is supplied from the user's data (if
the user is SCOTT, the value would be 'SCOTT'). The part before the equality sign is static. So, even though Oracle has to
evaluate the policy function for each row to generate the appropriate predicate, the knowledge about the static nature of
the before-part and the dynamic nature of the after-part can be used to improve performance. This approach is
possible in 10g using a policy of type "context_sensitive" as a parameter in the dbms_rls.add_policy call:
policy_type => dbms_rls.context_sensitive
In another example scenario, we have a table called ACCOUNTS with several columns, one of which is BALANCE,
indicating the account balance. Let's assume that a user is allowed to view accounts below a certain balance that is
determined by an application context. Instead of hard-coding this balance amount in a policy function, we can use an
application context as in:
create or replace function vpd_pol_func
(
   p_schema in varchar2,
   p_table  in varchar2
)
return varchar2
is
begin
   -- The predicate compares BALANCE to an application-context attribute
   return 'balance < sys_context(''vpdctx'', ''maxbal'')';
end;
/
The attribute MAXBAL of the application context VPDCTX can be set earlier in the session, and the function can simply get
the value at runtime.
Note the example carefully here. The predicate has two parts: the one before the less-than sign and the other after it. The
one before, the word "balance," is a literal. The one after is more or less static because the application context variable is
constant until it is changed. If the application context attribute does not change, the entire predicate is constant, and hence
the function need not be re-executed. Oracle Database 10g recognizes this fact for optimization if the policy type is
defined as context sensitive. If no session context changes have occurred in the session, the function is not re-executed,
significantly improving performance.
Static Policy. Sometimes a business operation may warrant a predicate that is more static. For instance, in the context-
sensitive policy type example, we defined the maximum balance seen by a user as a variable. This approach is useful in
web applications where an Oracle userid is shared by many web users and the application sets this variable (an
application context) based on each user's authority. Therefore web users TAO and KARTHIK, both connecting to the
database as user APPUSER, may have two different values of the application context in their sessions. Here the value of
MAXBAL is not tied to the Oracle userid, but rather to the individual sessions of TAO and KARTHIK.
In the static policy case the predicate is more predictable, as described below.
LORA and MICHELLE are account managers for Acme Bearings and Goldtone Bearings respectively. When they connect
to the database, they use their own id and should only see the rows pertaining to them. In Lora's case, the predicate
becomes where CUST_NAME = 'ACME'; for Michelle, where CUST_NAME = 'GOLDTONE'. Here the predicate is tied to
their userids, and hence any session they create will always have the same value in the application context.
This fact can be exploited by 10g to cache the predicate in the SGA and reuse it in the session without ever re-executing
the policy function. The policy function looks like this:
create or replace function vpd_pol_func
(
   p_schema in varchar2,
   p_table  in varchar2
)
return varchar2
is
begin
   -- The predicate is tied to the connected user's own context value
   return 'cust_name = sys_context(''vpdctx'', ''cust_name'')';
end;
/
And the policy is defined as:
policy_type => dbms_rls.static
This approach ensures that the policy function is executed only once. Even if the application contexts are changed in the
session, the function is never re-executed, making this process extremely fast.
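For context, a complete registration of such a static policy might look like the following sketch; the schema, table, and policy names are illustrative:
begin
   dbms_rls.add_policy (
      object_schema   => 'BANK',
      object_name     => 'ACCOUNTS',
      policy_name     => 'ACCOUNTS_VPD',
      function_schema => 'SECADM',
      policy_function => 'VPD_POL_FUNC',
      statement_types => 'SELECT',
      policy_type     => dbms_rls.static
   );
end;
/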
Static policies are recommended for hosting your applications across several subscribers. In this case a single database
has data for several users or subscribers. When each subscriber logs in, an after-logon trigger can set the application
context to a value that is used in the policy function to very quickly generate a predicate.
However, defining a policy as static is also a double-edged sword. In the above example, we assumed that the value of
the application context attribute VPDCTX.CUST_NAME does not change inside a session. What if that assumption is
incorrect? If the value changes, the policy function will not be executed and therefore the new value will not be used in the
predicate, returning wrong results! So, be very careful in defining a policy as static; you must be absolutely certain that the
value will not change. If you can't make that assumption, better to define the policy as context sensitive instead.
Shared Policy Types. To reuse code and maximize the usage of parsed code, you might decide to use a common policy
function for several tables. For instance, in the above example, we may have different tables for different types of
accounts—SAVINGS and CHECKING—but the rule is still the same: users are restricted from seeing accounts with
balances more than they are authorized for. This scenario calls for a single function used for policies on CHECKING and
SAVINGS tables. The policy is created as context_sensitive.
Suppose this is the sequence of events:
1. Session connects
2. Application context is set
3. SELECT * FROM savings;
4. SELECT * FROM checking;
Even though the application context does not change between steps 3 and 4, the policy function will be re-executed,
simply because the tables selected are different. This is not desirable; the policy function is the same, and there is
no need to re-execute it.
New in 10g is the ability to share a policy across objects. In the above example, you would define the policy type of these
policies as:
policy_type => dbms_rls.shared_context_sensitive
Declaring the policies as "shared" improves performance by not executing the function again in cases such as the one
shown above.
91.3. Selective Columns
Now imagine a situation where the VPD policy should be applied only if certain columns are selected. In the above
example with table ACCOUNTS, the rows are as follows:
ACCTNO ACCT_NAME     BALANCE
------ ------------ --------
     1 BILL CAMP        1000
     2 TOM CONNOPHY     2000
     3 ISRAEL D         1500
Michelle is not supposed to see accounts with balances over 1,600. When she issues a query like the following:
SQL> select * from accounts;

ACCTNO ACCT_NAME     BALANCE
------ ------------ --------
     1 BILL CAMP        1000
     3 ISRAEL D         1500

acctno 2, with a balance of more than 1,600, has been suppressed from the display; the output shows only two rows,
even though the table has three. When she issues a query such as:
SQL> select count(*) from accounts;
which simply counts the number of records in the table, the output is two, not three.
However, here we may decide to relax the security policy a bit. In this query Michelle can't view confidential data such as
the account balance; she merely counts all the records in the table. Consistent with the security policy, we may allow this
query to count all the records whether or not she is allowed to see them. If this is the requirement, a new parameter in
the call to dbms_rls.add_policy in 10g allows that:
sec_relevant_cols => 'BALANCE'
Now when the user selects the column BALANCE, either explicitly or implicitly (as in select *), the VPD policy kicks in to
restrict the rows. Otherwise, all rows of the table are visible, as in the query where the user selected only the
count of the total rows, not the column BALANCE. If the above parameter is set as shown, then the query
SQL> SELECT COUNT(*) FROM accounts;
will return a count of three, not two. But the query:
SQL> SELECT * FROM accounts;
will still return only two records, as expected.
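A sketch of registering such a column-sensitive policy, reusing the policy function from earlier (schema, table, and policy names are illustrative):
begin
   dbms_rls.add_policy (
      object_schema     => 'BANK',
      object_name       => 'ACCOUNTS',
      policy_name       => 'ACCOUNTS_BAL_VPD',
      policy_function   => 'VPD_POL_FUNC',
      statement_types   => 'SELECT',
      sec_relevant_cols => 'BALANCE'
   );
end;
/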
92. Online Redefinition
Prior to Oracle9i, table redefinition was only possible using export/import, which meant the table was offline during the
process, or the ALTER TABLE ... MOVE syntax, which blocked DML during the operation. Neither of these methods is
suitable for large OLTP tables, as the downtime can be considerable. To solve this problem, Oracle9i introduced online
table redefinition using the DBMS_REDEFINITION package.
Online data reorganization, or the ability to allow users full access to the database during data reorganizations, improves
the overall database availability and reduces planned downtime. Oracle Database 10g includes many online data
reorganization features such as creating indexes online, rebuilding indexes online, coalescing indexes online, and moving
index-organized tables (IOTs) online.
Oracle's online table redefinition feature offers database administrators unprecedented flexibility to modify physical
attributes of a table and transform both the data and structure of a table while allowing users full access to the database.
This feature can also make the application upgrade process easier, safer and faster.
Oracle Database 10g includes the following online data reorganization enhancements:
 Online table redefinition enhancements
o Easy cloning of indexes, grants, constraints, etc.     
o Convert from LONG to LOB online
o Allow unique index instead of primary key
 Change tables without recompiling stored procedures
o Stored procedures can depend on the signature of a table instead of the table itself
 Online segment shrink
o Return unused space within the blocks of a segment to the tablespace
For large, active databases, it is sometimes necessary to redefine large "hot" tables to improve the performance of queries
or data manipulation language (DML) operations performed against them. Additionally, business applications may
require the underlying database structure to be changed or transformed periodically. Oracle Database provides a powerful
tool to redefine tables online. This mechanism provides a significant increase in availability compared to traditional
methods of redefining tables, which require tables to be taken offline.
When a table is redefined online, it is accessible by all read and write operations during the redefinition process.
Administrators then have control over when to switch from the original to the newly redefined table. The switch process is
very brief and is independent of the size of the table or the complexity of the redefinition. The redefinition process
effectively creates a new table and improves its data block layout efficiency.
The online table redefinition feature improves data availability, database performance, response time and disk space
utilization.
Online table redefinition allows administrators to:
 Modify the physical attributes or storage parameters of a table
 Move a heap table or IOT to a different tablespace
 Add support for parallel queries
 Add or drop partitioning support
 Recreate a heap table or IOT to reduce fragmentation
 Change a heap table to IOT and vice versa
 Add, drop, or rename columns in a table
 Transform data in a table
The process is similar to online rebuilds of indexes in that the original table is left online while a new copy of the table is
built. DML operations on the original table are stored in a temporary table for interim updates. Once the new table is
complete, the interim updates are merged into it and the names of the original and the new table are swapped in the data
dictionary. This step requires a DML lock, but it is only held for a short time. At this point all DML is processed against the
new table. The interim updates are automatically discarded, but the original table, with its new name, has to be dropped
manually. An example of the process would be:
-- Check table can be redefined
EXEC Dbms_Redefinition.Can_Redef_Table('SCOTT', 'EMPLOYEES');
-- Create new table
CREATE TABLE scott.employees2
TABLESPACE tools AS
SELECT empno, first_name, salary as sal
FROM employees WHERE 1=2;

-- Start Redefinition
EXEC Dbms_Redefinition.Start_Redef_Table( -
'SCOTT', -
'EMPLOYEES', -
'EMPLOYEES2', -
'EMPNO EMPNO, FIRST_NAME FIRST_NAME, SALARY*1.10 SAL');
-- Optionally synchronize new table with interim data before index creation
EXEC dbms_redefinition.sync_interim_table( -
'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Add new keys, FKs and triggers
ALTER TABLE employees2 ADD
(CONSTRAINT emp_pk2 PRIMARY KEY (empno)
USING INDEX TABLESPACE indx);
-- Complete redefinition
EXEC Dbms_Redefinition.Finish_Redef_Table( -
'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Remove original table, which now has the name of the new table
DROP TABLE employees2;
If the column mappings are omitted, it is assumed that all column names in the new table match those of the old table.
Functions can be applied to the data during the redefinition if they are specified in the column mapping. Any indexes,
keys, and triggers created against the new table must have unique names. All FKs should be created disabled, as the
completion of the redefinition will enable them.
The redefinition process can be aborted using:
EXEC Dbms_Redefinition.Abort_Redef_Table('SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
This process allows the following operations to be performed with no impact on DML operations:
 Converting a non-partitioned table to a partitioned table and vice versa.
 Switching a heap organized table to an index organized table and vice versa.
 Dropping non-primary key columns.
 Adding new columns.
 Adding or removing parallel support.
 Modifying storage parameters.
Online table redefinition has a number of restrictions including:
 There must be enough space to hold two copies of the table.
 Primary key columns cannot be modified.
 Tables must have primary keys.
 Redefinition must be done within the same schema.
 New columns added cannot be made NOT NULL until after the redefinition operation.
 Tables cannot contain LONGs, BFILEs or User Defined Types.
 Clustered tables cannot be redefined.
 Tables in the SYS or SYSTEM schema cannot be redefined.
 Tables with materialized view logs or materialized views defined on them cannot be redefined.
 Horizontal sub setting of data cannot be performed during the redefinition.
92.1. Online Redefinition of a Single Partition
Oracle9i introduced a feature that allows DBAs to perform complex table redefinitions online. The DBMS_REDEFINITION
utility allows users to change column names and datatypes, manipulate data, add and drop columns and partition tables
while the table is being accessed by online transactions. DBMS_REDEFINITION provides significant benefits over more
traditional methods of altering tables that require the object to be taken off-line during the redefinition process.
Oracle 10g Release 2 enhances DBMS_REDEFINITION by providing it with the capability of redefining a single partition of a
partitioned table. One benefit that stands out is that administrators can now use DBMS_REDEFINITION to move a
single partition to a different tablespace while the data is being updated. In addition, this enhancement allows a partitioned
table to be redefined one partition at a time.
Suppose we have a table TRANS that contains a history of transactions. This table is partitioned on TRANS_DATE, with
each quarter as a partition. During the normal course of business, the most recent partitions are updated frequently. After
a quarter is complete, there may not be much activity on that partition, and it can be moved to a different location.
However, the move itself will require a lock on the table, denying public access to the partition. How can we move the
partition with no impact on its availability?
In Oracle Database 10g Release 2, we can use online redefinition on a single partition. We can perform this task just as
we would for the entire table using the DBMS_REDEFINITION package but the underlying mechanism is different.
Whereas regular tables are redefined by creating a materialized view on the source table, a single partition is redefined
through an exchange partition method.
Let's see how it works. Here is the structure of the TRANS table:
SQL> desc trans
Name                              Null?    Type
--------------------------------- -------- -------------------------
TRANS_ID                                   NUMBER
TRANS_DATE                                 DATE
TXN_TYPE                                   VARCHAR2(1)
ACC_NO                                     NUMBER
TX_AMT                                     NUMBER(12,2)
STATUS                                     VARCHAR2(1)
The table has been partitioned as follows:
partition by range (trans_date)
(partition y03q1 values less than (to_date('04/01/2003','mm/dd/yyyy')),
partition y03q2 values less than (to_date('07/01/2003','mm/dd/yyyy')),
partition y03q3 values less than (to_date('10/01/2003','mm/dd/yyyy')),
partition y03q4 values less than (to_date('01/01/2004','mm/dd/yyyy')),
partition y04q1 values less than (to_date('04/01/2004','mm/dd/yyyy')),
partition y04q2 values less than (to_date('07/01/2004','mm/dd/yyyy')),
partition y04q3 values less than (to_date('10/01/2004','mm/dd/yyyy')),
partition y04q4 values less than (to_date('01/01/2005','mm/dd/yyyy')),
partition y05q1 values less than (to_date('04/01/2005','mm/dd/yyyy')),
partition y05q2 values less than (to_date('07/01/2005','mm/dd/yyyy'))
)
At some point in time, we decide to move the partition Y03Q2 to a different tablespace (TRANSY03Q2), which may be on
a different type of disk, one that is a little slower and cheaper. To do that, first confirm that we can redefine the table
online:
begin
dbms_redefinition.can_redef_table(
uname => 'ARUP',
tname => 'TRANS',
options_flag => dbms_redefinition.cons_use_rowid,
part_name => 'Y03Q2');
end;
/
There is no output here, so we have our confirmation. Next, create a temporary table to hold the data for that partition:
create table trans_temp
(
trans_id number,
trans_date date,
txn_type varchar2(1),
acc_no number,
tx_amt number(12,2),
status varchar2(1)
)
tablespace transy03q2
/
Note that because we are redefining a single partition of the range-partitioned table TRANS, we have defined the interim
table as non-partitioned. It's created in the desired tablespace, TRANSY03Q2. If the table TRANS had any local indexes,
we would have created those indexes (as non-partitioned, of course) on the table TRANS_TEMP.
Now we are ready to start the redefinition process:
begin
dbms_redefinition.start_redef_table(
uname => 'ARUP',
orig_table => 'TRANS',
int_table => 'TRANS_TEMP',
col_mapping => NULL,
options_flag => dbms_redefinition.cons_use_rowid,
part_name => 'Y03Q2');
end;
/
Note a few things about this call. First, the parameter col_mapping is set to NULL; in a single-partition redefinition, that
parameter is meaningless. Second, a new parameter, part_name, specifies the partition to be redefined. Third, note the
absence of the COPY_TABLE_DEPENDENTS parameter, which is also meaningless because the table itself is not
changed in any way; only the partition is moved.
If the table is large, the operation may take a long time, so synchronize the interim table midway through:
begin
dbms_redefinition.sync_interim_table(
uname => 'ARUP',
orig_table => 'TRANS',
int_table => 'TRANS_TEMP',
part_name => 'Y03Q2');
end;
/
Finally, finish the process with
begin
dbms_redefinition.finish_redef_table(
uname => 'ARUP',
orig_table => 'TRANS',
int_table => 'TRANS_TEMP',
part_name => 'Y03Q2');
end;
/
At this point, the partition Y03Q2 is in the tablespace TRANSY03Q2. If we had any global indexes on the table, they would
be marked UNUSABLE and would have to be rebuilt.
Single-partition redefinition is useful for moving partitions across tablespaces, a common information lifecycle
management task. Obviously, however, there are a few restrictions; for example, we can't change the partitioning method
(say, from range to hash) or change the structure of the table during the redefinition process.
93. OEM Jobs & Events
93.1. Overview
The DBMS_JOB package is used extensively to submit database jobs to run in the background, control the time or
interval of a run, report failures, and much more. The problem with the package is that it can handle only PL/SQL code
segments (anonymous blocks and stored program units). It cannot handle anything outside the database, such as an
operating system command file or executable. To do so, we would have to resort to an operating system scheduling
utility such as cron in Unix or the AT command in Windows, or to a third-party tool, one that may even extend
this functionality by providing a graphical user interface.
Even so, dbms_job has a distinct advantage over these alternatives: it is active only when the database is up and running.
If the database is down, the jobs don't run. A tool outside the database must check on its own whether the database is up,
and that can be difficult. Another advantage is that dbms_job is internal to the database; hence we can access it via a
database access utility such as SQL*Plus.
The Oracle Database 10g Scheduler feature offers the best of all worlds: a job scheduler utility right inside the database
that is sufficiently powerful to handle all types of jobs, not just PL/SQL code segments, and that can help us to create jobs
either with or without associated programs and/or schedules. Best of all, it comes with the database at no additional cost.
In this installment, we'll take a look at how it works.
93.2. Creating Jobs without Programs
Perhaps the concept can be best introduced through examples. Suppose we have created a shell script to move archived
log files to a different filesystem as follows:
/home/arup/dbtools/move_arcs.sh
We can specify the OS executable directly without creating it as a program first.
begin
   dbms_scheduler.create_job (
      job_name      => 'ARC_MOVE_2',
      schedule_name => 'EVERY_30_MINS',
      job_type      => 'EXECUTABLE',
      job_action    => '/home/arup/dbtools/move_arcs.sh',
      enabled       => true,
      comments      => 'Move Archived Logs to a Different Directory'
   );
end;
/
Similarly, we can create a job without a named schedule.
begin
   dbms_scheduler.create_job (
      job_name        => 'ARC_MOVE_3',
      job_type        => 'EXECUTABLE',
      job_action      => '/home/arup/dbtools/move_arcs.sh',
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
      enabled         => true,
      comments        => 'Move Archived Logs to a Different Directory'
   );
end;
/
One advantage of Scheduler over dbms_job is pretty clear from our initial example: the ability to call OS utilities and
programs, not just PL/SQL program units. This ability makes it the most comprehensive job management tool for
managing Oracle Database and related jobs. However, we may have noted another, equally important advantage: the
ability to define intervals in natural language. Note that in the above example we wanted our schedule to run every 30
minutes; hence the parameter REPEAT_INTERVAL is defined with a simple, English-like expression (not a PL/SQL one):
'FREQ=MINUTELY; INTERVAL=30'
A more complex example may help convey this advantage even better. Suppose our production applications become
most active at 7:00AM and 3:00PM. To collect system statistics, we want to run Statspack from Monday to Friday at
7:00AM and 3:00PM only. If we use DBMS_JOB.SUBMIT to create a job, the NEXT_DATE parameter will look something
like this:
DECODE
(
SIGN
(
15 - TO_CHAR(SYSDATE,'HH24')
),
1,
TRUNC(SYSDATE)+15/24,
TRUNC
(
SYSDATE +
DECODE
(
TO_CHAR(SYSDATE,'D'), 6, 3, 1
)
)
+7/24
)
Is that code easy to understand? Not really.
Now let's see the equivalent job in DBMS_SCHEDULER. The parameter REPEAT_INTERVAL will be as simple as:
'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15'
Furthermore, this parameter value can accept a variety of intervals, some of them very powerful. Here are some more
examples:
 Last Sunday of every month: FREQ=MONTHLY; BYDAY=-1SUN
 Every third Friday of the month: FREQ=MONTHLY; BYDAY=3FRI
 Every second Friday from the end of the month, not from the beginning: FREQ=MONTHLY; BYDAY=-2FRI
The minus sign before the number indicates counting from the end instead of the beginning.
What if we wanted to verify if the interval settings are correct? Wouldn't it be nice to see the various dates constructed
from the calendar string? Well, we can get a preview of the calculation of next dates using the
EVALUATE_CALENDAR_STRING procedure. Using the first example (running Statspack every day from Monday through
Friday at 7:00 AM and 3:00 PM), we can check the accuracy of our interval string as follows:
set serveroutput on size 999999
declare
   l_start_date  TIMESTAMP;
   l_next_date   TIMESTAMP;
   l_return_date TIMESTAMP;
begin
   l_start_date  := trunc(SYSTIMESTAMP);
   l_return_date := l_start_date;
for ctr in 1..10 loop
dbms_scheduler.evaluate_calendar_string(
'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15',
l_start_date, l_return_date, l_next_date
);
dbms_output.put_line('Next Run on: ' ||
to_char(l_next_date,'mm/dd/yyyy hh24:mi:ss')
);
l_return_date := l_next_date;
end loop;
end;
/
The output is:
Next Run on: 03/22/2004 07:00:00
Next Run on: 03/22/2004 15:00:00
Next Run on: 03/23/2004 07:00:00
Next Run on: 03/23/2004 15:00:00
Next Run on: 03/24/2004 07:00:00
Next Run on: 03/24/2004 15:00:00
Next Run on: 03/25/2004 07:00:00
Next Run on: 03/25/2004 15:00:00
Next Run on: 03/26/2004 07:00:00
Next Run on: 03/26/2004 15:00:00
This confirms that our settings are correct.
93.3. Associating Jobs with Programs
In the above case, we created a job independently of any program. Now let's create a program that refers to an operating
system utility, a schedule that specifies how often something should run, and then join the two to create a job.
First we need to make the database aware that our script is a program to be used in a job. To create this program, we
must have the CREATE JOB privilege.
begin
   dbms_scheduler.create_program (
      program_name   => 'MOVE_ARCS',
      program_type   => 'EXECUTABLE',
      program_action => '/home/arup/dbtools/move_arcs.sh',
      enabled        => TRUE,
      comments       => 'Moving Archived Logs to Staging Directory'
   );
end;
/
Here we have created a named program unit, specified it as an executable, and noted what the program unit is called.
Next, we will create a named schedule to be run every 30 minutes called EVERY_30_MINS. We would do that with:
begin
dbms_scheduler.create_schedule
(
schedule_name => 'EVERY_30_MINS',
repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
comments => 'Every 30-mins'
);
end;
Now that the program and schedule are created, we will associate the program to the schedule to create a job.
begin
   dbms_scheduler.create_job (
      job_name      => 'ARC_MOVE',
      program_name  => 'MOVE_ARCS',
      schedule_name => 'EVERY_30_MINS',
      comments      => 'Move Archived Logs to a Different Directory',
      enabled       => TRUE
   );
end;
/
This will create a job to be run every 30 minutes that executes the shell script move_arcs.sh. It will be handled by the
Scheduler feature inside the database; there is no need for cron or the AT utility.
93.4. Classes, Plans, and Windows
A good job scheduling system worth its salt must support the ability to prioritize jobs. For instance, a statistics collection
job might run into the OLTP workload window, affecting performance there. To ensure the stats collection job doesn't
consume resources that affect OLTP, we would use job classes, resource plans, and Scheduler Windows.
For example, while defining a job, we can make it part of a job class, which maps to a resource consumer group for
allocation of resources. To do that, we first need to define a resource consumer group called, say, OLTP_GROUP.
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_consumer_group (
consumer_group => 'oltp_group', comment => 'OLTP Activity Group'
);
dbms_resource_manager.submit_pending_area();
end;
/
Next, we need to create a resource plan.
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan('OLTP_PLAN','OLTP Database Activity Plan');
dbms_resource_manager.create_plan_directive(
plan => 'OLTP_PLAN', group_or_subplan => 'OLTP_GROUP',
comment => 'This is the OLTP Plan', cpu_p1 => 80, cpu_p2 => NULL,
cpu_p3 => NULL, cpu_p4 => NULL, cpu_p5 => NULL, cpu_p6 => NULL,
cpu_p7 => NULL, cpu_p8 => NULL, parallel_degree_limit_p1 => 4,
active_sess_pool_p1 => NULL, queueing_p1 => NULL,
switch_group => 'OTHER_GROUPS', switch_time => 10,
switch_estimate => true, max_est_exec_time => 10, undo_pool => 500,
max_idle_time => NULL, max_idle_blocker_time => NULL,
switch_time_in_call => NULL
);
dbms_resource_manager.create_plan_directive(
plan => 'OLTP_PLAN', group_or_subplan => 'OTHER_GROUPS', comment => NULL,
cpu_p1 => 20, cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
parallel_degree_limit_p1 => 0, active_sess_pool_p1 => 0, queueing_p1 => 0,
switch_group => NULL, switch_time => NULL, switch_estimate => false,
max_est_exec_time => 0, undo_pool => 10, max_idle_time => NULL,
max_idle_blocker_time => NULL, switch_time_in_call => NULL
);
dbms_resource_manager.submit_pending_area();
end;
Finally, we create a job class with the resource consumer group created earlier.
begin
dbms_scheduler.create_job_class(
job_class_name => 'OLTP_JOBS',
logging_level => DBMS_SCHEDULER.LOGGING_FULL, log_history => 45,
resource_consumer_group => 'OLTP_GROUP', comments => 'OLTP Related Jobs'
);
end;
Let's examine the various parameters in this call. The parameter LOGGING_LEVEL sets how much log data is tracked for
the job class. The setting LOGGING_FULL indicates that all activities on jobs in this class (creation, deletion, run, alteration,
and so on) will be recorded in the logs. The logs can be seen in the view DBA_SCHEDULER_JOB_LOG and are available
for 45 days, as specified in the parameter LOG_HISTORY (the default being 30 days). The resource consumer group
associated with this class is also specified. The job classes can be seen in the view DBA_SCHEDULER_JOB_CLASSES.
When we create a job, we can optionally associate a class to it. For instance, while creating COLLECT_STATS, a job that
collects optimizer statistics by executing a stored procedure collect_opt_stats(), we could have specified:
begin
dbms_scheduler.create_job
(
job_name => 'COLLECT_STATS',job_type => 'STORED_PROCEDURE',
job_action => 'collect_opt_stats', job_class => 'OLTP_JOBS',
repeat_interval => 'FREQ=WEEKLY; INTERVAL=1', enabled => true,
comments => 'Collect Optimizer Stats'
);
end;
This command will place the newly created job in the class OLTP_JOBS, which is associated with the resource consumer
group OLTP_GROUP. Under the plan OLTP_PLAN, that group's directive restricts how much CPU can be allocated to the
process, the maximum execution time before the session is switched to a different group, the group to switch to, and so
on. Any job defined in this class will be governed by the same resource plan directive. This capability is particularly useful
for preventing different types of jobs from taking over the resources of the system.
The Scheduler Window is a time frame with an associated resource plan that is activated during that window, thereby
supporting different priorities for jobs over time. For example, some jobs, such as batch programs that update databases
for real-time decision support, need high priority during the day but become low priority at night (or vice versa). We can
implement this schedule by defining different resource plans and then activating them using Scheduler Windows.
93.5. Monitoring
After a job is issued, we can monitor its status from the view DBA_SCHEDULER_JOB_LOG, where the column STATUS
shows the current status of the job. If it shows FAILED, we can drill down further to find the cause in the view
DBA_SCHEDULER_JOB_RUN_DETAILS, as sketched below.
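A minimal sketch of such a drill-down query (the job name is from the earlier example):
SELECT job_name, status, error#, actual_start_date, run_duration
  FROM dba_scheduler_job_run_details
 WHERE job_name = 'COLLECT_STATS'
 ORDER BY actual_start_date;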
93.6. Administration
So far, we've discussed how to create several types of objects: programs, schedules, job classes, and jobs. What if we
want to modify some of them to adjust to changing needs? Well, we can do that via APIs provided in the
DBMS_SCHEDULER package.
From the Database tab of the Enterprise Manager 10g home page, click on the Administration link. This brings up the
Administration screen, shown in Figure 1. All the Scheduler-related tasks are found under the heading "Scheduler" in the
bottom right-hand corner, shown inside a red ellipse in the figure.
Figure 1: Administration page
All the tasks related to the Scheduler, such as creating, deleting, and maintaining jobs, can be easily accomplished through
the hyperlinked tasks on this page. Let's see a few of these tasks. We created all these objects earlier, so clicking on the
Jobs tab will show a screen similar to Figure 2.
Figure 2: Scheduled jobs
Clicking on the job COLLECT_STATS allows us to modify its attributes. The screen shown in Figure 3 shows up when we
click on "Job Name."
Figure 3: Job parameters
As we can see, we can change parameters of the job, as well as the schedule and options, by clicking on the appropriate
tabs. After all changes are made, we press the "Apply" button to make the changes permanent. Before doing so, we
may want to click the button marked "Show SQL", which shows the exact SQL statement that will be issued, if for no other
reason than to see which APIs are called, thereby helping us understand the workings behind the scenes. We can also
store the SQL in a script and execute it later, or store it as a template for the future.
94. LogMiner
The LogMiner tool suite lets a DBA scan through online redo logs or archived redo logs to obtain the actual DML SQL
statements that were issued to the database server to create the redo change entries. LogMiner can also return the SQL
statements needed to undo the DML that was issued.
However, LogMiner did have a few drawbacks. Even with the Oracle Enterprise Manager user interface, it could take
considerable effort to get LogMiner to return the information needed for recovery. In addition, it did not support retrieval
of data from columns with Large Object (LOB) datatypes. The good news is that Oracle 10g has enhanced the LogMiner
tool suite to overcome many of these issues:
Automated Determination of Needed Log Files: Prior to Oracle 10g, one of the more tedious tasks before initiating a
LogMiner operation was to determine which archived redo logs were appropriate targets for mining.
This was handled by querying the V$ARCHIVED_LOG view to determine which archived redo log files might fulfill the
LogMiner query based on their start and end time periods, and then using the DBMS_LOGMNR.ADD_LOGFILE procedure
to mine against just those log files. Oracle 10g has greatly simplified this by scanning the control file of the target database
to determine which redo logs will fulfill the requested timeframe or SCN range.
Example: Letting the database's control file establish which redo logs LogMiner needs to complete its work
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI';
SQL> SPOOL C:\Listing_41.log
-- Start LogMiner, running from the database's online data dictionary and preparing for several mining attempts
BEGIN
DBMS_LOGMNR.START_LOGMNR(STARTTIME => '02/20/2005 06:00',
ENDTIME => '02/20/2005 12:00',
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE
);
END;
/
-- Find the desired data
SQL> SELECT seg_owner,seg_name,operation,sql_redo FROM v$logmnr_contents
WHERE operation = 'INSERT' AND seg_owner = 'HR';
-- Reissue the START_LOGMNR directive for a new starting and ending period, but this time based on specified starting and ending SCNs
BEGIN
DBMS_LOGMNR.START_LOGMNR(STARTSCN => 2266500,ENDSCN => 2266679,
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE
);
END;
/
-- Find the desired data
SQL> SELECT seg_owner,seg_name,operation,sql_redo FROM v$logmnr_contents
WHERE operation = 'INSERT' AND seg_owner = 'HR';
-- End the LogMiner session
SQL> EXEC DBMS_LOGMNR.END_LOGMNR;
SQL> SPOOL OFF
The above example shows the new CONTINUOUS_MINE directive of the DBMS_LOGMNR.START_LOGMNR procedure,
which directs Oracle to determine which log files are needed based on the ranges specified. It also illustrates that the
DBMS_LOGMNR.START_LOGMNR procedure can be executed multiple times within a LogMiner session to effectively
limit the range of log files required for the mining request.
In addition to the existing NO_SQL_DELIMITER directive, which removes the semicolon from the final display, Oracle 10g adds a new directive, PRINT_PRETTY_SQL, which formats the SQL into a more legible layout. Another new directive, NO_ROWID_IN_STMT, omits the ROWID clause from the reconstructed SQL; this is useful when the DBA intends to reissue the generated SQL, especially when it is going to be executed against a different database with different ROWIDs. See the examples below for these directives.
Example: Making LogMiner SQL output "prettier"
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI';
SQL> SPOOL C:\Listing_42.log
-- Start LogMiner, running from the database's online data dictionary and preparing for several mining attempts
BEGIN
DBMS_LOGMNR.START_LOGMNR(STARTTIME => '02/20/2005 06:05',
ENDTIME => '02/20/2005 06:15',
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE +
DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.PRINT_PRETTY_SQL
);
END;
/
-- Find the desired data
SELECT sql_redo FROM v$logmnr_contents WHERE seg_owner = 'HR';

-- End the LogMiner session


BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/
SPOOL OFF
Expanded Support for Additional Datatypes: LogMiner now supports retrieval of SQL redo and undo information for Large Objects (LOBs), including multibyte CLOBs and NCLOBs. Data stored in Index-Organized Tables (IOTs) is now also retrievable, so long as the IOT does not contain a LOB.
Storing the LogMiner Data Dictionary in Redo Logs: LogMiner needs access to the database's data dictionary so that it can make sense of the redo entries stored in the log files. Prior to Oracle 10g, only two options were available. The database's own data dictionary could be used, as long as the database instance was accessible. The other option was to store the LogMiner data dictionary in a flat file created by the DBMS_LOGMNR_D.BUILD procedure. This offers the advantage of being able to transport the data dictionary flat file and copies of the database's log files to another, possibly more powerful or more available, server for LogMiner analysis. However, this option does take some extra time and consumes significant resources while the flat file is created.
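For reference, building the flat-file dictionary looks like the following (a minimal sketch; the file and directory names are assumptions, and the UTL_FILE_DIR initialization parameter must already permit writes to that directory):
Example: Creating a LogMiner data dictionary as a flat file
BEGIN
DBMS_LOGMNR_D.BUILD(
DICTIONARY_FILENAME => 'dictionary.ora', -- assumed file name
DICTIONARY_LOCATION => '/u01/logmnr'); -- assumed directory listed in UTL_FILE_DIR
END;
/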
Oracle 10g now offers a melding of these two options: The capability to store the LogMiner data dictionary within the
active database's redo log files. The advantage to this approach is that the data dictionary listing is guaranteed to be
consistent, and it is faster than creating the flat file version of the data dictionary. The resulting log files can then be
specified as the source of the LogMiner data dictionary during mining operations. See the below example to implement
this option.
Example: Creating a LogMiner data dictionary and storing it within the online redo log files
BEGIN
DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
END;
/
DDL_DICT_TRACKING:
This feature, introduced in Oracle9i, allows a LogMiner session using either a flat-file or redo-log dictionary to update its internal dictionary whenever a DDL event is found in the redo log files. This ensures that SQL_REDO and SQL_UNDO information is correct for objects that are modified in the redo log files after the LogMiner internal dictionary was built. By default this option is disabled.
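A minimal sketch of enabling this option, assuming the dictionary was previously extracted into the redo logs as shown above and that NLS_DATE_FORMAT is set as in the earlier examples:
BEGIN
DBMS_LOGMNR.START_LOGMNR(
STARTTIME => '02/20/2005 06:00',
ENDTIME => '02/20/2005 12:00',
OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
DBMS_LOGMNR.DDL_DICT_TRACKING +
DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/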

95. RMAN Tablespace Point-in-Time Recovery (TSPITR)


Recovery Manager (RMAN) automatic tablespace point-in-time recovery (commonly abbreviated TSPITR) enables us to
quickly recover one or more tablespaces in an Oracle database to an earlier time, without affecting the state of the rest of
the tablespaces and other objects in the database.
This chapter explains when we can and cannot use TSPITR, what RMAN actually does to our database during TSPITR,
how to prepare a database for TSPITR, how to run TSPITR, and options for controlling the TSPITR process.

95.1. Understanding RMAN TSPITR


In order to use TSPITR effectively, we need to understand what problems it can solve for us, what the major elements
used in TSPITR are, what RMAN does during TSPITR, and limitations on when and how it can be applied.

95.1.1. RMAN TSPITR Concepts

Figure 1 - Tablespace Point-in-Time Recovery (TSPITR) Architecture

The above figure contains the following entities:


 The target instance, containing the tablespace to be recovered
 The Recovery Manager client
 The control file and (optional) recovery catalog, used for the RMAN repository records of backup activity
 Archived redo logs and backup sets from the target database, which are the source of the reconstructed
tablespace.
 The auxiliary instance, an Oracle database instance used in the recovery process to perform the actual work of
recovery.
There are four other important terms related to TSPITR, which will be used in the rest of this discussion:
 The target time, the point in time or SCN that the tablespace will be left at after TSPITR
 The recovery set, which consists of the datafiles containing the tablespaces to be recovered;
 The auxiliary set, which includes datafiles required for TSPITR of the recovery set which are not themselves part
of the recovery set. The auxiliary set typically includes:
o A copy of the SYSTEM tablespace
o Datafiles containing rollback or undo segments from the target instance
o In some cases, a temporary tablespace, used during the export of database objects from the
auxiliary instance
The auxiliary instance has other files associated with it, such as a control file, parameter file, and online logs, but
they are not part of the auxiliary set.
 The auxiliary destination, an optional location on disk which can be used to store any of the auxiliary set datafiles,
control files and online logs of the auxiliary instance during TSPITR. Files stored here can be deleted after
TSPITR is complete.

95.1.2. How TSPITR Works With an RMAN-Managed Auxiliary Instance


To perform TSPITR of the recovery set using RMAN and an automated auxiliary instance, we carry out the preparations
for TSPITR described in "Planning and Preparing for TSPITR", and then issue the RECOVER TABLESPACE command,
specifying, at a minimum, the tablespaces of the recovery set and the target time for the point-in-time recovery, and, if
desired, an auxiliary destination as well.
RMAN then carries out the following steps:
1. If there is no connection to an auxiliary instance, RMAN creates the auxiliary instance, starts it up and connects
to it.
2. Takes the tablespaces to be recovered offline in the target database
3. Restores a backup controlfile from a point in time before the target time to the auxiliary instance
4. Restores the datafiles from the recovery set and the auxiliary set to the auxiliary instance. Files are restored
either in locations we specify for each file, or the original location of the file (for recovery set files) or in the
auxiliary destination (for auxiliary set files, if we used the AUXILIARY DESTINATION argument of RECOVER
TABLESPACE)
5. Recovers the restored datafiles in the auxiliary instance to the specified time
6. Opens the auxiliary database with the RESETLOGS option
7. Exports the dictionary metadata about objects in the recovered tablespaces to the target database
8. Shuts down the auxiliary instance
9. Issues SWITCH commands on the target instance, so that the target database control file now points to the
datafiles in the recovery set that were just recovered at the auxiliary instance.
10. Imports the dictionary metadata from the auxiliary instance to the target instance, allowing the recovered objects
to be accessed.
11. Deletes all auxiliary set files.
At that point the TSPITR process is complete. The recovery set datafiles are returned to their contents at the specified
point in time, and belong to the target database.

95.1.3. Deciding When to Use TSPITR


Like a table import, RMAN TSPITR enables us to recover a consistent data set; however, the data set recovered includes
an entire tablespace rather than one object.
RMAN TSPITR is most useful for situations such as these:
 Recovering data lost after an erroneous TRUNCATE TABLE statement;
 Recovering from logical corruption of a table;
 Undoing the effects of an incorrect batch job or other DML statement that has affected only a subset of the
database;
 Recovering a logical schema to a point different from the rest of the physical database, when multiple schemas
exist in separate tablespaces of one physical database.
Note that, as with database point-in-time recovery (DBPITR), we cannot perform TSPITR if we do not have our archived
redo logs. For databases running in NOARCHIVELOG mode, we cannot perform TSPITR.
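Because archived redo logs are a hard requirement, it is worth confirming the log mode before planning a TSPITR:
SQL> SELECT log_mode FROM v$database;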

95.1.4. Limitations of TSPITR


There are a number of situations which we cannot resolve by using TSPITR.
 We cannot recover dropped tablespaces.
 We cannot recover a renamed tablespace to a point in time before it was renamed. If we try to perform a TSPITR
to an SCN earlier than the rename operation, RMAN cannot find the new tablespace name in the repository as of
that earlier SCN (because the tablespace did not have that name at that SCN).
In this situation, we must recover the entire database to a point in time before the tablespace was renamed. The
tablespace will be found under the name it had at that earlier time.
 We cannot recover tables without their associated constraints, or constraints without the associated tables.
 We cannot use TSPITR to recover any of the following:
o Replicated master tables
o Partial tables (for example, if we perform RMAN TSPITR on partitioned tables and spread
partitions across multiple tablespaces, then we must recover all tablespaces which include
partitions of the table.)
o Tables with VARRAY columns, nested tables, or external files
o Snapshot logs and snapshot tables
o Tablespaces containing undo or rollback segments
o Tablespaces that contain objects owned by SYS, including rollback segments

TSPITR has some other limitations:


 If a datafile was added after the point to which RMAN is recovering, an empty datafile by the same name will be
included in the tablespace after RMAN TSPITR.
 TSPITR will not recover query optimizer statistics for recovered objects. We must gather new statistics after the
TSPITR.
 Assume that we run TSPITR on a tablespace, and then bring the tablespace online at time t. Backups of the
tablespace created before time t are no longer usable for recovery with a current control file. We cannot run
TSPITR again on this tablespace to recover it to any time less than or equal to time t, nor can we use the current
control file to recover the database to any time less than or equal to t. Therefore, we must back up the tablespace
as soon as TSPITR is complete.

95.1.5. Limitations of TSPITR Without a Recovery Catalog


If we do not use a recovery catalog when performing TSPITR, then note the following special restrictions:
 The undo segments at the time of the TSPITR must be part of the auxiliary set. Because RMAN has no historical
record of the undo in the control file, RMAN assumes that the current rollback or undo segments were the same
segments present at the time to which recovery is performed. If the undo segments have changed since that time,
then TSPITR will fail.
 TSPITR to a time that is too old may not succeed if Oracle has reused the control file records for needed backups.
(In planning our database, set the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter to a value
large enough to ensure that control file records needed for TSPITR are kept.)
 When not using a recovery catalog, the current control file has no record of the older incarnation of the recovered
tablespace. Thus, recovery with a current control file that involves this tablespace can no longer use a backup
taken prior to time t. We can, however, perform incomplete recovery of the whole database to any time less than
or equal to t, if we can restore a backup control file from before time t.

95.2. Performing Basic RMAN TSPITR


Having selected our tablespaces to recover and our target time, we are now ready to perform RMAN TSPITR. We have a
few different options available to us:
 Fully automated TSPITR--in which we specify an auxiliary destination and let RMAN manage all aspects of the
TSPITR. This is the simplest way to perform TSPITR, and is recommended unless we specifically need more
control over the location of recovery set files after TSPITR or auxiliary set files during TSPITR, or control over the
channel configurations or some other aspect of our auxiliary instance.
 Customized TSPITR with an automatic auxiliary instance--in which we base our TSPITR on the behavior of fully
automated TSPITR, possibly still using an auxiliary destination, but customize one or more aspects of the
behavior, such as the location of auxiliary set or recovery set files, or specifying initialization parameters or
channel configurations for the auxiliary instance created and managed by RMAN.
 TSPITR with our own auxiliary instance--in which we take responsibility for setting up, starting, stopping and
cleaning up the auxiliary instance used in TSPITR, and possibly also manage the TSPITR process using some of
the methods available in customized TSPITR with an automatic auxiliary instance.

95.2.1. Fully Automated RMAN TSPITR


When performing fully automated TSPITR, letting RMAN manage the entire process, there are only two requirements
beyond the preparations in "Planning and Preparing for TSPITR":
 We must specify the auxiliary destination for RMAN to use for the auxiliary set datafiles and other files for the
auxiliary instance.
 We must configure any channels required for the TSPITR on the target instance. (The auxiliary instance will use
the same channel configuration as the target instance when performing the TSPITR.)
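As an illustration, a disk channel configuration on the target might look like the following (a sketch; the parallelism degree and format string are assumptions to be adapted to the local backup layout):
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;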
RMAN bases as much of the configuration for TSPITR as possible on our target database. During TSPITR, the recovery
set datafiles are written in their current locations on the target database. The same channel configurations in effect on the
target database are used on the auxiliary instance when restoring files from backup. Auxiliary set datafiles and other
auxiliary instance files, however, are stored in the auxiliary destination.

95.2.1.1. Using an Auxiliary Destination


Oracle Corporation recommends that we use an auxiliary destination with our auxiliary instance. Even if we use other
methods to rename some or all of the auxiliary set datafiles, specifying an AUXILIARY DESTINATION parameter provides
a default location for auxiliary set datafiles for which names are not specified. This way, TSPITR will not fail if we
inadvertently do not provide names for all auxiliary set datafiles.
To specify an auxiliary destination, find a location on disk where there is enough space to hold our auxiliary set datafiles.
Then, use the AUXILIARY DESTINATION parameter in our RECOVER TABLESPACE command to specify the auxiliary
destination location, as shown in the next section.

95.2.1.2. Performing Fully Automated RMAN TSPITR


To actually perform automated RMAN TSPITR, start the RMAN client, connecting to the target database and, if applicable,
a recovery catalog. This example shows connecting in NOCATALOG mode, using operating system authentication:
% rman TARGET /
Note: Do not connect to an auxiliary instance when starting the RMAN client for automated TSPITR. If there is no
connected auxiliary instance, RMAN constructs the automatic auxiliary instance for us when carrying out the RECOVER
TABLESPACE command. (If there is a connected auxiliary instance, RMAN will assume that we are trying to manage our
own auxiliary instance, and try to use the connected auxiliary for TSPITR.)
If we have configured channels that RMAN can use to restore from backup on the primary instance, then we are ready to
perform TSPITR now, by running the RECOVER TABLESPACE... UNTIL... command.
This example returns the users and tools tablespaces to the end of log sequence number 1300, and stores the auxiliary
instance files (including auxiliary set datafiles) in the destination /disk1/auxdest:
RMAN> RECOVER TABLESPACE users, tools
UNTIL LOGSEQ 1300 THREAD 1
AUXILIARY DESTINATION '/disk1/auxdest';
Assuming the TSPITR process completes without error, the tablespaces are taken offline by RMAN, restored from backup
and recovered to the desired point in time on the auxiliary instance, and then re-imported to the target database. The
tablespaces are left offline at the end of the process. All auxiliary set datafiles and other auxiliary instance files are
cleaned up from the auxiliary destination.

95.2.1.3. Tasks to Perform After Successful TSPITR


If TSPITR completes successfully, we must back up the recovered tablespaces, and then we can bring them online.

Backing Up Recovered Tablespaces After TSPITR


It is very important that we back up recovered tablespaces immediately after TSPITR is completed.
After we perform TSPITR on a tablespace, we cannot use backups of that tablespace from before the TSPITR was
completed and the tablespace put back on line. If we start using the recovered tablespaces without taking a backup, we
are running our database without a usable backup of those tablespaces. For this example, the users and tools
tablespaces must be backed up, as follows:
RMAN> BACKUP TABLESPACE users, tools;
We can then safely bring the tablespaces online, as follows:
RMAN> SQL "ALTER TABLESPACE users, tools ONLINE";
Our recovered tablespaces are now ready for use.

Handling Errors in Automated TSPITR


In the event of an error during automated TSPITR, we should refer to "Troubleshooting RMAN TSPITR". The auxiliary set datafiles and other auxiliary instance files will be left in place in the auxiliary destination as an aid to troubleshooting. The state of the recovery set files is determined by the type of failure. Once we resolve the problem, we can try our TSPITR operation again.

96. DBMS Built-in Packages


96.1. DBMS_SCHEDULER
96.1.1. Basic Features
The Scheduler does keep the basic functionality of DBMS_JOB intact:
 A task can be scheduled to run at a particular date and time.
 A task can be scheduled to run only once, or multiple times.
 A task can be turned off temporarily or removed completely from the schedule.
 Complex scheduling is still available, but now much simpler. (For example, DBMS_JOB could be manipulated into
running a task every Tuesday, Thursday and Saturday at 08:00, but it did take some experimentation with
NEXT_DATE and the INTERVAL parameter of DBMS_JOB before we got it right.)

96.1.2. Scheduler Components


The Scheduler uses three basic components to handle the execution of scheduled tasks. An instance of each component
is stored as a separate object in the database when it is created:
Programs: A program defines what the Scheduler will execute. A program's attributes include its name, its type (e.g. a
PL/SQL procedure or anonymous block), and the action it is expected to perform. A program can also accept zero to
many arguments, which makes it a flexible building block for constructing schemes of tasks to be scheduled.
Schedules: A schedule defines when and at what frequency the Scheduler will execute a particular set of tasks. A
schedule's attributes include the date on which a set of tasks should begin, how often the tasks should be repeated and
when the set of tasks should no longer be executed, either as of a specified date and time, or after a specified number of
repetitions.
Jobs: A job assigns a specific task to a specific schedule. A job therefore tells the schedule which tasks - either one-time
tasks created "on the fly," or predefined programs - are to be run. A specific program can be assigned to one, multiple, or
no schedule(s); likewise, a schedule may be connected to one, multiple, or no program(s). The beauty of the redesigned Scheduler is that it relies upon the reuse of these three basic objects. This corrects one of the more serious shortcomings of DBMS_JOB: for each scheduled task, a separate job had to be created, even if the task being performed was essentially identical.
A perfect example of this shortcoming is refreshing table and index statistics. Since a database's objects are typically not
spread evenly across multiple schemas, we normally scheduled statistics refresh for different schemas at different
frequencies, which meant we needed to create separate DBMS_JOBs for each invocation of
DBMS_STATS.GATHER_SCHEMA_STATS. With the Scheduler, though, we can now create a program that accepts the
schema owner as an argument, create an appropriate schedule for each schema, and then schedule separate jobs to run
at the appropriate time for each schema.
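A minimal sketch of the statistics-refresh scheme just described (the names STATS_PROG, HR_STATS_SCHED and HR_STATS_JOB are illustrative assumptions):
BEGIN
-- A reusable program that gathers statistics for whichever schema is passed in
DBMS_SCHEDULER.CREATE_PROGRAM(
program_name => 'STATS_PROG',
program_type => 'STORED_PROCEDURE',
program_action => 'DBMS_STATS.GATHER_SCHEMA_STATS',
number_of_arguments => 1,
enabled => FALSE); -- must stay disabled until its argument is defined
DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
program_name => 'STATS_PROG',
argument_position => 1,
argument_type => 'VARCHAR2');
DBMS_SCHEDULER.ENABLE('STATS_PROG');
-- A schedule: nightly at 23:00
DBMS_SCHEDULER.CREATE_SCHEDULE(
schedule_name => 'HR_STATS_SCHED',
repeat_interval => 'FREQ=DAILY; BYHOUR=23');
-- A job ties the program to the schedule with a schema-specific argument
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'HR_STATS_JOB',
program_name => 'STATS_PROG',
schedule_name => 'HR_STATS_SCHED',
enabled => FALSE);
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
job_name => 'HR_STATS_JOB',
argument_position => 1,
argument_value => 'HR');
DBMS_SCHEDULER.ENABLE('HR_STATS_JOB');
END;
/
A second schema would reuse STATS_PROG as-is and need only its own schedule and job.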

96.1.3. Advanced Features


The new Scheduler also offers some advanced features that DBMS_JOB never offered. Here is a brief sampling that we
will flesh out in the next series of articles:
Job Classes: We are probably among the few ex-mainframers who will admit to having enjoyed working with Job Control Language (JCL). We loved its restartability and especially the level of control it gave us to accomplish a complex set of tasks in background mode. Moreover, we especially savored the concept of a job class: a set of resource thresholds that helped ensure that jobs needing fewer resources (e.g., less CPU or shorter run time) would get precedence over jobs expected to run longer or consume more resources.
In the same way, the Scheduler provides the capability to group together jobs that have similar resource demands into job classes. A job class can be used to ensure that all jobs within it utilize the same job class attributes, execute at a higher or lower priority than jobs in other job classes, and only start if there are sufficient resources available. For example, job class InstantInvoice might encompass tasks that call packages and procedures that produce invoices immediately after a customer has been completely serviced, while job class DBManagemt might encompass tasks related to database backups, exports and statistics calculation.
Windows: Most database shops we have worked in tend to have periods of peak and off-peak use. For example, many
U.S. companies typically perform the majority of their on-line transaction processing tasks such as order fulfillment,
customer service, and production (manufacturing or supply of services) during the morning, afternoon, and early evening,
with demand tapering off during the evening and early morning. The Scheduler acknowledges this business reality, and
provides the concept of windows to assign resources to job classes. For example, window PeakTime might be established
for scheduled tasks that give 75% priority to the aforementioned InstantInvoice job class, but only a 25% priority to all
other job classes, during peak activity periods. Likewise, an OffPeak window could be established for scheduled database
maintenance that would give jobs in the DBManagemt job class 90% priority over all other job classes during periods of
off-peak usage.
Window Groups: The Scheduler also allows windows with similar scheduling properties - for example, normal business
weekday off-peak time, weekends and holidays - to be collected within window groups for easier management of jobs and
scheduled tasks.
Window Overlaps: The Scheduler also acknowledges that it is possible for windows to overlap each other, and it provides a simple conflict-resolution method to ensure that the appropriate jobs get the appropriate resources. As we might guess, much of the functionality in these advanced features is coupled with the existing Database Resource Manager (DRM) functionality that enables and enforces resource groups.
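A sketch of how such a job class and window might be defined (the consumer group and resource plan names are assumptions; they must already exist in Database Resource Manager):
BEGIN
-- Group maintenance jobs under one class tied to a consumer group
DBMS_SCHEDULER.CREATE_JOB_CLASS(
job_class_name => 'DBMANAGEMT',
resource_consumer_group => 'MAINTENANCE_GROUP', -- assumed consumer group
comments => 'Backups, exports and statistics jobs');
-- An off-peak window that activates a resource plan favouring that class
DBMS_SCHEDULER.CREATE_WINDOW(
window_name => 'OFFPEAK',
resource_plan => 'OFFPEAK_PLAN', -- assumed resource plan
repeat_interval => 'FREQ=DAILY; BYHOUR=22',
duration => INTERVAL '0 06:00:00' DAY TO SECOND,
comments => 'Nightly maintenance window');
END;
/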

96.2. DBMS_REPAIR
The DBMS_REPAIR utility provides a mechanism to rebuild the impacted freelists and bitmap entries after fixing block
corruption. This procedure recreates the header portion of the datafile, allowing Oracle to use the newly repaired blocks.
This package allows us to detect and repair corruption. The process requires two administration tables to hold a list of
corrupt blocks and index keys pointing to those blocks. These are created as follows:
BEGIN
DBMS_REPAIR.admin_tables (
table_name => 'REPAIR_TABLE',
table_type => DBMS_REPAIR.repair_table,
action => DBMS_REPAIR.create_action,
tablespace => 'USERS');

DBMS_REPAIR.admin_tables (
table_name => 'ORPHAN_KEY_TABLE',
table_type => DBMS_REPAIR.orphan_table,
action => DBMS_REPAIR.create_action,
tablespace => 'USERS');
END;
With the administration tables built we are able to check the table of interest using the CHECK_OBJECT procedure:
SET SERVEROUTPUT ON
DECLARE
v_num_corrupt INT;
BEGIN
v_num_corrupt := 0;
DBMS_REPAIR.check_object (
schema_name => 'SCOTT',
object_name => 'DEPT',
repair_table_name => 'REPAIR_TABLE',
corrupt_count => v_num_corrupt);
DBMS_OUTPUT.put_line('number corrupt: ' || TO_CHAR (v_num_corrupt));
END;
Assuming the number of corrupt blocks is greater than 0, the CORRUPTION_DESCRIPTION and the REPAIR_DESCRIPTION columns of the REPAIR_TABLE can be used to get more information about the corruption. At this point the corrupt blocks have been detected, but are not yet marked as corrupt. The FIX_CORRUPT_BLOCKS procedure can be used to mark the blocks as corrupt, allowing them to be skipped by DML once the table is in the correct mode:
SET SERVEROUTPUT ON
DECLARE
v_num_fix INT;
BEGIN
v_num_fix := 0;
DBMS_REPAIR.fix_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => Dbms_Repair.table_object,
repair_table_name => 'REPAIR_TABLE',
fix_count => v_num_fix);
DBMS_OUTPUT.put_line('num fix: ' || TO_CHAR(v_num_fix));
END;
Once the corrupt table blocks have been located and marked all indexes must be checked to see if any of their key entries
point to a corrupt block. This is done using the DUMP_ORPHAN_KEYS procedure:
SET SERVEROUTPUT ON
DECLARE
v_num_orphans INT;
BEGIN
v_num_orphans := 0;
DBMS_REPAIR.dump_orphan_keys (
schema_name => 'SCOTT',
object_name => 'PK_DEPT',
object_type => DBMS_REPAIR.index_object,
repair_table_name => 'REPAIR_TABLE',
orphan_table_name => 'ORPHAN_KEY_TABLE',
key_count => v_num_orphans);
DBMS_OUTPUT.put_line('orphan key count: ' || TO_CHAR(v_num_orphans));
END;
If the orphan key count is greater than 0 the index should be rebuilt. The process of marking the table block as corrupt
automatically removes it from the freelists. This can prevent freelist access to all blocks following the corrupt block. To
correct this freelists must be rebuilt using the REBUILD_FREELISTS procedure:
BEGIN
DBMS_REPAIR.rebuild_freelists (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => DBMS_REPAIR.table_object);
END;
The final step in the process is to make sure all DML statements ignore the data blocks marked as corrupt. This is done
using the SKIP_CORRUPT_BLOCKS procedure:
BEGIN
DBMS_REPAIR.skip_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => DBMS_REPAIR.table_object,
flags => DBMS_REPAIR.skip_flag);
END;
The SKIP_CORRUPT column in the DBA_TABLES view indicates if this action has been successful. At this point the table
can be used again but we will have to take steps to correct any data loss associated with the missing blocks.
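We can confirm the flag with a quick dictionary query:
SQL> SELECT table_name, skip_corrupt FROM dba_tables
WHERE owner = 'SCOTT' AND table_name = 'DEPT';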

96.3. DBMS_OUTPUT built-in


We can use this standard packaged procedure to write messages to the buffer area and later retrieve those messages. One remarkable use of this package is its capability to display the buffer contents on your screen when you are using SQL*DBA or SQL*Plus.
SQL> DBMS_OUTPUT.PUT_LINE(message varchar2);
This procedure is used to write a message to the session's buffer. We can invoke DBMS_OUTPUT.PUT_LINE(message) either within a PL/SQL block or directly from the SQL prompt. To instruct SQL*Plus or SQL*DBA to flush the buffer contents to the screen (clearing the buffer), we must use SET SERVEROUTPUT ON.
Example:
SQL> SET SERVEROUTPUT ON
The default buffer size is 2000 bytes. To override this limit, use the following option: SET SERVEROUTPUT ON SIZE 4000;
SQL> Execute DBMS_OUTPUT.PUT_LINE(SYSDATE)
15-OCT-96
PL/SQL procedure successfully completed.
SQL> SET SERVEROUTPUT OFF -- disables DBMS_OUTPUT
The DBMS_OUTPUT.GET_LINE procedure, on the other hand, reads one line from the buffer area. The syntax is as follows:
SQL> DBMS_OUTPUT.GET_LINE(Message out Varchar2, Status out integer)
Once this procedure is executed it returns the buffer line into the Message variable and the status into the Status variable. If a line of information is found in the buffer, the procedure returns a zero in the Status variable; otherwise the status is nonzero.
Note: The DBMS_OUTPUT write and read operations must be enabled by executing the DBMS_OUTPUT.ENABLE procedure; failure to do so will prevent this package from functioning as expected. Using SET SERVEROUTPUT ON enables the package automatically and causes the output of the buffer to be redirected to the screen. The buffer is flushed after the output is read and displayed on the screen. The following example illustrates this:
SQL> Execute DBMS_OUTPUT.ENABLE
SQL> Execute DBMS_OUTPUT.PUT_LINE('HELLO') -- Message is now in the buffer
DECLARE
MESS VARCHAR2(100);
STAT integer;
BEGIN
DBMS_OUTPUT.GET_LINE(MESS,STAT); -- Message is moved to MESS
INSERT INTO DEPT VALUES (50,MESS,STAT);
COMMIT;
END;
SQL> SELECT * FROM DEPT WHERE DEPTNO=50;
DEPTNO DNAME LOCATION
----------- ----------- ----------------
50 HELLO 0
The GET_LINE procedure reads the buffer and initializes the MESS and STAT variables. Then these values are inserted
into the table DEPT. A Query on the Dept. table verifies the message.

96.4. DBMS_ALERT Built-In


This package provides support for inter-session notification of database events; inter-session communication is a key strength of this feature. Before embarking on illustrating alerts, it is important to emphasize that alerts are transaction dependent. This means that an alert is not broadcast until the signaling database event is committed.
Assume that session1, session2, ... ,session3 are all connected to the same database. Also assume that session1 is
required to notify other sessions about a certain event (i.e. Sending them an ALERT). To establish inter-session
communication, session1 must broadcast a notification signal to the virtual network. Such a signal would be triggered by
the signaling event using the procedure DBMS_ALERT.SIGNAL. Sessions that are interested in being notified about this
alert must register their interest in it using DBMS_ALERT.REGISTER. The alert message is read using
DBMS_ALERT.WAITONE or DBMS_ALERT.WAITANY. The latter listens on any of the alerts that are registered, while the
former procedure listens on one particular Alert.
Example
Assume that session1 and session2 need to inform interested sessions that they are logged on to the database system. Session1 should execute the following:
SQL> Execute DBMS_ALERT.SIGNAL('ALERT1','I am user1')
SQL> COMMIT;
or equivalently a call can be made to the same procedure from PL/SQL block.
SQL> DBMS_ALERT.SIGNAL('ALERT1','I am User '|| user);
SQL> COMMIT;
Session2 should similarly execute the following
SQL> Execute DBMS_ALERT.SIGNAL('ALERT1','I am user2')
SQL> COMMIT;
The important thing to notice is that an alert is not actually sent until a COMMIT is issued by the signaling session. The interested session, say session3, must execute the following code:
SQL> Declare
Status number;
Message varchar2(50);
Begin
DBMS_ALERT.REGISTER ('ALERT1'); -- Listen on ALERT1
DBMS_ALERT.REGISTER ('ALERT2'); -- Listen on ALERT2
Loop
DBMS_ALERT.WAITANY(message, status); -- wait for any registered Alerts
If status = 0 Then
DBMS_OUTPUT.PUT_LINE (message);
Else
DBMS_OUTPUT.PUT_LINE('Error');
End if;
End loop;
End;
Important Notes:
1. All registered sessions will be notified about the ALERT they have registered for only if the registration for the
ALERT took place before the COMMIT was issued by the signaling session.
2. The packaged procedure DBMS_ALERT.WAITANY() puts the application in a waiting state; the application is
blocked in the database and can’t do any other work until the Alert is received.
3. If the interested sessions are not currently waiting (but registered), they are notified the next time they do a wait
call. Remember that this holds true only because they are registered.
4. If multiple sessions try to concurrently perform signals on the same ALERT (say ALERT1), the first session
blocks concurrent sessions until the first session commits.
Example
Assume that you are using an Oracle Forms application to show current data residing in one of the tables of the database. We also need the application to automatically refresh the screen whenever the table is changed by another user. A database trigger is needed to signal an alert whenever the database table under consideration is changed. Consider the following code:
Create or Replace trigger Send_Alert
After update on emp
Begin
DBMS_ALERT.SIGNAL('EMP_ALERT', 'Any message');
End;
The Forms application needs the following trigger, When-New-Forms-Instance
Declare
Status integer;
My_message varchar2(100);
Begin
-- Register you interest in ALERT Emp_Alert
DBMS_ALERT.REGISTER('Emp_Alert');
Execute_query;
Synchronize;
-- Loop for ever and listen on ALERT emp_alert
Loop
DBMS_ALERT.WAITONE('Emp_Alert',my_message,status);
If status = 0 Then
-- Refresh Screen
Execute_Query;
Synchronize;
Else
Message ('Error');
End if;
End Loop;
End;
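When a session is no longer interested in an alert it should unregister, so that the signaling side no longer incurs notification overhead on its behalf; for example:
SQL> Execute DBMS_ALERT.REMOVE('Emp_Alert') -- unregister from one alert
SQL> Execute DBMS_ALERT.REMOVEALL -- or unregister from all alerts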

96.5. DBMS_PIPE Built-in


The previous description of alerts has clearly shown that alerts are transactional (they need a commit) and act like radio broadcasting: if you are tuned in (registered) you will receive the broadcast, otherwise the broadcast message is lost. Pipes, on the other hand, are:
 Not transaction dependent.
 Stored in a pipe buffer in FIFO manner, where incoming information on the same pipe will not overwrite earlier messages; rather, incoming messages queue in FIFO fashion until they are read.
 Readable by polling techniques; that is, you can check the pipe whenever you want without being blocked until the pipe receives information.
 Consumed on read: once a message is received by a session, it is removed from the pipe and cannot be received by other sessions.
 Optionally blocking: a reader on an empty pipe can wait for the next message to arrive.
Let us examine the available packaged procedures and functions
DBMS_PIPE.PACK_MESSAGE(item); where item is the message that needs to be sent to the pipe.
This procedure places the message on the session's message buffer stack; it is not yet sent to the pipe. You can pack several messages on the buffer stack; each message should be stacked with a separate call to the PACK_MESSAGE procedure.
DBMS_PIPE.SEND_MESSAGE(pipe_name), where pipe_name is the name of the pipe that will be created in the shared pool area (part of the SGA) to carry the information being sent. This function transfers the session's message buffer stack to the pipe called pipe_name; the session's message buffer stack is cleared as a result. The function returns zero (0) if it executes without errors. After the pipe has been successfully populated, the receiving end can extract the information using the packaged function
DBMS_PIPE.RECEIVE_MESSAGE(pipe_name, timeout). This function reads the information off the pipe and transfers it to the receiving session's message buffer stack. The default value for timeout is 1000 days (a waiting state); a timeout of 0 allows for a read attempt without a wait state, a situation known as a non-blocking read. This function returns zero (0) if it executes without errors. After the information is transferred to the session's buffer stack, it can be read using the
DBMS_PIPE.UNPACK_MESSAGE(variable_name) procedure, where variable_name is a declared PL/SQL variable that will hold the intended information. Once unpacked, the message is removed from the buffer stack. Each call to this procedure reads one packed piece of information. If, for example, the transmitting session wants to send two messages on the pipe, it needs to call the PACK_MESSAGE procedure twice, once for each message, and then call the SEND_MESSAGE function once. The receiving session then needs to call the RECEIVE_MESSAGE function once, and call UNPACK_MESSAGE twice in order to read both messages distinctly.
Example:
Assume that you have a long running PL/SQL on one terminal and you want debugging messages to appear on another
terminal connected to the same database. This will allow operators and administrators to monitor the execution of this long
running PL/SQL program in an on-line manner (Note that this cannot be accomplished by DBMS_OUTPUT procedure
because this procedure writes its output to a buffer within the session and will only display the output after the PL/SQL
block terminates).
The following PL/SQL simulates a long running program
SQL> Set Serveroutput on
Declare
Status integer;
Begin
For I in 1 .. 20 Loop
DBMS_PIPE.PACK_MESSAGE(I);
For j in 1 .. 300000 loop
status := 3; -- any dummy code to burn time
End loop;
status := DBMS_PIPE.SEND_MESSAGE('test_pipe');
if status <> 0 then
dbms_output.put_line ('ERROR');
end if;
End loop;
End;
And on the receiving end, the following code will read the sent messages synchronously.
SQL> Set Serveroutput on
Declare
s integer;
out1 number;
Begin
for I in 1 ..20 loop
s := DBMS_PIPE.RECEIVE_MESSAGE ('test_pipe');
if s = 0 then
DBMS_PIPE.UNPACK_MESSAGE(out1);
DBMS_OUTPUT.PUT_LINE(out1);
end if;
End loop;
End;
If you send messages through pipes and the client process dies, information will be left in the pipe and will take up space in the shared pool area. You can identify such pipes using the following SQL statement:
SQL> Select KGLNAOBJ from X$KGLOB where KGLOBTYP=18 and KGLOBSTA=1;
Then use DBMS_PIPE.PURGE(pipe_name) to remove the leftover contents.
Notes: You noticed previously that a pipe is created automatically when the function DBMS_PIPE.SEND_MESSAGE is called. You can, however, explicitly create a pipe using DBMS_PIPE.CREATE_PIPE. The interesting thing about such a pipe is that it can be private. Private pipes are only accessible by sessions running with the same user id as the user that created the pipe.
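A minimal sketch of explicit pipe creation (the pipe name is an assumption); note that CREATE_PIPE is a function returning zero on success:
Declare
status integer;
Begin
-- maxpipesize is in bytes; private => TRUE restricts access to the creating user id
status := DBMS_PIPE.CREATE_PIPE('my_private_pipe', 8192, TRUE);
If status <> 0 then
DBMS_OUTPUT.PUT_LINE('Error creating pipe');
End if;
End;
/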

96.6. DBMS_SQL Built-in


If you are writing a PL/SQL block, then the following restrictions apply:
 You cannot execute DDL commands such as CREATE, DROP, etc.
 You cannot write DML commands whose column names and table names are not known until run time.
With Oracle7 Release 7.1, an extension to the functionality of PL/SQL programs was introduced that provides the possibility of using dynamic SQL statements within PL/SQL programs. This feature means that the full flexibility of SQL programming can be used within stored PL/SQL programs, allowing programmers to execute DDL statements and DML statements that are not fixed at compile time. The major disadvantage of this method is that it is far more demanding and complicated than static SQL.
Example:
Create a stored procedure that is capable of creating an index for the deptno field for the table whose name is passed as
a parameter to the procedure:
Create or replace procedure cr_index(table_name in char) is
cursor1 integer;
status integer;
begin
cursor1 := dbms_sql.open_cursor;
dbms_sql.parse (cursor1,'create index ind_one on '|| table_name ||
'(deptno)', dbms_sql.v7);
/* status := dbms_sql.execute(cursor1); no need to execute, since DDL
statements are executed at parse time */
dbms_sql.close_cursor(cursor1); -- release the cursor
end;
The previous example is a simple one involving CREATE INDEX (DDL). More realistic examples would create dynamic SQL that involves SELECT ... INTO-like statements. Note that in the previous example DBMS_SQL.EXECUTE need not be called, since all DDL statements (CREATE, DROP, etc.) are executed and committed automatically at parse time. Before we go into the details, some definitions are needed:
 Parse: The process of checking the statement's syntax and associating it with a cursor in your programs.
 Bind: The process of taking the values of your program's local variables at run-time and passing them to
ORACLE.
 Define Column: The process used to specify the variables that are to receive the SELECT values, much the same
way an INTO clause does for static query.
Example
It is required that you fetch the empno and ename columns from a table whose name will only be known at run time. The information is to be displayed on screen:
Solution:-
Create or Replace Procedure Get_Emp (table_name in varchar2) is
Cursor1 number; -- to hold cursor id
empno1 number;
ename1 varchar2(10);
result number;
Begin
Cursor1 := dbms_sql.open_cursor;
dbms_sql.parse(Cursor1,'SELECT empno,ename from '||table_name,dbms_sql.V7);
/* Since this is a query, we have to define columns. The define_column call is like the INTO
statement. The column to be fetched is identified by its relative position as it appears
in the select list.
Note: When the column definition is char or varchar2, one must also supply the width
of the column */
dbms_sql.define_column(Cursor1,1,empno1);
dbms_sql.define_column(Cursor1,2,ename1,10);
/* Execute the cursor and put the return value into the variable called result. The return
value is only meaningful for INSERT, UPDATE and DELETE, where it
indicates the number of records processed by the operation. */
result := dbms_sql.execute(Cursor1);
/* The records will be fetched using Fetch_Rows and then the procedure Column_Value will
be used to get the column values of the row. A loop is going to be used to get all records */
LOOP
If dbms_sql.fetch_rows(Cursor1) > 0 then
dbms_sql.column_value(Cursor1,1,empno1);
dbms_sql.column_value(Cursor1,2,ename1);
-- Send the fetched columns to standard output.
dbms_output.put_line(empno1 || ' ' || ename1);
Else
Exit; -- No more rows to fetch
End if;
End Loop;
dbms_sql.close_cursor(Cursor1);
End;
Remarks:
1. FETCH_ROWS tries to fetch a row from the cursor and the result is retrieved into a buffer. This row must be read
by COLUMN_VALUE for each column of the fetched row.
2. Trying to fetch rows after the last row in the cursor will give an OUT_OF_SEQUENCE error.
3. The DEFINE_COLUMN procedure is needed with SELECT cursors only. It is important to note that it takes 3
arguments for NUMBER and DATE variables and 4 arguments for CHAR or VARCHAR2 variables, the 4th
argument being the width of the CHAR or VARCHAR2 datatype. The previous example shows the usage.
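The Bind step defined above can also be illustrated with a short sketch (the SCOTT.EMP table and the :dno placeholder name are assumptions):
Declare
Cursor1 number;
result number;
Begin
Cursor1 := dbms_sql.open_cursor;
-- :dno is a placeholder whose value is supplied at run time rather than hard-coded
dbms_sql.parse(Cursor1,'UPDATE emp SET sal = sal * 1.1 WHERE deptno = :dno',dbms_sql.v7);
dbms_sql.bind_variable(Cursor1,':dno',20);
result := dbms_sql.execute(Cursor1); -- number of rows updated
dbms_output.put_line(result || ' rows updated');
dbms_sql.close_cursor(Cursor1);
End;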

96.7. DBMS_JOBS Built-In


This built-in package provides automatic scheduling and execution of user-written stored procedures at user-specified intervals using the Job Queue mechanism. Deferred execution of repetitive administrative operations, such as collection of storage utilization statistics, can thus be automated. Job queues provide application developers with a portable and convenient mechanism for scheduling database-related tasks.
The following init.ora parameters need to be set
job_queue_processes=2 # number of background processes to handle scheduled job
Job_queue_interval=60 # the processes will wake up every 60 seconds.
job_queue_keep_connections=TRUE
The main procedure in this package is DBMS_JOB.SUBMIT. This procedure takes the following arguments:
 JOB Out Binary Integer
 What In Varchar2
 Next_date In Date default sysdate
 Interval In Varchar2 default 'null'
 no_parse In Boolean default false
An explanation of the above arguments follows:
Job: The number of the current job. This is automatically assigned by the SUBMIT procedure and is a unique number that will identify your background job.
What: Is the PL/SQL procedure to be executed.
 Next_Date: Is the date at which the job will next be automatically executed.
 Interval: Is a date function. When the job is successfully executed, the interval date function is placed in the
Next_date and therefore, becomes the target date for the next execution of the job. If this argument is null or
evaluates to null, the job is executed only once, and then removed from the queue.
Examples of valid Interval settings are:
'SYSDATE+(1/24)' -- Executes every hour
'SYSDATE+3' -- Executes every three days
'NEXT_DAY(SYSDATE,''MONDAY'')' -- Executes every Monday

Example:
Assume that you have a stored procedure called proc_one which takes an argument arg1, and you want this procedure to
be executed every hour.
Solution:
Declare
job_no number;
Begin
DBMS_JOB.SUBMIT(job_no,'proc_one(''arg1''); ',sysdate,'sysdate+1/24');
-- Do not forget the semicolon
DBMS_OUTPUT.PUT_LINE(job_no); -- display job number
end;
You can delete a job from the queue by calling the procedure DBMS_JOB.REMOVE(job_no). This can be called
interactively from the SQL prompt, for example:
SQL> Execute DBMS_JOB.REMOVE(1) -- Will remove job number 1.
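A job can also be temporarily disabled rather than removed, using the BROKEN procedure (shown here as a sketch on job number 1); a broken job is skipped by the job queue until it is marked unbroken again:
SQL> Execute DBMS_JOB.BROKEN(1, TRUE) -- mark job 1 as broken (skipped)
SQL> Execute DBMS_JOB.BROKEN(1, FALSE) -- re-enable job 1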
The related data dictionary view is USER_JOBS:
SQL> select job,log_user,this_sec,next_sec,what from user_jobs;

JOB LOG_USER THIS_SEC NEXT_SEC WHAT
---- -------- -------- -------- ----------
1 SCOTT 13:50:40 13:58:47 proc1;
2 SCOTT 14:04:35 14:09:35 sal_raise;

97. Database Normalization


97.1. A layman’s Approach to Database Normalization
Application of the relational database model to a data set involves the removal of duplication. Removal of duplication is
performed using a process called normalization. Normalization is comprised of a set of rules called normal forms.
Normalization is applied to subsets of data or tables in a database.
Tables are for placing directly associated data into. Tables can be related or linked to each other through the use of index
identifiers. An index identifier identifies a row of data in a table much like an index is used in a book. The index is used to
locate an item of interest without having to read the whole book.
There are five levels or layers of normalization called first, second, third, fourth and fifth normal forms. Each normal form is
a refinement of the previous normal form. Fourth and fifth normal forms are rarely applied. In designing tables for
performance it is common practice to ignore the steps of normalization and jump directly to second normal form (2NF).
Third normal form (3NF) is often not applied either, unless many-to-many joins cause an absolute need for unique values
at the application level.
Normalization is for academics and in its strictest form is generally impractical due to its adverse effect on performance in
a commercial environment, especially 3NF, 4NF and 5NF. The simplest way to describe what normalization attempts to
achieve can be explained in three ways.
1. Divide the whole into smaller more manageable parts.
2. Removal of duplicated data into related subsets.
3. Linking of two indirectly related tables by the creation of a new table. The new table contains indexes from the
two indirectly related tables. This is commonly known as a many-to-many join.
These three points are meaningless without further explanation of normalization. Let’s review the rules and try to explain
them in a non-academic fashion. Let us start with some relational database buzzwords.
 A table contains many repetitions of the same row. A table defines the structure for a row. An example of a table is
a list of customer names and addresses.
 A row is a line of data. Many rows make up the data in a table. An example of a row is a single customer name
and address within a table of many customers. A row is also known as a record or a tuple.
 The structure of a row in a table is divided up into columns. Each column contains a single item of data such as a
name or address. A column can also be called a field or attribute.
 Referential integrity is a process of validation between related tables where references between different tables
are checked against each other. A primary key is placed on a parent or superset table as the primary identifier or
key to each row in the table. The primary key will always point to a single row only and it is unique within the table.
A foreign key is a copy of a primary key value in a subset table. An example of a function of referential integrity is
that it will not allow the deletion of a parent record while foreign key values referencing it still exist in a subset
table. Primary keys in this document are referred to as PK and foreign keys as FK. Note that both primary and foreign keys can
consist of more than one column. A key consisting of more than one column is known as a composite key.
 An index is used to gain fast access to a table and to enforce relationships between tables. An index allows direct
access to rows by duplicating a small part of each row to an additional (index) file. An index is a copy of the
contents of a small number of columns in a table. The most efficient indexes are made up of single columns
containing integers.
Primary and foreign keys are special types of indexes, applying referential integrity.

97.2. First Normal Form


First normal form removes repetition by creating one-to-many relationships. Data repeated many times in one table is
removed to a subset table. The subset table becomes the container for the removed repeating data. Each row in the
subset table will contain a single reference to each row in the original table. The original table will then contain only
non-duplicated data.
Figure 1 shows a 1NF transformation. The purchase order table on the left contains customer details, purchase order
details and descriptions of multiple items on the purchase order. Application of 1NF removes the multiple items from the
purchase order table by creating a one-to-many relationship between the purchase order and the purchase order item
tables. This has three benefits.
 Saves space.
 Reduces complexity.
 Ensures that every purchase order item will belong to a purchase order.
In Figure 1, the crows-foot pointing to the purchase order item table indicates that for a purchase order to exist, the
purchase order has to have at least one purchase order item. The line across the pointer to the purchase order table
signifies that at least one purchase order is required in this relationship. The crows-foot is used to denote an inter-entity
relationship.
Inter-entity relationships can be zero, one or many to zero, one or many.
The relationship shown in Figure 1 between the purchase order and purchase order item table is that of one-and-only-one
to one-of-many.

Figure 1: First Normal Form
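As a sketch in SQL (table and column names here are illustrative assumptions, not a prescribed design), the 1NF split of Figure 1 might be implemented as:
CREATE TABLE purchase_order (
order_id NUMBER PRIMARY KEY,
customer_name VARCHAR2(50),
order_date DATE);
CREATE TABLE purchase_order_item (
order_id NUMBER REFERENCES purchase_order(order_id),
item_no NUMBER,
description VARCHAR2(100),
PRIMARY KEY (order_id, item_no)); -- composite key: one row per item per order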

97.3. Second Normal Form


Second normal form (2NF) creates not one-to-many relationships but many-to-one relationships, effectively separating
static from dynamic information. Static information is potentially repeatable. This repeatable static information is moved
into separate tables. In Figure 2, the customer information is removed from the purchase order table. Customer
information can be duplicated for multiple purchase orders or have no purchase orders; thus the one-and-only-one to
zero-one-or-many relationship between customer and purchase order.

Figure 2: Second Normal Form

97.4. Third Normal Form


Third normal form is used to resolve many-to-many relationships into unique values. In Figure 3, a student can be enrolled
in many courses and a course can have many students enrolled. The point to note is that it is impossible to find a unique
course-student item without joining every student with every course; each unique item can only be identified by the
combination of the two values. Thus the course-student entity in Figure 3 is a many-to-many join resolution entity. In a commercial
environment it is very unlikely that an application will ever need to find this unique item, especially not a modern-day Java
object Web application where the tendency is to drill down through list collections rather than display individual items.
Many-to-many join resolutions should only be created when they are specifically required by the application. It can
sometimes be better to resolve these joins in the application to improve database performance.
Be very careful using 3NF and beyond.

Figure 3: Third Normal Form
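A sketch of the many-to-many resolution in SQL (names are illustrative assumptions):
CREATE TABLE student (
student_id NUMBER PRIMARY KEY,
name VARCHAR2(50));
CREATE TABLE course (
course_id NUMBER PRIMARY KEY,
title VARCHAR2(50));
-- The join resolution table: each row is one unique course-student enrollment
CREATE TABLE course_student (
course_id NUMBER REFERENCES course(course_id),
student_id NUMBER REFERENCES student(student_id),
PRIMARY KEY (course_id, student_id));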

97.5. Fourth Normal Form


Fourth normal form is intended to separate multi-valued facts in a single table into multiple tables. In Figure 4, employee
skill and certification lists are removed into separate entities. An employee could have skills or certifications, or both.

Figure 4: Fourth Normal Form

97.6. Fifth Normal Form


Fifth normal form divides related columns into separate tables based on those relationships. In Figure 5 product, manager
and employee are all related to each other. Thus three separate entities can be created to explicitly define those
interrelationships. The result is information that can be reconstructed from smaller parts.

Figure 5: Fifth Normal Form



97.7. A Summary of Normalization


 1NF removes repetition by creating one-to-many relationships.
 2NF creates not one-to-many relationships but many-to-one relationships, effectively separating static from
dynamic information. 2NF removes items from tables independent of the primary key.
 3NF is used to resolve many-to-many relationships into unique values. 3NF allows for uniqueness of information
by creation of additional many-to-many join resolution tables. These tables are rarely required in modern day
applications.
 4NF is intended to separate multivalued facts in a single table into multiple tables. 5NF divides related columns
into separate tables based on those relationships. 4NF and 5NF minimize nulls and composite primary keys by
removing null capable fields and subset composite primary key dependent fields to new tables. 4NF and 5NF are
rarely useful.

98. Installation of Oracle 11g on Red Hat Enterprise Linux 4


98.1. Installation Steps
 Install Red Hat Enterprise Linux 4 (kernel 2.6.9-5)
 Make sure you have enough disk space to install Oracle, preferably on a dedicated filesystem (other than the root filesystem)
 Also avoid placing the database engine and the database files on the same filesystem.
The preferred (minimum) requirements are:
/       → 1000 MB
/usr    → 7000 MB
/oraeng → 2000 MB
/var    → 1000 MB
/tmp    → 1000 MB
/disk1  → 2000 MB
/disk2  → 2000 MB
/disk3  → 2000 MB
swap    → 2 x RAM size

In the above example, /disk1, /disk2 and /disk3 are external disk subsystems. The reason for this layout is that if the
internal disk becomes corrupted, we can simply re-install Linux after replacing the drive and everything can function normally.
Also make sure your external drives are running on either RAID-1 or RAID-5, so that disk problems won't stop the
show.

Login as root and do the following things:


Create the directory structure to hold the software
# mkdir -p /oraeng/app/oracle/product/11.1.0
Create the oinstall and dba groups.
# groupadd -g 1000 oinstall
# groupadd -g 2000 dba
Create a user called "oracle11g" under whose account you'll be installing the software.
# useradd -u 1001 -g oinstall -G dba -d /oraeng/app/oracle/product/11.1.0 -m oracle11g
# passwd oracle11g
Changing password for user oracle11g.
New UNIX password:
Retype new UNIX password:
1. Change the owner of all the slices to "oracle11g"
# chown -R oracle11g:oinstall /oraeng
# chown -R oracle11g:dba /disk1 /disk2 /disk3
2. Set the SHMMAX & Semaphores
# cd /etc/rc.d/rc5.d
# vi S99kernel
echo 2147483648 > /proc/sys/kernel/shmmax
echo 4096 > /proc/sys/kernel/shmmni
echo 2097152 > /proc/sys/kernel/shmall
echo 65536 > /proc/sys/fs/file-max
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
echo 250 32000 100 128 > /proc/sys/kernel/sem
:wq
# chmod 755 S99kernel
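As an alternative (not part of the original procedure), the same kernel settings can be made persistent in /etc/sysctl.conf and applied with sysctl -p; the keys below map one-to-one to the /proc paths used above:
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
kernel.sem = 250 32000 100 128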
3. Append the following parameters to the /etc/sysctl.conf file
# vi /etc/sysctl.conf
g
Oracle 11 – Installation of 11g on Red Hat Enterprise Linux - 4 Page 150 of 242
WK: 6 - Day: 5.2
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
:wq
4. REBOOT the server to bring the changed values into effect.
# init 6

Now login as oracle11g user


Update your profile to suit your environment and do the following things.
$ vi .bash_profile
export ORACLE_SID=ORCL
export ORACLE_HOME=/oraeng/app/oracle/product/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/ucblib:/usr/openwin/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/ccs/bin:/usr/ucb/bin:$PATH:.
export CLASSPATH=$ORACLE_HOME/jlib

:wq
$ . .bash_profile
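A quick sanity check (not in the original procedure) that the profile took effect:
$ echo $ORACLE_HOME
/oraeng/app/oracle/product/11.1.0
$ echo $ORACLE_SID
ORCL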
5. Now login as oracle11g and start the installation process
$ startx
$ cd /mnt/cdrom
$ sh runInstaller

99. Upgradation to Oracle 10g


Oracle databases can be upgraded from one version/release to a higher version/release in order to:
 Use the new features
 Stay on a supported database version.
Oracle usually announces the de-support date for a DB version several months ahead so that we can plan and test
database upgradation.
This document provides a generic approach to upgrading to Oracle 10g from older versions without any specific reference
to the underlying operating system.
Oracle 10g has four upgrade options:
 Method 1 – Use the Database Upgrade Assistant (DBUA)
 Method 2 – Manual upgrade by running Oracle-supplied scripts
 Method 3 – Using the export/import utilities
 Method 4 – Using the SQL*Plus COPY/CREATE TABLE AS commands.
Before we upgrade a database using any of the above methods, we should understand the major steps in the upgrade
process which are outlined below.
Step 1: Prepare to Upgrade
 Become familiar with the features of the new oracle DB 10g release.
 Determine the upgrade path to the new oracle DB 10g release
 Choose an upgrade method
 Choose an oracle home directory for the new Oracle Database 10g release.
 Develop a Testing Plan.
Step 2: Test the Upgrade Process
Perform a test upgrade on a test DB. The test upgrade should be conducted in an environment created for testing and
should not interfere with the actual production DB.
Step 3: Test the Upgraded Test Database
 Perform tests on the original test DB and on the test DB that was upgraded.
 Compare results, noting anomalies between test results on the test DB and on the upgraded DB.
 Investigate ways to correct any anomalies and then implement the corrections, until the test upgrade is
completely successful and works with any required applications.
Step 4: Prepare and Preserve the Production Database
 Prepare the current production DB as appropriate to ensure that the upgrade to the new Oracle Database 10g
release will be successful.
 Schedule the downtime required for backing up and upgrading the production DB.
 Perform a full backup of the current production DB.
Step 5: Upgrade the Production Database
 Upgrade the production DB to the new Oracle 10g release.
 After the upgrade, perform a full backup of the production DB and perform the post upgrade tasks.
Step 6: Tune and Adjust the New Production Database
 The new Oracle production DB should perform as well as, or better than, the DB prior to the upgrade.
 Determine which features of the new release we want to use and update our application accordingly.
 Develop new DB administration procedures as needed.

Step 1: Prepare to upgrade → Step 2: Test the upgrade process → Step 3: Test the upgraded test DB →
Step 4: Prepare & preserve the production DB → Step 5: Upgrade the production DB → Step 6: Tune & adjust the new production DB

99.1. Validating the Database before Upgrade:


Oracle 10g provides a utility script, utlu102i.sql, to perform pre-upgrade validation on the DB to be upgraded. We can find
the script in the administration scripts directory ($ORACLE_HOME/rdbms/admin). The utlu102i.sql script needs to be run
as SYSDBA before we plan on performing a manual upgrade. It is preferred to copy this script to a temporary folder and
run it after spooling the output to a file. We must run this script on the DB to be upgraded. The DBUA automatically runs
the script as part of the upgrade process.
The script performs the following tasks:
 Checks DB Compatibility
 Verifies the redo log file size is at least 4 MB
 Estimates the time for upgrade
 Checks for obsolete & deprecated parameters
 Finds all the components installed
 Finds the default tablespace for each DB component schema.
 Checks the installed DB options
 Checks that the DB character set and national character set are supported in Oracle 10g.
Once the script has finished executing, open the spooled file and check for the corrections to be made before
actually performing the upgradation. The spooled file gives various information, for example whether the current redo log file
sizes are adequate for the upgradation, the sizes of the various SGA memory components, and much more.
Note: The minimum redo log file size in Oracle DB 10g is 4 MB.
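A minimal sketch of running the validation script with spooled output (the spool file name is illustrative):
$ sqlplus '/ as sysdba'
SQL> SPOOL /tmp/preupgrade.log
SQL> @$ORACLE_HOME/rdbms/admin/utlu102i.sql
SQL> SPOOL OFF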

99.2. Performing the Upgrade


Method 1: Using DBUA
We can perform a direct upgrade of an Oracle DB to Oracle 10g by using Oracle's GUI interface, the DBUA. The DBUA will
be invoked by the OUI when installing the Oracle 10g software if it finds any existing Oracle DB in /etc/oratab (in the case
of Linux) or in the Windows registry (in the case of the Windows platform). The DBUA can also be invoked as a stand-alone tool after
the installation of the Oracle 10g software.
On UNIX platforms, we can invoke the DBUA by using the command 'dbua'. On the Windows platform it can be invoked by
choosing
Start → Program Files → Oracle → Configuration and Migration Tools → Database Upgrade Assistant.
The upgrade process is automated by DBUA, including the preupgrade steps. The following are some of the DBUA
features and their advantages:
 Proceeds with the upgrade only if the selected database release is supported for direct upgrade.
 Runs the pre-upgrade validation and identifies the options to be upgraded. It performs the necessary adjustments.
 Checks disk space & tablespace requirements.
 Updates obsolete initialization parameters.
 Includes an option to back up the database prior to upgrade.
 Shows upgrade progress & writes detailed trace and log files.
 Disables archiving of the database during the upgrade.
 Includes an option to recompile invalid objects after the upgrade.
 Shows a summary page prior to the upgrade and after the upgrade.
 Is able to upgrade all nodes of a database in RAC.
 Removes the database entry from the listener.ora file of the old database and adds it to the listener.ora of the new
database.
Note: We can also invoke the DBUA in command-line mode. We can specify several parameters with the dbua command;
'dbua -h' shows the help information.
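For illustration only (the silent-mode flags shown here are an assumption; verify them with dbua -h on your release):
$ dbua -silent -dbName ORCL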
Method 2: Manual Upgrade (using scripts)
We can manually upgrade the Database by running scripts using the SQL*Plus Utility. Though manual upgrade provides
us with more control, the process is error prone, involves more work, and could take more time.
Oracle 10g supports the direct upgrade of database from the following releases:
 Oracle 8 Release 8.0.6
 Oracle 8i Release 8.1.7
 Oracle 9i Release 9.0.1
 Oracle 9i Release 9.2.0
If a direct upgrade is not supported from the release number of our DB, then we must first upgrade our Database to an
intermediate oracle release. The database then can be upgraded from this intermediate release to the new oracle DB 10g
release.
For example, if our current release is 8.1.6, then we need to first upgrade to release 8.1.7 using the instructions in oracle
8i migration for release 8.1.7. The release 8.1.7 DB can be upgraded to the new oracle DB 10g.
To manually upgrade the DB, follow these steps:
 Install the Oracle 10g release software only, into a new ORACLE_HOME location. Do not select the 'create database'
option during installation.
 Using a cold backup of the existing DB (which is of an older release), connect to the database and run the script
utlu102i.sql to determine the pre-upgrade tasks to be completed. (Please refer to the section 'Validating the Database
before Upgrade' of this document.)
 Resize the redo log files if they are smaller than 4 MB.
 Adjust the size of the tablespaces where the dictionary objects are stored.
 Perform a cold backup of the database.
 Shutdown the database with any of the graceful methods.
 Copy the parameter file (init.ora or spfile) and password file from the old Oracle home directory to the Oracle 10g
home directory.
 Adjust the init.ora file by setting the COMPATIBLE parameter to 10.2.0 and also adjust the SGA memory
components to the minimum required values.
 Make sure all the environment variables are set to correctly reference the Oracle 10g home.
 Using SQL*Plus, connect to the DB with the SYSDBA privilege and start the instance in STARTUP
UPGRADE mode.
$ sqlplus '/ as sysdba'

SQL> STARTUP UPGRADE


 Create the SYSAUX tablespace with the following attributes.
o ONLINE
o PERMANENT
o READ WRITE
o EXTENT MANAGEMENT LOCAL
o SEGMENT SPACE MANAGEMENT AUTO
The syntax could be as follows:
SQL> CREATE TABLESPACE sysaux
DATAFILE '/disk1/oradata/ORCL/sysaux01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
 Run the upgrade script from $ORACLE_HOME/rdbms/admin. Based on the version of the old DB, the name
of the upgrade script varies.

Refer to the following table to choose the correct script:

Old Release    Run Script
Oracle 8.0.6   u0800060.sql
Oracle 8.1.7   u0801070.sql
Oracle 9.0.1   u0900010.sql
Oracle 9.2.0   u0902000.sql

If we get any errors while the upgrade script executes, re-execute the script after fixing the error.
For example, to upgrade an Oracle 9.2.0 DB to Oracle 10g we must run u0902000.sql:
SQL> SPOOL upgrade.log
SQL> @$ORACLE_HOME/rdbms/admin/u0902000.sql
SQL> SPOOL OFF
 Check the spool file & verify that the packages and procedures are compiled successfully. Correct any problems
we find in this file and rerun the appropriate upgrade script if necessary.
 Run the post-upgrade status script utlu102s.sql, specifying the TEXT option, to see if all the components were
upgraded successfully.
SQL> @$ORACLE_HOME/rdbms/admin/utlu102s.sql TEXT
 Shutdown the DB & Startup
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
 If we encounter a message listing obsolete init parameters when we start the DB, then remove the
obsolete parameters from the parameter file.
 Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
 Verify that all packages are valid:
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
 Our DB is now upgraded to the new oracle DB 10g release.

Post Upgrade Tasks:


 Backup the Database
 Change the password for oracle supplied Database accounts like DBSNMP, OUTLN
 Migrate the pfile to spfile (if needed)
 Modify the listener configuration file to point to the new ORACLE_HOME.
Method 3: Using EXP/ IMP Utilities:
Upgrading a Database using the export/import method has the following advantages and disadvantages:
g
Oracle 11 – Upgradation to Oracle 10g Page 163 of 242
WK: 6 - Day: 5.2
 How long the upgrade process takes depends on the size of the DB
 For smaller DB’s in non-upgrade-supported releases like 8.0.6, 8.1.5, or 8.1.6, it may be faster to perform an
exp/imp rather than going through two upgrade processes.
 A new DB for oracle 10g needs to be created, so we need to double the amount of disk space required.
 Gives an opportunity to create tablespace using the new features of oracle 10g.
 The import process can defragment data that would improve performance.
Use the following steps to upgrade:
 Take a full DB export of the existing DB (Older versions) using command;
$ exp system/<password> file=full.dmp log=full.log full=y
Use other options of the exp utility as appropriate.
 Create a new 10g database with minimum tablespaces. Use the import utility to import the definitions and data from
the dump file created in step 1.
$ imp system/<password> file=full.dmp log=upgrade.log full=y
Use other options of the imp utility as appropriate.

Here, while importing, we need to take care that we have configured the DB_nK_CACHE_SIZE init.ora
parameters for the non-default block sizes. Otherwise, tablespace creation fails if the older-version database
has tablespaces with 2K, 4K, 16K or 32K block sizes, as the default block size from Oracle 10g is 8K.
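A sketch of such settings in init.ora (the cache sizes are illustrative):
db_block_size=8192
db_2k_cache_size=16M
db_4k_cache_size=16M
db_16k_cache_size=16M
db_32k_cache_size=16M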
 Check the log file after the import process is over and, if we come across any errors, correct the error and
import that specific object only.

Alternate Method:
Use this method if we don't wish to go with the full DB export: perform an export of all objects of the application, as SYSTEM,
using the OWNER option of the export utility.
$ exp system/<password> file=app.dmp log=app.log owner=app1
This has to be performed for all the schemas in case the DB is supporting more than one application. Once we are done
with the export of all the schemas of the older-version DB, create a new Oracle 10g DB with minimum requirements. Create
the tablespaces and users that are present in the older version of the DB manually. Now use the above-created dump files to
import back into the appropriate schemas.
Note: Here we need to take care of the dependencies between the schemas, if any, and resolve them as per the business
requirements.
Method 4: Upgrade using COPY / CREATE TABLE AS Commands
We can copy data from one Oracle DB to another using database links. For example, we can create new tables and fill
them with data by using the INSERT INTO statement and the CREATE TABLE AS statement. Alternatively, we can
also use the COPY command.
Copying data and Export/Import offer the same advantages for upgrading. Using either method we can defragment data
files and restructure the DB by creating new tablespaces or modifying existing tables or tablespaces. In addition, we can
copy only specified DB objects or users.
Copying data, however, unlike Export/Import, enables the selection of specific rows of tables to be placed into the new DB.
Copying data is a good method for copying only part of a database table. In contrast, using Export/Import, we can copy
only entire tables.
For more information on the COPY and CREATE TABLE AS commands, refer to the Oracle Database SQL Reference.
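A minimal sketch of both approaches (the database link old9i and the tables are illustrative):
SQL> CREATE TABLE emp AS
SELECT * FROM scott.emp@old9i;
SQL> COPY FROM scott/tiger@old9i -
CREATE emp_copy USING SELECT * FROM emp WHERE deptno = 10;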

100. Dynamic Performance Views


Dynamic performance views are identified by the prefix V_$. Public synonyms for these views have the prefix V$.
Database administrators and users should only access the V$ objects, not the V_$ objects. The dynamic performance
views are used by Enterprise Manager and Oracle Trace, and they are the primary interface for accessing information about
system performance. Once the instance is started, the V$ views that read from memory are accessible. Views that read
data from disk require that the database be mounted.
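For example, once the instance is started we can query memory-based views such as V$INSTANCE and V$SGA (both described below):
SQL> SELECT instance_name, status FROM v$instance;
SQL> SELECT * FROM v$sga;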

100.1. Instance
V$BGPROCESS This view describes the background processes.
V$BH This is a Parallel Server view. This view gives the status and number of pings
for every buffer in the SGA.
V$BUFFER_POOL This view displays information about all buffer pools available for the instance.
The "sets" pertain to the number of LRU latch sets.
V$BUFFER_POOL_STATISTICS This view displays statistics about all buffer pools available for the instance.
V$INSTANCE This view displays the state of the current instance. This version of
V$INSTANCE is not compatible with earlier versions of V$INSTANCE.
V$SGA This view contains summary information on the System Global Area.
V$SGASTAT This view contains detailed information on the System Global Area.

100.2. Archivelog Management Views


V$ARCHIVE This view contains information on redo log files in need of archiving. Each row
provides information for one thread. This information is also available in V$LOG.
Oracle recommends that you use V$LOG.
V$ARCHIVE_DEST This view describes, for the current instance, all the archive log destinations, their
current value, mode, and status.
V$ARCHIVED_LOG This view displays archived log information from the controlfile including archive log
names. An archive log record is inserted after the online redo log is successfully
archived or cleared (name column is NULL if the log was cleared). If the log is
archived twice, there will be two archived log records with the same THREAD#,
SEQUENCE#, and FIRST_CHANGE#, but with a different name. An archive log
record is also inserted when an archive log is restored from a backup set or a copy.
V$ARCHIVE_PROCESSES This view provides information about the state of the various ARCH processes for
the instance.
V$PROXY_ARCHIVEDLOG This view contains descriptions of archived log backups which are taken with a
new feature called Proxy Copy. Each row represents a backup of one archived log.

100.3. Control File Views


V$CONTROLFILE This view lists the names of the control files.
V$CONTROLFILE_RECORD_SECTION This view displays information about the controlfile record sections.

100.4. Redolog File Views


V$LOG This view contains log file information from the control files.
V$LOGFILE This view contains information about redo log files.
V$LOGHIST This view contains log history information from the control file. This view is retained for historical
compatibility. Use of V$LOG_HISTORY is recommended instead.

100.5. Datafile Views


V$DATAFILE This view contains datafile information from the control file.

V$DATAFILE_HEADER This view displays datafile information from the datafile headers.

V$DBFILE This view lists all datafiles making up the database. This view is retained for historical
compatibility. Use of V$DATAFILE is recommended instead.

V$PROXY_DATAFILE This view contains descriptions of datafile and controlfile backups which are taken with
a new feature called Proxy Copy. Each row represents a backup of one database file.

100.6. User Management Views


V$ENABLEDPRIVS This view displays which privileges are enabled. These privileges
can be found in the table SYS.SYSTEM_PRIVILEGES_MAP.

V$PWFILE_USERS This view lists users who have been granted SYSDBA and
SYSOPER privileges as derived from the password file.

V$RESOURCE This view contains resource name and address information.

V$RESOURCE_LIMIT This view displays information about global resource use for some
of the system resources. Use this view to monitor the consumption
of resources so that you can take corrective action, if necessary.

V$ROLLNAME This view lists the names of all online rollback segments. It can
only be accessed when the database is open.

V$ROLLSTAT This view contains rollback segment statistics.

V$RSRC_CONSUMER_GROUP This view displays data related to the currently active resource
consumer groups.

V$RSRC_CONSUMER_GROUP_CPU_MTH This view shows all available resource allocation methods for
resource consumer groups.

V$RSRC_PLAN This view displays the names of all currently active resource plans.

V$RSRC_PLAN_CPU_MTH This view shows all available CPU resource allocation methods for
resource plans.

100.7. Multi-Threaded Server Views


V$CIRCUIT This view contains information about virtual circuits, which are user connections to the
database through dispatchers and servers.

V$DISPATCHER This view provides information on the dispatcher processes.

V$DISPATCHER_RATE This view provides rate statistics for the dispatcher processes.

V$MTS This view contains information for tuning the multi-threaded server.

V$QUEUE This view contains information on the multi-thread message queues.

V$REQDIST This view lists statistics for the histogram of MTS dispatcher request times, divided into
12 buckets, or ranges of time. The time ranges grow exponentially as a function of the
bucket number.

V$SHARED_SERVER This view contains information on the shared server processes.

100.8. Backups & Recovery


V$BACKUP This view displays the backup status of all online datafiles.

V$OFFLINE_RANGE This view displays datafile offline information from the controlfile. Note that the last
offline range of each datafile is kept in the DATAFILE record.

V$RECOVERY_LOG This view lists information about archived logs that are needed to complete media
recovery. This information is derived from the log history view, V$LOG_HISTORY.

V$RECOVERY_PROGRESS V$RECOVERY_PROGRESS can be used to track database recovery operations


to ensure that they are not stalled, and also to estimate the time required to
complete the operation in progress.
V$RECOVERY_PROGRESS is a subview of V$SESSION_LONGOPS

V$INSTANCE_RECOVERY This view is used to monitor the mechanisms that implement the user-specifiable
limit on recovery reads.

V$RECOVERY_STATUS V$RECOVERY_STATUS contains statistics of the current recovery process. This


view contains useful information only for the Oracle process doing the recovery.
When Recovery Manager directs a server process to perform recovery, only
Recovery Manager is able to view the relevant information in this view.
V$RECOVERY_STATUS will be empty to all other Oracle users.

100.9. Real Application Cluster Views


V$ACTIVE_INSTANCES This view maps instance names to instance numbers for all instances that have the
database currently mounted.

V$THREAD This view contains thread information from the control file.

100.10. Recovery Manager Views


V$BACKUP_DATAFILE This view displays backup datafile and backup controlfile information from the
controlfile.

V$BACKUP_DEVICE This view displays information about supported backup devices. If a device type does
not support named devices, then one row with the device type and a null device
name is returned for that device type. If a device type supports named devices then
one row is returned for each available device of that type. The special device type
DISK is not returned by this view because it is always available.

V$BACKUP_PIECE This view displays information about backup pieces from the controlfile. Each backup
set consists of one or more backup pieces.

V$BACKUP_REDOLOG This view displays information about archived logs in backup sets from the controlfile.
Note that online redo logs cannot be backed up directly; they must be archived first to disk
and then backed up. An archive log backup set can contain one or more archived
logs.

V$BACKUP_SET This view displays backup set information from the controlfile. A backup set record is
inserted after the backup set is successfully completed.

V$BACKUP_SYNC_IO This view displays performance information about ongoing and recently
completed synchronous backups and restores.

V$COPY_CORRUPTION This view displays information about datafile copy corruptions from the controlfile.

V$DATAFILE_COPY This view displays datafile copy information from the controlfile.

V$DB_PIPES This view displays the pipes that are currently in this database.

V$DELETED_OBJECT This view displays information about deleted archived logs, datafile copies and
backup pieces from the controlfile. The only purpose of this view is to optimize the
recovery catalog resync operation. When an archived log, datafile copy, or backup
piece is deleted, the corresponding record is marked deleted.

100.11. RMAN Backups & Recovery Views


V$BACKUP_ASYNC_IO This view displays performance information about ongoing and recently
completed asynchronous backups and restores.

V$BACKUP_CORRUPTION This view displays information about corruptions in datafile backups from the
controlfile. Note that corruptions are not tolerated in the controlfile and archived
log backups.

V$RECOVER_FILE This view displays the status of files needing media recovery.

V$RECOVERY_FILE_STATUS V$RECOVERY_FILE_STATUS contains one row for each datafile for each
RECOVER command. This view contains useful information only for the Oracle
process doing the recovery. When Recovery Manager directs a server process
to perform recovery, only Recovery Manager is able to view the relevant
information in this view. V$RECOVERY_FILE_STATUS will be empty to all
other Oracle users.

100.12. Network Views


V$DBLINK This view describes all database links (links with IN_TRANSACTION = YES) opened by the
session issuing the query on V$DBLINK. These database links must be committed or rolled
back before being closed.

100.13. Database Views


V$DATABASE This view contains database information from the control file.

100.14. Calling Dynamic Views


V$FIXED_TABLE This view displays all dynamic performance tables, views, and derived tables in
the database. Some V$ tables (for example, V$ROLLNAME) refer to real tables
and are therefore not listed.

V$FIXED_VIEW_DEFINITION This view contains the definitions of all the fixed views (views beginning with
V$). Use this table with caution. Oracle tries to keep the behavior of fixed views
the same from release to release, but the definitions of the fixed views can
change without notice. Use these definitions to optimize your queries by using
indexed columns of the dynamic performance tables.
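For example, to inspect how a particular fixed view is defined:
SQL> SELECT view_definition
FROM v$fixed_view_definition
WHERE view_name = 'V$INSTANCE';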

100.15. Replication Views


V$GLOBAL_TRANSACTION This view displays information on the currently active global transactions.

V$TRANSACTION This view lists the active transactions in the system.

V$TRANSACTION_ENQUEUE V$TRANSACTION_ENQUEUE displays locks owned by transaction state


objects.

100.16. Base Table Index Views


V$INDEXED_FIXED_COLUMN This view displays the columns in dynamic performance tables that are indexed
(X$ tables). The X$ tables can change without notice. Use this view only to write
queries against fixed views (V$ views) more efficiently.

100.17. SQL Loader Views


V$LOADCSTAT This view contains SQL*Loader statistics compiled during the execution of a direct load. These
statistics apply to the whole load. Any SELECT against this table results in "no rows returned"
since you cannot load data and do a query at the same time.

V$LOADTSTAT SQL*Loader statistics compiled during the execution of a direct load. These statistics apply to
the current table. Any SELECT against this table results in "no rows returned" since you
cannot load data and do a query at the same time.

100.18. Logminer
V$LOGMNR_CONTENTS This view contains the redo log contents (including reconstructed SQL) for the current LogMiner session.

V$LOGMNR_DICTIONARY This view contains information about the LogMiner dictionary file.

V$LOGMNR_LOGS This view lists the log files registered for the current LogMiner session.

V$LOGMNR_PARAMETERS This view contains the parameters of the current LogMiner session.

V$LOG_HISTORY This view contains log history information from the control file.

100.19. NLS Views


V$NLS_PARAMETERS This view contains current values of NLS parameters.

V$NLS_VALID_VALUES This view lists all valid values for NLS parameters.

100.20. PL/SQL Views


V$OBJECT_DEPENDENCY This view can be used to determine what objects are depended on by a package,
procedure, or cursor that is currently loaded in the shared pool. For example,
together with V$SESSION and V$SQL, it can be used to determine which tables
are used in the SQL statement that a user is currently executing

V$RESERVED_WORDS This view gives a list of all the keywords that are used by the PL/SQL compiler.
This view helps developers to determine whether a word is already being used as
a keyword in the language.

100.21. Statspack Views


V$MYSTAT This view contains statistics on the current session.

100.22. Parameters Views


V$OBSOLETE_PARAMETER This view lists obsolete parameters. If any value is true, you should examine
why.

V$PARAMETER This view lists information about initialization parameters.

100.23. Session Information Views


V$SESSION This view lists session information for each current session.

V$SESSION_CONNECT_INFO This view displays information about network connections for the current
session.

V$SESSION_CURSOR_CACHE This view displays information on cursor usage for the current session. The
V$SESSION_CURSOR_CACHE view is not a measure of the effectiveness of
the SESSION_CACHED_CURSORS initialization parameter.

V$SESSION_EVENT This view lists information on waits for an event by a session. Note that the
TIME_WAITED and AVERAGE_WAIT columns will contain a value of zero on
those platforms that do not support a fast timing mechanism. If you are
running on one of these platforms and you want this column to reflect true wait
times, you must set TIMED_STATISTICS to TRUE in the parameter file.
Please remember that doing this will have a small negative effect on system
performance.

V$SESSION_LONGOPS This view displays the status of certain long-running operations. It provides
progression reports on operations using the columns SOFAR and
TOTALWORK. For example, the operational status for the following
components can be monitored:
hash cluster creations
backup operations
recovery operations

V$SESSION_OBJECT_CACHE This view displays object cache statistics for the current user session on the
local server (instance).

V$SESSION_WAIT This view lists the resources or events for which active sessions are waiting.

V$SESSTAT This view lists user session statistics. To find the name of the statistic
associated with each statistic number (STATISTIC#)

V$SESS_IO This view lists I/O statistics for each user session.

100.24. Tablespace
V$TABLESPACE This view displays tablespace information from the controlfile.

100.25. Temporary File Views


V$TEMPFILE This view displays tempfile information.

100.26. Performance Tuning Views


V$ACCESS This view displays objects in the database that are currently locked and the
sessions that are accessing them.

V$CACHE This is a Parallel Server view. This view contains information from the block
header of each block in the SGA of the current instance as related to
particular database objects.

V$CACHE_LOCK This is a Parallel Server view.


V$CACHE_LOCK is similar to V$CACHE, except for the platform-specific
lock manager identifiers. This information may be useful if the platform-
specific lock manager provides tools for monitoring the PCM lock
operations that are occurring. For example, first query to find the lock
element address using INDX and CLASS, then query V$BH to find the
buffers that are covered by the lock.

V$DB_OBJECT_CACHE This view displays database objects that are cached in the library cache.
Objects include tables, indexes, clusters, synonym definitions, PL/SQL
procedures and packages, and triggers.

V$DLM_ALL_LOCKS This is a Parallel Server view. V$DLM_ALL_LOCKS lists information of all


locks currently known to lock manager that are being blocked or blocking
others.

V$DLM_LATCH V$DLM_LATCH is obsolete.

V$DLM_LOCKS This is a Parallel Server view. V$DLM_LOCKS lists information of all locks
currently known to lock manager that are being blocked or blocking others.

V$DLM_MISC V$DLM_MISC displays miscellaneous DLM statistics.

V$DLM_RESS V$DLM_RESS is a Parallel Server view. It displays information of all


resources currently known to the lock manager.

V$ENQUEUE_LOCK This view displays all locks owned by enqueue state objects. The columns
in this view are identical to the columns in V$LOCK.

V$EVENT_NAME This view contains information about wait events.

V$EXECUTION This view displays information on parallel execution.

V$FAST_START_SERVERS V$FAST_START_SERVERS provides information about all the recovery


slaves performing parallel transaction recovery.

V$FAST_START_TRANSACTIONS V$FAST_START_TRANSACTIONS contains information about the


progress of the transactions that Oracle is recovering.

V$FILE_PING The view V$FILE_PING displays the number of blocks pinged per datafile.
This information in turn can be used to determine access patterns to
existing datafiles and deciding new mappings from datafile blocks to PCM
locks.

V$FILESTAT This view contains information about file read/write statistics.

V$GLOBAL_BLOCKED_LOCKS This view displays global blocked locks.

V$LATCH This view lists statistics for non-parent latches and summary statistics for
parent latches. That is, the statistics for a parent latch include counts from
each of its children.

V$LATCHHOLDER This view contains information about the current latch holders.

V$LATCHNAME This view contains information about decoded latch names for the latches
shown in V$LATCH. The rows of V$LATCHNAME have a one-to-one
correspondence to the rows of V$LATCH.

V$LATCH_CHILDREN This view contains statistics about child latches. This view includes all
columns of V$LATCH plus the CHILD# column. Note that child latches
have the same parent if their LATCH# columns match each other.

V$LATCH_MISSES This view contains statistics about missed attempts to acquire a latch.

V$LATCH_PARENT This view contains statistics about the parent latch. The columns of
V$LATCH_PARENT are identical to those in V$LATCH.

V$LIBRARYCACHE This view contains statistics about library cache performance and activity.

V$LOCK This view lists the locks currently held by the Oracle server and outstanding
requests for a lock or latch.

V$LOCK_ACTIVITY This is a Parallel Server view. V$LOCK_ACTIVITY displays the DLM lock
operation activity of the current instance. Each row corresponds to a type of
lock operation.

V$LOCK_ELEMENT This is a Parallel Server view. There is one entry in v$LOCK_ELEMENT for
each PCM lock that is used by the buffer cache. The name of the PCM lock
that corresponds to a lock element is {'BL', indx, class}.

V$LOCKED_OBJECT This view lists all locks acquired by every transaction on the system.

V$LOCKS_WITH_COLLISIONS This is a Parallel Server view. Use this view to find the locks that protect
multiple buffers, each of which has been either force-written or force-read
at least 10 times. It is very likely that those buffers are experiencing false
pings due to being mapped to the same lock.

V$OPEN_CURSOR This view lists cursors that each user session currently has opened and
parsed.

V$PING This is a Parallel Server view. The V$PING view is identical to the
V$CACHE view but only displays blocks that have been pinged at least
once. This view contains information from the block header of each block in
the SGA of the current instance as related to particular database objects.

V$PQ_SESSTAT This view lists session statistics for parallel queries.

V$PQ_SLAVE This view lists statistics for each of the active parallel execution servers on
an instance. This view will be replaced/obsoleted in a future release by a
new view called V$PX_PROCESS.

V$PROCESS This view contains information about the currently active processes. While
the LATCHWAIT column indicates what latch a process is waiting for, the
LATCHSPIN column indicates what latch a process is spinning on. On
multi-processor machines, Oracle processes will spin on a latch before
waiting on it.

V$ROWCACHE This view displays statistics for data dictionary activity. Each row contains
statistics for one data dictionary cache.

V$ROWCACHE_PARENT This view displays information for parent objects in the data dictionary.
There is one row per lock owner, and one waiter for each object. This row
shows the mode held or requested. For objects with no owners or waiters,
a single row is displayed.

V$ROWCACHE_SUBORDINATE This view displays information for subordinate objects in the data
dictionary.

V$SHARED_POOL_RESERVED This fixed view lists statistics that help you tune the reserved pool and
space within the shared pool. The following columns of
V$SHARED_POOL_RESERVED are valid only if the initialization
parameter shared_pool_reserved_size is set to a valid value.

V$SORT_SEGMENT This view contains information about every sort segment in a given
instance. The view is only updated when the tablespace is of the
TEMPORARY type.

V$SORT_USAGE This view describes sort usage.

V$SQL This view lists statistics on shared SQL area without the GROUP BY clause
and contains one row for each child of the original SQL text entered.

V$SQL_BIND_DATA This view displays the actual bind data sent by the client for each distinct
bind variable in each cursor owned by the session querying this view if the
data is available in the server.

V$SQL_BIND_METADATA This view displays bind metadata provided by the client for each distinct
bind variable in each cursor owned by the session querying this view.

V$SQL_CURSOR This view displays debugging information for each cursor associated with
the session querying this view.

V$SQL_SHARED_MEMORY This view displays information about the cursor shared memory snapshot.
Each SQL statement stored in the shared pool has one or more child
objects associated with it. Each child object has a number of parts, one of
which is the context heap, which holds, among other things, the query plan.

V$SQLAREA This view lists statistics on shared SQL area and contains one row per SQL
string. It provides statistics on SQL statements that are in memory, parsed,
and ready for execution.

V$SQLTEXT This view contains the text of SQL statements belonging to shared SQL
cursors in the SGA.

V$SQLTEXT_WITH_NEWLINES This view is identical to the V$SQLTEXT view except that, to improve
legibility, V$SQLTEXT_WITH_NEWLINES does not replace newlines and
tabs in the SQL statement with spaces.

V$STATNAME This view displays decoded statistic names for the statistics shown in the
V$SESSTAT and V$SYSSTAT tables.

V$SUBCACHE This view displays information about the subordinate caches currently
loaded into library cache memory. The view walks through the library
cache, printing out a row for each loaded subordinate cache per library
cache object.

V$SYSSTAT This view lists system statistics. To find the name of the statistic associated
with each statistic number (STATISTIC#)

V$SYSTEM_CURSOR_CACHE This view displays similar information to the


V$SESSION_CURSOR_CACHE view except that this information is
system wide.

V$SYSTEM_EVENT This view contains information on total waits for an event. Note that the
TIME_WAITED and AVERAGE_WAIT columns will contain a value of zero
on those platforms that do not support a fast timing mechanism. If you are
running on one of these platforms and you want this column to reflect true
wait times, you must set TIMED_STATISTICS to TRUE in the parameter
file. Please remember that doing this will have a small negative effect on
system performance.

V$SYSTEM_PARAMETER This view contains information on system parameters.

V$WAITSTAT This view lists block contention statistics. This table is only updated when
timed statistics are enabled.

Other Views
V$AQ This view describes statistics for the queues in the database.

V$CLASS_PING V$CLASS_PING displays the number of blocks pinged per block class.
Use this view to compare contentions for blocks in different classes.
V$COMPATIBILITY This view displays features in use by the database instance that may
prevent downgrading to a previous release. This is the dynamic (SGA)
version of this information, and may not reflect features that other
instances have used, and may include temporary incompatibilities (like
UNDO segments) that will not exist after the database is shut down
cleanly.
V$COMPATSEG This view lists the permanent features in use by the database that will
prevent moving back to an earlier release.
V$CONTEXT This view lists set attributes in the current session.

V$DLM_CONVERT_LOCAL V$DLM_CONVERT_LOCAL displays the elapsed time for the local lock
conversion operation.
V$DLM_CONVERT_REMOTE V$DLM_CONVERT_REMOTE displays the elapsed time for the remote
lock conversion operation.
V$FALSE_PING V$FALSE_PING is a Parallel Server view. This view displays buffers that
may be getting false pings. That is, buffers pinged more than 10 times
that are protected by the same lock as another buffer that pinged more
than 10 times. Buffers identified as getting false pings can be remapped in
"GC_FILES_TO_LOCKS" to reduce lock collisions.
V$HS_AGENT This view identifies the set of HS agents currently running on a given host,
using one row per agent process.
V$HS_SESSION This view identifies the set of HS sessions currently open for the Oracle
Server.
V$LICENSE This view contains information about license limits.
V$MLS_PARAMETERS This is a Trusted Oracle Server view that lists Trusted Oracle Server-
specific initialization parameters. For more information, see your Trusted
Oracle documentation.
V$OPTION This view lists options that are installed with the Oracle Server.
V$PARALLEL_DEGREE_LIMIT_MTH This view displays all available parallel degree limit resource allocation
methods.
V$PQ_SYSSTAT This view lists system statistics for parallel queries. This view will be
replaced/obsoleted in a future release by a new view called
V$PX_PROCESS_SYSSTAT.
V$PQ_TQSTAT This view contains statistics on parallel execution operations. The
statistics are compiled after the query completes and only remain for the
duration of the session. It displays the number of rows processed through
each parallel execution server at each stage of the execution tree. This
view can help determine skew problems in a query's execution.
V$PX_PROCESS This view contains information about the sessions running parallel
execution.
V$PX_PROCESS_SYSSTAT This view contains information about the sessions running parallel
execution.
V$PX_SESSION This view contains information about the sessions running parallel
execution.
V$PX_SESSTAT This view contains information about the sessions running parallel
execution.
V$TEMPORARY_LOBS This view displays temporary lobs.
V$TEMP_EXTENT_MAP This view displays the status of each unit for all temporary tablespaces.
V$TEMP_EXTENT_POOL This view displays the state of temporary space cached and used for a
given instance. Note that loading of the temporary space cache is lazy
and those instances can be dormant. Use GV$TEMP_EXTENT_POOL for
information about all instances.
V$TEMP_PING The view V$TEMP_PING displays the number of blocks pinged per
datafile. This information in turn can be used to determine access patterns
to existing datafiles and deciding new mappings from datafile blocks to
PCM locks.
V$TEMP_SPACE_HEADER This view displays aggregate information per file per temporary
tablespace regarding how much space is currently being used and how
much is free as per the space header.
V$TEMPSTAT This view contains information about file read/write statistics.
V$TIMER This view lists the elapsed time in hundredths of seconds. Time is
measured since the beginning of the epoch, which is operating system
specific, and wraps around to 0 again whenever the value overflows four
bytes (roughly 497 days).
V$TYPE_SIZE This view lists the sizes of various database components for use in
estimating data block capacity.
V$VERSION Version numbers of core library components in the Oracle server. There is
one row for each component.

101. Data Dictionary Views


1. Dictionary Views
DBA_COL_COMMENTS

2. User Management Views


DBA_ROLES DBA_ROLE_PRIVS DBA_SYS_PRIVS
DBA_PROFILES DBA_TAB_PRIVS DBA_USERS
DBA_COL_PRIVS DBA_SUBSCRIPTIONS DBA_CONNECT_ROLE_GRANTEES
DBA_APPLICATION_ROLES

3. Logical Backup Views


DBA_EXP_OBJECTS DBA_EXP_VERSION DBA_EXP_FILES
DBA_DIRECTORIES DBA_EXPORT_OBJECTS

4. Schema Object Views


DBA_CLUSTERS DBA_CLU_COLUMNS DBA_CATALOG
DBA_SEQUENCES DBA_OBJECT_TABLES DBA_COLUMNS
DBA_CONSTRAINTS DBA_CONS_COLUMNS DBA_NESTED_TABLES
DBA_CONS_OBJ_COLUMNS DBA_INDEXTYPES DBA_INDEXTYPE_OPERATORS
DBA_ASSOCIATIONS DBA_SECONDARY_OBJECTS DBA_TYPES
DBA_TYPE_METHODS DBA_SQLJ_TYPES DBA_IND_COLUMNS
DBA_OBJECTS DBA_SYNONYMS DBA_ALL_TABLES
DBA_NESTED_TABLE_COLS DBA_TAB_COMMENTS DBA_SOURCE_TABLES
DBA_SUBSCRIBED_TABLES DBA_TABLES DBA_TAB_COLS
DBA_TAB_COL_STATISTICS DBA_SEGMENTS DBA_OBJECT_SIZE

5. UNDO Management Views


DBA_ROLLBACK_SEGS DBA_UNDO_EXTENTS

6. Audit Views
DBA_OBJ_AUDIT_OPTS DBA_AUDIT_TRAIL DBA_AUDIT_OBJECT
DBA_STMT_AUDIT_OPTS DBA_AUDIT_SESSION DBA_AUDIT_EXIST
DBA_PRIV_AUDIT_OPTS DBA_AUDIT_STATEMENT DBA_FGA_AUDIT_TRAIL
DBA_AUDIT_POLICIES DBA_COMMON_AUDIT_TRAIL DBA_AUDIT_POLICY_COLUMNS

7. Partition Views
DBA_PART_COL_STATISTICS DBA_SUBPART_HISTOGRAMS DBA_PART_KEY_COLUMNS
DBA_TAB_SUBPARTITIONS DBA_PART_HISTOGRAM DBA_SUBPART_KEY_COLUMNS
DBA_PART_TABLES DBA_TAB_PARTITIONS DBA_IND_SUBPARTITIONS
DBA_SUBPART_COL_STATISTICS DBA_PART_LOBS DBA_SUBPARTITION_TEMPLATES

8. Index Views
DBA_INDEXES DBA_JOIN_IND_COLUMNS DBA_IND_EXPRESSIONS
DBA_IND_STATISTICS DBA_PART_INDEXES DBA_IND_PARTITIONS

9. Large Object Views


DBA_LOB_TEMPLATES DBA_LOB_PARTITIONS DBA_LOB_SUBPARTITIONS

10. Virtual Private Database Views


DBA_ENCRYPTED_COLUMNS

11. Stream Pool Views


DBA_LOG_GROUPS DBA_LOG_GROUP_COLUMNS

12. PL/SQL Views



DBA_PROCEDURES DBA_TRIGGERS DBA_INTERNAL_TRIGGERS


DBA_TRIGGER_COLS DBA_WARNING_SETTINGS

13. Resumable Space Views


DBA_RESUMABLE

14. Distributed Transaction Views


DBA_2PC_PENDING DBA_PENDING_TRANSACTIONS

15. Materialized Views


DBA_MVIEW_AGGREGATES DBA_MVIEW_JOINS DBA_MVIEW_DETAIL_RELATIONS
DBA_MVIEW_COMMENTS DBA_TUNE_MVIEW DBA_MVIEW_ANALYSIS
DBA_MVIEW_KEYS DBA_MVIEW_EQUIVALENCES DBA_SNAPSHOT_LOGS
DBA_REFRESH DBA_MVIEWS DBA_BASE_TABLE_MVIEWS
DBA_REFRESH_CHILDREN DBA_MVIEW_REFRESH_TIMES DBA_REGISTERED_MVIEWS
DBA_SNAPSHOTS DBA_REGISTERED_SNAPSHOTS DBA_MVIEW_LOGS
DBA_MVIEW_LOG_FILTER_COLS

16. Performance Tuning Views


DBA_ADVISOR_DEFINITIONS DBA_ADVISOR_USAGE DBA_ADVISOR_LOG
DBA_ADVISOR_PARAMETERS_PROJ DBA_ADVISOR_RECOMMENDATIONS DBA_ADVISOR_DIRECTIVES
DBA_ADVISOR_SQLA_WK_STMTS DBA_ADVISOR_SQLW_TEMPLATES DBA_ADVISOR_SQLW_TABVOL
DBA_ADVISOR_SQLW_JOURNAL DBA_TAB_STATS_HISTORY DBA_ADVISOR_COMMENTS
DBA_ADVISOR_TASKS DBA_ADVISOR_DEF_PARAMETERS DBA_ADVISOR_OBJECTS
DBA_ADVISOR_ACTIONS DBA_ADVISOR_JOURNAL DBA_ADVISOR_SQLA_REC_SUM
DBA_ADVISOR_SQLW_STMTS DBA_ADVISOR_SQLW_COLVOL DBA_HIST_PGA_TARGET_ADVICE
DBA_HIST_JAVA_POOL_ADVICE DBA_HIST_SYSSTAT DBA_HIST_OSSTAT
DBA_HIST_UNDOSTAT DBA_HIST_METRIC_NAME DBA_HIST_SESSMETRIC_HISTORY
DBA_AUTO_SEGADV_SUMMARY DBA_ANALYZE_OBJECTS DBA_ADVISOR_OBJECT_TYPES
DBA_ADVISOR_TEMPLATES DBA_ADVISOR_PARAMETERS DBA_ADVISOR_FINDINGS
DBA_ADVISOR_RATIONALE DBA_ADVISOR_SQLA_WK_MAP DBA_ADVISOR_SQLW_SUM
DBA_ADVISOR_SQLW_TABLES DBA_ADVISOR_SQLW_PARAMETERS DBA_HIST_SGA_TARGET_ADVICE
DBA_HIST_THREAD DBA_HIST_SYS_TIME_MODEL DBA_HIST_PARAMETER_NAME
DBA_HIST_SEGSTAT DBA_HIST_SYSMETRIC_HISTORY DBA_HIST_FILEMETRIC_HISTORY
DBA_OUTLINE_HINTS DBA_SEC_RELEVANT_COLS

17. Replication Views


DBA_JOBS DBA_ERRORS DBA_JOBS_RUNNING

18. Tablespace Management Views


DBA_FREE_SPACE DBA_FREE_SPACE_COALESCED_TMP1 DBA_FREE_SPACE_COALESCED_TMP4
DBA_FREE_SPACE_COALESCED DBA_TEMP_FILES DBA_AUTO_SEGADV_CLT
DBA_LMT_USED_EXTENTS DBA_LMT_FREE_SPACE DBA_FREE_SPACE_COALESCED_TMP2
DBA_FREE_SPACE_COALESCED_TMP5 DBA_DATA_FILES DBA_TABLESPACE_GROUPS
DBA_DMT_USED_EXTENTS DBA_DMT_FREE_SPACE DBA_FREE_SPACE_COALESCED_TMP3
DBA_FREE_SPACE_COALESCED_TMP6 DBA_TABLESPACES DBA_TABLESPACE_USAGE_METRICS
DBA_TS_QUOTAS

19. User Object Views


DBA_SUMMARY_AGGREGATES DBA_SUMMARY_JOINS

20. Recycle Bin Views


g
Oracle 11 – Data Dictionary Views Page 181 of 242
WK: 6 - Day: 5.2

DBA_RECYCLEBIN

21. External Table Views


DBA_EXTERNAL_TABLES DBA_EXTERNAL_LOCATIONS

22. Schema Segment Views


DBA_EXTENTS DBA_SEGMENTS_OLD

23. Package Info Views


DBA_SOURCE

24. Logminer Views


DBA_LOGMNR_LOG DBA_LOGMNR_SESSION DBA_LOGMNR_PURGED_LOG

25. Data Guard Views


DBA_LOGSTDBY_PARAMETERS DBA_LOGSTDBY_SKIP_TRANSACTION DBA_LOGSTDBY_HISTORY
DBA_LOGSTDBY_PROGRESS DBA_LOGSTDBY_LOG DBA_LOGSTDBY_SKIP
DBA_LOGSTDBY_EVENTS

26. Datapump Views


DBA_DATAPUMP_JOBS DBA_DATAPUMP_SESSIONS

27. Context Views


DBA_POLICIES DBA_POLICY_GROUPS DBA_POLICY_CONTEXTS

28. Network View


DBA_DB_LINKS

29. Redefinition Online Views


DBA_REDEFINITION_ERRORS DBA_REDEFINITION_OBJECTS

30. Scheduler Views


DBA_SCHEDULER_JOBS DBA_SCHEDULER_PROGRAM_ARGS DBA_SCHEDULER_JOB_RUN_DETAILS
DBA_SCHEDULER_WINDOW_GROUPS DBA_SCHEDULER_RUNNING_JOBS DBA_SCHEDULER_CHAIN_RULES
DBA_SCHEDULER_JOB_CLASSES DBA_SCHEDULER_JOB_ARGS DBA_SCHEDULER_WINDOW_LOG
DBA_SCHEDULER_WINGROUP_MEMBERS DBA_SCHEDULER_GLOBAL_ATTRIBUTE DBA_SCHEDULER_CHAIN_STEPS
DBA_SCHEDULER_PROGRAMS DBA_SCHEDULER_WINDOWS DBA_SCHEDULER_JOB_LOG
DBA_SCHEDULER_SCHEDULES DBA_SCHEDULER_WINDOW_DETAILS DBA_SCHEDULER_SCHEDULES
DBA_SCHEDULER_CHAINS DBA_SCHEDULER_RUNNING_CHAINS

31. Streamed Pool Views


DBA_FILE_GROUP_EXPORT_INFO DBA_FILE_GROUP_TABLES DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_RULES DBA_STREAMS_UNSUPPORTED DBA_STREAMS_RENAME_SCHEMA
DBA_STREAMS_RENAME_COLUMN DBA_RECOVERABLE_SCRIPT_PARAMS DBA_TSM_SOURCE
DBA_FILE_GROUPS DBA_FILE_GROUP_FILES DBA_STREAMS_MESSAGE_CONSUMERS
DBA_STREAMS_TABLE_RULES DBA_STREAMS_TRANSFORM_FUNCTION DBA_STREAMS_NEWLY_SUPPORTED
DBA_STREAMS_RENAME_TABLE DBA_STREAMS_ADD_COLUMN DBA_RECOVERABLE_SCRIPT_BLOCKS
DBA_TSM_DESTINATION DBA_FILE_GROUP_VERSIONS DBA_FILE_GROUP_TABLESPACES
DBA_STREAMS_GLOBAL_RULES DBA_STREAMS_MESSAGE_RULES DBA_STREAMS_ADMINISTRATOR
DBA_STREAMS_TRANSFORMATIONS DBA_STREAMS_DELETE_COLUMN DBA_RECOVERABLE_SCRIPT
DBA_RECOVERABLE_SCRIPT_ERRORS DBA_TSM_HISTORY DBA_PROPAGATION

32. Resource Consumer Group Views


DBA_RSRC_PLAN_DIRECTIVES DBA_RSRC_PLANS DBA_RSRC_CONSUMER_GROUPS
DBA_RSRC_CONSUMER_GROUP_PRIVS DBA_RSRC_MAPPING_PRIORITY DBA_RSRC_MANAGER_SYSTEM_PRIVS
DBA_RSRC_GROUP_MAPPINGS

33. Automatic Workload Repository Views


DBA_HIST_SNAPSHOT DBA_HIST_SNAP_ERROR DBA_HIST_DATABASE_INSTANCE
DBA_HIST_WR_CONTROL DBA_HIST_DATAFILE DBA_HIST_BASELINE
DBA_HIST_TEMPFILE DBA_HIST_TEMPSTATXS DBA_HIST_FILESTATXS
DBA_HIST_SQLSTAT DBA_HIST_SQLTEXT DBA_HIST_COMP_IOSTAT
DBA_HIST_SQL_PLAN DBA_HIST_SQL_BIND_METADATA DBA_HIST_SQL_SUMMARY
DBA_HIST_OPTIMIZER_ENV DBA_HIST_EVENT_NAME DBA_HIST_SQLBIND
DBA_HIST_BG_EVENT_SUMMARY DBA_HIST_WAITSTAT DBA_HIST_SYSTEM_EVENT
DBA_HIST_LATCH_NAME DBA_HIST_LATCH DBA_HIST_ENQUEUE_STAT
DBA_HIST_LATCH_PARENT DBA_HIST_LATCH_MISSES_SUMMARY DBA_HIST_LATCH_CHILDREN
DBA_HIST_DB_CACHE_ADVICE DBA_HIST_BUFFER_POOL_STAT DBA_HIST_LIBRARYCACHE
DBA_HIST_SGA DBA_HIST_SGASTAT DBA_HIST_ROWCACHE_SUMMARY
DBA_HIST_PROCESS_MEM_SUMMARY DBA_HIST_RESOURCE_LIMIT DBA_HIST_PGASTAT
DBA_HIST_STREAMS_POOL_ADVICE DBA_HIST_SQL_WORKAREA_HSTGRM DBA_HIST_SHARED_POOL_ADVICE
DBA_HIST_SGA_TARGET_ADVICE DBA_HIST_INSTANCE_RECOVERY DBA_HIST_PGA_TARGET_ADVICE
DBA_HIST_THREAD DBA_HIST_STAT_NAME DBA_HIST_JAVA_POOL_ADVICE
DBA_HIST_SYS_TIME_MODEL DBA_HIST_OSSTAT_NAME DBA_HIST_SYSSTAT
DBA_HIST_PARAMETER_NAME DBA_HIST_PARAMETER DBA_HIST_OSSTAT
DBA_HIST_SEG_STAT DBA_HIST_SEG_STAT_OBJ DBA_HIST_UNDOSTAT
DBA_HIST_SYSMETRIC_HISTORY DBA_HIST_SYSMETRIC_SUMMARY DBA_HIST_METRIC_NAME
DBA_HIST_FILEMETRIC_HISTORY DBA_HIST_WAITCLASSMET_HISTORY DBA_HIST_SESSMETRIC_HISTORY
DBA_HIST_CR_BLOCK_SERVER DBA_HIST_CURRENT_BLOCK_SERVER DBA_HIST_DLM_MISC
DBA_HIST_ACTIVE_SESS_HISTORY DBA_HIST_TABLESPACE_STAT DBA_HIST_INST_CACHE_TRANSFER
DBA_HIST_MTTR_TARGET_ADVICE DBA_HIST_TBSPC_SPACE_USAGE DBA_HIST_LOG
DBA_HIST_SERVICE_STAT DBA_HIST_SERVICE_WAIT_CLASS DBA_HIST_SERVICE_NAME
DBA_HIST_STREAMS_CAPTURE DBA_HIST_STREAMS_APPLY_SUM DBA_HIST_SESS_TIME_STATS
DBA_HIST_BUFFERED_SUBSCRIBERS DBA_HIST_RULE_SET DBA_HIST_BUFFERED_QUEUES

34. Other Views


DBA_TAB_HISTOGRAMS DBA_PLSQL_OBJECT_SETTINGS DBA_LOBS
DBA_2PC_NEIGHBORS DBA_PROXIES DBA_REFS
DBA_OPANCILLARY DBA_LOG_GROUPS DBA_OBJ_COLATTRS
DBA_SQLJ_TYPE_ATTRS DBA_LIBRARIES DBA_OPBINDINGS
DBA_DIM_LEVELS DBA_VARRAYS DBA_OPERATOR_COMMENTS
DBA_DIM_HIERARCHIES DBA_OPERATORS DBA_INDEXTYPE_ARRAYTYPES
DBA_SUMMARIES DBA_OPARGUMENTS DBA_PARTIAL_DROP_TABS
DBA_SUMMARY_KEYS DBA_INDEXTYPE_COMMENTS DBA_TAB_MODIFICATIONS
DBA_DEPENDENCIES DBA_UNUSED_COL_TABS DBA_PUBLISHED_COLUMNS
DBA_HIST_INSTANCE_RECOVERY DBA_USTATS DBA_SUBSCRIBED_COLUMNS
DBA_HIST_STAT_NAME DBA_TAB_STATISTICS DBA_TYPE_ATTRS
DBA_HIST_OSSTAT_NAME DBA_COLL_TYPES DBA_METHOD_RESULTS
DBA_HIST_PARAMETER DBA_METHOD_PARAMS DBA_PENDING_CONV_TABLES
DBA_HIST_SEG_STAT_OBJ DBA_TYPE_VERSIONS DBA_DIMENSIONS
DBA_HIST_SYSMETRIC_SUMMARY DBA_SQLJ_TYPE_METHODS DBA_DIM_ATTRIBUTES
DBA_HIST_WAITCLASSMET_HISTORY DBA_DIM_LEVEL_KEY DBA_DIM_JOIN_KEY
DBA_DEPENDENCIES DBA_DIM_CHILD_OF DBA_SUMMARY_DETAIL_TABLES
DBA_JOBS DBA_GLOBAL_CONTEXT DBA_SERVICES
DBA_SEGMENTS DBA_ATTRIBUTE_TRANSFORMATIONS DBA_OPTSTAT_OPERATIONS
DBA_CONTEXT DBA_RULES DBA_RULE_SETS
DBA_TRANSFORMATIONS DBA_EVALUATION_CONTEXT_TABLES DBA_RULE_SET_RULES
DBA_RULESETS DBA_QUEUES DBA_EVALUATION_CONTEXT_VARS
DBA_EVALUATION_CONTEXTS DBA_AQ_AGENTS DBA_QUEUE_PUBLISHERS
DBA_QUEUE_TABLES DBA_RCHILD DBA_AQ_AGENT_PRIVS
DBA_QUEUE_SCHEDULES DBA_REGISTRY_HIERARCHY DBA_RGROUP
DBA_QUEUE_SUBSCRIBERS DBA_LOGSTDBY_UNSUPPORTED DBA_REGISTRY
DBA_SERVER_REGISTRY DBA_AWS DBA_REGISTRY_LOG
DBA_REGISTRY_HISTORY DBA_CAPTURE_PARAMETERS DBA_LOGSTDBY_NOT_UNIQUE
DBA_CAPTURE DBA_CAPTURE_PREPARED_TABLES DBA_AW_PS
DBA_CAPTURE_PREPARED_SCHEMAS DBA_APPLY DBA_CAPTURE_PREPARED_DATABASE
DBA_REGISTERED_ARCHIVED_LOG DBA_APPLY_INSTANTIATED_SCHEMAS DBA_CAPTURE_EXTRA_ATTRIBUTES
DBA_APPLY_INSTANTIATED_OBJECTS DBA_APPLY_CONFLICT_COLUMNS DBA_APPLY_PARAMETERS
DBA_APPLY_KEY_COLUMNS DBA_APPLY_PROGRESS DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_DML_HANDLERS DBA_APPLY_EXECUTE DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_ENQUEUE DBA_FEATURE_USAGE_STATISTICS DBA_APPLY_ERROR
DBA_CHANGE_NOTIFICATION_REGS DBA_OUTSTANDING_ALERTS DBA_APPLY_SPILL_TXN
DBA_CPU_USAGE_STATISTICS DBA_ALERT_ARGUMENTS DBA_HIGH_WATER_MARK_STATISTICS
DBA_THRESHOLDS DBA_SQLTUNE_BINDS DBA_ALERT_HISTORY
DBA_ENABLED_AGGREGATIONS DBA_SQLTUNE_RATIONALE_PLAN DBA_ENABLED_TRACES
DBA_SQLTUNE_PLANS DBA_SQLSET_STATEMENTS DBA_SQLTUNE_STATISTICS
DBA_SQLSET_REFERENCES DBA_SQL_PROFILES DBA_SQLSET
DBA_RESOURCE_INCARNATIONS DBA_SQLSET_PLANS DBA_SQLSET_BINDS

102. New Features for OLAP


102.1. The SQL Model clause
The new data warehousing feature in Oracle Database 10g that has probably received the most attention is the SQL
Model clause. The SQL Model clause allows users to embed spreadsheet-like models in a SELECT statement, in a way
that was previously the domain of dedicated multidimensional OLAP servers such as Oracle Express and Oracle9i OLAP.
The SQL Model clause brings an entirely new dimension to Oracle analytical queries and addresses a number of
traditional shortcomings with the way SQL normally works.
The SQL Model clause has been designed to address the sort of situation where, in the past, clients have taken data out
of relational databases and imported it into a model held in a spreadsheet such as Microsoft Excel. Often, these models
involve a series of macros that aggregate data over a number of business dimensions, over varying time periods, and
following a set of complex business rules that would be difficult to express as normal SQL. We've worked on many a client
engagement where the limitations of SQL meant that a number of standalone Excel spreadsheets had to be used, and
while these gave the client the analytical capabilities they required, the usual issues of scalability and reliability of
replicated data, and lack of overall control often became apparent after a while. The aim of the SQL Model clause is to
give normal SQL statements the ability to create a multidimensional array from the results of a normal SELECT statement,
carry out any number of interdependent inter-row and inter-array calculations on this array, and then update the base
tables with the results of the model. An example SQL statement using the MODEL clause would look like:

SELECT SUBSTR(country,1,20) country, SUBSTR(prod,1,15) prod, year, sales
  FROM sales_view
 WHERE country IN ('Italy','Japan')
 MODEL RETURN UPDATED ROWS
 PARTITION BY (country) DIMENSION BY (prod, year) MEASURES (sales sales)
 RULES (sales['Bounce', 2002] = sales['Bounce', 2001] + sales['Bounce', 2000],
        sales['Y Box', 2002] = sales['Y Box', 2001],
        sales['2_Products', 2002] = sales['Bounce', 2002] + sales['Y Box', 2002])
 ORDER BY country, prod, year;

102.2. Improvements to the multidimensional OLAP engine


With Oracle9i, the previously standalone Express multidimensional engine is now incorporated into the Oracle database,
and with Oracle Database 10g, benefits of integration with the traditional relational Oracle engine are starting to become
apparent. First up are improvements to the way large analytic workspaces can be partitioned, introducing into the Oracle
OLAP world some of the advanced partitioning options currently enjoyed by Oracle database users. Currently, analytic
workspaces, stored as AW$ tables within an Oracle schema, can be partitioned across multiple rows in the AW$ table by
specifying a maximum segment size, allowing us to split an individual analytic workspace into (say) 10 GB segments, one
in each table row. This table could then be partitioned just like any other Oracle table, allowing us to put one row in one
tablespace, another in another, and each of these tablespaces could of course be stored in datafiles on different physical
disk units. Although this was of some benefit, splitting by segment size was the only way of partitioning the data, and we
couldn't specify what objects within the analytic workspace went into each partition. Oracle10g OLAP now includes an
enhancement where we can specify exactly which objects within the analytic workspace go into each partition, and we
can further subdivide this by segment size if objects are particularly large.
In a similar fashion, variables within the analytic workspace can now be partitioned, either by range of dimension
members, a list of dimension members, or by reference to a CONCAT dimension. The 10g multidimensional engine then
stores each variable partition as a separate physical object, which can be directed to separate rows in the AW$ table
(allowing us to partition these across different tablespaces and physical disk drives); the variable, however, appears as
just one object to the application, simplifying the data model and allowing Oracle to do all the hard work in the background.
Another excellent new feature, and a real improvement over what was available with Express, is support for multi-user
read-write access to individual analytic workspaces. In the past, one drawback with Express was that only one user could
attach to an Express database in read-write mode, leading Express developers to develop a whole range of alternative
solutions to allow ad-hoc write access to Express databases. In Oracle 10g OLAP, analytic workspaces can be attached in
MULTI mode, after which applications ACQUIRE individual variables in the analytic workspace for read-write access.
Once an object has been acquired (and locked by the Oracle multidimensional engine), updates can then take place and
the application can make whatever modifications are necessary. After all changes have been made, the UPDATE
command is issued against the variable, followed by a COMMIT, and then a RELEASE command is issued against the
variable to make it available for other applications to write to. It'll be interesting to see how the multidimensional engine
handles multi-write access; in the past, Express databases could balloon in size when one user had read-write access
to a database, and others were accessing it in read mode, as Express had to clone the database for each user to ensure
that they had a consistent view of the data. We wouldn't be surprised if individual variables were copied out of a 10g
analytic workspace into a temporary workspace while updates happened, with updates being propagated back (as with
the old Express Excel Add-In) when the changes are finally COMMITted -- the key thing here is how database size is dealt
with as the old Express way of doing it was less than optimal.
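
The sequence described above can be sketched in OLAP DML as follows; the workspace SALES_AW, the variable SALES and its
dimensions PROD and YEAR are illustrative names, not objects shipped with Oracle, so treat this as a minimal sketch of the
flow rather than the exact 10g syntax:

AW ATTACH sales_aw MULTI                   " attach the workspace in multi-writer mode
ACQUIRE sales                              " lock the SALES variable for read-write
sales(prod 'Bounce' year '2002') = 1000    " modify the acquired data
UPDATE sales                               " write the changed variable back to the AW$ table
COMMIT                                     " commit the transaction
RELEASE sales                              " free the variable for other writers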
Aggregation has been improved with Oracle10g OLAP, with formulas now allowed as sources of data for the
AGGREGATE command, eliminating the need to calculate and store data at the detail level. Aggregation, particularly
dynamic aggregation, is another area where Oracle9i and now 10g OLAP are a distinct improvement over Express and it's
well worth looking at this area in more detail if this is an issue with an existing Express system.

102.3. Asynchronous Change Data Capture


Oracle Change Data Capture was introduced with Oracle9i, and provided the ability to track changes to tables and store
them in a change table, for further consumption by an ETL process. Oracle9i Change Data Capture worked by creating
triggers on the source tables, transferring data synchronously but creating a processing overhead and requiring access to
the structure of the source tables. Because of the effect that the triggers had on the underlying tables, many warehouse
projects did without change data capture and used other methods to capture changes. Oracle10g introduces
Asynchronous Change Data Capture, which instead of using triggers uses the database log files to capture changes and
apply them to collection tables. Asynchronous Change Data Capture therefore doesn't require changes to the table
structure and doesn't impact database performance.
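
As a rough sketch, an asynchronous (HotLog) capture is set up through the DBMS_CDC_PUBLISH package; the publisher schema
CDCPUB, the change set name and the SCOTT.EMP source table below are illustrative, and the parameter values are abbreviated:

BEGIN
  -- create a change set on the predefined asynchronous HotLog change source
  DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
    change_set_name    => 'EMP_DAILY',
    description        => 'Change set for SCOTT.EMP',
    change_source_name => 'HOTLOG_SOURCE',
    stop_on_ddl        => 'y');
  -- create a change table that will receive the captured changes
  DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
    owner             => 'cdcpub',
    change_table_name => 'emp_ct',
    change_set_name   => 'EMP_DAILY',
    source_schema     => 'SCOTT',
    source_table      => 'EMP',
    column_type_list  => 'EMPNO NUMBER(4), SAL NUMBER(7,2)',
    capture_values    => 'both',   -- keep both before and after images
    rs_id             => 'y',
    row_id            => 'n',
    user_id           => 'n',
    timestamp         => 'n',
    object_id         => 'n',
    source_colmap     => 'n',      -- must be 'n' for asynchronous capture
    target_colmap     => 'y',
    options_string    => NULL);
END;
/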

102.4. Improvements to Oracle data mining


Alongside the inclusion of the Oracle Express multidimensional OLAP engine, Oracle9i also embedded data mining
functionality in the database, and this data mining functionality has been enhanced with Oracle Database 10g.
Oracle Database 10g adds support for two new classification routines, Support Vector Machine (used for top-down rather
than bottom-up calculations, assuming the best possible fit and then working backwards to what can be achieved) and
Non-Negative Matrix Factorization, together with support for Frequent Itemsets, used for such functions as market basket
analysis and propensity analysis.

102.5. The SQLAccess Advisor


Part of the Oracle Database 10g Server Manageability feature, the SQLAccess Advisor recommends the best combination
of indexes and materialized views for a given database workload. Available either at the command line (via the
DBMS_ADVISOR package) or through the Advisor Central element of the new Web-based Oracle Enterprise Manager,
the SQLAccess Advisor is based on the index and summary advisors previously bundled with Oracle9i and provides a
one-stop-shop for tuning and summarizing our warehouse data.

102.6. The Tune MView Advisor and improvements to Query Rewrite


Query Rewrite (the ability for Oracle to transparently redirect queries from detail level to summary tables) is one of the
best data warehousing features in Oracle8i and 9i, but it's sometimes a bit temperamental and we can often find that
queries don't actually get rewritten. Sometimes this is because we've broken one of the Query Rewrite restrictions,
sometimes it's because our materialized view doesn't contain the correct columns and aggregates. Oracle 10g has a
number of improvements to Query Rewrite and the materialized view tuning process that should make this process a bit
more productive. With Oracle Database 10g, query rewrite is now possible when our SELECT statement contains analytic
functions, full outer joins and set operations such as UNION, MINUS and INTERSECT. In addition, we can now use a hint,
/*+ REWRITE_OR_ERROR */, which will stop the execution of a SQL statement if query rewrite cannot occur.
SQL> SELECT /*+ REWRITE_OR_ERROR */
            s.prod_id,
            sum(s.quantity_sold)
       FROM sales s
      GROUP BY s.prod_id;
FROM sales s
     *
ERROR at line 4:
ORA-30393: a query block in the statement did not rewrite
Oracle9i came with two packages, DBMS_MVIEW.EXPLAIN_MVIEW and DBMS_MVIEW.EXPLAIN_REWRITE that could
be used to diagnose why a materialized view wasn't being used for query rewrite. However, although these packages told
us why rewrite hadn't happened, they left it down to us to work out how to alter our CREATE MATERIALIZED VIEW
statement to ensure that rewrite happened correctly. Oracle Database 10g comes with a new advisor package,
DBMS_ADVISOR.TUNE_MVIEW, which takes as its input a CREATE MATERIALIZED VIEW DDL statement, and outputs
a corrected version that supports query rewrite and features such as fast refresh.
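
A minimal sketch of calling the advisor follows; the task name and the materialized view definition are illustrative, and
the corrected statements are read back from the USER_TUNE_MVIEW view:

DECLARE
  l_task VARCHAR2(30) := 'tune_sales_mv';   -- advisor task name (illustrative)
BEGIN
  DBMS_ADVISOR.TUNE_MVIEW(
    l_task,
    'CREATE MATERIALIZED VIEW sales_mv
       ENABLE QUERY REWRITE AS
     SELECT prod_id, SUM(quantity_sold) AS qty
       FROM sales
      GROUP BY prod_id');
END;
/

SELECT statement
  FROM user_tune_mview
 WHERE task_name = 'tune_sales_mv'
 ORDER BY script_type, action_id;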

102.7. Data Pump: The replacement for import and export


Data Pump is a replacement for the venerable IMP and EXP applications used for creating logical backups of Oracle
tables, schemas or databases. Data Pump is a server application (as opposed to IMP and EXP, which were client
applications), which in beta testing was twice as fast as the old EXP for exporting data, and 10 times as fast as the old
IMP for importing data. Data Pump is callable either through the DBMS_DATAPUMP package, through the replacements
for IMP and EXP, known as IMPDP and EXPDP, or through a wizard delivered as part of Oracle Enterprise Manager 10g.

Data Pump (and the new IMPDP and EXPDP applications) offers a number of improvements over the old IMPORT and
EXPORT, including resumable/restartable jobs, automatic two-level parallelism, a network mode that uses
DBLINKs/listener service names instead of pipes, fine-grained object selection (so we can select individual tables, views,
packages, indexes and so on for import or export, not just tables or schemas as with IMPORT and EXPORT), and a fully
callable API that allows Data Pump functionality to be embedded in third-party ETL packages.
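
Both utilities read and write dump files through a directory object, so a sketch of a simple schema export and import
(directory path, passwords and the SCOTT schema are illustrative) looks like this:

SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;

$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log SCHEMAS=scott
$ impdp system/manager DIRECTORY=dpump_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_copy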

102.8. Improvements to storage management


Automatic Storage Management (ASM) is one of the 'cool new features' in Oracle10g that is meant to reduce the workload
for Oracle DBAs. ASM completely automates the process of creating logical volumes, file systems and filenames, with the
DBA only specifying the location of raw disks and ASM doing the rest. Disk I/O is managed by evenly distributing the data
across blocks within a disk group, with ASM in addition handling disk mirroring and the creation of mirror groups and
failure groups.
ASM deals with the problems caused by rapidly expanding data warehouses, where administrators can no longer deal
with the sheer number of disk units, nodes and logical groupings, and is a key feature of the Oracle 10g Grid Architecture,
which aims to 'virtualize' computing power and present database features like processing and storage as utilities that
effectively manage themselves.
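
As a sketch, a normal-redundancy disk group with two failure groups is created from the ASM instance, after which
database files can be placed in it simply by naming the disk group; the device paths and names are illustrative:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
       FAILGROUP fg2 DISK '/dev/raw/raw3', '/dev/raw/raw4';

SQL> CREATE TABLESPACE sales_ts DATAFILE '+DATA';   -- issued from a database instance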

102.9. Faster full table scans


Full table scans are common in data warehousing environments, and with this in mind, table scan performance has been
improved in Oracle10g. Code optimization in Oracle Database 10g has decreased CPU consumption and this leads to
faster table scan execution (when queries are CPU bound, rather than I/O bound), and gives the potential for greater
query concurrency, offering up to 30-40% speed improvements when comparing CPU-bound queries.

102.10. Automatic tuning and maintenance


Automatic maintenance and tuning has always been one of the key product differentiators for Microsoft SQL Server and
with Oracle10g, features that meet and match those found in competitor products are being introduced to the server
technology stack.
Surveys show that over 50% of a DBA's time is spent tuning and monitoring the database server, a task that while
important is often complex and difficult to get exactly right. With Oracle Database 10g, Oracle has introduced a number of
components that together make it possible for the database server to monitor itself, make intelligent changes to
configuration, and alert DBAs when situations arise that need manual intervention.
The first component in this framework is the Automatic Workload Repository, which uses an enhanced version of
Statspack to collect instance statistics every 30 minutes and stores these for a rolling seven day period. This enhanced
version of Statspack now collects a broader range of statistics and has a number of optimizations to streamline the way
high-cost SQL statements are captured, ensuring that only SQL activity that has significantly affected performance since
the last snapshot is collected. The usage information stored in the Automatic Workload Repository is then used as the
basis for all the self-management functionality in Oracle Database 10g.
Next up is the Automatic Maintenance Tasks feature, which acts on the statistics gathered by the Automatic Workload
Repository, and carries out tasks such as index rebuilding, refreshing statistics and so on, where such tasks don't require
any manual intervention by the DBA. A new scheduling feature known as Unified Scheduler runs these tasks during a
predefined maintenance window, set by default to be between 10:00 pm and 6:00 am the next day, although these times
can be customized to reduce impact on other tasks (such as batch loads) that might be taking place.
The third component of the self-managing framework is Server Generated Alerts, a method where the database server
sends notifications via e-mail to the DBA -- including a recommendation as to how best to deal with the situation. Alerts
will normally be raised where the database itself cannot deal with the situation that has arisen, such as when there is
insufficient space on a disk unit to extend a datafile.
Lastly, and perhaps the most exciting of all the self-managing component frameworks, is the Automatic Database
Diagnostic Monitor. This component analyzes the data captured in the Automatic Workload Repository and uses an
artificial intelligence algorithm, similar to that found in Oracle Expert, to analyze areas such as lock contention, CPU
bottlenecks, I/O usage and contention, issues with checkpointing and so on, in much the same way that a DBA would
currently do by analyzing statspack reports.
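
The snapshot interval and retention can be adjusted, and extra snapshots taken on demand, through the
DBMS_WORKLOAD_REPOSITORY package; the values below (both in minutes) are only an example:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,            -- snapshot every 30 minutes
    retention => 7 * 24 * 60);  -- keep seven days of history
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;  -- take a manual snapshot now
END;
/

SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;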

103. Real Application Clusters – RAC


103.1. Overview
Oracle Real Application Clusters (RAC) allows Oracle Database to run any packaged or custom application, unchanged
across a set of clustered servers. Introduced with Oracle9i, it is the successor to Oracle Parallel Server (OPS). RAC allows
multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing,
and performance benefits by allowing the system to scale out, and at the same time since all nodes access the same
database, the failure of one instance will not cause the loss of access to the database.
Oracle10g RAC uses a shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files,
control files and parameter files for all nodes in the cluster. The data disks must be globally available in order to allow all
nodes to access the database. Each instance has its own redo log files, but the other nodes must be able to
access them in order to recover that node in the event of a system failure. The biggest difference between Oracle RAC
and OPS is the addition of Cache Fusion. With OPS a request for data from one node to another required the data to be
written to disk first, then the requesting node can read that data. With cache fusion, data is passed along a high-speed
interconnect using a sophisticated locking algorithm.

103.2. What is Oracle Database 10g RAC?


Oracle RAC is an option of Oracle Database that was first introduced with Oracle 9i. Oracle RAC is now proven
technology used by thousands of customers in every industry in every type of application. Oracle RAC provides options for
scaling applications beyond the capabilities of a single server. This allows customers to take advantage of lower cost
commodity hardware to reduce their total cost of ownership and provide a scaleable computing environment that supports
their application workload. Oracle RAC is a key component of the Oracle High Availability Architecture, which provides
direction to architect the highest availability for applications. Oracle RAC provides the ability to remove the server as a
single point of failure in any database application environment.

103.3. Real Application Clusters Architecture


A RAC database is a clustered database. A cluster is a group of independent servers that cooperate as a single system.
Clusters provide improved fault resilience and modular incremental system growth over single symmetric multi-processor
(SMP) systems. In the event of a system failure, clustering ensures high availability to users. Access to mission critical
data is not lost. Redundant hardware components such as additional nodes, interconnects, and disks allow the cluster to
provide high availability. Such redundant hardware architectures avoid single points-of-failure and provide exceptional
fault resilience.

With Real Application Clusters, we de-couple the Oracle Instance (the processes and memory structures running on a
server to allow access to the data) from the Oracle database (the physical structures residing on storage which actually
hold the data, commonly known as datafiles). A clustered database is a single database that can be accessed by multiple
instances. Each instance runs on a separate server in the cluster. When additional resources are required, additional
nodes and instances can be easily added to the cluster with no downtime. Once the new instance is started, applications
using services can immediately take advantage of it with no changes to the application or application server.
Real Application Clusters is an extension of the Oracle Database and therefore benefits from the manageability, reliability
and security features built into Oracle Database 10g.

103.3.1. Oracle Clusterware


Starting with Oracle Database 10g, Oracle provides Oracle Clusterware, a portable Clusterware solution that is integrated
and designed specifically for Oracle Database. We no longer have to purchase third party Clusterware in order to have a
RAC database. Oracle Clusterware is integrated with the Oracle Universal Installer, which the Oracle DBA is already
familiar with. Support is made easier as there is one support organization to deal with for the Clusterware and cluster
database. We can choose to run Oracle RAC with selected third-party Clusterware; Oracle will work with certified
third-party Clusterware. However, Oracle Clusterware must still manage all RAC databases.
Oracle Clusterware monitors and manages Real Application Cluster databases. When a node in the cluster is started, all
instances, listeners and services are automatically started. If an instance fails, the Clusterware will automatically restart
the instance so the service is often restored before the administrator notices it was down.
With Oracle Database 10g Release 2, Oracle provides a High Availability API so that non-Oracle processes can be put
under the control of the high availability framework within Oracle Clusterware. When registering the process with Oracle
Clusterware, information is provided on how to start, stop, and monitor the process. We can also specify if the process
should be relocated to another node in the cluster when the node it is executing on fails.
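
Day-to-day control of these resources is typically done with the srvctl utility; a few typical invocations are shown
below, where the database name RACDB and the node names are illustrative:

$ srvctl status database -d racdb            # status of all instances of the database
$ srvctl start instance -d racdb -i racdb2   # start a single instance
$ srvctl status nodeapps -n node1            # VIP, listener, GSD and ONS status on a node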

103.3.2. Hardware Architecture


Oracle Real Application Clusters is a shared everything architecture. All servers in the cluster must share all storage used
for a RAC database. The type of disk storage used can be network attached storage (NAS), storage area network (SAN),
or SCSI disk. Our storage choice is dictated by the server hardware choice and what our hardware vendor supports. The
key to choosing our storage is choosing a storage system that will provide scaleable I/O for our application, an I/O system
that will scale as additional servers are added to the cluster.
A cluster requires an additional network to the Local Area Network (LAN) that a database server is attached to for
application connections. A cluster requires a second private network commonly known as the interconnect. Oracle
recommends that we use 2 network interfaces for this network for high availability purposes. Network interface bonding
external to Oracle should be used to provide failover and load balancing. The interconnect is used by the cluster for inter-
node messaging. The interconnect is also used by RAC to implement the cache fusion technology. Oracle recommends
the use of UDP over GigE for the cluster interconnect. The use of crossover cables as the interconnect is not supported
for a production RAC database.
The cluster is made up of 1 to many servers each having a LAN connection, an interconnect connection, and must be
connected to the shared storage. With Oracle Database 10g Release 2, Oracle Clusterware and Real Application Clusters
support up to 100 nodes in the cluster. Each server in the cluster does not have to be exactly the same but it must run the
same operating system and the same version of Oracle. All servers must use the same architecture, e.g. all 32-bit or
all 64-bit.

103.3.3. File Systems and Volume Management


Since RAC is a shared everything architecture, the volume management and file system used must be cluster-aware.
Oracle recommends the use of Automatic Storage Management (ASM), a feature included with Oracle Database
10g to automate the management of storage for the database. ASM provides the performance of async I/O with the easy
management of a file system. ASM distributes I/O load across all available resources to optimize performance while
removing the need for manual I/O tuning. Alternatively Oracle supports the use of raw devices and some cluster file
systems such as Oracle Cluster File System (OCFS) which is available on Windows, Linux and Solaris (OCFS for Solaris
will be released following Oracle Database 10g Release 2).

103.3.4. Virtual Internet Protocol Address (VIP)


Oracle RAC 10g requires a virtual IP address for each server in the cluster. The virtual IP address is an unused IP
address on the same subnet as the Local Area Network (LAN). This address is used by applications to connect to the
RAC database. If a node fails, the Virtual IP is failed over to another node in the cluster to provide an immediate node
down response to connection requests. This increases the availability for applications as they no longer have to wait for
network timeouts before the connection request fails over to another instance in the cluster.
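
A client connect descriptor therefore lists the VIP addresses of all nodes; a sketch of such a tnsnames.ora entry, with
illustrative host and service names, is shown below:

RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )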

103.3.5. Cluster Verification Utility


Oracle Database 10g Release 2 introduces a new cluster configuration verification tool. The cluster verification tool
eliminates errors through pre and post validation of installation steps and/or configuration changes. It can also be used for
ongoing cluster validation. The tool is invoked through a command line interface or through an API by other programs
such as Oracle Universal Installer (OUI).
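
The utility is invoked as cluvfy; typical stage and component checks look like the following (node names illustrative):

$ cluvfy stage -pre crsinst -n node1,node2 -verbose   # before installing Oracle Clusterware
$ cluvfy stage -post crsinst -n all                   # after the installation completes
$ cluvfy comp nodecon -n all                          # check node connectivity only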

103.3.6. RAC on Extended Distance Clusters


RAC on Extended Distance Clusters is an architecture where nodes in the cluster reside in locations that are physically
separate. RAC on Extended Distance Clusters provides extremely fast recovery from a site failure and allows for all
nodes, at all sites, to actively process transactions as part of single database cluster. While this architecture creates great
interest and has been successfully implemented, it is critical to understand where this architecture best fits especially in
regards to distance, latency, and degree of protection it provides. The high impact of latency, and therefore distance,
creates some practical limitations as to where this architecture can be deployed. This architecture fits best where the two
datacenters are located relatively close together (<~100km) and where the considerable cost of setting up direct cables
with dedicated channels between the sites has already been incurred.
RAC on Extended Distance Clusters provides greater high availability than local RAC but it may not fit the full Disaster
Recovery requirements of our organization. Physical separation provides strong protection against some disasters (local power
outage, airplane crash, server room flooding) but not all. Disasters such as earthquakes, hurricanes, and regional floods
may affect a greater area. Customers should do an analysis to determine if both sites are likely to be affected by the same
disaster. For comprehensive protection against disasters including protection against corruptions and regional disasters,
Oracle recommends the use of Data Guard with RAC as described in Oracle High Availability Architecture documentation.
Data Guard also provides additional benefits such as support for rolling upgrades across Oracle versions.
Configuring an extended distance cluster is more complex than a local cluster. Specific focus needs to go into node
layout, voting disks, and data disk placement. Implemented properly, this architecture can provide greater HA than a local
RAC database. The combination of Oracle Clusterware, Oracle Real Application Clusters and Automatic Storage
Management can be used to create extended distance clusters.

103.4. RAC Benefits


103.4.1. High Availability
Oracle Real Application Clusters 10g provides the infrastructure for datacenter high availability. It is also an integral
component of Oracle's High Availability Architecture, which provides best practices for building the highest-availability data
management solution. Oracle Real Application Clusters delivers the main characteristics required of high availability
solutions.

103.4.2. Reliability
Oracle DB is known for its reliability. Real Application Clusters takes this a step further by removing the database server
as a single point of failure. If an instance fails, the remaining instances in the cluster are open and active.

103.4.3. Recoverability
Oracle Database includes many features that make it easy to recover from all types of failures. If an instance fails in a
RAC database, it is recognized by another instance in the cluster and recovery automatically takes place. Fast Application
Notification, Fast Connection Failover and Transparent Application Failover make it easy for applications to mask
component failures from the user.

103.4.4. Error Detection


Oracle Clusterware automatically monitors RAC databases and provides fast detection of problems in the environment.
It also automatically recovers from failures, often before anyone has noticed that a failure occurred. Fast Application
Notification provides the ability for applications to receive immediate notification of cluster component failures and mask
the failure from the user by resubmitting the transaction to a surviving node in the cluster.

103.4.5. Continuous Operations


Real Application Clusters provides continuous service for both planned and unplanned outages. If a node (or instance)
fails, the database remains open and the application is able to access data. Most database maintenance operations can
be completed without down time and are transparent to the user. Many other maintenance tasks can be done in a rolling
fashion so application downtime is minimized or removed. Fast Application Notification and Fast Connection Failover
assist applications in meeting service levels and masking component failures in the cluster.

103.4.6. Scalability
Oracle Real Application Clusters provides unique technology for scaling applications. Traditionally, when the database
server ran out of capacity, it was replaced with a new larger server. As servers grow in capacity, they are more expensive.
For databases using RAC, there are alternatives for increasing the capacity. Applications that have traditionally run on
large SMP servers can be migrated to run on clusters of small servers. Alternatively, we can maintain the investment in
the current hardware and add a new server to the cluster (or to create a cluster) to increase the capacity. Adding servers
to a cluster with Oracle Clusterware and RAC does not require an outage and as soon as the new instance is started, the
application can take advantage of the extra capacity. All servers in the cluster must run the same operating system and
same version of Oracle but they do not have to be exactly the same capacity. Customers today run clusters that fit their
needs, ranging from clusters of 2-CPU commodity servers to clusters where each server has 32 or 64 CPUs. The Oracle
Real Application Clusters architecture automatically accommodates rapidly
changing business requirements and the resulting workload changes. Application users, or mid tier application server
clients, connect to the database by way of a service name. Oracle automatically balances the user load among the
multiple nodes in the cluster. The Real Application Clusters database instances on the different nodes subscribe to all or
some subset of database services. This provides DBAs the flexibility of choosing whether specific application clients that
connect to a particular database service can connect to some or all of the database nodes. Administrators can painlessly
add processing capacity as application requirements grow. The Cache Fusion architecture of RAC immediately utilizes the
CPU and memory resources of the new node. DBAs do not need to manually re-partition data.
Another way of distributing workload in an Oracle database is through the Oracle Database's parallel execution feature.
Parallel execution (i.e., parallel query or parallel DML) divides the work of executing a SQL statement across multiple
processes. In an Oracle Real Application Clusters environment, these processes can be balanced across multiple
instances. Oracle’s cost-based optimizer incorporates parallel execution considerations as a fundamental component in
arriving at optimal execution plans. In a Real Application Clusters environment, intelligent decisions are made with regard
to intra-node and inter-node parallelism. For example, if a particular query requires six query processes to complete the
work and six CPUs are idle on the local node (the node that the user connected to), then the query is processed using
only local resources. This demonstrates efficient intra-node parallelism and eliminates the query coordination overhead
across multiple nodes. However, if there are only two CPUs available on the local node, then those two CPUs and four
CPUs of another node are used to process the query. In this manner, both inter-node and intra-node parallelism are used
to provide speed up for query operations.
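
For example, a statement can request a degree of parallelism of six with a hint, leaving it to the optimizer and the
parallel execution coordinator to decide how those six query processes are spread across the local and remote instances;
the SALES table here is illustrative:

SELECT /*+ PARALLEL(s, 6) */ s.prod_id, SUM(s.quantity_sold)
  FROM sales s
 GROUP BY s.prod_id;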

104. Glossary
Automatic Database Diagnostic Monitor (ADDM)
This lets the Oracle Database diagnose its own performance and determine how identified problems could be resolved. It
runs automatically after each AWR statistics capture, making the performance diagnostic data readily available.
Automatic Storage Management (ASM)
A vertical integration of both the file system and the volume manager built specifically for Oracle database files. It extends
the concept of stripe and mirrors everything to optimize performance, while removing the need for manual I/O tuning.
Automatic Storage Management Disk
Storage is added and removed from Automatic Storage Management disk groups in units of Automatic Storage
Management disks.
Automatic Storage Management File
Oracle database file stored in an Automatic Storage Management disk group. When a file is created, certain file attributes
are permanently set. Among these are its protection policy (parity, mirroring, or none) and its striping policy. Automatic
Storage Management files are not visible from the operating system or its utilities, but they are visible to database
instances, RMAN, and other Oracle-supplied tools.
Automatic Storage Management Instance
An Oracle instance that mounts Automatic Storage Management disk groups and performs management functions
necessary to make Automatic Storage Management files available to database instances. Automatic Storage
Management instances do not mount databases.
Automatic Storage Management Template
Collections of attributes used by Automatic Storage Management during file creation.
Templates simplify file creation by mapping complex file attribute specifications into a single name. A default template
exists for each Oracle file type. Users can modify the attributes of the default templates or create new templates.
Automatic Undo Management Mode
A mode of the database in which undo data is stored in a dedicated undo tablespace. Unlike manual undo management
mode, the only undo management that we must perform is the creation of the undo tablespace. All other undo
management is performed automatically.
Automatic Workload Repository (AWR)
A built-in repository in every Oracle Database. At regular intervals, the Oracle Database makes a snapshot of all its vital
statistics and workload information and stores them here.
Background Process
Background processes consolidate functions that would otherwise be handled by multiple Oracle programs running for
each user process. The background processes asynchronously perform I/O and monitor other Oracle processes to
provide increased parallelism for better performance and reliability.
Buffer Cache
The portion of the SGA that holds copies of Oracle data blocks. All user processes concurrently connected to the instance
share access to the buffer cache. The buffers in the cache are organized in two lists: the dirty list and the least recently
used (LRU) list. The dirty list holds dirty buffers, which contain data that has been modified but has not yet been written to
disk. The least recently used (LRU) list holds free buffers (unmodified and available), pinned buffers (currently being
accessed), and dirty buffers that have not yet been moved to the dirty list.
Byte Semantics
The length of string is measured in bytes.
Cache Recovery
The part of instance recovery where Oracle applies all committed and uncommitted changes in the redo log files to the
affected data blocks. Also known as the rolling forward phase of instance recovery.
Character Semantics
The length of string is measured in characters.
Checkpoint
A data structure that defines an SCN in the redo thread of a database. Checkpoints are recorded in the control file and
each datafile header, and are a crucial element of recovery.
Client
In client/server architecture, the front-end database application, which interacts with a user through the keyboard, display,
and pointing device such as a mouse. The client portion has no data access responsibilities. It concentrates on
requesting, processing, and presenting data managed by the server portion.
Client/Server architecture
Software architecture based on a separation of processing between two CPUs, one acting as the client in the transaction,
requesting and receiving services, and the other as the server that provides services in a transaction.
Cluster
Optional structure for storing table data. Clusters are groups of one or more tables physically stored together because
they share common columns and are often used together. Because related rows are physically stored together, disk
access time improves.
Concurrency
Simultaneous access of the same data by many users. A multi-user database management system must provide
adequate concurrency controls, so that data cannot be updated or changed improperly, compromising data integrity.
Connection
Communication pathway between a user process and an Oracle instance.
Database
Collection of data that is treated as a unit. The purpose of a database is to store and retrieve related information.
Database Buffer
One of several types of memory structures that stores information within the system global area. Database buffers store
the most recently used blocks of data.
Database Buffer Cache
Memory structure in the system global area that stores the most recently used blocks of data.
Database Link
A named schema object that describes a path from one database to another. Database links are implicitly used when a
reference is made to a global object name in a distributed database.
Data Block
Smallest logical unit of data storage in an Oracle database. Also called logical blocks, Oracle blocks, or pages. One data
block corresponds to a specific number of bytes of physical database space on disk.
Data Integrity
Business rules that dictate the standards for acceptable data. These rules are applied to a database by using integrity
constraints and triggers to prevent the entry of invalid information into tables.
Data Segment
Each nonclustered table has a data segment. All of the table’s data is stored in the extents of its data segment. For a
partitioned table, each partition has a data segment. Each cluster has a data segment. The data of every table in the
cluster is stored in the cluster’s data segment.
Dedicated Server
A database server configuration in which a server process handles requests for a single user process.
Define Variables
Variables defined (location, size, and datatype) to receive each fetched value.
Disk Group
One or more Automatic Storage Management disks managed as a logical unit. Automatic Storage Management disks can
be added or dropped from a disk group while preserving the contents of the files in the group, and with only a minimal
amount of automatically initiated I/O required to redistribute the data evenly. All I/O to a disk group is automatically spread
across all the disks in the group.
Distributed Processing
Software architecture that uses more than one computer to divide the processing for a set of related jobs. Distributed
processing reduces the processing load on a single computer.
DDL
Data definition language. Includes statements like CREATE/ALTER TABLE/INDEX, which define or change data
structure.
DML
Data manipulation language. Includes statements like INSERT, UPDATE, and DELETE, which change data in tables.
DOP
The degree of parallelism of an operation.
Enterprise Manager
An Oracle system management tool that provides an integrated solution for centrally managing our heterogeneous
environment. It combines a graphical console, Oracle Management Servers, Oracle Intelligent Agents, common services,
and administrative tools for managing Oracle products.
Extent
Second level of logical database storage. An extent is a specific number of contiguous data blocks allocated for storing a
specific type of information.
Failure Group
Administratively assigned sets of disks that share a common resource whose failure must be tolerated. Failure groups are
used to determine which Automatic Storage Management disks to use for storing redundant copies of data.
Indextype
An object that registers a new indexing scheme by specifying the set of supported operators and routines that manage a
domain index.
Index Segment
Each index has an index segment that stores all of its data. For a partitioned index, each partition has an index segment.
Integrity Constraint
Declarative method of defining a rule for a column of a table. Integrity constraints enforce the business rules associated
with a database and prevent the entry of invalid information into tables.
Logical Structures
Logical structures of an Oracle database include tablespaces, schema objects, data blocks, extents, and segments.
Because the physical and logical structures are separate, the physical storage of data can be managed without affecting
the access to logical storage structures.
LogMiner
A utility that lets administrators use SQL to read, analyze, and interpret log files. It can view any redo log file, online or
archived. The Oracle Enterprise Manager application Oracle LogMiner Viewer adds a GUI-based interface.
Mean Time To Recover (MTTR)
The desired time required to perform instance or media recovery on the database. For example, we may set 10 minutes
as the goal for media recovery from a disk failure. A variety of factors influence MTTR for media recovery, including the
speed of detection, the type of method used to perform media recovery, and the size of the database.
Mounted Database
An instance that is started and has the control file associated with the database open. We can mount a database without
opening it; typically, we put the database in this state for maintenance or for restore and recovery operations.
Object Type
An object type consists of two parts: a spec and a body. The type body always depends on its type spec.
Operator
In memory management, the term operator refers to a data flow operator, such as a sort, hash join, or bitmap merge.
Oracle XA
The Oracle XA library is an external interface that allows global transactions to be coordinated by a transaction manager
other than the Oracle database server.
Partition
A smaller and more manageable piece of a table or index.
Priority Inversion
Priority inversion occurs when a high priority job is run with a lower amount of resources than a low priority job. Thus the
expected priority is "inverted."
Query Block
A self-contained DML against a table. A query block can be a top-level DML or a subquery.
Real Application Clusters (RAC)
Option that allows multiple concurrent instances to share a single physical database.
Recovery Manager (RMAN)
A utility that backs up, restores, and recovers Oracle databases. We can use it with or without the central information
repository called a recovery catalog. If we do not use a recovery catalog, RMAN uses the database's control file to store
information necessary for backup and recovery operations. We can use RMAN in conjunction with a media manager to
back up files to tertiary storage.
Redo Thread
The redo generated by an instance. If the database runs in a single instance configuration, then the database has only
one thread of redo. If we run in an Oracle Real Application Clusters configuration, then we have multiple redo threads, one
for each instance.
Schema
Collection of database objects, including logical structures such as tables, views, sequences, stored procedures,
synonyms, indexes, clusters, and database links. A schema has the name of the user who controls it.
Segment
Third level of logical database storage. A segment is a set of extents, each of which has been allocated for a specific data
structure, and all of which are stored in the same tablespace.
Sequence
A sequence generates a serial list of unique numbers for numeric columns of a database’s tables.
Server
In a client/server architecture, the computer that runs Oracle software and handles the functions required for concurrent,
shared data access. The server receives and processes the SQL and PL/SQL statements that originate from client
applications.
Shared Server
A database server configuration that allows many user processes to share a small number of server processes,
minimizing the number of server processes and maximizing the use of available system resources.
Standby Database
A copy of a production database that we can use for disaster protection. We can update the standby database with
archived redo logs from the production database in order to keep it current. If a disaster destroys the production database,
we can activate the standby database and make it the new production database.
Subtype
In the hierarchy of user-defined datatypes, a subtype is always a dependent on its supertype.
Synonym
An alias for a table, view, materialized view, sequence, procedure, function, package, type, Java class schema object,
user-defined object type, or another synonym.
System Change Number (SCN)
A stamp that defines a committed version of a database at a point in time. Oracle assigns every committed transaction a
unique SCN.
System Global Area (SGA)
A group of shared memory structures that contain data and control information for one Oracle database instance. If
multiple users are concurrently connected to the same instance, then the data in the instance’s SGA is shared among the
users. Consequently, the SGA is sometimes referred to as the shared global area.
Unicode
A way of representing all the characters in all the languages in the world. Characters are defined as a sequence of
codepoints, a base codepoint followed by any number of surrogates. There are 64K codepoints.
Unicode column
A column of type NCHAR, NVARCHAR2, or NCLOB guaranteed to hold Unicode.
User process
User processes execute the application or Oracle tool code.
UTC
Coordinated Universal Time, previously called Greenwich Mean Time, or GMT.
View
A view is a custom-tailored presentation of the data in one or more tables. A view can also be thought of as a "stored
query." Views do not actually contain or store data; they derive their data from the tables on which they are based. Like
tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on
a view affect its base tables.

105. Oracle Certification Details


105.1. What is OCP?
Oracle and Sylvan Prometric developed Oracle Certification. OCP is a valuable industry recognized credential that
signifies a proven level of knowledge and ability. An Oracle Certified Professional establishes a standard of competence in
a specific job role.

105.2. What are the benefits from being certified?


 Valuable to hiring managers
 Added credibility (Excellent for contractors who sell narrow skill sets and must claim to be immediately
productive.)
 Increased job opportunities (OCP member’s site and other job sites). Depends on economy (job market).
 Use of Oracle certification program logo for endorsement of your proven skill by Oracle Corporation.
 Invaluable experience as a result of preparing
 New tricks and skills to add to your arsenal
 Looks good on resume (Receive certificate, logo, business card, and access to OCA and OCP websites)
 Special discounts and offers
 Exposure to features we haven’t used
Note: In order to get any Oracle Certified Professional (OCP) certificate, we have to do a course (like the Oracle 9i SQL
course) from one of the Oracle Certified Training Centers (SQL * Plus, CMC, etc.).

105.3. How to Emboss OCP Logo on your Resume


 Copy and paste the OCP logo in Header of your Resume.

105.4. Oracle 7.3 DBA


Upgrade Level - If you have completed Oracle 7.3 certification earlier, then take the upgrade exam (1 exam).
Upgrade from Oracle 7.3 to 8 DBA

Track 1Z0-010
Price Rs. 5,440/-
No. of Questions
Pass Mark
Exam Time
Discount Offered 40%

105.5. Oracle 8i DBA


Upgrade Level - If you have completed Oracle 8 certification earlier, then take the upgrade exam (1 exam).
Upgrade from Oracle 8 to 8i DBA

Track 1Z0-020
Price Rs. 5,440/-
No. of Questions
Pass Mark
Exam Time
Discount Offered 40%

105.6. Oracle 9i DBA


1. Associate Level - OCA (2 exams). You need to clear the following exams:

                  Introduction to Oracle 9i: SQL          Oracle 9i Database: Fundamentals I

Track             1Z0-007                                 1Z0-031
Price             Rs. 3,910/-                             Rs. 5,440/-
No. of Questions  52                                      63
Pass Mark         37                                      49
Exam Time         2 Hours                                 90 Min
Reg. Req.         Introduction to Oracle 9i SQL & PL/SQL  Introduction to Oracle 9i SQL & PL/SQL
Course Fee        Rs. 10,500/- (SQL * PLUS)               Rs. 10,500/- (SQL * PLUS)
Discount Offered  40%                                     40%

2. Professional Level - OCP (2 exams and one Oracle University hands-on course within the Oracle 9i DBA learning
path). In order to get the OCP certificate you have to clear the OCA papers. You need to clear the following exams:

Oracle 9i Database: Fundamentals II Oracle 9i Database: Performance Tuning

Track 1Z0-032 1Z0-033


Price Rs. 5,440/- Rs. 5,440/-
No. of Questions 63 59
Pass Mark 49 38
Exam Time 90 Min 90 Min
Reg. Req. Introduction to Oracle 9i SQL & PL/SQL Introduction to Oracle 9i SQL & PL/SQL
Course Fee Rs. 10,500/- (SQL * PLUS) Rs. 10,500/- (SQL * PLUS)
Discount Offered 40% 40%

3. Upgrade Level - If you do not want to take the hands-on course from Oracle you can clear the Oracle 8i track first
(5 exams) and then take the upgrade exam (1 exam).

                  Upgrade from Oracle 8i to 9i DBA   New Features for Oracle7.3 and Oracle 8 OCPs

Track             1Z0-030                            1Z0-035
Price             Rs. 5,440/-                        Rs. 5,440/-
No. of Questions  53                                 84
Pass Mark         37                                 58
Exam Time         90 Min                             2 Hours
Discount Offered  40%                                40%

105.7. Oracle 10g DBA


1. Associate Level - OCA (1 exam). You need to clear the following exam:
Oracle Database 10g: Administration I

Track 1Z0-042
Price Rs. 5,440/-
No. of Questions 77
Pass Mark 51
Exam Time 2 Hours
Reg. Req. Introduction to Oracle 9i SQL & PL/SQL
Course Fee Rs. 10,500/- (SQL * PLUS)
Discount Offered 40%

2. Professional Level - OCP (1 exam and one Oracle University hands-on course within the Oracle 10g DBA learning
path). In order to get the OCP certificate you have to clear the OCA papers. You need to clear the following exam:

Oracle Database 10g: Administration II

Track 1Z0-043
Price Rs. 5,440/-
No. of Questions 70
Pass Mark 46
Exam Time 90 Min
Reg. Req. Introduction to Oracle 9i SQL & PL/SQL
Course Fee Rs. 10,500/- (SQL * PLUS)
Discount Offered 40%

3. Upgrade Level - If you do not want to take the hands-on course from Oracle you can clear the Oracle 8i, 9i track
first (5 exams) and then take the upgrade exam (1 exam).

Upgrade from Oracle 9i to 10g DBA

Track 1Z0-040
Price Rs. 5,440/-
No. of Questions 61
Pass Mark 37
Exam Time 90 Min
Discount Offered 40%

105.8. Prometric Centers for OCP Exams


THOMSON PROMETRIC TESTING (P) LTD, Avenue 1 Street 20, Above SBI, Plot 1672 Road 12, Banjara Hills,
Hyderabad, AP - 500034. Phone: 2330-8504
NIIT LTD, III Floor, Prashanthi Complex, Basheerbagh, Hyderabad, AP - 500063. Phone: 5562-2249
CODE TECHNOLOGIES (Unisoft Franchise), Plot G-2, Megasree Clasics, Ward#6, Block-3, Dwarakapuri Colony,
Punjagutta, Hyderabad, AP - 500082. Phone: 5527-6727
SQL STAR INTERNATIONAL LTD, 4 Motilal Nehru Nagar, Begumpet, Hyderabad, AP - 500016. Phone: 2776-6501
JAITHRI TECHNOLOGIES PRIVATE LIMITED, 206 A, 2nd Floor, Minerva Complex, S.D. Road, Secunderabad,
AP - 500003. Phone: 5590-6179
CMC, Posnate Bhavan, Tilak Road, Hyderabad. Phone: 2475-0371
International Institute of Information Technology (IIIT), Gachibowli, Hyderabad, AP - 500032.
Phone: 2300-1416, 2300-1417

105.9. Website Links for Oracle OCP Dumps and Oracle FAQ’s
 www.certsbraindumps.com
 www.best-braindumps.com
 www.certificationking.com
 www.testking.com
 www.braindumps.com
 www.dbaclick.com
 www.selftestsoftware.com
 www.actualtests.com
 www.orafaq.com
 www.dbasupport.com

106. FAQs
1. Which of the following files is read to start the instance?
a. Controlfile b. Initialization Parameter file
c. Data files d. None
Answer: B
Explanation: Oracle reads the init.ora parameter file to start the instance.
2. Which of the following files is read when the database is mounted?
a. Controlfile b. Initialization Parameter file
c. Data files d. All of the above

Answer: A
Explanation: The controlfile is read while the database is being mounted.
3. Which of the following actions occur when we issue the STARTUP command at the SQL prompt?
a. Instance is started b. Database is mounted
c. Database is opened d. All of the above

Answer: D
Explanation: The STARTUP command performs all of the above actions.
4. What do dirty buffers comprise?
a. Buffers modified but written to disk b. Buffers not yet modified
c. Buffers only accessed for data d. Buffers modified but not yet written to disk

Answer: D
Explanation: Dirty buffers are modified buffers in the database buffer cache (SGA) which have not yet been written to disk.
5. Which init.ora parameter is used to size database buffer cache?
a. db_cache_buffers b. data_block_buffers
c. block_buffers d. None answer

Answer: D
Explanation: If we want to change the size of the database buffer cache we have to specify DB_CACHE_SIZE or
DB_BLOCK_BUFFERS; none of the listed options is correct.
6. What does the library cache consist of?
a. Holds parsed versions of executed SQL statements b. Holds compiled versions of PL/SQL program units
c. Both a and b d. Metadata

Answer: C
Explanation: It consists of both parsed versions of SQL statements and compiled PL/SQL program units.
7. How can we size the shared pool?
a. shared_pool_size b. db_shared_pool
c. set db_shared_pool d. alter database

Answer: A
Explanation: We have to specify SHARED_POOL_SIZE=<value> in the initialization parameter file.
8. What cannot be contents of Program global area?
a. Users program variables b. Users session information
c. Users own sql statements d. User defined Cursors

Answer: C
Explanation: The PGA contains program variables, session information and cursors, but not the user's own SQL statements.
9. What happens during process of checkpoint?
a. It is the event of recording redolog buffer entries onto the redolog files b. It is the event of recording the number of rollbacks and commits
c. It is the event of recording modified blocks in the database buffer cache onto the data files d. None of the above

Answer: C
Explanation: When a checkpoint occurs, it invokes DBWR to write the dirty blocks from the database buffer cache to the data files.
10. Which of the following is not a function of SMON?
a. Crash Recovery b. Clean up of temporary segments
c. Coalescing free space d. Taking care of background processes of the system

Answer: D
Explanation: SMON performs crash recovery, cleans up temporary segments and coalesces free space, but it does not take care of the background processes.
11. Which of the following file is read to start the Instance?
a. Controlfile b. Initialization Parameter file
c. Data files d. None

Answer: B
Explanation: Oracle reads the init.ora parameter file to start the instance.
12. Which of the following file is read when database is mounted?
a. Controlfile b. Initialization Parameter file
c. Data files d. All of the above

Answer: A
Explanation: The controlfile is read while the database is being mounted.
13. Which of the following actions occur if we issue the STARTUP command at the SQL prompt?
a. Instance is started b. Database is mounted
c. Database is opened d. All of the above

Answer: D
Explanation: The STARTUP command performs all of the above actions.
14. Which of the following can not be a part of System Global Area?
a. Database buffer cache b. Large Pool
c. Program global area d. Java Pool

Answer: C
Explanation: The PGA (program global area) is not part of the SGA; it is a separate memory structure.
15. What do dirty buffers comprise?
a. Buffers modified but written to disk b. Buffers not yet modified
c. Buffers only accessed for data d. Buffers modified but not yet written to disk

Answer: D
Explanation: Dirty buffers are modified buffers in the database buffer cache (SGA) which have not yet been written to disk.
16. Which init.ora parameter is used to size database buffer cache?
a. db_cache_buffers b. data_block_buffers
c. block_buffers d. None answer

Answer: D
Explanation: To change the size of the database buffer cache we have to specify db_cache_size or db_block_buffers, so none of the listed options is correct.
17. What does the library cache consist of?
a. Holds parsed versions of executed SQL statements b. Holds compiled versions of PL/SQL program units
c. Both a and b d. Metadata

Answer: C
Explanation: The library cache holds both parsed SQL statements and compiled PL/SQL program units.
18. How can we size shared pool?
a. shared_pool_size b. db_shared_pool
c. set db_shared_pool d. alter database

Answer: A
Explanation: We have to specify shared_pool_size=<value> in the initialization parameter file.
19. What cannot be contents of Program global area?
a. Users program variables b. Users session information
c. Users own sql statements d. User defined Cursors
Answer: C
Explanation: The PGA contains program variables, session information and cursors, but not the user's own SQL statements.
20. What happens during process of checkpoint?
a. It is the event of recording redolog buffer entries onto the redolog files b. It is the event of recording the number of rollbacks and commits
c. It is the event of recording modified blocks in the database buffer cache onto the data files d. None of the above
Answer: C
Explanation: When a checkpoint occurs, it invokes DBWR to write the dirty blocks from the database buffer cache to the data files.
21. Which of the following is not a function of SMON?
a. Crash Recovery b. Clean up of temporary segments
c. Coalescing free space d. Taking care of background processes of the system
Answer: D
Explanation: SMON performs crash recovery, cleans up temporary segments and coalesces free space, but it does not take care of the background processes.
22. What is the total number of base tables that get created in the SYS account?
a. 1000 b. 82
c. 1762 d. 100
Answer: C
Explanation: 1762 base tables are created in the SYS account.
23. What is the status of the database when we run the create database script (e.g. cr8demo.sql)?
a. Nomount b. mount
c. open d. shutdown
Answer: A
Explanation: The database must be in NOMOUNT status, because mounting a database requires a controlfile, which does not exist yet.
24. Which users are created automatically the moment the database is created?
a. sys,system b. sys,system,scott
c. scott d. no users get created by default
Answer: A
Explanation: The SYS and SYSTEM users are created when we create a database.
25. Which tablespace accommodates the base tables?
a. Undotbs b. temp
c. system d. user_data
Answer: C
Explanation: The base tables are created in the SYSTEM tablespace.
26. What is the default tablespace for the SYS user?
a. user_data b. system
c. temp d. undotbs
Answer: B
Explanation: Default tablespace for SYS user is SYSTEM.
27. Data Dictionary Views are Static?
a. True b. False
Answer: A
Explanation: The data dictionary views (DBA_*, ALL_*, USER_*) are static views.
28. Is the database creation successful with this command?
SQL> create database;
a. True b. False
Answer: A
Explanation: Yes; Oracle uses OMF (Oracle Managed Files) to create the controlfile and datafiles.
29. Which of the following is not true when a command 'SHUTDOWN NORMAL' is issued?
a. Database and redo buffers are written to disk b. The next startup does not require any instance recovery
c. New connections can be made d. Background processes are terminated
Answer: C
Explanation: When we issue the SHUTDOWN command, Oracle waits for connected users to disconnect but does not allow any new user to log in to the database.
30. Which of the following is not the content of parameter file?
a. Names and locations of controlfiles b. Information on UNDO segments
c. Names and locations of datafiles d. Allocations for memory structures of SGA
Answer: C
Explanation: The parameter file does not maintain the names and locations of the datafiles; those are maintained by the controlfile. The remaining options are contents of the parameter file.
31. Can we create a tablespace with multiple datafiles at a single stroke?
a. Yes b. No
Answer: A
Explanation: Yes. We can create a tablespace with multiple datafiles in one statement: SQL> create tablespace <tablespacename> datafile '<path of datafile 1>' size 2m, '<path of datafile 2>' size 3m; in this way we can specify multiple datafiles for one tablespace.
32. Can a datafile be associated with two different tablespaces
a. Yes b. No
Answer: B
Explanation: A datafile can be associated with only one tablespace, never more than one.
33. Suppose the database has a MAXDATAFILES limit of 80 and we want to add files above this limit. Which file do we need to modify?
a. Controlfile b. Init.ora
c. Alertfile d. None
Answer: A
Explanation: We have to recreate the controlfile with a larger MAXDATAFILES=<number> setting; only then will the datafile limit of the database change.
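A sketch of the procedure (the script name create_ctrl.sql and the value 200 are hypothetical):
SQL> alter database backup controlfile to trace;
-- edit the generated trace file in user_dump_dest and set MAXDATAFILES 200
SQL> startup nomount
SQL> @create_ctrl.sql -- the edited CREATE CONTROLFILE script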
34. Select the view which tells us about all the tablespaces in the database.
a. v$database b. dba_tables
c. v$tablespace d. dba_table_space
Answer: C
Explanation: The V$TABLESPACE view gives the tablespace details of a database.
35. Can we bring system tablespace offline when the database is up
a. Yes b. No
Answer: B
Explanation: We cannot take the SYSTEM tablespace offline because it contains the base tables.
36. What is default initial extent size when the tablespace is dictionary managed
a. 64k b. 20K
c. 10 blocks d. 5 blocks
Answer: D
Explanation: When we create a dictionary-managed tablespace, the default initial extent is 5 * <block_size>.
37. Which parameter should be added in init.ora file for creating tablespace with multiple blocksizes.
a. db_nk_cache_size=n b. block_size=n
c. multiple_blocks=n d. multiple_cache_size=n
Answer: A
Explanation: We have to add db_Nk_cache_size=<value>, where N is 2, 4, 8, 16 or 32. By specifying a value for a particular block size, Oracle allocates buffers in the database buffer cache for that block size; whenever we perform a transaction on a tablespace of that block size, those buffers are used.
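For instance, a sketch assuming a 16k block size (the tablespace name and sizes are illustrative):
db_16k_cache_size=16m # entry in init.ora
SQL> create tablespace ts_16k datafile '<path of datafile>' size 100m blocksize 16k;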
38. What is the value for the storage clause pctincrease when the tablespace extent management is local
(uniform)
a. 50% b. 0%
c. 100% d. 10%
Answer: B
Explanation: PCTINCREASE for locally managed tablespace is 0%.
39. What is the command that combines all the smaller contiguous free extents in the tablespace into one larger extent?
a. Merge b. sum
c. coalesce d. add extents
Answer: C
Explanation: COALESCE combines all the smaller contiguous free extents in the tablespace into one larger extent. MERGE and SUM are SQL constructs related to tables, and ADD EXTENTS is not a valid command.
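For example, assuming a dictionary-managed tablespace named user_data:
SQL> alter tablespace user_data coalesce;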
40. If the SYSTEM datafile is to be renamed, the database must be in which mode?
a. Nomount b. mount
c. open d. close
Answer: B
Explanation: To rename a datafile belonging to the SYSTEM tablespace, the database should be in the mount state; SYSTEM contains all the base tables, and when the database is open Oracle continuously updates the base tables even if we are not performing transactions.
41. After creating a tablespace what is the default value for segment space management in 9i?
a. Auto b. dictionary
c. local d. manual
Answer: D
Explanation: It is MANUAL in 9i; in Oracle 10g it is AUTO.
42. A tablespace was created with extent management as local. After that the tablespace extent management
was changed from local to dictionary. What would be the next extent size?
a. 64k b. 1m
c. 10k d. null
Answer: B
Explanation: It is 1m after the change.
g
Oracle 11 – FAQs Page 211 of 242
WK: 6 - Day: 5.2
43. If we create a tablespace with extent management dictionary and block size 8k, with default storage initial 10k, what value will be shown for initial_extent in dba_tablespaces after creating this tablespace?
a. 10k b. 64k
c. 40k d. 1m
Answer: C
Explanation: If extent management is dictionary, the database requires an initial extent size of at least (block_size * 5); here it is 8k * 5 = 40k.
44. Can we create a table with our own storage parameters (such as initial 300k next 300k minextents) on a tablespace whose extent management is local?
a. Yes b. No
Answer: A
Explanation: Yes, we can create it.
45. A locally managed tablespace is made offline; what is the status of the bytes column in dba_data_files?
a. It shows the orginal bytes b. It shows the null value
c. It shows the used value d. None
Answer: B
Explanation: It shows a NULL value.
46. Can we resize a datafile whose related tablespace is in offline mode?
a. No b. Yes
Answer: A
Explanation: No, we cannot.
47. A DBA changed a datafile's autoextend value to ON. What is the default value for the increment_by column located in dba_data_files?
a. 100m b. om
c. 10m d. 1m
Answer: D
Explanation: When we change a datafile to autoextend on, the value of the increment_by column in DBA_DATA_FILES is 1m by default, meaning that every time the datafile fills up completely, its size is increased by 1m.
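A sketch of enabling autoextend with an explicit increment (the sizes are illustrative):
SQL> alter database datafile '<path of datafile>' autoextend on next 10m maxsize 500m;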
48. Can we drop a object when the tablespace is in read only mode?
a. No b. Yes
Answer: B
Explanation: Yes, we can; dropping an object only updates the data dictionary, so it is allowed even when the tablespace is read-only.
49. We are trying to create a table with our own storage parameters in a locally managed tablespace. What happens?
a. Unable to create b. It gets created with the given storage parameters
c. It will create the table with the default storage parameters at the tablespace level d. None
Answer: C
Explanation: It will create the table with the default storage parameters defined at the tablespace level.
50. Extent deallocation for a segment is done when _______________
a. dropped, truncate b. delete, truncate
c. delete, alter d. none
Answer: A
Explanation: When we drop or truncate an object, the extents of that segment are deallocated.
51. What type of data is available in rollback segments
a. previous image b. post updated image

c. meta data d. no data


Answer: A
Explanation: The main purpose of rollback segments is to maintain the before (previous) image of data.
52. One of these is not the purpose of rollback segments
a. undo previous command b. read consistency
c. crash recovery d. backup support
Answer: D
Explanation: Backup support is not a function of rollback segment.
53. What is the default status of rollback segment the moment it is created
a. Offline b. online
c. deferred d. pending
Answer: A
Explanation: After a rollback segment is created, its default status is OFFLINE.
54. What is the storage parameter that is unique to rollback segments?
a. Initial b. dictionary
c. optimal d. shrink
Answer: C
Explanation: OPTIMAL is the storage parameter unique to rollback segments; it is used for shrinking.
55. Suppose a rollback segment is occupied by a transaction, and in the meantime the rollback segment is brought offline. At that moment, what is the status of that rollback segment?
a. Offline b. deferred
c. pending offline d. cannot be made offline
Answer: C
Explanation: When a rollback segment is supporting a transaction and we bring it offline in the meantime, its status becomes PENDING OFFLINE, because an active transaction is still going on in that rollback segment.
56. What does the high water mark size (HWM size) of a rollback segment state?
a. The maximum size the rollback segment has ever grown b. The optimal size of the rollback segment
c. The minimum size the rollback segment has ever been d. None
Answer: A
Explanation: The high water mark size indicates the maximum size the rollback segment has ever grown to in its lifetime.
57. Suppose the USERS tablespace, which has some open transactions, is brought offline, and later the user issues a commit. What is the status of the rollback segment at this stage?
a. deffered b. optimal
c. pending offline d. offline
Answer: A
Explanation: The status of the rollback segment will be DEFERRED.
To make rollback segments come online the moment the database is started, which file do we need to modify?
a. controlfile b. logfile
c. init.ora d. orapwd file
Answer: C
Explanation: We have to list the rollback segments in the init.ora parameter file so that they are brought online when the DB is started.
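For example, assuming two rollback segments named rbs1 and rbs2:
rollback_segments=(rbs1,rbs2) # entry in init.ora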
58. Can a rollback segment hold multiple entries
a. No b. Yes
Answer: B
Explanation: Yes, a rollback segment can hold multiple transaction entries; it works on a first-come, first-served basis.
59. Can we drop an undo tablespace which currently in use?
a. Yes b. No
Answer: B
Explanation: Oracle does not allow dropping an undo tablespace that is in use, because users other than SYS cannot use the SYSTEM rollback segment.
60. Can we create the permanent objects in default temporary tablespace of a DB
a. Yes b. No
Answer: B
Explanation: We cannot create any permanent object in any temporary tablespace.
61. Can we make a tempfile read only.
a. Yes b. No
Answer: B
Explanation: DBA cannot make any tempfile read only.
62. Through which of the following views can we find out the default temporary tablespace of a DB?
a. dba_temp_files b. v$tempfile
c. database_properties d. db_properties
e. None
Answer: C
Explanation: Through DATABASE_PROPERTIES we can find which tablespace is the default temporary tablespace of a database.
63. What is the extent_management value for the temporary tablespace created in 10g
a. Local b. Dictionary
c. System d. User
e. None
Answer: A
Explanation: It is LOCAL. The extent information of a dictionary-managed tablespace is stored in the data dictionary, whereas the extent information of a locally managed tablespace is stored locally in the same tablespace, which reduces the burden on the dictionary. If a temporary tablespace were dictionary managed, its burden would fall on the data dictionary.
64. What is the minimum size for a temporary file to be created?
a. 1030k b. 1040k
c. 1041 k d. 1031 k
e. 1050 k
Answer: C
Explanation: We can create a temporary file with a minimum size of 1041k.
65. What is the value for allocation_type column in dba_tablespaces view for temporary tablespace
a. SYSTEM b. LOCAL
c. USER d. UNIFORM
e. NONE
Answer: D
Explanation: Oracle allocates extents uniformly for a temporary tablespace.
66. Which of the following cmd is used to make the temporary TS as default temporary TS of a DB
a. SQL> alter database default temporary tablespace <tablespace_name> b. SQL> alter database default tablespace temporary <tablespace_name>
c. SQL> alter database temporary tablespace <tablespace_name> d. SQL> alter system set default temporary tablespace <tablespace_name>
e. None
Answer: A
Explanation: SQL> alter database default temporary tablespace <tablespace_name>; makes a tablespace the default temporary tablespace of the database.
67. Which of the following conditions should be met to convert a permanent tablespace into a temporary one?
a. Extent management local auto and the TS must be empty b. Extent management local uniform and the TS must be empty
c. Extent management dictionary and the TS must be empty d. None
Answer: C
Explanation: That tablespace should be dictionary managed and must be empty.
68. What is the command to convert a permanent tablespace into a temporary one?
a. SQL> alter database tablespace <tablespace_name> temporary; b. SQL> alter tablespace <tablespace_name> temporary;
c. SQL> alter tablespace permanent <tablespace_name> temporary; d. SQL> alter database permanent <tablespace_name> temporary;
e. None
Answer: B
Explanation: alter tablespace <tablespace_name> temporary; this is possible only for a dictionary-managed tablespace, and it must be empty.
69. Can we create a temporary tablespace with "SEGMENT SPACE MANAGEMENT AUTO"?
a. Yes b. No
Answer: B
Explanation: No, we cannot create a temporary tablespace with segment space management auto.
70. We create a user without mentioning the default tablespace clause. Then, by default, which tablespace is allocated to that user?
a. System b. user_data
c. temp d. SYSAUX
e. Default TS for DB
Answer: E
Explanation: From 10g onwards, the default tablespace of the database is assigned to the user. In 9i it is the SYSTEM tablespace.
71. One user granted select on <table> to another user with grant option; this second user then granted the same privilege to a third user. Later the first user revoked the privilege from the second user. Can the third user still use the privilege he was granted?
a. No b. Yes
Answer: A
Explanation: No, he cannot; revoking an object privilege cascades to anyone the grantee passed it on to.
72. A DBA created a role with some privileges and assigned the role to users. Later he wants to revoke one privilege from those users. How?

a. revoke from <username> b. revoke from <username,username,....>


c. revoke from <role> d. We can't
Answer: C
Explanation: We have to revoke the privilege from the role. We cannot revoke directly from the user.
73. A DBA created a profile and assigned it to users. Which parameter do we need to set in init.ora for the profile to take effect?

a. timed_statistics=true b. resource_limits=true
c. resource_limit=true d. none
Answer: C
Explanation: RESOURCE_LIMIT=TRUE has to be set in the init.ora file so that the profile takes effect on the users.
74. Which privilege is necessary for a normal user to change his password?
a. create any table b. create session
c. alter user d. alter any user
Answer: B
Explanation: A user requires only the CREATE SESSION privilege to change his own password, because he owns his whole schema.
75. How to manually lock user account?
a. user <username> account lock b. alter user <username> account lock
c. alter user <username> identified by <new password> d. none
Answer: B
Explanation: Alter user <username> account lock;
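For example, assuming a user named scott:
SQL> alter user scott account lock;
SQL> alter user scott account unlock; -- to release the lock again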
76. From which view user can see his privileges?
a. user_role_privs b. dba_sys_privs
c. session_privs d. role_role_privs
Answer: C
Explanation: session_privs will show the privileges for that user.
77. One user has quota on two tablespaces. Can he create his tables in a tablespace other than his default tablespace?
a. No b. Yes
Answer: B
Explanation: Yes; if a user has quota on different tablespaces, he can create his own objects in any of those tablespaces.
78. The SYSTEM user granted DBA to a normal user. Can this user now revoke DBA from SYSTEM?
a. Yes b. No
Answer: A
Explanation: Yes, the user can revoke the DBA role even from the user who granted it, because the DBA role itself carries the privilege to grant and revoke roles.
79. In which directory do we create the password file for SYS?
a. $HOME b. $ORACLE_HOME/rdbms/admin
c. $ORACLE_HOME/dbs d. $ORACLE_HOME/sqlplus/admin
Answer: C
Explanation: We have to create the password file in the $ORACLE_HOME/dbs directory; only then will Oracle read that password file.
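A sketch using the orapwd utility (the SID ORCL and the password are illustrative):
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=oracle entries=5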
80. What is the default location of listener.ora file?
a. $ORACLE_HOME/rdbms/admin b. $ORACLE_HOME/dbs
c. $ORACLE_HOME/network/admin/samples d. $ORACLE_HOME/network/tools/samples
Answer: C
Explanation: By default, the listener.ora file is available in $ORACLE_HOME/network/admin/samples.
81. The Listener service is stopped after giving a connection to a client. What is the status of client?
a. Connection will be lost b. Connection will be continued, giving an error message
c. The client session hangs d. Connection will be continued without any messages
Answer: D
Explanation: The connection continues without any message, because the listener is needed only to establish the connection; once the session has been handed to a server process it no longer depends on the listener.
82. What is the command to Start the Listener for a particular parameters set?
a. lsnrctl reload <listenername> b. lsnrctl start <listenername>
c. tnsping <tnsname> start d. lsnrctl startall;
Answer: B
Explanation: LSNRCTL START <listenername> starts the listener. lsnrctl reload is used when we add more services in listener.ora. 'lsnrctl start all' does not work because ALL is not a listener name; every listener has a different name. The tnsping utility is not for the listener.
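For example, assuming a listener named LISTENER:
$ lsnrctl start LISTENER
$ lsnrctl reload LISTENER # after adding services in listener.ora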
83. Can we start the listener service for a database which is not yet started/opened?
a. Yes b. No [minimum the database should be in mount state]
c. No [first the database should be opened] d. None of the above
Answer: A
Explanation: The listener is independent of the database; that is why we can start/stop the listener without the database as well.
84. In which file do we set this parameter FAILOVER = ON to use failover server option of Oracle Networking?
a. init.ora file b. tnsnames.ora
c. listener.ora d. both listener.ora & tnsnames.ora
e. controlfile
Answer: B
Explanation: In TNSNAMES.ORA file we have to specify FAILOVER=ON to use failover server option of Oracle
networking.
85. Can we start multiple database services within one listener service?
a. No b. Yes
c. Yes & its Only in Oracle9i
Answer: B
Explanation: Yes, we can start any number of services with one listener.
86. Can I have multiple listeners for a single database?
a. Yes b. No
c. Yes & its only in Oracle9i
Answer: A
Explanation: Yes, we can configure any number of listeners for one database.
87. What is the view do we query to find out the users who are connected using oracle networking?
a. DBA_USERS b. DBA_NET_INFO
c. V$SESSION d. DBA_CLIENT_INFO
Answer: C
Explanation: We can query V$SESSION to find all the information about users who are logged in to the database: from where a user logged in, at what time he logged in, and so on.
88. After some modifications to listener file, How can I refresh the already running listener service without
stopping it?
a. lsnrctl start <listenername> b. lsnrctl reload <listenername>
c. lsnrctl status <listenername> d. lsnrctl restart <listenername>
Answer: B
Explanation: We can use the reload option to refresh an already running listener.
89. Which operations we can perform using network connections?

a. DML b. DDL
c. A&B d. Only DML's
Answer: C
Explanation: We can perform both DML and DDL operations over a network connection, because Oracle networking logs us directly into the user's schema.
90. Which background process is needed to create materialized view?
a. ckpt & cjq0 b. lgwr & reco
c. dbw0 & reco d. reco & cjq0
Answer: D
Explanation: The cjq0 process is required to refresh materialized views, and reco is required to maintain distributed transactions between databases.
91. Which parameter do we use to start the reco process?
a. job_queue_process b. reco_processes
c. distributed_transactions d. global_names
Answer: C
Explanation: The DISTRIBUTED_TRANSACTIONS parameter is responsible for distributed transactions. From 9i onwards reco is a mandatory background process, so Oracle deprecated this parameter.
92. Is it mandatory to put the parameter global_names=true for creating database links?
a. Yes b. No
Answer: B
Explanation: It is not mandatory to set global_names=true for creating database links; this parameter has to be set only when we are creating global database links.
93. For creating database links is it necessary to put some value for distributed_transactions parameter?
a. No [not required at client] b. Yes [needed only at client]
c. Yes [needed at both client and server] d. Yes [needed only at server]
Answer: B
Explanation: It is required only at the client side.
94. Which background process will refresh the materialized view on a given refresh interval?
a. cjq0 b. reco
c. arc0 d. ckpt
Answer: A
Explanation: CJQ0 background process will refresh the materialized view after every refresh interval.
95. Can we do any DML operations on materialized view?
a. Yes [only it is not possible with the refresh fast option] b. Yes [only with the refresh fast option]
c. No d. Yes
Answer: C
Explanation: No; a materialized view is read-only, so we cannot perform any DML operations on materialized views.
96. How many refresh options do we have for creating materialized view?
a. 2 b. 3
c. 1 d. 4
Answer: B
Explanation: We have only three refresh options for creating a materialized view (COMPLETE, FAST, FORCE).
97. Can we manually refresh any materialized view?
a. Yes b. No
Answer: A
Explanation: Yes, we can refresh a materialized view manually using the DBMS_MVIEW package.
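A sketch of a manual refresh, assuming a materialized view named EMP_MV ('C' requests a complete refresh, 'F' a fast one):
SQL> exec dbms_mview.refresh('EMP_MV','C');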
98. What is the segment type for a materialized view?
a. View b. table
c. materialized view d. synonym
Answer: B
Explanation: When we create a materialized view, it is a local table in the database, so the segment type of a materialized view is TABLE.
99. What is the syntax to drop materialized view?
a. SQL> DROP VIEW <view_name>; b. SQL> DROP TABLE <mview_name> cascade;
c. SQL> DROP MATERIALIZED VIEW <mview_name>; d. SQL> DROP <mview_name> cascade;
Answer: C
Explanation: SQL> DROP MATERIALIZED VIEW <mview_name>; the other options are not valid for materialized views.
100. What is the status of the tablespace when it is getting exported (transportable tablespace)?
a. read write b. read only
c. offline d. pending offline
Answer: B
Explanation: Since the data should not be manipulated while it is being exported, the tablespace must be in read-only mode.
101. What should be the status of the database when we are taking a full database backup?
a. Mount b. shutdown
c. open d. nomount
Answer: C
Explanation: In the open state the objects' definitions and data are available, while in the other states they are not; that is why the database must be in open mode.
102.What does compress parameter mean
a. Compress all the extents of a table and make one single big extent b. Compress the data of a tablespace
c. Compress the datafiles d. Change data to binary mode
Answer: A
Explanation: By default the compress parameter value is 'Y'; if we do not want to compress all the extents into a single extent, we can mention compress=N while taking the export.
103.One of these levels is NOT supported by export utility
a. table level b. schema level
c. database level d. block level
Answer: D
Explanation: exp is a logical backup utility; it does not support block-level backup.
104.What does volsize parameter in exports mean
a. Volume of database exported b. Number of bytes to write to each tape volume
c. The maximum volume of data we can export at a single stroke d. The minimum volume of data we can export at a single stroke
Answer: B
Explanation: The volsize parameter gives the number of bytes that can be written to each tape volume.
105. Suppose we have taken a full database export and now want to import it into a non-Oracle database. Can we do it?

a. Yes b. No
c. Possible, but we need to use the sqlloader utility d. Yes, if that database is also of the same size
Answer: B
Explanation: No, it is not possible, because in the dump file the object definitions and data are in Oracle's own format.
106. What does the feedback parameter in exports state?
a. It states whether the export is successful or unsuccessful b. It gives us feedback after every 'n' rows are exported
c. Such a parameter is not available d. It states whether constraints are exported or skipped
Answer: B
Explanation: If we give a value for the feedback parameter, export gives feedback after that many rows are exported; the default value is 0.
107. Can we export a table's structure but not its records?
a. Yes it is possible b. No it is not possible
Answer: A
Explanation: We can export only a table's structure by giving the parameter rows=N.
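For example, a sketch of exporting only the structure of the demo table emp (the credentials and file name are illustrative):
$ exp system/<password> tables=emp rows=N file=emp_ddl.dmp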
108. What is the purpose of the destroy parameter while importing?
a. Destroys the previous export and creates a fresh export b. Destroys all the constraints on a table
c. Destroys all the indexes associated with a table d. Destroys the datafile and recreates the datafile
Answer: D
Explanation: If we give destroy=y, import destroys the existing datafiles and recreates them.
109. Is it possible to export a single partition of a table?
a. Yes b. No
Answer: A
Explanation: Yes, by giving the parameter tables=(<table_name>:<partition_name>)
110. Which Oracle utility is faster for downloading data into a dumpfile?
a. Exp b. expdp
c. sqlldr d. all of the above
Answer: B
Explanation: With expdp we can start multiple processes to download the data by specifying PARALLEL=<value>.
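A sketch of a parallel export (the directory object, file template and degree are illustrative):
$ expdp system/<password> full=y parallel=4 directory=dpump_dir dumpfile=full%U.dmp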
111. Which role is required for performing a FULL Data Pump export?
a. EXP_FULL_DATABASE b. DBA
c. A or B d. Both A and B
e. None
Answer: C
Explanation: Either the DBA or the EXP_FULL_DATABASE role is required to take a full export of the database.
112. In which schema is the master table created when performing expdp?
a. SYS b. SYSTEM
c. The schema through which the utility is invoked d. No table will be created
e. None
Answer: C
Explanation: The master table is created in the schema of the user through which the expdp utility is invoked, and after completion of the job the master table is deleted from that schema.
113. Which parameter of expdp should we use when a particular tablespace needs to be backed up?
a. TABLESPACES b. TABLESPACE
c. TRANSPORT_TABLESPACE d. Both A and C
Answer: A
Explanation: TABLESPACE is not a valid parameter, and TRANSPORT_TABLESPACE is used when we are transporting a tablespace. For exporting one tablespace we have to use TABLESPACES=<tablespacename>.
114. Which parameter should we use to speed up expdp?
a. PROCESSES b. PARALLEL
c. THREADS d. None
Answer: B
Explanation: To speed up expdp we should use PARALLEL. PROCESSES and THREADS are not valid parameters in expdp.
115. Can we interrupt an expdp process?
a. Yes b. No
Answer: A
Explanation: Yes, we can. This is one of the main advantages of expdp over the traditional exp/imp.
116. Which two PL/SQL packages are used by Oracle Data Pump?
a. UTL_DATADUMP b. DBMS_METADATA
c. DBMS_DATAPUMP d. UTL_FILE
e. DBMS_SQL
Answer: B, C
Explanation: Oracle provides DBMS_METADATA and DBMS_DATAPUMP for the Oracle Data Pump utility. With the help of these packages we can start, stop and view Data Pump jobs.
117. Which command-line parameter of the expdp and impdp clients connects us to an existing job?
a. CONNECT_CLIENT b. CONTINUE_CLIENT
c. APPEND d. ATTACH
Answer: D
Explanation: CONTINUE_CLIENT is for starting the job; ATTACH attaches to an existing job; the remaining options are not valid parameters in expdp.
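For example, to attach to a running job (the job name shown is hypothetical):
$ expdp system/<password> attach=SYS_EXPORT_FULL_01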
118. Which parameter is not valid for the impdp client?
a. REMAP_TABLE b. REMAP_SCHEMA
c. REMAP_TABLESPACE d. REMAP_DATAFILE
e. None
Answer: A
Explanation: REMAP_TABLE is not a valid parameter in impdp; all the others are valid. To see all the valid parameters we can specify with impdp, run $ impdp help=y.
119. Users are experiencing delays in query response time in a database application. Which area should we look
at first to resolve the problem?
a. Memory b. SGA
c. SQL statements d. I/O
Answer: C
Explanation: Inefficient SQL coding may cause bottlenecks in application performance.
120. Which of the following is a measurable tuning goal that can be used to evaluate system performance?
a. Number of concurrent users b. Database size
c. Making the system run faster d. Database hit percentages
Answer: D
Explanation: If the overall database hit percentage is more than 85%, we can say the performance of the DB is good.
121. Performance has degraded on the system, and we discover that paging and swapping are occurring. What is a possible cause of this problem?
a. The SGA is too small b. PGA is too large
c. SGA is too large d. None
Answer: B
Explanation: If the PGA size is too large, users are still able to communicate with the server, but performance degrades when enough space is not available in RAM; to support the user transactions, the system starts swapping and paging.
122. Which facility of Oracle is used to format the output of SQL trace?
a. Analyze b. tkprof
c. explain plan d. server manager
Answer: B
Explanation: tkprof is the utility by which we can convert an Oracle trace file into a readable ASCII file.
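A sketch of formatting a raw trace file (the file names are illustrative):
$ tkprof orcl_ora_1234.trc report.txt sys=no sort=exeela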
123.Which role is needed for a user to run autotrace facility?
a. Plustrace b. plusrole
c. plusgrant d. none
Answer: A
Explanation: To run autotrace, the user should have the PLUSTRACE role.
124.Which script creates plan_table?
a. utlexcpt.sql b. utlmontr.sql
c. utlsidsx.sql d. utlxplan.sql
Answer: D
Explanation: To create a plan table we have to execute utlxplan.sql from the $ORACLE_HOME/rdbms/admin directory.
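A sketch of creating and using the plan table (emp is the usual demo table):
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL> explain plan for select * from emp;
SQL> select * from table(dbms_xplan.display);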
125. Which tkprof option is used to include recursive SQL statements?
a. SYS b. record
c. sort d. insert
Answer: A
Explanation: If we specify SYS=Y, tkprof also includes the recursive SQL statements in the report.
126.We query v$librarycache view. Which column contains executions of an item stored in library cache?
a. Reloads b. Pins
c. Invalidations d. Gets
Answer: B
Explanation: In the v$librarycache view, the PINS column contains the executions of an item.
127.Which change would we make if we wanted to decrease the number of disk sorts?
a. Increase sort_area_retained_size b. Decrease sort_Area_size
c. Increase sort_area_size d. Decrease sort_Area_retained_size
Answer: C
Explanation: We have to increase the value of sort_area_size to decrease the number of disk sorts.
128. What happens in an MTS-configured environment?
a. A server process is allocated for a client process b. A server process is allocated for many client processes
c. A server process is deallocated for a client process d. None
Answer: B
Explanation: One server process is allocated for many client processes; it responds to requests on a first-come, first-served basis.
129. Which part of the SGA becomes mandatory if we configure MTS?
a. Java pool b. Shared pool
c. Database buffer cache d. Large pool
Answer: D
Explanation: While configuring MTS we set large_pool_size=<value>, because the user global area is then created in the large pool; otherwise Oracle uses the shared pool for creating the user global area.
130. Where does Oracle store user session and cursor state information in an MTS environment?
a. User global area (UGA) b. Program global area (PGA)
c. System global area (SGA) d. None
Answer: A
Explanation: The user global area (UGA) stores user session and cursor state information in MTS. The SGA is for the database, and the PGA is used when we are using a dedicated server.
131. Which of the following options of the DISPATCHERS parameter of init.ora would we consider while starting dispatchers for MTS? (choose two)
a. Dispatchers=3 b. Protocol=tcp
c. Port=1768 d. All
Answer: A, B
Explanation: We can start dispatchers for different protocols, so it is mandatory to specify the protocol and the number of dispatchers.
132. MTS is configured in our database and we want to use a dedicated connection. Which of the following clauses will we use in the tnsnames.ora file?
a. server=shared b. srvr=dedicated
c. client=dedicated d. All
Answer: B
Explanation: We have to specify srvr=dedicated to request a dedicated server connection.
133. Which of the following init.ora parameters tells the database on which listener process it should register for database connection requests?
a. LOCAL_LISTENER b. SHARED_SERVERS
c. MAX_SHARED_SERVERS d. Not possible
Answer: A
Explanation: LOCAL_LISTENER tells the database which listener listens for its connection requests. SHARED_SERVERS is for starting shared server processes, and MAX_SHARED_SERVERS sets the maximum number of shared server processes we can start for the database.
134.Which of following is true for MTS?
a. Start the listener and then start the database b. Start the database and then start the listener
c. Start the database and listener simultaneously d. None
Answer: A
Explanation: We have to start the listener first and then the database, because we specified the LOCAL_LISTENER parameter in the init.ora file; in it we specified the address of the same listener that is defined in the listener.ora file.
135. Which of the following decides how many dispatchers our MTS server shall have?
a. MTS_DISPATCHERS b. SHARED_SERVERS
c. LOCAL_LISTENER d. None
Answer: A
Explanation: MTS_DISPATCHERS decides how many dispatchers the MTS server shall have; this depends on how busy the dispatchers are.
136. Which of the following views provides information about dispatcher processes, such as name and network address?
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: D
Explanation: V$DISPATCHER will provide an information about dispatchers we have started and the network
address etc. Other are for v$sga for sga infomation,v$queue for message queues information, and
dispatcher_rate will show the infomation about how frequntly perticular dispatchers processing a requests.
137.Which of the following provide statistics for the dispatcher processes?
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: C
Explanation: V$DISPATCHER_RATE will provide the dispatcher statistics.v$dispatcher_rate will show the
infomation about how frequntly perticular dispatchers processing a requests.
138.Which of the following contains information about shared server message queues?
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: A
Explanation: V$QUEUE contains the information about the shared server message queues.
139.Which of the following view provides information about connections to the database through dispatchers?
a. v$queue b. v$circuit
c. v$dispatcher_rate d. v$dispatcher
Answer: B
Explanation: V$CIRCUIT provides the information about all the connections made to the database through dispatchers.
140. How many types of partitions can we create?
a. 3 b. 4
c. 2 d. unlimited
Answer: B
Explanation: There are 4 partitioning methods: range, hash, list and composite.
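A minimal range-partitioning sketch (the table, column and bounds are illustrative):
SQL> create table sales (sale_id number, sale_date date)
     partition by range (sale_date)
     (partition p1 values less than (to_date('01-JAN-2009','DD-MON-YYYY')),
      partition p2 values less than (maxvalue));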
141. We are trying to create a partition based on range. Can we create the subpartition also on range?
a. Yes b. No
Answer: B
Explanation: On a range-partitioned table we can create subpartitions using hash, not range.
142. Which of the following statements is true?
a. SQL> update <table> set <column>=<value> partition <partition name>; b. SQL> update <table> set <column>=<value> where deptno in (select <column> from <table> partition (<partition name>));
c. SQL> update <table> partition (<partition name>) set <column>=<value>; d. None of the above
Answer: B
Explanation: We cannot directly address a partition's values in an UPDATE; both a and c use wrong syntax. If we want to modify the values of one partition, we have to use a subquery.
143.Can we create all partitions on the same tablespace?
a. Yes b. No
Answer: A
Explanation: Specifying different tablespace for each partition is optional. We can have all the partitions in a
single tablespace.
144. A user is trying to create a table with four (4) partitions. He specified different tablespaces for the first three partitions but did not specify any tablespace for the last partition. Which tablespace will it take?
a. Third partition tablespace b. User default tablespace
c. System tablespace d. One of the three partitioned tablespaces
Answer: B
Explanation: If we don't specify any tablespace for a partition, it takes the user's default tablespace.
145.How many types of indexes we can create on partition table?
a. 3 b. 5
c. 7 d. 4
Answer: D
Explanation: We can create four types of indexes on a partitioned table: global, local, local prefixed and local non-prefixed.
146.Can we create partitioned index on non-partitioned table?
a. Yes b. No
Answer: A
Explanation: We can create a partitioned index on a non-partitioned table, or a non-partitioned index on a partitioned table.
147.Can we create subpartition after creating the partition?
a. Yes b. No
Answer: B
Explanation: Once we create a table we cannot change it into a partitioned table; in the same way, once we create a partitioned table we cannot later add subpartitions to it.
148. Can we create a local index by giving our own range for the partitions?
a. Yes b. No
Answer: B
Explanation: While creating a local index, Oracle follows the same number of partitions and ranges as the table; we cannot specify a range ourselves.
149.Can we create partitioned index with subpartition?
a. Yes b. No
Answer: A
Explanation: Yes, we can create a partitioned index with subpartitions.
150. Identify the new partitioning method available for global indexes.
a. Range Partitioned b. Range-hash partitioned
c. Hash Partitioned d. List-hash Partitioned
e. None
Answer: C
Explanation: From 9i onwards we can create a hash-partitioned global index on any table.
151.What is the main reason to create a reverse-key index on a column?
a. Column is populated using a sequence. b. Column contains many different values.
c. Column is mainly used for value range scans. d. Column implements an inverted list attribute.
Answer: A
Explanation: If a sequence is used to insert values into a column of a table, a reverse-key index gives the best performance, because it maintains the reversed column values and so spreads the inserts across the index.
152.What are the two main benefits of index-organized tables?(Choose two)
a. More concurrency. b. Faster full table scan.
c. Fast primary key based access. d. Less contention in the segment header.
e. No duplication of primary key value storage.
Answer: C, E
Explanation: An index-organized table can be created only on a table having a primary key; without a primary key we cannot create index-organized tables.
153.Which three statements about rebuilding indexes are true?(Choose three)
a. The ALTER INDEX REBUILD command is used to change the storage characteristics of an index. b. Using ALTER INDEX REBUILD is usually faster than dropping and recreating an index because it uses the fast full scan feature.
c. Oracle allows the creation of an index, or the recreation of an existing index, while allowing concurrent operations on the base table. d. When building an index, the NOLOGGING and UNRECOVERABLE keywords can be used together to reduce the time it takes to rebuild.
Answer: A, B, C
Explanation: The NOLOGGING and UNRECOVERABLE keywords cannot be used together to reduce the time to rebuild an index, so option d is false.
154.What is the main reason for a 'row overflow area' when creating index-organized tables?
a. Avoid row chaining and migration. b. Keep the B-tree structure densely clustered.
c. Speed up full table scans and fast full index scans. d. Improve performance when the index-organized table is clustered.
Answer: B
Explanation: While creating index-organized tables, our goal should be to store the maximum number of rows in a block.
155.Which type of index should be created to spread the distribution of an index across the index tree?
a. B-tree indexes. b. Bitmap indexes.
c. Reverse-key indexes. d. Function-based indexes.
Answer: A
Explanation: As the name specifies, a B-tree index maintains index values in the form of a tree; the other index types do not maintain values in the form of a tree.
156.Which three things can the ALTER INDEX REBUILD command accomplish?
a. Convert a bitmap index to a B-tree index b. Move the index to a different tablespace
c. Change the storage parameters for the index d. Rebuild a reverse index from an existing B-tree index
Answer: B, C, D
Explanation: The REBUILD command cannot convert a bitmap index to a B-tree index; it can move the index, change its storage parameters, or rebuild a reverse-key index from an existing B-tree index.
157. A company's DBA has issued the following statement to validate and check an index: ANALYZE INDEX <index_name> VALIDATE STRUCTURE. From which of the following views can he obtain information about the index?
a. DBA_INDEXES b. INDEX_STATS
c. DBA_TAB_INDEXES d. DBA_INDEX_STATS
e. DBA_COL_INDEXES
Answer: B
Explanation: To check the validity of an index we look at its statistics; by seeing the statistics we can find out the validity of the segment. The statistics gathered by VALIDATE STRUCTURE can be queried from INDEX_STATS.
158. A company's DBA migrated the applications from an earlier release of the Oracle server, which caused some of the indexes to need conversion into reverse-key indexes. How can he do this operation?
a. By rebuilding b. Drop and recreating
c. Move the index to other tablespace d. Create another index
Answer: A
Explanation: While rebuilding indexes we can change the index type.
159. Five million rows are dumped into one table every month end. After dumping the data, reports are taken based on a particular column which has low cardinality. Which of the following indexes will give better performance?
a. B-tree index b. Reverse-key index
c. Bitmap index d. Function-based index
e. Composite index
Answer: C
Explanation: If the column cardinality is low, a bitmap index gives the best performance, not the other index types. While creating any index we have to look at the cardinality of the column; depending on that we decide which index will give the best performance.
160.Which command is not a valid syntax?
a. alter index summit.orders_region_id_idx coalesce; b. alter index summit.orders_region_id_idx resize 500m;
c. alter index summit.orders_region_id_idx rebuild online; d. alter index summit.orders_region_id_idx rebuild tablespace indx02;
Answer: B
Explanation: We cannot resize a logical segment manually; we can only manually resize physical structures (like a datafile).
161.Which of the following is false about control files?
a. maxlogfiles can be set b. maxdatafiles can be set
c. maxinstances can be set d. maxtablespaces can be set
Answer: D
Explanation: The controlfile records maxlogfiles, maxlogmembers, maxdatafiles, maxinstances and maxloghistory, but not maxtablespaces.
162.Control files cannot store which of the following information?
a. location of datafiles b. location of redo log files
c. location of archived redo log files d. none
Answer: C
Explanation: The controlfile has the locations of the datafiles, tempfiles and online redolog files, but not of the archived redolog files.
163.Which of the following is false about redolog files?
a. There can be only one logfile b. There can be only one logmember
c. There can be more than one redolog file d. There can be more than one redolog member
Answer: A
Explanation: We must have at least two redolog groups in a database.
164.Which of the following is false about redolog files?
a. A logswitch can be forced by the DBA b. A logswitch occurs when LGWR has filled one log file group
c. A logswitch occurs when the DB shuts down d. None of the above
Answer: C
Explanation: A log switch occurs on an 'alter system switch logfile' statement or when a log file group fills up. When we shut down a database, no log switch occurs.
165.What happens when a user issues commit? (choose the best answer)
a. The transactions present in the redolog buffer are written to the redolog files and then the commit complete message is issued. b. The transactions are written from the database block buffers to the redolog buffers and then the commit complete message is issued.
c. The transactions are written to the datafiles directly, bypassing the buffers, and then the commit complete message is issued. d. All of the above.
Answer: A
Explanation: Whenever a user issues a commit, the transaction is written to the online redolog file and then the acknowledgement is returned to the user.
166. How do we change the name of the database?
a. Just modify db_name in the init.ora and then restart the database b. Take a trace of the controlfile, modify the DB name in it, and then recreate the controlfile
c. It is not possible to change the name of the database d. Both A and B
Answer: D
Explanation: To change the DB name, take a trace of the controlfile (it is created in USER_DUMP_DEST), edit the DB name in the trace file and in init.ora, remove the existing controlfiles, go to the nomount state, and then recreate the controlfile.
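A sketch of the first step; the trace file name produced is system-generated:
SQL> alter database backup controlfile to trace;
-- the trace appears in user_dump_dest and contains a CREATE CONTROLFILE statement to edit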
167.What is rolling forward mechanism?
a. Applying contents of archived log files b. Copying contents of database buffers onto datafiles
c. Copying contents of redolog files onto archived redolog files d. We cannot roll forward; we can only roll back
Answer: A
Explanation: Rolling forward is applying the transactions that are in the archivelog files upon the datafiles.
168. Assume that we lost redolog group 2. As a DBA, how will we recover it?
a. SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2; b. SQL> ALTER DATABASE RECOVER UNARCHIVED LOGFILE GROUP 2;
c. SQL> ALTER LOGFILE CLEAR GROUP 2; d. Once lost, we cannot recover them without having a backup.
Answer: A
Explanation: If an inactive redo log group is lost, the database shuts down; we then have to go to the mount state and issue 'alter database clear unarchived logfile group 2' so that the lost redolog group is recreated.
169. Through which data dictionary view will we find out where the archived log files are created?
a. V$ARCHIVE_DEST b. V$ARCHIVED_DEST
c. V$ARCHIVE_DESTINATION d. We have to manually go to os prompt and
check
Answer: A
Explanation: v$archive_dest displays the information about the archivelog destinations. In Oracle 9i we can specify 10 destinations, whereas in Oracle 8i only 5 destinations.
170.Which of the following is true about archive log files? (choose one best answer)
a. The contents of the current redolog are written to archived logfiles b. Contents of an active redolog are written to archived logfiles
c. All committed transactions are written to archived logfiles d. All of the above
Answer: B
Explanation: The contents of the current redolog file are written to the archivelog file when a log switch occurs, and the file name is derived from the sequence number of the online redolog group.
171. What are the minimum files that will be copied in a whole-database cold backup?
a. Controlfiles and redologfiles, but not datafiles b. Controlfiles, datafiles and archivelog files
c. Controlfiles, datafiles and redolog files; archivelog files are not compulsory d. Controlfiles, datafiles, redologfiles and the orapwd file
Answer: C
Explanation: Controlfiles, datafiles and redologfiles (in the sense of the whole database), plus external files like init.ora and the password file, are the minimum files to be copied in a cold backup.
172. To take a cold backup, the database will be in which mode?
a. Up and running b. mount stage
c. Nomount stage d. C or D
e. All of the above.
Answer: E
Explanation: For a cold backup the datafiles must be synchronized, which is possible only when the database has not been mounted or has been shut down.
173. To take a cold backup, the database should be
a. Running in archivelog mode b. Running in noarchivelog mode
c. No matter whether running in archivelog mode or not d. All the above
Answer: C
Explanation: A cold backup is possible whether the DB is in archivelog or noarchivelog mode, whereas a hot backup is possible only when the DB is running in archivelog mode.
174. If we lose the database, to recover it we need
a. To be running in archivelog mode b. No need to be running in archivelog mode
c. A backup (taken after putting it in archivelog mode) should be available d. B and C
e. A and C f. All the above
Answer: E
Explanation: To recover the database we must have a consistent backup of the database and the archivelogs, in order to apply all the archivelogs upon the database files.
175. After performing a data load of static data into a tablespace, the DBA performs a backup. If the DBA then sets the tablespace to read-only mode, when is the next time a backup should be taken?
a. Immediately after placing the tablespace into read-only mode b. The next time the database is put into read-only mode
c. The next time a data load is performed on the tablespace d. Never; the data will not change
e. The next time the DBA shuts down the database
Answer: A
Explanation: If we take a backup of the tablespace immediately after putting it in read-only mode, the data is captured in that backup, so we cannot lose it.
176.We are attempting to identify synchronization between files on a mounted database that was just backed up.
Which of the following dictionary views may offer assistance in this task?
a. V$DATAFILE_HEADER b. V$BACKUP_CORRUPTION
c. V$LOGFILE d. V$BACKUP
Answer: A
Explanation: The V$DATAFILE_HEADER view provides the status of each datafile in the DB along with its SCN.
177.An Oracle database ensures detection of the need for database recovery by checking for synchronization
using which of the following background processes?
a. CKPT b. DBW0
c. LGWR d. SMON
Answer: D
Explanation: SMON performs instance recovery if a datafile SCN does not match the controlfile SCN.
178.After a session makes a change to a table in the Oracle database, at what point is that change physically
written to the database files?
a. When a checkpoint occurs b. When the user disconnects
c. When the transaction commits d. As soon as the change is made
Answer: A
Explanation: When a checkpoint occurs, the current SCN is written to each datafile and the dirty buffers are written to the corresponding datafiles.
179.We are concerned that media recovery performance will be slow because we have very large redo logs to
apply to the Oracle database. Which of the following choices best identifies the way we can speed recovery
performance in the future?
a. Take more frequent checkpoints. b. Use smaller redo logs.
c. Generate less redo on the database. d. Take more frequent backups.
Answer: D
Explanation: If we take backups of the database more frequently, fewer archive logfiles need to be applied to do the recovery.
180.While the DB is running in NO ARCHIVELOG MODE can we take a datafile offline by giving the command
'ALTER DATABASE DATAFILE <FILE_ID> OFFLINE'
a. Yes b. No
Answer: B
Explanation: Unless the DB is enabled in archivelog mode, Oracle will not allow the above statement; but we can issue the following statement: 'ALTER DATABASE DATAFILE <FILE_ID> OFFLINE DROP'.
181.To take hot backup database should be
a. Up and running b. Should be in archive log mode
c. Shutdown mode d. A and B
e. None of the above.
Answer: D
Explanation: While taking a hot backup the DB must be up and running, and it must be in archivelog mode.
182. To take a hot backup, the tablespaces will be in which mode?
a. Offline mode b. Online mode
c. Read write mode d. Begin backup mode
e. Read only mode
Answer: D
Explanation: When we put a tablespace into begin backup mode, the corresponding datafiles become eligible for a consistent backup.
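For example, assuming a tablespace named users:
SQL> alter tablespace users begin backup;
-- copy the corresponding datafiles at the OS level
SQL> alter tablespace users end backup;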
183. In order to minimize the risks associated with performing a tablespace point-in-time recovery, each of the following choices identifies an item we should have on hand, except one. Which is it?
a. A complete backup of the database b. All archive redo logs since the database was backed up
c. Only the datafiles comprising the tablespace to be recovered d. A recent database export
Answer: C
Explanation: A tablespace point-in-time recovery needs more than just the datafiles of the tablespace being recovered; the other items are all required.
184. Which view can we query to know whether our database is in backup mode?
a. V$BACKUP b. V$DATAFILE
c. V$TABLESPACE d. V$DATABASE
Answer: A
Explanation: V$BACKUP gives the SCN and the status (active or not active) of each datafile in the DB.
185.The DBA is attempting to diagnose database corruption with dbverify. Which of the following command-line
options are not associated with the use of this tool (choose two)?

a. FILE b. LOGFILE
c. BADFILE d. DIRECT
Answer: C, D
Explanation: FILE and LOGFILE are valid options for the dbv utility; BADFILE and DIRECT are not.
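A sketch of running dbverify against one datafile (the paths are illustrative):
$ dbv file=/u01/oradata/orcl/users01.dbf logfile=dbv_users.log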
186.The Oracle database is experiencing peak transaction volume. In order to reduce I/O bottlenecks created by
large amounts of redo write activity, which of the following steps can be taken?
a. Increase the size of the buffer cache. b. Increase the size of the rollback segments.
c. Increase the size of the log buffer. d. Increase the size of the shared pool.
Answer: C
Explanation: Increase the size of the redo log buffer so that sessions can buffer more redo in memory, minimising the I/O needed to write transaction information to the current redo log file.
187.The alert log can contain specific information about which database backup activity?
a. Placing datafiles in begin and end backup mode.
b. Placing tablespaces in begin and end backup mode.
c. Changing the database backup mode from open to close.
d. Performing an operating system backup of the database files.
Answer: B
Explanation: When we place a tablespace in begin or end backup mode, Oracle writes a message to the alert log. A single datafile cannot be put in begin backup mode directly, and the alert log does not record what we do at the operating-system level.
188.Currently, there is only one copy of a control file for a production Oracle database. How could the DBA
reduce the likelihood that a disk failure would eliminate the only copy of the control file in the Oracle
database?
a. Add another control filename to the init.ora file
b. Issue alter database backup to trace and restart the instance
c. Copy the control file and issue the alter database statement
d. Shutdown the database, copy the control file to a second location, and add the second name and location to init.ora
Answer: D
Explanation: To multiplex the controlfile: shut down the database, copy the controlfile to a new location, add that location to the CONTROL_FILES parameter in init.ora, and open the database.
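A sketch of the steps (all paths and the SID are hypothetical):
SQL> SHUTDOWN IMMEDIATE
$ cp /u01/oradata/orcl/control01.ctl /u02/oradata/orcl/control02.ctl
$ vi $ORACLE_HOME/dbs/initORCL.ora
    control_files = ('/u01/oradata/orcl/control01.ctl',
                     '/u02/oradata/orcl/control02.ctl')
SQL> STARTUP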
189.The DBA has just created a database in noarchivelog mode. Which of the following two reasons may cause
him or her to leave the database in that mode for production operation? (Choose two)
a. Medium transaction volume on the database system between backups
b. Business requirement for point-in-time database recoveries
c. Low transaction volume on the database system between backups
d. Limited available disk space on the machine hosting Oracle
Answer: C, D
Explanation: NOARCHIVELOG mode is preferable if the data being entered is reproducible, or if there is not enough disk space to hold the archived logs.
190.In archive log multiplexing environments on Oracle8i databases, which of the following parameters is used
for defining the name of the archive log in the additional destinations?
a. LOG_ARCHIVE_DEST_N b. LOG_ARCHIVE_MIN_SUCCEED_DEST
c. LOG_ARCHIVE_DEST d. LOG_ARCHIVE_DUPLEX_DEST
Answer: A
Explanation: LOG_ARCHIVE_DEST_n names the additional destinations. Oracle8 allowed only two destinations, Oracle8i allows five, and Oracle9i allows ten.
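For example, in init.ora (the paths and format string are hypothetical):
log_archive_dest_1 = 'LOCATION=/u01/arch'
log_archive_dest_2 = 'LOCATION=/u02/arch'
log_archive_format = 'arch_%t_%s.arc'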
191.To take the backup through Rman utility
a. Database should be open mode b. Database should be archive log mode
c. Database should be mounted d. Database should be shutdown mode
e. Instance only should be started up
Answer: C
Explanation: To take a backup through RMAN the database must at least be mounted; it does not matter whether it is running in ARCHIVELOG or NOARCHIVELOG mode.
192.Whenever we connect to the target database through the RMAN utility
a. It needs a dedicated server b. It starts two dedicated servers
c. It supports shared servers d. A and B
e. C and A
Answer: D
Explanation: The RMAN utility needs a dedicated server process, and whenever we connect to the target database it starts two dedicated server processes.
193.The DBA is evaluating the use of RMAN in a backup and recovery strategy for Oracle databases. Which of the
following are reasons not to use RMAN in conjunction with the overall method for backing up and recovering
those databases?
a. When automation of the backup processing is required
b. When the database consists of large but mostly static datafiles
c. When use of a recovery catalog is not feasible
d. When your backup strategy must encompass files other than Oracle database files
Answer: D
Explanation: RMAN backs up datafiles, controlfiles and archived logs only; files other than Oracle database files need a different backup method.
194.The DBA is developing scripts in RMAN. In order to store but not process commands in a script in RMAN for
database backup, which of the following command choices are appropriate?
a. execute script { ... } b. create script { ... }
c. run { ... } d. allocate channel { ... }
Answer: B
Explanation: create script {...} stores the script without executing it; the script can then be referred to whenever we want to execute it with execute script.
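A minimal sketch (the script name is hypothetical; stored scripts require a connection to the recovery catalog):
RMAN> CREATE SCRIPT nightly_backup { BACKUP DATABASE PLUS ARCHIVELOG; }
RMAN> RUN { EXECUTE SCRIPT nightly_backup; }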
195.Which of the following best describes multiplexing in backup sets?
a. one archive log in one backup set with file blocks stored contiguously
b. multiple controlfiles in one backup set with file blocks for each stored noncontiguously
c. multiple datafiles in one backup set with file blocks for each stored noncontiguously
d. one datafile in multiple backup sets with file blocks stored contiguously
Answer: C
Explanation: Multiplexing means that blocks from multiple datafiles are interleaved within one backup set, so the blocks of each file are stored noncontiguously.
196.The DBA is planning backup capacity using RMAN. Which of the following choices best describes
streaming?
a. The ability RMAN has to take multiple backups of multiple databases at the same time
b. The process by which RMAN communicates with the underlying OS
c. The method used for writing backup sets to tape
d. The performance gain added by parallel processing in RMAN
Answer: C
Explanation: Streaming is the method of writing backup sets continuously to tape so that the tape device is kept busy. RMAN can write backup sets to disk or tape.
197.The DBA has executed a level 0 backup and a level 1 backup. If the DBA then executes another level 1
backup, what information in the database will be backed up?
a. All changed blocks since the level 2 backup
b. All changed blocks since the level 1 backup only
c. No changes will be saved
d. All blocks in the database
Answer: B
Explanation: A level 1 backup copies the blocks that have changed since the most recent backup at the same level (1) or a lower level (0); here that is the previous level 1 backup.
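A sketch of the sequence in RMAN:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # baseline: all used blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # blocks changed since the level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # blocks changed since the previous level 1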
198.When is the best time to execute the command "resync database"?
a. After creating a recovery catalog and starting Recovery Manager.
b. After closing a recovery catalog and taking the database offline.
c. After closing a recovery catalog and taking your database online.
d. After closing a recovery catalog and closing Recovery Manager.
e. After creating a recovery catalog and closing Recovery Manager.
Answer: A
Explanation: After connecting to the recovery catalog, issue the command 'resync catalog' so that any changes made in the target database are recorded in the recovery catalog for future backup and recovery.
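A minimal sketch (the catalog connect string is hypothetical):
$ rman target / catalog rman/rman@catdb
RMAN> RESYNC CATALOG;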
199.Which statement regarding memory usage by the Recovery Manager is true?
a. Memory is allocated from the database shared pool.
b. Memory could never be allocated from the database large pool.
c. Memory could be allocated from the PGA of the backup process.
d. Memory allocated to a Recovery Manager buffer is a function of the DB_FILE_MULTIBLOCK_READ_COUNT value.
Answer: B
Explanation: RMAN cannot use shared server processes; a dedicated server process is required. The large pool is mainly useful when shared servers (MTS) are in use, although RMAN buffers can be allocated from it when it is configured.
107. Common UNIX Commands
These are the common UNIX commands Oracle DBAs would use. We have provided brief explanations of the commands with examples. In UNIX, most commands have many options available; for a complete list of options, see the UNIX online manual pages. All UNIX commands and file names are case sensitive.
man man command Manual Pages - Help with any UNIX command
man ps Help on the UNIX ps command
clear clear To clear the screen
pwd pwd Present / Current Working Directory
cd cd [directoryname] Change directory, without argument will change your
working directory to your home directory.
cd work Change working directory to "work"
cd .. Change working directory to parent directory (.. is
parent and . is current directory)
ls ls [-options] [names] List files. [names] if omitted, will list all files and
subdirectories in the directory. Wild cards can be
specified.
ls -l List files with date and permissions:
-rw-rw-r-- 1 oracle dba 706 Sep 23 17:26 storparms.sql
-rwxrwx--- 1 oracle dba 377 Aug 28 15:00 sysdelstat.sql
drwxrwxr-- 2 oracle dba 2048 Oct 22 16:12 work
[column1] [2] [3] [4] [5] [6] [7]
Column1 - Permissions of the file or directory; r-read,
w-write, x-execute
Position 1 indicates if it is a directory
Positions 2-4 is the permission for owner
Positions 5-7 is the permission for group
Positions 8-10 is the permission for others
Column2 - Owner of the file/directory
Column3 - Group which the owner belongs to
Column4 - Size of the file in bytes
Column5 - Last Modified Date
Column6 - Last Modified Time
Column7 - Name of the file/directory
ls -al List files with date and permissions including hidden
files
ls -lt List files with date, sorted in the date modified
ls -ltr bt* List files with date, sorted in the date modified, oldest
first, with filenames starting with bt
Wildcards * Any character, any number of positions
? Any character, one position
[] A set of characters which match a single character
position.
- To specify a range within []
ls *x* List all files which contain an x in any position of the name.
ls x* List all files which start with x
ls *T0[1-3]ZZ List all files which contain T0 followed by 1,2 or 3
followed by ZZ. The following files match this condition:
analyzeall.AAAT01ZZ
dbaoc_err.AAAT03ZZ
dbstart_log.AAAT03ZZ
calerterr.AAAT01ZZ
dbaoc_log.AAAT01ZZ
ls job?.sql List files which start with job followed by any single
character followed by .sql
Example: jobd.sql jobr.sql
ls alert*.???[0-1,9] alert_AAAT01ZZ.1019
alert_AAAD00ZZ.1020
alert_AAAI09ZZ.1021
touch touch filename Create a 0 byte file or change the timestamp of a file to the current time (wild cards as above can be used with the file names)
mkdir mkdir directoryname Create Directory
mkdir -p directorypath Create directory down many levels in single pass
mkdir -p /home/biju/work/yday/tday
rmdir rmdir directoryname Remove directory
rm rm filename Remove file
rm -rf directoryname Remove a directory with its files. Important - there is no way to undelete a file or directory in UNIX, so be careful when deleting files and directories. It is always good to use rm -i filename for deletes.
cp cp filename newfilename Copy a file
cp -r * newloc To copy all files and subdirectories to a new location,
use -r, the recursive flag.
mv mv filename newfilename Rename (Move) a file. Rename filename to
newfilename.
mv filename directoryname Move filename under directoryname with the same file
name.
mv filename directoryname/newfilename Move filename to directoryname as newfilename.
mv * destination If you use a wildcard in the filename, the destination must be a directory; mv moves all the matching files into it.
cp -i file1 file2 Use the -i flag with rm, mv and cp to confirm before destroying a file.
mv -i file1 file2
rm -i file*
file file filename To see what kind of file it is and whether it is editable. Executable files are binary; you should not open them in an editor.
file d* dbshut: ascii text
dbsnmp: PA-RISC1.1 shared executable dynamically
linked -not stripped
dbstart: ascii text
dbv: PA-RISC1.1 shared executable dynamically
linked -not stripped
demobld: commands text
demodrop: commands text
vi vi filename Edit a text file. vi is a very powerful and "difficult to understand" editor, but once you start using it, you'll love it! More vi tricks later!!
cat cat filename See contents of a text file. cat (catenate) will list the
whole file contents. Cat is mostly used to catenate two
or more files to one file using the redirection operator.
cat file1 file2 file3 > files Catenate the contents of file1, file2 and file3 to a single
file called file. If you do not use the redirection, the
result will be shown on the standard output, i.e.,
screen.
more more filename Show the contents of the file, one page at a time.
page page filename In more/page, use space to see the next page and ENTER to see the next line. If you wish to edit the file (using vi), press v; to quit press q.
tail tail -n filename To see the specified number of lines from the end of
the file.
head head -n filename To see the specified number of lines from the top of
the file.
pg pg filename To show the contents of the file, page by page. In pg,
you go up and down the pages with + and - and
numbers.
1 First Page of the file
$ Last Page of the file
+5 Skip 5 pages
-6 Go back 6 pages
ENTER Next page
- Previous Page
q Quit
/string Search for string
env env To see the values of all environment variables.
To set an environment variable: in ksh or sh use "export VARIABLENAME=value"; note there is no space around the =.
In csh use "setenv VARIABLENAME value".
echo $VARIABLENAME See the value of an environment variable
echo echo string To print the string to standard output
echo "Oracle SID is $ORACLE_SID" Will display "Oracle SID is ORCL" if the value of
ORACLE_SID is ORCL.
lp lp filename To print a file to system default printer.
chmod chmod permission filename Change the permissions on a file - As explained under
ls -l, the permissions is read, write, execute for owner,
group and others.
You can change permissions by using numbers or the
characters r,w,x. Basically, you arrive at numbers
using the binary format.
Examples:
rwx = 111 = 7
rw_ = 110 = 6
r__ = 100 = 4
r_x = 101 = 5
chmod +rwx filename Give all permissions to everyone on filename
chmod 777 filename
chmod u+rwx,g+rx,o-rwx filename Read, write, execute for owner, read and execute for
chmod 750 filename group and no permission for others
chown chown newuser filename Change owner of a file
chgrp chgrp newgroup filename Change group of a file
chown newuser:newgroup filename Change owner and group of file
compress compress filename Compress a file - compressed files have extension .Z.
To compress file you need to have enough space to
hold the temporary file.
uncompress uncompress filename Uncompress a file
df df [options] [mountpoint] Freespace available on the system (Disk Free); without arguments it will list all the mount points.
df -k /ora0 Freespace available on /ora0 in Kilobytes. On HP-UX,
you can use "bdf /ora0".
g
Oracle 11 – using Unix Commands Page 236 of 242
WK: 2 - Day: 6.14
df -k . If you're not sure of the mount point name, go to the
directory where you want to see the freespace and
issue this command, where "." indicates current
directory.
du du [-s] [directoryname] Disk used; gives the operating system blocks used by each subdirectory. To convert to KB with 512-byte OS blocks, divide the number by 2.
du -s Gives the summary, no listing for subdirectories
find Find files. find is a very useful command; it searches recursively through the directory tree looking for files that match a logical expression. It has many options and is very powerful.
find /ora0/admin -name "*log" Simple use of find - to list all files whose name end in
-print log under /ora0/admin and its subdirectories
find . -name "*log" -print -exec rm To delete files whose name ends in log. If you do not
{} \; use the "-print" flag, the file names will not be listed on
the screen.
grep Global regular expression print To search for an expression in a file or group of files. grep has two flavors: egrep (extended - expands wild card characters in the expression) and fgrep (fixed-string - does not expand wild card characters). This is a very useful command, especially in scripts.
grep oracle /etc/passwd To display the lines containing "oracle" from
/etc/passwd file.
grep -i -l EMP_TAB *.sql To display only the file names (-l option) which
contains the string EMP_TAB, ignore case for the
string (-i option), in all files with SQL extension.
grep -v '^#' /etc/oratab Display only the lines in /etc/oratab where the lines do
not (-v option; negation) start with # character (^ is a
special character indicating beginning of line, similarly
$ is end of line).
ftp ftp [hostname] File Transfer Protocol - to copy files from one computer to another
ftp AAAd01hp                           Invoke ftp, connect to server AAAd01hp.
Connected to AAAd01hp.com.
220 AAAd01hp.com FTP server
(Version 1.1.214.2 Mon May 11 12:21:14 GMT 1998) ready.
Name (AAAd01hp:oracle): BIJU           The program prompts for a user name; enter the login name on AAAd01hp.
331 Password required for BIJU.
Password:                              Enter the password - it will not be echoed.
230 User BIJU logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ascii                             Specify ASCII mode to transfer files; used for text files.
200 Type set to A.
ftp> binary                            Specify binary mode to transfer files; used for programs and export dump files.
200 Type set to I.
ftp> ls                                See the files on the remote computer.
200 PORT command successful.
150 Opening ASCII mode data connection for /usr/bin/ls.
total 8
-rw-rw-rw- 1 b2t dba 43 Sep 22 16:01 afiedt.buf
drwxrwxrwx 2 b2t dba 96 Jul 9 08:47 app
drwxrwxrwx 2 b2t dba 96 Jul 9 08:49 bin
-rw-rw-rw- 1 b2t dba 187 Jul 30 14:44 check.sql
226 Transfer complete.
ftp> get check.sql                     Transfer check.sql from the remote computer to the local computer; the file is copied to the present directory with the same name. You can optionally specify a new name and directory location.
200 PORT command successful.
150 Opening BINARY mode data connection for check.sql (187 bytes).
226 Transfer complete.
187 bytes received in 0.02 seconds (7.79 Kbytes/s)
ftp> !ls                               ! runs commands on the local machine.
AAAP02SN a4m08.txt tom3.txt a4m01.txt
ftp> put a4m01.txt /tmp/test.txt       Transfer a file from the local machine to the remote machine, under /tmp with the name test.txt.
mail mail "xyz@abc.com" < message.log Mail a file to an internet/intranet address: mails the contents of message.log to xyz@abc.com.
mail -s "Messages from Me" "xyz@abc.com" "abc@xyz.com" < message.log Mail the contents of message.log to xyz and abc with a subject.
who who [options] To see who is logged in to the computer.
who -T Shows the IP address of each connection
who -r Shows when the computer was last rebooted and the run-level.
ps ps Process status - to list the process id, parent process, status, etc. ps without any arguments lists the current session's processes.
ps -f Full listing of my processes, with time, terminal id, parent id, etc.
ps -ef As above, for all the processes on the server.
kill kill [-flag] processid To kill a process - the process id is obtained from the ps command or from the V$PROCESS view in Oracle.
kill 12345 Kill the process with id 12345
kill -9 12345 To force termination of process id 12345
script script logfilename To record all your commands and output to a file. Mostly useful if you want to log what you did and send it to customer support for them to debug. Start logging to the log filename; the logging stops when you type "exit".
hostname hostname Displays the name of the computer.
uname uname -a To see the name of the computer along with the operating system version and license info.
date date Displays the current date and time.
date "+%m/%d/%Y" Displays the date in MM/DD/YYYY format
cal cal Displays the calendar of the current month
cal 01 1991 Displays the January 1991 calendar
telnet telnet [hostname] To open a connection to another computer in the
network. Provide the alias name or IP address of the
computer.
& command & Add & to the end of the command to run in background
nohup command & No hangup - do not terminate the background job even
if the shell terminates.
fg fg To bring a background job to the foreground
bg bg To take a job to the background. Before issuing this command, press ^Z to suspend the process, and then use bg to put it in the background.
jobs jobs To list the current jobs in the shell.
rcp rcp [-r] sourcehost:filename destinationhost:filename Remote copy. Copy files from one computer to another. Setting up the computer for remote copy and remote login (rlogin) will be discussed later.
rcp host1:/ora0/file1.txt host2:/ora0/temp/file1.txt Copy a file from host1 to host2. If the computer name is omitted, the local hostname is assumed.
Using UNIX Commands
- To see errors from the Alert log file
cd alertlogdirectory;
grep ORA- alertSID.log
- To see the name of a user from his UNIX ID (provided your UNIX admin keeps them!)
grep userid /etc/passwd
- To see if port number 1521 is reserved for Oracle
grep 1521 /etc/services
- To see the latest 20 lines in the Alert log file:
tail -20 alertSID.log
- To see the first 20 lines in the Alert log file:
head -20 alertSID.log
- To find a file named "whereare.you" under all sub-directories of /usr/oracle
find /usr/oracle -name whereare.you -print
- To remove all the files under /usr/oracle which end with .tmp
find /usr/oracle -name "*.tmp" -print -exec rm -f {} \;
- To list all files under /usr/oracle which are older than a week
find /usr/oracle -mtime +7 -print
- To list all files under /usr/oracle which were modified within a week
find /usr/oracle -mtime -7 -print
- To compress all files which end with .dmp and are more than 1 MB
find /usr/oracle -size +1048576c -name "*.dmp" -print -exec compress {} \;
- To see the shared memory segment sizes
ipcs -mb
- To see the space used and available on the /oracle mount point
df -k /oracle
- To see the users logged in to the server and their IP address
who -T
- To change the password of the oracle user
passwd oracle
- To convert the contents of a text file to UPPERCASE
tr "[a-z]" "[A-Z]" < filename > newfilename
- To convert the contents of a text file to lowercase
tr "[A-Z]" "[a-z]" < filename > newfilename
- To kill a process from UNIX
kill unixid
OR
kill -9 unixid
- To see the oracle processes
ps -ef | grep SIDNAME
- To see the number of lines in a text file (can be used to find the number of records while loading data from a text file)
wc -l filename
- To change all occurrences of SCOTT to TIGER in a file
sed 's/SCOTT/TIGER/g' filename > newfilename
- To see lines 101 to 120 of a file
head -120 filename | tail -20
- To truncate a file (for example the listener.log file)
rm filename; touch filename
- To see if a SQL*Net connection is OK
tnsping SIDNAME
- To see if the server is up
ping servername
OR
ping IPADDRESS
- To see the versions of all Oracle products installed on the server
$ORACLE_HOME/orainst/inspdver
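Several of the commands above can be combined in a small script. A minimal sketch (the alert log path and mail address are hypothetical) that mails any ORA- errors found in the alert log to the DBA:

#!/bin/sh
# Scan the alert log for ORA- errors and mail the latest ones to the DBA.
ALERT=/u01/app/oracle/admin/ORCL/bdump/alertORCL.log
ERRS=/tmp/alert_errs.$$
if grep ORA- $ALERT > $ERRS
then
    # Mail only the last 20 error lines to keep the message short.
    tail -20 $ERRS | mail -s "ORA- errors in alert log" dba@abc.com
fi
rm -f $ERRS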
108. Important Websites for Oracle
Datawarehousing Sites:
www.datawarehousing.com/, www.dmreview.com, www.businessobjects.com, www.microstrategies.com/,
www.cognos.com/, www.dw-institute.com/, www.comshare.com, www.intelligententerprise.com/

Oracle HELP Sites:
www.dbasupport.com, www.oracleguru.com/, www.orafans.com, www.oramag.com, www.teamdba.com,
www.revealnet.com, www.dbdomain.com, www.sampoorna.com, www.dbatoolz.com/, www.orapub.com,
www.oraclezone.com/, www.oracle-home.com/, www.searchdatabase.techtarget.com, www.oracle.com/think9i,
www.oracle.com/products/trial/, www.devshed.com, www.oraclefoundation.org, www.metalink.oracle.com,
www.oracle-base.com, www.education.oracle.com, www.asktom.oracle.com, www.tahiti.oracle.com,
www.orafaq.com, www.oaug.org/

Database Tools:
www.embarcadero.com/, www.quest.com, www.cai.com/solutions/oracle/, www.datamirror.com,
www.datajunction.com, www.keeptool.com, www.precise.com, www.veritas.com, www.pocketdba.com,
www.esti.com

OCP Help Sites:
www.tagsystems.com/Oracle.htm, www.informit.com, www.sqlcourse.com, www.sqlcourse2.com,
www.dbdomain.com/dbaexam.htm, www.hot-oracle.com/, www.kevinloney.com

Database Magazines:
www.dbmsmag.com, www.db2mag.com, www.oramag.com, www.tdan.com/, www.elementkjournals.com/dbm/index.htm