
Some DBA interview questions I have commonly faced.

1)How do you verify the No. of Databases running on a Host


A: ps -ef|grep pmon
or check the /etc/oratab file: cat /etc/oratab
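The oratab check can be sketched as a small shell snippet; each non-comment line of oratab has the form SID:ORACLE_HOME:startup_flag. The sample file below is an assumption standing in for the real /etc/oratab:

```shell
# Hedged sketch: list database SIDs from an oratab-style file.
# The sample file and its contents are assumptions; on a real host
# you would read /etc/oratab itself.
cat > /tmp/oratab.sample <<'EOF'
# comment line
ORCL:/u01/app/oracle/product/10.2.0:Y
TEST:/u01/app/oracle/product/10.2.0:N
EOF
# Print the first colon-separated field of every non-comment entry.
awk -F: '/^[^#]/ && NF >= 3 {print $1}' /tmp/oratab.sample
```

Each SID printed corresponds to one database registered on the host.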
2)How do you verify the name of the database
A:select name from v$database; Or Show parameter db_name;
3)How do you verify whether your database is running or not
A: Here is how to check if your database is running:
check_stat=`ps -ef | grep pmon | grep ${ORACLE_SID} | grep -v grep | wc -l`
if [ $check_stat -lt 1 ]
then
  echo "Instance ${ORACLE_SID} is not running."
  exit 1
fi
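The same pmon check can be wrapped in a reusable function. The sample ps output below is an assumption standing in for real `ps -ef` output on a database host:

```shell
# Hedged sketch: count pmon processes for a given SID from a ps-style listing.
count_pmon() {
  # $1 = ORACLE_SID; reads ps-style output on stdin and counts
  # lines ending in ora_pmon_<SID>.
  grep -c "ora_pmon_$1\$"
}

# Sample listing (an assumption): one running instance named ORCL.
sample_ps='oracle   1234     1  0 ?  ora_pmon_ORCL
oracle   1235     1  0 ?  ora_smon_ORCL'

printf '%s\n' "$sample_ps" | count_pmon ORCL   # prints 1: instance is up
```

On a real host you would pipe `ps -ef` into the function instead of the sample text.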
4)How do you verify when the database was created
5)How do you verify how long your database has been running
6)How do you verify the name of your instance
A: SQL> select instance_name from v$instance;
7)How do you verify the mode of your database
A: SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG
To switch to ARCHIVELOG mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;

There are several system views that can provide us with information
regarding archiving, such as:
V$DATABASE
Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG
mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG
Displays historical archived log information from the control file. If you
use a recovery catalog, the RC_ARCHIVED_LOG view contains similar
information.
V$ARCHIVE_DEST
Describes the current instance, all archive destinations, and the current
value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES
Displays information about the state of the various archive processes
for an instance.
V$BACKUP_REDOLOG
Contains information about any backups of archived logs. If you use a
recovery catalog, the RC_BACKUP_REDOLOG contains similar
information.
V$LOG
Displays all redo log groups for the database and indicates which need
to be archived.
V$LOG_HISTORY
Contains log history information such as which logs have been archived
and the SCN range for each archived log.
Using these views we can verify that we are in fact in ARCHIVELOG
mode:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
SQL> select DEST_NAME, STATUS, DESTINATION from V$ARCHIVE_DEST;
8)How do you enable automatic archiving
A: SQL> shutdown
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
9)How do you do manual archiving
A: SQL>startup mount
SQL>alter database archivelog manual;
SQL> archive log list
10)How do you set the archive file format
11)What is the physical structure of your database
12)How do you verify whether instance is using pfile or spfile
A: 1) SELECT name, value FROM v$parameter WHERE name =
'spfile'; -- the value is NULL if you are using a pfile
2) SHOW PARAMETER spfile -- likewise, the value column is empty if
you are using a pfile and not an spfile
3) SELECT COUNT(*) FROM v$spparameter WHERE value IS NOT
NULL; -- if the count is non-zero the instance is using an spfile, and
if the count is zero it is using a pfile.
By default Oracle looks in a default location that depends on the OS:
on Unix it checks the $ORACLE_HOME/dbs directory, and on Windows
the %ORACLE_HOME%\database directory. The content of a pfile is
plain text, but the content of an spfile is in a binary format that
Oracle itself maintains.
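The text-vs-binary distinction can be illustrated with a quick shell check; the sample file below is an assumption standing in for $ORACLE_HOME/dbs/initORCL.ora:

```shell
# Hedged sketch: a pfile is plain text you can read directly, while an
# spfile carries a binary header. The sample pfile here is an assumption.
printf 'db_name=ORCL\ndb_block_size=8192\n' > /tmp/initORCL.ora

# grep -I treats binary files as containing no matches, so matching any
# character (.) succeeds only on a text file.
if grep -Iq . /tmp/initORCL.ora; then
  echo "text pfile"
else
  echo "binary (spfile-like)"
fi
```

The same one-liner run against a real spfile would take the binary branch.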
13)How do you start an instance with spfile
A: If an spfile exists in the default location, a plain SQL> STARTUP uses
it automatically. To start from a non-default spfile, create a one-line
pfile containing SPFILE=<path to spfile> and run STARTUP PFILE=<path
to that pfile>.
14)How do you start an instance with pfile
15)How do you read the contents of an spfile?
A: The spfile is binary, so do not open it in an editor such as vi. You
can view it with the strings command, or create a readable pfile from
it: SQL> CREATE PFILE FROM SPFILE;
16)How do you change the contents of spfile

A: Its contents can only be altered with


SQL>alter system set <parameter=value> scope=memory or spfile;

17)List out the Initialisation parameters used by your instance


A:
BACKGROUND_DUMP_DEST: Specifies the directory where the trace
files generated by the background processes are to be written. This is
also the location of the alert log for the database.
COMPATIBLE: Provides Oracle with the understanding of what features
you intend the database to have. If there is a feature in 9i that was not
available in 8i and this parameter is set to 8.1.7, the feature will fail to
perform.
CONTROL_FILES :The location of the control files for the database.
DB_BLOCK_SIZE:The default block size for the database.
USER_DUMP_DEST:Specifies the directory where the trace files
generated by user sessions are written.

CORE_DUMP_DEST :Specifies the location where core dump files


generated by Oracle are written.
DB_NAME :The name of the database and also of the SID.
INSTANCE_NAME :The name of the instance and, with the exception of
a RAC environment, also the database and the SID.

OPEN_CURSORS: The maximum number of cursors that a session can
have open at any given time.

18)Who is the owner of your oracle software


19)What is the version of your database
A: SQL> select * from v$version;
20)What is the version of your sqlplus
21)Where is the SQLPLUs located
A: $ORACLE_HOME/bin directory
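As a quick illustration of that layout (the fake ORACLE_HOME below is an assumption, not a real installation):

```shell
# Hedged sketch: sqlplus lives under $ORACLE_HOME/bin, so listing that
# directory (or `which sqlplus` with bin on PATH) finds it. We build a
# fake ORACLE_HOME purely for illustration.
ORACLE_HOME=/tmp/fake_oracle_home
mkdir -p "$ORACLE_HOME/bin"
touch "$ORACLE_HOME/bin/sqlplus"

# Confirm the binary is where we expect it.
ls "$ORACLE_HOME/bin" | grep -x sqlplus   # prints sqlplus
```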
22)Who is the owner of data dictionary
A: The SYS user owns the data dictionary. The SYS and SYSTEM users
are created when the database is created.
23)Where is data dictionary located
24)What are the dynamic views available in Nomount stage
25)What are the dynamic views available in Mount stage
26)What are the data dictionary views available in NOmount Stage
27)What are the data dictionary view available in Mount STage
28)How do you change the database from a Mount stage to Nomount
stage
29)How do you view the current log sequence No.
A: SQL> archive log list;
The output displays the current log sequence number.
30)What is the difference between instance name and database name
A: Database: a collection of physical operating system files on disk.
When using Oracle 10g Automatic Storage Management (ASM) or raw
partitions, the database may not appear as individual separate files in
the operating system, but the definition remains the same.
Instance: a set of Oracle background processes/threads and a
shared memory area, which is memory that is shared across those
threads/processes running on a single computer.
31)Write down the steps to change the database mode to
NoarchiveLog
A: SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
32)What are the contents of the alert log file
33)Where are the Background processes logging information written to
34)How do you specify the location of the Background processes
logging information
35)How do you specify the location of the User Processes logging
information
Installing Oracle9i/10g in a Unix (RedHat) environment
2) What are the components of physical database structure of Oracle
database
3) What is the difference between materialized view and snapshots
A: They are the same feature: "snapshot" is the older term, renamed
"materialized view" from Oracle 8i onwards. A materialized view stores
the result of a query, typically to make a copy of a table available on a
remote system.
4) What is difference between base table and data dictionary views?
A: Base tables are created when the database is created and store the
data dictionary information about the database. They are owned by
SYS, and the information in them is kept in a cryptic internal format
and must not be modified directly. So we use views to access the
information in these base tables; these views are called data dictionary
views, and they are created when we run the script:
@$ORACLE_HOME/rdbms/admin/catalog.sql
5) Cloning and Standby Databases
A: A clone is simply a copy of your database that can be opened in
read-write mode. A standby database is also a copy of your database,
but it runs in standby mode and is kept in sync with the production
database by applying the redo generated at the source (production)
database. A standby database cannot be opened in read-write mode;
it can be made read-write by activating it, which resets its redo log
sequence.
6) What is the SCN (system change number) in Oracle? Please
explain.
A: The system change number (SCN) is an ever-increasing value that
uniquely identifies a committed version of the database. Every time a
user commits a transaction Oracle records a new SCN. You can obtain
SCNs in a number of ways for example from the alert log. You can then
use the SCN as an identifier for purposes of recovery.
7) Is VARCHAR2 size optimization worthwhile ?
8) How do you manage memory in an Oracle database? How do you
maximize the number of users in an Oracle database?
9) Index tablespace for a Database
A: There is no provision in Oracle for a default index tablespace. A
workaround is to have a job that scans for indexes in other tablespaces
and rebuilds them into the desired one.
10) What are the fixed memory structures inside the SGA?
A: Part of the SGA contains general information about the state of the
database and the instance which the background processes need to
access; this is called the fixed SGA. No user data is stored here. The
SGA also includes information communicated between processes such
as locking information.
With the dynamic SGA infrastructure, the sizes of the buffer cache, the
shared pool, the large pool and the process-private memory can be
changed without shutting down the instance.
The dynamic SGA allows Oracle to set, at run time, limits on how much
virtual memory Oracle uses for the SGA. Oracle can start instances
underconfigured and allow the instance to use more memory by
growing the SGA components, up to a maximum of SGA_MAX_SIZE.

11) what is directory naming in oracle9i ?


A: Oracle Net Services can use a centralized directory server as one of
the primary methods for storing connect identifiers. Clients configured
for directory usage can use those connect identifiers in their connect
strings. The directory server resolves the connect identifier to a
connect descriptor that is passed back to the client. Oracle Net
Services supports Oracle Internet Directory and Microsoft Active
Directory.
12) What is the most important action a DBA must perform after
changing the database from NOARCHIVELOG TO ARCHIVELOG ?
A: Take a full backup of the entire database, because backups taken
before the switch cannot be rolled forward (no archived redo exists for
the NOARCHIVELOG period). Also make sure archiving is active (ALTER
SYSTEM ARCHIVE LOG START in older releases); otherwise the
database halts when it is unable to rotate the redo logs.
13) What is the difference between Pctused and PctFree?
A: PCTUSED - the percentage of used space in a block below which the
block returns to the FREELIST and becomes available for inserts of
new row data.
PCTFREE - the percentage of space in a block reserved for future
updates of existing data.
14) how to find which tablespace belongs to which datafile ?
A: SQL> select tablespace_name,file_name from dba_data_files;
15) What is a synonym
A: A synonym is an alternative name for objects such as tables, views,
sequences, stored procedures, and other database objects.
SQL> CREATE SYNONYM emp FOR SCOTT.EMP;
SQL> DROP SYNONYM emp;
16) What is a Schema ?
17) What is a deadlock ? Explain .
A: A deadlock arises when two processes are each waiting to update
rows of a table that are locked by the other process.
18) What is a latch?
A: A latch is a serialization mechanism. In order to gain access to a
shared data structure, you must "latch" that structure, which prevents
others from modifying it while you are looking at it or modifying it
yourself. It is a programming tool.

19) Latches vs Enqueues


A: Enqueues are another type of locking mechanism used in Oracle. An
enqueue is a more sophisticated mechanism which permits several
concurrent processes to have varying degrees of sharing of "known"
resources. Any object which can be concurrently used can be
protected with enqueues. A good example is locks on tables: we
allow varying levels of sharing on tables, e.g. two processes can lock
a table in share mode or in share update mode.
20) What is difference between Logical Standby Database and Physical
Standby database?
A: Physical standby: the schema matches the source database exactly.
Archived redo logs are shipped directly to the standby database, which
is always running in "recover" mode; upon arrival, the archived redo
logs are applied directly to the standby database.
Logical standby: the database does not have to match the schema
structure of the source database. It uses LogMiner techniques to
transform the archived redo logs into native DML statements (insert,
update, delete), which are transported and applied to the standby
database. The tables maintained by a logical standby can be open for
read-only SQL queries, while all other tables can be open for updates.
A logical standby database can also have additional materialized views
and indexes added for faster performance.
21) Explain about Oracle Statistics parameter in export?
A: Export is one way of taking a backup. In an export backup you can
specify many parameters in a parameter file or at the command line.
One of the parameters is STATISTICS=Y/N; if you specify Y, the
statistics generated in the database are exported into the dump file.
22) What is the difference between latches, locks and enqueues?
23) What is the frequency of redo log writes?
A: LGWR writes:
1) on COMMIT or ROLLBACK
2) when 1 MB of redo from a large transaction has accumulated
3) before DBWR writes
4) when the redo log buffer is 1/3 full
5) when a 3-second timeout occurs
6) at a checkpoint
24) Which process writes data from data files to database buffer
cache?
25) what is the difference between local managed tablespace &
dictionary managed tablespace ?
A: The basic difference is that in a dictionary-managed tablespace,
every time an extent is allocated or deallocated the data dictionary is
updated, which increases the load on the data dictionary. In a locally
managed tablespace the space information is kept inside the datafile
in the form of bitmaps; every time an extent is allocated or deallocated
only the bitmap is updated, which removes the burden from the data
dictionary.
26) What are clusters ?
A: A cluster is a data structure that improves retrieval performance. A
cluster, like an index, does not affect the logical view of the table. A
cluster is a way of storing related data values together on disk. Oracle
reads data a block at a time, so storing related values together reduces
the number of I/O operations needed to retrieve related values, since a
single data block will contain only related rows.
A cluster is composed of one or more tables. The cluster includes a
cluster index, which stores all the values for the corresponding cluster
key. Each value in the cluster index points to a data block that contains
only rows with the same value for the cluster key.
27) What is an extent
28) Database Auto extend question
A: This is an interview question from BMC Software: "While installing
the Oracle 9i (9.2) version, the system automatically takes
approximately 4 GB of space, and that is fine. Now, if my database is
growing and reaching that 4 GB of space, and I would like to extend
my database space to 20 GB or 25 GB, what are the things I have to
do?" Please give accurate solutions or alternatives for this query.
29) How to know which query is taking long time?
30) where does the SCN resides (system change number)
A: The SCN advances continuously as the database changes, with every
transaction commit. It resides in the control files and datafile headers.
The CKPT (checkpoint) background process writes the current SCN to
the control files and datafile headers at each checkpoint. At startup,
Oracle checks that the SCNs in the datafiles and control files match;
only if they are the same is the database consistent and opened,
otherwise recovery is needed before it will start.
31) What is RAC? What is data migration? What is Data Pump?
32) Is it possible to drop more than one table using single sql
statement? if yes then how.
A: No. A single DROP TABLE statement can drop only one table.
33) One DDL SQL script that has kept at certain location should be run
on multiple servers to keep database synchronize. This task has to do
in oracle database and this should be done as a job from scheduler.
How will you do it?
A: There are many ways to do that. The following is the one I would
prefer, as I usually do it this way: write a small script (Unix shell,
Windows Scripting Host, or any other scripting tool, including Pro*C)
that loops through each database, connects, and executes the SQL
script, and schedule it as a job.
34) How to you move from Dedicated server Process to a Shared
Server Process
A: Use the DBCA tool; you will get the option to select shared server
mode. Or manually:
1. Set SHARED_SERVERS to 1 or more in the init.ora.
2. Make changes in the tnsnames.ora file so connections are made
through DISPATCHERS rather than dedicated servers.
35) What are the attributes of the Virtual Indexes
A: A virtual index does not store any data in it, unlike a normal index,
so queries will not benefit from it; it can be used only for analysis.
1. They are permanent and continue to exist unless we drop them.
2. Their creation will not affect existing and new sessions; only
sessions marked for virtual index usage will become aware of their
existence.
3. Such indexes will be used only when the hidden parameter
_use_nosegment_indexes is set to true.
36) How do you enable the partitioning feature in Oracle 8i?
37) You have taken import of a table in a database. you have got the
Integrity constraint violation error. How you are going to resolve it.
A: Use this DDL statement before the CREATE TABLE in the script to
avoid the integrity constraint violation error: DROP TABLE tab_name
CASCADE CONSTRAINTS; CASCADE CONSTRAINTS drops the foreign
keys associated with the table and frees the table from foreign key
references.
38) Why, in 10g, when you use the real-time apply feature in
conjunction with Maximum Protection, can you achieve zero data loss
but not zero database downtime?
A: In Maximum Protection mode, if the last standby database
configured in this mode becomes unavailable, processing stops on the
primary database: commits cannot complete, so no committed data is
ever lost, but the primary effectively incurs downtime.
39) What is ORA-1555?
A: ORA-1555 error can occur in Oracle 10g also even with UNDO
RETENTION GUARANTEE enabled.
ORA-1555 happens when Oracle server process could not find the
block-image in UNDO tablespace for the read-consistency.
40) How can the problem be resolved if a SYSDBA, forgets his
password for logging into enterprise manager?
A: There are two ways to do that:
1. Log in as SYSTEM and change the SYS password by using ALTER
USER.
2. Recreate the password file using orapwd, set
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE, and restart the instance.
41) What is the correct sequence among FETCH, EXECUTE, And PARSE
A: 1. Parse
2. Execute
3. Fetch
42) What is database link
A: A database link is a pointer in the local database that allows you to
access objects in a remote database.

43) What is a Database instance ? Explain


44) What is an Index ? How it is implemented in Oracle Database ?
45) What is Parallel Server ?
46) What is a deadlock and Explain
47) What are the components of logical database structure of Oracle
database
48) What is an Oracle index
49) What is a tablespace
50) When a database is started, Which file is accessed first?
51) How do you handle data corruption for ASM files?
52) Does a view contain data?
A: No, a view never contains data; only its definition is stored in the
database. Whenever you invoke the view it shows the data based on
its definition. Only a materialized view (snapshot) contains data.
53) How many maximum number of columns can be part of Primary
Key in a table in Oracle 9i and 10g?
A: The maximum number of columns that can be a part of Primary key
in a table in Oracle 10g is 32.
54) I am getting error "No Communication channel" after changing the
domain name? what is the solution?
A: Change the domain name in the sqlnet.ora file in
NAMES.DEFAULT_DOMAIN parameter.
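That fix can be sketched as a shell one-liner; the file location and both domain names below are sample values, and on a real client you would edit the sqlnet.ora under $ORACLE_HOME/network/admin:

```shell
# Hedged sketch: rewrite NAMES.DEFAULT_DOMAIN in a sqlnet.ora copy.
# The path and domain names are assumptions for illustration.
printf 'NAMES.DEFAULT_DOMAIN = old.example.com\n' > /tmp/sqlnet.ora

# Replace the whole parameter line with the new domain value.
sed -i 's/^NAMES\.DEFAULT_DOMAIN.*/NAMES.DEFAULT_DOMAIN = new.example.com/' /tmp/sqlnet.ora

cat /tmp/sqlnet.ora   # prints NAMES.DEFAULT_DOMAIN = new.example.com
```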
55) When a user comes to you and asks that a particular SQL query is
taking more time. How will you solve this?
A: Find the SQL query that causes the problem, then take a SQL trace
with an explain plan; it will show how the query is executed by Oracle,
and based on that report you can tune the query.
For example: a table has 10000 records but you want to fetch only 5
rows, yet the query makes Oracle do a full table scan. A full table scan
for only 5 rows is wasteful, so create an index on the relevant column;
this is one way to tune the database.
56) How to find how many database reside in Oracle server in query?
A: select count(*) from v$database; returns 1 for the current instance,
so to count all databases on the server check the /etc/oratab file (or
the pmon processes).
57) What process writes from data files to buffer cache?
58) Can you tell something about Oracle password Security?
A: If user authentication is managed by the database, security
administrators should develop a password security policy to maintain
database access security. For example, database users should be
required to change their passwords at regular intervals, and of course
when their passwords are revealed to others. By forcing a user to
modify passwords in such situations, unauthorized database access
can be reduced.
Set the ORA_ENCRYPT_LOGIN environment variable to TRUE on the
client machine.
Set the DBLINK_ENCRYPT_LOGIN server initialization parameter to
TRUE.
59) What is the function of redo log
A: The redo log is part of the physical structure of Oracle. Its basic
function is to record all changes made to database information.
Whenever an abnormal shutdown prevents the system from writing the
database changes, they can be recovered from the redo log, so the
changes are not lost.
60) What is SYSTEM tablespace and when is it created
61) How to DROP an Oracle Database?
A: You can do it at the OS level (by removing the database files) or go
to DBCA and click on delete database.
62) what is RAP?
63) Can you start a database without SPfile in oracle 9i?
A: Yes. An spfile is optional in Oracle 9i; the instance can be started
from a pfile instead (STARTUP PFILE=<path>).

64) Where we use bitmap index ?


A: Bitmap indexes are most appropriate for columns having
low distinct values
65) What are the different file types that are supported by
SQL*Loader?
A: SQL*Loader loads plain-text data files, commonly .txt, .dat and
.csv files, in fixed, variable or stream record format.
66) How does Oracle process a SQL statement?
A: When a statement is executed, a hash value of the statement text
is generated first, and that hash is looked up in the library cache. If
the hash matches an existing cursor, the statement is executed
directly (a soft parse); if it is not present, a hard parse is done and
then the statement is executed.
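The soft-parse/hard-parse decision can be mimicked in shell: hash the statement text and check a cache. The cache directory below is a toy stand-in for the library cache, not anything Oracle actually uses:

```shell
# Hedged sketch of the parse decision. The cache directory is an
# assumption standing in for the library cache.
CACHE=/tmp/libcache.$$
mkdir -p "$CACHE"

parse() {
  # Hash the statement text, then look the hash up in the cache.
  h=$(printf '%s' "$1" | md5sum | cut -d' ' -f1)
  if [ -e "$CACHE/$h" ]; then
    echo "soft parse"          # hash found: reuse the cached cursor
  else
    touch "$CACHE/$h"          # hash not found: hard parse, then cache it
    echo "hard parse"
  fi
}

parse 'select * from emp'   # prints "hard parse" (first time seen)
parse 'select * from emp'   # prints "soft parse" (hash now cached)
```

Identical statement text hashes to the same value, which is why literal differences (even in case or spacing) defeat cursor sharing in a real database.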
67) How do you estimate the size of a tablespace?
68) How do you query the structure of a single database, and of more
than one database?
69) How do you estimate the size of a database?
70) What is the difference between an spfile and an init.ora file?
72. Explain the relationship among database, tablespace and data
file. What is a schema?
A: An Oracle database consists of one or more tablespaces. Each
tablespace in an Oracle database consists of one or more files called
datafiles. A database's data is collectively stored in the datafiles that
constitute each tablespace of the database.
73. Name init.ora parameters which affect system performance.
A: Some of these parameters are:
DB_BLOCK_BUFFERS
SHARED_POOL_SIZE
SORT_AREA_SIZE

DBWR_IO_SLAVES
ROLLBACK_SEGMENTS
SORT_AREA_RETAINED_SIZE
DB_BLOCK_LRU_EXTENDED_STATISTICS
SHARED_POOL_RESERVED_SIZE

74. What is public database link


A: A database link is a schema object in one database used to access
objects in another database. When you create a database link with the
PUBLIC clause it is available to all users; if you omit this clause the
database link is private and available only to you.
75. What are the uses of rollback segment
A: The uses of Roll Back Segment are :
1. Transaction Rollback 2. Transaction Recovery 3. Read Consistency
76. What is the use of control file
77. What is the difference between SQLNET.ORA, TNSNAMES.ORA and
LISTENER.ORA?
A: Oracle uses all three files (tnsnames.ora, sqlnet.ora, listener.ora) for
network configuration.
78. What do the various .ora network files contain?
A: tnsnames.ora holds client-side connect descriptors, sqlnet.ora
holds network profile parameters for clients and servers, and
listener.ora configures the database listener.
79. What are materialized views? When are they used?
A: A materialized view stores the result of a query, so expensive
operations such as joins and aggregations do not need to be
re-executed. If a query can be satisfied with data in a materialized
view, the server transforms the query to reference the view rather
than the base tables.
80. How many memory layers are in the shared pool?

81. What is the database holding Capacity of Oracle ?


A: The database holding capacity of Oracle 10g is about 8 exabytes
(8 million terabytes).
82. How do you rename a database?
A: STEP 1: Backup the database.
STEP 2: Mount the database after a clean shutdown:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
STEP 3: Invoke the DBNEWID utility (nid) specifying the new DBNAME
from the command line using a user with SYSDBA privilege:
nid TARGET=sys/password@TSH1 DBNAME=TSH2
Assuming the validation is successful the utility prompts for
confirmation before performing the actions. Typical output may look
something like:
C:\oracle\920\bin>nid TARGET=sys/password@TSH1 DBNAME=TSH2
DBNEWID: Release 9.2.0.3.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to database TSH1 (DBID=1024166118)
Control Files in database:

C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL

C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL
C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL
Change database ID and database name TSH1 to TSH2? (Y/[N]) => Y
Proceeding with operation
Changing database ID from 1024166118 to 1317278975
Changing database name from TSH1 to TSH2
Control File C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL - modified
Control File C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL - modified
Control File C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL - modified


Datafile C:\ORACLE\ORADATA\TSH1\SYSTEM01.DBF - dbid changed,
wrote new name
Datafile C:\ORACLE\ORADATA\TSH1\UNDOTBS01.DBF - dbid changed,
wrote new name
Datafile C:\ORACLE\ORADATA\TSH1\CWMLITE01.DBF - dbid changed,
wrote new name
Control File C:\ORACLE\ORADATA\TSH1\CONTROL01.CTL - dbid
changed, wrote new name
Control File C:\ORACLE\ORADATA\TSH1\CONTROL02.CTL - dbid
changed, wrote new name
Control File C:\ORACLE\ORADATA\TSH1\CONTROL03.CTL - dbid
changed, wrote new name
Database name changed to TSH2.
Modify parameter file and generate a new password file before
restarting.
Database ID for database TSH2 changed to 1317278975.
All previous backups and archived redo logs for this database are
unusable.
Shut down database and open with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
STEP 4: Shutdown the database:
SHUTDOWN IMMEDIATE
STEP 5: Modify the DB_NAME parameter in the initialization parameter
file. The startup will result in an error but proceed anyway.
STARTUP MOUNT
ALTER SYSTEM SET DB_NAME=TSH2 SCOPE=SPFILE;

SHUTDOWN IMMEDIATE
STEP 6: Create a new password file:
orapwd file=c:\oracle\920\database\pwdTSH2.ora password=password
entries=10
STEP 7: Rename the SPFILE to match the new DBNAME.
STEP 8: If you are using Windows you must recreate the service so the
correct name and parameter file are used:
oradim -delete -sid TSH1
oradim -new -sid TSH2 -intpwd password -startmode a -pfile
c:\oracle\920\database\spfileTSH2.ora
If you are using UNIX/Linux simply reset the ORACLE_SID environment
variable:
ORACLE_SID=TSH2; export ORACLE_SID
STEP 9: Alter the listener.ora and tnsnames.ora setting to match the
new database name and restart the listener:
lsnrctl reload
STEP 10: Open the database with RESETLOGS:
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
STEP 11: Backup the database.
83. What are clusters
84. What is private database link
85. How can we determine the size of the database?
A: select sum(bytes)/1024/1024/1024 Size_in_GB from dba_data_files;
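The arithmetic in that query can be checked in shell; the byte total below is a sample value standing in for sum(bytes) over all datafiles:

```shell
# Hedged sketch of the unit conversion in the query above:
# bytes -> GB via three divisions by 1024.
total_bytes=34359738368     # sample value: 32 GiB worth of datafiles
echo $(( total_bytes / 1024 / 1024 / 1024 ))   # prints 32
```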
86. Can you name few DBMS packages and their use?
A: DBMS_METADATA
DBMS_STATS

DBMS_SUPPORT
DBMS_SESSION
87. In which view can I find the space in MB used by tables or
views?
88. Assuming today is Monday, how would you use the DBMS_JOB
package to schedule the execution of a given procedure owned by
SCOTT to start Wednesday at 9AM and to run subsequently every other
day at 2AM?
89. How can you check which user has which Role.
A: select * from dba_role_privs order by grantee;
90. How do you find whether the instance was started with a pfile or
spfile?
91. What are the Advantages of Using DBCA
A: You can use its wizards to guide you through a selection of options
providing an easy means of creating and tailoring your database. It
allows you to provide varying levels of detail. You can provide a
minimum of input and allow Oracle to make decisions for you,
eliminating the need to spend time deciding how best to set
parameters or structure the database. Optionally, it allows you to be
very specific about parameter settings and file allocations.
92. State new features of Oracle 10g.
93. What spfile/init.ora file parameter exists to force the CBO to make
the execution path of a given statement use an index, even if the index
scan may appear to be calculated as more costly?
A: The CBO (Cost Based Optimizer) generates an execution plan for a
SQL statement. The OPTIMIZER_INDEX_COST_ADJ parameter can be
set to make index access paths appear cheaper to the CBO, steering it
toward index scans. We can also change the following parameters to
affect CBO behaviour:
OPTIMIZER_SEARCH_LIMIT and OPTIMIZER_MAX_PERMUTATIONS
94. What is a redo log

A: The Primary function of the redo log is to record all changes made to
data.
95. Can we create an index on a LONG RAW column?
A: No, we cannot create an index on a LONG RAW column.
96. What does database do during mounting process?
A: During the database mount process, Oracle checks for the
existence of the control files mentioned in the parameter file, but it
won't check the contents of the control files; that is done when the
database is opened.
97. What is a database instance and Explain
98. What is Oracle table
99. What are the characteristics of data files
A: A data file can be associated with only one database. A data file
can be set to grow automatically (AUTOEXTEND) or be resized
manually. One or more data files form a logical unit of database
storage called a tablespace.
71) What are the different types of segments
72) What are the Advantages of Using DBCA
84) What are the types of database links
A: Private Database Link: You can create a private database link in a
specific schema of a database. Only the owner of a private database
link or PL/SQL subprograms in the schema can use a private database
link to access data and database objects in the corresponding remote
database.
Public Database Link : You can create a public database link for a
database. All users and PL/SQL subprograms in the database can use a
public database link to access data and database objects in the
corresponding remote database.
Global Database Link: When an Oracle network uses Oracle Names,
the name servers in the system automatically create and manage
global database links for every Oracle database in the network. All
users and PL/SQL subprograms in any database can use a global
database link to access data and database objects in the
corresponding remote database.
85) When can hash cluster used
A: Hash clusters are a better choice when a table is often queried with
equality queries. For such queries the specified cluster key value is
hashed, and the resulting hash key value points directly to the area
on disk that stores the specified rows.
86) What is cluster key
A: The related columns of the tables in a cluster are called the cluster
key.
87) What is a private synonym
88) What is an Oracle view
89) What are Schema Objects
A: Schema objects include tables, views, sequences, synonyms,
indexes, clusters, database triggers, procedures, functions packages
and database links.
90) Can a tablespace hold objects from different schemas
A: Yes, it can. The only requirement is that the tablespace has quota
assigned to any user who wants to store objects in it.
91) What is a segment
92) What is row chaining
A: In some circumstances, all of the data for a row in a table may not
fit in a single data block. When this occurs, the data for the row is
stored in a chain of data blocks (one or more) reserved for that
segment.
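One way to check for chained rows, assuming the table name `orders` is an invented example and the standard CHAINED_ROWS table has been created with utlchain.sql:

```sql
-- Gather statistics, then check the chained/migrated row count.
ANALYZE TABLE orders COMPUTE STATISTICS;

SELECT table_name, chain_cnt
FROM   user_tables
WHERE  table_name = 'ORDERS';

-- Or list the affected rows explicitly (requires utlchain.sql).
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
```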
93) What is an index and How it is implemented in Oracle database
A: Indexes are used both to improve performance and to ensure
uniqueness of a column. Oracle automatically creates an index when a
UNIQUE or PRIMARY KEY constraint clause is specified in a CREATE
TABLE command.
94) What is a schema
95) What does a control file contains
96) How to define data block size
A: The standard block size, which is set with the parameter
DB_BLOCK_SIZE, cannot be changed after creating the database.
Non-standard block sizes can be set later with the DB_nK_BLOCK_SIZE
parameters, and these can be changed.
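As a sketch (the tablespace name, file path, and sizes are illustrative), a tablespace can use a non-standard block size once a matching buffer cache is configured:

```sql
-- Allocate a buffer cache for 16K blocks, then create a
-- tablespace that uses the non-standard block size.
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE big_block_ts
  DATAFILE '/u01/oradata/big01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```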
97) What is hash cluster

98) What is index cluster


99) What are clusters
100) How are the index updates
101) What is a public synonym
102) What is an Oracle sequence
103) Can a view based on another view
104) What is the use of redo log information
A: The information in a redo log file is used only to recover the
database from a system or media failure that prevents database data
from being written to the database's data files.
105) How do you pin an object.
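One common way to pin an object in the shared pool is the DBMS_SHARED_POOL package; a minimal sketch (the object name is a made-up example):

```sql
-- Pin a package in the shared pool so it is not aged out.
-- The package is installed with ?/rdbms/admin/dbmspool.sql.
EXEC DBMS_SHARED_POOL.KEEP('SCOTT.MY_PACKAGE', 'P');

-- Unpin it later.
EXEC DBMS_SHARED_POOL.UNKEEP('SCOTT.MY_PACKAGE', 'P');
```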
106) Is it possible to configure primary server and stand by server on
different OS?
A: No. A standby database must run the same database version and
the same operating system as the primary.
107) Explain Oracle memory structure.

Oracle uses memory to store information such as the following:

Program code
Information about a connected session, even if it is not currently active
Information needed during program execution (for example, the current
state of a query from which rows are being fetched)
Information that is shared and communicated among Oracle processes
(for example, locking information)
Cached data that is also permanently stored on peripheral memory (for
example, data blocks and redo log entries)

The basic memory structures associated with Oracle include:

System Global Area (SGA), which is shared by all server and
background processes and holds the following:
o Database buffer cache
o Redo log buffer
o Shared pool
o Large pool (if configured)

Program Global Areas (PGA), which is private to each server and
background process; there is one PGA for each process. The PGA holds
the following:
o Stack areas
o Data areas

Figure 7-1, Oracle Memory Structures, illustrates the relationships
among these memory structures (figure not reproduced here).

108) What are memory structures in Oracle?


109) What is a datafile
110) What is data block
111) What are synonyms used for
A: Synonyms are used to mask the real name and owner of an object,
and to provide location transparency for remote objects.
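A short sketch (the object and synonym names are invented for illustration):

```sql
-- Private synonym: visible only to the creating schema.
CREATE SYNONYM emp_syn FOR scott.emp;

-- Public synonym: visible to all users.
CREATE PUBLIC SYNONYM emp FOR scott.emp;

-- Callers now query the synonym without knowing the owner.
SELECT COUNT(*) FROM emp_syn;
```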
112) What is a cluster Key ?
113) What are the basic element of Base configuration of an oracle
Database ?
A: It consists of
one or more data files,
one or more control files,
two or more redo log files.
The database contains
multiple users/schemas
one or more rollback segments
one or more tablespaces
Data dictionary tables
User objects (tables, indexes, views etc.)
The server that accesses the database consists of
SGA (database buffer cache, dictionary cache, redo log buffer,
shared SQL pool)
SMON (System MONitor)
PMON (Process MONitor)
LGWR (LoG WRiter)
DBWR (DataBase WRiter)
ARCH (ARCHiver)
CKPT (CheckPoinT)
RECO (RECOverer)
Dispatcher
User processes with associated PGAs
114) How Materialized Views Work with Object Types and Collections

11.Backup and Recovery Interview Questions


Some of the Common Backup and Recovery Interview Questions for
Oracle Database Administrator. These questions are common for both
Senior Oracle DBA or Junior DBA. I have compiled these questions
based upon the feedback I got from many candidates who have
attended interviews in various MNCs.
1. Which types of backups you can take in Oracle?
2. A database is running in NOARCHIVELOG mode then which type of
backups you can take?
A: If your database is in NOARCHIVELOG mode then you must take a
cold backup of your database.
3. Can you take partial backups if the Database is running in
NOARCHIVELOG mode?
4. Can you take Online Backups if the database is running in
NOARCHIVELOG mode?
A: No.
5. How do you bring the database in ARCHIVELOG mode from
NOARCHIVELOG mode?
6. You cannot shutdown the database for even some minutes, then in
which mode you should run the database?
7. Where should you place Archive logfiles, in the same disk where DB
is or another disk?
8. Can you take online backup of a Control file if yes, how?
9. What is a Logical Backup?
10. Should you take the backup of Logfiles if the database is running in
ARCHIVELOG mode?
11. Why do you take tablespaces in Backup mode?
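A tablespace is placed in backup mode so that OS-level copies of its datafiles taken during a hot backup remain usable for recovery; a minimal sketch (the tablespace name is illustrative):

```sql
-- Freeze the datafile checkpoint while copying files at OS level.
ALTER TABLESPACE users BEGIN BACKUP;

-- (copy the tablespace's datafiles with OS commands here)

ALTER TABLESPACE users END BACKUP;
```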
12. What is the advantage of RMAN utility?
Advantages over a traditional backup system:
1) Copies only the filled blocks, i.e. even if 1000 blocks are allocated to a datafile but only 500 are
filled with data, RMAN will create a backup of just those 500 filled blocks.
2) Incremental and cumulative backups.
3) Catalog and nocatalog options.
4) Detection of corrupted blocks during backup.
5) Can create and store backup and recovery scripts.
6) Increased performance through automatic parallelization (allocating channels) and less redo
generation.

What is a Channel?
A: A channel is a link that RMAN requires to connect to the target database. This link is
required when backup and recovery operations are performed and recorded. A channel can
be allocated manually or preconfigured by using automatic channel allocation.

13. How RMAN improves backup time?


A: Adding channels improves RMAN performance, but each channel
creates a session on the database and increases disk I/O, so configure
an appropriate number of channels.

14. Can you take Offline backups using RMAN?

Recall that an offline backup is a backup of the database while it is not running.
Hence, to perform our backup we will shutdown the database from RMAN and
then mount the database. We will perform the backup. Once the backup is
complete we will restart the database again. Here is an example of this process:
RMAN>shutdown immediate
RMAN>startup mount
RMAN>backup database;
RMAN>sql alter database open;

Once this process is complete, you have completed your first backup.

15. How do you see information about backups in RMAN?


A: RMAN> List Backup;
16. What is a Recovery Catalog?

A recovery catalog can be used to store metadata about multiple target databases. The tables
and views constituting a recovery catalog are owned by a recovery catalog schema. Oracle
recommends creating a recovery catalog schema in a separate dedicated database and not in
the target database. A database containing a recovery catalog schema is called a recovery
catalog database.

A: A recovery catalog is a repository of the metadata that is otherwise
held only in the control file of the target database. Whenever we take
backups using RMAN, a record of the backup is placed in the control
file in the form of reusable records, and also in the recovery catalog in
the form of tables, so that this information can be used when applying
the backups during recovery.
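Creating and using a catalog might look like this (the user name, tablespace, and connect string are illustrative assumptions):

```sql
-- In the catalog database, create a schema to own the catalog.
CREATE USER rman_cat IDENTIFIED BY rman_cat
  DEFAULT TABLESPACE cat_ts QUOTA UNLIMITED ON cat_ts;
GRANT recovery_catalog_owner TO rman_cat;
```

Then connect with `rman TARGET / CATALOG rman_cat/rman_cat@catdb` and run `CREATE CATALOG;` followed by `REGISTER DATABASE;` for each target.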
17. Should you place Recovery Catalog in the Same DB?
A: No; Oracle recommends keeping the recovery catalog in a separate
dedicated database, not in the target database. Backups can also be
taken without a catalog.
18. Can you use RMAN without Recovery catalog?
19. Can you take Image Backups using RMAN?
20. Can you use Backupsets created by RMAN with any other utility?
20. What is the difference between a hot backup and an RMAN backup?
A: To take either backup we should keep the database in archivelog
mode.
RMAN will take a backup of used database blocks only, whereas a hot
backup copies the existing physical database files completely.

21. Where RMAN keeps information of backups if you are using RMAN
without Catalog?
A: RMAN keeps information of backups in the control file.

22. You have taken a manual backup of a datafile using o/s. How RMAN
will know about it?
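RMAN will not know about an OS-level copy until it is cataloged; a sketch (the file path is an invented example):

```sql
-- From RMAN, record the manually taken datafile copy
-- in the repository so RMAN can use it for restores.
CATALOG DATAFILECOPY '/backups/users01.dbf';

-- Or catalog everything found under a directory.
CATALOG START WITH '/backups/';
```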
23. You want to retain only the last 3 backups of datafiles. How do you
go for it in RMAN?
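In RMAN this is typically done with a redundancy-based retention policy:

```sql
-- Keep the 3 most recent backups of each datafile;
-- older ones become obsolete.
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;

-- Review and remove obsolete backups.
REPORT OBSOLETE;
DELETE OBSOLETE;
```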
24. Which is more efficient Incremental Backups using RMAN or
Incremental Export?
25. Can you start and shutdown DB using RMAN?
26. How do you recover from the loss of datafile if the DB is running in
NOARCHIVELOG mode?
27. You lose one datafile and it does not contain important objects. The
important objects are there in other datafiles which are intact. How do
you proceed in this situation?
28. You lost some datafiles and you don't have any full backup and the
database was running in NOARCHIVELOG mode. What you can do now?
29. How do you recover from the loss of datafile if the DB is running in
ARCHIVELOG mode?
30. You lose one datafile and DB is running in ARCHIVELOG mode. You
have full database backup of 1 week old and partial backup of this
datafile which is just 1 day old. From which backup should you restore
this file?
31. You lose the controlfile. How do you recover from this?
32. The current logfile gets damaged. What you can do now?
33. What is a Complete Recovery?
34. What is Cancel Based, Time based and Change Based Recovery?
35. Some user has accidentally dropped one table and you realize this
after two days. Can you recover this table if the DB is running in
ARCHIVELOG mode?
36. Do you have to restore Datafiles manually from backups if you are
doing recovery using RMAN?
37. A database is running in ARCHIVELOG mode since last one month.
A datafile is added to the database last week. Many objects are created
in this datafile. After one week this datafile gets damaged before you
can take any backup. Now can you recover this datafile when you don't
have any backups?

38. How do you recover from the loss of a controlfile if you have
backup of controlfile?
39. Only some blocks are damaged in a datafile. Can you just recover
these blocks if you are using RMAN?
40. Some datafiles were there on a secondary disk and that disk has
become damaged and it will take some days to get a new disk. How
will you recover from this situation?
41. Have you faced any emergency situation. Tell us how you resolved
it?
42. At one time you lost the parameter file accidentally and you don't
have any backup. How will you recreate a new parameter file with the
parameters set to the previous values?
some more oracle dba interview questions
1. explain the difference between a hot backup and a cold backup and
the
benefits associated with each.
A:a hot backup is basically taking a backup of the database while it is
still up and running and it must be in archive log mode. a cold backup
is taking a backup of the database while it is shut down and does not
require being in archive log mode. the benefit of taking a hot backup is
that the database is still available for use while the backup is occurring
and you can recover the database to any ball in time. the benefit of
taking a cold backup is that it is typically easier to administer the
backup and recovery process. in addition, since you are taking cold
backups the database does not require being in archive log mode and
thus there will be a slight performance gain as the database is not
cutting archive logs to disk.
2. you have just had to restore from backup and do not have any
control files.
how would you go about bringing up this database?
A:i would create a text based backup control file, stipulating where on
disk all the data files were, and then issue the recover command with
the using backup control file clause.
3. how do you switch from an init.ora file to a spfile?
A:issue the create spfile from pfile command.


4. explain the difference between a data block, an extent and a
segment.
A:a data block is the smallest unit of logical storage for a database
object. as objects grow they take chunks of additional storage that are
composed of contiguous data blocks. these groupings of contiguous
data blocks are called extents. all the extents that an object takes
when grouped together are considered the segment of the database
object.
5. give two examples of how you might determine the structure of the
table
dept.
A:use the describe command or use the dbms_metadata.get_ddl
package.
6. where would you look for errors from the database engine?
A:in the alert log.
7. compare and contrast truncate and delete for a table.
A:both the truncate and delete commands have the desired outcome of
getting rid of all the rows in a table. the difference between the two is
that the truncate command is a ddl operation and just moves the high
water mark and produces no rollback. the delete command, on the
other hand, is a dml operation, which will produce rollback and thus
take longer to complete.
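A quick sketch of the two commands (the table name is invented):

```sql
-- DML: generates undo/redo per row, can be rolled back.
DELETE FROM staging_rows;
ROLLBACK;                -- rows come back

-- DDL: resets the high-water mark, commits implicitly,
-- cannot be rolled back.
TRUNCATE TABLE staging_rows;
```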
8. give the reasoning behind using an index.
A:faster access to data blocks in a table.
9. give the two types of tables involved in producing a star schema and
the type
of data they hold.
A:fact tables and dimension tables. a fact table contains
measurements while
dimension tables will contain data that will help describe the fact
tables.
10. what type of index should you use on a fact table?


A:a bitmap index.
11. give two examples of referential integrity constraints.
A:a primary key and a foreign key.
12. a table is classified as a parent table and you want to drop and recreate it.
how would you do this without affecting the children tables?
A:disable the foreign key constraint to the parent, drop the table, recreate the table, enable the foreign key constraint.
13. explain the difference between archivelog mode and noarchivelog
mode and
the benefits and disadvantages to each.
A:archivelog mode is a mode that you can put the database in for
creating a backup of all transactions that have occurred in the
database so that you can recover to any point in time. noarchivelog
mode is basically the absence of archivelog mode and has the
disadvantage of not being able to recover to any point in time.
noarchivelog mode does have the advantage of not having to write
transactions to an archive log and thus increases the performance of
the database slightly.
14. what command would you use to create a backup control file?
A:alter database backup controlfile to trace.
15. give the stages of instance startup to a usable state where normal
users may access it.
A:startup nomount - the instance is started
startup mount - the database is mounted
startup open - the database is opened
16. what column differentiates the v$ views to the gv$ views and how?
A:the inst_id column which indicates the instance in a rac environment
the information came from.
17. how would you go about generating an explain plan?

A:create a plan table with utlxplan.sql. use explain plan set
statement_id = 'tst1' into plan_table for a sql statement, then look at
the explain plan with utlxplp.sql or utlxpls.sql.
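On recent versions the same can be done with DBMS_XPLAN (the query itself is an arbitrary example):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

-- Pretty-print the plan from the default PLAN_TABLE.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```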
18. how would you go about increasing the buffer cache hit ratio?
A:use the buffer cache advisory over a given workload and then query
the v$db_cache_advice view. if a change was necessary then i would
use the alter system set db_cache_size command.
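A sketch of querying the advisory (the 8K block size and 512M value are illustrative assumptions):

```sql
-- Estimated physical reads for candidate cache sizes.
SELECT size_for_estimate,
       estd_physical_read_factor
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = 8192;

-- If a larger cache looks worthwhile:
ALTER SYSTEM SET db_cache_size = 512M;
```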
19. explain an ora-01555
A:you get this error when you get a snapshot too old within rollback. it
can usually be solved by increasing the undo retention or increasing
the size of the rollback segments. you should also look at the logic in
the application getting the error message.
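With automatic undo management the usual first step is raising undo retention (the value is illustrative):

```sql
-- Keep committed undo available for at least one hour.
ALTER SYSTEM SET undo_retention = 3600;
```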
20. explain the difference between $oracle_home and $oracle_base.
A:oracle_base is the root directory for oracle. oracle_home located
beneath oracle_base is where the oracle products reside.
12.INDEXES-ORACLE
All About Indexes in Oracle
What is an Index?
A:An index is used to increase read access performance. A book,
having an index, allows rapid access to a particular subject area within
that book. Indexing a database table provides rapid location of specific
rows within that table, where indexes are used to optimize the speed of
access to rows. When indexes are not used or are not matched by SQL
statements submitted to that database then a full table scan is
executed. A full table scan will read all the data in a table to find a
specific row or set of rows; this is extremely inefficient when there are
many rows in the table.
*It is often more efficient to full table scan small tables. The optimizer
will often assess full table scan on small tables as being more efficient
than reading both index and data space, particularly where a range
scan rather than an exact match would be used against the index. An
index of columns on a table contains a one-to-one ratio of rows
between index and indexed table, excluding binary key groupings
(more on this later). An index is effectively a separate table to that of
the data table. Tables and indexes are often referred to as data and index
spaces. An index contains the indexed columns plus a ROWID value for
each of those column combination rows. When an index is searched,
only the indexed columns rather than all the data in the row of a
table are scanned. The index space ROWID is then used to access the
table row directly in the data space. An index row is generally much
smaller than a table row, thus more index rows are stored in the same
physical space, a block. As a result less of the database is accessed
when using indexes as opposed to tables to search for data. This is the
reason why indexes enhance performance.
The Basic "How to" of Indexing
A:There are a number of important factors with respect to efficient and
effective creation
and use of indexing.
The number of indexes per table.
The number of table columns to be indexed.
What datatypes are sensible for use in columns to be indexed?
Types of indexes from numerous forms of indexes available.
How does SQL behave with indexes?
What should be indexed?
What should not be indexed?
Number of Indexes per Table
Whenever a table is inserted into, updated or deleted from, all indexes
plus the table must be updated. Thus if one places ten indexes onto a
single table then every change to that table requires an effective
change to a single table and ten indexes. The result is that
performance will be substantially degraded since one insert requires
eleven inserts to insert the new row into both data and index spaces.
Be frugal with indexing and be conscious of the potential ill as well as
the good effects produced by indexing. The general rule is that the
more dynamic a table is the fewer indexes it should have.

A dynamic table is a table that changes constantly, such as a transactions
table. Catalog tables on the other hand store information such as
customer details; customers change a lot less often than invoices.
Customer details are thus static in nature and over-indexing may be
advantageous to performance.
Number of Columns to Index
Composite indexes are indexes made up of multiple columns. Minimize
on the number of columns in a composite key. Create indexes with
single columns. Composite indexes are often a requirement of
traditional relational database table structures.
With the advent of object-oriented application programming
languages such as Java, sequence identifiers tend to be used to
identify every row in every table uniquely. The result is single column
indexes for every table. The only exceptions are generally many-to-many join resolution entities.
It may sometimes be better to exclude some of the lower-level or less
relevant columns from the index since at that level there may not be
much data, if there are not many rows to index it can be more efficient
to read a group of rows from the data space. For instance, a composite
index comprised of five columns could be reduced to the first three
columns based on a limited number of rows traversed as a result of
ignoring the last two columns. Look at your data carefully when
constructing indexes. The more columns you add to a composite index
the slower the search will be since there is a more complex
requirement for that search and the indexes get physically larger. The
benefit of indexes is that an index occupies less physical space than
the data. If the index gets so large that it is as large as the data then it
will become less efficient to read both the index and data spaces
rather than just the data space.
Most database experts recommend a maximum of three columns for
composite keys.
Datatypes of Index Columns
Integers make the most efficient indexes. Try to always create indexes
on columns with fixed length values. Avoid using VARCHAR2 and any
object data types. Use integers if possible or fixed length, short strings.
Also try to avoid indexing on dates and floating-point values. If using
dates be sure to use the internal representation or just the date, not
the date and the time. Use integer generating sequences wherever
possible to create consistently sequential values.
Types of Indexes
There are different types of indexes available in different databases.
These different indexes are applicable under specific circumstances,
generally for specific search patterns, for instance exact matches or
range matches.
The simplest form of indexing is no index at all, a heap structure. A
heap structure is effectively a collection of data units, rows, which is
completely unordered. The most commonly used index structure is a
B-tree (balanced tree). A B-tree index is best used for exact matches and
range searches. Other methods of indexing exist.
1. Hashing algorithms produce a pre-calculated best guess on general
row location and are best used for exact matches.
2. ISAM or Indexed Sequential Access Method indexes are not used in
Oracle.
3. Bitmaps contain maps of zeros and ones and can be highly
efficient access methods for read-only data.
4. There are other types of indexing which involve clustering of data
with indexes. In general every index type other than a B tree involves
overflow. When an index is required to overflow it means that the index
itself cannot be changed when rows are added, changed or removed.
The result is inefficiency because a search to find overflowing data
involves a search through originally indexed rows plus overflowing
rows. Overflow index space is normally not ordered. A B tree index can
be altered by changes to data. The only exception to a B tree index
coping with data changes in Oracle is deletion of rows. When rows are
deleted from a table, physical space previously used by the index for
the deleted row is never reclaimed unless the index is rebuilt.
Rebuilding of B tree indexes is far less common than that for other
types of indexes since non-B tree indexes simply overflow when row
changes are applied to them. Oracle has the following types of
indexing available.
B-tree index. A B-tree is a balanced tree; a general all-round index,
common in OLTP systems. An Oracle B-tree index has three layers, the
first two are branch node layers and the third, the lowest, contains leaf
nodes. The branch nodes contain pointers to the lower level branch or
leaf node. Leaf nodes contain index column values plus a ROWID
pointer to the table row. The branch and leaf nodes are optimally
arranged in the tree such that each branch will contain an equal
number of branch or leaf nodes.

Bitmap index. Bitmap containing binary representations for each
row. A zero implies that a row does not have a specified value and a 1
denotes that row having that value. Bitmaps are very susceptible to
overflow in OLTP systems and should only be used for read-only data
such as in Data Warehouses.
Function-Based index. Contains the result of an expression precalculated on each row in a table.
Index Organized Tables. Clusters index and data spaces together
physically for a single table and orders the merged physical space in
the order of the index, usually the primary key. An index organized
table is a table as well as an index, the two are merged.
Clusters. Partial merge of index and data spaces, ordered by an
index, not necessarily the primary key. A cluster is similar to an index
organized table except that it can be built on a join (more than a single
table). Clusters can be ordered using binary tree structures or hashing
algorithms. A cluster could also be viewed as a table as well as an
index since clustering partially merges index and data spaces.
Bitmap Join index. Creates a single bitmap for one table in a join.
Domain index. Specific to certain application types using
contextual or spatial data, amongst others.
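A few of these index types can be sketched as follows (the table and column names are invented):

```sql
-- B-tree (the default).
CREATE INDEX emp_name_ix ON emp (last_name);

-- Bitmap: low-cardinality, read-mostly data.
CREATE BITMAP INDEX emp_gender_bx ON emp (gender);

-- Function-based: precomputes the expression per row.
CREATE INDEX emp_upper_ix ON emp (UPPER(last_name));

-- Reverse key: spreads sequential key values across leaf blocks.
CREATE INDEX emp_id_rx ON emp (emp_id) REVERSE;
```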
Indexing Attributes
Various types of indexes can have specific attributes or behaviors
applied to them. These behaviors are listed below, some are Oracle
specific and some are not.
Ascending or Descending. Indexes can be order in either way.
Uniqueness. Indexes can be unique or non-unique. Primary keys
must be unique since a primary key uniquely identifies a row in a table
referentially. Other columns such as names sometimes have unique
constraints or indexes, or both, added to them.
Composites. A composite index is an index made up of more than
one column in a table.
Compression. Applies to B-tree indexes where duplicated prefix
values are removed. Compression speeds up data retrieval but can
slow down table changes.

Reverse keys. Bytes for all columns in the index are reversed,
retaining the order of the columns. Reverse keys can help performance
in clustered server environments (Oracle8i Parallel Server / RAC
Oracle9i) by ensuring that changes to similar key values will be better
physically spread. Reverse key indexing can apply to rows inserted into
OLTP tables using sequence integer generators, where each number is
very close to the previous number; when searching for and updating
rows with such sequence identifiers, consecutive keys would otherwise
contend for the same index blocks.
Null values. Null values are generally not included in indexes.
Sorting (NOSORT). This option is Oracle specific and does not sort
an index. This assumes that data space is physically ordered in the
desired manner.
What SQL does with Indexes
A:In general a SQL statement will attempt to match the structure of
itself to an index, the where clause ordering will attempt to match
available indexes and use them if possible. If no index is matched then
a full table scan will be executed. A table scan is extremely inefficient
for anything but the smallest of tables. Obviously if a table is read
sequentially, in physical order then an index is not required. A table
does not always need an index.
What to Index
A:Use indexes where frequent queries are performed with where and
order by clause matching the ordering of columns in those indexes.
Use indexing generally on larger tables or multi-table, complex joins.
Indexes are best created in the situations listed below.
Columns used in joins.
Columns used in where clauses.
Columns used in order by clauses.
In most relational databases the order-by clause is generally
executed on the subset retrieved by the where clause, not the entire
data space. Unfortunately this is not always the case for Oracle.
Traditionally the order-by clause should never include the columns
contained in the where clause. The only cases where the order-by
clause will include columns contained in the where clause are when
the where clause does not match any index in the database, or when
there is a requirement for the order-by clause to override the sort
order of the where clause, typically
in highly complex, multi-table joins.
The group-by clause can be enhanced by indexing when the range
of values being
grouped is small in relation to the number of rows in the table selected.
What not to Index
A:Indexes will degrade performance of inserts, updates and deletes,
sometimes
substantially.
Tables with a small number of rows.
Static tables.
Columns with a wide range of values.
Tables changed frequently and with a low amount of data retrieval.
Columns not used in data access query select statements.
Tuning Oracle SQL Code and Using
Indexes
What is SQL Tuning?
A:Tune SQL based on the nature of your application, OLTP or read-only
Data Warehouse. OLTP applications have high volumes of concurrent
transactions and are better served with exact match SQL where many
transactions compete for small amounts of data. Read-only Data
Warehouses require rapid access to large amounts of information at
once and thus many records are accessed at once, either by many or a
small number of sessions.
The EXPLAIN PLAN command can be used to compare different
versions of SQL statements, and tune your application SQL code as
required. When tuning OLTP applications utilize sharing of SQL code in
PL/SQL procedures and do not use triggers unless absolutely necessary.
Triggers can cause problems such as self-mutating transactions where
a table can expect a lock on a row already locked by the same
transaction. This is because triggers do not allow transaction
termination commands such as COMMIT and ROLLBACK. In short, do


not use triggers unless absolutely necessary.
The best approach to tuning of SQL statements is to seek out those
statements consuming the greatest amount of resources (CPU,
memory and I/O). The more often a SQL statement is executed the
more finely it should be tuned. Additionally SQL statements executed
many times more often than other SQL statements can cause issues
with locking. SQL code using bind variables will execute much faster
than code without them. Constant re-parsing of similar SQL code can
overstress CPU resources.
Tuning is not necessarily a never-ending process but can be iterative. It
is always best to take small steps and then assess improvements.
Small changes are always more manageable and easier to
implement. Use the Oracle performance views plus tools such as
TKPROF, tracing, Oracle Enterprise Manager, Spotlight, automated
scripts and other tuning tools or packages which aid in monitoring and
Oracle performance tuning. Detection of bottlenecks and SQL
statements causing problem is as important asresolving those issues.
In general tuning falls into three categories as listed below, in
order of importance and performance impact.
1. Data model tuning.
2. SQL statement tuning.
3. Physical hardware and Oracle database configuration.
Physical hardware and Oracle database configuration installation will,
other than bottleneck resolution, generally only affect performance by
between 10% and 20%. Most performance issues occur from poorly
developed SQL code, with little attention to SQL tuning during
development, probably causing around 80% of general system
performance problems. Poor data model design can cause even more
serious performance problems than SQL code but it is rare because
data models are usually built more carefully than SQL code. It is a
common problem that SQL code tuning is often left to DBA personnel.
DBA people are often trained as Unix administrators; SQL tuning is
conceptually a programming skill, and the programming skills of Unix
administrators are generally very low-level, if present at all, and very
different from the skill requirements of SQL coding.
How to Tune SQL
Indexing
A:When building and restructuring of indexing never be afraid of
removing unused indexes.The DBA should always be aware of where
indexes are used and how.
Oracle9i can automatically monitor index usage using the ALTER
INDEX index_name [NO]MONITORING USAGE command, with
subsequent selection of the USED column from the V$OBJECT_USAGE
view.
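A sketch of the monitoring cycle (the index name is invented):

```sql
-- Start recording whether the optimizer uses this index.
ALTER INDEX emp_name_ix MONITORING USAGE;

-- ... run the workload, then check:
SELECT index_name, used
FROM   v$object_usage
WHERE  index_name = 'EMP_NAME_IX';

-- Stop monitoring.
ALTER INDEX emp_name_ix NOMONITORING USAGE;
```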
Taking an already constructed application makes alterations of any
kind much more complex. Pay most attention to indexes most often
utilized. Some small static tables may not require indexes at all. Small
static lookup type tables can be cached but will probably be full
table-scanned by the optimizer anyway; table-scans may be adversely
affected by the addition of unused superfluous indexes. Sometimes
table-scans are faster than anything else. Consider the use of
clustering, hashing, bitmaps and even index-organized tables, but only
in data warehouses. Many installations use bitmap indexes in OLTP
databases; this is often a big mistake! If you have bitmap indexes in
your OLTP database and are having performance problems, get rid of
them! Oracle recommends the use of function-based indexes, assuming
of course there will not be too many of them. Do not allow too many
programmers to create their own indexes, especially not function-based
indexes, because you could end up with thousands of indexes.
Application developers tend to be unaware of what other developers
are doing and create indexes specific to a particular requirement where
indexes may be used in only one place. Some DBA control and
approval process must be maintained on the creation of new indexes.
Remember, every table change requires a simultaneous update to all
indexes created based on that table.
SQL Statement Reorganisation
SQL statement reorganization encompasses factors as listed below,
amongst others.
* WHERE clause filtering and join orders matching indexes.
* Use of hints is not necessarily a good idea; the optimizer is probably
smarter than you are.
* Simplistic SQL statements and minimizing the number of tables in joins.
* Use of bind variables to minimize re-parsing.
Buying lots of expensive RAM and sizing your shared pool and database
buffer cache to very large values may make performance worse. Firstly,
buffer cache reads are not as fast as you might think. Secondly, a large
SQL parsing shared pool, when not using bind variables in SQL code,
will simply fill up and take longer for every subsequent SQL statement
to search.
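The bind-variable point can be sketched in a hypothetical PL/SQL block (table and column names are illustrative): the first statement embeds a literal, so every distinct value of v_id produces a new cursor in the shared pool, while the second is parsed once and re-executed.

```sql
DECLARE
  v_id   NUMBER := 42;
  v_name VARCHAR2(100);
BEGIN
  -- Literal concatenation: a hard parse for every distinct value
  EXECUTE IMMEDIATE
    'SELECT name FROM student WHERE student_id = ' || TO_CHAR(v_id)
    INTO v_name;

  -- Bind variable: one shared cursor, re-used for every value of v_id
  EXECUTE IMMEDIATE
    'SELECT name FROM student WHERE student_id = :id'
    INTO v_name USING v_id;
END;
/
```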
Oracle9i has adopted various ANSI SQL standards. The ANSI join
syntax standard could cause SQL code performance problems. The
most effective approach to tuning Oracle SQL code is to remove rows
from joins using WHERE clause filtering prior to joining multiple tables;
obviously the larger tables, requiring the fewest rows, should be
filtered first. ANSI join syntax applies joins prior to WHERE clause
filtering; this could cause major performance problems. Nested
subquery SQL statements can be effective under certain
circumstances. However, nesting of SQL statements increases the level
of coding complexity, and sometimes looping cursors in PL/SQL
procedures can be used instead, assuming the required SQL is not
completely ad hoc.
Avoid ad-hoc SQL if possible. Any functionality, not necessarily
business logic, is always better provided at the application level.
Business logic, in the form of referential integrity, is usually best
catered for in Oracle using primary and foreign key constraints and
explicitly created indexes. Nested subquery SQL statements can
become over complicated and impossible for even the most brilliant
coder to tune to peak efficiency. The reason for this complexity could
lie in an over-normalized underlying data model. In general, use of
subqueries is a very effective approach to SQL code performance
tuning. However, the need to utilize intensive, multi-layered subquery
SQL code is often a symptom of a poor data model due to
requirements for highly complex SQL statement joins.
Some Oracle Tricks
Use [NOT] EXISTS Instead of [NOT] IN
In the example below, the second SQL statement utilizes an index in
the subquery because of the use of EXISTS as opposed to IN. IN builds
a set first and EXISTS does not; IN will not utilize indexes in the
subquery, whereas EXISTS will.
SELECT course_code, name FROM student
WHERE course_code NOT IN
  (SELECT course_code FROM maths_dept);
SELECT course_code, name FROM student
WHERE NOT EXISTS
  (SELECT course_code FROM maths_dept
   WHERE maths_dept.course_code = student.course_code);
In the example below the nesting of the two queries could be reversed
depending on which table has more rows. Also if the index is not used
or not available, reversal of the subquery is required if tableB has
significantly more rows than tableA.
DELETE FROM tableA WHERE NOT EXISTS
(SELECT columnB FROM tableB WHERE tableB.columnB =
tableA.columnA);
Use of value lists with the IN clause could indicate a missing entity.
That missing entity is probably static in nature and can potentially
be cached, although caching, with its increased data buffer size
requirements, is not necessarily a sensible solution.
SELECT country FROM countries WHERE continent IN
('africa','europe','north america');
Equijoins and Column Value Transformations
AND and = predicates are the most efficient. Avoid transforming
column values in any form, anywhere in a SQL statement, for instance
as shown below.
SELECT * FROM <table name> WHERE TO_NUMBER(BOX_NUMBER) =
94066;
And the example below is really bad! Typically indexes should not be
placed on descriptive fields such as names. A function-based index
would be perfect in this case but would probably be unnecessary if the
data model and the data values were better organized.
SELECT * FROM table1, table2
WHERE UPPER(SUBSTR(table1.name,1,1)) =
      UPPER(SUBSTR(table2.name,1,1));
Transforming literal values is not such a problem, but applying a
function to a table column in a WHERE clause will cause a table-scan
regardless of the presence of indexes. When using a function-based
index, the index stores the result of the function, which the optimizer
will recognize and utilize for subsequent SQL statements.
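If a transformation such as the UPPER() comparison above really is unavoidable, a function-based index is one hedge; a sketch with illustrative names (on Oracle8i/9i this also required QUERY_REWRITE_ENABLED = TRUE and the QUERY REWRITE privilege):

```sql
-- The index stores the computed expression, so the optimizer can use it
-- for predicates written in exactly the same form
CREATE INDEX idx_table1_upper_name ON table1 (UPPER(name));

-- This predicate can now be satisfied by the index instead of a table-scan
SELECT * FROM table1 WHERE UPPER(name) = 'SMITH';
```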
Datatypes
Try not to use mixed datatypes by setting columns to appropriate types
in the first place. If mixing of datatypes is essential do not assume
implicit type conversion because it will not always work, and implicit
type conversions can cause indexes not to be used. Function-based
indexes can be used to get around type conversion problems but this is
not the most appropriate use of function-based indexes. If types must
be mixed, try to place type conversion onto explicit values and not
columns. For instance, as shown below.
WHERE zip = TO_NUMBER('94066') as opposed to WHERE
TO_CHAR(zip) = '94066'
The DECODE Function
The DECODE function will ignore indexes completely. DECODE is very
useful in certain circumstances where nested looping cursors can
become extremely complex. DECODE is intended for specific
requirements and is not intended to be used prolifically, especially not
with respect to type conversions. Most SQL statements containing
DECODE function usage can be altered to use explicit literal selection
criteria perhaps using separate SELECT statements combined with
UNION clauses. Also Oracle9i contains a CASE statement which is much
more versatile than DECODE and may be much more efficient.
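For instance, a DECODE applied to a column in the WHERE clause might be rewritten as explicit predicates combined with UNION ALL; a hypothetical sketch:

```sql
-- DECODE on the column defeats any ordinary index on status
SELECT * FROM student
WHERE  DECODE(status, 'A', 1, 'P', 1, 0) = 1;

-- Equivalent rewrite: plain equality predicates that an index can serve
SELECT * FROM student WHERE status = 'A'
UNION ALL
SELECT * FROM student WHERE status = 'P';
```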
Join Orders
Always use indexes where possible; this applies to all tables accessed
in a join, both those in the driving and nested subqueries. Use indexes
between parent and child nested subqueries in order to utilize indexes
across a join. A common error is that of accessing a single row from the
driving table using an index and then accessing all rows from a nested
subquery table, where an index could be used in the nested subquery
table based on the row retrieved by the driving table.

Put where clause filtering before joins, especially for large tables where
only a few rows are required. Try to use indexes fetching the minimum
number of rows. The order in which tables are accessed in a query is
very important. Generally a SQL statement is parsed from top to
bottom and from left to right. The further into the join or SQL
statement, the fewer rows should be accessed. Even consider
constructing a SQL statement based on the largest table being the
driving table even if that largest table is not the logical driver of the
SQL statement. When a join is executed, each step overlays the
result of the previous part of the join; effectively each section (based
on each table) is executed sequentially. In the example below table1
has the most rows and table3 has the fewest rows.
SELECT * FROM table1, table2, table3
WHERE table1.index = table2.index AND table1.index = table3.index;
Hints
Use them? Perhaps. When circumstances force their use. Generally the
optimizer will succeed where you will not. Hints allow, amongst many
other things, forcing of index usage rather than full table-scans. The
optimizer will generally find full scans faster with small tables and
index usage faster with large tables. Therefore, if row numbers and the
ratios of rows between tables are known, then using hints will probably
make performance worse. One specific situation where hints could help
is generic applications where rows in specific tables can change
drastically depending on the installation. However, once again the
optimizer may still be more capable than any programmer or DBA.
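If a hint is warranted, it is written as a specially formed comment immediately after the SQL verb; a sketch with hypothetical table and index names:

```sql
-- Force use of a particular index rather than a full table-scan
SELECT /*+ INDEX(s student_name_idx) */ name
FROM   student s
WHERE  name LIKE 'SM%';

-- Or force a full scan where the optimizer insists on an index
SELECT /*+ FULL(s) */ COUNT(*) FROM student s;
```

Note that a misspelled hint is silently ignored as an ordinary comment, which makes hints easy to get wrong without noticing.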
Use INSERT, UPDATE and DELETE ... RETURNING
When values are produced and contained in insert or update
statements, such as new sequence numbers or expression results, and
those values are required in the same transaction by following SQL
statements, the values can be returned into variables and used later
without recalculation of expression being required. This tactic would be
used in PL/SQL and anonymous procedures. Examples are shown
below.
INSERT INTO table1 VALUES (test_id.NEXTVAL, 'Jim Smith', '100.12', 5*10)
RETURNING col1, col4 * 2 INTO :val1, :val2;
UPDATE table1 SET name = 'Joe Soap'
WHERE col1 = :val1 AND col2 = :val2;
DELETE FROM table1 RETURNING value INTO :array;
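In PL/SQL the returned value lands in a declared variable; a minimal hypothetical block (table, columns and sequence are illustrative):

```sql
DECLARE
  v_id table1.col1%TYPE;
BEGIN
  -- Capture the new sequence value without a second round trip
  INSERT INTO table1 (col1, name)
  VALUES (test_id.NEXTVAL, 'Jim Smith')
  RETURNING col1 INTO v_id;

  -- Re-use the returned key later in the same transaction
  UPDATE table1 SET name = 'Joe Soap' WHERE col1 = v_id;
  COMMIT;
END;
/
```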
Triggers
Simplification, or complete disabling and removal, of triggers is
advisable. Triggers are very slow and can cause many problems, both
performance-related and otherwise; they can even cause serious
locking and database errors. Triggers were originally intended for
messaging and are not intended for use as rules fired as a result of a
particular event.
Other databases have full-fledged rule systems aiding in the
construction of Rule-Based Expert systems. Oracle triggers are more
like database events than event triggered rules causing other
potentially recursive events to occur. Never use triggers to validate
referential integrity. Try not to use triggers at all. If you do use triggers
and have performance problems, their removal and recoding into
stored procedures, constraints or application code could solve a lot of
your problems.
Data Model Restructuring
Data restructuring involves partitioning, normalization and even
denormalization. Oracle recommends avoiding the use of primary and
foreign keys for validation of referential integrity, and suggests
validating referential integrity in application code. Application code is
more prone to error since it changes much faster. Avoiding constraint-based
referential integrity is not necessarily the best solution.
Referential integrity can be centrally controlled and altered in a single
place in the database. Placing referential integrity in application code
is less efficient due to increased network traffic and requires more code
to be maintained in potentially many applications.
All foreign keys should have indexes created explicitly, and these
indexes will often be used in general application SQL calls, not just for
validation of referential integrity. Oracle does not automatically create
an index when a foreign key constraint is created (it does for primary
key and unique constraints). Foreign keys not indexed using the
CREATE INDEX statement can cause table locks on the table containing
the foreign key. It is highly likely that foreign keys will often be used by
the optimizer in SQL statement WHERE clause filtering, if the data
model and the application are consistent with each other structurally,
which should be the case.
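The point can be sketched with hypothetical parent/child tables: the primary keys get indexes automatically, the foreign key column does not.

```sql
CREATE TABLE parent (
  parent_id NUMBER PRIMARY KEY          -- index created automatically
);

CREATE TABLE child (
  child_id  NUMBER PRIMARY KEY,
  parent_id NUMBER REFERENCES parent (parent_id)  -- no index created here
);

-- Explicit index on the foreign key: avoids locking problems on parent
-- DML and serves joins and WHERE clause filters on parent_id
CREATE INDEX child_parent_id_idx ON child (parent_id);
```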

Views
Do not use views as a basis for SQL statements taking a portion of the
rows defined by that view. Views were originally intended for security
and access privileges. No matter what where clause is applied to a
view, the entire view will always be executed first. On the same basis,
also avoid things such as SELECT * FROM a view, unnecessary GROUP BY
clauses and operations such as DISTINCT. DISTINCT will always select all
rows first. Do not create new entities using joined views; it is better to create
those intersection view joins as entities themselves; this applies
particularly in the case of many-to-many relationships. Also Data
Warehouses can benefit from materialized views which are views
actually containing data, refreshed by the operator at a chosen
juncture.
Maintenance of Current Statistics and Cost Based Optimization
Maintain current statistics as often as possible; this can be automated.
Cost-based optimization, using statistics, is much more efficient than
rule-based optimization.
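Statistics gathering can be scripted with the DBMS_STATS package; a hypothetical call for a single schema, suitable for a scheduled job:

```sql
-- Gather table, column and (cascade) index statistics for one schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'TESTAPPS',
    cascade          => TRUE,
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```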
Regeneration and Coalescing of Indexes
Indexes subjected to constant DML update activity can become skewed
and thus become less efficient over a period of time. Oracle B-tree
indexes imply that when a value is searched for within the tree, a
series of comparisons is made in order to depth-first traverse down
through the tree until the appropriate value is found.
Oracle B-tree indexes are usually only three levels deep, requiring three
hits on the index to find a resulting ROWID pointer to a table row. Index
searches, even into very large tables, and especially unique index hits
rather than index range scans, can be incredibly fast. In some
circumstances constant updating of a B-tree can cause the tree to
become more heavily loaded in some parts, or skewed. Thus some parts
of the tree require more intensive searching, which can be largely
fruitless. Indexes should sometimes be rebuilt, where the B-tree is
regenerated from scratch; this can be done online in Oracle9i, as shown
below.
ALTER INDEX index_name REBUILD ONLINE;
Coalescing of indexes is a more physical form of maintenance, merging
fragmented chunks of physical space. Index fragmentation is usually a
result of massive deletions from a table, at once or over time. Oracle
B-tree indexes do not reclaim physical space as a result of row
deletions. This can cause serious performance problems as a result of
fruitless searches and very large index files where much of the index
space is irrelevant. The command shown below will do a lot less than
rebuilding, but it can help. If PCTINCREASE is not set to zero for the
index, then extents could vary greatly in size and not be reusable; in
that case the only option is rebuilding.
ALTER INDEX index_name COALESCE;
17. DBA BASIC QUESTIONS
1) What are the prerequisites for connecting to a database?
> 1) Oracle Net services should be available on both server and client.
2) The listener should be up and running, in case of a remote connection.
[The Oracle listener starts up a dedicated server process and passes the
server protocol address to the client; using that address the client
connects to the server. Once the connection is established, the listener
connection is terminated.]
***********************************************************************
[AND]
1) Check whether the database server is installed on the server or not.
2) Client software should be installed on the client machine.
3) Check whether the database and client are running on the same
network or not (with the help of ping).
4) Ensure that the Oracle listener is up and running.
5) Connect to the server using the server protocol address.
2) Create a user "TESTAPPS" identified by "TESTAPPS"
> CREATE USER TESTAPPS IDENTIFIED BY TESTAPPS;
3) Connect to the DB using TESTAPPS from the DB node and MT node
> First grant the connect privileges to the user:
GRANT CONNECT, RESOURCE TO TESTAPPS;
4) How do you identify remote connections on a DB server?
> ps -ef|grep -i local [where LOCAL=NO it is a remote connection, at the
OS level]
5) How do you identify local connections on a DB server?
> ps -ef|grep -i local [where LOCAL=YES it is a local connection, at the
OS level]
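The two ps checks above can be sketched as a small shell fragment; here the `ps -ef` output is a canned sample (hypothetical PIDs and SID) so the counting logic is visible without a live database:

```shell
# Sample of what `ps -ef | grep -i local` might return on a DB server;
# real output depends on the instance name and platform.
ps_sample='oracle  4511     1  0 10:02 ?      00:00:01 oracleORCL (LOCAL=NO)
oracle  4523     1  0 10:03 ?      00:00:00 oracleORCL (LOCAL=NO)
oracle  4530  4100  0 10:04 pts/1  00:00:00 oracleORCL (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'

# LOCAL=NO marks remote (network) connections, LOCAL=YES marks local ones
remote_count=$(printf '%s\n' "$ps_sample" | grep -c 'LOCAL=NO')
local_count=$(printf '%s\n' "$ps_sample" | grep -c 'LOCAL=YES')
echo "remote=$remote_count local=$local_count"
```

On a live server, replace the sample with real `ps -ef` output.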
6) Can you connect remotely as a user on the DB server? If so, how?
> username/password@connect_string [with the help of a connect string]
7) Do you need access to the DB server to connect to a system schema?
> No, just knowing the username and password you can connect from the
client...
8) What is the difference between the "SYS" and "SYSTEM" schemas?
> SYS is the superuser.
SYS has the additional roles SYSDBA and SYSOPER.
Only SYS can perform the startup and shutdown operations.
> The SYSTEM schema owns certain additional data dictionary tables.
SYSTEM cannot use the startup and shutdown options.
9) What are the roles/privileges for a "SYS" schema?
> ***ROLES*** [select granted_role from dba_role_privs where
grantee='SYS';]
IMP_FULL_DATABASE, DELETE_CATALOG_ROLE,
RECOVERY_CATALOG_OWNER, DBA, EXP_FULL_DATABASE,
HS_ADMIN_ROLE, AQ_ADMINISTRATOR_ROLE, OEM_MONITOR,
RESOURCE, EXECUTE_CATALOG_ROLE, LOGSTDBY_ADMINISTRATOR,
AQ_USER_ROLE,
SCHEDULER_ADMIN, CONNECT, SELECT_CATALOG_ROLE,
GATHER_SYSTEM_STATISTICS,
OEM_ADVISOR
***PRIVILEGES*** [select privilege from dba_sys_privs where
grantee='SYS';]
CREATE ANY RULE
CREATE ANY EVALUATION CONTEXT
MANAGE ANY QUEUE
EXECUTE ANY PROCEDURE
ALTER ANY RULE
CREATE RULE SET
EXECUTE ANY EVALUATION CONTEXT
INSERT ANY TABLE
SELECT ANY TABLE
LOCK ANY TABLE
UPDATE ANY TABLE
DROP ANY RULE SET
ENQUEUE ANY QUEUE
EXECUTE ANY TYPE
CREATE RULE
ALTER ANY EVALUATION CONTEXT
CREATE EVALUATION CONTEXT
ANALYZE ANY
EXECUTE ANY RULE
DROP ANY EVALUATION CONTEXT
EXECUTE ANY RULE SET
ALTER ANY RULE SET


DEQUEUE ANY QUEUE
DELETE ANY TABLE
DROP ANY RULE
CREATE ANY RULE SET
SELECT ANY SEQUENCE
10) What are the roles/privileges for a SYSTEM schema?
> **ROLES**
[select granted_role from dba_role_privs where grantee='SYSTEM';]
AQ_ADMINISTRATOR_ROLE
DBA
>**PRIVILEGES***
[select privilege from dba_sys_privs where grantee='SYSTEM';]
GLOBAL QUERY REWRITE
CREATE MATERIALIZED VIEW
CREATE TABLE
UNLIMITED TABLESPACE
SELECT ANY TABLE
11) What is the difference between SYSDBA and DBA?
> SYSDBA is a system privilege that allows the startup and shutdown options.
> DBA is a role; it does not include the startup and shutdown options.
12) What is the difference between X$, V$, V_$ and GV$?
> X$ tables are internal fixed tables held in memory.
> GV$ views are the global versions used in RAC environments.
> V_$ are views built over the X$ fixed tables, and V$ are public synonyms
for the V_$ views; they are dynamic and only populated while the instance
is running.
13) How do you verify whether your DB is single node or multi-node?
> show parameter cluster;
If it shows FALSE, it is a single node.
14) From the MT, connect to the DB using "connect / as sysdba"
> "/ as sysdba" cannot connect to the database from the MT,
or
you can connect to the DB from the MT by creating a password file.
15) Is a listener required to be up and running for a local connection?
> NO
16) Is a listener required to be up and running for a remote connection?
> YES
17) How do you verify the background processes running from the
database?
> desc v$bgprocess
select * from v$bgprocess;
18) How do you verify whether an init.ora parameter is modifiable or not?
> desc v$parameter
select name, value, isses_modifiable, issys_modifiable,
isinstance_modifiable from v$parameter;
19) What are the various ways to modify an init.ora parameter?
> Two ways: static and dynamic.
Static: edit the text in the init.ora file.
Dynamic: ALTER SYSTEM SET parameter = value SCOPE=BOTH (or)
SCOPE=SPFILE (or) SCOPE=MEMORY;
20) Why is the init.ora file required?
> For starting the instance:
defining the parameter values [memory structures]
defining the control file locations
21) Why is a DB required to be in archivelog mode?
> To be able to recover the database.
22) List the total no. of objects available in an apps database with
respect to owner, object type and status.
23) When a DB is being started, where is the information being
recorded?
> The alert log file.
24) What is the information that is recorded at the time of DB
start?
....
25) What is the difference between an instance and a database?
> An INSTANCE is the group of memory structures and background
processes; it is volatile memory.
> A DATABASE is the physical storage: the collection of control files,
redo log files and data files.
26) How is an instance created?
> Whenever you issue the STARTUP command, the server process reads
the init.ora file, and from it reads the SGA size, the memory structure
values and the other parameter values; using these parameters the
instance is created.
27) What are the files essential to start an instance?
> The init.ora file, and internally its parameters.
> A remote connection also needs the init.ora file and a password file.
28) While the instance is being created, can users connect to the database?
> NO, normal users cannot connect to the database; only the SYS user
can connect.
> Normal users have no privileges to connect to the database in the
NOMOUNT and MOUNT stages.
29)Startup an instance. Connect as user Testapps. Verify the data
dictionary table dba_data_files. What are the data dictionary objects
that can be viewed
> normal users cannot connect to database...
30)After completing step 31, exit out of sql session, connect as "apps"
31) When the instance is created, how many Unix processes are created
and how do you view them?
> startup nomount
ps -ux
or the alert log file:
PMON started with pid=2, OS id=4482
PSP0 started with pid=3, OS id=4484
MMAN started with pid=4, OS id=4486
DBW0 started with pid=5, OS id=4488
LGWR started with pid=6, OS id=4490
CKPT started with pid=7, OS id=4492
SMON started with pid=8, OS id=4494
RECO started with pid=9, OS id=4496
MMON started with pid=10, OS id=4498
MMNL started with pid=11, OS id=4500
Ten background processes plus one server process and one client
process: 12 in total.
32) When the database is mounted, how many Unix processes are
created and how do you view them?
> ps -ux or the alert log file.
> In the mount stage (alter database mount), no extra processes are
created.
33) How do you mount a database after an instance is created? What
are the messages recorded while changing to mount stage?
> alter database mount
> Setting recovery target incarnation to 1
> Successful mount of redo thread 1, with mount id 4186671158
> Database mounted in Exclusive Mode
> Completed: alter database mount
34) What are the data dictionary objects that can be viewed in a mount
stage.
35)How do you open a database after an instance is mounted. What
are the messages recorded while changing to open stage
> alter database open
> opening redolog files.
> Successful open of redo files.
> MTTR advisory is disabled because FAST_START_MTTR_TARGET is not
set
> SMON: enabling cache recovery
> Successfully onlined Undo Tablespace 1
> SMON: enabling tx recovery
> Database Characterset is US7ASCII
> Completed: alter database open
************************************************************************
****
Questions without answers: 13, 22, 29, 30, 34
18. Tablespaces - Oracle DBA
Tablespaces, Datafiles, and Control Files
This chapter describes tablespaces, the primary logical database
structures of any Oracle database, and the physical datafiles that
correspond to each tablespace.
This chapter contains the following topics:
* Introduction to Tablespaces, Datafiles, and Control Files
* Overview of Tablespaces
* Overview of Datafiles
* Overview of Control Files
Introduction to Tablespaces, Datafiles, and Control Files


Oracle stores data logically in tablespaces and physically in datafiles
associated with the corresponding tablespace. Figure 3-1 illustrates
this relationship.

Figure 3-1 Datafiles and Tablespaces


Description of "Figure 3-1 Datafiles and Tablespaces"
Databases, tablespaces, and datafiles are closely related, but they
have important differences:
* An Oracle database consists of one or more logical storage units
called tablespaces, which collectively store all of the database's data.
* Each tablespace in an Oracle database consists of one or more files
called datafiles, which are physical structures that conform to the
operating system in which Oracle is running.
* A database's data is collectively stored in the datafiles that constitute
each tablespace of the database. For example, the simplest Oracle
database would have one tablespace and one datafile. Another
database can have three tablespaces, each consisting of two datafiles
(for a total of six datafiles).
Oracle-Managed Files
Oracle-managed files eliminate the need for you, the DBA, to directly
manage the operating system files comprising an Oracle database. You
specify operations in terms of database objects rather than filenames.
Oracle internally uses standard file system interfaces to create and
delete files as needed for the following database structures:
* Tablespaces
* Redo log files
* Control files
Through initialization parameters, you specify the file system directory
to be used for a particular type of file. Oracle then ensures that a
unique file, an Oracle-managed file, is created and deleted when no
longer needed.
Allocate More Space for a Database
The size of a tablespace is the size of the datafiles that constitute the
tablespace. The size of a database is the collective size of the
tablespaces that constitute the database.
You can enlarge a database in three ways:
* Add a datafile to a tablespace
* Add a new tablespace
* Increase the size of a datafile
When you add another datafile to an existing tablespace, you increase
the amount of disk space allocated for the corresponding tablespace.
Figure 3-2 illustrates this kind of space increase.

Figure 3-2 Enlarging a Database by Adding a Datafile to a Tablespace

Description of "Figure 3-2 Enlarging a Database by Adding a Datafile to
a Tablespace"
Alternatively, you can create a new tablespace (which contains at least
one additional datafile) to increase the size of a database. Figure 3-3
illustrates this.

Figure 3-3 Enlarging a Database by Adding a New Tablespace


Description of "Figure 3-3 Enlarging a Database by Adding a New
Tablespace"
The third option for enlarging a database is to change a datafile's size
or let datafiles in existing tablespaces grow dynamically as more space
is needed. You accomplish this by altering existing files or by adding
files with dynamic extension properties. Figure 3-4 illustrates this.

Figure 3-4 Enlarging a Database by Dynamically Sizing Datafiles


Description of "Figure 3-4 Enlarging a Database by Dynamically Sizing
Datafiles"
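The three ways of enlarging a database sketched above map onto SQL as follows (tablespace names, file paths and sizes are illustrative):

```sql
-- 1. Add a datafile to an existing tablespace
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/orcl/users02.dbf' SIZE 500M;

-- 2. Add a new tablespace
CREATE TABLESPACE history
  DATAFILE '/u01/oradata/orcl/history01.dbf' SIZE 1G;

-- 3. Resize an existing datafile, or let it extend on demand
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' RESIZE 2G;
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;
```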
Overview of Tablespaces
A database is divided into one or more logical storage units called
tablespaces. Tablespaces are divided into logical units of storage called
segments, which are further divided into extents. Extents are a
collection of contiguous blocks.
This section includes the following topics about tablespaces:
* Bigfile Tablespaces
* The SYSTEM Tablespace
* The SYSAUX Tablespace
* Undo Tablespaces
* Default Temporary Tablespace
* Using Multiple Tablespaces
* Managing Space in Tablespaces
* Multiple Block Sizes
* Online and Offline Tablespaces
* Read-Only Tablespaces
* Temporary Tablespaces for Sort Operations
* Transport of Tablespaces Between Databases

See Also:
o Chapter 2, "Data Blocks, Extents, and Segments" for more
information about segments and extents
o Oracle Database Administrator's Guide for detailed information on
creating and configuring tablespaces
Bigfile Tablespaces
Oracle lets you create bigfile tablespaces. This allows Oracle Database
to contain tablespaces made up of single large files rather than
numerous smaller ones. This lets Oracle Database utilize the ability of
64-bit systems to create and manage ultralarge files. The consequence
of this is that Oracle Database can now scale up to 8 exabytes in size.
With Oracle-managed files, bigfile tablespaces make datafiles
completely transparent for users. In other words, you can perform
operations on tablespaces, rather than the underlying datafile. Bigfile


tablespaces make the tablespace the main unit of the disk space
administration, backup and recovery, and so on. Bigfile tablespaces
also simplify datafile management with Oracle-managed files and
Automatic Storage Management by eliminating the need for adding
new datafiles and dealing with multiple files.
The system default is to create a smallfile tablespace, which is the
traditional type of Oracle tablespace. The SYSTEM and SYSAUX
tablespace types are always created using the system default type.
Bigfile tablespaces are supported only for locally managed tablespaces
with automatic segment-space management. There are two
exceptions: locally managed undo and temporary tablespaces can be
bigfile tablespaces, even though their segments are manually
managed.
An Oracle database can contain both bigfile and smallfile tablespaces.
Tablespaces of different types are indistinguishable in terms of
execution of SQL statements that do not explicitly refer to datafiles.
You can create a group of temporary tablespaces that let a user
consume temporary space from multiple tablespaces. A tablespace
group can also be specified as the default temporary tablespace for the
database. This is useful with bigfile tablespaces, where you could need
a lot of temporary tablespace for sorts.
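Creating a bigfile tablespace is the ordinary CREATE TABLESPACE statement with the BIGFILE keyword; a sketch with hypothetical names, paths and sizes:

```sql
-- One large datafile instead of many small ones; bigfile tablespaces must
-- be locally managed with automatic segment-space management
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/oradata/orcl/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE 32T;

-- Space operations can address the tablespace rather than the datafile
ALTER TABLESPACE big_data RESIZE 20G;
```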
Benefits of Bigfile Tablespaces
* Bigfile tablespaces can significantly increase the storage capacity of
an Oracle database. Smallfile tablespaces can contain up to 1024 files,
but bigfile tablespaces contain only one file that can be 1024 times
larger than a smallfile tablespace. The total tablespace capacity is the
same for smallfile tablespaces and bigfile tablespaces. However,
because there is a limit of 64K datafiles for each database, a database
can contain 1024 times more bigfile tablespaces than smallfile
tablespaces, so bigfile tablespaces increase the total database
capacity by 3 orders of magnitude. In other words, 8 exabytes is the
maximum size of an Oracle database when bigfile tablespaces are
used with the maximum block size (32 KB).
* Bigfile tablespaces simplify management of datafiles in ultra large
databases by reducing the number of datafiles needed. You can also
adjust parameters to reduce the SGA space required for datafile
information and the size of the control file.
* They simplify database management by providing datafile
transparency.
Considerations with Bigfile Tablespaces


* Bigfile tablespaces are intended to be used with Automatic Storage
Management or other logical volume managers that support
dynamically extensible logical volumes and striping or RAID.
* Avoid creating bigfile tablespaces on a system that does not support
striping because of negative implications for parallel execution and
RMAN backup parallelization.
* Avoid using bigfile tablespaces if there could possibly be no free
space available on a disk group, and the only way to extend a
tablespace is to add a new datafile on a different disk group.
* Using bigfile tablespaces on platforms that do not support large file
sizes is not recommended and can limit tablespace capacity. Refer to
your operating system specific documentation for information about
maximum supported file sizes.
* Performance of database opens, checkpoints, and DBWR processes
should improve if data is stored in bigfile tablespaces instead of
traditional tablespaces. However, increasing the datafile size might
increase time to restore a corrupted file or create a new datafile.
The SYSTEM Tablespace
Every Oracle database contains a tablespace named SYSTEM, which
Oracle creates automatically when the database is created. The
SYSTEM tablespace is always online when the database is open.
To take advantage of the benefits of locally managed tablespaces, you
can create a locally managed SYSTEM tablespace, or you can migrate
an existing dictionary managed SYSTEM tablespace to a locally
managed format.
In a database with a locally managed SYSTEM tablespace, dictionary
managed tablespaces cannot be created. It is possible to plug in a
dictionary managed tablespace using the transportable feature, but it
cannot be made writable.
Note:
If a tablespace is locally managed, then it cannot be reverted back to
being dictionary managed.
The Data Dictionary
The SYSTEM tablespace always contains the data dictionary tables for
the entire database. The data dictionary tables are stored in datafile 1.
PL/SQL Program Units Description


All data stored on behalf of stored PL/SQL program units (that is,
procedures, functions, packages, and triggers) resides in the SYSTEM
tablespace. If the database contains many of these program units, then
the database administrator must provide the space the units need in
the SYSTEM tablespace.
The SYSAUX Tablespace
The SYSAUX tablespace is an auxiliary tablespace to the SYSTEM
tablespace. Many database components use the SYSAUX tablespace as
their default location to store data. Therefore, the SYSAUX tablespace
is always created during database creation or database upgrade.
The SYSAUX tablespace provides a centralized location for database
metadata that does not reside in the SYSTEM tablespace. It reduces
the number of tablespaces created by default, both in the seed
database and in user-defined databases.
During normal database operation, the Oracle database server does
not allow the SYSAUX tablespace to be dropped or renamed.
Transporting the SYSAUX tablespace is not supported.
Note:
If the SYSAUX tablespace is unavailable, such as due to a media
failure, then some database features might fail.
Undo Tablespaces
Undo tablespaces are special tablespaces used solely for storing undo
information. You cannot create any other segment types (for example,
tables or indexes) in undo tablespaces. Each database contains zero or
more undo tablespaces. In automatic undo management mode, each
Oracle instance is assigned one (and only one) undo tablespace. Undo
data is managed within an undo tablespace using undo segments that
are automatically created and maintained by Oracle.
When the first DML operation is run within a transaction, the
transaction is bound (assigned) to an undo segment (and therefore to a
transaction table) in the current undo tablespace. In rare
circumstances, if the instance does not have a designated undo
tablespace, the transaction binds to the system undo segment.

Caution:
Do not run any user transactions before creating the first undo
tablespace and taking it online.
Each undo tablespace is composed of a set of undo files and is locally
managed. Like other types of tablespaces, undo blocks are grouped in
extents and the status of each extent is represented in the bitmap. At
any point in time, an extent is either allocated to (and used by) a
transaction table, or it is free.
You can create a bigfile undo tablespace.
Creation of Undo Tablespaces
A database administrator creates undo tablespaces individually, using
the CREATE UNDO TABLESPACE statement. It can also be created when
the database is created, using the CREATE DATABASE statement. A set
of files is assigned to each newly created undo tablespace. Like regular
tablespaces, attributes of undo tablespaces can be modified with the
ALTER TABLESPACE statement and dropped with the DROP TABLESPACE
statement.
Note:
An undo tablespace cannot be dropped if it is being used by any
instance or contains any undo information needed to recover
transactions.
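A minimal creation example (the tablespace name, file path, and size are illustrative):

```sql
CREATE UNDO TABLESPACE undotbs_02
  DATAFILE '/u01/oracle/prod/undotbs02.dbf' SIZE 100M
  AUTOEXTEND ON;
```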
Assignment of Undo Tablespaces
You assign an undo tablespace to an instance in one of two ways:
* At instance startup. You can specify the undo tablespace in the
initialization file or let the system choose an available undo tablespace.
* While the instance is running. Use ALTER SYSTEM SET
UNDO_TABLESPACE to replace the active undo tablespace with another
undo tablespace. This method is rarely used.
You can add more space to an undo tablespace by adding more
datafiles to the undo tablespace with the ALTER TABLESPACE
statement.
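The two assignment methods described above, and adding space, might look like this (tablespace and file names are illustrative):

```sql
-- In the initialization parameter file, at instance startup:
--   UNDO_MANAGEMENT = AUTO
--   UNDO_TABLESPACE = undotbs_01
-- While the instance is running:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
-- Adding space to an undo tablespace:
ALTER TABLESPACE undotbs_02
  ADD DATAFILE '/u02/oracle/prod/undotbs02b.dbf' SIZE 100M;
```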
You can have more than one undo tablespace and switch between
them. Use the Database Resource Manager to establish user quotas for
undo tablespaces. You can specify the retention period for undo
information.
Default Temporary Tablespace
When the SYSTEM tablespace is locally managed, you must define at
least one default temporary tablespace when creating a database. A
locally managed SYSTEM tablespace cannot be used for default
temporary storage.
If SYSTEM is dictionary managed and if you do not define a default
temporary tablespace when creating the database, then SYSTEM is still
used for default temporary storage. However, you will receive a
warning in ALERT.LOG saying that a default temporary tablespace is
recommended and will be necessary in future releases.
How to Specify a Default Temporary Tablespace
Specify default temporary tablespaces when you create a database,
using the DEFAULT TEMPORARY TABLESPACE extension to the CREATE
DATABASE statement.
If you drop all default temporary tablespaces, then the SYSTEM
tablespace is used as the default temporary tablespace.
You can create bigfile temporary tablespaces. A bigfile temporary
tablespace uses tempfiles instead of datafiles.
Note:
You cannot make a default temporary tablespace permanent or take it
offline.
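For example (the tablespace name is illustrative), the default temporary tablespace of an existing database can be changed with a single statement; at database creation, the equivalent is the DEFAULT TEMPORARY TABLESPACE clause of CREATE DATABASE:

```sql
-- Change the default temporary tablespace of an existing database:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
```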
Using Multiple Tablespaces
A very small database may need only the SYSTEM tablespace;
however, Oracle recommends that you create at least one additional
tablespace to store user data separate from data dictionary
information. This gives you more flexibility in various database
administration operations and reduces contention among dictionary
objects and schema objects for the same datafiles.
You can use multiple tablespaces to perform the following tasks:
* Control disk space allocation for database data
* Assign specific space quotas for database users
* Control availability of data by taking individual tablespaces online or
offline
* Perform partial database backup or recovery operations
* Allocate data storage across devices to improve performance
A database administrator can use tablespaces to do the following
actions:
* Create new tablespaces
* Add datafiles to tablespaces
* Set and alter default segment storage settings for segments created
in a tablespace
* Make a tablespace read only or read/write
* Make a tablespace temporary or permanent
* Rename tablespaces
* Drop tablespaces
Managing Space in Tablespaces
Tablespaces allocate space in extents. Tablespaces can use two
different methods to keep track of their free and used space:
* Locally managed tablespaces: Extent management by the tablespace
* Dictionary managed tablespaces: Extent management by the data
dictionary
When you create a tablespace, you choose one of these methods of
space management. Later, you can change the management method
with the DBMS_SPACE_ADMIN PL/SQL package.
Note:
If you do not specify extent management when you create a
tablespace, then the default is locally managed.
Locally Managed Tablespaces
A tablespace that manages its own extents maintains a bitmap in each
datafile to keep track of the free or used status of blocks in that
datafile. Each bit in the bitmap corresponds to a block or a group of
blocks. When an extent is allocated or freed for reuse, Oracle changes
the bitmap values to show the new status of the blocks. These changes
do not generate rollback information because they do not update
tables in the data dictionary (except for special cases such as
tablespace quota information).
Locally managed tablespaces have the following advantages over
dictionary managed tablespaces:
* Local management of extents automatically tracks adjacent free
space, eliminating the need to coalesce free extents.
* Local management of extents avoids recursive space management
operations. Such recursive operations can occur in dictionary managed
tablespaces if consuming or releasing space in an extent results in
another operation that consumes or releases space in a data dictionary
table or rollback segment.
The sizes of extents that are managed locally can be determined
automatically by the system. Alternatively, all extents can have the
same size in a locally managed tablespace and override object storage
options.
The LOCAL clause of the CREATE TABLESPACE or CREATE TEMPORARY
TABLESPACE statement is specified to create locally managed
permanent or temporary tablespaces, respectively.
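As a sketch (names and paths are illustrative), the two extent-sizing choices look like this:

```sql
-- Locally managed tablespace with system-determined extent sizes
-- (AUTOALLOCATE is the default):
CREATE TABLESPACE users_data
  DATAFILE '/u02/oracle/prod/users_data01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
-- Or with all extents the same size, overriding object storage options:
CREATE TABLESPACE users_uniform
  DATAFILE '/u02/oracle/prod/users_uniform01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```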
Segment Space Management in Locally Managed Tablespaces
When you create a locally managed tablespace using the CREATE
TABLESPACE statement, the SEGMENT SPACE MANAGEMENT clause lets
you specify how free and used space within a segment is to be
managed. Your choices are:
* AUTO
This keyword tells Oracle that you want to use bitmaps to manage the
free space within segments. A bitmap, in this case, is a map that
describes the status of each data block within a segment with respect
to the amount of space in the block available for inserting rows. As
more or less space becomes available in a data block, its new state is
reflected in the bitmap. Bitmaps enable Oracle to manage free space
more automatically; thus, this form of space management is called
automatic segment-space management.
Locally managed tablespaces using automatic segment-space
management can be created as smallfile (traditional) or bigfile
tablespaces. AUTO is the default.
* MANUAL
This keyword tells Oracle that you want to use free lists for managing
free space within segments. Free lists are lists of data blocks that have
space available for inserting rows.
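A minimal example combining both clauses (names and paths are illustrative):

```sql
CREATE TABLESPACE app_data
  DATAFILE '/u02/oracle/prod/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```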
Dictionary Managed Tablespaces
If you created your database with an earlier version of Oracle, then you
could be using dictionary managed tablespaces. For a tablespace that
uses the data dictionary to manage its extents, Oracle updates the
appropriate tables in the data dictionary whenever an extent is
allocated or freed for reuse. Oracle also stores rollback information
about each update of the dictionary tables. Because dictionary tables
and rollback segments are part of the database, the space that they
occupy is subject to the same space management operations as all
other data.
Multiple Block Sizes
Oracle supports multiple block sizes in a database. The standard block
size is used for the SYSTEM tablespace. This is set when the database
is created and can be any valid size. You specify the standard block
size by setting the initialization parameter DB_BLOCK_SIZE. Legitimate
values are from 2K to 32K.
In the initialization parameter file or server parameter, you can
configure subcaches within the buffer cache for each of these block
sizes. Subcaches can also be configured while an instance is running.
You can create tablespaces having any of these block sizes. The
standard block size is used for the system tablespace and most other
tablespaces.
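For instance (sizes and names are illustrative), a nonstandard block size requires a matching subcache before the tablespace can be created:

```sql
-- Configure a subcache for 16K blocks, then create a tablespace
-- that uses that block size:
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 64M;
CREATE TABLESPACE dw_data
  DATAFILE '/u03/oracle/prod/dw_data01.dbf' SIZE 500M
  BLOCKSIZE 16K;
```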
Note:
All partitions of a partitioned object must reside in tablespaces of a
single block size.
Multiple block sizes are useful primarily when transporting a
tablespace from an OLTP database to an enterprise data warehouse.
This facilitates transport between databases of different block sizes.
Online and Offline Tablespaces
A database administrator can bring any tablespace other than the
SYSTEM tablespace online (accessible) or offline (not accessible)
whenever the database is open. The SYSTEM tablespace is always
online when the database is open because the data dictionary must
always be available to Oracle.
A tablespace is usually online so that the data contained within it is
available to database users. However, the database administrator can
take a tablespace offline for maintenance or backup and recovery
purposes.
Bringing Tablespaces Offline
When a tablespace goes offline, Oracle does not permit any
subsequent SQL statements to reference objects contained in that
tablespace. Active transactions with completed statements that refer
to data in that tablespace are not affected at the transaction level.
Oracle saves rollback data corresponding to those completed
statements in a deferred rollback segment in the SYSTEM tablespace.
When the tablespace is brought back online, Oracle applies the
rollback data to the tablespace, if needed.
When a tablespace goes offline or comes back online, this is recorded
in the data dictionary in the SYSTEM tablespace. If a tablespace is
offline when you shut down a database, the tablespace remains offline
when the database is subsequently mounted and reopened.
You can bring a tablespace online only in the database in which it was
created because the necessary data dictionary information is
maintained in the SYSTEM tablespace of that database. An offline
tablespace cannot be read or edited by any utility other than Oracle.
Thus, offline tablespaces cannot be transferred to other databases.
Oracle automatically switches a tablespace from online to offline when
certain errors are encountered. For example, Oracle switches a
tablespace from online to offline when the database writer process,
DBWn, fails in several attempts to write to a datafile of the tablespace.
Users trying to access tables in the offline tablespace receive an error.
If the problem that causes this disk I/O to fail is media failure, you must
recover the tablespace after you correct the problem.
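Taking a tablespace offline and back online is a pair of simple statements (the tablespace name is illustrative):

```sql
ALTER TABLESPACE users OFFLINE NORMAL;
-- ... perform maintenance or backup work ...
ALTER TABLESPACE users ONLINE;
```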
Use of Tablespaces for Special Procedures
If you create multiple tablespaces to separate different types of data,
you can take specific tablespaces offline for various procedures. Other
tablespaces remain online, and the information in them is still available
for use. However, special circumstances can occur when tablespaces
are taken offline. For example, if two tablespaces are used to separate
table data from index data, the following is true:
* If the tablespace containing the indexes is offline, then queries can
still access table data because queries do not require an index to
access the table data.
* If the tablespace containing the tables is offline, then the table data
in the database is not accessible because the tables are required to
access the data.
If Oracle has enough information in the online tablespaces to run a
statement, it does so. If it needs data in an offline tablespace, then it
causes the statement to fail.
Read-Only Tablespaces
The primary purpose of read-only tablespaces is to eliminate the need
to perform backup and recovery of large, static portions of a database.
Oracle never updates the files of a read-only tablespace, and therefore
the files can reside on read-only media such as CD-ROMs or WORM
drives.
Note:
Because you can only bring a tablespace online in the database in
which it was created, read-only tablespaces are not meant to satisfy
archiving requirements.
Read-only tablespaces cannot be modified. To update a read-only
tablespace, first make the tablespace read/write. After updating the
tablespace, you can then reset it to be read only.
Because read-only tablespaces cannot be modified, and as long as
they have not been made read/write at any point, they do not need
repeated backup. Also, if you need to recover your database, you do
not need to recover any read-only tablespaces, because they could not
have been modified.
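The read-only cycle described above looks like this (the tablespace name is illustrative):

```sql
ALTER TABLESPACE history_data READ ONLY;
-- To update the tablespace again later:
ALTER TABLESPACE history_data READ WRITE;
```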
Temporary Tablespaces for Sort Operations
You can manage space for sort operations more efficiently by
designating one or more temporary tablespaces exclusively for sorts.
Doing so effectively eliminates serialization of space management
operations involved in the allocation and deallocation of sort space. A
single SQL operation can use more than one temporary tablespace for
sorting. For example, you can create indexes on very large tables, and
the sort operation during index creation can be distributed across
multiple tablespaces.
All operations that use sorts, including joins, index builds, ordering,
computing aggregates (GROUP BY), and collecting optimizer statistics,
benefit from temporary tablespaces. The performance gains are
significant with Real Application Clusters.
Sort Segments
One or more temporary tablespaces can be used only for sort
segments. A temporary tablespace is not the same as a tablespace
that a user designates for temporary segments, which can be any
tablespace available to the user. No permanent schema objects can
reside in a temporary tablespace.
Sort segments are used when a segment is shared by multiple sort
operations. One sort segment exists for every instance that performs a
sort operation in a given tablespace.
Temporary tablespaces provide performance improvements when you
have multiple sorts that are too large to fit into memory. The sort
segment of a given temporary tablespace is created at the time of the
first sort operation. The sort segment expands by allocating extents
until the segment size is equal to or greater than the total storage
demands of all of the active sorts running on that instance.
Creation of Temporary Tablespaces
Create temporary tablespaces by using the CREATE TABLESPACE or
CREATE TEMPORARY TABLESPACE statement.
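A minimal example (name, path, and sizes are illustrative):

```sql
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u01/oracle/prod/temp01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```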
Transport of Tablespaces Between Databases
A transportable tablespace lets you move a subset of an Oracle
database from one Oracle database to another, even across different
platforms. You can clone a tablespace and plug it into another
database, copying the tablespace between databases, or you can
unplug a tablespace from one Oracle database and plug it into another
Oracle database, moving the tablespace between databases.
Moving data by transporting tablespaces can be orders of magnitude
faster than either export/import or unload/load of the same data,
because transporting a tablespace involves only copying datafiles and
integrating the tablespace metadata. When you transport tablespaces
you can also move index data, so you do not have to rebuild the
indexes after importing or loading the table data.
You can transport tablespaces across platforms. (Many, but not all,
platforms are supported for cross-platform tablespace transport.) This
can be used for the following:

* Provide an easier and more efficient means for content providers to
publish structured data and distribute it to customers running Oracle
on a different platform
* Simplify the distribution of data from a data warehouse environment
to data marts which are often running on smaller platforms
* Enable the sharing of read only tablespaces across a heterogeneous
cluster
* Allow a database to be migrated from one platform to another
Tablespace Repository
A tablespace repository is a collection of tablespace sets. Tablespace
repositories are built on file group repositories, but tablespace
repositories only contain the files required to move or copy tablespaces
between databases. Different tablespace sets may be stored in a
tablespace repository, and different versions of a particular tablespace
set also may be stored. A version of a tablespace set in a tablespace
repository consists of the following files:
* The Data Pump export dump file for the tablespace set
* The Data Pump log file for the export
* The datafiles that comprise the tablespace set

How to Move or Copy a Tablespace to Another Database


To move or copy a set of tablespaces, you must make the tablespaces
read only, copy the datafiles of these tablespaces, and use
export/import to move the database information (metadata) stored in
the data dictionary. Both the datafiles and the metadata export file
must be copied to the target database. The transport of these files can
be done using any facility for copying flat files, such as the operating
system copying facility, ftp, or publishing on CDs.
After copying the datafiles and importing the metadata, you can
optionally put the tablespaces in read/write mode.
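The steps above can be sketched as follows (the tablespace name, directory object, dump file, and paths are illustrative; the Data Pump commands run from the operating system shell):

```sql
-- 1. Make the tablespace set read only on the source database:
ALTER TABLESPACE sales_ts READ ONLY;
-- 2. Export the metadata (from the OS shell):
--    expdp system DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp
--          TRANSPORT_TABLESPACES=sales_ts
-- 3. Copy the datafiles and dump file to the target, then import:
--    impdp system DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp
--          TRANSPORT_DATAFILES='/u01/oracle/targ/sales01.dbf'
-- 4. Optionally return the tablespace to read/write:
ALTER TABLESPACE sales_ts READ WRITE;
```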
The first time a tablespace's datafiles are opened under Oracle
Database with the COMPATIBLE initialization parameter set to 10 or
higher, each file identifies the platform to which it belongs. These files
have identical on disk formats for file header blocks, which are used for
file identification and verification. Read only and offline files get the
compatibility advanced after they are made read/write or are brought
online. This implies that tablespaces that are read only prior to Oracle
Database 10g must be made read/write at least once before they can
use the cross platform transportable feature.


Note:
In a database with a locally managed SYSTEM tablespace, dictionary
managed tablespaces cannot be created. It is possible to plug in a dictionary
managed tablespace using the transportable feature, but it cannot be
made writable.
Overview of Datafiles
A tablespace in an Oracle database consists of one or more physical
datafiles. A datafile can be associated with only one tablespace and
only one database.
Oracle creates a datafile for a tablespace by allocating the specified
amount of disk space plus the overhead required for the file header.
When a datafile is created, the operating system under which Oracle
runs is responsible for clearing old information and authorizations from
a file before allocating it to Oracle. If the file is large, this process can
take a significant amount of time. The first tablespace in any database
is always the SYSTEM tablespace, so Oracle automatically allocates the
first datafiles of any database for the SYSTEM tablespace during
database creation.
See Also:
Your Oracle operating system-specific documentation for information
about the amount of space required for the file header of datafiles on
your operating system
Datafile Contents
When a datafile is first created, the allocated disk space is formatted
but does not contain any user data. However, Oracle reserves the
space to hold the data for future segments of the associated
tablespace; it is used exclusively by Oracle. As the data grows in a
tablespace, Oracle uses the free space in the associated datafiles to
allocate extents for the segment.
The data associated with schema objects in a tablespace is physically
stored in one or more of the datafiles that constitute the tablespace.
Note that a schema object does not correspond to a specific datafile;
rather, a datafile is a repository for the data of any schema object
within a specific tablespace. Oracle allocates space for the data
associated with a schema object in one or more datafiles of a
tablespace. Therefore, a schema object can span one or more
datafiles. Unless table striping is used (where data is spread across
more than one disk), the database administrator and end users cannot
control which datafile stores a schema object.
Size of Datafiles
You can alter the size of a datafile after its creation or you can specify
that a datafile should dynamically grow as schema objects in the
tablespace grow. This functionality enables you to have fewer datafiles
for each tablespace and can simplify administration of datafiles.
Note:
You need sufficient space on the operating system for expansion.
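Both sizing methods might look like this (the file path and sizes are illustrative):

```sql
-- Allow a datafile to grow automatically, up to a limit:
ALTER DATABASE DATAFILE '/u02/oracle/prod/users01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;
-- Or resize it manually:
ALTER DATABASE DATAFILE '/u02/oracle/prod/users01.dbf' RESIZE 500M;
```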
Offline Datafiles
You can take tablespaces offline or bring them online at any time,
except for the SYSTEM tablespace. All of the datafiles of a tablespace
are taken offline or brought online as a unit when you take the
tablespace offline or bring it online, respectively.
You can take individual datafiles offline. However, this is usually done
only during some database recovery procedures.
Temporary Datafiles
Locally managed temporary tablespaces have temporary datafiles
(tempfiles), which are similar to ordinary datafiles, with the following
exceptions:
* Tempfiles are always set to NOLOGGING mode.
* You cannot make a tempfile read only.
* You cannot create a tempfile with the ALTER DATABASE statement.
* Media recovery does not recognize tempfiles:
o BACKUP CONTROLFILE does not generate any information for
tempfiles.
o CREATE CONTROLFILE cannot specify any information about
tempfiles.
* When you create or resize tempfiles, they are not always guaranteed
allocation of disk space for the file size specified. On certain file
systems (for example, UNIX) disk blocks are allocated not at file
creation or resizing, but before the blocks are accessed.
Caution:
This enables fast tempfile creation and resizing; however, the disk
could run out of space later when the tempfiles are accessed.
* Tempfile information is shown in the dictionary view DBA_TEMP_FILES
and the dynamic performance view V$TEMPFILE, but not in
DBA_DATA_FILES or the V$DATAFILE view.
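For example (name, path, and size are illustrative), tempfiles are added with ALTER TABLESPACE and queried through their own views:

```sql
-- Add a tempfile to a temporary tablespace:
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/oracle/prod/temp02.dbf' SIZE 200M;
-- Tempfiles appear in their own views, not in DBA_DATA_FILES:
SELECT file_name, bytes FROM dba_temp_files;
SELECT name, bytes FROM v$tempfile;
```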
Overview of Control Files
The database control file is a small binary file necessary for the
database to start and operate successfully. A control file is updated
continuously by Oracle during database use, so it must be available for
writing whenever the database is open. If for some reason the control
file is not accessible, then the database cannot function properly.
Each control file is associated with only one Oracle database.
Control File Contents
A control file contains information about the associated database that
is required for access by an instance, both at startup and during
normal operation. Control file information can be modified only by
Oracle; no database administrator or user can edit a control file.
Among other things, a control file contains information such as:
* The database name
* The timestamp of database creation
* The names and locations of associated datafiles and redo log files
* Tablespace information
* Datafile offline ranges
* The log history
* Archived log information
* Backup set and backup piece information
* Backup datafile and redo log information
* Datafile copy information
* The current log sequence number
* Checkpoint information
The database name and timestamp originate at database creation. The
database name is taken from either the name specified by the
DB_NAME initialization parameter or the name used in the CREATE
DATABASE statement.
Each time that a datafile or a redo log file is added to, renamed in, or
dropped from the database, the control file is updated to reflect this
physical structure change. These changes are recorded so that:
* Oracle can identify the datafiles and redo log files to open during
database startup
* Oracle can identify files that are required or available in case
database recovery is necessary
Therefore, if you make a change to the physical structure of your
database (using ALTER DATABASE statements), then you should
immediately make a backup of your control file.
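A control file backup after a structural change can take either of two forms (the backup path is illustrative):

```sql
-- Binary copy of the current control file:
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/oracle/backup/control.bkp';
-- Or a script that can re-create it, written to a trace file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```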
Control files also record information about checkpoints. Every three
seconds, the checkpoint process (CKPT) records information in the
control file about the checkpoint position in the redo log. This
information is used during database recovery to tell Oracle that all redo
entries recorded before this point in the redo log group are not
necessary for database recovery; they were already written to the
datafiles.
19.CONTROLFILES-Oracle DBA
What Is a Control File?
Every Oracle database has a control file. A control file is a small binary
file that records the physical structure of the database and includes:
* The database name
* Names and locations of associated datafiles and online redo log files
* The timestamp of the database creation
* The current log sequence number
* Checkpoint information
The control file must be available for writing by the Oracle database
server whenever the database is open. Without the control file, the
database cannot be mounted and recovery is difficult.
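The control files currently in use can be listed from the mounted database:

```sql
SELECT name FROM v$controlfile;
-- or, in SQL*Plus:
SHOW PARAMETER control_files
```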
The control file of an Oracle database is created at the same time as
the database. By default, at least one copy of the control file is created
during database creation. On some operating systems the default is to
create multiple copies. You should create two or more copies of the
control file during database creation. You might also need to create
control files later, if you lose control files or want to change particular
settings in the control files.
Guidelines for Control Files
This section describes guidelines you can use to manage the control
files for a database, and contains the following topics:
* Provide Filenames for the Control Files
* Multiplex Control Files on Different Disks
* Place Control Files Appropriately
* Back Up Control Files
* Manage the Size of Control Files
Provide Filenames for the Control Files
You specify control file names using the CONTROL_FILES initialization
parameter in the database's initialization parameter file (see "Creating
Initial Control Files"). The instance startup procedure recognizes and
opens all the listed files. The instance writes to and maintains all listed
control files during database operation.
If you do not specify files for CONTROL_FILES before database creation,
and you are not using the Oracle Managed Files feature, Oracle creates
a control file and uses a default filename. The default name is
operating system specific.
Multiplex Control Files on Different Disks
Every Oracle database should have at least two control files, each
stored on a different disk. If a control file is damaged due to a disk
failure, the associated instance must be shut down. Once the disk drive
is repaired, the damaged control file can be restored using the intact
copy of the control file from the other disk and the instance can be
restarted. In this case, no media recovery is required.
The following describes the behavior of multiplexed control files:
* Oracle writes to all filenames listed for the initialization parameter
CONTROL_FILES in the database's initialization parameter file.

* The first file listed in the CONTROL_FILES parameter is the only file
read by the Oracle database server during database operation.
* If any of the control files become unavailable during database
operation, the instance becomes inoperable and should be aborted.
Note:
Oracle strongly recommends that your database has a minimum of two
control files and that they are located on separate disks.
Place Control Files Appropriately
As already suggested, each copy of a control file should be stored on a
different disk drive. One practice is to store a control file copy on every
disk drive that stores members of online redo log groups, if the online
redo log is multiplexed. By storing control files in these locations, you
minimize the risk that all control files and all groups of the online redo
log will be lost in a single disk failure.
Back Up Control Files
It is very important that you back up your control files. This is true
initially, and at any time after you change the physical structure of
your database. Such structural changes include:
* Adding, dropping, or renaming datafiles
* Adding or dropping a tablespace, or altering the read-write state of
the tablespace
* Adding or dropping redo log files or groups
The methods for backing up control files are discussed in "Backing Up
Control Files".
Manage the Size of Control Files
The main determinants of a control file's size are the values set for the
MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY,
and MAXINSTANCES parameters in the CREATE DATABASE statement
that created the associated database. Increasing the values of these
parameters increases the size of a control file of the associated
database.
Creating Control Files
This section describes ways to create control files, and contains the
following topics:
* Creating Initial Control Files
* Creating Additional Copies, Renaming, and Relocating Control Files
* Creating New Control Files
Creating Initial Control Files
The initial control files of an Oracle database are created when you
issue the CREATE DATABASE statement. The names of the control files
are specified by the CONTROL_FILES parameter in the initialization
parameter file used during database creation. The filenames specified
in CONTROL_FILES should be fully specified and are operating system
specific. The following is an example of a CONTROL_FILES initialization
parameter:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u03/oracle/prod/control03.ctl)
If files with the specified names currently exist at the time of database
creation, you must specify the CONTROLFILE REUSE clause in the
CREATE DATABASE statement, or else an error occurs. Also, if the size
of the old control file differs from the SIZE parameter of the new one,
you cannot use the REUSE option.
The size of the control file changes between some releases of Oracle,
as well as when the number of files specified in the control file
changes. Configuration parameters such as MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and
MAXINSTANCES affect control file size.
You can subsequently change the value of the CONTROL_FILES
initialization parameter to add more control files or to change the
names or locations of existing control files.
Creating Additional Copies, Renaming, and Relocating Control Files
You can create an additional control file copy by copying an existing
control file to a new location and adding the file's name to the list of
control files. Similarly, you rename an existing control file by copying
the file to its new name or location, and changing the file's name in the
control file list. In both cases, to guarantee that control files do not
change during the procedure, shut down the instance before copying
the control file.
To Multiplex or Move Additional Copies of the Current Control Files
1. Shut down the database.
2. Copy an existing control file to a different location, using operating
system commands.
3. Edit the CONTROL_FILES parameter in the database's initialization
parameter file to add the new control file's name, or to change the
existing control filename.
4. Restart the database.
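The four steps above can be sketched as the following session, assuming a UNIX host and hypothetical paths:

```
SQL> SHUTDOWN IMMEDIATE
SQL> EXIT
% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control04.ctl
-- add /u04/oracle/prod/control04.ctl to the CONTROL_FILES parameter
-- in the initialization parameter file, then:
SQL> STARTUP
```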
Creating New Control Files
This section discusses when and how to create new control files.
When to Create New Control Files
It is necessary for you to create new control files in the following
situations:
* All control files for the database have been permanently damaged
and you do not have a control file backup.
* You want to change one of the permanent database parameter
settings originally specified in the CREATE DATABASE statement. These
settings include the database's name and the following parameters:
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES,
and MAXINSTANCES.
For example, you would change a database's name if it conflicted with
another database's name in a distributed environment. Or, as another
example, you can change the value of MAXLOGFILES if the original
setting is too low.
The CREATE CONTROLFILE Statement
You can create a new control file for a database using the CREATE
CONTROLFILE statement. The following statement creates a new

control file for the prod database (formerly a database that used a
different database name):
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log',
'/u01/oracle/prod/redo01_02.log'),
GROUP 2 ('/u01/oracle/prod/redo02_01.log',
'/u01/oracle/prod/redo02_02.log'),
GROUP 3 ('/u01/oracle/prod/redo03_01.log',
'/u01/oracle/prod/redo03_02.log')
NORESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
'/u01/oracle/prod/rbs01.dbs' SIZE 5M,
'/u01/oracle/prod/users01.dbs' SIZE 5M,
'/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
Cautions:
* The CREATE CONTROLFILE statement can potentially damage
specified datafiles and online redo log files. Omitting a filename can
cause loss of the data in that file, or loss of access to the entire
database. Employ caution when using this statement and be sure to


follow the instructions in "Steps for Creating New Control Files".
* If the database had forced logging enabled before creating the new
control file, and you want it to continue to be enabled, then you must
specify the FORCE LOGGING clause in the CREATE CONTROLFILE
statement. See "Specifying FORCE LOGGING Mode".

Steps for Creating New Control Files


Complete the following steps to create a new control file.
1. Make a list of all datafiles and online redo log files of the database.
If you follow recommendations for control file backups as discussed in
"Backing Up Control Files" , you will already have a list of datafiles and
online redo log files that reflect the current structure of the database.
However, if you have no such list, executing the following statements
will produce one.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'CONTROL_FILES';
If you have no such lists and your control file has been damaged so
that the database cannot be opened, try to locate all of the datafiles
and online redo log files that constitute the database. Any files not
specified in Step 5 are not recoverable once a new control file has been
created. Moreover, if you omit any of the files that make up the
SYSTEM tablespace, you might not be able to recover the database.
2. Shut down the database.
If the database is open, shut down the database normally if possible.
Use the IMMEDIATE or ABORT options only as a last resort.
3. Back up all datafiles and online redo log files of the database.
4. Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT

5. Create a new control file for the database using the CREATE
CONTROLFILE statement.
When creating a new control file, select the RESETLOGS option if you
have lost any online redo log groups in addition to control files. In this
case, you will need to recover from the loss of the redo logs (Step 8).
You must also specify the RESETLOGS option if you have renamed the
database. Otherwise, select the NORESETLOGS option.
6. Store a backup of the new control file on an offline storage device.
See "Backing Up Control Files" for instructions for creating a backup.
7. Edit the CONTROL_FILES initialization parameter for the database to
indicate all of the control files now part of your database as created in
Step 5 (not including the backup control file). If you are renaming the
database, edit the DB_NAME parameter to specify the new name.
8. Recover the database if necessary. If you are not recovering the
database, skip to Step 9.
If you are creating the control file as part of recovery, recover the
database. If the new control file was created using the NORESETLOGS
option (Step 5), you can recover the database with complete, closed
database recovery.
If the new control file was created using the RESETLOGS option, you
must specify USING BACKUP CONTROLFILE. If you have lost online or
archived redo logs or datafiles, use the procedures for recovering those
files.
9. Open the database using one of the following methods:
* If you did not perform recovery, or you performed complete, closed
database recovery in Step 8, open the database normally.
ALTER DATABASE OPEN;
* If you specified RESETLOGS when creating the control file, use the
ALTER DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;
The database is now open and available for use.
Troubleshooting After Creating Control Files

After issuing the CREATE CONTROLFILE statement, you may encounter


some common errors. This section describes the most common control
file usage errors, and contains the following topics:
* Checking for Missing or Extra Files
* Handling Errors During CREATE CONTROLFILE
Checking for Missing or Extra Files
After creating a new control file and using it to open the database,
check the alert file to see if Oracle has detected inconsistencies
between the data dictionary and the control file, such as a datafile that
the data dictionary includes but the control file does not list.
If a datafile exists in the data dictionary but not in the new control file,
Oracle creates a placeholder entry in the control file under the name
MISSINGnnnn (where nnnn is the file number in decimal). MISSINGnnnn
is flagged in the control file as being offline and requiring media
recovery.
The actual datafile corresponding to MISSINGnnnn can be made
accessible by renaming MISSINGnnnn so that it points to the datafile
only if the datafile was read-only or offline normal. If, on the other
hand, MISSINGnnnn corresponds to a datafile that was not read-only or
offline normal, then the rename operation cannot be used to make the
datafile accessible, because the datafile requires media recovery that
is precluded by the results of RESETLOGS. In this case, you must drop
the tablespace containing the datafile.
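For the read-only or offline-normal case described above, the rename might look like the following sketch (the file number and target path are hypothetical):

```sql
ALTER DATABASE RENAME FILE 'MISSING00004'
  TO '/u01/oracle/prod/query01.dbf';
```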
In contrast, if a datafile indicated in the control file is not present in the
data dictionary, Oracle removes references to it from the new control
file. In both cases, Oracle includes an explanatory message in the
alert.log file to let you know what was found.
Handling Errors During CREATE CONTROLFILE
If Oracle sends you an error (usually error ORA-01173, ORA-01176,
ORA-01177, ORA-01215, or ORA-01216) when you attempt to mount
and open the database after creating a new control file, the most likely
cause is that you omitted a file from the CREATE CONTROLFILE
statement or included one that should not have been listed. In this
case, you should restore the files you backed up in Step 3 and repeat
the procedure from Step 4, using the correct filenames.
Backing Up Control Files
Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up


your control files. You have two options:
1. Back up the control file to a binary file (duplicate of existing control
file) using the following statement:
ALTER DATABASE BACKUP CONTROLFILE TO
'/oracle/backup/control.bkp';
2. Produce SQL statements that can later be used to re-create your
control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This command writes a SQL script to the database's trace file where it
can be captured and edited to reproduce the control file.
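The trace file is written to the directory named by the user_dump_dest initialization parameter; a quick way to locate it:

```sql
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'user_dump_dest';
```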

Recovering a Control File Using a Current Copy


This section presents ways that you can recover your control file from a
current backup or from a multiplexed copy.
Recovering from Control File Corruption Using a Control File Copy
This procedure assumes that one of the control files specified in the
CONTROL_FILES parameter is corrupted, the control file directory is still
accessible, and you have a multiplexed copy of the control file.
1. With the instance shut down, use an operating system command to
overwrite the bad control file with a good copy:
% cp /u01/oracle/prod/control03.ctl /u01/oracle/prod/control02.ctl
2. Start SQL*Plus and open the database:
SQL> STARTUP
Recovering from Permanent Media Failure Using a Control File Copy
This procedure assumes that one of the control files specified in the
CONTROL_FILES parameter is inaccessible due to a permanent media
failure, and you have a multiplexed copy of the control file.

1. With the instance shut down, use an operating system command to


copy the current copy of the control file to a new, accessible location:
% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl
2. Edit the CONTROL_FILES parameter in the initialization parameter
file to replace the bad location with the new location:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u04/oracle/prod/control03.ctl)
3. Start SQL*Plus and open the database:
SQL> STARTUP
In any case where you have multiplexed control files, and you must get
the database up in minimum time, you can do so by editing the
CONTROL_FILES initialization parameter to remove the bad control file
and restarting the database immediately. Then you can perform the
reconstruction of the bad control file and at some later time shut down
and restart the database after editing the CONTROL_FILES initialization
parameter to include the recovered control file.
Dropping Control Files
You can drop control files from the database. For example, you might
want to do so if the location of a control file is no longer appropriate.
Remember that the database must have at least two control files at all
times.
1. Shut down the database.
2. Edit the CONTROL_FILES parameter in the database's initialization
parameter file to delete the old control file's name.
3. Restart the database.
Note:
This operation does not physically delete the unwanted control file
from the disk. Use operating system commands to delete the
unnecessary file after you have dropped the control file from the
database.
Displaying Control File Information


The following views display information about control files:
* V$DATABASE - Displays database information from the control file
* V$CONTROLFILE - Lists the names of control files
* V$CONTROLFILE_RECORD_SECTION - Displays information about control
file record sections
* V$PARAMETER - Can be used to display the names of control files as
specified in the CONTROL_FILES initialization parameter
This example lists the names of the control files.
SQL> SELECT NAME FROM V$CONTROLFILE;
NAME
------------------------------------
/u01/oracle/prod/control01.ctl
/u02/oracle/prod/control02.ctl
/u03/oracle/prod/control03.ctl
2.OracleDBA-Interview Questions-1
1)What are the contents of redolog files
Ans) Redo log files contain redo records (redo entries), which describe
all changes made to the database, DDL as well as DML.
2)How are redolog files significant to the database
Ans) They are used for recovery: the recorded changes can be reapplied
to roll the database forward after a failure.
3)How are redologs organized
Ans) In the form of redolog groups, each made up of one or more
identical members.
4)Create a database dbredo with following specs
1)DMT System tablespace of 100M


2)LMT Users tablespace of 50M
3)Multiplex controlfiles (2) - OMF
4)One Loggroup multiplex (4) - OMF - 512k size of the logfile
5)Limit the No. of groups to 2
6)Limit the No. of Members to 3
7)Limit the No. of datafiles 5
8)Instance recovery should not take more than 5 minutes
9)Database in archivelog mode
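A possible answer to question 4, as a sketch only: it assumes the initialization file sets OMF parameters (DB_CREATE_FILE_DEST plus DB_CREATE_ONLINE_LOG_DEST_1 through _4) so that control files and the log group's members are created and multiplexed automatically, and FAST_START_MTTR_TARGET = 300 to bound instance recovery at 5 minutes; all names are hypothetical, and the exact OMF destination setup needed to get exactly two control files is left to the init file:

```sql
CREATE DATABASE dbredo
  MAXLOGFILES 2       -- limit the No. of groups to 2
  MAXLOGMEMBERS 3     -- limit the No. of members to 3
  MAXDATAFILES 5      -- limit the No. of datafiles to 5
  DATAFILE '/u01/oracle/dbredo/system01.dbf' SIZE 100M  -- DMT SYSTEM
  LOGFILE GROUP 1 SIZE 512K  -- OMF creates one member per log destination
                             -- (note: Oracle normally requires two groups)
  ARCHIVELOG;

CREATE TABLESPACE users
  DATAFILE '/u01/oracle/dbredo/users01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL;   -- LMT users tablespace
```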
5)What is the alternate name for identical log files
Ans)Log members
6)How is the Numbering of the members in a log group done
Ans) 1a 1b 1c
2a 2b 2c
3a 3b 3c
7)How is the sizing done for the members of a log group
Ans) All members of a group are the same size. The size is specified
with the SIZE clause when the group is created; members added later to
an existing group inherit that size.
8)What is an online Log Member
Ans) One of the identical copies of a redo log file within an online
redo log group; LGWR writes the same redo to every member of the
current group.
9)How do you verify what is the current Log sequence No.
Ans) archive log list
10)How are the redologs used
Ans) LGWR writes redo records to them in a circular fashion; during
recovery those records are read back and applied to the datafiles.
11)What happens when the pointer moves from one Log file to another file
Ans) A log switch occurs: LGWR begins writing to the next available
group, and that group gets the next log sequence number.
12)When are the contents of the logfile written
Ans) LGWR flushes the redo log buffer to the current log file on commit,
when the buffer is one-third full, every three seconds, and before DBWn
writes dirty buffers.
13) In a database of 4 log groups, after the 4th log group is full, what
is the next step
Ans) If the db is in archivelog mode, the redo entries of the first log
group are archived, and then LGWR overwrites the first group.
If the db is in noarchivelog mode, LGWR simply overwrites the first
group.
14)How and where are the dirty buffers written
Ans) By the DBWn background process, to the datafiles.
15)What determines the dirty buffers to be written
Ans) 1) Every checkpoint that occurs
2) The dirty list reaches its threshold value
3) There are no free buffers in the database buffer cache
16)What is a checkpoint. Under what situations does a checkpoint
occur
Ans) It is a point in the log files up to which everything is guaranteed
to be written to the datafiles, and it is the point from which instance
recovery must begin. A checkpoint occurs at every log switch and at the
intervals set by the checkpoint-related initialization parameters.


17)Where is the checkpoint information recorded
Ans) 1) controlfile
2) datafile header
18)How can a checkpoint be forced
Ans) Alter system checkpoint;
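To see the checkpoint information in both places named in question 17, one can query:

```sql
-- checkpoint SCN as recorded in the control file:
SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;
-- checkpoint SCN as recorded in each datafile header:
SELECT FILE#, CHECKPOINT_CHANGE# FROM V$DATAFILE_HEADER;
```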
20.Redo Log File Management
What Is the Online Redo Log?
The most crucial structure for recovery operations is the online redo
log, which consists of two or more preallocated files that store all
changes made to the database as they occur. Every instance of an
Oracle database has an associated online redo log to protect the
database in case of an instance failure.
Redo Threads
Each database instance has its own online redo log groups. These
online redo log groups, multiplexed or not, are called an instance's
thread of online redo. In typical configurations, only one database
instance accesses an Oracle database, so only one thread is present.
When running Oracle Real Application Clusters, however, two or more
instances concurrently access a single database and each instance has
its own thread.
This chapter describes how to configure and manage the online redo
log when the Oracle9i Real Application Clusters feature is not used.
Hence, the thread number can be assumed to be 1 in all discussions
and examples of statements.
Online Redo Log Contents
Online redo log files are filled with redo records. A redo record, also
called a redo entry, is made up of a group of change vectors, each of
which is a description of a change made to a single block in the
database. For example, if you change a salary value in an employee
table, you generate a redo record containing change vectors that
describe changes to the data segment block for the table, the rollback
segment data block, and the transaction table of the rollback
segments.
Redo entries record data that you can use to reconstruct all changes
made to the database, including the rollback segments. Therefore, the
online redo log also protects rollback data. When you recover the
database using redo data, Oracle reads the change vectors in the redo
records and applies the changes to the relevant blocks.
Redo records are buffered in a circular fashion in the redo log buffer of
the SGA (see "How Oracle Writes to the Online Redo Log") and are
written to one of the online redo log files by the Oracle background
process Log Writer (LGWR). Whenever a transaction is committed,
LGWR writes the transaction's redo records from the redo log buffer of
the SGA to an online redo log file, and a system change number (SCN)
is assigned to identify the redo records for each committed transaction.
Only when all redo records associated with a given transaction are
safely on disk in the online logs is the user process notified that the
transaction has been committed.
Redo records can also be written to an online redo log file before the
corresponding transaction is committed. If the redo log buffer fills, or
another transaction commits, LGWR flushes all of the redo log entries
in the redo log buffer to an online redo log file, even though some redo
records may not be committed. If necessary, Oracle can roll back these
changes.
How Oracle Writes to the Online Redo Log
The online redo log of a database consists of two or more online redo
log files. Oracle requires a minimum of two files to guarantee that one
is always available for writing while the other is being archived (if in
ARCHIVELOG mode).
LGWR writes to online redo log files in a circular fashion. When the
current online redo log file fills, LGWR begins writing to the next
available online redo log file. When the last available online redo log
file is filled, LGWR returns to the first online redo log file and writes to
it, starting the cycle again. Figure 7-1 illustrates the circular writing of
the online redo log file. The numbers next to each line indicate the
sequence in which LGWR writes to each online redo log file.
Filled online redo log files are available to LGWR for reuse depending
on whether archiving is enabled.
* If archiving is disabled (NOARCHIVELOG mode), a filled online redo
log file is available once the changes recorded in it have been written
to the datafiles.

* If archiving is enabled (ARCHIVELOG mode), a filled online redo log


file is available to LGWR once the changes recorded in it have been
written to the datafiles and once the file has been archived.
Figure 7-1 Circular Use of Online Redo Log Files by LGWR

Active (Current) and Inactive Online Redo Log Files


At any given time, Oracle uses only one of the online redo log files to
store redo records written from the redo log buffer. The online redo log
file that LGWR is actively writing to is called the current online redo log
file.
Online redo log files that are required for instance recovery are called
active online redo log files. Online redo log files that are not required
for instance recovery are called inactive.
If you have enabled archiving (ARCHIVELOG mode), Oracle cannot
reuse or overwrite an active online log file until ARCn has archived its
contents. If archiving is disabled (NOARCHIVELOG mode), when the last
online redo log file fills, writing continues by overwriting the first
available active file.
Log Switches and Log Sequence Numbers
A log switch is the point at which Oracle ends writing to one online
redo log file and begins writing to another. Normally, a log switch
occurs when the current online redo log file is completely filled and
writing must continue to the next online redo log file. However, you
can specify that a log switch occurs in a time-based manner,
regardless of whether the current online redo log file is completely


filled. You can also force log switches manually.
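A manual log switch, mentioned above, is forced with:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```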
Oracle assigns each online redo log file a new log sequence number
every time that a log switch occurs and LGWR begins writing to it. If
Oracle archives online redo log files, the archived log retains its log
sequence number. The online redo log file that is cycled back for use is
given the next available log sequence number.
Each online or archived redo log file is uniquely identified by its log
sequence number. During crash, instance, or media recovery, Oracle
properly applies redo log files in ascending order by using the log
sequence number of necessary archived and online redo log files.
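The current sequence numbers and states of the groups can be inspected with a query such as:

```sql
SELECT GROUP#, THREAD#, SEQUENCE#, STATUS, ARCHIVED
FROM V$LOG;
```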
Planning the Online Redo Log
This section describes guidelines you should consider when configuring
a database instance's online redo log, and contains the following
topics:
* Multiplexing Online Redo Log Files
* Placing Online Redo Log Members on Different Disks
* Setting the Size of Online Redo Log Members
* Choosing the Number of Online Redo Log Files
* Controlling Archive Lag
Multiplexing Online Redo Log Files

Oracle provides the capability to multiplex an instance's online redo log


files to safeguard against damage to its online redo log files. When
multiplexing online redo log files, LGWR concurrently writes the same
redo log information to multiple identical online redo log files, thereby
eliminating a single point of redo log failure.
Note:
Oracle recommends that you multiplex your redo log files. The loss of
the log file data can be catastrophic if recovery is required.
Figure 7-2 Multiplexed Online Redo Log Files
The corresponding online redo log files are called groups. Each online
redo log file in a group is called a member. In Figure 7-2, A_LOG1 and
B_LOG1 are both members of Group 1, A_LOG2 and B_LOG2 are both
members of Group 2, and so forth. Each member in a group must be
exactly the same size.
Notice that each member of a group is concurrently active, or,
concurrently written to by LGWR, as indicated by the identical log
sequence numbers assigned by LGWR. In Figure 7-2, first LGWR writes
to A_LOG1 in conjunction with B_LOG1, then A_LOG2 in conjunction
with B_LOG2, and so on. LGWR never writes concurrently to members
of different groups (for example, to A_LOG1 and B_LOG2).
Responding to Online Redo Log Failure
Whenever LGWR cannot write to a member of a group, Oracle marks
that member as INVALID and writes an error message to the LGWR
trace file and to the database's alert file to indicate the problem with
the inaccessible files. LGWR reacts differently when certain online redo
log members are unavailable, depending on the reason for the
unavailability.
If LGWR can successfully write to at least one member in a group:
Writing proceeds as normal. LGWR simply writes to the available
members of a group and ignores the unavailable members.
If LGWR cannot access the next group at a log switch because the group
needs to be archived:
Database operation temporarily halts until the group becomes
available, or until the group is archived.
If all members of the next group are inaccessible to LGWR at a log
switch because of media failure:
Oracle returns an error and the database instance shuts down. In this
case, you may need to perform media recovery on the database from
the loss of an online redo log file.
If the database checkpoint has moved beyond the lost redo log, media
recovery is not necessary, since Oracle has saved the data recorded in
the redo log to the datafiles. Simply drop the inaccessible redo log
group. If Oracle did not archive the bad log, use ALTER DATABASE
CLEAR UNARCHIVED LOG to disable archiving before the log can be
dropped.
If all members of a group suddenly become inaccessible to LGWR while
it is writing to them:
Oracle returns an error and the database instance immediately shuts
down. In this case, you may need to perform media recovery. If the
media containing the log is not actually lost--for example, if the drive
for the log was inadvertently turned off--media recovery may not be
needed. In this case, you only need to turn the drive back on and let
Oracle perform instance recovery.
Legal and Illegal Configurations
To safeguard against a single point of online redo log failure, a
multiplexed online redo log is ideally symmetrical: all groups of the
online redo log have the same number of members. Nevertheless,
Oracle does not require that a multiplexed online redo log be
symmetrical. For example, one group can have only one member,
while other groups have two members. This configuration protects
against disk failures that temporarily affect some online redo log
members but leave others intact.
The only requirement for an instance's online redo log is that it have at
least two groups. The figure below shows legal and illegal multiplexed
online redo log configurations. The second configuration is illegal
because it has only one group.
Figure Legal and Illegal Multiplexed Online Redo Log Configurations

Placing Online Redo Log Members on Different Disks


When setting up a multiplexed online redo log, place members of a
group on different disks. If a single disk fails, then only one member of
a group becomes unavailable to LGWR and other members remain
accessible to LGWR, so the instance can continue to function.
If you archive the redo log, spread online redo log members across
disks to eliminate contention between the LGWR and ARCn background
processes. For example, if you have two groups of duplexed online
redo log members, place each member on a different disk and set your
archiving destination to a fifth disk. Consequently, there is never
contention between LGWR (writing to the members) and ARCn (reading
the members).
Datafiles and online redo log files should also be on different disks to
reduce contention in writing data blocks and redo records.
Setting the Size of Online Redo Log Members
When setting the size of online redo log files, consider whether you will
be archiving the redo log. Online redo log files should be sized so that a
filled group can be archived to a single unit of offline storage media
(such as a tape or disk), with the least amount of space on the medium
left unused. For example, suppose only one filled online redo log group
can fit on a tape and 49% of the tape's storage capacity remains
unused. In this case, it is better to decrease the size of the online redo
log files slightly, so that two log groups could be archived for each
tape.
With multiplexed groups of online redo logs, all members of the same
group must be the same size. Members of different groups can have
different sizes. However, there is no advantage in varying file size
between groups. If checkpoints are not set to occur between log
switches, make all groups the same size to guarantee that checkpoints
occur at regular intervals.
Choosing the Number of Online Redo Log Files
The best way to determine the appropriate number of online redo log
files for a database instance is to test different configurations. The
optimum configuration has the fewest groups possible without
hampering LGWR's ability to write redo log information.
In some cases, a database instance may require only two groups. In
other situations, a database instance may require additional groups to
guarantee that a recycled group is always available to LGWR. During
testing, the easiest way to determine if the current online redo log
configuration is satisfactory is to examine the contents of the LGWR
trace file and the database's alert log. If messages indicate that LGWR
frequently has to wait for a group because a checkpoint has not
completed or a group has not been archived, add groups.
Consider the parameters that can limit the number of online redo log
files before setting up or altering the configuration of an instance's
online redo log. The following parameters limit the number of online
redo log files that you can add to a database:
* The MAXLOGFILES parameter used in the CREATE DATABASE
statement determines the maximum number of groups of online redo
log files for each database. Group values can range from 1 to
MAXLOGFILES. The only way to override this upper limit is to re-create
the database or its control file. Thus, it is important to consider this
limit before creating a database. If MAXLOGFILES is not specified for
the CREATE DATABASE statement, Oracle uses an operating system
specific default value.
* The MAXLOGMEMBERS parameter used in the CREATE DATABASE
statement determines the maximum number of members for each
group. As with MAXLOGFILES, the only way to override this upper limit
is to re-create the database or control file. Thus, it is important to
consider this limit before creating a database. If no MAXLOGMEMBERS
parameter is specified for the CREATE DATABASE statement, Oracle


uses an operating system default value.
Controlling Archive Lag
You can force all enabled online redo log threads to switch their current
logs in a time-based fashion. In a primary/standby configuration,
changes are made available to the standby database by archiving and
shipping logs of the primary site to the standby database. The changes
that are being applied by the standby database can lag the changes
that are occurring on the primary database.
This lag can happen because the standby database must wait for the
changes in the primary database's online redo log to be archived (into
the archived redo log) and then shipped to it. To control or limit this
lag, you set the ARCHIVE_LAG_TARGET initialization parameter. Setting
this parameter allows you to limit, measured in time, how long the lag
can become.
Setting the ARCHIVE_LAG_TARGET Initialization Parameter
When you set the ARCHIVE_LAG_TARGET initialization parameter, you
cause Oracle to examine an instance's current online redo log
periodically. If the following conditions are met the instance will switch
the log:
* The current log was created prior to n seconds ago, and the
estimated archival time for the current log is m seconds (proportional
to the number of redo blocks used in the current log), where n + m
exceeds the value of the ARCHIVE_LAG_TARGET initialization
parameter.
* The current log contains redo records.
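The two conditions above can be sketched as a small predicate (a hypothetical helper, not Oracle code; all times are in seconds):

```python
def should_switch(log_age, est_archival, archive_lag_target, has_redo):
    """Time-based log switch test sketched from the two bullets above."""
    if archive_lag_target == 0:   # a value of 0 disables time-based switching
        return False
    if not has_redo:              # the current log must contain redo records
        return False
    # switch when log age plus estimated archival time exceeds the target
    return log_age + est_archival > archive_lag_target

# e.g. a 1700-second-old log needing ~200 s to archive exceeds an
# 1800-second target, so the instance would switch the log:
print(should_switch(1700, 200, 1800, True))
```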
In an Oracle Real Application Clusters environment, the instance also
nudges other threads into switching and archiving logs if they are
falling behind. This can be particularly useful when one instance in the
cluster is more idle than the other instances (as when you are running
a 2-node primary/secondary configuration of Oracle Real Application
Clusters).
Initialization parameter ARCHIVE_LAG_TARGET specifies the target of
how many seconds of redo the standby could lose in the event of a
primary shutdown or crash if the Data Guard environment is not
configured in a no-data-loss mode. It also provides an upper limit of
how long (in the number of seconds) the current log of the primary
database can span. Because the estimated archival time is also
considered, this is not the exact log switch time.
The following initialization parameter setting sets the log switch


interval to 30 minutes (a typical value).
ARCHIVE_LAG_TARGET = 1800
A value of 0 disables this time-based log switching functionality. This is
the default setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if
there is no standby database. For example, the ARCHIVE_LAG_TARGET
parameter can be set specifically to force logs to be switched and
archived.
ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the
ALTER SYSTEM SET statement.
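Because the parameter is dynamic, it can be changed on a running instance:

```sql
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;
```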
Caution:
The ARCHIVE_LAG_TARGET parameter must be set to the same value in
all instances of an Oracle Real Application Clusters environment. Failing
to do so results in unspecified behavior and is strongly discouraged.
Factors Affecting the Setting of ARCHIVE_LAG_TARGET
Consider the following factors when determining if you want to set the
ARCHIVE_LAG_TARGET parameter and in determining the value for this
parameter.
* Overhead of switching (as well as archiving) logs
* How frequently normal log switches occur as a result of log full
conditions
* How much redo loss is tolerated in the standby database
Setting ARCHIVE_LAG_TARGET may not be very useful if natural log
switches already occur more frequently than the interval specified.
However, in the case of irregularities of redo generation speed, the
interval does provide an upper limit for the time range each current log
covers.
If the ARCHIVE_LAG_TARGET initialization parameter is set to a very
low value, there can be a negative impact on performance. This can
force frequent log switches. Set the parameter to a reasonable value
so as not to degrade the performance of the primary database.

Creating Online Redo Log Groups and Members


Plan the online redo log of a database and create all required groups
and members of online redo log files during database creation.
However, there are situations where you might want to create
additional groups or members. For example, adding groups to an online
redo log can correct redo log group availability problems.
To create new online redo log groups and members, you must have the
ALTER DATABASE system privilege. A database can have up to
MAXLOGFILES groups.
To create a new group of online redo log files, use the SQL statement
ALTER DATABASE with the ADD LOGFILE clause.
The following statement adds a new group of redo logs to the
database:
ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE
500K;
Note:
Fully specify the filenames of new log members to indicate where the
operating system files should be created. Otherwise, the files will be
created in either the default or current directory of the database
server, depending upon your operating system.
You can also specify the number that identifies the group using the
GROUP option:
ALTER DATABASE
ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo',
'/oracle/dbs/log2c.rdo')
SIZE 500K;
Using group numbers can make administering redo log groups easier.
However, the group number must be between 1 and MAXLOGFILES. Do
not skip redo log file group numbers (that is, do not number your
groups 10, 20, 30, and so on), or you will consume unnecessary space
in the control files of the database.

Creating Online Redo Log Members


In some cases, it might not be necessary to create a complete group of
online redo log files. A group could already exist, but not be complete
because one or more members of the group were dropped (for
example, because of a disk failure). In this case, you can add new
members to an existing group.
To create new online redo log members for an existing group, use the
SQL statement ALTER DATABASE with the ADD LOG MEMBER
parameter. The following statement adds a new redo log member to
redo log group number 2:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO
GROUP 2;
Notice that filenames must be specified, but sizes need not be. The
size of the new members is determined from the size of the existing
members of the group.
When using the ALTER DATABASE statement, you can alternatively
identify the target group by specifying all of the other members of the
group in the TO parameter, as shown in the following example:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');
Note:
Fully specify the filenames of new log members to indicate where the
operating system files should be created. Otherwise, the files will be
created in either the default or current directory of the database
server, depending upon your operating system. You may also note that
the status of the new log member is shown as INVALID. This is normal
and it will change to active (blank) when it is first used.
Relocating and Renaming Online Redo Log Members
You can use operating system commands to relocate online redo logs,
then use the ALTER DATABASE statement to make their new names
(locations) known to the database. This procedure is necessary, for
example, if the disk currently used for some online redo log files is
going to be removed, or if datafiles and a number of online redo log
files are stored on the same disk and should be separated to reduce
contention.

To rename online redo log members, you must have the ALTER
DATABASE system privilege. Additionally, you might also need
operating system privileges to copy files to the desired location and
privileges to open and back up the database.
Before relocating your redo logs, or making any other structural
changes to the database, completely back up the database in case you
experience problems while performing the operation. As a precaution,
after renaming or relocating a set of online redo log files, immediately
back up the database's control file.
Use the following steps for relocating redo logs. The example used to
illustrate these steps assumes:
* The log files are located on two disks: diska and diskb.
* The online redo log is duplexed: one group consists of the
members /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the
second group consists of the members /diska/logs/log2a.rdo and
/diskb/logs/log2b.rdo.
* The online redo log files located on diska must be relocated to diskc.
The new filenames will reflect the new location: /diskc/logs/log1c.rdo
and /diskc/logs/log2c.rdo.
Steps for Renaming Online Redo Log Members
1. Shut down the database.
SHUTDOWN
2. Copy the online redo log files to the new location.
Operating system files, such as online redo log members, must be
copied using the appropriate operating system commands. See your
operating system specific documentation for more information about
copying files.
Note:
You can execute an operating system command to copy a file (or
perform other operating system commands) without exiting SQL*Plus
by using the HOST command. Some operating systems allow you to
use a character in place of the word HOST. For example, you can use !
in UNIX.

The following example uses operating system commands (UNIX) to
move the online redo log members to a new location:
mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
3. Start up the database and mount it, but do not open it.
CONNECT / as SYSDBA
STARTUP MOUNT
4. Rename the online redo log members.
Use the ALTER DATABASE statement with the RENAME FILE clause to
rename the database's online redo log files.
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
5. Open the database for normal operation.
The online redo log alterations take effect when the database is
opened.
ALTER DATABASE OPEN;
Dropping Online Redo Log Groups and Members
In some cases, you may want to drop an entire group of online redo log
members. For example, you want to reduce the number of groups in an
instance's online redo log. In a different case, you may want to drop
one or more specific online redo log members. For example, if a disk
failure occurs, you may need to drop all the online redo log files on the
failed disk so that Oracle does not try to write to the inaccessible files.
In other situations, particular online redo log files become unnecessary.
For example, a file might be stored in an inappropriate location.
Dropping Log Groups

To drop an online redo log group, you must have the ALTER DATABASE
system privilege. Before dropping an online redo log group, consider
the following restrictions and precautions:
* An instance requires at least two groups of online redo log files,
regardless of the number of members in the groups. (A group is one or
more members.)
* You can drop an online redo log group only if it is inactive. If you need
to drop the current group, first force a log switch to occur.
* Make sure an online redo log group is archived (if archiving is
enabled) before dropping it. To see whether this has happened, use the
V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
------ --- ----------------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE
Drop an online redo log group with the SQL statement ALTER
DATABASE with the DROP LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, and you
are not using the Oracle Managed Files feature, the operating system
files are not deleted from disk. Rather, the control files of the
associated database are updated to drop the members of the group
from the database structure. After dropping an online redo log group,
make sure that the drop completed successfully, and then use the
appropriate operating system command to delete the dropped online
redo log files.

When using Oracle Managed Files, the cleanup of operating system
files is done automatically for you.
Dropping Online Redo Log Members
To drop an online redo log member, you must have the ALTER
DATABASE system privilege. Consider the following restrictions and
precautions before dropping individual online redo log members:
* It is permissible to drop online redo log files so that a multiplexed
online redo log becomes temporarily asymmetric. For example, if you
use duplexed groups of online redo log files, you can drop one member
of one group, even though all other groups have two members each.
However, you should rectify this situation immediately so that all
groups have at least two members, and thereby eliminate the single
point of failure possible for the online redo log.
* An instance always requires at least two valid groups of online redo
log files, regardless of the number of members in the groups. (A group
is one or more members.) If the member you want to drop is the last
valid member of the group, you cannot drop the member until the
other members become valid. To see a redo log file's status, use the
V$LOGFILE view. A redo log file becomes INVALID if Oracle cannot
access it. It becomes STALE if Oracle suspects that it is not complete or
correct. A stale log file becomes valid again the next time its group is
made the active group.
* You can drop an online redo log member only if it is not part of an
active or current group. If you want to drop a member of an active
group, first force a log switch to occur.
* Make sure the group to which an online redo log member belongs is
archived (if archiving is enabled) before dropping the member. To see
whether this has happened, use the V$LOG view.
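The checks described in these precautions can be run with two quick queries (a sketch; column lists may vary slightly by Oracle version):

```sql
-- Check each member's status (INVALID, STALE, or blank) before dropping it.
SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE;

-- Confirm the owning group is archived and neither active nor current.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
```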
To drop specific inactive online redo log members, use the ALTER
DATABASE statement with the DROP LOGFILE MEMBER clause.
The following statement drops the redo log /oracle/dbs/log3c.rdo:
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
When an online redo log member is dropped from the database, the
operating system file is not deleted from disk. Rather, the control files
of the associated database are updated to drop the member from the
database structure. After dropping an online redo log file, make sure
that the drop completed successfully, and then use the appropriate
operating system command to delete the dropped online redo log file.
To drop a member of an active group, you must first force a log switch.
Forcing Log Switches
A log switch occurs when LGWR stops writing to one online redo log
group and starts writing to another. By default, a log switch occurs
automatically when the current online redo log file group fills.
You can force a log switch to make the currently active group inactive
and available for online redo log maintenance operations. For example,
you want to drop the currently active group, but are not able to do so
until the group is inactive. You may also wish to force a log switch if the
currently active group needs to be archived at a specific time before
the members of the group are completely filled. This option is useful in
configurations with large online redo log files that take a long time to
fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use
the ALTER SYSTEM statement with the SWITCH LOGFILE clause.
The following statement forces a log switch:
ALTER SYSTEM SWITCH LOGFILE;
Verifying Blocks in Redo Log Files
You can configure Oracle to use checksums to verify blocks in the redo
log files. If you set the initialization parameter DB_BLOCK_CHECKSUM
to TRUE, block checking is enabled for all Oracle database blocks
written to disk, including redo log blocks. The default value of
DB_BLOCK_CHECKSUM is FALSE.
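Since DB_BLOCK_CHECKSUM is a dynamic parameter, block checking can be enabled without restarting the instance. A minimal sketch:

```sql
-- Enable checksums for all database blocks written to disk,
-- including redo log blocks.
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;

-- Verify the current setting (SQL*Plus).
SHOW PARAMETER DB_BLOCK_CHECKSUM
```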
If you enable block checking, Oracle computes a checksum for each
redo log block written to the current log. Oracle writes the checksum in
the header of the block. Oracle uses the checksum to detect corruption
in a redo log block. Oracle tries to verify the redo log block when it
writes the block to an archive log file and when the block is read from
an archived log during recovery.
If Oracle detects a corruption in a redo log block while trying to archive
it, the system attempts to read the block from another member in the
group. If the block is corrupted in all members of the redo log group,
then archiving cannot proceed.

Note:
There is some overhead and decrease in database performance with
DB_BLOCK_CHECKSUM enabled. Monitor your database performance to
decide whether the benefit of using data block checksums to detect
corruption outweighs the performance impact.
Clearing an Online Redo Log File
An online redo log file might become corrupted while the database is
open, and ultimately stop database activity because archiving cannot
continue. In this situation the ALTER DATABASE CLEAR LOGFILE
statement can be used to reinitialize the file without shutting down
the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is
not possible:
* If there are only two log groups
* The corrupt redo log file belongs to the current group
If the corrupt redo log file has not been archived, use the UNARCHIVED
keyword in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving
them. The cleared redo logs are available for use even though they
were not archived.
If you clear a log file that is needed for recovery of a backup, then you
can no longer recover from that backup. Oracle writes a message in
the alert log describing the backups from which you cannot recover.
Note:
If you clear an unarchived redo log file, you should make another
backup of the database.

If you want to clear an unarchived redo log that is needed to bring an


offline tablespace online, use the UNRECOVERABLE DATAFILE clause in
the ALTER DATABASE CLEAR LOGFILE statement.
If you clear a redo log needed to bring an offline tablespace online, you
will not be able to bring the tablespace online again. You will have to
drop the tablespace or perform an incomplete recovery. Note that
tablespaces taken offline with the NORMAL option do not require recovery.
Viewing Online Redo Log Information
Use the following views to display online redo log information.
View Description
V$LOG
Displays the redo log file information from the control file
V$LOGFILE
Identifies redo log groups and members and member status
V$LOG_HISTORY
Contains log history information
The following query returns the control file information about the online
redo log for a database.
SELECT * FROM V$LOG;
GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE        11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT       11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE      11511666 16-APR-00
     4       1 10604 1048576       1 YES INACTIVE      11513647 16-APR-00

To see the names of all of the members of a group, use a query similar
to the following:
SELECT * FROM V$LOGFILE;
GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2         D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3         D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4         D:\ORANT\ORADATA\IDDB2\REDO01.LOG
21.Undo Tablespace Management
Managing the Undo Tablespace
This chapter describes how to manage the undo tablespace, which
stores information used to roll back changes to the Oracle Database. It
contains the following topics:
What Is Undo?
Every Oracle Database must have a method of maintaining information
that is used to roll back, or undo, changes to the database. Such
information consists of records of the actions of transactions, primarily
before they are committed. These records are collectively referred to
as undo.
Undo records are used to:
Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
Analyze data as of an earlier point in time by using Oracle Flashback
Query
Recover from logical corruptions using Oracle Flashback features

When a ROLLBACK statement is issued, undo records are used to undo


changes that were made to the database by the uncommitted
transaction. During database recovery, undo records are used to undo
any uncommitted changes applied from the redo log to the datafiles.
Undo records provide read consistency by maintaining the before
image of the data for users who are accessing the data at the same
time that another user is changing it.
Introduction to Automatic Undo Management
This section introduces the concepts of Automatic Undo Management
and discusses the following topics:
Overview of Automatic Undo Management
Oracle provides a fully automated mechanism, referred to as automatic
undo management, for managing undo information and space. In this
management mode, you create an undo tablespace, and the server
automatically manages undo segments and space among the various
active sessions.
You set the UNDO_MANAGEMENT initialization parameter to AUTO to
enable automatic undo management. A default undo tablespace is
then created at database creation. An undo tablespace can also be
created explicitly.
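In an initialization parameter file, enabling automatic undo management might look like the following sketch (the tablespace name, datafile path, and size are illustrative):

```sql
-- init.ora sketch: enable automatic undo management.
UNDO_MANAGEMENT = AUTO

-- Optionally, create an undo tablespace explicitly in an existing database.
CREATE UNDO TABLESPACE undotbs_01
  DATAFILE '/u01/oracle/rbdb1/undo0101.dbf' SIZE 100M;
```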
When the instance starts, the database automatically selects the first
available undo tablespace. If no undo tablespace is available, then the
instance starts without an undo tablespace and stores undo records in
the SYSTEM tablespace. This is not recommended in normal
circumstances, and an alert message is written to the alert log file to
warn that the system is running without an undo tablespace.
If the database contains multiple undo tablespaces, you can optionally
specify at startup that you want to use a specific undo tablespace. This
is done by setting the UNDO_TABLESPACE initialization parameter, as
shown in this example:
UNDO_TABLESPACE = undotbs_01
In this case, if you have not already created the undo tablespace (in
this example, undotbs_01), the STARTUP command fails. The
UNDO_TABLESPACE parameter can be used to assign a specific undo
tablespace to an instance in an Oracle Real Application Clusters
environment.

The following is a summary of the initialization parameters for
automatic undo management:
UNDO_MANAGEMENT
If AUTO, use automatic undo management. The default is MANUAL.
UNDO_TABLESPACE
An optional dynamic parameter specifying the name of an undo
tablespace. This parameter should be used only when the database
has multiple undo tablespaces and you want to direct the database
instance to use a particular undo tablespace.

When automatic undo management is enabled, if the initialization
parameter file contains parameters relating to manual undo
management, they are ignored.
Undo Retention
After a transaction is committed, undo data is no longer needed for
rollback or transaction recovery purposes. However, for consistent read
purposes, long-running queries may require this old undo information
for producing older images of data blocks. Furthermore, the success of
several Oracle Flashback features can also depend upon the
availability of older undo information. For these reasons, it is desirable
to retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a
current undo retention period, which is the minimum amount of time
that Oracle Database attempts to retain old undo information before
overwriting it. Old (committed) undo information that is older than the
current undo retention period is said to be expired. Old undo
information with an age that is less than the current undo retention
period is said to be unexpired.
Oracle Database automatically tunes the undo retention period based
on undo tablespace size and system activity. You can specify a
minimum undo retention period (in seconds) by setting the
UNDO_RETENTION initialization parameter. The database makes its
best effort to honor the specified minimum undo retention period,
provided that the undo tablespace has space available for new
transactions. When available space for new transactions becomes
short, the database begins to overwrite expired undo. If the undo
tablespace has no space for new transactions after all expired undo is
overwritten, the database may begin overwriting unexpired undo
information. If any of this overwritten undo information is required for
consistent read in a current long-running query, the query could fail
with the snapshot too old error message.
The following points explain the exact impact of the UNDO_RETENTION
parameter on undo retention:
The UNDO_RETENTION parameter is ignored for a fixed size undo
tablespace. The database may overwrite unexpired undo information
when tablespace space becomes low.
For an undo tablespace with the AUTOEXTEND option enabled, the
database attempts to honor the minimum retention period specified by
UNDO_RETENTION. When space is low, instead of overwriting
unexpired undo information, the tablespace auto-extends. If the
MAXSIZE clause is specified for an auto-extending undo tablespace,
when the maximum size is reached, the database may begin to
overwrite unexpired undo information.
Retention Guarantee
To guarantee the success of long-running queries or Oracle Flashback
operations, you can enable retention guarantee. If retention guarantee
is enabled, the specified minimum undo retention is guaranteed; the
database never overwrites unexpired undo data even if it means that
transactions fail due to lack of space in the undo tablespace. If
retention guarantee is not enabled, the database can overwrite
unexpired undo when space is low, thus lowering the undo retention
for the system. This option is disabled by default.
WARNING:
Enabling retention guarantee can cause multiple DML operations to
fail. Use with caution.
You enable retention guarantee by specifying the RETENTION
GUARANTEE clause for the undo tablespace when you create it with
either the CREATE DATABASE or CREATE UNDO TABLESPACE statement.
Or, you can later specify this clause in an ALTER TABLESPACE
statement. You disable retention guarantee with the RETENTION
NOGUARANTEE clause.
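As a sketch, using an illustrative undo tablespace name:

```sql
-- Guarantee the minimum undo retention, at the risk of DML failures
-- if the undo tablespace runs out of space.
ALTER TABLESPACE undotbs_01 RETENTION GUARANTEE;

-- Revert to the default (non-guaranteed) behavior.
ALTER TABLESPACE undotbs_01 RETENTION NOGUARANTEE;
```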

You can use the DBA_TABLESPACES view to determine the retention
guarantee setting for the undo tablespace. A column named
RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT
APPLY (used for tablespaces other than the undo tablespace).
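A query along these lines shows the setting (a sketch):

```sql
-- RETENTION is GUARANTEE or NOGUARANTEE for undo tablespaces,
-- and NOT APPLY for all other tablespaces.
SELECT TABLESPACE_NAME, RETENTION
FROM DBA_TABLESPACES
WHERE CONTENTS = 'UNDO';
```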
Automatic Tuning of Undo Retention
Oracle Database automatically tunes the undo retention period based
on how the undo tablespace is configured.
If the undo tablespace is fixed size, the database tunes the retention
period for the best possible undo retention for that tablespace size and
the current system load. This tuned retention period can be
significantly greater than the specified minimum retention period.
If the undo tablespace is configured with the AUTOEXTEND option, the
database tunes the undo retention period to be somewhat longer than
the longest-running query on the system at that time. Again, this tuned
retention period can be greater than the specified minimum retention
period.
Note:
Automatic tuning of undo retention is not supported for LOBs. This is
because undo information for LOBs is stored in the segment itself and
not in the undo tablespace. For LOBs, the database attempts to honor
the minimum undo retention period specified by UNDO_RETENTION.
However, if space becomes low, unexpired LOB undo information may
be overwritten.
You can determine the current retention period by querying the
TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view
contains one row for each 10-minute statistics collection interval over
the last 4 days. (Beyond 4 days, the data is available in the
DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in
seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time,
tuned_undoretention
from v$undostat order by end_time;
BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
--------------- --------------- -------------------
04-FEB-05 00:01 04-FEB-05 00:11               12100
...
07-FEB-05 23:21 07-FEB-05 23:31               86700
07-FEB-05 23:31 07-FEB-05 23:41               86700
07-FEB-05 23:41 07-FEB-05 23:51               86700
07-FEB-05 23:51 07-FEB-05 23:52               86700
576 rows selected.
Undo Retention Tuning and Alert Thresholds
For a fixed size undo tablespace, the database calculates the
maximum undo retention period based on database statistics and on
the size of the undo tablespace. For optimal undo management, rather
than tuning based on 100% of the tablespace size, the database tunes
the undo retention period based on 85% of the tablespace size, or on
the warning alert threshold percentage for space used, whichever is
lower. (The warning alert threshold defaults to 85%, but can be
changed.) Therefore, if you set the warning alert threshold of the undo
tablespace below 85%, this may reduce the tuned length of the undo
retention period.
Setting the Undo Retention Period
You set the undo retention period by setting the UNDO_RETENTION
initialization parameter. This parameter specifies the desired minimum
undo retention period in seconds. The current undo retention period
may be automatically tuned to be greater than UNDO_RETENTION or,
unless retention guarantee is enabled, less than UNDO_RETENTION if
space is low.
To set the undo retention period:
Do one of the following:
Set UNDO_RETENTION in the initialization parameter file.
UNDO_RETENTION = 1800

Change UNDO_RETENTION at any time using the ALTER SYSTEM
statement:
ALTER SYSTEM SET UNDO_RETENTION = 2400;
The effect of an UNDO_RETENTION parameter change is immediate,
but it can only be honored if the current undo tablespace has enough
space.
Sizing the Undo Tablespace
You can size the undo tablespace appropriately either by using
automatic extension of the undo tablespace or by using the Undo
Advisor for a fixed sized tablespace.
Using Auto-Extensible Tablespaces
Oracle Database supports automatic extension of the undo tablespace
to facilitate capacity planning of the undo tablespace in the production
environment. When the system is first running in the production
environment, you may be unsure of the space requirements of the
undo tablespace. In this case, you can enable automatic extension of
the undo tablespace so that it automatically increases in size when
more space is needed. You do so by including the AUTOEXTEND
keyword when you create the undo tablespace.
Sizing Fixed-Size Undo Tablespaces
If you have decided on a fixed-size undo tablespace, the Undo Advisor
can help you estimate needed capacity. You can access the Undo
Advisor through Enterprise Manager or through the DBMS_ADVISOR
PL/SQL package. Enterprise Manager is the preferred method of
accessing the advisor.
The Undo Advisor relies for its analysis on data collected in the
Automatic Workload Repository (AWR). It is therefore important that
the AWR have adequate workload statistics available so that the Undo
Advisor can make accurate recommendations. For newly created
databases, adequate statistics may not be available immediately. In
such cases, an auto-extensible undo tablespace can be used.

An adjustment to the collection interval and retention period for AWR


statistics can affect the precision and the type of recommendations
that the advisor produces.
To use the Undo Advisor, you first estimate these two values:
The length of your expected longest running query
After the database has been up for a while, you can view the Longest
Running Query field on the Undo Management page of Enterprise
Manager.
The longest interval that you will require for flashback operations
For example, if you expect to run Flashback Queries for up to 48 hours
in the past, your flashback requirement is 48 hours.
You then take the maximum of these two undo retention values and
use that value to look up the required undo tablespace size on the
Undo Advisor graph.
The Undo Advisor PL/SQL Interface
You can activate the Undo Advisor by creating an undo advisor task
through the advisor framework. The following example creates an undo
advisor task to evaluate the undo tablespace. The name of the advisor
is 'Undo Advisor'. The analysis is based on Automatic Workload
Repository snapshots, which you must specify by setting parameters
START_SNAPSHOT and END_SNAPSHOT. In the following example, the
START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo
Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null,
'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS',
oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
DBMS_ADVISOR.EXECUTE_TASK(tname);
end;
/
After you have created the advisor task, you can view the output and
recommendations in the Automatic Database Diagnostic Monitor in
Enterprise Manager. This information is also available in the
DBA_ADVISOR_* data dictionary views.
Managing Undo Tablespaces
This section describes the various steps involved in undo tablespace
management and contains the following sections:
Creating an Undo Tablespace
There are two methods of creating an undo tablespace. The first
method creates the undo tablespace when the CREATE DATABASE
statement is issued. This occurs when you are creating a new
database, and the instance is started in automatic undo management
mode (UNDO_MANAGEMENT = AUTO). The second method is used with
an existing database. It uses the CREATE UNDO TABLESPACE
statement.
You cannot create database objects in an undo tablespace. It is
reserved for system-managed undo data.
Oracle Database enables you to create a single-file undo tablespace.
Single-file, or bigfile, tablespaces are discussed in "Bigfile
Tablespaces".
Using CREATE DATABASE to Create an Undo Tablespace
You can create a specific undo tablespace using the UNDO TABLESPACE
clause of the CREATE DATABASE statement.
The following statement illustrates using the UNDO TABLESPACE clause
in a CREATE DATABASE statement. The undo tablespace is named
undotbs_01 and one datafile, /u01/oracle/rbdb1/undo0101.dbf, is
allocated for it.
CREATE DATABASE rbdb1
CONTROLFILE REUSE
.
.
.
UNDO TABLESPACE undotbs_01 DATAFILE
'/u01/oracle/rbdb1/undo0101.dbf';
If the undo tablespace cannot be created successfully during CREATE
DATABASE, the entire CREATE DATABASE operation fails. You must
clean up the database files, correct the error and retry the CREATE
DATABASE operation.
The CREATE DATABASE statement also lets you create a single-file
undo tablespace at database creation.
Using the CREATE UNDO TABLESPACE Statement
The CREATE UNDO TABLESPACE statement is the same as the CREATE
TABLESPACE statement, but the UNDO keyword is specified. The
database determines most of the attributes of the undo tablespace,
but you can specify the DATAFILE clause.
This example creates the undotbs_02 undo tablespace with the
AUTOEXTEND option:
CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE
AUTOEXTEND ON;
You can create more than one undo tablespace, but only one of them
can be active at any one time.
Altering an Undo Tablespace

Undo tablespaces are altered using the ALTER TABLESPACE statement.
However, since most aspects of undo tablespaces are system
managed, you need only be concerned with the following actions:
Adding a datafile
Renaming a datafile
Bringing a datafile online or taking it offline
Beginning or ending an open backup on a datafile
Enabling and disabling undo retention guarantee
These are also the only attributes you are permitted to alter.
If an undo tablespace runs out of space, or you want to prevent it from
doing so, you can add more files to it or resize existing datafiles.
The following example adds another datafile to undo tablespace
undotbs_01:
ALTER TABLESPACE undotbs_01
ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND ON
NEXT 1M
MAXSIZE UNLIMITED;
You can use the ALTER DATABASE...DATAFILE statement to resize or
extend a datafile.
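As a sketch (the datafile name follows the examples above and the sizes are illustrative, not recommendations), a resize might look like this:

```sql
-- Illustrative only: resize an undo datafile to a fixed size,
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' RESIZE 200M;

-- or let it grow on demand up to a ceiling.
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;
```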
Dropping an Undo Tablespace
Use the DROP TABLESPACE statement to drop an undo tablespace. The
following example drops the undo tablespace undotbs_01:
DROP TABLESPACE undotbs_01;
An undo tablespace can only be dropped if it is not currently used by
any instance. If the undo tablespace contains any outstanding
transactions (for example, a transaction died but has not yet been
recovered), the DROP TABLESPACE statement fails. However, since
DROP TABLESPACE drops an undo tablespace even if it contains
unexpired undo information (within retention period), you must be
careful not to drop an undo tablespace if undo information is needed
by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP
TABLESPACE...INCLUDING CONTENTS. All contents of the undo
tablespace are removed.
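Before issuing the DROP, it can help to confirm that no transactions are still bound to undo segments in the tablespace. A sketch, using the tablespace name from the example above:

```sql
-- Sketch: list active transactions using undo segments in the
-- tablespace about to be dropped (expect zero rows before DROP).
SELECT t.xidusn, t.status
  FROM v$transaction t
  JOIN dba_rollback_segs r ON r.segment_id = t.xidusn
 WHERE r.tablespace_name = 'UNDOTBS_01';
```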
Switching Undo Tablespaces
You can switch from using one undo tablespace to another. Because
the UNDO_TABLESPACE initialization parameter is a dynamic
parameter, the ALTER SYSTEM SET statement can be used to assign a
new undo tablespace.
The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
Assuming undotbs_01 is the current undo tablespace, after this
command successfully executes, the instance uses undotbs_02 in place
of undotbs_01 as its undo tablespace.
If any of the following conditions exist for the tablespace being
switched to, an error is reported and no switching occurs:
The tablespace does not exist
The tablespace is not an undo tablespace
The tablespace is already being used by another instance (in a RAC
environment only)
The database is online while the switch operation is performed, and
user transactions can be executed while this command is being
executed. When the switch operation completes successfully, all
transactions started after the switch operation began are assigned to
transaction tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo
tablespace to commit. If there are any pending transactions in the old
undo tablespace, the old undo tablespace enters into a PENDING
OFFLINE mode (status). In this mode, existing transactions can
continue to execute, but undo records for new user transactions cannot
be stored in this undo tablespace.

An undo tablespace can exist in this PENDING OFFLINE mode even
after the switch operation completes successfully. A PENDING OFFLINE
undo tablespace cannot be used by another instance, nor can it be
dropped. Eventually, after all active transactions have committed, the
undo tablespace automatically goes from the PENDING OFFLINE mode
to the OFFLINE mode. From then on, the undo tablespace is available
for other instances (in an Oracle Real Application Cluster environment).
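One way to observe the PENDING OFFLINE state described above is to query DBA_ROLLBACK_SEGS for the old tablespace. A sketch:

```sql
-- Sketch: after switching undo tablespaces, check the status of the
-- old tablespace's undo segments. Segments stay PENDING OFFLINE while
-- transactions started before the switch remain open.
SELECT segment_name, status
  FROM dba_rollback_segs
 WHERE tablespace_name = 'UNDOTBS_01';
```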
If the parameter value for UNDO_TABLESPACE is set to '' (two single
quotes), then the current undo tablespace is switched out and the next
available undo tablespace is switched in. Use this statement with care
because there may be no undo tablespace available.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
Establishing User Quotas for Undo Space
The Oracle Database Resource Manager can be used to establish user
quotas for undo space. The Database Resource Manager directive
UNDO_POOL allows DBAs to limit the amount of undo space consumed
by a group of users (resource consumer group).
You can specify an undo pool for each consumer group. An undo pool
controls the amount of total undo that can be generated by a
consumer group. When the total undo generated by a consumer group
exceeds its undo limit, the current UPDATE transaction generating the
undo is terminated. No other members of the consumer group can
perform further updates until undo space is freed from the pool.
When no UNDO_POOL directive is explicitly defined, users are allowed
unlimited undo space.
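A hedged PL/SQL sketch of setting an UNDO_POOL directive follows; the plan name and consumer group are hypothetical and must already exist, and the quota is expressed in kilobytes:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'my_plan',      -- hypothetical plan name
    group_or_subplan => 'dev_group',    -- hypothetical consumer group
    comment          => 'Limit undo generated by developers',
    undo_pool        => 10240);         -- undo quota in KB
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```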
Migrating to Automatic Undo Management
If you are currently using rollback segments to manage undo space,
Oracle strongly recommends that you migrate your database to
automatic undo management. Oracle Database provides a function
that provides information on how to size your new undo tablespace
based on the configuration and usage of the rollback segments in your
system. DBA privileges are required to execute this function:
SET SERVEROUTPUT ON
DECLARE
utbsiz_in_MB NUMBER;
BEGIN
utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size: '
|| utbsiz_in_MB || ' MB');
end;
/
The function returns the sizing information (in megabytes) directly.
Viewing Information About Undo
This section lists views that are useful for viewing information about
undo space in the automatic undo management mode and provides
some examples. In addition to views listed here, you can obtain
information from the views available for viewing tablespace and
datafile information.
Oracle Database also provides proactive help in managing tablespace
disk space use by alerting you when tablespaces run low on available
space.
In addition to the proactive undo space alerts, Oracle Database also
provides alerts if your system has long-running queries that cause
SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query
alert is issued at most once every 24 hours. When the alert is
generated, you can check the Undo Advisor Page of Enterprise
Manager to get more information about the undo tablespace.
The following dynamic performance views are useful for obtaining
space information about the undo tablespace:

V$UNDOSTAT
Contains statistics for monitoring and tuning undo space. Use this
view to help estimate the amount of undo space required for the
current workload. The database also uses this information to help
tune undo usage in the system. This view is meaningful only in
automatic undo management mode.
V$ROLLSTAT
For automatic undo management mode, reflects the behavior of the
undo segments in the undo tablespace.
V$TRANSACTION
Contains undo segment information.
DBA_UNDO_EXTENTS
Shows the status and size of each extent in the undo tablespace.
DBA_HIST_UNDOSTAT
Contains statistical snapshots of V$UNDOSTAT information.
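For example, a quick space summary can be taken from DBA_UNDO_EXTENTS (a sketch; requires DBA privileges on a live instance):

```sql
-- Sketch: undo space broken down by extent status
-- (ACTIVE / UNEXPIRED / EXPIRED).
SELECT status, COUNT(*) AS extents, SUM(bytes)/1024/1024 AS mb
  FROM dba_undo_extents
 GROUP BY status;
```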

The V$UNDOSTAT view is useful for monitoring the effects of
transaction execution on undo space in the current instance. Statistics
are available for undo space consumption, transaction concurrency,
the tuning of undo retention, and the length and SQL ID of
long-running queries in the instance.
Each row in the view contains statistics collected in the instance for a
ten-minute interval. The rows are in descending order by the
BEGIN_TIME column value. Each row belongs to the time interval
marked by (BEGIN_TIME, END_TIME). Each column represents the data
collected for the particular statistic in that time interval. The first row
of the view contains statistics for the (partial) current time period. The
view contains a total of 576 rows, spanning a 4-day cycle.
The following example shows the results of a query on the
V$UNDOSTAT view.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS')
BEGIN_TIME,
TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM v$UNDOSTAT WHERE rownum <= 144;
BEGIN_TIME          END_TIME            UNDOTSN UNDOBLKS TXNCOUNT MAXCON
------------------- ------------------- ------- -------- -------- ------
10/28/2004 14:25:12 10/28/2004 14:32:17       8       74 12071108      3
10/28/2004 14:15:12 10/28/2004 14:25:12       8       49 12070698      2
10/28/2004 14:05:12 10/28/2004 14:15:12       8      125 12070220      1
10/28/2004 13:55:12 10/28/2004 14:05:12       8       99 12066511      3
...
10/27/2004 14:45:12 10/27/2004 14:55:12       8       15 11831676      1
10/27/2004 14:35:12 10/27/2004 14:45:12       8      154 11831165      2
144 rows selected.
The preceding example shows how undo space was consumed in the
system during the 24 hours preceding 14:35:12 on 10/27/2004.
22. Oracle DBA interview questions (not found anywhere else)
What are your day-to-day activities?
What types of ORA- errors do you get?
Difference between physical and logical structure?
Which views show the growth of a tablespace, and how do you see the
growth of the database?
What is the block size of your database and how do you see it?
What is the size of an extent, and how do you see which extents are in use?
What is the difference between a segment and an extent?
If I have a table with one column, what is the size of its segment,
extent, and block?
How do you identify the growth of extents?
Can I decrease the size of an extent?
What are PCTINCREASE, PCTFREE, and PCTUSED?
At database startup, which process reads the blocks?
When I issue STARTUP, which background process reads the init.ora file?

Which background process reads the control file?


How can you find the size of the control file?
Can you start up and shut down the database from a client?
What happens when I issue STARTUP NOMOUNT?
If the database is in NOARCHIVELOG mode, can I kill the archiver process?
How can I find the size of the redo log buffer?
What is the size of your log file, and how do you see it? Why did you
choose that size?
What are the disadvantages of locally managed tablespaces (LMTs)?
How do you add a tempfile, and what happens? (steps)
What are the situations when the control file is updated?
What is your database size? And what is the difference between the
database size and the size of the exported dump file?
What is a crash?
What is Oracle?
What is a user?
What is a schema?
How do you define an object?
In how many ways can I start up a database?
During startup, at what stage does the checkpoint come into the picture?
At which stage does roll forward come into the picture?
Which background process talks to the listener?
Who else talks to PMON?
How can I find the size of PMON?

If I shut down once with SHUTDOWN IMMEDIATE and once with SHUTDOWN
ABORT, how can I tell which one brought down the database?
Which background process writes to the log buffer?
What does the shared pool contain?
23. Oracle DBA interview questions (not found anywhere else)

1. What is your OS?


2. What is your database version?
3. What is the other version you used?
4. Which parameter tells you it is an undo tablespace?
5. What is the methodology of an Oracle installation?
6. Where does the oracle software exist?
7. How many sessions connect to your database?
8. How many processes are created? (If another session connects
through the same process, will a new process be created or not?)
9. What is the front-end software you use?
10. How do you connect to the database? (What do you mean by port and
thin client?)
11. How do you see which processes are running at the OS level?
12. What you mean by virtual memory?
13. How do you find the location of control file?
14. What is the difference between the GRANT option and the ADMIN option?
15. Which will create more undo segments?
16. How does a transaction know it has to read from the rollback segment?

17. What do you mean by "snapshot too old"?


18. Difference between rollback segments and undo segments?
19. How do you kill a session? ( steps and command)
20. What is your backup strategy?
21. What is hot backup and how will it function?
22. What does it mean to open the database with RESETLOGS or NORESETLOGS?
23. What is a log sequence?
24. When do we use RESETLOGS and NORESETLOGS?
25. Explain about the architecture of database?
26. Sequence of startup and shutdown?
27. What is log feature?
28. What is a free list?
29. What happens when I export with COMPRESS=Y?
30. What does IGNORE=Y do during import?
31. Explain about cloning steps?
32. What is a transportable tablespace?
33. What are the spfile and pfile?
34. When the database starts up, what creates the PGA and SGA?
35. What is jump start installation?
36. What is the GUI tool used to install Oracle software?
37. How to change the user password?
38. If a user is dropped, what is the effect on the tablespace?
39. What happens if a tablespace is dropped?

40. Are the contents of the datafiles dropped?


41. How do you drop a database?
42. Can I drop a database while connected to that database?
43. What is the use of a read-only database?
44. How do you know the size of the log file?
45. What happens when a log switch occurs?
46. What happens in the database buffer cache when I drop a tablespace?
47. Which script is used to recompile invalid objects?
48. How do you compile a view?
49. How do you compile a table?
50. What is a checkpoint?
51. What is the difference between a synonym and a view? (Suppose I
create a view as SELECT * FROM a table.)
52. What happens to the redo log when the database is in backup mode?
53. What happens to the control file when a checkpoint occurs?
24. Kernel parameter settings in Oracle
This article is to define the default kernel parameter settings for the
Linux Intel Operating system running Oracle 9.X Enterprise Edition.
Kernel Parameters:
==================
Oracle9i uses UNIX resources such as shared memory, swap space, and
semaphores extensively for interprocess communication. If your kernel
parameter settings are insufficient for Oracle9i, you will experience
problems during installation and instance startup. The greater the
amount of data you can store in memory, the faster your database will
operate. In addition, by maintaining data in memory, the UNIX kernel
reduces disk I/O activity.

Use the ipcs command to obtain a list of the system's current shared
memory and
semaphore segments, and their identification number and owner.
You can modify the kernel parameters by using the /proc file system.
To modify kernel parameters using the /proc file system:
1.Log in as root user.
2.Change to the /proc/sys/kernel directory.
3. Review the current semaphore parameter values in the sem file using
the cat or more utility. For example:
# cat sem
The output lists, in order, the values of the SEMMSL, SEMMNS, SEMOPM,
and SEMMNI parameters. The following example shows how the output
will appear.
250 32000 32 128
In the preceding example, 250 is the value of the SEMMSL parameter,
32000 is
the value of the SEMMNS parameter, 32 is the value of the SEMOPM
parameter, and
128 is the value of the SEMMNI parameter.
4.Modify the parameter values using the following command:
# echo SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value >
sem
In the preceding command, all parameters must be entered in order.
5.Review the current shared memory parameters using the cat or more
utility.
For example,
# cat shared_memory_parameter
In the preceding example, the shared_memory_parameter is either the
SHMMAX or
SHMMNI parameter. The parameter name must be entered in
lowercase letters.
6.Modify the shared memory parameter using the echo utility. For
example,
to modify the SHMMAX parameter, enter the following:
# echo 2147483648 > shmmax
7.Write a script to initialize these values during system startup and
include
the script in your system init files.
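The value order read in step 3 can be parsed with a plain `read`. The following sketch uses a sample file so it runs without root privileges; on a real system you would point SEM_FILE at /proc/sys/kernel/sem instead (the sample values match the example output above):

```shell
# Sketch: parse the four semaphore values in the order they appear in
# /proc/sys/kernel/sem (SEMMSL SEMMNS SEMOPM SEMMNI). A sample file
# stands in for the real one so this can run unprivileged.
SEM_FILE=${SEM_FILE:-/tmp/sem.sample}
printf '250\t32000\t32\t128\n' > "$SEM_FILE"

# read splits on whitespace (spaces or tabs), matching the file format.
read SEMMSL SEMMNS SEMOPM SEMMNI < "$SEM_FILE"
echo "SEMMSL=$SEMMSL SEMMNS=$SEMMNS SEMOPM=$SEMOPM SEMMNI=$SEMMNI"
```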
See Also:
For more information on script files and init files, refer to your system
vendor's documentation.
Refer to the following table to determine if your system shared memory
and semaphore kernel parameters are set high enough for Oracle9i. The
parameters in the following table are the minimum values required to
run Oracle9i with a single database instance.
SEMMNI=100
- Defines the maximum number of semaphore sets in the entire system.
SEMMNS=256
- Defines the maximum number of semaphores on the system. This setting
is a minimum recommended value, for initial installation only. The
SEMMNS parameter should be set to the sum of the PROCESSES parameter
for each Oracle database, adding the largest one twice, and then
adding an additional 10 for each database.
SEMMSL=100
- Defines the maximum number of semaphores for each Oracle database.
The SEMMSL setting should be 10 plus the largest PROCESSES parameter
of any Oracle database on the system.
SEMOPM=100
- Defines the maximum number of operations per semop call.
SEMVMX=32767
- Defines the maximum value of a semaphore.
SHMMAX=2147483648
- Defines the maximum allowable size of a single shared memory
segment. The SHMMAX parameter does not affect how much shared memory
is used or needed by Oracle9i, the operating system, or the operating
system kernel. A common starting point is one-half the size of your
system's physical memory. Check your system for additional
restrictions.
SHMMIN=1
- Defines the minimum allowable size of a single shared memory
segment.
SHMMNI=100
- Defines the maximum number of shared memory segments in the
entire system.
SHMSEG=4096
- Defines the maximum number of shared memory segments one
process can attach.
Note: These are minimum kernel requirements for Oracle9i. If you have
previously tuned your kernel parameters to levels equal to or higher
than these values, continue to use the higher values. Values written
directly to the /proc file system take effect immediately but are lost
at reboot; changes placed only in startup files take effect at the
next system restart.