
/***********************************************************************/

/* Document : Oracle 8i,9i,10g queries, information, and tips */


/* Doc. Version : 58 */
/* File : oracle9i10g.txt */
/* Date : 23-05-2008 */
/* Content : Just a series of handy DBA queries. */
/* Compiled by : Albert */
/***********************************************************************/

CONTENTS:

0. Common data dictionary queries for sessions, locks, performance etc..


1. DATA DICTIONARY QUERIES regarding files, tablespaces, logs:
2. NOTES ON PERFORMANCE:
3. Data dictionary queries regarding performance:
4. IMP and EXP, 10g IMPDP and EXPDP, and SQL*Loader Examples
5. Add, Move AND Size Datafiles,logfiles, create objects etc..:
6. Install Oracle 9.2 on Solaris:
7. install Oracle 9i on Linux:
8. Install Oracle 9.2.0.2 on OpenVMS:
9. Install Oracle 9.2.0.1 on AIX
9. Installation Oracle 8i - 9i:
10. CONSTRAINTS:
11. DBMS_JOB and scheduled Jobs:
12. Net8,9,10 / SQLNet:
13. Data dictionary queries: rollback segments:
14. Data dictionary queries regarding security, permissions:
15. INIT.ORA parameters:
16. Snapshots:
17. Triggers:
19. BACKUP RECOVERY, TROUBLESHOOTING:
20. TRACING:
21. Miscellaneous:
22. DBA% and v$ views
23. TUNING:
24. RMAN:
25. UPGRADE AND MIGRATION
26. Some info on Rdb:
27. Some info on IFS
28. Some info on 9iAS rel. 2
29 - 35 9iAS configurations and troubleshooting
30. BLOBS
31. BLOCK CORRUPTION
32. iSQL*Plus and EM 10g
33. ADDM
34. ASM and 10g RAC
35. CDC and Streams
36. X$ Tables

===========================================================================================
0. QUICK INFO/VIEWS ON SESSIONS, LOCKS, AND UNDO/ROLLBACK INFORMATION IN A SINGLE INSTANCE:
===========================================================================================

SINGLE INSTANCE QUERIES:
========================

-- ---------------------------
-- 0.1 QUICK VIEW ON SESSIONS:
-- ---------------------------

SELECT substr(username, 1, 10), osuser, sql_address, to_char(logon_time, 'DD-MM-YYYY;HH24:MI'),
       sid, serial#, command, substr(program, 1, 30), substr(machine, 1, 30),
       substr(terminal, 1, 30)
FROM v$session;

SELECT sql_text, rows_processed from v$sqlarea where address='';   -- fill in an address from v$session.sql_address

-- ------------------------
-- 0.2 QUICK VIEW ON LOCKS: (use the sys.obj$ to find ID1:)
-- ------------------------

First, let's take a look at some important dictionary views with respect to locks:

SQL> desc v$lock;

Name                          Null?    Type
----------------------------- -------- --------------------
ADDR RAW(8)
KADDR RAW(8)
SID NUMBER
TYPE VARCHAR2(2)
ID1 NUMBER
ID2 NUMBER
LMODE NUMBER
REQUEST NUMBER
CTIME NUMBER
BLOCK NUMBER

This view stores all information relating to locks in the database. The
interesting columns in this view are sid (identifying the session holding
or acquiring the lock), type, and the lmode/request pair. Important
possible values of type are TM (DML or Table Lock), TX (Transaction),
MR (Media Recovery), and ST (Disk Space Transaction). Exactly one of the
lmode/request pair is 0 or 1 while the other indicates the lock mode: if
lmode is not 0 or 1, the session has acquired the lock; if request is not
0 or 1, the session is waiting to acquire it. The possible values for
lmode and request are:

1: null,
2: Row Share (SS),
3: Row Exclusive (SX),
4: Share (S),
5: Share Row Exclusive (SSX) and
6: Exclusive(X)

If the lock type is TM, the column id1 is the object's id and the name of the
object can then be queried like so:
select name from sys.obj$ where obj# = id1;
A lock type of JI indicates that a materialized view is being refreshed.
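
Putting the lmode/request semantics to work, the following query (a sketch;
adjust as needed) pairs blockers with waiters by matching locks on the same
resource (id1/id2):

SELECT h.sid "BLOCKER", w.sid "WAITER", h.type, h.id1, h.id2, h.lmode, w.request
FROM v$lock h, v$lock w
WHERE h.block = 1       -- h holds a lock that is blocking someone
AND   w.request > 0     -- w is waiting to acquire a lock
AND   h.id1 = w.id1
AND   h.id2 = w.id2;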

SQL> desc v$locked_object;

Name                          Null?    Type
----------------------------- -------- --------------------
XIDUSN NUMBER
XIDSLOT NUMBER
XIDSQN NUMBER
OBJECT_ID NUMBER
SESSION_ID NUMBER
ORACLE_USERNAME VARCHAR2(30)
OS_USER_NAME VARCHAR2(30)
PROCESS VARCHAR2(12)
LOCKED_MODE NUMBER

SQL> desc dba_waiters;

Name                          Null?    Type
----------------------------- -------- --------------------
WAITING_SESSION NUMBER
HOLDING_SESSION NUMBER
LOCK_TYPE VARCHAR2(26)
MODE_HELD VARCHAR2(40)
MODE_REQUESTED VARCHAR2(40)
LOCK_ID1 NUMBER
LOCK_ID2 NUMBER

SQL> desc v$transaction;

Name                          Null?    Type
----------------------------- -------- --------------------
ADDR RAW(8)
XIDUSN NUMBER
XIDSLOT NUMBER
XIDSQN NUMBER
UBAFIL NUMBER
UBABLK NUMBER
UBASQN NUMBER
UBAREC NUMBER
STATUS VARCHAR2(16)
START_TIME VARCHAR2(20)
START_SCNB NUMBER
START_SCNW NUMBER
START_UEXT NUMBER
START_UBAFIL NUMBER
START_UBABLK NUMBER
START_UBASQN NUMBER
START_UBAREC NUMBER
SES_ADDR RAW(8)
FLAG NUMBER
SPACE VARCHAR2(3)
RECURSIVE VARCHAR2(3)
NOUNDO VARCHAR2(3)
PTX VARCHAR2(3)
NAME VARCHAR2(256)
PRV_XIDUSN NUMBER
PRV_XIDSLT NUMBER
PRV_XIDSQN NUMBER
PTX_XIDUSN NUMBER
PTX_XIDSLT NUMBER
PTX_XIDSQN NUMBER
DSCN-B NUMBER
DSCN-W NUMBER
USED_UBLK NUMBER
USED_UREC NUMBER
LOG_IO NUMBER
PHY_IO NUMBER
CR_GET NUMBER
CR_CHANGE NUMBER
START_DATE DATE
DSCN_BASE NUMBER
DSCN_WRAP NUMBER
START_SCN NUMBER
DEPENDENT_SCN NUMBER
XID RAW(8)
PRV_XID RAW(8)
PTX_XID RAW(8)

Queries you can use in investigating locks:
===========================================

SELECT XIDUSN, OBJECT_ID, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, PROCESS
FROM v$locked_object;

SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID, l.ORACLE_USERNAME, l.LOCKED_MODE
from v$locked_object l, dba_objects d
where d.OBJECT_ID=l.OBJECT_ID;

SELECT ADDR, KADDR, SID, TYPE, ID1, ID2, LMODE, BLOCK from v$lock;

SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT s.sid, l.lmode, l.block, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       substr(s.osuser, 1, 10), substr(s.program, 1, 30), s.command
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT p.spid, s.sid, p.addr, s.paddr, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10),
       s.command, substr(s.osuser, 1, 10), substr(s.machine, 1, 10)
FROM v$session s, v$process p
WHERE s.paddr=p.addr;

SELECT sid, serial#, command, substr(username, 1, 10), osuser, sql_address, LOCKWAIT,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), substr(program, 1, 30)
FROM v$session;

SELECT sid, serial#, username, LOCKWAIT from v$session;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID ORDER BY v.SID;

SELECT * from dba_waiters;

SELECT waiting_session, holding_session, lock_type, mode_held
FROM dba_waiters;

SELECT
p.spid unix_spid,
s.sid sid,
p.addr,
s.paddr,
substr(s.username, 1, 10) username,
substr(s.schemaname, 1, 10) schemaname,
s.command command,
substr(s.osuser, 1, 10) osuser,
substr(s.machine, 1, 25) machine
FROM v$session s, v$process p
WHERE s.paddr=p.addr
ORDER BY p.spid;

Usage of v$session_longops:
===========================

SQL> desc v$session_longops;

SID              NUMBER        Session identifier
SERIAL#          NUMBER        Session serial number
OPNAME           VARCHAR2(64)  Brief description of the operation
TARGET           VARCHAR2(64)  The object on which the operation is carried out
TARGET_DESC      VARCHAR2(32)  Description of the target
SOFAR            NUMBER        The units of work done so far
TOTALWORK        NUMBER        The total units of work
UNITS            VARCHAR2(32)  The units of measurement
START_TIME       DATE          The starting time of operation
LAST_UPDATE_TIME DATE          Time when statistics last updated
TIMESTAMP        DATE          Timestamp
TIME_REMAINING   NUMBER        Estimate (in seconds) of time remaining for the operation to complete
ELAPSED_SECONDS  NUMBER        The number of elapsed seconds from the start of operations
CONTEXT          NUMBER        Context
MESSAGE          VARCHAR2(512) Statistics summary message
USERNAME         VARCHAR2(30)  User ID of the user performing the operation
SQL_ADDRESS      RAW(4 | 8)    Used with SQL_HASH_VALUE to identify the SQL statement associated with the operation
SQL_HASH_VALUE   NUMBER        Used with SQL_ADDRESS to identify the SQL statement associated with the operation
SQL_ID           VARCHAR2(13)  SQL identifier of the SQL statement associated with the operation
QCSID            NUMBER        Session identifier of the parallel coordinator

This view displays the status of various operations that run for longer than 6
seconds (in absolute time). These operations currently
include many backup and recovery functions, statistics gathering, and query
execution, and more operations are added for every Oracle release.

To monitor query execution progress, you must be using the cost-based optimizer
and you must:
- set the TIMED_STATISTICS or SQL_TRACE parameter to true
- gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package

You can add information to this view about application-specific long-running
operations by using the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure.
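
A minimal PL/SQL sketch of that procedure (the loop body and the operation
name are made up for illustration):

DECLARE
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  NUMBER := 100;
BEGIN
  FOR i IN 1 .. l_total LOOP
    -- ... do one unit of application work here ...
    dbms_application_info.set_session_longops(
      rindex      => l_rindex,        -- keeps pointing at the same longops row
      slno        => l_slno,
      op_name     => 'Batch cleanup', -- appears in v$session_longops.opname
      sofar       => i,
      totalwork   => l_total,
      target_desc => 'demo target',
      units       => 'rows');
  END LOOP;
END;
/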

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar),
       to_char(l.totalwork), to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.time_remaining), to_char(l.elapsed_seconds),
       l.opname, l.target, l.target_desc, l.message, s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid and l.serial# = s.serial#;

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar),
       to_char(l.totalwork), to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid and l.serial# = s.serial#;

select substr(username,1,15), target, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       SOFAR, substr(MESSAGE,1,70)
from v$session_longops;

select USERNAME, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       substr(message,1,90), to_char(time_remaining)
from v$session_longops;

9i and 10G note:
================

Oracle has a view inside the Oracle data buffers. The view is called v$bh, and
while v$bh was originally
developed for Oracle Parallel Server (OPS), the v$bh view can be used to show the
number of data blocks
in the data buffer for every object type in the database.

The following query is especially exciting because you can now see what objects
are consuming
the data buffer caches. In Oracle9i, you can use this information to segregate
tables to separate
RAM buffers with different blocksizes.

Here is a sample query that shows data buffer utilization for individual objects
in the database.
Note that this script uses an Oracle9i scalar sub-query and will not work in
pre-Oracle9i systems unless you comment out column c3.

column c0 heading 'Owner' format a15
column c1 heading 'Object|Name' format a30
column c2 heading 'Number|of|Buffers' format 999,999
column c3 heading 'Percentage|of Data|Buffer' format 999,999,999

select
owner c0,
object_name c1,
count(1) c2,
(count(1)/(select count(*) from v$bh)) *100 c3
from
dba_objects o,
v$bh bh
where
o.object_id = bh.objd
and
o.owner not in ('SYS','SYSTEM','AURORA$JIS$UTILITY$')
group by
owner,
object_name
order by
count(1) desc
;

-- -----------------------------
-- 0.3 QUICK VIEW ON TEMP USAGE:
-- -----------------------------

select total_extents, used_extents, current_users, tablespace_name
from v$sort_segment;

select username, user, sqladdr, extents, tablespace from v$sort_usage;

SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2),
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

-- --------------------------------
-- 0.4 QUICK VIEW ON UNDO/ROLLBACK:
-- --------------------------------

SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS", log_io, phy_io
FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn
AND t.ses_addr = s.saddr;

SELECT substr(n.name, 1, 10), s.writes, s.gets, s.waits, s.wraps, s.extents, s.status,
       s.optsize, s.rssize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

SELECT substr(r.name, 1, 10) "RBS", s.sid, s.serial#, s.taddr, t.addr,
       substr(s.username, 1, 10) "USER", t.status,
       t.cr_get, t.phy_io, t.used_ublk, t.noundo,
       substr(s.program, 1, 15) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr
AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io;

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

select 'FREE',count(*) from sys.fet$
union
select 'USED',count(*) from sys.uet$;

-- Quick view active transactions

SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM V$ROLLNAME, V$ROLLSTAT
WHERE V$ROLLNAME.USN = V$ROLLSTAT.USN;

SELECT to_char(BEGIN_TIME, 'DD-MM-YYYY;HH24:MI'), to_char(END_TIME, 'DD-MM-YYYY;HH24:MI'),
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM V$UNDOSTAT WHERE trunc(BEGIN_TIME)=trunc(SYSDATE);

select TO_CHAR(MIN(Begin_Time),'DD-MON-YYYY HH24:MI:SS') "Begin Time",
       TO_CHAR(MAX(End_Time),'DD-MON-YYYY HH24:MI:SS') "End Time",
       SUM(Undoblks) "Total Undo Blocks Used",
       SUM(Txncount) "Total Num Trans Executed",
       MAX(Maxquerylen) "Longest Query(in secs)",
       MAX(Maxconcurrency) "Highest Concurrent TrCount",
       SUM(Ssolderrcnt),
       SUM(Nospaceerrcnt)
from V$UNDOSTAT;

SELECT used_urec
FROM v$session s, v$transaction t
WHERE s.audsid=sys_context('userenv', 'sessionid')
AND s.taddr = t.addr;

(used_urec = Used Undo records)

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID ORDER BY v.SID;

-- --------------------------------
-- 0.5 SOME EXPLANATIONS:
-- --------------------------------

-- explanation of "COMMAND":

 1: CREATE TABLE        2: INSERT              3: SELECT            4: CREATE CLUSTER
 5: ALTER CLUSTER       6: UPDATE              7: DELETE            8: DROP CLUSTER
 9: CREATE INDEX       10: DROP INDEX         11: ALTER INDEX      12: DROP TABLE
13: CREATE SEQUENCE    14: ALTER SEQUENCE     15: ALTER TABLE      16: DROP SEQUENCE
17: GRANT              18: REVOKE             19: CREATE SYNONYM   20: DROP SYNONYM
21: CREATE VIEW        22: DROP VIEW          23: VALIDATE INDEX   24: CREATE PROCEDURE
25: ALTER PROCEDURE    26: LOCK TABLE         27: NO OPERATION     28: RENAME
29: COMMENT            30: AUDIT              31: NOAUDIT          32: CREATE DATABASE LINK
33: DROP DATABASE LINK 34: CREATE DATABASE    35: ALTER DATABASE   36: CREATE ROLLBACK SEGMENT
37: ALTER ROLLBACK SEGMENT                    38: DROP ROLLBACK SEGMENT
39: CREATE TABLESPACE  40: ALTER TABLESPACE   41: DROP TABLESPACE  42: ALTER SESSION
43: ALTER USER         44: COMMIT             45: ROLLBACK         46: SAVEPOINT
47: PL/SQL EXECUTE     48: SET TRANSACTION    49: ALTER SYSTEM SWITCH LOG
50: EXPLAIN            51: CREATE USER        52: CREATE ROLE      53: DROP USER
54: DROP ROLE          55: SET ROLE           56: CREATE SCHEMA    57: CREATE CONTROL FILE
58: ALTER TRACING      59: CREATE TRIGGER     60: ALTER TRIGGER    61: DROP TRIGGER
62: ANALYZE TABLE      63: ANALYZE INDEX      64: ANALYZE CLUSTER  65: CREATE PROFILE
66: DROP PROFILE       67: ALTER PROFILE      68: DROP PROCEDURE
70: ALTER RESOURCE COST                       71: CREATE SNAPSHOT LOG
72: ALTER SNAPSHOT LOG 73: DROP SNAPSHOT LOG  74: CREATE SNAPSHOT  75: ALTER SNAPSHOT
76: DROP SNAPSHOT      79: ALTER ROLE         85: TRUNCATE TABLE   86: TRUNCATE CLUSTER
88: ALTER VIEW         91: CREATE FUNCTION    92: ALTER FUNCTION   93: DROP FUNCTION
94: CREATE PACKAGE     95: ALTER PACKAGE      96: DROP PACKAGE     97: CREATE PACKAGE BODY
98: ALTER PACKAGE BODY 99: DROP PACKAGE BODY
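
A handy way to apply these codes in a session query (a sketch; the decode
list covers only a few common values, the rest fall through as a number):

SELECT sid, serial#, username,
       decode(command, 0, 'NONE', 2, 'INSERT', 3, 'SELECT', 6, 'UPDATE',
                       7, 'DELETE', 26, 'LOCK TABLE', 44, 'COMMIT',
                       45, 'ROLLBACK', to_char(command)) "COMMAND"
FROM v$session
WHERE command <> 0;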

-- explanation of locks:

Locks, as decoded from the lmode column:

decode(b.lmode,
       0, 'None',          /* Mon Lock equivalent */
       1, 'Null',          /* N */
       2, 'Row-S (SS)',    /* L */
       3, 'Row-X (SX)',    /* R */
       4, 'Share',         /* S */
       5, 'S/Row-X (SRX)', /* C */
       6, 'Exclusive',     /* X */
       to_char(b.lmode))

TX: enqueue, waiting
TM: table lock, protects the object against DDL
MR: Media Recovery

A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
a queuing mechanism so that other sessions can wait for the transaction to
complete.

TM per-table locks are acquired during the execution of a transaction
when referencing a table with a DML statement, so that the object is
not dropped or altered during the execution of the transaction,
if and only if the dml_locks parameter is non-zero.
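
A quick two-session illustration of the TX queuing described above (the emp
table and empno value are just examples):

-- Session 1 (after the update it holds TX in mode 6 and TM in mode 3):
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7369;

-- Session 2 (hangs: it requests the TX lock held by session 1):
UPDATE emp SET sal = sal * 1.2 WHERE empno = 7369;

-- Session 3 (the DBA) can now see the wait:
SELECT waiting_session, holding_session, lock_type, mode_held
FROM dba_waiters;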

LOCKS: locks on user objects, such as tables and rows
LATCH: locks on system objects, such as shared data structures in memory and
data dictionary rows

LOCKS - shared or exclusive
LATCH - always exclusive

UL = user locks, placed by application code using, for example, the DBMS_LOCK package

DML LOCKS: data manipulation: table locks, row locks
DDL LOCKS: preserve the structure of an object (no simultaneous DML/DDL statements)

DML locks:

row lock (TX): for rows (insert, update, delete)
row lock plus table lock: a row lock that also prevents DDL statements
table lock (TM): taken automatically on insert, update, delete, to prevent
DDL against the table

table lock modes:
S:   share lock
RS:  row share
RSX: row share exclusive
RX:  row exclusive
X:   exclusive (other transactions can only SELECT)


in V$LOCK lmode column:

0, None
1, Null (NULL)
2, Row-S (SS)
3, Row-X (SX)
4, Share (S)
5, S/Row-X (SSX)
6, Exclusive (X)

Internal Implementation of Oracle Locks (Enqueue)

Oracle server uses locks to provide concurrent access to shared resources, whereas
it uses latches to provide exclusive and short-term access to memory structures
inside the SGA. Latches also prevent more than one process from executing the same
piece of code that another process might be executing. A latch is also a simple
lock, which provides serialized, exclusive-only access to a memory area in the SGA.
Oracle doesn't use latches to provide shared access to resources because that would
increase CPU usage. Latches are used for big memory structures and allow the
operations required for locking the sub-structures. Shared resources can be tables,
transactions, redo threads, etc. Enqueues can be local or global: in a single
instance, enqueues are local to that instance. There are also global enqueues, like
the ST enqueue, which must be held before any space transaction can occur on any
tablespace in RAC. ST enqueues are held only for dictionary-managed tablespaces.
These Oracle locks are generally known as enqueues because whenever a session
requests a lock on any shared resource structure, its lock data structure is queued
onto one of the linked lists attached to that resource structure (the resource
structure is discussed later).

Before proceeding further with this topic, here is a brief overview of Oracle
locks. Oracle locks can be applied to compound and simple objects like tables
and the cache buffer. Locks can be held in different modes: shared, exclusive,
null, sub-shared, sub-exclusive, and shared sub-exclusive. Depending on the type
of object, different modes apply. For example, for a compound object like a table
with rows, all the modes mentioned above could be applicable, whereas for simple
objects only the first three apply. These lock modes don't have any importance of
their own; what matters is how they are used by the subsystem. The lock modes
(compatibility between locks) define how a session will get a lock on an object.

-- Explanation of Waits:

SQL> desc v$system_event;

Name
------------------------
EVENT
TOTAL_WAITS
TOTAL_TIMEOUTS
TIME_WAITED
AVERAGE_WAIT
TIME_WAITED_MICRO

v$system_event
This view displays the count (total_waits) of all wait events since startup of the
instance. If timed_statistics is set to true, the sum of the wait times for all
events is also displayed in the column time_waited. The unit of time_waited is one
hundredth of a second. Since 10g, an additional column (time_waited_micro) measures
wait times in millionths of a second. total_waits where event='buffer busy waits'
is equal to the sum of the counts in v$waitstat. v$enqueue_stat can be used to
break down waits on the enqueue wait event. While this view totals all events for
the instance, v$session_event shows the same wait events broken down per session.

select event, total_waits, time_waited
from v$system_event
where event like '%file%'
order by total_waits desc;

column c1 heading 'Event|Name' format a30
column c2 heading 'Total|Waits' format 999,999,999
column c3 heading 'Seconds|Waiting' format 999,999
column c4 heading 'Total|Timeouts' format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999

ttitle 'System-wide Wait Analysis|for current wait events'

select
event c1,
total_waits c2,
time_waited / 100 c3,
total_timeouts c4,
average_wait /100 c5
from
sys.v_$system_event
where
event not in (
'dispatcher timer',
'lock element cleanup',
'Null event',
'parallel query dequeue wait',
'parallel query idle wait - Slaves',
'pipe get',
'PL/SQL lock timer',
'pmon timer',
'rdbms ipc message',
'slave wait',
'smon timer',
'SQL*Net break/reset to client',
'SQL*Net message from client',
'SQL*Net message to client',
'SQL*Net more data to client',
'virtual circuit status',
'WMON goes to sleep'
)
AND
event not like 'DFS%'
and
event not like '%done%'
and
event not like '%Idle%'
AND
event not like 'KXFX%'
order by
c2 desc
;

Create table beg_system_event as select * from v$system_event;
-- Run workload through system or user task
Create table end_system_event as select * from v$system_event;
-- Issue SQL to determine true wait events, then clean up:
drop table beg_system_event;
drop table end_system_event;

SELECT b.event,
(e.total_waits - b.total_waits) total_waits,
(e.total_timeouts - b.total_timeouts) total_timeouts,
(e.time_waited - b.time_waited) time_waited
FROM beg_system_event b,
end_system_event e
WHERE b.event = e.event;

Cumulative info, after startup:
-------------------------------

SELECT * FROM v$system_event WHERE event = 'enqueue';

SELECT *
FROM v$sysstat
WHERE class=4;

select c.name,a.addr,a.gets,a.misses,a.sleeps,
a.immediate_gets,a.immediate_misses,a.wait_time, b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+) and a.latch# = c.latch#
order by a.latch#;

-- ---------------------------------------------------------------
-- 0.6. QUICK INFO ON HIT RATIO, SHARED POOL etc..
-- ---------------------------------------------------------------

-- Hit ratio:

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

SELECT * FROM V$SGA;

-- free memory shared pool:

SELECT * FROM v$sgastat
WHERE name = 'free memory';

-- hit ratio shared pool:

SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS",
       SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

-- finding literals in SP:

SELECT substr(sql_text,1,50) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,50)
HAVING count(*) > 30
ORDER BY 2;

-- ---------------------------------------
-- 0.7 Quick Table and object information
-- ---------------------------------------

SELECT distinct substr(t.owner, 1, 25), substr(t.table_name,1,50),
       substr(t.tablespace_name,1,20),
       t.chain_cnt, t.logging, s.relative_fno
FROM dba_tables t, dba_segments s
WHERE t.owner not in ('SYS','SYSTEM',
'OUTLN','DBSNMP','WMSYS','ORDSYS','ORDPLUGINS','MDSYS','CTXSYS','XDB')
AND t.table_name=s.segment_name
AND s.segment_type='TABLE'
AND s.segment_name like 'CI_PAY%';

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 100
AND owner not in ('SYS','SYSTEM');

SELECT segment_name, owner, tablespace_name, extents
FROM dba_segments
WHERE owner='SALES'   -- use the correct schema here
and extents > 700;

SELECT owner, substr(object_name, 1, 30), object_type, created,
       last_ddl_time, status
FROM dba_objects
WHERE owner='RM_LIVE'
AND created > SYSDATE-1;

SELECT owner, substr(object_name, 1, 30), object_type, created,
       last_ddl_time, status
FROM dba_objects
WHERE status='INVALID';

Compare 2 owners:
-----------------

select table_name from dba_tables
where owner='MIS_OWNER' and
table_name not in (SELECT table_name from dba_tables where OWNER='MARPAT');

Table and column information:
-----------------------------

select
  substr(table_name, 1, 3) schema
, table_name
, column_name
, substr(data_type,1 ,1) data_type
from
  user_tab_columns
where column_name = 'ENV_ID'
and ( table_name like 'ALG%'
   or table_name like 'STG%'
   or table_name like 'ODS%'
   or table_name like 'DWH%'
   or table_name like 'MKM%' )
order by
  decode(substr(table_name, 1, 3), 'ALG', 10, 'STG', 20, 'ODS', 30, 'DWH', 40, 'MKM', 50, 60)
, table_name
, column_id;

Check on existence of JServer:
------------------------------

select count(*) from all_objects where object_name = 'DBMS_JAVA';

should return a count of 3

-- --------------------------------------
-- 0.8 QUICK INFO ON PRODUCT INFORMATION:
-- --------------------------------------
SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM NLS_DATABASE_PARAMETERS;
SELECT * FROM NLS_SESSION_PARAMETERS;
SELECT * FROM NLS_INSTANCE_PARAMETERS;
SELECT * FROM V$OPTION;
SELECT * FROM V$LICENSE;
SELECT * FROM V$VERSION;

Oracle RDBMS releases:
----------------------

9.2.0.1 is the base release of Oracle 9i Rel. 2, the terminal release of 9i.
Normally it's patched to 9.2.0.4.
As from October, patches 9.2.0.5 and, a little later, 9.2.0.6 became available.

9.2.0.4 is patch ID 3095277.

9.0.1.4 is the terminal release for Oracle 9i Rel. 1.
8.1.7 is the terminal release for Oracle8i. Additional patchsets exist.
8.0.6 is the terminal release for Oracle8. Additional patchsets exist.
7.3.4 is the terminal release for Oracle7. Additional patchsets exist.

IS ORACLE 32BIT or 64BIT?
-------------------------

Starting with version 8, Oracle began shipping 64bit versions of its RDBMS
product on UNIX platforms that support 64bit software. IMPORTANT: 64bit Oracle
can only be installed on Operating Systems that are 64bit enabled.
In general, if Oracle is 64bit, '64bit' will be displayed on the opening banners
of Oracle executables such as 'svrmgrl', 'exp' and 'imp'. It will also be
displayed in the headers of Oracle trace files. Otherwise, if '64bit' is not
displayed at these locations, it can be assumed that Oracle is 32bit.

or

From the OS level:

% cd $ORACLE_HOME/bin
% file oracle      ...if 64bit, '64bit' will be indicated.

To verify the wordsize of a downloaded patchset:
------------------------------------------------
The filename of the downloaded patchset usually dictates which version and
wordsize of Oracle it should be applied against. For instance:
p1882450_8172_SOLARIS64.zip is the 8.1.7.2 patchset for 64bit Oracle on Solaris.
Also refer to the README that is included with the patch or patch set and this Note:

Win2k Server Certifications:


----------------------------
OS Product Certified With Version Status Addtl. Info. Components Other Install Issue
2000 10g N/A N/A Certified Yes None None None
2000 9.2 32-bit -Opteron N/A N/A Certified Yes None None None
2000 9.2 N/A N/A Certified Yes None None None
2000 9.0.1 N/A N/A Desupported Yes None N/A N/A
2000 8.1.7 (8i) N/A N/A Desupported Yes None N/A N/A
2000 8.1.6 (8i) N/A N/A Desupported Yes None N/A N/A
2000, Beta 3 8.1.5 (8i) N/A N/A Withdrawn Yes N/A N/A N/A
Solaris Server certifications:
------------------------------
Server Certifications
OS Product Certified With Version Status Addtl. Info. Components Other Install Issue
9 10g 64-bit N/A N/A Certified Yes None None None
8 10g 64-bit N/A N/A Certified Yes None None None
10 10g 64-bit N/A N/A Projected None N/A N/A N/A
9 9.2 64-bit N/A N/A Certified Yes None None None
8 9.2 64-bit N/A N/A Certified Yes None None None
10 9.2 64-bit N/A N/A Projected None N/A N/A N/A
2.6 9.2 N/A N/A Certified Yes None None None
9 9.2 N/A N/A Certified Yes None None None
8 9.2 N/A N/A Certified Yes None None None
7 9.2 N/A N/A Certified Yes None None None
10 9.2 N/A N/A Projected None N/A N/A N/A
9 9.0.1 64-bit N/A N/A Desupported Yes None N/A N/A
8 9.0.1 64-bit N/A N/A Desupported Yes None N/A N/A
2.6 9.0.1 N/A N/A Desupported Yes None N/A N/A
9 9.0.1 N/A N/A Desupported Yes None N/A N/A
8 9.0.1 N/A N/A Desupported Yes None N/A N/A
7 9.0.1 N/A N/A Desupported Yes None N/A N/A
9 8.1.7 (8i) 64-bit N/A N/A Desupported Yes None N/A N/A
8 8.1.7 (8i) 64-bit N/A N/A Desupported Yes None N/A N/A
2.6 8.1.7 (8i) N/A N/A Desupported Yes None N/A N/A
9 8.1.7 (8i) N/A N/A Desupported Yes None N/A N/A
8 8.1.7 (8i) N/A N/A Desupported Yes None N/A N/A
7 8.1.7 (8i) N/A N/A Desupported Yes None N/A N/A
everything below: desupported

Oracle clients:
---------------

Server Version
Client Version 10.1.0 9.2.0 9.0.1 8.1.7 8.1.6 8.1.5 8.0.6 8.0.5 7.3.4
10.1.0 Yes Yes Was Yes #2 No No No No No
9.2.0 Yes Yes Was Yes No No Was No No #1
9.0.1 Was Was Was Was Was No Was No Was
8.1.7 Yes Yes Was Yes Was Was Was Was Was
8.1.6 No No Was Was Was Was Was Was Was
8.1.5 No No No Was Was Was Was Was Was
8.0.6 No Was Was Was Was Was Was Was Was
8.0.5 No No No Was Was Was Was Was Was
7.3.4 No Was Was Was Was Was Was Was Was

-- -----------------------------------------------------
-- 0.9 QUICK INFO WITH REGARDS LOGS AND BACKUP RECOVERY:
-- -----------------------------------------------------

SELECT * from V$BACKUP;

SELECT file#, substr(name, 1, 30), status, checkpoint_change#   -- from the controlfile
FROM V$DATAFILE;

SELECT d.file#, d.status, d.checkpoint_change#, b.status, b.CHANGE#,
       to_char(b.TIME,'DD-MM-YYYY;HH24:MI'), substr(d.name, 1, 40)
FROM V$DATAFILE d, V$BACKUP b
WHERE d.file#=b.file#;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#   -- from the file header
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40),
       COMPLETION_TIME, FIRST_CHANGE#, FIRST_TIME
FROM V$ARCHIVED_LOG
WHERE COMPLETION_TIME > SYSDATE -2;

SELECT recid, first_change#, sequence#, next_change#
FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;   -- Which file needs recovery

-- ----------------------------------------------------------------------------
-- 0.10 QUICK INFO WITH REGARDS TO TABLESPACES, DATAFILES, REDO LOGFILES etc..:
-- -----------------------------------------------------------------------------

-- online redo log information: V$LOG, V$LOGFILE:

SELECT l.group#, l.members, l.status, l.bytes, substr(lf.member, 1, 50)
FROM V$LOG l, V$LOGFILE lf
WHERE l.group#=lf.group#;

SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, FIRST_TIME,
       to_char(FIRST_TIME, 'DD-MM-YYYY;HH24:MI')
FROM V$LOG_HISTORY;
-- WHERE SEQUENCE#

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

-- tablespace free-used:

SELECT Total.name "Tablespace Name",
       Free_space, (total_space-Free_space) Used_space, total_space
FROM
(SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
 FROM sys.dba_free_space
 GROUP BY tablespace_name
) Free,
(SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
 FROM sys.v_$datafile a, sys.v_$tablespace B
 WHERE a.ts# = b.ts#
 GROUP BY b.name
) Total
WHERE Free.Tablespace_name = Total.name;

SELECT substr(file_name, 1, 70), tablespace_name FROM dba_data_files;

----------------------------------------------
-- 0.11 AUDIT Statements:
----------------------------------------------

select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS,
       v.ROWS_PROCESSED, v.CPU_TIME, b.username
from v$sqlarea v, dba_users b
where v.FIRST_LOAD_TIME > '2008-05-12'
and v.PARSING_SCHEMA_ID=b.user_id
order by v.FIRST_LOAD_TIME;

-----------------------------------------------
-- 0.12 EXAMPLE OF DYNAMIC SQL:
-----------------------------------------------

select 'UPDATE '||t.table_name||' SET '||c.column_name||'=REPLACE('||
       c.column_name||','''',CHR(7));'
from user_tab_columns c, user_tables t
where c.table_name=t.table_name and t.num_rows>0 and c.DATA_LENGTH>10
and data_type like '%CHAR%'
ORDER BY t.table_name desc;

create public synonym EMPLOYEE for HARRY.EMPLOYEE;

select 'create public synonym '||table_name||' for CISADM.'||table_name||';'
from dba_tables where owner='CISADM';

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||table_name||' TO CISUSER;'
from dba_tables where owner='CISADM';

select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from dba_tables where owner='CISADM';

-----------------------------------------------
-- 0.13 ORACLE MOST COMMON DATATYPES:
-----------------------------------------------

Example: number as integer in comparison to smallint
----------------------------------------------------
SQL> create table a
2 (id number(3));

Table created.

SQL> create table b
2 (id smallint);

Table created.

SQL> create table c
2 (id integer);

Table created.

SQL> insert into a
2 values
3 (5);

1 row created.

SQL> insert into a
2 values
3 (999);

1 row created.

SQL> insert into a
2 values
3 (1001);
(1001)
*
ERROR at line 3:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into b
2 values
3 (5);

1 row created.

SQL> insert into b
2 values
3 (99);

1 row created.

SQL> insert into b
2 values
3 (999);

1 row created.

SQL> insert into b
2 values
3 (1001);

1 row created.

SQL> insert into b
2 values
3 (65536);

1 row created.

SQL> insert into b
2 values
3 (1048576);

1 row created.

SQL> insert into b
2 values
3 (1099511627776);

1 row created.

SQL> insert into b
2 values
3 (9.5);

1 row created.

SQL> insert into b
2 values
3 (100.23);

1 row created.

SQL> select * from b;

ID
----------
5
99
999
1001
65536
1048576
1.0995E+12
10
100

9 rows selected.

smallint is really not that "small": it maps to NUMBER(38,0), so fractional
values are rounded to whole numbers (note how 9.5 and 100.23 came back as 10
and 100 above).

SQL> insert into c
2 values
3 (5);

1 row created.

SQL> insert into c
2 values
3 (9999);

1 row created.

SQL> insert into c
2 values
3 (92.7);

1 row created.

SQL> insert into c
2 values
3 (1099511627776);

1 row created.

SQL> select * from c;

ID
----------
5
9999
93
1.0995E+12

========================
1. NOTES ON PERFORMANCE:
========================

1.1 POOLS:
==========

-- SHARED POOL:
-- ------------

A literal SQL statement is considered one which uses literals in the predicate(s)
rather than bind variables, where the value of the literal is likely to differ
between various executions of the statement.
Eg 1:
SELECT * FROM emp WHERE ename='CLARK';
is used by the application instead of
SELECT * FROM emp WHERE ename=:bind1;
The latter is regarded as a sharable SQL statement for this article, as it can be shared.
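
A quick SQL*Plus illustration of the bind-variable form (emp/ename as in the
example above):

variable bind1 varchar2(10)
exec :bind1 := 'CLARK'
SELECT * FROM emp WHERE ename = :bind1;

-- re-running with a different value reuses the same shared cursor:
exec :bind1 := 'KING'
SELECT * FROM emp WHERE ename = :bind1;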

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool then this
has to be parsed fully. Eg: Oracle has to allocate memory for the statement from
the shared pool, and check the statement syntactically and semantically, etc.
This is referred to as a hard parse and is very expensive in terms of both CPU
used and the number of latch gets performed.

--Soft Parse
If a session issues a SQL statement which is already in the shared pool AND it can
use an existing version
of that statement then this is known as a 'soft parse'.
As far as the application is concerned it has asked to parse the statement.

If two statements are textually identical but cannot be shared then these are
called 'versions' of the same statement. If Oracle matches to a statement with
many versions it has to check each version in turn to see if it is truly
identical to the statement currently being parsed. Hence high version counts
are best avoided.

The best approach to take is that all SQL should be sharable unless it is adhoc or
infrequently used SQL where
it is important to give CBO as much information as possible in order for it to
produce a good execution plan.

--Eliminating Literal SQL
If you have an existing application it is unlikely that you could eliminate all
literal SQL but you should be prepared to eliminate some if it is causing
problems. By looking at the V$SQLAREA view it is possible to see which literal
statements are good candidates for converting to use bind variables. The
following query shows SQL in the SGA where there are a large number of similar
statements:

SELECT substr(sql_text,1,40) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query is looking for different
statements whose first 40 characters are the same, which have only been executed a
few times each, and of which there are at least 30 different occurrences in the
shared pool. This query uses the idea that it is common for literal statements to
begin "SELECT col1,col2,col3 FROM table WHERE ..." with the leading portion of
each statement being the same.

--Avoid Invalidations
Certain commands change the state of cursors to INVALID because they directly
modify the context of the objects associated with those cursors. Such commands
are TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables or indexes, and grant
changes on underlying objects. The associated cursors stay in the SQLAREA, but
the next time one is referenced it must be fully reloaded and reparsed, so
global performance is impacted.

SELECT substr(sql_text, 1, 40) "SQL", invalidations
from v$sqlarea
order by invalidations DESC;

-- CURSOR_SHARING parameter (8.1.6 onwards)
<Parameter:CURSOR_SHARING> is a new parameter introduced in Oracle8.1.6. It should
be used with caution in this release.
If this parameter is set to FORCE then literals will be replaced by system
generated bind variables where possible.
For multiple similar statements which differ only in the literals used this allows
the cursors to be shared
even though the application supplied SQL uses literals.
The parameter can be set dynamically at the system or session level thus:
ALTER SESSION SET cursor_sharing = FORCE;
or
ALTER SYSTEM SET cursor_sharing = FORCE;
or it can be set in the init.ora file.
Note: As the FORCE setting causes system generated bind variables to be used in
place of literals, a different execution
plan may be chosen by the cost based optimizer (CBO) as it no longer has the
literal values available to it
when costing the best execution plan.
In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes
statements that may differ
in some literals, but are otherwise identical, to share a cursor, unless the
literals affect either the meaning
of the statement or the degree to which the plan is optimized. This enhancement
improves the usability of the parameter
for situations where FORCE would normally cause a different, undesired execution
plan.
With CURSOR_SHARING=SIMILAR, Oracle determines which literals are "safe" for
substitution with bind variables.
This will result in some SQL not being shared in an attempt to provide a more
efficient execution plan.
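
A small demonstration sketch of the FORCE setting (the literal queries are just
examples; inspect v$sqlarea to see the rewritten text):

ALTER SESSION SET cursor_sharing = FORCE;

SELECT * FROM emp WHERE ename = 'CLARK';
SELECT * FROM emp WHERE ename = 'KING';

-- both statements should now share one cursor with a system-generated bind:
SELECT sql_text, executions
FROM v$sqlarea
WHERE sql_text LIKE 'SELECT * FROM emp%';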

-- SESSION_CACHED_CURSORS parameter
<Parameter:SESSION_CACHED_CURSORS> is a numeric parameter which can be set at
instance level or at session level
using the command:
ALTER SESSION SET session_cached_cursors = NNN;
The value NNN determines how many 'cached' cursors there can be in your session.
Whenever a statement is parsed Oracle first looks at the statements pointed to by
your private session cache -
if a sharable version of the statement exists it can be used. This provides a
shortcut access to frequently parsed
statements that uses less CPU and uses far fewer latch gets than a soft or hard
parse.
To get placed in the session cache the same statement has to be parsed 3 times
within the same cursor - a pointer to the
shared cursor is then added to your session cache. If all session cache cursors
are in use then the least recently
used entry is discarded.
If you do not have this parameter set already then it is advisable to set it to a
starting value of about 50.
The statistics section of the bstat/estat report includes a value for 'session
cursor cache hits' which shows
if the cursor cache is giving any benefit. The size of the cursor cache can then
be increased or decreased as necessary.
SESSION_CACHED_CURSORS are particularly useful with Oracle Forms applications when
forms are frequently opened and closed.
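
To check whether the session cursor cache is paying off, compare the 'session
cursor cache hits' statistic to total parses (a sketch for the current session):

SELECT n.name, s.value
FROM v$statname n, v$mystat s
WHERE n.statistic# = s.statistic#
AND n.name IN ('session cursor cache hits', 'parse count (total)');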

-- SHARED_POOL_RESERVED_SIZE parameter
There are quite a few notes explaining <Parameter:SHARED_POOL_RESERVED_SIZE>
already in circulation. The parameter
was introduced in Oracle 7.1.5 and provides a means of reserving a portion of the
shared pool for large memory allocations.
The reserved area comes out of the shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to about
10% of SHARED_POOL_SIZE unless either
the shared pool is very large OR SHARED_POOL_RESERVED_MIN_ALLOC has been set lower
than the default value:

If the shared pool is very large then 10% may waste a significant amount of memory
when a few Mb will suffice.
If SHARED_POOL_RESERVED_MIN_ALLOC has been lowered then many space requests may be
eligible to be satisfied
from this portion of the shared pool and so 10% may be too little.
It is easy to monitor the space usage of the reserved area using the
<View:V$SHARED_POOL_RESERVED>
which has a column FREE_SPACE.
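
A monitoring sketch for the reserved area; rising REQUEST_MISSES or
REQUEST_FAILURES suggest the reserved pool is undersized:

SELECT free_space, avg_free_size, used_space,
       requests, request_misses, request_failures
FROM v$shared_pool_reserved;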

-- SHARED_POOL_RESERVED_MIN_ALLOC parameter
In Oracle8i this parameter is hidden.
SHARED_POOL_RESERVED_MIN_ALLOC should generally be left at its default value,
although in certain cases values
of 4100 or 4200 may help relieve some contention on a heavily loaded shared pool.

-- SHARED_POOL_SIZE parameter
<Parameter:SHARED_POOL_SIZE> controls the size of the shared pool itself. The size
of the shared pool can
impact performance. If it is too small then it is likely that sharable information
will be flushed from the pool
and then later need to be reloaded (rebuilt). If there is heavy use of literal SQL
and the shared pool is too large then
over time a lot of small chunks of memory can build up on the internal memory
freelists causing the shared pool latch
to be held for longer which in-turn can impact performance. In this situation a
smaller shared pool may perform better
than a larger one. This problem is greatly reduced in 8.0.6 and in 8.1.6 onwards
due to the enhancement in <bug:986149> .
NB: The shared pool itself should never be made so large that paging or swapping
occur as performance
can then decrease by many orders of magnitude.

-- _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)
This is a hidden parameter which was introduced in Oracle 8.1.5. The parameter is
included here as
the default setting has caused some problems with SQL sharability. Setting this
parameter to 0 can avoid these
issues, which result in multiple versions of statements in the shared pool.
Eg: Add the following to the init.ora file
# _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues

# See Note:62143.1 for details

_sqlexec_progression_cost=0
Note that a side effect of setting this to '0' is that the V$SESSION_LONGOPS view
is not populated by long running queries.

-- MTS, Shared Server and XA
The multi-threaded server (MTS) adds to the load on the shared pool and can
contribute to any problems as the User Global Area (UGA)
resides in the shared pool. This is also true of XA sessions in Oracle7 as their
UGA is located in the shared pool. (In Oracle8/8i XA sessions
do NOT put their UGA in the shared pool). In Oracle8 the Large Pool can be used
for MTS reducing its impact on shared pool activity
- However memory allocations in the Large Pool still make use of the "shared pool
latch".
See <Note:62140.1> for a description of the Large Pool.
Using dedicated connections rather than MTS causes the UGA to be allocated out of
process private memory rather
than the shared pool. Private memory allocations do not use the "shared pool
latch" and so a switch from MTS to
dedicated connections can help reduce contention in some cases.

In Oracle9i, MTS was renamed to "Shared Server". For the purposes of the shared
pool, the behaviour is essentially the same.

Useful SQL for looking at memory and Shared Pool problems
---------------------------------------------------------

SGA layout:
-----------

SELECT * FROM V$SGA;

free memory shared pool:
------------------------

SELECT * FROM v$sgastat
WHERE name = 'free memory';

hit ratio shared pool:
----------------------

SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS",
       SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

statistics:
-----------

SELECT class, value, name
FROM v$sysstat;

Executions:
-----------

SELECT substr(sql_text,1,90) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions > 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 10
ORDER BY 2;

The values 90, 5 and 10 are example values, so this query is looking for
different statements whose first 90 characters are the same, which have each been
executed more than a few times, and of which there are at least 10 different
occurrences in the shared pool. This query uses the idea that it is common for
literal statements to begin "SELECT col1,col2,col3 FROM table WHERE ..." with
the leading portion of each statement being the same.

V$SQLAREA:

SQL_TEXT
VARCHAR2(1000)
First thousand characters of the SQL text for the current cursor

SHARABLE_MEM
NUMBER
Amount of shared memory used by a cursor. If multiple child cursors exist, then
the sum of all
shared memory used by all child cursors.

PERSISTENT_MEM
NUMBER
Fixed amount of memory used for the lifetime of an open cursor. If multiple child
cursors exist,
the fixed sum of memory used for the lifetime of all the child cursors.

RUNTIME_MEM
NUMBER
Fixed amount of memory required during execution of a cursor. If multiple child
cursors exist,
the fixed sum of all memory required during execution of all the child cursors.

SORTS
NUMBER
Sum of the number of sorts that were done for all the child cursors

VERSION_COUNT
NUMBER
Number of child cursors that are present in the cache under this parent

LOADED_VERSIONS
NUMBER
Number of child cursors that are present in the cache and have their context heap
(KGL heap 6) loaded

OPEN_VERSIONS
NUMBER
The number of child cursors that are currently open under this current parent

USERS_OPENING
NUMBER
The number of users that have any of the child cursors open

FETCHES
NUMBER
Number of fetches associated with the SQL statement

EXECUTIONS
NUMBER
Total number of executions, totalled over all the child cursors

USERS_EXECUTING
NUMBER
Total number of users executing the statement over all child cursors

LOADS
NUMBER
The number of times the object was loaded or reloaded

FIRST_LOAD_TIME
VARCHAR2(19)
Timestamp of the parent creation time

INVALIDATIONS
NUMBER
Total number of invalidations over all the child cursors

PARSE_CALLS
NUMBER
The sum of all parse calls to all the child cursors under this parent

DISK_READS
NUMBER
The sum of the number of disk reads over all child cursors

BUFFER_GETS
NUMBER
The sum of buffer gets over all child cursors

ROWS_PROCESSED
NUMBER
The total number of rows processed on behalf of this SQL statement

COMMAND_TYPE
NUMBER
The Oracle command type definition

OPTIMIZER_MODE
VARCHAR2(10)
Mode under which the SQL statement is executed

PARSING_USER_ID
NUMBER
The user ID of the user that has parsed the very first cursor under this parent

PARSING_SCHEMA_ID
NUMBER
The schema ID that was used to parse this child cursor

KEPT_VERSIONS
NUMBER
The number of child cursors that have been marked to be kept using the
DBMS_SHARED_POOL package

ADDRESS
RAW(4)
The address of the handle to the parent for this cursor

HASH_VALUE
NUMBER
The hash value of the parent statement in the library cache

MODULE
VARCHAR2(64)
Contains the name of the module that was executing at the time that the SQL
statement was first parsed as set
by calling DBMS_APPLICATION_INFO.SET_MODULE

MODULE_HASH
NUMBER
The hash value of the module that is named in the MODULE column

ACTION
VARCHAR2(64)
Contains the name of the action that was executing at the time that the SQL
statement was first parsed
as set by calling DBMS_APPLICATION_INFO.SET_ACTION

ACTION_HASH
NUMBER
The hash value of the action that is named in the ACTION column

SERIALIZABLE_ABORTS
NUMBER
Number of times the transaction fails to serialize, producing ORA-08177 errors,
totalled over all the child cursors

IS_OBSOLETE
VARCHAR2(1)
Indicates whether the cursor has become obsolete (Y) or not (N). This can happen
if the number of child cursors
is too large.

CHILD_LATCH
NUMBER
Child latch number that is protecting the cursor

V$SQL:
------

V$SQL lists statistics on shared SQL area without the GROUP BY clause and contains
one row for each child
of the original SQL text entered.

Column Datatype Description
SQL_TEXT
VARCHAR2(1000)
First thousand characters of the SQL text for the current cursor

SHARABLE_MEM
NUMBER
Amount of shared memory used by this child cursor (in bytes)

PERSISTENT_MEM
NUMBER
Fixed amount of memory used for the lifetime of this child cursor (in bytes)

RUNTIME_MEM
NUMBER
Fixed amount of memory required during the execution of this child cursor

SORTS
NUMBER
Number of sorts that were done for this child cursor

LOADED_VERSIONS
NUMBER
Indicates whether the context heap is loaded (1) or not (0)

OPEN_VERSIONS
NUMBER
Indicates whether the child cursor is locked (1) or not (0)

USERS_OPENING
NUMBER
Number of users executing the statement

FETCHES
NUMBER
Number of fetches associated with the SQL statement

EXECUTIONS
NUMBER
Number of executions that took place on this object since it was brought into the
library cache

USERS_EXECUTING
NUMBER
Number of users executing the statement

LOADS
NUMBER
Number of times the object was either loaded or reloaded

FIRST_LOAD_TIME
VARCHAR2(19)
Timestamp of the parent creation time

INVALIDATIONS
NUMBER
Number of times this child cursor has been invalidated

PARSE_CALLS
NUMBER
Number of parse calls for this child cursor

DISK_READS
NUMBER
Number of disk reads for this child cursor

BUFFER_GETS
NUMBER
Number of buffer gets for this child cursor

ROWS_PROCESSED
NUMBER
Total number of rows the parsed SQL statement returns

COMMAND_TYPE
NUMBER
Oracle command type definition

OPTIMIZER_MODE
VARCHAR2(10)
Mode under which the SQL statement is executed

OPTIMIZER_COST
NUMBER
Cost of this query given by the optimizer

PARSING_USER_ID
NUMBER
User ID of the user who originally built this child cursor

PARSING_SCHEMA_ID
NUMBER
Schema ID that was used to originally build this child cursor

KEPT_VERSIONS
NUMBER
Indicates whether this child cursor has been marked to be kept pinned in the
cache using the DBMS_SHARED_POOL package

ADDRESS
RAW(4)
Address of the handle to the parent for this cursor

TYPE_CHK_HEAP
RAW(4)
Descriptor of the type check heap for this child cursor

HASH_VALUE
NUMBER
Hash value of the parent statement in the library cache

PLAN_HASH_VALUE
NUMBER
Numerical representation of the SQL plan for this cursor. Comparing one
PLAN_HASH_VALUE to another easily
identifies whether or not two plans are the same (rather than comparing the two
plans line by line).

CHILD_NUMBER
NUMBER
Number of this child cursor

MODULE
VARCHAR2(64)
Contains the name of the module that was executing at the time that the SQL
statement was first parsed,
which is set by calling DBMS_APPLICATION_INFO.SET_MODULE

MODULE_HASH
NUMBER
Hash value of the module listed in the MODULE column

ACTION
VARCHAR2(64)
Contains the name of the action that was executing at the time that the SQL
statement was first parsed,
which is set by calling DBMS_APPLICATION_INFO.SET_ACTION

ACTION_HASH
NUMBER
Hash value of the action listed in the ACTION column

SERIALIZABLE_ABORTS
NUMBER
Number of times the transaction fails to serialize, producing ORA-08177 errors,
per cursor

OUTLINE_CATEGORY
VARCHAR2(64)
If an outline was applied during construction of the cursor, then this column
displays the category
of that outline. Otherwise the column is left blank.

CPU_TIME
NUMBER
CPU time (in microseconds) used by this cursor for parsing/executing/fetching
ELAPSED_TIME
NUMBER
Elapsed time (in microseconds) used by this cursor for parsing/executing/fetching

OUTLINE_SID
NUMBER
Outline session identifier

CHILD_ADDRESS
RAW(4)
Address of the child cursor

SQLTYPE
NUMBER
Denotes the version of the SQL language used for this statement

REMOTE
VARCHAR2(1)
(Y/N) Identifies whether the cursor is remote mapped or not

OBJECT_STATUS
VARCHAR2(19)
Status of the cursor (VALID/INVALID)

LITERAL_HASH_VALUE
NUMBER
Hash value of the literals which are replaced with system-generated bind
variables and are to be matched,
when CURSOR_SHARING is used. This is not the hash value for the SQL statement. If
CURSOR_SHARING is not used,
then the value is 0.

LAST_LOAD_TIME
VARCHAR2(19)

IS_OBSOLETE
VARCHAR2(1)
Indicates whether the cursor has become obsolete (Y) or not (N). This can happen
if the number of child cursors
is too large.

CHILD_LATCH
NUMBER
Child latch number that is protecting the cursor

Checking for high version counts:
---------------------------------

SELECT address, hash_value,
       version_count,
       users_opening,
       users_executing,
       substr(sql_text,1,40) "SQL"
FROM v$sqlarea
WHERE version_count > 10;

"Versions" of a statement occur where the SQL is character for character identical
but the underlying objects or binds
etc.. are different.

Finding statement/s which use lots of shared pool memory:
---------------------------------------------------------

SELECT substr(sql_text,1,60) "Stmt", count(*),
       sum(sharable_mem) "Mem",
       sum(users_opening) "Open",
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,60)
HAVING sum(sharable_mem) > 20000;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(sharable_mem) "Mem",
       sum(users_opening) "Open",
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

In the first query, the HAVING threshold (20000 here) plays the role of MEMSIZE,
which should be about 10% of the shared pool size in bytes. This should show if
there are similar literal statements, or multiple versions of statements, which
account for a large portion of the memory in the shared pool.

1.2 statistics:
---------------

- Rule based / Cost based
- apply EXPLAIN PLAN in query

- ANALYZE COMMAND:

ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS FOR ALL INDEXES;
ANALYZE INDEX scott.indx1 COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE TABLE EMPLOYEE DELETE STATISTICS;

- DBMS_UTILITY.ANALYZE_SCHEMA() procedure:

DBMS_UTILITY.ANALYZE_SCHEMA (
schema VARCHAR2,
method VARCHAR2,
estimate_rows NUMBER DEFAULT NULL,
estimate_percent NUMBER DEFAULT NULL,
method_opt VARCHAR2 DEFAULT NULL);

DBMS_UTILITY.ANALYZE_DATABASE (
method VARCHAR2,
estimate_rows NUMBER DEFAULT NULL,
estimate_percent NUMBER DEFAULT NULL,
method_opt VARCHAR2 DEFAULT NULL);

method = compute, estimate, delete

To execute:

exec DBMS_UTILITY.ANALYZE_SCHEMA('CISADM','COMPUTE');
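
From 8i onwards, DBMS_STATS is the recommended alternative to ANALYZE and
DBMS_UTILITY.ANALYZE_SCHEMA (a sketch; schema name as in the example above):

exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'CISADM', estimate_percent => 10, cascade => TRUE);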

1.3 Storage parameters:
-----------------------

segment: pctfree, pctused, number and size of extents in the STORAGE clause

- very low updates : pctfree low
- if updates, oltp : pctfree 10, pctused 40
- if only inserts  : pctfree low

1.4 rebuild indexes on a regular basis:
---------------------------------------

alter index SCOTT.EMPNO_INDEX rebuild
tablespace INDEX
storage (initial 5M next 5M pctincrease 0);

You should next use the ANALYZE TABLE COMPUTE STATISTICS command

1.5 Is an index used in a query?
--------------------------------

The WHERE clause of a query must use the 'leading column' of (one of the)
index(es). Suppose an index 'indx1' exists on EMPLOYEE(city, state, zip),
and suppose a user issues the query:

SELECT .. FROM EMPLOYEE WHERE state='NY'

Then this query will not use that index!
Therefore you must pay attention to the leading column of any index,
as the sketch below shows.
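
A sketch of the point above (index and table names from the example):

CREATE INDEX indx1 ON employee (city, state, zip);

-- Can use indx1: the leading column (city) appears in the predicate.
SELECT * FROM employee WHERE city = 'ALBANY' AND state = 'NY';

-- Will not use indx1 as a normal index scan: no predicate on city.
-- (Oracle9i can sometimes fall back to an INDEX SKIP SCAN here.)
SELECT * FROM employee WHERE state = 'NY';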

1.6 set transaction parameters:
-------------------------------

ONLY ORACLE 7,8,8i:

Suppose you must perform an action which will generate a lot
of redo and rollback.
If you want to influence which rollback segment will be used
in your transactions, you can use the statement

set transaction use rollback segment SEGMENT_NAME
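
A sketch of its use (rbs_large and the DELETE are hypothetical; the SET
TRANSACTION must be the first statement of the transaction):

COMMIT;                                          -- end any open transaction first
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;  -- hypothetical big rollback segment
DELETE FROM big_table WHERE created < SYSDATE - 365;
COMMIT;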

1.7 Reduce fragmentation of a dictionary managed tablespace:
------------------------------------------------------------

alter tablespace DATA coalesce;

1.8 normalisation of tables:
----------------------------

The more tables are 'normalized', the higher the performance costs for
queries joining tables

1.9 commit after every N rows:
------------------------------

declare
  i number := 0;
  -- note: no FOR UPDATE here; a COMMIT inside the loop would invalidate
  -- a FOR UPDATE cursor (ORA-01002), so the rows are updated by rowid
  cursor s1 is SELECT rowid rid FROM tab1 WHERE col1 = 'value1';
begin
  for c1 in s1 loop
    update tab1 set col1 = 'value2'
    WHERE rowid = c1.rid;

    i := i + 1; -- Commit after every X records
    if i > 1000 then
      commit;
      i := 0;
    end if;
  end loop;
  commit;
end;
/

-- ------------------------------

CREATE TABLE TEST


(
ID NUMBER(10) NULL,
DATUM DATE NULL,
NAME VARCHAR2(10) NULL
);

declare
i number := 1000;
begin
while i>1 loop
insert into TEST
values (1, sysdate+i,'joop');

i := i - 1;
commit;

end loop;
commit;
end;
/

-- ------------------------------

CREATE TABLE TEST2


(
i number NULL,
ID NUMBER(10) NULL,
DATUM DATE NULL,
DAG VARCHAR2(10) NULL,
NAME VARCHAR2(10) NULL
);

declare
i number := 1;
j date;
k varchar2(10);
begin
while i<1000000 loop
j:=sysdate+i;
k:=TO_CHAR(SYSDATE+i,'DAY');
insert into TEST2
values (i,1, j, k,'joop');

i := i + 1;
commit;

end loop;
commit;
end;
/

-- ------------------------------

CREATE TABLE TEST3


(
ID NUMBER(10) NULL,
DATUM DATE NULL,
DAG VARCHAR2(10) NULL,
VORIG VARCHAR2(10) NULL,
NAME VARCHAR2(10) NULL
);

declare
i number := 1;
j date;
k varchar2(10);
l varchar2(10);
begin
while i<1000 loop
j:=sysdate+i;
k:=TO_CHAR(SYSDATE+i,'DAY');
l:=TO_CHAR(SYSDATE+i-1,'DAY');
insert into TEST3
(ID,DATUM,DAG,VORIG,NAME)
values (i, j, k, l,'joop');

i := i + 1;
commit;

end loop;
commit;
end;
/

1.10 explain plan command, autotrace:


-------------------------------------

1 explain plan command:


-----------------------

First execute the utlxplan.sql script.


This script will create the PLAN_TABLE table, needed for storage of performance
data.
Now it's possible to do the following:

-- optionally, delete the former performance data


DELETE FROM plan_table WHERE statement_id = 'XXX'; COMMIT;

-- now you can run the query that is to be analyzed


EXPLAIN PLAN SET STATEMENT_ID = 'XXX'
FOR
SELECT * FROM EMPLOYEE WHERE city > 'Y%';

To view results, you can use the utlxpls.sql script.

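On 9iR2 and later, the plan can also be shown with the DBMS_XPLAN package instead of
the utlxpls.sql script:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','XXX'));
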
2. set autotrace on / off


-------------------------

This also uses the PLAN_TABLE, and the PLUSTRACE role must exist.
If desired, the plustrce.sql script can be run (as SYS).

Note: execution plan / access path with a join query:

- nested loop: one table is the driving table, with a full table scan or use of
  an index, and the second table is accessed by means of an index of the second
  table, based on the WHERE clause.

- merge join: if there is no usable index, all rows are fetched,
  sorted, and joined into a result set.
- hash join: certain init.ora parameters must be present
  (HASH_JOIN_ENABLED=TRUE, HASH_AREA_SIZE= , or via
  ALTER SESSION SET HASH_JOIN_ENABLED=TRUE).
  Usually very effective for joins of a small table with a large table.
  The small table is the driving table in memory, and what follows is an
  algorithm resembling the nested loop.

It can also be forced with a hint:

SELECT /*+ USE_HASH(COMPANY) */ COMPANY.Name,


SUM(Dollar_Amount) FROM COMPANY, SALES
WHERE COMPANY.Company_ID = SALES.Company_ID GROUP BY COMPANY.Name;

3 SQL trace and TKPROF


----------------------

SQL trace can be activated via init.ora or via

ALTER SESSION SET SQL_TRACE=TRUE


DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, FALSE);
DBMS_SUPPORT.START_TRACE_IN_SESSION(12,398);

Turn SQL tracing on in session 448. The trace information will get written to
user_dump_dest.

SQL> exec dbms_system.set_sql_trace_in_session(448,2288,TRUE);

Turn SQL tracing off in session 448

SQL> exec dbms_system.set_sql_trace_in_session(448,2288,FALSE);

Init.ora:

Max_dump_file_size in OS blocks
SQL_TRACE=TRUE (can produce very large trace files; applies to all sessions)
USER_DUMP_DEST= location of the trace files

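The resulting trace file in USER_DUMP_DEST can then be formatted with tkprof;
for example (the file names are just placeholders):

tkprof ora_12345.trc report.txt sort=exeela sys=no explain=scott/tiger
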
1.12 If the CBO does not use the best access path: hints in the query:
-----------------------------------------------------------------------

Goal hints: ALL_ROWS, FIRST_ROWS, CHOOSE, RULE


Access methods hints: FULL, ROWID, CLUSTER, HASH, INDEX

SELECT /*+ INDEX(emp emp_pk) */ *
FROM emp WHERE empno=12345;

SELECT /*+ RULE */ ename, dname


FROM emp, dept WHERE emp.deptno=dept.deptno

==============================================
3. Data dictionary queries regarding performance:
==============================================

3.1 Reads AND writes in files:


------------------------------

V$FILESTAT, V$DATAFILE

- Relative File I/O (1)

SELECT fs.file#, df.file#, substr(df.name, 1, 50),


fs.phyrds, fs.phywrts, df.status
FROM v$filestat fs, v$datafile df
WHERE fs.file#=df.file#

- Relative File I/O (2)

set pagesize 60 linesize 80 newpage 0 feedback off

ttitle skip center 'Datafile IO Weights' skip center
column Total_IO format 999999999
column Weight format 999.99
column file_name format A40
break on drive skip 2
compute sum of Weight on Drive

SELECT
substr(DF.Name, 1, 6) Drive,
DF.Name File_Name,
FS.Phyblkrd+FS.Phyblkwrt Total_IO,
100*(FS.Phyblkrd+FS.Phyblkwrt) / MaxIO Weight
FROM V$FILESTAT FS, V$DATAFILE DF,
(SELECT MAX(Phyblkrd+Phyblkwrt) MaxIO FROM V$FILESTAT)
WHERE
DF.File#=FS.File#
ORDER BY Weight desc
/

3.2 undocumented init parameters:


---------------------------------

SELECT *
FROM SYS.X$KSPPI
WHERE SUBSTR(KSPPINM,1,1) = '_';

3.3 Chance that an index is used or not?:


-----------------------------------

Look at

DBA_TAB_COLUMNS.NUM_DISTINCT
DBA_TABLES.NUM_ROWS

if num_distinct comes close to num_rows:
an index is favored instead of a full table scan.

Look at
DBA_INDEXES, USER_INDEXES.CLUSTERING_FACTOR
if clustering_factor = number of blocks: the table is well ordered with respect to the index

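A sketch of a query that puts these figures side by side for the current schema:

SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
FROM user_indexes i, user_tables t
WHERE i.table_name = t.table_name;
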
3.4 quick overview of the buffer cache hit ratio:


------------------------------------------

Hit ratio= (LR - PR) / LR

Suppose there are hardly any Physical Reads PR, i.e. PR=0; then the
Hit Ratio = LR/LR = 1. In that case no blocks are read from disk.

In practice, the hit ratio should on average be > 0.8 - 0.9.

V$sess_io, v$sysstat and v$session can be consulted to determine the hit ratio.

V$sess_io: sid, consistent_gets, physical_reads


V$session: sid, username

SELECT name, value


FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets','physical reads');

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

-- more extensive query regarding the hit ratio

CLEAR
SET HEAD ON
SET VERIFY OFF

col HitRatio format 999.99 heading 'Hit Ratio'


col CGets format 9999999999999 heading 'Consistent Gets'
col DBGets format 9999999999999 heading 'DB Block Gets'
col PhyGets format 9999999999999 heading 'Physical Reads'

SELECT substr(Username, 1, 10), v$sess_io.sid, consistent_gets, block_gets,


physical_reads,
100*(consistent_gets+block_gets-physical_reads)/
(consistent_gets+block_gets) HitRatio
FROM v$session, v$sess_io
WHERE v$session.sid = v$sess_io.sid
AND (consistent_gets+block_gets) > 0
AND Username is NOT NULL
/

SELECT 'Hit Ratio' Database,


cg.value CGets,
db.value DBGets,
pr.value PhyGets,
100*(cg.value+db.value-pr.value)/(cg.value+db.value) HitRatio
FROM v$sysstat db, v$sysstat cg, v$sysstat pr
WHERE db.name = 'db block gets'
AND cg.name = 'consistent gets'
AND pr.name = 'physical reads'
/

3.6 What are the active transactions?:


-------------------------------------

SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),


t.start_time, r.name, t.used_ublk "ROLLB BLKS",
decode(t.space, 'YES', 'SPACE TX',
decode(t.recursive, 'YES', 'RECURSIVE TX',
decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
)) status
FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn
AND t.ses_addr = s.saddr

3.7 sids, resource load and locks:


---------------------------------------

SELECT sid, lmode, ctime, block


FROM v$lock

SELECT s.sid, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),


substr(s.osuser, 1, 10),
substr(s.program, 1, 10), s.command,
l.lmode, l.block
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT l.addr, s.saddr, l.sid, s.sid, l.type, l.lmode,


s.status, substr(s.schemaname, 1, 10),
s.lockwait, s.row_wait_obj#
FROM v$lock l, v$session s
WHERE l.addr=s.saddr

SELECT sid, substr(owner, 1, 10), substr(object, 1, 10)


FROM v$access

SID Session number that is accessing an object


OWNER Owner of the object
OBJECT Name of the object
TYPE Type identifier for the object

SELECT substr(s.username, 1, 10), s.sid,


t.log_io, t.phy_io
FROM v$session s, v$transaction t
WHERE t.ses_addr=s.saddr

3.8 latch use in the SGA (locks on a process):


----------------------------------------
SELECT c.name,a.gets,a.misses,a.sleeps, a.immediate_gets,a.immediate_misses,b.pid
FROM v$latch a, v$latchholder b, v$latchname c
WHERE a.addr = b.laddr(+)
AND a.latch# = c.latch#
AND (c.name like 'redo%' or c.name like 'row%')
ORDER BY a.latch#;

column latch_name format a40


SELECT name latch_name, gets, misses,
round(decode(gets-misses,0,1,gets-misses)/
decode(gets,0,1,gets),3) hit_ratio
FROM v$latch WHERE name = 'redo allocation';

column latch_name format a40


SELECT name latch_name, immediate_gets, immediate_misses,
round(decode(immediate_gets-immediate_misses,0,1,
immediate_gets-immediate_misses)/
decode(immediate_gets,0,1,immediate_gets),3) hit_ratio
FROM v$latch WHERE name = 'redo copy';

column name format a40


column value format a10
SELECT name,value FROM v$parameter WHERE name in
('log_small_entry_max_size','log_simultaneous_copies',
'cpu_count');

-- latches and locks overview

set pagesize 23
set pause on
set pause 'Hit any key...'

col sid format 999999


col serial# format 999999
col username format a12 trunc
col process format a8 trunc
col terminal format a12 trunc
col type format a12 trunc
col lmode format a4 trunc
col lrequest format a4 trunc
col object format a73 trunc

SELECT s.sid, s.serial#,


decode(s.process, null,
decode(substr(p.username,1,1), '?', upper(s.osuser), p.username),
decode( p.username, 'ORACUSR ', upper(s.osuser), s.process)
) process,
nvl(s.username, 'SYS ('||substr(p.username,1,4)||')') username,
decode(s.terminal, null, rtrim(p.terminal, chr(0)),
upper(s.terminal)) terminal,
decode(l.type,
-- Long locks
'TM', 'DML/DATA ENQ', 'TX', 'TRANSAC ENQ',
'UL', 'PLS USR LOCK',
-- Short locks
'BL', 'BUF HASH TBL', 'CF', 'CONTROL FILE',
'CI', 'CROSS INST F', 'DF', 'DATA FILE ',
'CU', 'CURSOR BIND ',
'DL', 'DIRECT LOAD ', 'DM', 'MOUNT/STRTUP',
'DR', 'RECO LOCK ', 'DX', 'DISTRIB TRAN',
'FS', 'FILE SET ', 'IN', 'INSTANCE NUM',
'FI', 'SGA OPN FILE',
'IR', 'INSTCE RECVR', 'IS', 'GET STATE ',
'IV', 'LIBCACHE INV', 'KK', 'LOG SW KICK ',
'LS', 'LOG SWITCH ',
'MM', 'MOUNT DEF ', 'MR', 'MEDIA RECVRY',
'PF', 'PWFILE ENQ ', 'PR', 'PROCESS STRT',
'RT', 'REDO THREAD ', 'SC', 'SCN ENQ ',
'RW', 'ROW WAIT ',
'SM', 'SMON LOCK ', 'SN', 'SEQNO INSTCE',
'SQ', 'SEQNO ENQ ', 'ST', 'SPACE TRANSC',
'SV', 'SEQNO VALUE ', 'TA', 'GENERIC ENQ ',
'TD', 'DLL ENQ ', 'TE', 'EXTEND SEG ',
'TS', 'TEMP SEGMENT', 'TT', 'TEMP TABLE ',
'UN', 'USER NAME ', 'WL', 'WRITE REDO ',
'TYPE='||l.type) type,
decode(l.lmode, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
4, 'S', 5, 'RSX', 6, 'X',
to_char(l.lmode) ) lmode,
decode(l.request, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX',
4, 'S', 5, 'RSX', 6, 'X',
to_char(l.request) ) lrequest,
decode(l.type, 'MR', decode(u.name, null,
'DICTIONARY OBJECT', u.name||'.'||o.name),
'TD', u.name||'.'||o.name,
'TM', u.name||'.'||o.name,
'RW', 'FILE#='||substr(l.id1,1,3)||
' BLOCK#='||substr(l.id1,4,5)||' ROW='||l.id2,
'TX', 'RS+SLOT#'||l.id1||' WRP#'||l.id2,
'WL', 'REDO LOG FILE#='||l.id1,
'RT', 'THREAD='||l.id1,
'TS', decode(l.id2, 0, 'ENQUEUE',
'NEW BLOCK ALLOCATION'),
'ID1='||l.id1||' ID2='||l.id2) object
FROM sys.v_$lock l, sys.v_$session s, sys.obj$ o, sys.user$ u,
sys.v_$process p
WHERE s.paddr = p.addr(+)
AND l.sid = s.sid
AND l.id1 = o.obj#(+)
AND o.owner# = u.user#(+)
AND l.type <> 'MR'
UNION ALL /*** LATCH HOLDERS ***/
SELECT s.sid, s.serial#, s.process, s.username, s.terminal,
'LATCH', 'X', 'NONE', h.name||' ADDR='||rawtohex(laddr)
FROM sys.v_$process p, sys.v_$session s, sys.v_$latchholder h
WHERE h.pid = p.pid
AND p.addr = s.paddr
UNION ALL /*** LATCH WAITERS ***/
SELECT s.sid, s.serial#, s.process, s.username, s.terminal,
'LATCH', 'NONE', 'X', name||' LATCH='||p.latchwait
FROM sys.v_$session s, sys.v_$process p, sys.v_$latch l
WHERE latchwait is not null
AND p.addr = s.paddr
AND p.latchwait = l.addr
/

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL


FROM v$sess_io v, V$session w
WHERE v.SID=w.SID ORDER BY v.SID;

SQL> desc v$sess_io


Name Null? Type
----------------------------- -------- --------------------
SID NUMBER
BLOCK_GETS NUMBER
CONSISTENT_GETS NUMBER
PHYSICAL_READS NUMBER
BLOCK_CHANGES NUMBER
CONSISTENT_CHANGES NUMBER

SQL> desc v$session;


Name Null? Type
----------------------------- -------- --------------------
SADDR RAW(8)
SID NUMBER
SERIAL# NUMBER
AUDSID NUMBER
PADDR RAW(8)
USER# NUMBER
USERNAME VARCHAR2(30)
COMMAND NUMBER
OWNERID NUMBER
TADDR VARCHAR2(16)
LOCKWAIT VARCHAR2(16)
STATUS VARCHAR2(8)
SERVER VARCHAR2(9)
SCHEMA# NUMBER
SCHEMANAME VARCHAR2(30)
OSUSER VARCHAR2(30)
PROCESS VARCHAR2(12)
MACHINE VARCHAR2(64)
TERMINAL VARCHAR2(30)
PROGRAM VARCHAR2(48)
TYPE VARCHAR2(10)
SQL_ADDRESS RAW(8)
SQL_HASH_VALUE NUMBER
SQL_ID VARCHAR2(13)
SQL_CHILD_NUMBER NUMBER
PREV_SQL_ADDR RAW(8)
PREV_HASH_VALUE NUMBER
PREV_SQL_ID VARCHAR2(13)
PREV_CHILD_NUMBER NUMBER
PLSQL_ENTRY_OBJECT_ID NUMBER
PLSQL_ENTRY_SUBPROGRAM_ID NUMBER
PLSQL_OBJECT_ID NUMBER
PLSQL_SUBPROGRAM_ID NUMBER
MODULE VARCHAR2(48)
MODULE_HASH NUMBER
ACTION VARCHAR2(32)
ACTION_HASH NUMBER
CLIENT_INFO VARCHAR2(64)
FIXED_TABLE_SEQUENCE NUMBER
ROW_WAIT_OBJ# NUMBER
ROW_WAIT_FILE# NUMBER
ROW_WAIT_BLOCK# NUMBER
ROW_WAIT_ROW# NUMBER
LOGON_TIME DATE
LAST_CALL_ET NUMBER
PDML_ENABLED VARCHAR2(3)
FAILOVER_TYPE VARCHAR2(13)
FAILOVER_METHOD VARCHAR2(10)
FAILED_OVER VARCHAR2(3)
RESOURCE_CONSUMER_GROUP VARCHAR2(32)
PDML_STATUS VARCHAR2(8)
PDDL_STATUS VARCHAR2(8)
PQ_STATUS VARCHAR2(8)
CURRENT_QUEUE_DURATION NUMBER
CLIENT_IDENTIFIER VARCHAR2(64)
BLOCKING_SESSION_STATUS VARCHAR2(11)
BLOCKING_INSTANCE NUMBER
BLOCKING_SESSION NUMBER
SEQ# NUMBER
EVENT# NUMBER
EVENT VARCHAR2(64)
P1TEXT VARCHAR2(64)
P1 NUMBER
P1RAW RAW(8)
P2TEXT VARCHAR2(64)
P2 NUMBER
P2RAW RAW(8)
P3TEXT VARCHAR2(64)
P3 NUMBER
P3RAW RAW(8)
WAIT_CLASS_ID NUMBER
WAIT_CLASS# NUMBER
WAIT_CLASS VARCHAR2(64)
WAIT_TIME NUMBER
SECONDS_IN_WAIT NUMBER
STATE VARCHAR2(19)
SERVICE_NAME VARCHAR2(64)
SQL_TRACE VARCHAR2(8)
SQL_TRACE_WAITS VARCHAR2(5)
SQL_TRACE_BINDS VARCHAR2(5)

SQL>

========================================================
4. IMP and EXP, IMPDP and EXPDP, and SQL*Loader Examples
========================================================

4.1 EXPDP and IMPDP examples:


=============================
New for Oracle 10g are the impdp and expdp utilities.

EXPDP practice/practice PARFILE=par1.par


EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir:hr_schema.explog
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS


that allow the DBA to monitor the progress
of all DataPump operations.

SELECT
owner_name
,job_name
,operation
,job_mode
,state
,degree
,attached_sessions
FROM dba_datapump_jobs
;

SELECT
DPS.owner_name
,DPS.job_name
,S.osuser
FROM
dba_datapump_sessions DPS
,v$session S
WHERE S.saddr = DPS.saddr
;

Example 1. EXPDP parfile


------------------------

JOB_NAME=NightlyDRExport
DIRECTORY=export_dir
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:NightlyDRExport.explog
FULL=Y
PARALLEL=2
FILESIZE=650M
CONTENT=ALL
STATUS=30

Example 2. EXPDP parfile, only for getting an estimate of export size


---------------------------------------------------------------

JOB_NAME=EstimateOnly
DIRECTORY=export_dir
LOGFILE=export_dir:EstimateOnly.explog
FULL=Y
CONTENT=DATA_ONLY
ESTIMATE=STATISTICS
ESTIMATE_ONLY=Y
STATUS=60
Example 3. EXPDP parfile, only 1 schema, writing to multiple files with %U variable, limited to 650M
----------------------------------------------------------------------------------------------------

JOB_NAME=SH_TABLESONLY
DIRECTORY=export_dir
DUMPFILE=export_dir:SHONLY_%U.dmp
LOGFILE=export_dir:SH_TablesOnly.explog
SCHEMAS=SH
PARALLEL=2
FILESIZE=650M
STATUS=60

Example 4. EXPDP parfile, multiple tables, writing to multiple files with %U variable, limited to 132K
------------------------------------------------------------------------------------------------------

JOB_NAME=HR_PAYROLL_REFRESH
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_REFRESH.explog
STATUS=20
FILESIZE=132K
CONTENT=ALL
TABLES=HR.EMPLOYEES,HR.DEPARTMENTS,HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS

Example 5. EXPDP parfile, exports all objects in the HR schema, including metadata, as of just before midnight on April 10, 2005
--------------------------------------------------------------------------------------------------------------------------------

JOB_NAME=HREXPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HREXPORT_%U.dmp
LOGFILE=export_dir:2005-04-10_HRExport.explog
SCHEMAS=HR
CONTENT=ALL
FLASHBACK_TIME="TO_TIMESTAMP('04-10-2005 23:59', 'MM-DD-YYYY HH24:MI')"

Example 6. IMPDP parfile, imports data +only+ into selected tables in the HR schema; multiple dump files will be used
----------------------------------------------------------------------------------------------------------------------

JOB_NAME=HR_PAYROLL_IMPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_IMPORT.implog
STATUS=20
TABLES=HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE
Example 7. IMPDP parfile, selected tables in the SH schema are the only tables to be refreshed; these tables will be truncated before loading
---------------------------------------------------------------------------------------------------------------------------------------------

DIRECTORY=export_dir
JOB_NAME=RefreshSHTables
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:RefreshSHTables.implog
STATUS=30
CONTENT=DATA_ONLY
SCHEMAS=SH
INCLUDE=TABLE:"IN('COUNTRIES','CUSTOMERS','PRODUCTS','SALES')"
TABLE_EXISTS_ACTION=TRUNCATE

Example 8. IMPDP parfile, generates SQLFILE output showing the DDL statements; note that this code is +not+ executed!
---------------------------------------------------------------------------------------------------------------------

DIRECTORY=export_dir
JOB_NAME=GenerateImportDDL
DUMPFILE=export_dir:hr_payroll_refresh_%U.dmp
LOGFILE=export_dir:GenerateImportDDL.implog
SQLFILE=export_dir:GenerateImportDDL.sql
INCLUDE=TABLE

Example: schedule a procedure which uses DBMS_DATAPUMP


------------------------------------------------------

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'HR_EXPORT'
,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN HR.SP_EXPORT;END;'
,start_date => '04/18/2005 23:00:00.000000'
,repeat_interval => 'FREQ=DAILY'
,enabled => TRUE
,comments => 'Performs HR Schema Export nightly at 11 PM'
);
END;
/

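The HR.SP_EXPORT procedure referenced above is not shown in this note. A minimal
sketch of what such a procedure could look like using the DBMS_DATAPUMP API (the
directory name EXPORT_DIR and the file names are assumptions):

CREATE OR REPLACE PROCEDURE sp_export AS
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- define a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- dump file and log file, both in directory EXPORT_DIR
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_export_%U.dmp',
    directory => 'EXPORT_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_export.explog',
    directory => 'EXPORT_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- export only the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
    value => 'IN (''HR'')');

  -- kick off the job and wait until it finishes
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/
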
======================================
How to use the NETWORK_LINK paramater:
======================================

Note 1:
=======

Lora, the DBA at Acme Bank, is at the center of attention in a high-profile


meeting of the bank's top management team.
The objective is to identify ways of enabling end users to slice and dice the data
in the company's main data warehouse.
At the meeting, one idea presented is to create several small data marts, each
based on a particular functional area, that can each be used by specialized teams.

To effectively implement the data mart approach, the data specialists must get
data into the data marts quickly and efficiently.
The challenge the team faces is figuring out how to quickly refresh the warehouse
data to the data marts, which run on
heterogeneous platforms. And that's why Lora is at the meeting. What options does
she propose for moving the data?

An experienced and knowledgeable DBA, Lora provides the meeting attendees with
three possibilities, as follows:

Using transportable tablespaces


Using Data Pump (Export and Import)
Pulling tablespaces

This article shows Lora's explanation of these options, including their


implementation details and their pros and cons.

Transportable Tablespaces:

Lora starts by describing the transportable tablespaces option. The quickest way
to transport an entire tablespace to
a target system is to simply transfer the tablespace's underlying files, using FTP
(file transfer protocol)
or rcp (remote copy).
However, just copying the Oracle data files is not sufficient; the target database
must recognize and import the files
and the corresponding tablespace before the tablespace data can become available
to end users.
Using transportable tablespaces
involves copying the tablespace files and making the data available in the target
database.

A few checks are necessary before this option can be considered. First, for a
tablespace TS1 to be transported to a
target system,
it must be self-contained. That is, all the indexes, partitions, and other
dependent segments of the tables in the tablespace
must be inside the tablespace. Lora explains that if a set of tablespaces contains
all the dependent segments,
the set is considered
to be self-contained. For instance, if tablespaces TS1 and TS2 are to be
transferred as a set and a table in TS1 has
an index in TS2, the tablespace set is self-contained. However, if another index
of a table in TS1 is in tablespace TS3,
the tablespace set (TS1, TS2) is not self-contained.

To transport the tablespaces, Lora proposes using the Data Pump Export utility in
Oracle Database 10g. Data Pump is Oracle's
next-generation data transfer tool, which replaces the earlier Oracle Export (EXP)
and Import (IMP) tools.
Unlike those older tools, which use regular SQL to extract and insert data, Data
Pump uses proprietary APIs that bypass
the SQL buffer, making the process extremely fast. In addition, Data Pump can
extract specific objects, such as a particular
stored procedure or a set of tables from a particular tablespace. Data Pump Export
and Import are controlled by jobs,
which the DBA can pause, restart, and stop at will.

Lora has run a test before the meeting to see if Data Pump can handle Acme's
requirements. Lora's test transports the
TS1 and TS2 tablespaces as follows:

1. Check that the set of TS1 and TS2 tablespaces is self-contained. Issue the
following command:

BEGIN
SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TS1,TS2', TRUE);
END;
/

2. Identify any nontransportable sets. If no rows are selected, the tablespaces


are self-contained:

SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

no rows selected

3. Ensure the tablespaces are read-only:

SELECT STATUS
FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME IN ('TS1','TS2');

STATUS
---------
READ ONLY
READ ONLY

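If they are not yet read-only, they can be put in that state first; this mirrors
step 7 below, which switches them back to read-write afterwards:

ALTER TABLESPACE TS1 READ ONLY;
ALTER TABLESPACE TS2 READ ONLY;
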
4. Transfer the data files of each tablespace to the remote system, into the
directory /u01/oradata,
using a transfer mechanism such as FTP or rcp.

5. In the target database, create a database link to the source database (named
srcdb in the line below).

CREATE DATABASE LINK srcdb


USING 'srcdb';

6. In the target database, import the tablespaces into the database, using Data
Pump Import.

impdp lora/lora123
TRANSPORT_DATAFILES="'/u01/oradata/ts1_1.dbf','/u01/oradata/ts2_1.dbf'"
NETWORK_LINK='srcdb'
TRANSPORT_TABLESPACES=\(TS1,TS2\)
NOLOGFILE=Y

This step makes the TS1 and TS2 tablespaces and their data available in the target
database.
Note that Lora doesn't export the metadata from the source database. She merely
specifies the value srcdb,
the database link to the source database, for the parameter NETWORK_LINK in the
impdp command above.
Data Pump Import fetches the necessary metadata from the source across the
database link and re-creates it in the target.

7. Finally, make the TS1 and TS2 tablespaces in the source database read-write.

ALTER TABLESPACE TS1 READ WRITE;


ALTER TABLESPACE TS2 READ WRITE;

Note 2:
=======

One of the most significant characteristics of an import operation is its mode,


because the mode largely determines
what is imported. The specified mode applies to the source of the operation,
either a dump file set or another database
if the NETWORK_LINK parameter is specified.

The NETWORK_LINK parameter initiates a network import. This means that the impdp
client initiates the import request,
typically to the local database. That server contacts the remote source database
referenced by the database link
in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to
the target database.
There are no dump files involved.

In the following example, the source_database_link would be replaced with the name
of a valid database link
that must already exist.

impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1


NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT

This example results in an import of the employees table (excluding constraints)


from the source database.
The log file is written to dpump_dir1, specified on the DIRECTORY parameter.

4.2 Export / Import examples:


=============================

In all Oracle versions 7,8,8i,9i,10g you can use the exp and imp utilities.

exp system/manager file=expdat.dmp compress=Y owner=(HARRY, PIET)


exp system/manager file=hr.dmp owner=HR indexes=Y
exp system/manager file=expdat.dmp TABLES=(john.SALES)

imp system/manager file=hr.dmp full=Y buffer=64000 commit=Y


imp system/manager file=expdat.dmp fromuser=ted touser=john indexes=N commit=Y buffer=64000
imp rm_live/rm file=dump.dmp tables=(employee)

imp system/manager file=expdat.dmp fromuser=ted touser=john buffer=4194304

c:\> cd [oracle_db_home]\bin
c:\> set nls_lang=american_america.WE8ISO8859P15

# export NLS_LANG=AMERICAN_AMERICA.UTF8
# export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

c:\> imp system/manager fromuser=mis_owner touser=mis_owner file=[yourexport.dmp]

From Oracle8i one can use the QUERY= export parameter to selectively unload a
subset of the data from a table.
Look at this example:
exp scott/tiger tables=emp query=\"WHERE deptno=10\"

-- Export metadata only:

The Export utility is used to export the metadata describing the objects contained
in the transported tablespace.
For our example scenario, the Export command could be:

EXP TRANSPORT_TABLESPACE=y TABLESPACES=ts_temp_sales FILE=jan_sales.dmp

This operation will generate an export file, jan_sales.dmp. The export file will
be small, because it contains
only metadata. In this case, the export file will contain information describing
the table temp_jan_sales,
such as the column names, column datatype, and all other information that the
target Oracle database will need
in order to access the objects in ts_temp_sales.

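On the target database, the corresponding classic import could look as follows
(a sketch; the datafile path is an assumption):

IMP TRANSPORT_TABLESPACE=y DATAFILES='/u01/oradata/ts_temp_sales.dbf' FILE=jan_sales.dmp
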
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$

Extended example:
-----------------

CASE 1:
=======

We create a user Albert on a 10g DB. This user will create a couple of tables
with referential constraints (PK-FK relations). Then we will export this user,
drop the user, and do an import. See what we have after the import.

-- User:
create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC
;

-- GRANTS:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
GRANT connect TO albert;
GRANT resource TO albert;

-- connect albert/albert

-- create tables

create table LOC -- table of locations


(
LOCID int,
CITY varchar2(16),
constraint pk_loc primary key (locid)
);

create table DEPT -- table of departments


(
DEPID int,
DEPTNAME varchar2(16),
LOCID int,
constraint pk_dept primary key (depid),
constraint fk_dept_loc foreign key (locid) references loc(locid)
);

create table EMP -- table of employees


(
EMPID int,
EMPNAME varchar2(16),
DEPID int,
constraint pk_emp primary key (empid),
constraint fk_emp_dept foreign key (depid) references dept(depid)
);

-- show constraints:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE,TABLE_NAME,R_CONSTRAINT_NAME from


user_constraints;

CONSTRAINT_NAME C TABLE_NAME R_CONSTRAINT_NAME


------------------------------ - ------------------------------
------------------------------
FK_EMP_DEPT R EMP PK_DEPT
FK_DEPT_LOC R DEPT PK_LOC
PK_LOC P LOC
PK_DEPT P DEPT
PK_EMP P EMP

-- insert some data:

INSERT INTO LOC VALUES (1,'Amsterdam');


INSERT INTO LOC VALUES (2,'Haarlem');
INSERT INTO LOC VALUES (3,null);
INSERT INTO LOC VALUES (4,'Utrecht');

INSERT INTO DEPT VALUES (1,'Sales',1);


INSERT INTO DEPT VALUES (2,'PZ',1);
INSERT INTO DEPT VALUES (3,'Management',2);
INSERT INTO DEPT VALUES (4,'RD',3);
INSERT INTO DEPT VALUES (5,'IT',4);

INSERT INTO EMP VALUES (1,'Joop',1);


INSERT INTO EMP VALUES (2,'Gerrit',2);
INSERT INTO EMP VALUES (3,'Harry',2);
INSERT INTO EMP VALUES (4,'Christa',3);
INSERT INTO EMP VALUES (5,null,4);
INSERT INTO EMP VALUES (6,'Nina',5);
INSERT INTO EMP VALUES (7,'Nadia',5);

-- make an export

C:\oracle\expimp>exp '/@test10g2 as sysdba' file=albert.dat owner=albert

Export: Release 10.2.0.1.0 - Production on Sat Mar 1 08:03:59 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -


Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)

About to export specified users ...


. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user ALBERT
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user ALBERT
About to export ALBERT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export ALBERT's tables via Conventional Path ...
. . exporting table DEPT 5 rows exported
. . exporting table EMP 7 rows exported
. . exporting table LOC 4 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

C:\oracle\expimp>

-- drop user albert

SQL> drop user albert cascade;

-- create user albert

See above

-- do the import

C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat fromuser=albert


touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:09:26 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -


Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path


import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table "DEPT" 5 rows imported
. . importing table "EMP" 7 rows imported
. . importing table "LOC" 4 rows imported
About to enable constraints...
Import terminated successfully without warnings.

C:\oracle\expimp>

- connect albert/albert

SQL> select * from emp;

EMPID EMPNAME DEPID


---------- ---------------- ----------
1 Joop 1
2 Gerrit 2
3 Harry 2
4 Christa 3
5 4
6 Nina 5
7 Nadia 5

7 rows selected.

SQL> select * from loc;

LOCID CITY
---------- ----------------
1 Amsterdam
2 Haarlem
3
4 Utrecht

SQL> select * from dept;

DEPID DEPTNAME LOCID


---------- ---------------- ----------
1 Sales 1
2 PZ 1
3 Management 2
4 RD 3
5 IT 4

-- show constraints:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE,TABLE_NAME,R_CONSTRAINT_NAME from


user_constraints;

CONSTRAINT_NAME C TABLE_NAME R_CONSTRAINT_NAME


------------------------------ - ------------------------------
------------------------------
FK_DEPT_LOC R DEPT PK_LOC
FK_EMP_DEPT R EMP PK_DEPT
PK_DEPT P DEPT
PK_EMP P EMP
PK_LOC P LOC

Everything is back again.

CASE 2:
=======

We are not going to drop the user, but empty the tables:

SQL> alter table dept disable constraint FK_DEPT_LOC;


SQL> alter table emp disable constraint FK_EMP_DEPT;
SQL> alter table dept disable constraint PK_DEPT;
SQL> alter table emp disable constraint pk_emp;
SQL> alter table loc disable constraint pk_loc;
SQL> truncate table emp;
SQL> truncate table loc;
SQL> truncate table dept;

-- do the import

C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat ignore=y


fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:25:27 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -


Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path


import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table "DEPT" 5 rows imported
. . importing table "EMP" 7 rows imported
. . importing table "LOC" 4 rows imported
About to enable constraints...
IMP-00017: following statement failed with ORACLE error 2270:
"ALTER TABLE "EMP" ENABLE CONSTRAINT "FK_EMP_DEPT""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
IMP-00017: following statement failed with ORACLE error 2270:
"ALTER TABLE "DEPT" ENABLE CONSTRAINT "FK_DEPT_LOC""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
Import terminated successfully with warnings.

So the data gets imported, but we have a problem with the FOREIGN KEYS:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE,TABLE_NAME,R_CONSTRAINT_NAME, STATUS
from user_constraints;

CONSTRAINT_NAME C TABLE_NAME R_CONSTRAINT_NAME


STATUS
------------------------------ - ------------------------------
------------------------------ -----
FK_DEPT_LOC R DEPT PK_LOC
DISABLED
FK_EMP_DEPT R EMP PK_DEPT
DISABLED
PK_LOC P LOC
DISABLED
PK_EMP P EMP
DISABLED
PK_DEPT P DEPT
DISABLED

alter table dept enable constraint pk_dept;
alter table emp enable constraint pk_emp;
alter table loc enable constraint pk_loc;
alter table dept enable constraint FK_DEPT_LOC;
alter table emp enable constraint FK_EMP_DEPT;

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE,TABLE_NAME,R_CONSTRAINT_NAME, STATUS


from user_constraints;

CONSTRAINT_NAME C TABLE_NAME R_CONSTRAINT_NAME


STATUS
------------------------------ - ------------------------------
------------------------------ -----
FK_DEPT_LOC R DEPT PK_LOC
ENABLED
FK_EMP_DEPT R EMP PK_DEPT
ENABLED
PK_DEPT P DEPT
ENABLED
PK_EMP P EMP
ENABLED
PK_LOC P LOC
ENABLED

SQL>

Everything is back again.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$$$$$$$$$$$$$$$$

What is exported?:
------------------

Tables, indexes, data, and database links get exported.

Example:
--------

exp system/manager file=oemuser.dmp owner=oemuser

Connected to: Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
With the Partitioning option
JServer Release 9.0.1.4.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...

. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OEMUSER
. exporting object type definitions for user OEMUSER
About to export OEMUSER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OEMUSER's tables via Conventional Path ...
. . exporting table                      CUSTOMERS          2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

D:\temp>

Can one import tables to a different tablespace?


-------------------------------------------------

Import the dump file using the INDEXFILE= option


Edit the indexfile. Remove remarks and specify the correct tablespaces.
Run this indexfile against your database, this will create the required tables
in the appropriate tablespaces
Import the table(s) with the IGNORE=Y option.
Change the default tablespace for the user:

Revoke the "UNLIMITED TABLESPACE" privilege from the user.
Revoke the user's quota on the tablespace from which the object was exported.
This forces the import utility to create tables in the user's default tablespace.
Make the tablespace to which you want to import the default tablespace for the user.
Import the table.

Can one export to multiple files?/ Can one beat the Unix 2 Gig limit?
---------------------------------------------------------------------

From Oracle8i, the export utility supports multiple output files.


exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log

Use the following technique if you use an Oracle version prior to 8i:

Create a compressed export on the fly.

# create a named pipe


mknod exp.pipe p
# read the pipe - output to zip file in the background
gzip < exp.pipe > scott.exp.gz &
# feed the pipe
exp userid=scott/tiger file=exp.pipe ...

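The import side of the same trick reads the compressed dump back through a named
pipe (a sketch under the same assumptions):

# create a named pipe
mknod imp.pipe p
# decompress the zipped dump into the pipe in the background
gunzip < scott.exp.gz > imp.pipe &
# feed the import from the pipe
imp userid=scott/tiger file=imp.pipe ...
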
Some famous Errors:


-------------------

Error 1:
--------

EXP-00008: ORACLE error 6550 encountered


ORA-06550: line 1, column 31:
PLS-00302: component 'DBMS_EXPORT_EXTENSION' must be declared

1. The errors indicate that
$ORACLE_HOME/rdbms/admin/CATALOG.SQL
and
$ORACLE_HOME/rdbms/admin/CATPROC.SQL
should be run again, as has been previously suggested. Were these scripts run
connected as SYS?
Try SELECT OBJECT_NAME, OBJECT_TYPE FROM DBA_OBJECTS WHERE STATUS =
'INVALID' AND OWNER = 'SYS';
Do you have invalid objects? Is DBMS_EXPORT_EXTENSION invalid? If so, try
compiling it manually:
ALTER PACKAGE DBMS_EXPORT_EXTENSION COMPILE BODY;
If you receive errors during manual compilation, please show errors for further
information.

2. Or possibly a different imp/exp version is run against another version
of the database.

The problem can be resolved by copying the higher version
CATEXP.SQL and executing it in the lower version RDBMS.

3. Other fix:

If there are problems in exp/imp from single byte to multibyte databases:

- Analyze which tables/rows could be affected by national characters before
  running the export.
- Increase the size of the affected rows.
- Export the table data once again.

Error 2:
--------

EXP-00091: Exporting questionable statistics.

This warning is generated because the statistics are questionable due to the
client character set differing from the server character set.
There is an article which discusses the causes of questionable statistics,
available via the MetaLink Advanced Search option by Doc ID:
Doc ID: 159787.1 9i: Import STATISTICS=SAFE
If you do not want this conversion to occur, ensure that the client NLS
environment performing the export is set to match the server.

Fix:
a) If the statistics of a table are not required in the export,
take the export with the parameter STATISTICS=NONE.
Example: $ exp scott/tiger file=emp1.dmp tables=emp STATISTICS=NONE
b) In case the statistics do need to be included, use
STATISTICS=ESTIMATE or COMPUTE (the default is ESTIMATE).

Error 3:
--------

EXP-00056: ORACLE error 1403 encountered


ORA-01403: no data found
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can't export any DB with an exp utility of a newer version.


The exp version must be equal or older than the DB version

Doc ID: Note:281780.1 Content Type: TEXT/PLAIN
Subject: Oracle 9.2.0.4.0: Schema Export Fails with ORA-1403 (No Data Found) on
Exporting Cluster Definitions Creation Date: 29-AUG-2004
Type: PROBLEM Last Revision Date: 29-AUG-2004
Status: PUBLISHED
The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Personal Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Standard Edition - Version: 9.2.0.4 to 9.2.0.4
This problem can occur on any platform.

ERRORS
------

EXP-56 ORACLE error encountered


ORA-1403 no data found
EXP-0: Export terminated unsuccessfully

SYMPTOMS
--------

A schema level export with the 9.2.0.4 export utility from a 9.2.0.4 or higher
release database in which XDB has been installed, fails when exporting
the cluster definitions with:

...
. exporting cluster definitions
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can confirm that XDB has been installed in the database:
SQL> SELECT substr(comp_id,1,15) comp_id, status, substr(version,1,10) version,
substr(comp_name,1,30) comp_name FROM dba_registry ORDER BY 1;

COMP_ID STATUS VERSION COMP_NAME


--------------- ----------- ---------- ------------------------------
...
XDB INVALID 9.2.0.4.0 Oracle XML Database
XML VALID 9.2.0.6.0 Oracle XDK for Java
XOQ LOADED 9.2.0.4.0 Oracle OLAP API

You create a trace file of the ORA-1403 error:

SQL> SHOW PARAMETER user_dump


SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack level 3';
System altered.

-- Re-run the export

SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack off';
System altered.

The trace file that was written to your USER_DUMP_DEST directory, shows:

ksedmp: internal or fatal error


ORA-01403: no data found
Current SQL statement for this session:
SELECT xdb_uid FROM SYS.EXU9XDBUID

You can confirm that you have no invalid XDB objects in the database:

SQL> SET lines 200


SQL> SELECT status, object_id, object_type, owner||'.'||object_name
"OWNER.OBJECT" FROM dba_objects WHERE owner='XDB' AND status != 'VALID'
ORDER BY 4,2;

no rows selected

Note: If you do have invalid XDB objects, and the same ORA-1403 error occurs
when performing a full database export, see the solution mentioned in:
[NOTE:255724.1] "Oracle 9i: Full Export Fails with ORA-1403
(No Data Found) on Exporting Cluster Definitions"

CHANGES
-------

You recently restored the database from a backup or you recreated the
controlfile, or you performed Operating System actions on your database
tempfiles.

CAUSE
-----

The Temporary tablespace does not have any tempfiles.

Note that the errors are different when exporting with a 9.2.0.3 or earlier
export utility:

. exporting cluster definitions


EXP-00056: ORACLE error 1157 encountered
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9201WA\TEMP01.DBF'
ORA-06512: at "SYS.DBMS_LOB", line 424
ORA-06512: at "SYS.DBMS_METADATA", line 1140
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully

The errors are also different when exporting with a 9.2.0.5 or later export
utility:

. exporting cluster definitions


EXP-00056: ORACLE error 1157 encountered
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9205WA\TEMP01.DBF'
EXP-00000: Export terminated unsuccessfully

FIX
---

1. If the controlfile does not have any reference to the tempfile(s),


add the tempfile(s):

SQL> SET lines 200


SQL> SELECT status, enabled, name FROM v$tempfile;
no rows selected

SQL> ALTER TABLESPACE temp ADD TEMPFILE


'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE;

or:

If the controlfile has a reference to the tempfile(s), but the files are
missing on disk, re-create the temporary tablespace, e.g.:

SQL> SET lines 200


SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE
'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON
NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
SQL> DROP TABLESPACE temp;
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE
'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON
NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
2. Now re-run the export.

Other errors:
-------------

Doc ID: Note:175624.1 Content Type: TEXT/X-HTML
Subject: Oracle Server - Export and Import FAQ Creation Date: 08-FEB-2002
Type: FAQ Last Revision Date: 16-FEB-2005
Status: PUBLISHED
PURPOSE
=======
This Frequently Asked Questions (FAQ) provides common Export and Import issues
in the following sections:
- GENERIC - LARGE FILES - INTERMEDIA - TOP EXPORT DEFECTS
- COMPATIBILITY - TABLESPACE - ADVANCED QUEUING - TOP IMPORT DEFECTS
- PARAMETERS - ORA-942 - REPLICATION
- PERFORMANCE - NLS - FREQUENT ERRORS

GENERIC
=======
Question: What is actually happening when I export and import data?
See Note 61949.1 "Overview of Export and Import in Oracle7"

Question: What is important when doing a full database export or import?
See Note 10767.1 "How to perform full system Export/Import"

Question: Can data corruption occur using export & import (version 8.1.7.3 to 9.2.0)?
See Note 199416.1 "ALERT: EXP Can Produce Dump File with Corrupted Data"

Question: How to Connect AS SYSDBA when Using Export or Import?
See Note 277237.1 "How to Connect AS SYSDBA when Using Export or Import"

COMPATIBILITY
=============
Question: Which version should I use when moving data between different database releases?
See Note 132904.1 "Compatibility Matrix for Export & Import Between Different Oracle Versions"
See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import"
See Note 76542.1 "NT: Exporting from Oracle8, Importing Into Oracle7"

Question: How to resolve the IMP-69 error when importing into a database?
See Note 163334.1 "Import Gets IMP-00069 when Importing 8.1.7 Export"
See Note 1019280.102 "IMP-69 on Import"
PARAMETERS
==========
Question: What is the difference between a Direct Path and a Conventional Path Export?
See Note 155477.1 "Parameter DIRECT: Conventional Path Export versus Direct Path Export"

Question: What is the meaning of the Export parameter CONSISTENT=Y and when should I use it?
See Note 113450.1 "When to Use CONSISTENT=Y During an Export"

Question: How to use the Oracle8i/9i Export parameter QUERY=... and what does it do?
See Note 91864.1 "Query= Syntax in Export in 8i"
See Note 277010.1 "How to Specify a Query in Oracle10g Export DataPump and Import DataPump"

Question: How to create multiple export dumpfiles instead of one large file?
See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files"

PERFORMANCE
===========
Question: Import takes so long to complete. How can I improve the performance of Import?
See Note 93763.1 "Tuning Considerations when Import is slow"

Question: Why has export performance decreased after creating tables with LOB columns?
See Note 281461.1 "Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance"

LARGE FILES
===========
Question: Which commands to use for solving Export dump file problems on UNIX platforms?
See Note 30528.1 "QREF: Export/Import/SQL*Load Large Files in Unix - Quick Reference"

Question: How to solve the EXP-15 and EXP-2 errors when the Export dump file is larger than 2Gb?
See Note 62427.1 "2Gb or Not 2Gb - File limits in Oracle"
See Note 1057099.6 "Unable to export when export file grows larger than 2GB"
See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files"

Question: How to export to a tape device by using a named pipe?
See Note 30428.1 "Exporting to Tape on Unix System"

TABLESPACE
==========
Question: How to transport tablespace between different versions?
See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import"

Question: How to move tables to a different tablespace and/or different user?
See Note 1012307.6 "Moving Tables Between Tablespaces Using EXPORT/IMPORT"
See Note 1068183.6 "How to change the default tablespace when importing using the INDEXFILE option"

Question: How can I export all tables of a specific tablespace?
See Note 1039292.6 "How to Export Tables for a specific Tablespace"

ORA-942
=======
Question: How to resolve an ORA-942 during import of the ORDSYS schema?
See Note 109576.1 "Full Import shows Errors when adding Referential Constraint on Cartrige Tables"

Question: How to resolve an ORA-942 during import of a snapshot (log) into a different schema?
See Note 1017292.102 "IMP-00017 IMP-00003 ORA-00942 USING FROMUSER/TOUSER ON SNAPSHOT [LOG] IMPORT"

Question: How to resolve an ORA-942 during import of a trigger on a renamed table?
See Note 1020026.102 "ORA-01702, ORA-00942, ORA-25001, When Importing Triggers"

Question: How to resolve an ORA-942 during import of one specific table?
See Note 1013822.102 "ORA-00942: ON TABLE LEVEL IMPORT"

NLS
===
Question: Which effect has the client's NLS_LANG setting on an export and import?
See Note 227332.1 "NLS considerations in Import/Export - Frequently Asked Questions"
See Note 15656.1 "Export/Import and NLS Considerations"

Question: How to prevent the loss of diacritical marks during an export/import?
See Note 96842.1 "Loss Of Diacritics When Performing EXPORT/IMPORT Due To Incorrect Charactersets"

INTERMEDIA OBJECTS
==================
Question: How to solve an EXP-78 when exporting metadata for an interMedia Text index?
See Note 130080.1 "Problems with EXPORT after upgrading from 8.1.5 to 8.1.6"

Question: I dropped the ORDSYS schema, but now I get ORA-6550 and PLS-201 when exporting?
See Note 120540.1 "EXP-8 PLS-201 After Drop User ORDSYS"

ADVANCED QUEUING OBJECTS
========================
Question: Why does export show ORA-1403 and ORA-6512 on an AQ object, after an upgrade?
See Note 159952.1 "EXP-8 and ORA-1403 When Performing A Full Export"

Question: How to resolve export errors on DBMS_AQADM_SYS and DBMS_AQ_SYS_EXP_INTERNAL?
See Note 114739.1 "ORA-4068 while performing full database export"

REPLICATION OBJECTS
===================
Question: How to resolve import errors on DBMS_IJOB.SUBMIT for Replication jobs?
See Note 137382.1 "IMP-3, PLS-306 Unable to Import Oracle8i JobQueues into Oracle8"

Question: How to reorganize Replication base tables with Export and Import?
See Note 1037317.6 "Move Replication System Tables using Export/Import for Oracle 8.X"

FREQUENTLY REPORTED EXPORT/IMPORT ERRORS


========================================
EXP-00002: Error in writing to export file
Note 1057099.6 </metalink/plsql/showdoc?db=NOT&id=1057099.6> "Unable to
export when export file grows
larger than 2GB"

EXP-00002: error in writing to export file


The export file could not be written to disk anymore, probably because the disk is
full or the device has an error.
Most of the time this is followed by a device (filesystem) error message
indicating the problem.

Possible causes are file systems that do not support a certain limit (eg. dump
file size > 2Gb) or a disk/filesystem that ran out of space.

EXP-00003: No storage definition found for segment(%s,%s) (EXP-3 EXP-0)


Note 274076.1 </metalink/plsql/showdoc?db=NOT&id=274076.1> "EXP-00003
When Exporting From Oracle9i 9.2.0.5.0 with a Pre-9.2.0.5.0 Export Utility"
Note 124392.1 </metalink/plsql/showdoc?db=NOT&id=124392.1> "EXP-3 while
exporting Rollback Segment definitions during FULL Database Export"

EXP-00067: "Direct path cannot export %s which contains object or lob data."
Note 1048461.6 </metalink/plsql/showdoc?db=NOT&id=1048461.6> "EXP-00067
PERFORMING DIRECT PATH EXPORT"
EXP-00079: Data in table %s is protected (EXP-79)
Note 277606.1 </metalink/plsql/showdoc?db=NOT&id=277606.1> "How to
Prevent EXP-00079 or EXP-00080 Warning (Data in Table xxx is Protected) During
Export"

EXP-00091: Exporting questionable statistics


Note 159787.1 </metalink/plsql/showdoc?db=NOT&id=159787.1> "9i: Import
STATISTICS=SAFE"

IMP-00016: Required character set conversion (type %lu to %lu) not supported
Note 168066.1 </metalink/plsql/showdoc?db=NOT&id=168066.1> "IMP-16 When
Importing Dumpfile into a Database Using Multibyte Characterset"

IMP-00020: Long column too large for column buffer size


Note 148740.1 </metalink/plsql/showdoc?db=NOT&id=148740.1> "ALERT:
Export of table with dropped functional index may cause IMP-20 on import"

ORA-00904: Invalid column name (EXP-8 ORA-904 EXP-0)


Note 106155.1 </metalink/plsql/showdoc?db=NOT&id=106155.1> "EXP-00008
ORA-1003 ORA-904 During Export"
Note 172220.1 </metalink/plsql/showdoc?db=NOT&id=172220.1> "Export of
Database fails with EXP-00904 and ORA-01003"
Note 158048.1 </metalink/plsql/showdoc?db=NOT&id=158048.1> "Oracle8i
Export Fails on Synonym Export with EXP-8 and ORA-904"
Note 130916.1 </metalink/plsql/showdoc?db=NOT&id=130916.1> "ORA-904
using EXP73 against Oracle8/8i Database"
Note 1017276.102 </metalink/plsql/showdoc?db=NOT&id=1017276.102>
"Oracle8i Export Fails on Synonym Export with EXP-8 and ORA-904"

ORA-01406: Fetched column value was truncated (EXP-8 ORA-1406 EXP-0)


Note 163516.1 </metalink/plsql/showdoc?db=NOT&id=163516.1> "EXP-0 and
ORA-1406 during Export of Object Types"

ORA-01422: Exact fetch returns more than requested number of rows


Note 221178.1 </metalink/plsql/showdoc?db=NOT&id=221178.1> "PLS-201 and
ORA-06512 at 'XDB.DBMS_XDBUTIL_INT' while Exporting Database"
Note 256548.1 </metalink/plsql/showdoc?db=NOT&id=256548.1> "Export of
Database with XDB Throws ORA-1422 Error"

ORA-01555: Snapshot too old


Note 113450.1 </metalink/plsql/showdoc?db=NOT&id=113450.1> "When to Use
CONSISTENT=Y During an Export"

ORA-04030: Out of process memory when trying to allocate %s bytes (%s,%s) (IMP-3
ORA-4030 ORA-3113)
Note 165016.1 </metalink/plsql/showdoc?db=NOT&id=165016.1> "Corrupt
Packages When Export/Import Wrapper PL/SQL Code"

ORA-06512: at "SYS.DBMS_STATS", line ... (IMP-17 IMP-3 ORA-20001 ORA-6512)


Note 123355.1 </metalink/plsql/showdoc?db=NOT&id=123355.1> "IMP-17 and
IMP-3 errors referring dbms_stats package during import"

ORA-29344: Owner validation failed - failed to match owner 'SYS'


Note 294992.1 </metalink/plsql/showdoc?db=NOT&id=294992.1> "Import
DataPump: Transport Tablespace Fails with ORA-39123 and 29344 (Failed to match
owner SYS)"
ORA-29516: Aurora assertion failure: Assertion failure at %s (EXP-8 ORA-29516 EXP-0)
Note 114356.1 </metalink/plsql/showdoc?db=NOT&id=114356.1> "Export
Fails With ORA-29516 Aurora Assertion Failure EXP-8"

PLS-00103: Encountered the symbol "," when expecting one of the following ...
(IMP-17 IMP-3 ORA-6550 PLS-103)
Note 123355.1 </metalink/plsql/showdoc?db=NOT&id=123355.1> "IMP-17 and
IMP-3 errors referring dbms_stats package during import"
Note 278937.1 </metalink/plsql/showdoc?db=NOT&id=278937.1> "Import
DataPump: ORA-39083 and PLS-103 when Importing Statistics Created with Non "." NLS
Decimal Character"

EXPORT TOP ISSUES CAUSED BY DEFECTS
===================================

Release : 8.1.7.2 and below
Problem : Export may fail with ORA-1406 when exporting object type definitions
Solution : apply patch-set 8.1.7.3
Workaround: no, see Note 163516.1 </metalink/plsql/showdoc?db=NOT&id=163516.1>
"EXP-0 and ORA-1406 during Export of Object Types"

Bug 1098503 </metalink/plsql/showdoc?db=Bug&id=1098503>


Release : Oracle8i (8.1.x) and Oracle9i (9.x)
Problem : EXP-79 when Exporting Protected Tables
Solution : this is not a defect
Workaround: N/A, see Note 277606.1 </metalink/plsql/showdoc?db=NOT&id=277606.1>
"How to Prevent EXP-00079 or EXP-00080 Warning (Data in Table xxx is Protected)
During Export"

Bug 2410612 </metalink/plsql/showdoc?db=Bug&id=2410612>


Release : 8.1.7.3 and higher and 9.0.1.2 and higher
Problem : Conventional export may produce an export file with corrupt data
Solution : 8.1.7.5 and 9.2.0.x or check for Patch 2410612
<http://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=2410612>
(for 8.1.7.x), 2449113 (for 9.0.1.x)
Workaround: yes, see Note 199416.1 </metalink/plsql/showdoc?db=NOT&id=199416.1>
"ALERT: Client Program May Give Incorrect Query Results
(EXP Can Produce Dump File with Corrupted Data)"

Release : Oracle8i (8.1.x)


Problem : Full database export fails with EXP-3: no storage definition found
for segment
Solution : Oracle9i (9.x)
Workaround: yes, see Note 124392.1 </metalink/plsql/showdoc?db=NOT&id=124392.1>
"EXP-3 while exporting Rollback Segment definitions during FULL Database Export"

Bug 2900891 </metalink/plsql/showdoc?db=Bug&id=2900891>


Release : 9.0.1.4 and below and 9.2.0.3 and below
Problem : Export with 8.1.7.3 and 8.1.7.4 from Oracle9i fails with invalid
identifier SPOLICY
(EXP-8 ORA-904 EXP-0)
Solution : 9.2.0.4 or 9.2.0.5
Workaround: yes, see Bug 2900891 </metalink/plsql/showdoc?db=Bug&id=2900891> how
to recreate view sys.exu81rls

Bug 2685696 </metalink/plsql/showdoc?db=Bug&id=2685696>


Release : 9.2.0.3 and below
Problem : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
(EXP-56 ORA-1422 ORA-6512)
Solution : 9.2.0.4 or check for Patch 2410612
<http://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=2410612>
(for 9.2.0.2 and 9.2.0.3)
Workaround: yes, see Note 221178.1 </metalink/plsql/showdoc?db=NOT&id=221178.1>
"ORA-01422 ORA-06512: at "XDB.DBMS_XDBUTIL_INT" while exporting
full database"

Bug 2919120 </metalink/plsql/showdoc?db=Bug&id=2919120>


Release : 9.2.0.4 and below
Problem : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
(EXP-56 ORA-1422 ORA-6512)
Solution : 9.2.0.5 or check for Patch 2919120
<http://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=2919120>
(for 9.2.0.4)
Workaround: yes, see Note 256548.1 </metalink/plsql/showdoc?db=NOT&id=256548.1>
"Export of Database with XDB Throws ORA-1422 Error"

IMPORT TOP ISSUES CAUSED BY DEFECTS
===================================

Bug 1335408 </metalink/plsql/showdoc?db=Bug&id=1335408>
Release : 8.1.7.2 and below
Problem : Bad export file using a locale with a ',' decimal separator (IMP-17
IMP-3 ORA-6550 PLS-103)
Solution : apply patch-set 8.1.7.3 or 8.1.7.4
Workaround: yes, see Note 123355.1 </metalink/plsql/showdoc?db=NOT&id=123355.1>
"IMP-17 and IMP-3 errors referring DBMS_STATS package during import"

Bug 1879479 </metalink/plsql/showdoc?db=Bug&id=1879479>


Release : 8.1.7.2 and below and 9.0.1.2 and below
Problem : Export of a wrapped package can result in a corrupt package being
imported
(IMP-3 ORA-4030 ORA-3113 ORA-7445 ORA-600[16201]).
Solution : in Oracle8i with 8.1.7.3 and higher; in Oracle9iR1 with 9.0.1.3 and
higher
Workaround: no, see Note 165016.1 </metalink/plsql/showdoc?db=NOT&id=165016.1>
"Corrupt Packages When Export/Import Wrapper PL/SQL Code"

Bug 2067904 </metalink/plsql/showdoc?db=Bug&id=2067904>


Release : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem : Trigger-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to
fail during Import
(IMP-17 IMP-3 ORA-931 ORA-23308 ORA-6512).
Solution : in Oracle9iR1 with patchset 9.0.1.3
Workaround: yes, see Note 239821.1 </metalink/plsql/showdoc?db=NOT&id=239821.1>
"ORA-931 or ORA-23308 in SET_TRIGGER_FIRING_PROPERTY
on Import of Trigger in 8.1.7.x and 9.0.1.x"

Bug 2854856 </metalink/plsql/showdoc?db=Bug&id=2854856>


Release : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem : Schema-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to
fail during Import
(IMP-17 IMP-3 ORA-911 ORA-6512).
Solution : in Oracle9iR2 with patchset 9.2.0.4
Workaround: yes, see Note 239890.1 </metalink/plsql/showdoc?db=NOT&id=239890.1>
"ORA-911 in SET_TRIGGER_FIRING_PROPERTY on Import of Trigger
in 8.1.7.x and Oracle9i"

4.3 SQL*Loader examples:
========================

SQL*Loader is used for loading data from text files into Oracle tables.
The text file can have fixed column positions, or columns separated by
a special character, for example a ",".

To call SQL*Loader:

sqlldr system/manager control=smssoft.ctl

sqlldr parfile=bonus.par

Example 1:
----------

BONUS.PAR:

userid=scott
control=bonus.ctl
bad=bonus.bad
log=bonus.log
discard=bonus.dis
rows=2
errors=2
skip=0

BONUS.CTL:

LOAD DATA
INFILE bonus.dat
APPEND
INTO TABLE BONUS
(name position(01:08) char,
city position(09:19) char,
salary position(20:22) integer external)

Now you can use the command:


$ sqlldr parfile=bonus.par

Example 2:
----------
LOAD1.CTL:

LOAD DATA
INFILE 'PLAYER.TXT'
INTO TABLE BASEBALL_PLAYER
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(player_id,last_name,first_name,middle_initial,start_date)

SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG


BAD=LOAD1.BAD DISCARD=LOAD1.DSC

Example 3: another controlfile:


------------------------------
SMSSOFT.CTL:

LOAD DATA
INFILE 'SMSSOFT.TXT'
TRUNCATE
INTO TABLE SMSSOFTWARE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DWMACHINEID, SERIALNUMBER, NAME, SHORTNAME, SOFTWARE, CMDB_ID, LOGONNAME)

Example 4: another controlfile:


-------------------------------

LOAD DATA
INFILE *
BADFILE 'd:\stage\loader\load.bad'
DISCARDFILE 'd:\stage\loader\load.dsc'
APPEND
INTO TABLE TEST
FIELDS TERMINATED BY "<tab>" TRAILING NULLCOLS
(
c1,
c2 char,
c3 date(8) "DD-MM-YY"
)
BEGINDATA
1<tab>X<tab>25-12-00
2<tab>Y<tab>31-12-00

Note: The <tab> placeholder is only for illustration purposes; in the actual
implementation, one would use a real tab character, which is not visible.

- Conventional path load:

When the DIRECT=Y parameter is not used, the conventional path is used.
This means that essentially INSERT statements are used,
triggers and referential integrity are in normal use, and that
the buffer cache is used.

- Direct path load:

The buffer cache is not used. Existing used blocks are not used.
New blocks are written as needed.
Referential integrity and triggers are disabled during the load.

Example 5:
----------

The following shows the control file (sh_sales.ctl) loading the sales table:

LOAD DATA INFILE sh_sales.dat APPEND INTO TABLE sales
FIELDS TERMINATED BY "|"
(PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD)

It can be loaded with the following command:


$ sqlldr sh/sh control=sh_sales.ctl direct=true

4.4 Creation of new table on basis of existing table:
=====================================================

CREATE TABLE EMPLOYEE_2
AS SELECT * FROM EMPLOYEE;

CREATE TABLE temp_jan_sales NOLOGGING TABLESPACE ts_temp_sales
AS SELECT * FROM sales
WHERE time_id BETWEEN '31-DEC-1999' AND '01-FEB-2000';

insert into t SELECT * FROM t2;

insert into DSA_IMPORT
SELECT * FROM MDB_DW_COMPONENTEN@SALES

4.5 COPY command to fetch data from a remote database:
======================================================

set copycommit 1
set arraysize 1000
copy FROM HR/PASSWORD@loc -
create EMPLOYEE -
using -
SELECT * FROM employee -
WHERE state='NM'

4.6 Simple differences between table versions:
==============================================

SELECT * FROM new_version
MINUS
SELECT * FROM old_version;

SELECT * FROM old_version
MINUS
SELECT * FROM new_version;
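
Both directions can also be combined in one statement (a sketch, using the same
new_version/old_version tables as above):

SELECT 'ONLY IN NEW' AS source, t.* FROM
  (SELECT * FROM new_version
   MINUS
   SELECT * FROM old_version) t
UNION ALL
SELECT 'ONLY IN OLD' AS source, t.* FROM
  (SELECT * FROM old_version
   MINUS
   SELECT * FROM new_version) t;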

=======================================================
5. Add, Move AND Size Datafiles, tablespaces, logfiles:
=======================================================

5.1 ADD OR DROP REDO LOGFILE GROUP:
===================================

ADD:
----

alter database
add logfile group 4
('/db01/oracle/CC1/log_41.dbf', '/db02/oracle/CC1/log_42.dbf') size 5M;

ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;


Add logfile plus group:

ALTER DATABASE ADD LOGFILE GROUP 4
('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo04.log') SIZE 50M;

ALTER DATABASE ADD LOGFILE GROUP 5
('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo05.log') SIZE 50M;

ALTER DATABASE
ADD LOGFILE ('G:\ORADATA\AIRM\REDO05.LOG') SIZE 20M;

DROP:
-----

-An instance requires at least two groups of online redo log files,
regardless of the number of members in the groups. (A group is one or more
members.)
-You can drop an online redo log group only if it is inactive.
If you need to drop the current group, first force a log switch to occur.

ALTER DATABASE DROP LOGFILE GROUP 3;

ALTER DATABASE DROP LOGFILE 'G:\ORADATA\AIRM\REDO02.LOG';

5.2 ADD REDO LOGFILE MEMBER:
============================

alter database
add logfile member '/db03/oracle/CC1/log_3c.dbf' to group 4;

Note: More on ONLINE LOGFILES:
------------------------------

-- Log Files Without Redundancy

LOGFILE
GROUP 1 '/u01/oradata/redo01.log' SIZE 10M,
GROUP 2 '/u02/oradata/redo02.log' SIZE 10M,
GROUP 3 '/u03/oradata/redo03.log' SIZE 10M,
GROUP 4 '/u04/oradata/redo04.log' SIZE 10M

-- Log Files With Redundancy

LOGFILE
GROUP 1 ('/u01/oradata/redo1a.log','/u05/oradata/redo1b.log') SIZE 10M,
GROUP 2 ('/u02/oradata/redo2a.log','/u06/oradata/redo2b.log') SIZE 10M,
GROUP 3 ('/u03/oradata/redo3a.log','/u07/oradata/redo3b.log') SIZE 10M,
GROUP 4 ('/u04/oradata/redo4a.log','/u08/oradata/redo4b.log') SIZE 10M

-- Related Queries

View information on log files:

SELECT *
FROM gv$log;

View information on log file history:

SELECT thread#, first_change#,
TO_CHAR(first_time,'MM-DD-YY HH12:MIPM'), next_change#
FROM gv$log_history;

-- Forcing log file switches

ALTER SYSTEM SWITCH LOGFILE;

-- Clear A Log File If It Has Become Corrupt

ALTER DATABASE CLEAR LOGFILE GROUP <group_number>;

This statement overcomes two situations where dropping redo logs is not possible:
- there are only two log groups, or
- the corrupt redo log file belongs to the current group.

ALTER DATABASE CLEAR LOGFILE GROUP 4;

-- Clear A Log File If It Has Become Corrupt And Avoid Archiving

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group_number>;

-- Use this version of clearing a log file if the corrupt log file has not been
archived.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Managing Log File Groups
------------------------

Adding a redo log file group:

ALTER DATABASE ADD LOGFILE
('<log_member_path_and_name>', '<log_member_path_and_name>')
SIZE <integer> <K|M>;

ALTER DATABASE ADD LOGFILE
('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;

Adding a redo log file group and specifying the group number:

ALTER DATABASE ADD LOGFILE GROUP <group_number>
('<log_member_path_and_name>') SIZE <integer> <K|M>;

ALTER DATABASE ADD LOGFILE GROUP 4 ('c:\temp\newlog1.log') SIZE 100M;

Relocating redo log files:

ALTER DATABASE RENAME FILE '<existing_path_and_file_name>'
TO '<new_path_and_file_name>';

conn / as sysdba

SELECT member
FROM v_$logfile;

SHUTDOWN;

host

$ cp /u03/logs/log1a.log /u04/logs/log1a.log
$ cp /u03/logs/log1b.log /u05/logs/log1b.log
$ exit

startup mount

ALTER DATABASE RENAME FILE '/u03/logs/log1a.log'
TO '/u04/oradata/log1a.log';

ALTER DATABASE RENAME FILE '/u04/logs/log1b.log'
TO '/u05/oradata/log1b.log';

ALTER DATABASE OPEN

host

$ rm /u03/logs/log1a.log
$ rm /u03/logs/log1b.log

$ exit

SELECT member
FROM v_$logfile;

Drop a redo log file group:

ALTER DATABASE DROP LOGFILE GROUP <group_number>;
ALTER DATABASE DROP LOGFILE GROUP 4;

Managing Log File Members
-------------------------

Adding log file group members:

ALTER DATABASE ADD LOGFILE MEMBER '<log_member_path_and_name>'
TO GROUP <group_number>;

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

Dropping log file group members:

ALTER DATABASE DROP LOGFILE MEMBER '<log_member_path_and_name>';

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

Dumping Log Files
-----------------

Dumping a log file to trace:

ALTER SYSTEM DUMP LOGFILE '<logfile_path_and_name>'
DBA MIN <file_number> <block_number>
DBA MAX <file_number> <block_number>;

or

ALTER SYSTEM DUMP LOGFILE '<logfile_path_and_name>'
TIME MIN <value>
TIME MAX <value>;

conn uwclass/uwclass

alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS';

SELECT SYSDATE
FROM dual;

CREATE TABLE test AS
SELECT owner, object_name, object_type
FROM all_objects
WHERE SUBSTR(object_name,1,1) BETWEEN 'A' AND 'W';

INSERT INTO test
(owner, object_name, object_type)
VALUES
('UWCLASS', 'log_dump', 'TEST');

COMMIT;

conn / as sysdba

SELECT ((SYSDATE-1/1440)-TO_DATE('01/01/2007','MM/DD/YYYY'))*86400 ssec
FROM dual;

ALTER SYSTEM DUMP LOGFILE 'c:\oracle\product\oradata\orabase\redo01.log'
TIME MIN 579354757;

Disable Log Archiving
---------------------

Stop log file archiving. The following is undocumented and unsupported and should
be used only with great care and after thorough testing. One might consider this
for loading a data warehouse. Be sure to restart logging as soon as the load is
complete, or the system will be at extremely high risk.

The rest of the database remains unchanged. The buffer cache works in exactly the
same way, old buffers get overwritten, old dirty buffers get written to disk. It's
just the process of physically flushing the redo buffer that gets disabled.

"I used it in a very large test environment where I wanted to perform a massive
amount of changes (a process to convert blobs to clobs actually) and it was going
to take days to complete. By disabling logging, I completed the task in hours, and
if anything untoward were to have happened, I was quite happy to restore the test
database back from backup."

~ the above paraphrased from a private email from Richard Foote.


conn / as sysdba

SHUTDOWN;

STARTUP MOUNT EXCLUSIVE;

ALTER DATABASE NOARCHIVELOG;

ALTER DATABASE OPEN;

ALTER SYSTEM SET "_disable_logging"=TRUE;

5.3 RESIZE DATABASE FILE:
=========================

alter database
datafile '/db05/oracle/CC1/data01.dbf' resize 400M;   (increase or decrease size)

Note: resizing a datafile is done with ALTER DATABASE DATAFILE ... RESIZE;
there is no per-datafile RESIZE clause on ALTER TABLESPACE.
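
Before and after a resize, it is handy to check the current sizes and autoextend
settings (standard dictionary view):

SELECT file_name, tablespace_name, bytes/1024/1024 AS mb, autoextensible
FROM dba_data_files
ORDER BY tablespace_name, file_name;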

5.4 ADD FILE TO TABLESPACE:
===========================

alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf'
size 50M
autoextend ON
maxsize unlimited;

5.5 ALTER STORAGE FOR FILE:
===========================

alter database
datafile '/db05/oracle/CC1/data01.dbf'
autoextend ON
maxsize unlimited;

alter database datafile '/oradata/temp/temp.dbf' autoextend off;

The AUTOEXTEND option cannot be turned off for the entire tablespace with
a single command. Each datafile within the tablespace must explicitly turn off
the AUTOEXTEND option via the ALTER DATABASE command; a dynamic query that
generates those statements is shown below.
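
For example (substitute your own tablespace name):

SELECT 'alter database datafile '''||file_name||''' autoextend off;'
FROM dba_data_files
WHERE tablespace_name = 'DATA';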


5.6 MOVE OF DATA FILE:
======================

connect internal
shutdown

mv /db01/oracle/CC1/data01.dbf /db02/oracle/CC1

connect / as SYSDBA
startup mount CC1

alter database rename file
'/db01/oracle/CC1/data01.dbf' to '/db02/oracle/CC1/data01.dbf';

alter database open;

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/sysaux01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/system01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/system01.dbf';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/temp01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/temp01.dbf';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/undotbs01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/users01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/users01.dbf';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/redo01.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/redo02.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log';

alter database rename file
'/dbms/tdbaplay/playdwhs/database/playdwhs/redo03.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log';

5.7 MOVE OF REDO LOG FILE:
==========================

connect internal
shutdown

mv /db05/oracle/CC1/redo01.dbf /db02/oracle/CC1

connect / as SYSDBA
startup mount CC1

alter database rename file
'/db05/oracle/CC1/redo01.dbf' to '/db02/oracle/CC1/redo01.dbf';

alter database open;

in case of problems:

ALTER DATABASE CLEAR LOGFILE GROUP n

example:
--------

shutdown immediate

on Unix:
mv /u01/oradata/spltst1/redo01.log /u02/oradata/spltst1/
mv /u03/oradata/spltst1/redo03.log /u02/oradata/spltst1/

startup mount pfile=/apps/oracle/admin/SPLTST1/pfile/init.ora

alter database rename file
'/u01/oradata/spltst1/redo01.log' to '/u02/oradata/spltst1/redo01.log';

alter database rename file
'/u03/oradata/spltst1/redo03.log' to '/u02/oradata/spltst1/redo03.log';

alter database open;


5.8 Put a datafile or tablespace ONLINE or OFFLINE:
===================================================

alter tablespace data offline;
alter tablespace data online;

alter database datafile 8 offline;
alter database datafile 8 online;
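
To verify the result, check the status columns in the usual views:

SELECT file#, substr(name,1,60) AS name, status FROM v$datafile;

SELECT tablespace_name, status FROM dba_tablespaces;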

5.9 ALTER DEFAULT STORAGE:
==========================

alter tablespace AP_INDEX_SMALL
default storage (initial 5M next 5M pctincrease 0);

5.10 CREATE TABLESPACE STORAGE PARAMETERS:
==========================================

locally managed, 9i style:

-- autoallocate:
----------------

CREATE TABLESPACE DEMO DATAFILE '/u02/oracle/data/lmtbsb01.dbf' size 100M
extent management local autoallocate;

-- uniform size, 1M is default:
-------------------------------

CREATE TABLESPACE LOBS DATAFILE 'f:\oracle\oradata\pegacc\lobs01.dbf' SIZE 3000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

CREATE TABLESPACE LOBS2 DATAFILE 'f:\oracle\oradata\pegacc\lobs02.dbf' SIZE 3000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE CISTS_01 DATAFILE '/u04/oradata/pilactst/cists_01.dbf' SIZE 1000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE CISTS_01 DATAFILE '/u01/oradata/spldev1/cists_01.dbf' SIZE 400M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE PUB DATAFILE 'C:\ORACLE\ORADATA\TEST10G\PUB.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE STAGING DATAFILE 'C:\ORACLE\ORADATA\TEST10G\STAGING.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE RMAN DATAFILE 'C:\ORACLE\ORADATA\RMAN\RMAN.DBF' SIZE 100M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE CISTS_01 DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 1200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE USERS DATAFILE '/u06/oradata/splpack/users01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE INDX DATAFILE '/u06/oradata/splpack/indx01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u07/oradata/spldevp/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

ALTER TABLESPACE CISTS_01
ADD DATAFILE '/u03/oradata/splplay/cists_02.dbf' SIZE 1000M;

ALTER TABLESPACE UNDOTBS
ADD DATAFILE '/dbms/tdbaprod/prodross/database/default/undotbs03.dbf' SIZE 2000M;

alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf'
size 50M
autoextend ON
maxsize unlimited;

-- segment management manual or automatic:
-- ---------------------------------------

We can have a locally managed tablespace where the segment space management,
via the free lists and the pct_free and pct_used parameters,
is still done manually.

To specify manual space management, use the SEGMENT SPACE MANAGEMENT MANUAL clause:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT MANUAL;

or, if you want segment space management to be automatic:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

-- temporary tablespace:
------------------------

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/pilactst/temp01.dbf'
SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

create user cisadm identified by cisadm
default tablespace cists_01
temporary tablespace temp;

create user cisuser identified by cisuser
default tablespace cists_01
temporary tablespace temp;

create user cisread identified by cisread
default tablespace cists_01
temporary tablespace temp;

grant connect to cisadm;
grant connect to cisuser;
grant connect to cisread;

grant resource to cisadm;
grant resource to cisuser;
grant resource to cisread;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/bcict2/tempt01.dbf'
SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

alter tablespace TEMP add tempfile '/u04/oradata/bcict2/temp02.dbf'
SIZE 5000M;

alter tablespace UNDO add datafile '/u04/oradata/bcict2/undo07.dbf' size 500M;

ALTER DATABASE datafile '/u04/oradata/bcict2/undo07.dbf' RESIZE 3000M;

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u04/oradata/bcict2/temp01.dbf'
SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

ALTER TABLESPACE TEMP
ADD TEMPFILE '/u04/oradata/bcict2/tempt4.dbf' SIZE 5000M;

1 /u03/oradata/bcict2/temp.dbf
2 /u03/oradata/bcict2/temp01.dbf
3 /u03/oradata/bcict2/temp02.dbf

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP
INCLUDING DATAFILES;

The extent management clause is optional for temporary tablespaces because all
temporary tablespaces
are created with locally managed extents of a uniform size. The Oracle default for
SIZE is 1M.
But if you want to specify another value for SIZE, you can do so as shown in the
above statement.

The AUTOALLOCATE clause is not allowed for temporary tablespaces.

If you get errors:
------------------

If the controlfile does not have any reference to the tempfile(s),
add the tempfile(s):

SQL> SET lines 200
SQL> SELECT status, enabled, name FROM v$tempfile;

no rows selected

SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF'
REUSE;

or:

If the controlfile has a reference to the tempfile(s), but the files are
missing on disk, re-create the temporary tablespace, e.g.:

SQL> SET lines 200
SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE
'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON
NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
SQL> DROP TABLESPACE temp;
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE
'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON
NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

-- undo tablespace:
-- ----------------

CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON;

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

-- ROLLBACK TABLESPACE:
-- --------------------

create tablespace RBS
datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M
default storage (
initial 500K
next 500K
pctincrease 0
minextents 2 );

#######################################################################################

CREATE TABLESPACE "DRSYS" LOGGING DATAFILE '/u02/oradata/pegacc/drsys01.dbf'


SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;

CREATE TABLESPACE "INDX" LOGGING DATAFILE '/u02/oradata/pegacc/indx01.dbf'


SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;

CREATE TABLESPACE "TOOLS" LOGGING DATAFILE '/u02/oradata/pegacc/tools01.dbf'


SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;

CREATE TABLESPACE "USERS" LOGGING DATAFILE '/u02/oradata/pegacc/users01.dbf'


SIZE 1000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;

CREATE TABLESPACE "XDB" LOGGING DATAFILE '/u02/oradata/pegacc/xdb01.dbf'


SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;

CREATE TABLESPACE "LOBS" LOGGING DATAFILE '/u02/oradata/pegacc/lobs01.dbf'


SIZE 2000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M ;

##################################################################################
#####

General form of an 8i-type statement:

CREATE TABLESPACE DATA
DATAFILE 'G:\ORADATA\RCDB\DATA01.DBF' size 100M
EXTENT MANAGEMENT DICTIONARY
default storage (
initial 512K
next 512K
minextents 1
pctincrease 0 )
minimum extent 512K
logging
online
permanent;

More info:
----------

By declaring a tablespace as DICTIONARY managed, you are specifying that extent
management for segments in this tablespace will be managed using the dictionary
tables sys.fet$ and sys.uet$. Oracle updates
these tables in the data dictionary whenever an extent is allocated, or freed for
reuse. This is the default
in Oracle8i when no extent management clause is used in the CREATE TABLESPACE
statement.
The sys.fet$ table is clustered in the C_TS# cluster. Because it is created
without a SIZE clause, one block
will be reserved in the cluster for each tablespace. Although, if a tablespace has
more free extents
than can be contained in a single cluster block, then cluster block chaining will
occur which can significantly
impact performance on the data dictionary and space management transactions in
particular. Unfortunately,
chaining in this cluster cannot be repaired without recreating the entire
database. Preferably, the number
of free extents in a tablespace should never be greater than can be recorded in
the primary cluster block
for that tablespace, which is about 500 free extents for a database with an 8K
database block size.

Used extents, on the other hand, are recorded in the data dictionary table
sys.uet$, which is clustered in the
C_FILE#_BLOCK# cluster. Unlike the C_TS# cluster, C_FILE#_BLOCK# is sized on the
assumption that segments
will have an average of just 4 or 5 extents each. Unless your data dictionary was
specifically
customized prior to database creation to allow for more used extents per segment,
then creating segments
with thousands of extents (like mentioned in the previous section) will cause
excessive cluster block chaining
in this cluster. The major dilemma with an excessive number of used and/or free
extents is that they can
misrepresent the operations of the dictionary cache LRU mechanism. Extents should
therefore not be allowed to grow
into the thousands, not because of the impact of full table scans, but rather the
performance of the data dictionary
and dictionary cache.

A Locally Managed Tablespace is a tablespace that manages its own extents by
maintaining a bitmap in each datafile to keep track of the free or used status
of blocks in that datafile. Each
bit in the bitmap corresponds
to a block or a group of blocks. When the extents are allocated or freed for
reuse, Oracle simply changes
the bitmap values to show the new status of the blocks. These changes do not
generate rollback information
because they do not update tables in the data dictionary (except for tablespace
quota information). This is the
default in Oracle9i. If COMPATIBLE is set to 9.0.0, then the default extent
management for any new tablespace is
locally managed in Oracle9i. If COMPATIBLE is less than 9.0.0, then the default
extent management for any
new tablespace is dictionary managed in Oracle9i.
While free space is represented in a bitmap within the tablespace, used extents
are only recorded in the
extent map in the segment header block of each segment, and if necessary, in
additional extent map blocks
within the segment.

Keep in mind though, that this information is not cached in the dictionary cache.
It must be obtained from the
database block every time that it is required, and if those blocks are not in the
buffer cache,
that involves I/O and potentially lots of it. Take for example a query against
DBA_EXTENTS. This query would
be required to read every segment header and every additional extent map block in
the entire database.
It is for this reason that it is recommended that the number of extents per
segment in locally managed tablespaces
be limited to the number of rows that can be contained in the extent map with the
segment header block.
This would be approximately (db_block_size / 16) - 7. For a database with a db
block size of 8K,
the above formula would be 505 extents.

5.11 DEALLOCATE AND DETECT UNUSED SPACE IN A TABLE:
===================================================

alter table emp
deallocate unused;

alter table emp
deallocate unused
keep 100K;

alter table emp
allocate extent (
size 100K
datafile '/db05/oradata/CC1/user05.dbf');

This datafile must exist in the same tablespace.

-- using the dbms_space.unused_space package

set serveroutput on

declare
var1 number;
var2 number;
var3 number;
var4 number;
var5 number;
var6 number;
var7 number;

begin
dbms_space.unused_space('AUTOPROV1', 'MACADDRESS_INDEX', 'INDEX',
var1, var2, var3, var4, var5, var6, var7);
dbms_output.put_line('OBJECT_NAME = MACADDRESS_INDEX');
dbms_output.put_line('TOTAL_BLOCKS ='||var1);
dbms_output.put_line('TOTAL_BYTES ='||var2);
dbms_output.put_line('UNUSED_BLOCKS ='||var3);
dbms_output.put_line('UNUSED_BYTES ='||var4);
dbms_output.put_line('LAST_USED_EXTENT_FILE_ID ='||var5);
dbms_output.put_line('LAST_USED_EXTENT_BLOCK_ID ='||var6);
dbms_output.put_line('LAST_USED_BLOCK ='||var7);
end;
/

5.12 CREATE TABLE:
==================

-- STORAGE PARAMETERS EXAMPLE:
-- ---------------------------

create table emp
(
id number,
name varchar(2)
)
tablespace users
pctfree 10
storage
(initial 1024K
next 1024K
pctincrease 10
minextents 2);

ALTER a COLUMN:
===============

ALTER TABLE GEWEIGERDETRANSACTIE
MODIFY (VERBRUIKTIJD DATE);

-- Creation of new table on basis of existing table:
-- -------------------------------------------------

CREATE TABLE EMPLOYEE_2
AS SELECT * FROM EMPLOYEE;

insert into t SELECT * FROM t2;

insert into DSA_IMPORT
SELECT * FROM MDB_DW_COMPONENTEN@SALES

-- Creation of a table with an autoincrement:
-- ------------------------------------------

CREATE SEQUENCE seq_customer
INCREMENT BY 1
START WITH 1
MAXVALUE 99999
NOCYCLE;

CREATE SEQUENCE seq_employee
INCREMENT BY 1
START WITH 1218
MAXVALUE 99999
NOCYCLE;

CREATE SEQUENCE seq_a
INCREMENT BY 1
START WITH 1
MAXVALUE 99999
NOCYCLE;

CREATE TABLE CUSTOMER (
CUSTOMER_ID NUMBER (10) NOT NULL,
NAAM VARCHAR2 (30) NOT NULL,
CONSTRAINT PK_CUSTOMER
PRIMARY KEY ( CUSTOMER_ID )
USING INDEX
TABLESPACE INDX PCTFREE 10
STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0 ))
TABLESPACE USERS
PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255
STORAGE (
INITIAL 80K NEXT 80K PCTINCREASE 0
MINEXTENTS 1 MAXEXTENTS 2147483645 )
NOCACHE;

CREATE OR REPLACE TRIGGER tr_CUSTOMER_ins
BEFORE INSERT ON CUSTOMER FOR EACH ROW
BEGIN
SELECT seq_customer.NEXTVAL INTO :NEW.CUSTOMER_ID FROM dual;
END;
/

CREATE SEQUENCE seq_brains_verbruik
INCREMENT BY 1
START WITH 1750795
MAXVALUE 100000000
NOCYCLE;

CREATE OR REPLACE TRIGGER tr_PARENTEENHEID_ins
BEFORE INSERT ON PARENTEENHEID FOR EACH ROW
BEGIN
SELECT seq_brains_verbruik.NEXTVAL INTO :NEW.VERBRUIKID FROM dual;
END;
/

5.13 REBUILD OF INDEX:
======================

ALTER INDEX emp_pk
REBUILD              -- ONLINE possible in 8.1.6 or higher
NOLOGGING
TABLESPACE INDEX_BIG
PCTFREE 10
STORAGE ( INITIAL 5M
NEXT 5M
pctincrease 0
);

ALTER INDEX emp_ename
INITRANS 5
MAXTRANS 10
STORAGE (PCTINCREASE 50);

In situations where you have B*-tree index leaf blocks that can be freed up for
reuse, you can merge
those leaf blocks using the following statement:

ALTER INDEX vmoore COALESCE;

-- Basic example of creating an index:

CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K
NEXT 20k
PCTINCREASE 75)
PCTFREE 0;

If you have a LMT, you can just do:

create index cust_indx on customers(id) nologging;

This statement is without storage parameters.

-- Dropping an index:

DROP INDEX emp_ename;

5.14 MOVE TABLE TO OTHER TABLESPACE:
====================================

ALTER TABLE CHARLIE.CUSTOMERS MOVE TABLESPACE USERS2
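
Note: moving a table changes the rowids of its rows, so all indexes on the table
become UNUSABLE and must be rebuilt afterwards, for example (PK_CUSTOMERS is just
an example index name):

SELECT index_name, status FROM dba_indexes WHERE table_name = 'CUSTOMERS';

ALTER INDEX CHARLIE.PK_CUSTOMERS REBUILD;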

5.15 SYNONYM (pointer to an object):
====================================

example:
create public synonym EMPLOYEE for HARRY.EMPLOYEE;
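
To list existing synonyms, or to remove one:

SELECT owner, synonym_name, table_owner, table_name
FROM dba_synonyms
WHERE table_owner = 'HARRY';

DROP PUBLIC SYNONYM EMPLOYEE;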

5.16 DATABASE LINK:
===================

CREATE PUBLIC DATABASE LINK SALESLINK
CONNECT TO FRONTEND IDENTIFIED BY cygnusx1
USING 'SALES';

SELECT * FROM employee@MY_LINK;

For example, using a database link to database sales.division3.acme.com,
a user or application can reference remote data as follows:

SELECT * FROM scott.emp@sales.division3.acme.com;   # emp table in scott's schema
SELECT loc FROM scott.dept@sales.division3.acme.com;

If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to
sales.division3.acme.com. For example, you can call the link foo. Then, you can
access the remote database as follows:

SELECT name FROM scott.emp@foo;   # link name different FROM global name

Synonyms for Schema Objects:

Oracle lets you create synonyms so that you can hide the database link name FROM
the user.
A synonym allows access to a table on a remote database using the same syntax that
you would use
to access a table on a local database. For example, assume you issue the following
query
against a table in a remote database:

SELECT * FROM emp@hq.acme.com;

You can create the synonym emp for emp@hq.acme.com,
so that you can issue the following query instead to access the same data:

SELECT * FROM emp;

View DATABASE LINKS:

select substr(owner,1,10), substr(db_link,1,50), substr(username,1,25),
substr(host,1,40), created
from dba_db_links;
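
A quick way to test whether a link actually works is a query against dual on the
remote side:

SELECT * FROM dual@SALESLINK;

DROP PUBLIC DATABASE LINK SALESLINK;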

5.17 TO CLEAR TABLESPACE TEMP:
==============================

alter tablespace TEMP default storage (pctincrease 0);

alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

5.18 RENAME OF OBJECT:
======================

RENAME sales_staff TO dept_30;
RENAME emp2 TO emp;

5.19 CREATE PROFILE:
====================

CREATE PROFILE DEVELOP_FIN LIMIT
SESSIONS_PER_USER 4
IDLE_TIME 30;

CREATE PROFILE PRIOLIMIT LIMIT
SESSIONS_PER_USER 10;

ALTER USER U_ZKN
PROFILE EXTERNLIMIT;

ALTER PROFILE EXTERNLIMIT
LIMIT PASSWORD_REUSE_TIME 90
PASSWORD_REUSE_MAX UNLIMITED;

ALTER PROFILE EXTERNLIMIT
LIMIT SESSIONS_PER_USER 20
IDLE_TIME 20;
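
Note that kernel resource limits in a profile (such as SESSIONS_PER_USER and
IDLE_TIME) are only enforced when resource limits are enabled, either via the
RESOURCE_LIMIT init.ora parameter or dynamically:

ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;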

5.20 RECOMPILE OF FUNCTION, PACKAGE, PROCEDURE:
===============================================

ALTER FUNCTION schema.function COMPILE;
example: ALTER FUNCTION oe.get_bal COMPILE;

ALTER PACKAGE schema.package COMPILE specification/body/package
example: ALTER PACKAGE emp_mgmt COMPILE PACKAGE;

ALTER PROCEDURE schema.procedure COMPILE;
example: ALTER PROCEDURE hr.remove_emp COMPILE;

TO FIND INVALID OBJECTS:

SELECT 'ALTER '||decode( object_type,
        'PACKAGE SPECIFICATION'
        ,'PACKAGE'
        ,'PACKAGE BODY'
        ,'PACKAGE'
        ,object_type)
     ||' '||owner
     ||'.'|| object_name ||' COMPILE '
     ||decode( object_type,
        'PACKAGE SPECIFICATION'
        ,'SPECIFICATION'
        ,'PACKAGE BODY'
        ,'BODY'
        , NULL) ||';'
FROM dba_objects WHERE status = 'INVALID';

5.21 CREATE PACKAGE:
====================

A package is a set of related functions and/or routines.
Packages are used to group together PL/SQL code blocks which make up a common
application or are attached to a single business function. Packages consist of
a specification and a body.
The package specification lists the public interfaces to the blocks within the
package body.
The package body contains the public and private PL/SQL blocks which make up
the application; private blocks are not defined in the package specification
and cannot be called by any routine other than one defined within the package
body.
The benefits of packages are that they improve the organisation of procedure
and function blocks, allow you to update the blocks that make up the package
body without affecting the specification (which is the object that users have
rights to), and allow you to grant execute rights once instead of for each and
every block.

To create a package specification we use a variation on the CREATE command;
all we need put in the specification is each PL/SQL block header that will
be public within the package. An example follows:

CREATE OR REPLACE PACKAGE MYPACK1 AS
PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER);
FUNCTION MYFUNC1 RETURN NUMBER;
END MYPACK1;
/

To create a package body we now specify each PL/SQL block that makes up the
package; note that we are not creating these blocks separately (no CREATE OR
REPLACE is required for the procedure and function definitions). An example
follows:

CREATE OR REPLACE PACKAGE BODY MYPACK1 AS
PROCEDURE MYPROC1
 (REQISBN IN NUMBER,
  MYVAR1 IN OUT CHAR,
  TCOST OUT NUMBER)
IS
 TEMP_COST NUMBER(10,2);
BEGIN
 SELECT COST INTO TEMP_COST FROM JD11.BOOK WHERE ISBN = REQISBN;
 IF TEMP_COST > 0 THEN
  UPDATE JD11.BOOK SET COST = (TEMP_COST*1.175) WHERE ISBN = REQISBN;
 ELSE
  UPDATE JD11.BOOK SET COST = 21.32 WHERE ISBN = REQISBN;
 END IF;
 TCOST := TEMP_COST;
 COMMIT;
EXCEPTION
 WHEN NO_DATA_FOUND THEN
  INSERT INTO JD11.ERRORS (CODE, MESSAGE) VALUES(99, 'ISBN NOT FOUND');
END MYPROC1;
FUNCTION MYFUNC1
 RETURN NUMBER
IS
 RCOST NUMBER(10,2);
BEGIN
 SELECT COST INTO RCOST FROM JD11.BOOK WHERE ISBN = 21;
 RETURN (RCOST);
END MYFUNC1;
END MYPACK1;
/

You can execute a public package block like this:

EXECUTE :PCOST := JD11.MYPACK1.MYFUNC1;

where JD11 is the schema name that owns the package. You can use DROP PACKAGE
and DROP PACKAGE BODY to remove the package objects FROM the database.

CREATE OR REPLACE PACKAGE schema.package

CREATE PACKAGE emp_mgmt AS
FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2,
manager_id NUMBER, salary NUMBER,
commission_pct NUMBER, department_id NUMBER)
RETURN NUMBER;
FUNCTION create_dept(department_id NUMBER, location NUMBER)
RETURN NUMBER;
PROCEDURE remove_emp(employee_id NUMBER);
PROCEDURE remove_dept(department_id NUMBER);
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER);
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER);
no_comm EXCEPTION;
no_sal EXCEPTION;
END emp_mgmt;
/

Before you can call this package's procedures and functions,
you must define these procedures and functions in the package body.

5.22 View a view:
=================

set long 2000

SELECT text
FROM sys.dba_views
WHERE view_name = 'CONTROL_PLAZA_V';

5.23 ALTER SYSTEM:
==================

ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ENABLE/DISABLE RESTRICTED SESSION;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SUSPEND/RESUME;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
ALTER SYSTEM SET LICENSE_MAX_USERS = 300;
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
ALTER SYSTEM SET COMPATIBLE = '9.2.0' SCOPE=SPFILE;

5.24 HOW TO ENABLE OR DISABLE TRIGGERS:
=======================================

Disable/enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one go, for all triggers on a table:

ALTER TABLE Inventory
DISABLE ALL TRIGGERS;
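
In the same style as the other dynamic queries in this document, statements to
disable every trigger of a schema can be generated (substitute your own schema):

SELECT 'ALTER TRIGGER '||owner||'.'||trigger_name||' DISABLE;'
FROM dba_triggers
WHERE owner = 'RM_LIVE';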

5.25 DISABLING AND ENABLING AN INDEX:
=====================================

alter index HEAT_CUSTOMER_POSTAL_CODE unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;
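
To find indexes or index partitions that were left UNUSABLE:

SELECT owner, index_name, status
FROM dba_indexes
WHERE status = 'UNUSABLE';

SELECT index_owner, index_name, partition_name, status
FROM dba_ind_partitions
WHERE status = 'UNUSABLE';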

5.26 CREATE A VIEW:
===================

CREATE VIEW v1 AS SELECT
LPAD(' ',40-length(size_tab.size_col)/2,' ') size_col
FROM size_tab;

CREATE VIEW X
AS
SELECT * FROM gebruiker@aptest

5.27 MAKE A USER:
=================

CREATE USER jward
IDENTIFIED BY aZ7bC2
DEFAULT TABLESPACE data_ts
QUOTA 100M ON test_ts
QUOTA 500K ON data_ts
TEMPORARY TABLESPACE temp_ts
PROFILE clerk;

GRANT connect TO jward;

create user jaap identified by jaap
default tablespace users
temporary tablespace temp;

grant connect to jaap;
grant resource to jaap;

Dynamic queries:
----------------

-- CREATE USER AND GRANT PERMISSION STATEMENTS
-- dynamic queries

SELECT 'CREATE USER '||USERNAME||' identified by '||USERNAME||
' default tablespace '||DEFAULT_TABLESPACE||
' temporary tablespace '||TEMPORARY_TABLESPACE||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT CREATE SESSION to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT connect to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT resource to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT unlimited tablespace to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

Becoming another user:
======================

- Do the query:

select 'ALTER USER '||username||' IDENTIFIED BY VALUES '||''''||
password||''''||';'
from dba_users;

- change the password
- do what you need to do as the other account
- change the password back to the original value

-- grant <other roles or permissions> to <user>

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||constraint_name||';'
from dba_constraints where owner='RM_LIVE' and CONSTRAINT_TYPE='R';

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||constraint_name||';'
from dba_constraints where owner='RM_LIVE' and CONSTRAINT_TYPE='P';

5.28 CREATE A SEQUENCE:
=======================

Sequences are database objects from which multiple users can generate unique
integers. You can use sequences to automatically generate primary key values.

CREATE SEQUENCE <sequence name>
INCREMENT BY <increment number>
START WITH <start number>
MAXVALUE <maximum value>
CYCLE;

CREATE SEQUENCE department_seq
INCREMENT BY 1
START WITH 1
MAXVALUE 99999
NOCYCLE;
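
Once created, a sequence is used via NEXTVAL and CURRVAL, for example (the
DEPARTMENT table is just an example):

SELECT department_seq.NEXTVAL FROM dual;

INSERT INTO department (department_id, name)
VALUES (department_seq.NEXTVAL, 'SALES');

SELECT department_seq.CURRVAL FROM dual;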

5.29 STANDARD USERS IN 9i:
==========================

CTXSYS is the primary schema for interMedia.
MDSYS, ORDSYS, and ORDPLUGINS are schemas required when installing any of the
cartridges.
MTSSYS is required for the Oracle Service for MTS and is specific to NT.
OUTLN is an integral part of the database, required for the plan stability
feature in Oracle8i.

While the interMedia and cartridge schemas can be recreated by running their
associated scripts as needed, I am not 100% certain of the steps associated
with the MTSSYS user.

Unfortunately, the OUTLN user is created at database creation time when sql.bsq is
run.
The OUTLN user owns the package OUTLN_PKG which is used to manage stored outlines
and their outline categories.
There are other tables (base tables), indexes, grants, and synonyms related to
this package.

By default, the following accounts are automatically created during database creation:

SCOTT      by script $ORACLE_HOME/rdbms/admin/utlsampl.sql
OUTLN      by script $ORACLE_HOME/rdbms/admin/sql.bsq
Optionally:
DBSNMP if Enterprise Manager Intelligent Agent is installed
TRACESVR if Enterprise Manager is installed
AURORA$ORB$UNAUTHENTICATED \
AURORA$JIS$UTILITY$ -- if Oracle Servlet Engine (OSE) is installed
OSE$HTTP$ADMIN /
MDSYS if Oracle Spatial option is installed
ORDSYS if interMedia Audio option is installed
ORDPLUGINS if interMedia Audio option is installed
CTXSYS if Oracle Text option is installed
REPADMIN if Replication Option is installed
LBACSYS if Oracle Label Security option is installed
ODM if Oracle Data Mining option is installed
ODM_MTR idem
OLAPSYS if OLAP option is installed
WMSYS if Oracle Workspace Manager script owmctab.plb is
executed.
ANONYMOUS if catqm.sql catalog script for SQL XML management
XDB is executed

5.30 FORCED LOGGING:
====================

alter database no force logging;

If a database is in force logging mode, all changes, except those in temporary
tablespaces, will be logged, independently from any nologging specification.
It is also possible to put arbitrary tablespaces into force logging mode:
alter tablespace <name> force logging. Enabling force logging might take a
while to complete, because Oracle waits for all ongoing unlogged operations
to finish.

alter database add supplemental log data;

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

ALTER TABLESPACE TDBA_CDC NO FORCE LOGGING;
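
The current force logging status can be checked as follows:

SELECT force_logging FROM v$database;

SELECT tablespace_name, force_logging FROM dba_tablespaces;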


====================================================
ORACLE INSTALLATIONS ON SOLARIS, LINUX, AIX, VMS:
====================================================

6: Install on Solaris
7: Install on Linux
8: Install on OpenVMS
9: Install on AIX

==================================
6.1. Install Oracle 92 on Solaris:
==================================

6.1 Tutorial 1:
===============

Short Guide to install Oracle 9.2.0 on SUN Solaris 8

--------------------------------------------------------------------------------

The Oracle 9i Distribution can be found on Oracle Technet
(http://technet.oracle.com).

The following short Installation Guide shows how to install Oracle 9.2.0 for SUN
Solaris 8. You may download our scripts to create a database; we suggest this way
and NOT using DBASSIST. Besides these scripts, you can download our SQL*Net
configuration files TNSNAMES.ORA, LISTENER.ORA and SQLNET.ORA.

Check Hardware Requirements
Operating System Software Requirements
Java Runtime Environment (JRE)
Check Software Limits
Setup the Solaris Kernel
Create Unix Group 'dba'
Create Unix User 'oracle'
Setup ORACLE environment ($HOME/.profile) as follows
Install from CD-ROM ...
... or Unpacking downloaded installation files
Check oraInst.loc File
Install with Installer in interactive mode
Create the Database
Start Listener
Automatically Start / Stop the Database
Install Oracle Options (optional)
Download Scripts for Sun Solaris

For our installation, we used the following ORACLE_HOME and ORACLE_SID, please
adjust these parameters
for your own environment.
ORACLE_HOME = /opt/oracle/product/9.2.0

ORACLE_SID = TYP2

--------------------------------------------------------------------------------

Check Hardware Requirements

Minimal Memory: 256 MB
Minimal Swap Space: Twice the amount of the RAM

To determine the amount of RAM memory installed on your system, enter the
following command:

$ /usr/sbin/prtconf

To determine the amount of SWAP installed on your system, enter the following
command and multiply the BLOCKS column by 512:

$ swap -l

Use the latest kernel patch from Sun Microsystems (http://sunsolve.sun.com)

Operating System Software Requirements

Use the latest kernel patch from Sun Microsystems.

- Download the Patch from: http://sunsolve.sun.com
- Read the README File included in the Patch
- Usually the only thing you have to do is:

$ cd <patch cluster directory>
$ ./install_cluster
$ cat /var/sadm/install_data/<cluster name>_log
$ showrev -p

- Reboot the system

To determine your current operating system information:

$ uname -a

To determine which operating system patches are installed:

$ showrev -p

To determine which operating system packages are installed:

$ pkginfo -i [package_name]

To determine if your X-windows system is working properly on your local system
(you can also redirect the X-windows output to another system):

$ xclock

To determine if you are using the correct system executables:

$ /usr/bin/which make
$ /usr/bin/which ar
$ /usr/bin/which ld
$ /usr/bin/which nm

Each of the four commands above should point to the /usr/ccs/bin directory. If
not, add /usr/ccs/bin to the
beginning of the PATH environment variable in the current shell.

Java Runtime Environment (JRE)

The JRE shipped with Oracle9i is used by Oracle Java applications such as the
Oracle Universal Installer, and is the only one supported. You should not
modify this JRE, unless it is done
through a patch provided by
Oracle Support Services. The inventory can contain multiple versions of the JRE,
each of which can be used
by one or more products or releases. The Installer creates the oraInventory
directory
the first time it is run to keep an inventory of products that it installs on your
system as well as other
installation information. The location of oraInventory is defined in
/var/opt/oracle/oraInst.loc.
Products in an ORACLE_HOME access the JRE through a symbolic link in
$ORACLE_HOME/JRE to the actual location
of a JRE within the inventory. You should not modify the symbolic link.

Check Software Limits

Oracle9i includes native support for files greater than 2 GB. Check your shell to
determine
whether it will impose a limit.

To check current soft shell limits, enter the following command:

$ ulimit -Sa

To check maximum hard limits, enter the following command:

$ ulimit -Ha

The file (blocks) value should be multiplied by 512 to obtain the maximum file
size imposed by the shell.
A value of unlimited is the operating system default and is the maximum value of 1
TB.

Setup the Solaris Kernel

Set SEMMNS to the sum of the PROCESSES parameter for each Oracle database,
adding the largest one twice, then add an additional 10 for each database.
For example, consider a system that has three Oracle instances with the PROCESSES
parameter
in their initSID.ora files set to the following values:

ORACLE_SID=TYP1, PROCESSES=100
ORACLE_SID=TYP2, PROCESSES=100
ORACLE_SID=TYP3, PROCESSES=200
The value of SEMMNS is calculated as follows:

SEMMNS = [(A=100) + (B=100)] + [(C=200) * 2] + [(# of instances=3) * 10] = 630

Setting parameters too high for the operating system can prevent the machine from
booting up.
Refer to Sun Microsystems Sun SPARC Solaris system administration documentation
for parameter limits.

*
* Kernel Parameters on our SUN Enterprise with 640MB for Oracle 9
*
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

-- remarks:

The parameter for shared memory (shminfo_shmmax) can be set to the maximum
value; it will not impact Solaris in any way.
The values for semaphores (seminfo_semmni and seminfo_semmns) depend on the
number of clients you want to collect
data from.
As a rule of the thumb, the values should be set to at least (2*nr of clients +
15).
You will have to reboot the system after making changes to the /etc/system file.

Solaris doesn't automatically allocate shared memory, unless you specify the
value in /etc/system and reboot.

Were I you, I'd put lines in /etc/system that look something like this:
only the first value is *really* important. It specifies the maximum amount
of shared memory to allocate. I'd make this parameter be about 70-75% of your
physical ram (assuming you have nothing else on this machine running besides
Oracle ... if not, adjust down accordingly). Then this value will dictate
your maximum SGA size as you build your database.

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmni=400

-- end remarks

Create Unix Group 'dba':

$ groupadd -g 400 dba
$ groupdel dba

Create Unix User 'oracle':

$ useradd -u 400 -c "Oracle Owner" -d /export/home/oracle \
  -g "dba" -m -s /bin/ksh oracle

Setup ORACLE environment ($HOME/.profile) as follows

# Setup ORACLE environment

ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=TYP2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=/export/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH

# Set up the search paths:

PATH=/bin:/usr/bin:/usr/sbin:/opt/bin:/usr/ccs/bin:/opt/local/GNU/bin
PATH=$PATH:/opt/local/bin:/opt/NSCPnav/bin:$ORACLE_HOME/bin
PATH=$PATH:/usr/local/samba/bin:/usr/ucb:.
export PATH

# CLASSPATH must include the following JRE location(s):

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib

Install from CD-ROM ...

Usually the CD-ROM will be mounted automatically by the Solaris Volume Manager, if
not, do it as follows as user root.

$ su root
$ mkdir /cdrom
$ mount -r -F hsfs /dev/.... /cdrom

exit or CTRL-D

... or Unpacking downloaded installation files

If you downloaded the database installation files from the Oracle site
(901solaris_disk1.cpio.gz, 901solaris_disk2.cpio.gz and 901solaris_disk3.cpio.gz),
gunzip them somewhere and you'll get three .cpio files. The best way to download
the huge files is to use the tool GetRight ( http://www.getright.com/ )
is to use the tool GetRight ( http://www.getright.com/ )

$ cd <somewhere>
$ mkdir Disk1 Disk2 Disk3
$ cd Disk1
$ gunzip 901solaris_disk1.cpio.gz
$ cat 901solaris_disk1.cpio | cpio -icd

This will extract all the files for Disk1; repeat the steps for Disk2 and Disk3.
Now you should have three directories (Disk1, Disk2 and Disk3) containing the
installation files.

Check oraInst.loc File

If you used Oracle before on your system, then you must edit the Oracle Inventory
File, usually located in:
/var/opt/oracle/oraInst.loc

inventory_loc=/opt/oracle/product/oraInventory

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

$ cd /Disk1
$ DISPLAY=<Any X-Window Host>:0.0
$ export DISPLAY
$ ./runInstaller

example display:
$ export DISPLAY=192.168.1.10:0.0

Answer the questions in the Installer, we use the following install directories

Inventory Location: /opt/oracle/product/oraInventory


Oracle Universal Installer in: /opt/oracle/product/oui
Java Runtime Environment in: /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /var/opt/oracle/oratab

TYP2:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initTYP2.sql in $ORACLE_HOME/dbs, or create
a symbolic-Link
from $ORACLE_HOME/dbs to your Location.

$ cd $ORACLE_HOME/dbs
$ ln -s /export/home/oracle/config/9.2.0/initTYP2.ora initTYP2.ora
$ ls -l

initTYP2.ora -> /export/home/oracle/config/9.2.0/initTYP2.ora

First start the Instance, just to test your initTYP2.ora file for correct syntax
and system resources.

$ cd /export/home/oracle/config/9.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the database:

SQL> @initTYP2.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initTYP2.log

Start Listener

$ lsnrctl start LSNRTYP2

Automatically Start / Stop the Database

To start the Database automatically at Boot-Time, create or use our Startup
Scripts dbora and lsnrora (included in ora_config_sol_920.tar.gz),
which must be installed in /etc/init.d. Create symbolic Links from the Startup
Directories:

lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Install Oracle Options (optional)

You may want to install the following Options:

Oracle JVM
Oracle XML
Oracle Spatial
Oracle Ultra Search
Oracle OLAP
Oracle Data Mining
Example Schemas

Run the script install_options.sh to enable these options in the database.
Before running this script, adjust the initSID.ora parameters as follows for
the build process. Afterwards, you can reset the parameters to smaller values.

parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000

$ ./install_options.sh

Download Scripts for Sun Solaris

These scripts can be used as templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH
must be adjusted for your own environment. Besides this, you should check the
initSID.ora parameters
for your database (size, archivelog, ...).

6.2 Environment oracle user:
----------------------------

A typical profile for the Oracle account on most Unix systems:

.profile
--------

MAIL=/usr/mail/${LOGNAME:?}
umask 022
EDITOR=vi; export EDITOR
ORACLE_BASE=/opt/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2; export ORACLE_HOME
ORACLE_SID=OWS; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.AL32UTF8; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH
PATH=.:/usr/bin:/usr/sbin:/sbin:/usr/ucb:/etc:$ORACLE_HOME/lib:/usr/oasys/bin:$ORACLE_HOME/bin:/usr/local/bin:
export PATH
PS1='$PWD >'
DISPLAY=172.17.2.128:0.0
export DISPLAY

/etc >more passwd
-----------------
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/usr/spool/lp:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
smmsp:x:25:25:SendMail Message Submission Program:/:
listen:x:37:4:Network Admin:/usr/net/nls:
nobody:x:60001:60001:Nobody:/:
noaccess:x:60002:60002:No Access User:/:
nobody4:x:65534:65534:SunOS 4.x Nobody:/:
avdsel:x:1002:100:Albert van der Sel:/export/home/avdsel:/bin/ksh
oraclown:x:1001:102:Oracle owner:/export/home/oraclown:/bin/ksh
brighta:x:1005:102:Bright Alley:/export/home/brighta:/bin/ksh
customer:x:2000:102:Customer account:/export/home/customer:/usr/bin/tcsh

/etc >more group
----------------
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
mail::6:root
tty::7:root,adm
lp::8:root,lp,adm
nuucp::9:root,nuucp
staff::10:
daemon::12:root,daemon
sysadmin::14:
smmsp::25:smmsp
nobody::60001:
noaccess::60002:
nogroup::65534:
dba::100:oraclown,brighta
oper::101:
oinstall::102:

=====================================
7. install Oracle 9i on Linux:
=====================================

====================
7.1.Article 1:
====================

The Oracle 9i Distribution can be found on Oracle Technet
(http://technet.oracle.com)

The following short guide shows how to install and configure Oracle 9.2.0 on
RedHat Linux 7.2 / 8.0. You may download our
scripts to create a database; we suggest this way and NOT using DBASSIST. Besides
these scripts, you can download our
NET configuration files: LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA.

System Requirements
Create Unix Group 'dba'
Create Unix User 'oracle'
Setup Environment ($HOME/.bash_profile) as follows
Mount the Oracle 9i CD-ROM (only if you have the CD) ...
... or Unpacking downloaded installation files
Install with Installer in interactive mode
Create the Database
Create your own DB-Create Script (optional)
Start Listener
Automatically Start / Stop the Database
Setup Kernel Parameters ( if necessary )
Install Oracle Options (optional)
Download Scripts for RedHat Linux 7.2

For our installation, we used the following ORACLE_HOME AND ORACLE_SID, please
adjust these parameters for
your own environment.

ORACLE_HOME = /opt/oracle/product/9.2.0
ORACLE_SID = VEN1

--------------------------------------------------------------------------------

System Requirements

Oracle 9i needs Kernel Version 2.4 and glibc 2.2, which is included in RedHat
Linux 7.2.

Component                    Check with ...    ... Output
Linux Kernel Version 2.4     rpm -q kernel     kernel-2.4.7-10
System Libraries             rpm -q glibc      glibc-2.2.4-19.3
Proc*C/C++                   rpm -q gcc        gcc-2.96-98

Create Unix Group 'dba'

$ groupadd -g 400 dba

Create Unix User 'oracle'

$ useradd -u 400 -c "Oracle Owner" -d /home/oracle \
  -g "dba" -m -s /bin/bash oracle

Setup Environment ($HOME/.bash_profile) as follows

# Setup ORACLE environment

ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=VEN1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
TNS_ADMIN=/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
CLASSPATH=$ORACLE_HOME/jdbc/lib/classes111.zip
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH

### see JSDK: export CLASSPATH

# Set up JAVA and JSDK environment:

export JAVA_HOME=/usr/local/jdk
export JSDK_HOME=/usr/local/jsdk
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JSDK_HOME/lib/jsdk.jar
export CLASSPATH

# Set up the search paths:

PATH=$POSTFIX/bin:$POSTFIX/sbin:$POSTFIX/sendmail
PATH=$PATH:/usr/local/jre/bin:/usr/local/jdk/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:/usr/local/jsdk/bin
PATH=$PATH:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin:/root/bin
PATH=$PATH:/usr/local/samba/bin
export PATH

Mount the Oracle 9i CD-ROM (only if you have the CD) ...

Mount the CD-ROM as user root.

$ su root
$ mkdir /cdrom
$ mount -t iso9660 /dev/cdrom /cdrom
$ exit

... or Unpacking downloaded installation files

If you downloaded the database installation files from the Oracle site
(Linux9i_Disk1.cpio.gz, Linux9i_Disk2.cpio.gz and
Linux9i_Disk3.cpio.gz), gunzip them somewhere and you'll get three .cpio files. The
best way to download the huge files
is to use the tool GetRight ( http://www.getright.com/ )

$ cd <somewhere>
$ cpio -idmv < Linux9i_Disk1.cpio
$ cpio -idmv < Linux9i_Disk2.cpio
$ cpio -idmv < Linux9i_Disk3.cpio

Now you should have three directories (Disk1, Disk2 and Disk3) containing
installation files.

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

$ cd Disk1
$ DISPLAY=<Any X-Window Host>:0.0
$ export DISPLAY
$ ./runInstaller

Answer the questions in the Installer, we use the following install directories

Inventory Location: /opt/oracle/product/oraInventory
Oracle Universal Installer in: /opt/oracle/product/oui
Java Runtime Environment in: /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /etc/oratab

VEN1:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initVEN1.sql in $ORACLE_HOME/dbs, or create
a symbolic-Link from
$ORACLE_HOME/dbs to your Location.

$ cd $ORACLE_HOME/dbs
$ ln -s /home/oracle/config/9.2.0/initVEN1.ora initVEN1.ora
$ ls -l

initVEN1.ora -> /home/oracle/config/9.2.0/initVEN1.ora


First start the Instance, just to test your initVEN1.ora file for correct syntax
and system resources.

$ cd /home/oracle/config/9.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the database

SQL> @initVEN1.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initVEN1.log

Create your own DB-Create Script (optional)

You can generate your own DB-Create Script using the Tool: $ORACLE_HOME/bin/dbca

Start Listener

$ lsnrctl start LSNRVEN1

Automatically Start / Stop the Database

To start the Database automatically at boot time, create or use our startup
scripts dbora and lsnrora (included in
ora_config_linux_901.tar.gz), which must be installed in /etc/rc.d/init.d. Create
symbolic links from the
startup directories in /etc/rc.d (e.g. /etc/rc.d/rc2.d).

lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Setup Kernel Parameters ( if necessary )

Oracle9i uses UNIX resources such as shared memory, swap space, and semaphores
extensively
for interprocess communication. If your kernel parameter settings are insufficient
for Oracle9i,
you will experience problems during installation and instance startup.
The greater the amount of data you can store in memory, the faster your database
will operate. In addition,
by maintaining data in memory, the UNIX kernel reduces disk I/O activity.

Use the ipcs command to obtain a list of the system's current shared memory and
semaphore segments,
and their identification number and owner.
You can modify the kernel parameters by using the /proc file system.

To modify kernel parameters using the /proc file system:

1. Log in as root user.

2. Change to the /proc/sys/kernel directory.


3. Review the current semaphore parameter values in the sem file using the cat or
more utility

# cat sem

The output will list, in order, the values for the SEMMSL, SEMMNS, SEMOPM, and
SEMMNI parameters.
The following example shows how the output will appear.

250 32000 32 128

In the preceding example, 250 is the value of the SEMMSL parameter, 32000 is the
value of the SEMMNS parameter, 32
is the value of the SEMOPM parameter, and 128 is the value of the SEMMNI
parameter.

4. Modify the parameter values using the following command:

# echo SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value > sem

In the preceding command, all parameters must be entered in order.

5. Review the current shared memory parameters using the cat or more utility.

# cat shared_memory_parameter

In the preceding example, the shared_memory_parameter is either the SHMMAX or
SHMMNI parameter. The parameter name must be
entered in lowercase letters.

6. Modify the shared memory parameter using the echo utility. For example, to
modify the SHMMAX parameter, enter the following:

# echo 2147483648 > shmmax

7. Write a script to initialize these values during system startup and include the
script in your system init files.
Refer to the following table to determine if your system shared memory and
semaphore kernel parameters are set high enough for Oracle9i.
The parameters in the following table are the minimum values required to run
Oracle9i with a single database instance.
You can put the initialization in the file /etc/rc.d/rc.local

# Setup Kernel Parameters for Oracle 9i

echo 250 32000 100 128 > /proc/sys/kernel/sem
echo 2147483648 > /proc/sys/kernel/shmmax
echo 4096 > /proc/sys/kernel/shmmni

Install Oracle Options (optional)

You may want to install the following Options:

Oracle JVM
Oracle XML
Oracle Spatial
Oracle Ultra Search
Oracle OLAP
Oracle Data Mining
Example Schemas
Run the following script install_options.sh to enable these options in the
database. Before running this script, adjust
the initSID.ora parameters as follows for the build process. After this, you can
reset the parameters to smaller values.

parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000

$ ./install_options.sh

Download Scripts for RedHat Linux 7.2

These scripts can be used as templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH must
be adjusted for your own environment. Besides this, you should check the
initSID.ora parameters for your database (size, archivelog, ...).

====================
7.2.Article 2:
====================

Installing Oracle9i (9.2.0.5.0) on Red Hat Linux (Fedora Core 2)

by Jeff Hunter, Sr. Database Administrator

--------------------------------------------------------------------------------

Contents

Overview
Swap Space Considerations
Configuring Shared Memory
Configuring Semaphores
Configuring File Handles
Create Oracle Account and Directories
Configuring the Oracle Environment
Configuring Oracle User Shell Limits
Downloading / Unpacking the Oracle9i Installation Files
Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)
Install the Oracle 9.2.0.4.0 RDBMS Software
Install the Oracle 9.2.0.5.0 Patchset
Post Installation Steps
Creating the Oracle Database

--------------------------------------------------------------------------------

Overview

The following article is a summary of the steps required to successfully install
the Oracle9i (9.2.0.4.0) RDBMS software on Red Hat Linux Fedora Core 2. Also
included in this article is a detailed overview for applying the Oracle9i
(9.2.0.5.0) patchset. Keep in mind the following assumptions throughout this
article:

When installing Red Hat Linux Fedora Core 2, I install ALL components.
(Everything). This makes it easier than trying to troubleshoot missing software
components.

As of March 26, 2004, Oracle includes the Oracle9i RDBMS software with the
9.2.0.4.0 patchset already included. This will save considerable time since the
patchset does not have to be downloaded and installed. We will, however, be
applying the 9.2.0.5.0 patchset.

Although it is not required, it is recommend to apply the 9.2.0.5.0 patchset.

The post installation section includes steps for configuring the Oracle Networking
files, configuring the database to start and stop when the machine is cycled, and
other miscellaneous tasks.

Finally, at the end of this article, we will be creating an Oracle 9.2.0.5.0
database named ORA920 using supplied scripts.

--------------------------------------------------------------------------------

Swap Space Considerations

Ensure enough swap space is available.

Installing Oracle9i requires a minimum of 512MB of memory.
(An inadequate amount of swap during the installation will cause the Oracle
Universal Installer to either "hang" or "die")

To check the amount of memory / swap you have allocated, type either:
# free

- OR -

# cat /proc/swaps

- OR -

# cat /proc/meminfo | grep MemTotal

If you have less than 512MB of memory (between your RAM and SWAP), you can add
temporary swap space by creating a temporary swap file. This way you do not have
to use a raw device or even more drastic, rebuild your system.
As root, make a file that will act as additional swap space, let's say about
300MB:
# dd if=/dev/zero of=tempswap bs=1k count=300000

Now we should change the file permissions:


# chmod 600 tempswap

Finally we format the "partition" as swap and add it to the swap space:
# mke2fs tempswap
# mkswap tempswap
# swapon tempswap
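
To confirm the temporary swap file is active, re-check the swap summary with the
standard tools:

# swapon -s
# free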

--------------------------------------------------------------------------------

Configuring Shared Memory

The Oracle RDBMS uses shared memory in UNIX to allow processes to access common
data structures and data.
These data structures and data are placed in a shared memory segment to allow
processes the fastest form of
Interprocess Communications (IPC) available. The speed is primarily a result of
processes not needing to copy
data between each other to share common data and structures - relieving the kernel
from having to get involved.
Oracle uses shared memory in UNIX to hold its Shared Global Area (SGA). This is an
area of memory within
the Oracle instance that is shared by all Oracle backup and foreground processes.
It is important to size
the SGA to efficiently hold the database buffer cache, shared pool, redo log
buffer as well as other shared
Oracle memory structures. Inadequate sizing of the SGA can have a dramatic
decrease in performance of the database.

To determine all shared memory limits you can use the ipcs command. The following
example shows the values
of my shared memory limits on a fresh RedHat Linux install using the defaults:

# ipcs -lm

------ Shared Memory Limits --------


max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1
Let's continue this section with an overview of the parameters that are
responsible for configuring the
shared memory settings in Linux.
SHMMAX

The SHMMAX parameter is used to define the maximum size (in bytes) for a shared
memory segment and should be set
large enough for the largest SGA size. If the SHMMAX is set incorrectly (too low),
it is possible that the
Oracle SGA (which is held in shared segments) may be limited in size. An
inadequate SHMMAX setting would result
in the following:
ORA-27123: unable to attach to shared memory segment
You can determine the value of SHMMAX by performing the following:

# cat /proc/sys/kernel/shmmax
33554432
As you can see from the output above, the default value for SHMMAX is 32MB. This
is often too small to configure the Oracle SGA. I generally set the SHMMAX
parameter to 2GB.
NOTE: With a 32-bit Linux operating system, the default maximum size of the SGA is
1.7GB; an SGA that large requires a correspondingly large SHMMAX, which is why I
generally set the SHMMAX parameter to 2GB.
On a 32-bit Linux operating system, without Physical Address Extension (PAE), the
physical memory is divided into a 3GB user space and a 1GB kernel space. It is
therefore possible to create a 2.7GB SGA, but you will need to make several changes
at the Linux operating system level by changing the mapped base. In the case of a
2.7GB SGA, you would want to set the SHMMAX parameter to 3GB.

Keep in mind that the maximum value of the SHMMAX parameter is 4GB.

To change the value of SHMMAX, you can use any of the following three methods:

This is the method I use most often. This method sets SHMMAX on startup by
inserting the following kernel parameter in the /etc/sysctl.conf startup file:
# echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

If you wanted to dynamically alter the value of SHMMAX without rebooting the
machine, you can make this change directly to the /proc file system. This command
can be made permanent by putting it into the /etc/rc.local startup file:
# echo "2147483648" > /proc/sys/kernel/shmmax

You can also use the sysctl command to change the value of SHMMAX:
# sysctl -w kernel.shmmax=2147483648
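
Whichever method you use, entries placed in /etc/sysctl.conf can also be loaded
immediately, without a reboot, by having sysctl re-read the file:

# sysctl -p
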
SHMMNI

We now look at the SHMMNI parameters. This kernel parameter is used to set the
maximum number of shared memory segments system wide. The default value for this
parameter is 4096. This value is sufficient and typically does not need to be
changed.
You can determine the value of SHMMNI by performing the following:

# cat /proc/sys/kernel/shmmni
4096
SHMALL

Finally, we look at the SHMALL shared memory kernel parameter. This parameter
controls the total amount of shared memory (in pages) that can be used at one time
on the system. In short, the value of this parameter should always be at least:
ceil(SHMMAX/PAGE_SIZE)
The default size of SHMALL is 2097152 and can be queried using the following
command:
# cat /proc/sys/kernel/shmall
2097152
From the above output, the total amount of shared memory (in bytes) that can be
used at one time on the system is:
SM = (SHMALL * PAGE_SIZE)
= 2097152 * 4096
= 8,589,934,592 bytes
The default setting for SHMALL should be adequate for our Oracle installation.
NOTE: The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can,
however, use bigpages which supports the configuration of larger memory page
sizes.
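
As a worked example of the ceil(SHMMAX/PAGE_SIZE) rule above: with the 2GB
SHMMAX used earlier and 4096-byte pages, SHMALL must be at least
2147483648 / 4096 = 524288 pages, which is comfortably below the 2097152-page
default:

# echo $((2147483648 / 4096))
524288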
--------------------------------------------------------------------------------

Configuring Semaphores

Now that we have configured our shared memory settings, it is time to take care of
configuring our semaphores. A semaphore can be thought of as a counter that is
used to control access to a shared resource. Semaphores provide low level
synchronization between processes (or threads within a process) so that only one
process (or thread) has access to the shared segment, thereby ensuring the
integrity of that shared resource. When an application requests semaphores, it
does so using "sets".
To determine all semaphore limits, use the following:

# ipcs -ls

------ Semaphore Limits --------


max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
You can also use the following command:
# cat /proc/sys/kernel/sem
250 32000 32 128
SEMMSL

The SEMMSL kernel parameter is used to control the maximum number of semaphores
per semaphore set.
Oracle recommends setting SEMMSL to the largest PROCESSES instance parameter setting
in the init.ora file for all databases hosted on the Linux system plus 10. Also,
Oracle recommends setting the SEMMSL to a value of no less than 100.

SEMMNI

The SEMMNI kernel parameter is used to control the maximum number of semaphore
sets on the entire Linux system.
Oracle recommends setting the SEMMNI to a value of no less than 100.

SEMMNS

The SEMMNS kernel parameter is used to control the maximum number of semaphores
(not semaphore sets) on the entire Linux system.
Oracle recommends setting the SEMMNS to the sum of the PROCESSES instance
parameter setting for each database on the system, adding the largest PROCESSES
twice, and then finally adding 10 for each Oracle database on the system. To
summarize:

SEMMNS = sum of PROCESSES setting for each database on the system
         + (2 * [largest PROCESSES setting])
         + (10 * [number of databases on system])
To determine the maximum number of semaphores that can be allocated on a Linux
system, use the following calculation. It will be the lesser of:

SEMMNS -or- (SEMMSL * SEMMNI)
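
As a hypothetical worked example of the SEMMNS formula: a host running two
databases with PROCESSES set to 100 and 150 would need
SEMMNS >= (100 + 150) + (2 * 150) + (10 * 2) = 570.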


SEMOPM

The SEMOPM kernel parameter is used to control the number of semaphore operations
that can be performed per semop system call.
The semop system call (function) provides the ability to do operations for
multiple semaphores with one semop system call. A semaphore set can have at most
SEMMSL semaphores per set, so it is recommended
to set SEMOPM equal to SEMMSL.

Oracle recommends setting the SEMOPM to a value of no less than 100.

Setting Semaphore Kernel Parameters

Finally, we see how to set all semaphore parameters using several methods. In the
following, the only parameter I care about changing (raising) is SEMOPM. All other
default settings should be sufficient for our example installation.
This is the method I use most often. This method sets all semaphore kernel parameters
on startup by inserting the following kernel parameter in the /etc/sysctl.conf
startup file:
# echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf

If you wanted to dynamically alter the value of all semaphore kernel parameters
without rebooting the machine, you can make this change directly to the /proc file
system. This command can be made permanent by putting it into the /etc/rc.local
startup file:
# echo "250 32000 100 128" > /proc/sys/kernel/sem

You can also use the sysctl command to change the value of all semaphore settings:

# sysctl -w kernel.sem="250 32000 100 128"

--------------------------------------------------------------------------------

Configuring File Handles

When configuring our Linux database server, it is critical to ensure that the
maximum number of file handles is large enough. The setting for file handles
designate the number of open files that you can have on the entire Linux system.
Use the following command to determine the maximum number of file handles for the
entire system:

# cat /proc/sys/fs/file-max
103062
Oracle recommends that the file handles for the entire system be set to at least
65536. In most cases, the default for Red Hat Linux is 103062. I have seen others
(Red Hat Linux AS 2.1, Fedora Core 1, and Red Hat version 9) that will only
default to 32768. If this is the case, you will want to increase this value to at
least 65536.

This is the method I use most often. This method sets the maximum number of file
handles (using the kernel parameter file-max) on startup by inserting the
following kernel parameter in the /etc/sysctl.conf startup file:
# echo "fs.file-max=65536" >> /etc/sysctl.conf

If you wanted to dynamically alter the value of the file-max kernel parameter
without rebooting the machine, you can make this change directly to the /proc file
system. This command can be made permanent by putting it into the /etc/rc.local
startup file:
# echo "65536" > /proc/sys/fs/file-max

You can also use the sysctl command to change the maximum number of file handles:
# sysctl -w fs.file-max=65536
NOTE: It is also possible to query the current usage of file handles using the
following command:
# cat /proc/sys/fs/file-nr
1140 0 103062
In the above example output, here is an explanation of the three values from the
file-nr command:
Total number of allocated file handles.
Total number of file handles currently being used.
Maximum number of file handles that can be allocated. This is essentially the
value of file-max - (see above).

NOTE: If you need to increase the value in /proc/sys/fs/file-max, then make sure
that the ulimit is set properly. Usually for 2.4.20 it is set to unlimited. Verify
the ulimit setting by issuing the ulimit command:
# ulimit
unlimited

--------------------------------------------------------------------------------

Create Oracle Account and Directories

Now let's create the Oracle UNIX account and all required directories:
Login as the root user id.
% su -
Create directories.
# mkdir -p /u01/app/oracle
# mkdir -p /u03/app/oradata
# mkdir -p /u04/app/oradata
# mkdir -p /u05/app/oradata
# mkdir -p /u06/app/oradata
Create the UNIX Group for the Oracle User Id.
# groupadd -g 115 dba
Create the UNIX User for the Oracle Software.
# useradd -u 173 -c "Oracle Software Owner" -d /u01/app/oracle -g "dba" -m -s
/bin/bash oracle
# passwd oracle
Changing password for user oracle.
New UNIX password: ************
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: ************
passwd: all authentication tokens updated successfully.
Change ownership of all Oracle Directories to the Oracle UNIX User.
# chown -R oracle:dba /u01
# chown -R oracle:dba /u03
# chown -R oracle:dba /u04
# chown -R oracle:dba /u05
# chown -R oracle:dba /u06
Oracle Environment Variable Settings
NOTE: Ensure that you set the environment variable LD_ASSUME_KERNEL=2.4.1. Failing to
set the LD_ASSUME_KERNEL parameter will cause
the Oracle Universal Installer to hang!

Verify all mount points. Please keep in mind that all of the following mount
points can simply be directories if you only have one hard drive.
For our installation, we will be using four mount points (or directories) as
follows:

/u01 : The Oracle RDBMS software will be installed to /u01/app/oracle.

/u03 : This mount point will contain the physical Oracle files:

Control File 1
Online Redo Log File - Group 1 / Member 1
Online Redo Log File - Group 2 / Member 1
Online Redo Log File - Group 3 / Member 1

/u04 : This mount point will contain the physical Oracle files:

Control File 2
Online Redo Log File - Group 1 / Member 2
Online Redo Log File - Group 2 / Member 2
Online Redo Log File - Group 3 / Member 2

/u05 : This mount point will contain the physical Oracle files:

Control File 3
Online Redo Log File - Group 1 / Member 3
Online Redo Log File - Group 2 / Member 3
Online Redo Log File - Group 3 / Member 3

/u06 : This mount point will contain the all physical Oracle data files.

This will be one large RAID 0 stripe for all Oracle data files.
All tablespaces including System, UNDO, Temporary, Data, and Index.

--------------------------------------------------------------------------------

Configuring the Oracle Environment

After configuring the Linux operating environment, it is time to setup the Oracle
UNIX User ID for the installation of the Oracle RDBMS Software.
Keep in mind that the following steps need to be performed by the oracle user id.
Before delving into the details for configuring the Oracle User ID, I packaged an
archive of shell scripts and configuration files to assist
with the Oracle preparation and installation. You should download the archive
"oracle_920_installation_files_linux.tar" as the Oracle User ID
and place it in his HOME directory.

Login as the oracle user id.


% su - oracle

Unpackage the contents of the oracle_920_installation_files_linux.tar archive.


After extracting the archive, you will have a new directory
called oracle_920_installation_files_linux that contains all required files. The
following set of commands describes how to extract the file
and where to copy/extract all required files:
$ id
uid=173(oracle) gid=115(dba) groups=115(dba)

$ pwd
/u01/app/oracle

$ tar xvf oracle_920_installation_files_linux.tar


oracle_920_installation_files_linux/
oracle_920_installation_files_linux/admin.tar
oracle_920_installation_files_linux/common.tar
oracle_920_installation_files_linux/dbora
oracle_920_installation_files_linux/dbshut
oracle_920_installation_files_linux/.bash_profile
oracle_920_installation_files_linux/dbstart
oracle_920_installation_files_linux/ldap.ora
oracle_920_installation_files_linux/listener.ora
oracle_920_installation_files_linux/sqlnet.ora
oracle_920_installation_files_linux/tnsnames.ora
oracle_920_installation_files_linux/crontabORA920.txt

$ cp oracle_920_installation_files_linux/.bash_profile ~/.bash_profile

$ tar xvf oracle_920_installation_files_linux/admin.tar

$ tar xvf oracle_920_installation_files_linux/common.tar

$ . ~/.bash_profile
.bash_profile executed
$

--------------------------------------------------------------------------------

Configuring Oracle User Shell Limits

Many of the Linux shells (including BASH) implement certain controls over certain
critical resources like the number of file descriptors that
can be opened and the maximum number of processes available to a user's session.
In most cases, you will not need to alter any of these shell limits,
but if you find yourself getting errors when creating or maintaining the Oracle
database, you may want to read through this section.
You can use the following command to query these shell limits:

# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
Maximum Number of Open File Descriptors for Shell Session

Let's first talk about the maximum number of open file descriptors for a user's
shell session.
NOTE: Make sure that throughout this section, that you are logged in as the oracle
user account since this is the shell account we want to test!

Ok, you are first going to tell me, "But I've already altered my Linux environment
by setting the system wide kernel parameter /proc/sys/fs/file-max".
Yes, this is correct, but there is still a per user limit on the number of open
file descriptors. This typically defaults to 1024.
To check that, use the following command:

% su - oracle
% ulimit -n
1024
If you wanted to change the maximum number of open file descriptors for a user's
shell session, you could edit the /etc/security/limits.conf as the root account.
For your Linux system, you would add the following lines:
oracle soft nofile 4096
oracle hard nofile 101062
The first line above sets the soft limit, which is the number of file handles (or
open files) that the Oracle user will have after logging in to the shell account.
The hard limit defines the maximum number of file handles (or open files) that are
possible for the user's shell account. If the oracle user account starts to
receive error messages about running out of file handles, then the number of file
handles should be increased, up to the hard limit setting. You can increase the value of
this parameter to 101062 for the current session by using the following:
% ulimit -n 101062
Keep in mind that the above command will only effect the current shell session. If
you were to log out and log back in, the value would be set back to its default
for that shell session.
NOTE: Although you can set the soft and hard file limits higher, it is critical to
understand to never set the hard limit for nofile for your shell account equal to
/proc/sys/fs/file-max. If you were to do this, your shell session could use up all
of the file descriptors for the entire Linux system, which means that the entire
Linux system would run out of file descriptors. At this point, you would not be
able to initiate any new logins since the system would not be able to open any PAM
modules, which are required for login. Notice that I set my hard limit to 101062
and not 103062. In short, I am leaving 2000 spare!

We're not totally done yet. We still need to ensure that pam_limits is configured
in the /etc/pam.d/system-auth file. The steps defined below should already be
performed with a normal Red Hat Linux installation, but should still be validated!

The PAM module will read the /etc/security/limits.conf file. You should have an
entry in the /etc/pam.d/system-auth file as follows:

session required /lib/security/$ISA/pam_limits.so


I typically validate that my /etc/pam.d/system-auth file has the following two
entries:
session required /lib/security/$ISA/pam_limits.so
session required /lib/security/$ISA/pam_unix.so
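
A quick grep confirms that both entries are present:

# grep pam_limits /etc/pam.d/system-auth
session required /lib/security/$ISA/pam_limits.so
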
Finally, let's test our new settings for the maximum number of open file
descriptors for the oracle shell session. Logout and log back in as the oracle
user account then run the following commands.

Let's first check all current soft shell limits:

$ ulimit -Sa
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
Finally, let's check all current hard shell limits:
$ ulimit -Ha
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 101062
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
The soft limit is now set to 4096 while the hard limit is now set to 101062.
NOTE: There may be times when you cannot get access to the root user account to
change the /etc/security/limits.conf file. You can set this value in the user's
login script for the shell as follows:
su - oracle
cat >> ~oracle/.bash_profile << EOF
ulimit -n 101062
EOF

NOTE: For this section, I used the BASH shell. The session values will not always
be the same for other shells.
Maximum Number of Processes for Shell Session

This section is very similar to the previous section, "Maximum Number of Open File
Descriptors for Shell Session" and deals with the same concept of soft limits and
hard limits as well as configuring pam_limits. For most default Red Hat Linux
installations, you will not need to be concerned with the maximum number of user
processes as this value is generally high enough!
NOTE: For this section, I used the BASH shell. The session values will not always
be the same for other shells.

Let's start by querying the current limit of the maximum number of processes for
the oracle user:

% su - oracle
% ulimit -u
16383
If you wanted to change the soft and hard limits for the maximum number of
processes for the oracle user, (and for that matter, all users), you could edit
the /etc/security/limits.conf as the root account. For your Linux system, you
would add the following lines:
oracle soft nproc 2047
oracle hard nproc 16384
NOTE: There may be times when you cannot get access to the root user account to
change the /etc/security/limits.conf file. You can set this value in the user's
login script for the shell as follows:
su - oracle
cat >> ~oracle/.bash_profile << EOF
ulimit -u 16384
EOF

Miscellaneous Notes

To check all current soft shell limits, enter the following command:
$ ulimit -Sa
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
To check maximum hard limits, enter the following command:
$ ulimit -Ha
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 101062
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
The file (blocks) value should be multiplied by 512 to obtain the maximum file
size imposed by the shell. A value of unlimited is the operating system default
and typically has a maximum value of 1 TB.
NOTE: Oracle9i Release 2 (9.2.0) includes native support for files greater than 2
GB. Check your shell to determine whether it will impose a limit.

--------------------------------------------------------------------------------

Downloading / Unpacking the Oracle9i Installation Files

Most of the actions throughout the rest of this document should be done as the
"oracle" user account unless otherwise noted. If you are not logged in as the
"oracle" user account, do so now.

Download Oracle9i from Oracle's OTN Site.


(If you do not currently have an account with Oracle OTN, you will need to create
one. This is a FREE account!)
http://www.oracle.com/technology/software/products/oracle9i/htdocs/linuxsoft.html

Download the following files to a temporary directory (i.e.
/u01/app/oracle/orainstall):

ship_9204_linux_disk1.cpio.gz (538,906,295 bytes) (cksum - 245082434)


ship_9204_linux_disk2.cpio.gz (632,756,922 bytes) (cksum - 2575824107)
ship_9204_linux_disk3.cpio.gz (296,127,243 bytes) (cksum - 96915247)

Directions to extract the files.

Run "gunzip <filename>" on all the files.


% gunzip ship_9204_linux_disk1.cpio.gz
Extract the cpio archives with the command: "cpio -idmv < <filename>"
% cpio -idmv < ship_9204_linux_disk1.cpio
NOTE: Some browsers will uncompress the files but leave the extension the same
(gz) when downloading. If the above steps do not work for you, try skipping step 1
and go directly to step 2 without changing the filename.
% cpio -idmv < ship_9204_linux_disk1.cpio.gz

You should now have three directories called "Disk1, Disk2 and Disk3" containing
the Oracle9i Installation files:
/Disk1
/Disk2
/Disk3

--------------------------------------------------------------------------------

Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)

The following RPMs, all of which are available on the Red Hat Fedora Core 2 CDs,
will need to be updated as per the steps described in Metalink Note: 252217.1 -
"Requirements for Installing Oracle 9iR2 on RHEL3".
All of these packages will need to be installed as the root user:

From Fedora Core 2 / Disk #1

# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh libpng-1.2.2-22.i386.rpm
From Fedora Core 2 / Disk #2
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh gnome-libs-1.4.1.2.90-40.i386.rpm
From Fedora Core 2 / Disk #3
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh compat-libstdc++-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-db-4.1.25-2.1.i386.rpm
# rpm -Uvh compat-gcc-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-gcc-c++-7.3-2.96.126.i386.rpm
# rpm -Uvh openmotif21-2.1.30-9.i386.rpm
# rpm -Uvh pdksh-5.2.14-24.i386.rpm
From Fedora Core 2 / Disk #4
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh sysstat-5.0.1-2.i386.rpm
Set gcc296 and g++296 in PATH
Put gcc296 and g++296 first in $PATH variable by creating the following symbolic
links:
# mv /usr/bin/gcc /usr/bin/gcc323
# mv /usr/bin/g++ /usr/bin/g++323
# ln -s /usr/bin/gcc296 /usr/bin/gcc
# ln -s /usr/bin/g++296 /usr/bin/g++
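
Afterwards, gcc should identify itself as the 2.96 release; if it still reports
the 3.2.3 version, re-check the symbolic links:

# gcc --version
2.96
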
Check hostname
Make sure the hostname command returns a fully qualified host name by amending the
/etc/hosts file if necessary:
# hostname
Install the 3006854 patch:
The Oracle / Linux Patch 3006854 can be downloaded from Oracle Metalink.
# unzip p3006854_9204_LINUX.zip
# cd 3006854
# sh rhel3_pre_install.sh

--------------------------------------------------------------------------------

Install the Oracle 9.2.0.4.0 RDBMS Software

As the "oracle" user account:

Set your DISPLAY variable to a valid X Windows display.


% DISPLAY=<Any X-Windows Host>:0.0
% export DISPLAY

NOTE: If you forgot to set the DISPLAY environment variable and you get the
following error:

Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server

you will then need to execute the following command to get "runInstaller" working
again:
% rm -rf /tmp/OraInstall

If you don't do this, the Installer will hang without giving any error messages.
Also make sure that "runInstaller" has stopped running in the background. If not,
kill it.

Change directory to the Oracle installation files you downloaded and extracted.
Then run: runInstaller.

$ su - oracle
$ cd orainstall/Disk1
$ ./runInstaller
Initializing Java Virtual Machine from /tmp/OraInstall2004-05-02_08-45-13PM/jre/bin/java. Please wait...
Screen Name Response
Welcome Screen: Click "Next"
Inventory Location: Click "OK"
UNIX Group Name: Use "dba"
Root Script Window: Open another window, login as the root userid, and run
"/tmp/orainstRoot.sh". When the script has completed, return to the dialog from
the Oracle Installer and hit Continue.
File Locations: Leave the "Source Path" at its default setting. For the
Destination name, I like to use "OraHome920". You can leave the Destination path
at its default value, which should be "/u01/app/oracle/product/9.2.0".
Available Products: Select "Oracle9i Database 9.2.0.4.0" and click "Next"
Installation Types: Select "Enterprise Edition (2.84GB)" and click "Next"
Database Configuration: Select "Software Only" and click "Next"
Summary: Click "Install"

Running root.sh script.


When the "Link" phase is complete, you will be prompted to run the
$ORACLE_HOME/root.sh script as the "root" user account.

Shutdown any started Oracle processes


The Oracle Universal Installer will succeed in starting some Oracle programs, in
particular the Oracle HTTP Server (Apache), the Oracle Intelligent Agent, and
possibly the Oracle TNS Listener. Make sure all programs are shut down before
attempting to continue with installing the Oracle 9.2.0.5.0 patchset:

% $ORACLE_HOME/Apache/Apache/bin/apachectl stop
% agentctl stop

% lsnrctl stop

--------------------------------------------------------------------------------

Install the Oracle 9.2.0.5.0 Patchset

Once you have completed installing of the Oracle9i (9.2.0.4.0) RDBMS software, you
should now apply the 9.2.0.5.0 patchset.
NOTE: The details and instructions for applying the 9.2.0.5.0 patchset in this
article are not absolutely necessary. I provide them here simply as a convenience for
those who do want to apply the latest patchset.

The 9.2.0.5.0 patchset can be downloaded from Oracle Metalink:

Patch Number: 3501955


Description: ORACLE 9i DATABASE SERVER RELEASE 2 - PATCH SET 4 VERSION 9.2.0.5.0
Product: Oracle Database Family
Release: Oracle 9.2.0.5
Select a Platform or Language: Linux x86
Last Updated: 26-MAR-2004
Size: 313M (328923077 bytes)

Use the following steps to install the Oracle10g Universal Installer and then the
Oracle 9.2.0.5.0 patchset.

To start, let's unpack the Oracle 9.2.0.5.0 patchset to a temporary directory:


% cd orapatch
% unzip p3501955_9205_LINUX.zip
% cpio -idmv < 9205_lnx32_release.cpio

Next, we need to install the Oracle10g Universal Installer into the same
$ORACLE_HOME we used to install the Oracle9i RDBMS software.
NOTE: The old Universal Installer that was used to install the Oracle9i
RDBMS software (OUI release 2.2) cannot be used to install the 9.2.0.5.0
patchset and higher!

Starting with the Oracle 9.2.0.5.0 patchset, Oracle requires the use of the
Oracle10g Universal Installer to apply the 9.2.0.5.0 patchset and to perform all
subsequent maintenance operations on the Oracle software $ORACLE_HOME.

Let's get this thing started by installing the Oracle10g Universal Installer. This
must be done by running the runInstaller that is included with the 9.2.0.5.0
patchset we extracted in the above step:

% cd orapatch/Disk1
% ./runInstaller -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking installer requirements...


Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3,
SuSE-7 or SuSE-8
Failed <<<<

>>> Ignoring required pre-requisite failures. Continuing...

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_07-48-15PM. Please wait ...
Oracle Universal Installer, Version
10.1.0.2.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.
Use the following options in the Oracle Universal Installer to install the
Oracle10g OUI:
Screen Name Response
Welcome Screen: Click "Next"
File Locations: The "Source Path" should be pointing to the products.xml file by
default.
For the Destination name, choose the same one you created when installing the
Oracle9i software. The name we used in this article was "OraHome920" and the
destination path should be "/u01/app/oracle/product/9.2.0".

Select a Product to Install: Select "Oracle Universal Installer 10.1.0.2.0" and
click "Next"
Summary: Click "Install"

Exit from the Oracle Universal Installer.

Correct the runInstaller symbolic link bug. (Bug 3560961)


After the installation of Oracle10g Universal Installer, there is a bug that does
NOT update the $ORACLE_HOME/bin/runInstaller symbolic link to point to the new 10g
installation location. Since the symbolic link does not get updated, the
runInstaller command still points to the old installer (2.2) and will be run
instead of the new 10g installer.

To correct this, you will need to manually update the


$ORACLE_HOME/bin/runInstaller symbolic link:

% cd $ORACLE_HOME/bin
% ln -s -f $ORACLE_HOME/oui/bin/runInstaller.sh runInstaller

We now install the Oracle 9.2.0.5.0 patchset by executing the newly installed 10g
Universal Installer:
% cd
% runInstaller -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3,


SuSE-7 or SuSE-8
Failed <<<<

>>> Ignoring required pre-requisite failures. Continuing...


Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_07-
59-30PM. Please wait ...
Oracle Universal Installer, Version
10.1.0.2.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.
Here is an overview of the selections I made while performing the 9.2.0.5.0
patchset install:
Screen Name Response
Welcome Screen: Click "Next"
File Locations: The "Source Path" should be pointing to the products.xml file by
default.
For the Destination name, choose the same one you created when installing the
Oracle9i software. The name we used in this article was "OraHome920" and the
destination path should be "/u01/app/oracle/product/9.2.0".

Select a Product to Install: Select "Oracle 9iR2 Patchsets 9.2.0.5.0" and click
"Next"
Summary: Click "Install"

Running root.sh script.


When the Link phase is complete, you will be prompted to run the
$ORACLE_HOME/root.sh script as the "root" user account. Go ahead and run the
root.sh script.

Exit Universal Installer


Exit from the Universal Installer and continue on to the Post Installation section
of this article.

--------------------------------------------------------------------------------

Post Installation Steps

After applying the Oracle 9.2.0.5.0 patchset, we should perform several
miscellaneous tasks like configuring the Oracle Networking files and setting up
startup and shutdown scripts for when the machine is cycled.
Configuring Oracle Networking Files:
I already included sample configuration files (contained in the
oracle_920_installation_files_linux.tar file) that can be simply copied to their
proper location and started. Change to the oracle HOME directory and copy the
files as follows:

% cd
% cd oracle_920_installation_files_linux
% cp ldap.ora $ORACLE_HOME/network/admin/
% cp tnsnames.ora $ORACLE_HOME/network/admin/
% cp sqlnet.ora $ORACLE_HOME/network/admin/
% cp listener.ora $ORACLE_HOME/network/admin/

% cd
% lsnrctl start

Update /etc/oratab:
The dbora script (below) relies on an entry in the /etc/oratab. Perform the
following actions as the oracle user account:

% echo "ORA920:/u01/app/oracle/product/9.2.0:Y" >> /etc/oratab

Configuring Startup / Shutdown Scripts:


Also included in the oracle_920_installation_files_linux.tar file is a script
called dbora. This script can be used by the init process to startup and shutdown
the database when the machine is cycled. The following tasks will need to be
performed by the root user account:

% su -
# cp /u01/app/oracle/oracle_920_installation_files_linux/dbora /etc/init.d

# chmod 755 /etc/init.d/dbora

# ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc4.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
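
If you prefer to write the init script yourself rather than use the supplied
dbora, the classic pattern looks roughly like the sketch below. The ORA_HOME and
ORA_OWNER values are the ones assumed in this article, and dbstart/dbshut are the
standard scripts shipped in $ORACLE_HOME/bin:

#!/bin/sh
# Minimal dbora-style init script sketch (argument: start|stop).
ORA_HOME=/u01/app/oracle/product/9.2.0
ORA_OWNER=oracle
case "$1" in
'start')
        # Start the TNS listener first, then every instance flagged Y in /etc/oratab.
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart"
        ;;
'stop')
        # Stop the databases first, then the listener.
        su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
        ;;
esac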

--------------------------------------------------------------------------------

Creating the Oracle Database

Finally, let's create an Oracle9i database. This can be done using scripts that I
already included with the oracle_920_installation_files_linux.tar download. The
scripts are included in the ~oracle/admin/ORA920/create directory. To create the
database, perform the following steps:
% su - oracle
% cd admin/ORA920/create
% ./RUN_CRDB.sh
After starting the RUN_CRDB.sh, there will be no screen activity until the
database creation is complete. You can, however, bring up a new console window to
the Linux database server as the oracle user account, navigate to the same
directory you started the database creation from, and tail the crdb.log log file.
$ telnet linux3
...
Fedora Core release 2 (Tettnang)
Kernel 2.6.5-1.358 on an i686
login: oracle
Password: xxxxxx
.bash_profile executed
[oracle@linux3 oracle]$ cd admin/ORA920/create
[oracle@linux3 create]$ tail -f crdb.log

=====================================
8. Install Oracle 9.2.0.2 on OpenVMS:
=====================================

VMS:
====
Using OUI to install Oracle9i Release 2 on an OpenVMS System

We have a PC running Xcursion and a 16-processor GS1280 with the 2 built-in disks.
In the examples we booted from disk DKA0:
The Oracle account is on disk DKA100. Oracle and the database will be installed on
DKA100.
Install disk MUST be ODS-5.

Installation uses the 9.2 downloaded from the Oracle website. It comes in a Java
JAR file.
Oracle ships a JRE with its product. However, you will have to install Java on
OpenVMS so you can unpack
the 9.2 JAR file that comes from the Oracle website.
Unpack the JAR file as described on the Oracle website. This will create two .BCK
files.

Follow the instructions in the VMS_9202_README.txt file on how to restore the 2
backup save sets.

When the two backup save sets files are restored, you should end up with two
directories:

[disk1] directory
[disk2] directory

These directories will be in the root of a disk. In this example they are in the
root of DKA100.
The OUI requires X-Windows. If the Alpha system you are using does not have a
graphic head,
use a PC with an X-Windows terminal such as Xcursion.

During this install we discovered a problem:


Instructions tell you to run

@DKA100:[disk1]runinstaller.

This will not work because the RUNINSTALLER.COM file is not in the root of
DKA100:[disk1].
You must first copy RUNINSTALLER.COM from the dka100:[disk1.000000] directory into
dka100:[disk1]:

$ Copy dka100:[disk1.000000]runinstaller.com dka100:[disk1]

From a terminal window execute:

@DKA100:[disk1]runinstaller

- Oracle Installer starts


Start the installation
Click Next to start the installation.

- Assign name and directory structure for the Oracle Home ORACLE_HOME

Assign a name for your Oracle home.


Assign the directory structure for the home, for example

Ora_home
Dka100:[oracle.oracle9]

This is where the OUI will install Oracle.


The OUI will create the directories as necessary

- Select product to install


Select Database.
Click Next.
- Select type of installation
Select Enterprise Edition (or Standard Edition or Custom).
Click Next.
- Enable RAC
Select No.
Click Next.
- Database summary
View list of products that will be installed.
Click Install.
- Installation begins
Installation takes from 45 minutes to an hour.
Installation ends
Click Exit.

Oracle is now installed in DKA100:[oracle.oracle9].


To create the first database, you must first set up Oracle logicals.
To do this use a terminal and execute

@[.oracle9]orauser .

The tool to create and manage databases is DBCA.


On the terminal, type DBCA to launch the Database Assistant.
Welcome to Database Configuration Assistant
DBCA starts.
Click Next.
Select an operation
Select Create a Database.
Click Next.
Select a template
Select New Database.
Click Next.
Enter database name and SID
Enter the name of the database and Oracle System Identifier (SID):
In this example, the database name is DB9I.
The SID is DB9I1.
Click Next.
Select database features
Select which demo databases are installed.
In the example, we selected all possible databases.
Click Next.
Select default mode
Select the mode in which you want your database to operate by default.
In the example, we selected Shared Server Mode.
Click Next.
Select memory
In the example, we selected the default.
Click Next.
Specify database storage parameters
Select the device and directory.
Use the UNIX-style device syntax.
For example, DKA100:[oracle.oracle9.database] would be:

/DKA100/oracle/oracle9/database/

In the example, we kept the default settings.


Click Next.

Select database creation options


Creating a template saves time when creating a database.
Click Finish.
Create a template
Click OK.
Creating and starting Oracle Instance
The database builds.
If it completes successfully, click Exit.
If it does not complete successfully, build it again.
Running the database
Enter 'show system' to see the Oracle database up and running.
Set up some files to start and stop the database.
Example of a start file
This command sets the logicals to manage the database:

$ @dka100:[oracle.oracle9]orauser db9i1

The next line starts the Listener (needed for client connects).
The final lines start the database.
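
A sketch of what those remaining lines might look like, assuming the orauser
procedure has defined the lsnrctl and sqlplus DCL symbols (the listener name is
whatever you configured; lsnrctl starts the default LISTENER when none is given):

$ lsnrctl start
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
SQL> exit
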
Stop database example
Example of how to stop the database.
Test database server
Use the Enterprise Manager console to test the database server.
Oracle Enterprise Manager
Enter address of server and SID.
Name the server.
Click OK.
Databases connect information
Select database.
Enter system account and password.
Change connection box to 'AS SYSDBA'.
Click OK.
Open database
Database is opened and exposed.
Listener
Listener automatically picks up the SID from the database.
Start Listener before database and the SID will display in the Listener.
If you start the database before the Listener, the SID may not appear immediately.
To see if the SID is registered in the Listener, enter:

$lsnrctl stat

Alter a user
User is altered:

SQL> alter user oe identified by oe account unlock;


SQL> exit

Preferred method is to use the Enterprise Manager Console.


==================================================
9. Installation of Oracle 9i on AIX and other UNIX
==================================================

AIX:
====

9.1 Installation of Oracle 9i on AIX

Doc ID: Note:201019.1 Content Type: TEXT/PLAIN


Subject: AIX: Quick Start Guide - 9.2.0 RDBMS Installation Creation Date:
25-JUN-2002
Type: REFERENCE Last Revision Date: 14-APR-2004
Status: PUBLISHED
Quick Start Guide
Oracle9i Release 2 (9.2.0) RDBMS Installation
AIX Operating System

Purpose
=======

This document is designed to be a quick reference that can be used when
installing Oracle9i Release 2 (9.2.0) on an AIX platform. It is NOT designed
to replace the Installation Guide or other documentation. A familiarity
with the AIX Operating System is assumed. If more detailed information is
needed, please see the Appendix at the bottom of this document for additional
resources.

Each step should be done in the order that it is listed. These steps are the
bare minimum that is necessary for a typical install of the Oracle9i RDBMS.

Verify OS version is certified with the RDBMS version

The following steps are required to verify your version of the AIX operating
system is certified with the version of the RDBMS (Oracle9i Release 2 (9.2.0)):

1. Point your web browser to http://metalink.oracle.com.
2. Click the "Certify & Availability" button near the left.
3. Click the "Certifications" button near the top middle.
4. Click the "View Certifications by Platform" link.
5. Select "IBM RS/6000 AIX" and click "Submit".
6. Select Product Group "Oracle Server" and click "Submit".
7. Select Product "Oracle Server - Enterprise Edition" and click "Submit".
8. Read any general notes at the top of the page.
9. Select "9.2 (9i) 64-bit" and click "Submit".

The "Status" column displays the certification status. The links in the
"Addt'l Info" and "Install Issue" columns may contain additional information
relevant to a given version. Note that if patches are listed under one of
these links, your installation is not considered certified unless you apply
them. The "Addt'l Info" link also contains information about available
patchsets. Installation of patchsets is not required to be considered
certified, but they are highly recommended.

Pre-Installation Steps for the System Administrator
====================================================

The following steps are required to verify your operating system meets minimum
requirements for installation, and should be performed by the root user. For
assistance with system administration issues, please contact your system
administrator or operating system vendor.

Use these steps to manually check the operating system requirements before
attempting to install Oracle RDBMS software, or you may choose to use the
convenient "Unix InstallPrep script" which automates these checks for you. For
more information about the script, including download information, please
review the following article:

Note:189256.1 UNIX: Script to Verify Installation Requirements for
Oracle 9.x version of RDBMS

The InstallPrep script currently does not check requirements for AIX5L systems.

The Following Steps Need to be Performed by the Root User:

1. Configure Operating System Resources:

Ensure that the system has at least the following resources:

- 400 MB in /tmp *
- 256 MB of physical RAM memory
- Two times the amount of physical RAM memory for Swap/Paging space
  (On systems with more than 2 GB of physical RAM memory, the
  requirements for Swap/Paging space can be lowered, but Swap/Paging
  space should never be less than physical RAM memory.)

* You may also redirect /tmp by setting the TEMP environment variable.
This is only recommended in rare circumstances where /tmp cannot be
expanded to meet free space requirements.

2. Create an Oracle Software Owner and Group:

Create an AIX user and group that will own the Oracle software.
(user = oracle, group = dba)

- Use the "smit security" command to create a new group and user

Please ensure that the user and group you use are defined in the local
/etc/passwd (user) and /etc/group (group) files rather than resolved via
a network service such as NIS.

3. Create a Software Mount Point and Datafile Mount Points:

Create a mount point for the Oracle software installation.
(at least 3.5 GB, typically /u01)

Create a second, third, and fourth mount point for the database files.
(typically /u02, /u03, and /u04) Use of multiple mount points is not
required, but is highly recommended for best performance and ease of
recoverability.

4. Ensure that Asynchronous Input Output (AIO) is "Available":

Use the following command to check the current AIO status:

# lsdev -Cc aio

Verify that the status shown is "Available". If the status shown is
"Defined", then change the "STATE to be configured at system restart"
to "Available" by running the following command:

# smit chaio

5. Ensure that the math library is installed on your system:

Use the following command to determine if the math library is installed:

# lslpp -l bos.adt.libm

If this fileset is not installed and "COMMITTED", then you must install
it from the AIX operating system CD-ROM from IBM. With the correct
CD-ROM mounted, run the following command to begin the process to load
the required bos.adt.libm fileset:

# smit install_latest

AIX5L systems also require the following filesets:

# lslpp -l bos.perf.perfstat
# lslpp -l bos.perf.libperfstat

6. Download and install JDK 1.3.1 from IBM. At the time this article was
created, the JDK could be downloaded from the following URL:

http://www.ibm.com/developerworks/java/jdk/aix/index.html

Please contact IBM Support if you need assistance downloading or
installing the JDK.

7. Mount the Oracle CD-ROM:

Mount the Oracle9i Release 2 (9.2.0) CD-ROM using the command:

# mount -rv cdrfs /dev/cd0 /cdrom

8. Run the rootpre.sh script:

NOTE: You must shutdown ALL Oracle database instances (if any) before
running the rootpre.sh script. Do not run the rootpre.sh script
if you have a newer version of an Oracle database already installed
on this system.

Use the following command to run the rootpre.sh script:

# /cdrom/rootpre.sh
Installation Steps for the Oracle User
=======================================

The Following Steps Need to be Performed by the Oracle User:

1. Set Environment Variables

Environment variables should be set in the login script for the oracle
user. If the oracle user's default shell is the C-shell (/usr/bin/csh),
then the login script will be named ".login". If the oracle user's
default shell is the Bourne-shell (/usr/bin/bsh) or the Korn-shell
(/usr/bin/sh or /usr/bin/ksh), then the login script will be named
".profile". In either case, the login script will be located in the
oracle user's home directory ($HOME).

The examples below assume that your software mount point is /u01.

Parameter Value
----------- -----------------------------

ORACLE_HOME /u01/app/oracle/product/9.2.0

PATH /u01/app/oracle/product/9.2.0/bin:/usr/ccs/bin:
/usr/bin/X11:
(followed by any other directories you wish to include)

ORACLE_SID Set this to what you will call your database instance.
(typically 4 characters in length)

DISPLAY <ip-address>:0.0
(review Note:153960.1 for detailed information)
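
For example, a minimal .profile fragment implementing the table above might
look as follows (the SID, display address, and extra PATH entries are
illustrative assumptions):

# Sample .profile fragment for the oracle user (Korn shell)
ORACLE_HOME=/u01/app/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=TEST; export ORACLE_SID                  # illustrative SID
PATH=$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin/X11:$PATH; export PATH
DISPLAY=192.168.1.10:0.0; export DISPLAY            # illustrative address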

2. Set the umask:

Set the oracle user's umask to "022" in your ".profile" or ".login" file.

Example:

umask 022

3. Verify the Environment

Log off and log on as the oracle user to ensure all environment variables
are set correctly. Use the following command to view them:

% env | more

Before attempting to run the Oracle Universal Installer (OUI), verify that
you can successfully run the following command:

% /usr/bin/X11/xclock

If this does not display a clock on your display screen, please review the
following article:

Note:153960.1 FAQ: X Server testing and troubleshooting

4. Start the Oracle Universal Installer and install the RDBMS software:
Use the following commands to start the installer:

% cd /tmp
% /cdrom/runInstaller

Respond to the installer prompts as shown below:

- When prompted for whether rootpre.sh has been run by root, enter "y".
  This should have been done in Pre-Installation step 8 above.

- At the "Welcome Screen", click Next.

- If prompted, enter the directory to use for the "Inventory Location".
  This can be any directory, but is usually not under ORACLE_HOME because
  the oraInventory is shared with all Oracle products on the system.

- If prompted, enter the "UNIX Group Name" for the oracle user (dba).

- At the "File Locations Screen", verify the Destination listed is your
  ORACLE_HOME directory. Also enter a NAME to identify this ORACLE_HOME.
  The NAME can be anything, but is typically "DataServer" and the first
  three digits of the version. For example: "DataServer920"

- At the "Available Products Screen", choose Oracle9i Database, then click
  Next.

- At the "Installation Types Screen", choose Enterprise Edition, then
  click Next.

- If prompted, click Next at the "Component Locations Screen" to accept
  the default directories.

- At the "Database Configuration Screen", choose the configuration
  based on how you plan to use the database, then click Next.

- If prompted, click Next at the "Privileged Operating System Groups
  Screen" to accept the default values (your current OS primary group).

- If prompted, enter the Global Database Name in the format
  "ORACLE_SID.hostname" at the "Database Identification Screen".
  For example: "TEST.AIXhost". The SID entry should be filled in with
  the value of ORACLE_SID. Click Next.

- If prompted, enter the directory where you would like to put datafiles
  at the "Database File Location Screen". Click Next.

- If prompted, select "Use the default character set" (WE8ISO8859P1) at
  the "Database Character Set Screen". Click Next.

- At the "Choose JDK Home Directory", enter the directory where you have
  previously installed the JDK 1.3.1 from IBM. This should have been
  done in Pre-Installation step 6 above.

- At the "Summary Screen", review your choices, then click Install.

The install will begin. Follow instructions regarding running "root.sh"
and any other prompts. When completed, the install will have created a
default database, configured a Listener, and started both for you.

Note: If you are having problems changing CD-ROMs when prompted to do so,
please review the following article:

Note:146566.1 How to Unmount / Eject First Cdrom

Your Oracle9i Release 2 (9.2.0) RDBMS installation is now complete and ready
for use.

Appendix A
==========

Documentation is available from the following resources:

Oracle9i Release 2 (9.2.0) CD-ROM Disk1
----------------------------------------

Mount the CD-ROM, then use a web browser to open the file "index.htm" located
at the top level directory of the CD-ROM. On this CD-ROM you will find the
Installation Guide, Administrator's Reference, and other useful documentation.

Oracle Documentation Center
---------------------------

Point your web browser to the following URL:

http://otn.oracle.com/documentation/content.html

Select the highest version CD-pack displayed to ensure you get the most
up-to-date information.

Unattended install:
-------------------

Note 1:
-------

This note describes how to start the unattended install of patch 9.2.0.5 on AIX
5L, which can be applied
to 9.2.0.2, 9.2.0.3, 9.2.0.4

Shut down the existing Oracle server instance with normal or immediate priority.
For example,
shutdown all instances (cleanly) if running Parallel Server.

Stop all listener, agent and other processes running in or against the
ORACLE_HOME that will have the patch set installation. Run slibclean
(/usr/sbin/slibclean) as root to remove any currently unused modules in
kernel and library memory.

To perform a silent installation requiring no user intervention:


Copy the response file template provided in the response directory where you
unpacked
the patch set tar file.

Edit the values for all fields labeled as <Value Required> according to the
comments and
examples in the template.

Start the Oracle Universal Installer from the directory described in Step 4 which
applies to your situation.
You should pass the full path of the response file template you have edited
locally as the last argument
with your own value of ORACLE_HOME and FROM_LOCATION. The following is an example
of the command:

% ./runInstaller -silent -responseFile full_path_to_your_response_file
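
A minimal sketch of the edited response file values (variable names as found
in typical OUI response file templates; the paths shown are illustrative
assumptions for your site):

UNIX_GROUP_NAME="dba"
FROM_LOCATION="/u01/stage/9205/Disk1/stage/products.xml"
ORACLE_HOME="/u01/app/oracle/product/9.2.0"
ORACLE_HOME_NAME="DataServer920"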

Run the $ORACLE_HOME/root.sh script from a root session. If you are applying the
patch set
in a cluster database environment, the root.sh script should be run in the same
way on both the local node
and all participating nodes.

Note 2:
-------

In order to make an unattended install of 9.2.0.1 on Win2K:

Running Oracle Universal Installer and Specifying a Response File


To run Oracle Universal Installer and specify the response file:

Go to the MS-DOS command prompt.

Go to the directory where Oracle Universal Installer is installed.

Run the appropriate response file. For example,

C:\program files\oracle\oui\install> setup.exe -silent -nowelcome -responseFile filename

Where...    Description
----------  ---------------------------------------------------------------
filename    Identifies the full path of the specific response file.

-silent     Runs Oracle Universal Installer in complete silent mode. The
            Welcome window is suppressed automatically. This parameter is
            optional. If you use -silent, -nowelcome is not necessary.

-nowelcome  Suppresses the Welcome window that appears during installation.
            This parameter is optional.

Note 3:
-------
Unattended install of 9.2.0.5 on Win2K:

To perform a silent installation requiring no user intervention:

Make a copy of the response file template provided in the response directory where
you unzipped
the patch set file.
Edit the values for all fields labeled as <Value Required> according to the
comments and examples
in the template.

Start Oracle Universal Installer release 10.1.0.2 located in the unzipped area of
the patch set.
For example, Disk1\setup.exe. You should pass the full path of the response file
template you have edited
locally as the last argument with your own value of ORACLE_HOME and FROM_LOCATION.
The syntax is as follows:

setup.exe -silent -responseFile ORACLE_BASE\ORACLE_HOME\response_file_path

===============================
9.2 Oracle and UNIX and other OS:
===============================

You have the following options for creating your new Oracle database:

- Use the Database Configuration Assistant (DBCA).

DBCA can be launched by the Oracle Universal Installer, depending upon the type of
install that you select,
and provides a graphical user interface (GUI) that guides you through the creation
of a database.
You can choose not to use DBCA, or you can launch it as a standalone tool at any
time in the future to create a database.

Run DBCA as

% dbca

- Create the database manually from a script.

If you already have existing scripts for creating your database, you can still
create your database manually.
However, consider editing your existing script to take advantage of new Oracle
features. Oracle provides a sample database
creation script and a sample initialization parameter file with the database
software files it distributes,
both of which can be edited to suit your needs.

- Upgrade an existing database.

In all cases, the Oracle software needs to be installed on your host machine.
9.1.1 Operating system dependencies:
------------------------------------

First, determine for this version of Oracle what OS settings must be made,
and whether any patches must be installed.

For example, on Linux, glibc 2.1.3 is needed with Oracle version 8.1.7.
Linux can be quite critical with respect to library versions in combination
with Oracle.

You may also need to adjust shmmax (the maximum size of a shared memory
segment) and similar kernel parameters:

# sysctl -w kernel.shmmax=100000000
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf
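
To verify the current value before and after the change (Linux):

# cat /proc/sys/kernel/shmmax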

Note: the following is general, but it is also derived from an Oracle 8.1.7
installation on Linux Red Hat 6.2.

The 8.1.7 installation also requires the Java JDK 1.1.8, which can be
downloaded from www.blackdown.org.

Download jdk-1.1.8_v3 (jdk118_v3-glibc-2.1.3.tar.bz2) into /usr/local:


tar xvif jdk118_v3-glibc-2.1.3.tar.bz2
ln -s /usr/local/jdk118_v3 /usr/local/java

9.1.2 Environment variables:
----------------------------

Make sure you have the following environment variables set:

ON UNIX:
========

Example 1:
----------

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE             (root of the Oracle software tree)
ORACLE_HOME=$ORACLE_BASE/product/8.1.5; export ORACLE_HOME  (directory containing the instance software)
ORACLE_SID=brdb; export ORACLE_SID                          (name of the current instance)
ORACLE_TERM=xterm, vt100, ansi, or similar; export ORACLE_TERM
ORA_NLSxx=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS  (NLS directory for multi-language data files)
NLS_LANG="Dutch_The Netherlands.WE8ISO8859P1"; export NLS_LANG (specifies the language, territory,
                                                                and character set for client applications)
LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/bin; export PATH

Place these variables in the oracle user's profile file: .profile, .bash_profile, etc.
Example 2:
----------

/dbs01                           -              Db directory 1
/dbs01/app                       -              Constant
/dbs01/app/oracle                $ORACLE_BASE   Oracle base directory
/dbs01/app/oracle/admin          $ORACLE_ADMIN  Oracle admin directory
/dbs01/app/oracle/product        -              Constant
/dbs01/app/oracle/product/817    $ORACLE_HOME   Oracle home directory

# LISTENER.ORA Network Configuration File:


/dbs01/app/oracle/product/817/network/admin/listener.ora

# TNSNAMES.ORA Network Configuration File:


/dbs01/app/oracle/product/817/network/admin/tnsnames.ora

Example 3:
----------

/dbs01/app/oracle Oracle software


/dbs02/oradata database files
/dbs03/oradata database files
..
..
/var/opt/oracle network files
/opt/oracle/admin/bin

Example 4:
----------

Mount point  Device            Size (MB)  Purpose

/            /dev/md/dsk/d1    100        Unix root filesystem
/usr         /dev/md/dsk/d3    1200       Unix usr filesystem
/var         /dev/md/dsk/d4    200        Unix var filesystem
/home        /dev/md/dsk/d5    200        Unix home filesystem
/opt         /dev/md/dsk/d6    4700       Oracle_Home
/u01         /dev/md/dsk/d7    8700       Oracle datafiles
/u02         /dev/md/dsk/d8    8700       Oracle datafiles
/u03         /dev/md/dsk/d9    8700       Oracle datafiles
/u04         /dev/md/dsk/d10   8700       Oracle datafiles
/u05         /dev/md/dsk/d110  8700       Oracle datafiles
/u06         /dev/md/dsk/d120  8700       Oracle datafiles
/u07         /dev/md/dsk/d123  8650       Oracle datafiles

Example 5:
----------

initBENE.ora /opt/oracle/product/8.0.6/dbs
tnsnames.ora /opt/oracle/product/8.0.6/network/admin
listener.ora /opt/oracle/product/8.0.6/network/admin
alert log /var/opt/oracle/bene/bdump
oratab /var/opt/oracle
Example 6:
----------

ORACLE_BASE /u01/app/oracle
ORACLE_HOME $ORACLE_BASE/product/10.1.0/db_1
ORACLE_PATH /u01/app/oracle/product/10.1.0/db_1/bin:.
Note: The period adds the current working directory to the search path.

ORACLE_SID SAL1
ORAENV_ASK NO
SQLPATH /home:/home/oracle:/u01/oracle
TNS_ADMIN $ORACLE_HOME/network/admin
TWO_TASK
  Function: Specifies the default connect identifier to use in the connect
            string. If this environment variable is set, you do not need to
            specify the connect identifier in the connect string. For example,
            if the TWO_TASK environment variable is set to sales, you can
            connect to a database using the CONNECT username/password command
            rather than the CONNECT username/password@sales command.
  Syntax:   Any connect identifier.
  Example:  PRODDB_TCP
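
A short usage sketch (assuming a connect identifier "sales" exists in
tnsnames.ora):

$ TWO_TASK=sales; export TWO_TASK
$ sqlplus scott/tiger        # connects to "sales" without @sales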

To identify the SID and Oracle home directory for the instance that you want
to shut down, enter the following command:

Solaris:

$ cat /var/opt/oracle/oratab

Other operating systems:

$ cat /etc/oratab

ON NT/2000:
===========

SET ORACLE_BASE=G:\ORACLE
SET ORACLE_HOME=G:\ORACLE\ORA81
SET ORACLE_SID=AIRM
SET ORA_NLSxxx=G:\ORACLE\ORA81\ocommon\nls\admin\data
SET NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

ON OpenVMS:
===========

When Oracle is installed on VMS, a root directory is chosen which is pointed
to by the logical name ORA_ROOT. This directory can be placed anywhere on the VMS
system.
The majority of code, configuration files and command procedures are found below
this root directory.

When a new database is created a new directory is created in the root directory
to store database specific configuration files. This directory is called
[.DB_dbname].
This directory will normally hold the system tablespace data file as well
as the database specific startup, shutdown and orauser files.

The Oracle environment for a VMS user is set up by running the appropriate
ORAUSER_dbname.COM file. This sets up the necessary command symbols and logical
names
to access the various ORACLE utilities.
Each database created on a VMS system will have an ORAUSER file in its home
directory
and will be named ORAUSER_dbname.COM, e.g. for a database SALES the file
specification could be:

ORA_ROOT:[DB_SALES]ORAUSER_SALES.COM

To have the environment set up automatically on login, run this command file in
your login.com file.
To access SQLPLUS use the following command with a valid username and password:

$ SQLPLUS username/password

SQLDBA is also available on VMS and can be invoked similarly:


$ SQLDBA username/password

9.1.3 OFA directory structure:
------------------------------

Stick to OFA. An example for database PROD:

/opt/oracle/product/8.1.6
/opt/oracle/product/8.1.6/admin/PROD

/opt/oracle/product/8.1.6/admin/pfile
/opt/oracle/product/8.1.6/admin/adhoc
/opt/oracle/product/8.1.6/admin/bdump
/opt/oracle/product/8.1.6/admin/udump
/opt/oracle/product/8.1.6/admin/adump
/opt/oracle/product/8.1.6/admin/cdump
/opt/oracle/product/8.1.6/admin/create

/u02/oradata/PROD
/u03/oradata/PROD
/u04/oradata/PROD

etc..

Example mount points and disks:
-------------------------------

Mount point  Device            Size (MB)  Purpose

/            /dev/md/dsk/d1    100        Unix root filesystem
/usr         /dev/md/dsk/d3    1200       Unix usr filesystem
/var         /dev/md/dsk/d4    200        Unix var filesystem
/home        /dev/md/dsk/d5    200        Unix home filesystem
/opt         /dev/md/dsk/d6    4700       Oracle_Home
/u01         /dev/md/dsk/d7    8700       Oracle datafiles
/u02         /dev/md/dsk/d8    8700       Oracle datafiles
/u03         /dev/md/dsk/d9    8700       Oracle datafiles
/u04         /dev/md/dsk/d10   8700       Oracle datafiles
/u05         /dev/md/dsk/d110  8700       Oracle datafiles
/u06         /dev/md/dsk/d120  8700       Oracle datafiles
/u07         /dev/md/dsk/d123  8650       Oracle datafiles

9.1.4 Users and groups:
-----------------------

If you want to use OS authentication, the init.ora must contain:

remote_login_passwordfile=none (password file authentication would use EXCLUSIVE)

Required groups in UNIX: group dba. This group must exist in the /etc/group
file; often the group oinstall is needed as well.

groupadd dba
groupadd oinstall
groupadd oper

Now create the oracle user:


adduser -g oinstall -G dba -d /home/oracle oracle

# groupadd dba
# useradd oracle
# mkdir /usr/oracle
# mkdir /usr/oracle/9.0
# chown -R oracle:dba /usr/oracle
# touch /etc/oratab
# chown oracle:dba /etc/oratab

9.1.5 Mount points and disks:
-----------------------------

Create the mount points:

mkdir /opt/u01
mkdir /opt/u02
mkdir /opt/u03
mkdir /opt/u04

For a production environment these should be separate disks.

Now give ownership of these mount points to user oracle and group oinstall:

chown -R oracle:oinstall /opt/u01


chown -R oracle:oinstall /opt/u02
chown -R oracle:oinstall /opt/u03
chown -R oracle:oinstall /opt/u04

directories: drwxr-xr-x oracle dba


files : -rw-r----- oracle dba
: -rw-r--r-- oracle dba

chmod 644 *
chmod u+x filename
chmod ug+x filename

9.1.6 Testing the oracle user:
------------------------------

Log in as user oracle and issue the commands:

$ groups     shows the groups (oinstall, dba)
$ umask      should show 022; if not, put the line "umask 022" in the .profile

umask determines the default mode of a file or directory when it is created.
rwxrwxrwx=777
rw-rw-rw-=666
rw-r--r--=644 which corresponds to umask 022

Now edit the .profile or .bash_profile of the oracle user.
Place the environment variables from section 9.1.2 in the profile.

Log out and back in as user oracle, and test the environment:

%env
%echo $variablename

9.1.7 Oracle Installer for 8.1.x on Linux:
------------------------------------------

Log in as user oracle. Now run the Oracle installer:

Linux:

startx
cd /usr/local/src/Oracle8iR3
./runInstaller

or

Go to install/linux on the CD and run runIns.sh

A graphical setup follows. Answer the questions.

The installer may ask you to run scripts such as:
orainstRoot.sh and root.sh
To run these:

Open a new window:


su root
cd $ORACLE_HOME
./orainstRoot.sh

Database installation on Unix:
------------------------------
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ dbca &

or

$ echo "db1:/usr/oracle/9.0:Y" >> /etc/oratab


$ cd $ORACLE_HOME/dbs
$ cat initdw.ora |sed s/"#db_name = MY_DB_NAME"/"db_name = db1"/|sed
s/#control_files/control_files/ > initdb1.ora
Start and create database :
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ export ORACLE_SID=db1
$ sqlplus /nolog <<!
connect / as sysdba
startup nomount
create database db1
!

This creates a default database with files in $ORACLE_HOME/dbs


Now add the database metadata to actually make it useful:

$ sqlplus /nolog <<!


connect / as sysdba
@?/rdbms/admin/catalog # E.g: /apps/oracle/product/9.2/rdbms/admin
@?/rdbms/admin/catproc
!
Now create a user and give it wide-ranging permissions:
$ sqlplus /nolog <<!
connect / as sysdba
create user myuser identified by password;
grant create session,create any table to myuser;
grant unlimited tablespace to myuser;
!

9.1.8 OS or Password Authentication:
------------------------------------

-- Preparing to Use OS Authentication

To enable authentication of an administrative user using the operating system,
you must do the following:

Create an operating system account for the user. Add the user to the OSDBA or
OSOPER operating system defined groups.
Ensure that the initialization parameter, REMOTE_LOGIN_PASSWORDFILE, is set to
NONE. This is the default value
for this parameter.

A user can be authenticated, enabled as an administrative user, and connected
to a local database by typing one of the following SQL*Plus commands:
CONNECT / AS SYSDBA
CONNECT / AS SYSOPER

For a remote database connection over a secure connection, the user must also
specify the net service name
of the remote database:

CONNECT /@net_service_name AS SYSDBA


CONNECT /@net_service_name AS SYSOPER

OSDBA:
unix : dba
windows: ORA_DBA

OSOPER:
unix : oper
windows: ORA_OPER

-- Preparing to Use Password File Authentication

To enable authentication of an administrative user using password file
authentication, you must do the following:

Create an operating system account for the user.


If not already created, create the password file using the ORAPWD utility:

ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users

Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE.


Connect to the database as user SYS (or as another user with the administrative
privilege).
If the user does not already exist in the database, create the user. Grant the
SYSDBA or SYSOPER
system privilege to the user:
GRANT SYSDBA to scott;

This statement adds the user to the password file, thereby enabling connection AS
SYSDBA.

For example, user scott has been granted the SYSDBA privilege, so he can connect
as follows:

CONNECT scott/tiger AS SYSDBA

9.1.9 Create a 9i database:
---------------------------

Step 1: Decide on Your Instance Identifier (SID)

Step 2: Establish the Database Administrator Authentication Method

Step 3: Create the Initialization Parameter File


Step 4: Connect to the Instance

Step 5: Start the Instance.

Step 6: Issue the CREATE DATABASE Statement

Step 7: Create Additional Tablespaces

Step 8: Run Scripts to Build Data Dictionary Views

Step 9: Run Scripts to Install Additional Options (Optional)

Step 10: Create a Server Parameter File (Recommended)

Step 11: Back Up the Database.

Step 1:
-------

% ORACLE_SID=ORATEST; export ORACLE_SID

Step 2: see above
-----------------

Step 3: init.ora
----------------

Note DB_CACHE_SIZE 10g:

Parameter type  Big integer
Syntax          DB_CACHE_SIZE = integer [K | M | G]
Default value   If SGA_TARGET is set: if the parameter is not specified,
                then the default is 0 (internally determined by the Oracle
                Database); if the parameter is specified, then the
                user-specified value indicates a minimum value for the
                memory pool. If SGA_TARGET is not set, then the default is
                either 48 MB or 4MB * number of CPUs * granule size,
                whichever is greater.
Modifiable      ALTER SYSTEM
Basic           No

Oracle10g Obsolete Oracle SGA Parameters

Using AMM via the sga_target parameter renders several parameters obsolete.
Remember, you can continue to perform manual SGA tuning if you like, but if
you set sga_target, then these parameters will default to zero:

db_cache_size - This parameter determines the number of database block
buffers in the Oracle SGA and is the single most important parameter in
Oracle memory.

db_xk_cache_size - This set of parameters (with x replaced by 2, 4, 8, 16,
or 32) sets the size for specialized areas of the buffer area used to store
data from tablespaces with varying blocksizes. When these are set, they
impose a hard limit on the maximum size of their respective areas.

db_keep_cache_size - This is used to store small tables that perform full
table scans. This data buffer pool was a sub-pool of db_block_buffers in
Oracle8i.

db_recycle_cache_size - This is reserved for table blocks from very large
tables that perform full table scans. This was buffer_pool_keep in Oracle8i.

large_pool_size - This is a special area of the shared pool that is reserved
for SGA usage when using the multi-threaded server. The large pool is used
for parallel query and RMAN processing, as well as setting the size of the
Java pool.

log_buffer - This parameter determines the amount of memory to allocate for
Oracle's redo log buffers. If there is a high amount of update activity, the
log_buffer should be allocated more space.

shared_pool_size - This parameter defines the pool that is shared by all
users in the system, including SQL areas and data dictionary caching. A
large shared_pool_size is not always better than a smaller shared pool. If
your application contains non-reusable SQL, you may get better performance
with a smaller shared pool.

java_pool_size - This parameter specifies the size of the memory area used
by Java, which is similar to the shared pool used by SQL and PL/SQL.

streams_pool_size - This is a new area in Oracle Database 10g that is used
to provide buffer areas for the streams components of Oracle.

This is exactly the same automatic tuning principle behind the Oracle9i
pga_aggregate_target parameter that made these parameters obsolete. If you
set pga_aggregate_target, then these parameters are ignored:

sort_area_size - This parameter determines the memory region that is
allocated for in-memory sorting. When the v$sysstat value sorts (disk)
becomes excessive, you may want to allocate additional memory.

hash_area_size - This parameter determines the memory region reserved for
hash joins. Starting with Oracle9i, Oracle Corporation does not recommend
using hash_area_size unless the instance is configured with the shared
server option. Oracle recommends that you enable automatic sizing of SQL
work areas by setting pga_aggregate_target; hash_area_size is retained only
for backward compatibility purposes.
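
As a sketch, switching a 10g instance to automatic shared memory management
could look like this (the value is illustrative; SCOPE=BOTH assumes the
instance runs from an spfile):

SQL> ALTER SYSTEM SET sga_target = 288M SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_cache_size = 0 SCOPE=BOTH;  -- let AMM size the cache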

Sample Initialization Parameter File


# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520

# Cursors and Library Cache


CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300

# Diagnostics and Statistics


BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump

# Control File Configuration


CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
"/vobs/oracle/oradata/mynewdb/control02.ctl",
"/vobs/oracle/oradata/mynewdb/control03.ctl")

# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE

# Shared Server
# Uncomment and use the first DISPATCHERS parameter below when your listener is
# configured for SSL
# (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
# "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
"(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
(PROTOCOL=TCP)

# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb

# Distributed, Replication and Snapshot


DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

# Network Registration
INSTANCE_NAME=mynewdb

# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800

# Processes and Sessions


PROCESSES=150

# Redo Log and Recovery


FAST_START_MTTR_TARGET=300

# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN

# Sort, Hash Joins, Bitmap Indexes


SORT_AREA_SIZE=524288

# Automatic Undo Management


UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Reasonable 10g init.ora:
------------------------
###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_file_multiblock_read_count=16

###########################################
# Cursors and Library Cache
###########################################
open_cursors=300

###########################################
# Database Identification
###########################################
db_domain=antapex.org
db_name=test10g

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=C:\oracle/admin/test10g/bdump
core_dump_dest=C:\oracle/admin/test10g/cdump
user_dump_dest=C:\oracle/admin/test10g/udump

###########################################
# File Configuration
###########################################
control_files=("C:\oracle\oradata\test10g\control01.ctl",
"C:\oracle\oradata\test10g\control02.ctl",
"C:\oracle\oradata\test10g\control03.ctl")
db_recovery_file_dest=C:\oracle/flash_recovery_area
db_recovery_file_dest_size=2147483648

###########################################
# Job Queues
###########################################
job_queue_processes=10

###########################################
# Miscellaneous
###########################################
compatible=10.2.0.1.0

###########################################
# Processes and Sessions
###########################################
processes=150

###########################################
# SGA Memory
###########################################
sga_target=287309824

###########################################
# Security and Auditing
###########################################
audit_file_dest=C:\oracle/admin/test10g/adump
remote_login_passwordfile=EXCLUSIVE

###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=test10gXDB)"

###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=95420416

###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS1

LOG_ARCHIVE_DEST=c:\oracle\oradata\log
LOG_ARCHIVE_FORMAT=arch_%t_%s_%r.dbf

db_recovery_file_dest (the flash recovery area) is the location where RMAN stores disk-based backups.
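
With these parameters in place, archiving is enabled with the standard
sequence (a sketch):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST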

Step 4: Connect to the Instance:
--------------------------------
Start SQL*Plus and connect to your Oracle instance AS SYSDBA.

$ SQLPLUS /nolog
CONNECT SYS/password AS SYSDBA

Step 5: Start the Instance:
---------------------------

Start an instance without mounting a database. Typically, you do this only during
database creation or while performing
maintenance on the database. Use the STARTUP command with the NOMOUNT option. In
this example, because the initialization
parameter file is stored in the default location, you are not required to specify
the PFILE clause:

STARTUP NOMOUNT

At this point, there is no database. Only the SGA is created and background
processes are started in preparation
for the creation of a new database.

Step 6: Issue the CREATE DATABASE Statement:
--------------------------------------------

To create the new database, use the CREATE DATABASE statement. The following
statement creates database mynewdb:

CREATE DATABASE mynewdb


USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE tempts1
DATAFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

Oracle 10g create statement:

CREATE DATABASE playdwhs


USER SYS IDENTIFIED BY cactus
USER SYSTEM IDENTIFIED BY cactus
LOGFILE GROUP 1 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log') SIZE 100M,
        GROUP 2 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log') SIZE 100M,
        GROUP 3 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/dbms/tdbaplay/playdwhs/database/default/system01.dbf' SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf' SIZE 300M REUSE
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/dbms/tdbaplay/playdwhs/database/default/temp01.dbf'
SIZE 1000M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf'
SIZE 1000M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

CONNECT SYS/password AS SYSDBA


-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE '/u01/oracle/oradata/mynewdb/users01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

For information about creating tablespaces, see Chapter 8, "Managing Tablespaces".

Step 9: Run Scripts to Build Data Dictionary Views


Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA


@/u01/oracle/rdbms/admin/catalog.sql
@/u01/oracle/rdbms/admin/catproc.sql
EXIT

catalog.sql   All databases              Creates the data dictionary and public
                                         synonyms for many of its views; grants
                                         PUBLIC access to the synonyms.
catproc.sql   All databases              Runs all scripts required for, or used
                                         with, PL/SQL.
catclust.sql  Real Application Clusters  Creates Real Application Clusters data
                                         dictionary views.

Oracle supplies other scripts that create additional structures you can use in
managing your database and creating database applications. These scripts are
listed in Table B-2.

See Also:

Your operating system-specific Oracle documentation for the exact names and
locations of these scripts on your operating system

Table B-2 Creating Additional Data Dictionary Structures

Script Name     Needed For - Run By - Description
--------------- -----------------------------------------------------------
catblock.sql    Performance management - SYS
                Creates views that can dynamically display lock dependency
                graphs.
catexp7.sql     Exporting data to Oracle7 - SYS
                Creates the dictionary views needed for the Oracle7 Export
                utility to export data from the Oracle Database in Oracle7
                Export file format.
caths.sql       Heterogeneous Services - SYS
                Installs packages for administering heterogeneous services.
catio.sql       Performance management - SYS
                Allows I/O to be traced on a table-by-table basis.
catoctk.sql     Security - SYS
                Creates the Oracle Cryptographic Toolkit package.
catqueue.sql    Advanced Queuing
                Creates the dictionary objects required for Advanced Queuing.
catrep.sql      Oracle Replication - SYS
                Runs all SQL scripts for enabling database replication.
catrman.sql     Recovery Manager - RMAN or any user with the
                GRANT_RECOVERY_CATALOG_OWNER role
                Creates recovery manager tables and views (schema) to
                establish an external recovery catalog for the backup,
                restore, and recovery functionality provided by the Recovery
                Manager (RMAN) utility.
dbmsiotc.sql    Storage management - Any user
                Analyzes chained rows in index-organized tables.
dbmsotrc.sql    Performance management - SYS or SYSDBA
                Enables and disables generation of Oracle Trace output.
dbmspool.sql    Performance management - SYS or SYSDBA
                Enables the DBA to lock PL/SQL packages, SQL statements, and
                triggers into the shared pool.
userlock.sql    Concurrency control - SYS or SYSDBA
                Provides a facility for user-named locks that can be used in
                a local or clustered environment to aid in sequencing
                application actions.
utlbstat.sql and utlestat.sql
                Performance monitoring - SYS
                Respectively start and stop collecting performance tuning
                statistics.
utlchn1.sql     Storage management - Any user
                For use with the Oracle Database. Creates tables for storing
                the output of the ANALYZE command with the CHAINED ROWS
                option. Can handle both physical and logical rowids.
utlconst.sql    Year 2000 compliance - Any user
                Provides functions to validate that CHECK constraints on
                date columns are year 2000 compliant.
utldtree.sql    Metadata management - Any user
                Creates tables and views that show dependencies between
                objects.
utlexpt1.sql    Constraints - Any user
                For use with the Oracle Database. Creates the default table
                (EXCEPTIONS) for storing exceptions from enabling
                constraints. Can handle both physical and logical rowids.
utlip.sql       PL/SQL - SYS
                Used primarily for upgrade and downgrade operations. It
                invalidates all existing PL/SQL modules by altering certain
                dictionary tables so that subsequent recompilations will
                occur in the format required by the database. It also
                reloads the packages STANDARD and DBMS_STANDARD, which are
                necessary for any PL/SQL compilations.
utlirp.sql      PL/SQL - SYS
                Used to change from 32-bit to 64-bit word size or vice
                versa. This script recompiles existing PL/SQL modules in
                the format required by the new database. It first alters
                some data dictionary tables. Then it reloads the packages
                STANDARD and DBMS_STANDARD, which are necessary for using
                PL/SQL. Finally, it triggers a recompilation of all PL/SQL
                modules, such as packages, procedures, and types.
utllockt.sql    Performance monitoring - SYS or SYSDBA
                Displays a lock wait-for graph, in tree structure format.
utlpwdmg.sql    Security - SYS or SYSDBA
                Creates PL/SQL functions for default password complexity
                verification. Sets the default password profile parameters
                and enables password management features.
utlrp.sql       PL/SQL - SYS
                Recompiles all existing PL/SQL modules that were previously
                in an INVALID state, such as packages, procedures, and types.
utlsampl.sql    Examples - SYS or any user with DBA role
                Creates sample tables, such as emp and dept, and users,
                such as scott.
utlscln.sql     Oracle Replication - Any user
                Copies a snapshot schema from another snapshot site.
utltkprf.sql    Performance management - SYS
                Creates the TKPROFER role to allow the TKPROF profiling
                utility to be run by non-DBA users.
utlvalid.sql    Partitioned tables - Any user
                Creates tables required for storing output of
                ANALYZE TABLE ... VALIDATE STRUCTURE of a partitioned table.
utlxplan.sql    Performance management - Any user
                Creates the output table (PLAN_TABLE) used by EXPLAIN PLAN.

+++++++

On pl003, please create the following two instances:

- playdwhs
- accpdwhs

And on pl101 the following instance:

- proddwhs

Please conform to the current standard for filesystems.
That is, all these databases go on volume group roca_vg,
with the following mount points underneath:
/dbms/tdba[env]/[env]dwhs/admin
/dbms/tdba[env]/[env]dwhs/database
/dbms/tdba[env]/[env]dwhs/recovery
/dbms/tdba[env]/[env]dwhs/export

/dev/fslv32 0.25 0.23 7% 55 1%


/dbms/tdbaaccp/accproca/admin
/dev/fslv33 15.00 11.78 22% 17 1%
/dbms/tdbaaccp/accproca/database
/dev/fslv34 4.00 3.51 13% 12 1%
/dbms/tdbaaccp/accproca/recovery
/dev/fslv35 5.00 4.99 1% 10 1%
/dbms/tdbaaccp/accproca/export

1. FS:
SIZE(G): LPs: PPs:
/dbms/tdbaplay/playdwhs/admin 0.25 4 8
/dbms/tdbaplay/playdwhs/database 15 240 480
/dbms/tdbaplay/playdwhs/recovery 4 64 128
/dbms/tdbaplay/playdwhs/export 5 80 160

SIZE(G): LPs: PPs:


/dbms/tdbaaccp/accpdwhs/admin 0.25 4 8
/dbms/tdbaaccp/accpdwhs/database 15 240 480
/dbms/tdbaaccp/accpdwhs/recovery 4 64 128
/dbms/tdbaaccp/accpdwhs/export 5 80 160

SIZE(G): LPs: PPs:


/dbms/tdbaprod/proddwhs/admin 0.25 4 8
/dbms/tdbaprod/proddwhs/database 15 240 480
/dbms/tdbaprod/proddwhs/recovery 4 64 128
/dbms/tdbaprod/proddwhs/export 5 80 160

+++++++

Step 7: Create Additional Tablespaces:
--------------------------------------

To make the database functional, you need to create additional files and
tablespaces for users.
The following sample script creates some additional tablespaces:

CONNECT SYS/password AS SYSDBA


-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE '/vobs/oracle/oradata/mynewdb/users01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/vobs/oracle/oradata/mynewdb/indx01.dbf'
SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
EXIT

Step 8: Run Scripts to Build Data Dictionary Views:
---------------------------------------------------

Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA


@/vobs/oracle/rdbms/admin/catalog.sql
@/vobs/oracle/rdbms/admin/catproc.sql
EXIT

Do not forget to run the script $ORACLE_HOME/sqlplus/admin/pupbld.sql as SYSTEM:


@/dbms/tdbaaccp/ora10g/home/sqlplus/admin/pupbld.sql

@/dbms/tdbaaccp/ora10g/home/rdbms/admin/catexp.sql

The following table contains descriptions of the scripts:

Script Description
CATALOG.SQL: Creates the views of the data dictionary tables, the dynamic
performance views, and public synonyms
for many of the views. Grants PUBLIC access to the synonyms.

CATPROC.SQL: Runs all scripts required for or used with PL/SQL.

Step 10: Create a Server Parameter File (Recommended):
------------------------------------------------------

Oracle recommends you create a server parameter file as a dynamic means of
maintaining initialization parameters. The following script creates a server
parameter file from the text initialization parameter file and writes it to
the default location. The instance is shut down, then restarted using the
server parameter file (in the default location).

CONNECT SYS/password AS SYSDBA


-- create the server parameter file
CREATE SPFILE='/vobs/oracle/dbs/spfilemynewdb.ora' FROM
PFILE='/vobs/oracle/admin/mynewdb/scripts/init.ora';
SHUTDOWN
-- this time you will start up using the server parameter file
CONNECT SYS/password AS SYSDBA
STARTUP
EXIT

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfileOWS.ora'
FROM PFILE='/opt/app/oracle/admin/OWS/pfile/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGACC.ora'
FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGTST.ora'
FROM PFILE='/opt/app/oracle/admin/PEGTST/scripts/init.ora';
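
After the restart, you can verify that the instance is indeed running from
the server parameter file (standard SQL*Plus check):

SQL> SHOW PARAMETER spfile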

9.10 Oracle 9i licenses:
------------------------

Setting License Parameters

Oracle no longer offers licensing by the number of concurrent sessions.
Therefore the LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING
initialization parameters have been deprecated.

- Named user licensing:

If you use named user licensing, Oracle can help you enforce this form of
licensing. You can set a limit on the number of users
created in the database. Once this limit is reached, you cannot create more users.

Note:
This mechanism assumes that each person accessing the database has a unique user
name and that no people share a user name.
Therefore, so that named user licensing can help you ensure compliance with your
Oracle license agreement, do not allow
multiple users to log in using the same user name.

To limit the number of users created in a database, set the LICENSE_MAX_USERS
initialization parameter in the database's initialization parameter file,
as shown in the following example:

LICENSE_MAX_USERS = 200
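
To check the current license limits and high-water marks, you can query the
standard v$license view:

SELECT sessions_max, sessions_current, sessions_highwater, users_max
FROM v$license;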

- Per-processor licensing:
Oracle encourages customers to license the database on the per-processor licensing
model. With this licensing method
you count up the number of CPUs in your computer, and multiply that number by the
licensing cost of the database
and database options you need.

Currently the Standard (STD) edition of the database is priced at $15,000 per
processor, and the Enterprise (EE) edition is priced at
$40,000 per processor. The RAC feature is $20,000 per processor extra, and you
need to add 22 percent annually for the support contract.

It's possible to license the database on a per-user basis, which makes financial
sense if there'll never be many users accessing
the database. However, the licensing method can't be changed after it is initially
licensed. So if the business grows and
requires significantly more users to access the database, the costs could exceed
the costs under the per-processor model.
You also have to understand what Oracle Corporation considers to be a user
for licensing purposes.
If 1,000 users access the database through an application server, which only makes
five connections to the database,
then Oracle will require that either 1,000 user licenses be purchased or that the
database be licensed via
the per-processor pricing model.

The Oracle STD edition is licensed at $300 per user (with a five user minimum),
and EE edition costs $800 per user
(with a 25 user minimum). There is still an annual support fee of 22 percent,
which should be budgeted in addition to the licensing fees.
If the support contract is not paid each year, then the customer is not licensed
to upgrade to the latest version of the database and must
re-purchase all of the licenses over again in order to upgrade versions. This
section only gives you a brief overview of the available
licensing options and costs, so if you have additional questions you really should
contact an Oracle sales representative.

Note about 10g init.ora:
------------------------

PARALLEL_MAX_SERVERS=(> apply or capture processes)
   Each capture process and apply process may use multiple parallel
   execution servers. The apply process by default needs two parallel
   servers, so this parameter needs to be set to at least 2 even for a
   single non-parallel apply process. Specify a value for this parameter to
   ensure that there are enough parallel execution servers. In our
   installation we went for 12 apply servers, so we increased
   parallel_max_servers above this figure of 12.

_kghdsidx_count=1
   This parameter prevents the shared_pool from being divided among CPUs.

LOG_PARALLELISM=1
   This parameter must be set to 1 at each database that captures events.

Parameters set using the DBMS_CAPTURE_ADM package:

Using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure, there are 3 parameters
that are commonly set during installation:

PARALLELISM=3
   There may be only one logminer session for the whole ruleset and only one
   enqueuer process that will push the objects. You can safely define as
   much as 3 capture execution processes per CPU.

_CHECKPOINT_FREQUENCY=1
   Increases the frequency of logminer checkpoints, especially in a database
   with significant LOB or DDL activity. A logminer checkpoint is requested
   by default every 10Mb of redo mined.

_SGA_SIZE
   Amount of memory available from the shared pool for logminer processing.
   The default amount of shared_pool memory allocated to logminer is 10Mb.
   Increase this value especially in environments where large LOBs are
   processed.
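
A sketch of setting these capture parameters with DBMS_CAPTURE_ADM (the
capture process name 'capture1' is an assumption):

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER('capture1', 'parallelism', '3');
  DBMS_CAPTURE_ADM.SET_PARAMETER('capture1', '_checkpoint_frequency', '1');
END;
/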

9.11 Older Database installations:
----------------------------------

CREATE DATABASE Examples on 8.x

The easiest way to create an 8i or 9i database is by using the "Database
Configuration Assistant".
Using this tool, you are able to create a database and setup the NET configuration
and the listener,
in a graphical environment.

It is also possible to use a script running in sqlplus (8i, 9i) or svrmgrl
(only in 8i).

Charactersets that are used a lot in europe:

WE8ISO8859P15
WE8MMSWIN1252

Example 1:
----------

$ SQLPLUS /nolog
CONNECT username/password AS sysdba

STARTUP NOMOUNT PFILE=<path to init.ora>

-- Create database
CREATE DATABASE rbdb1
CONTROLFILE REUSE
LOGFILE '/u01/oracle/rbdb1/redo01.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo02.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo03.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo04.log' SIZE 1M REUSE
DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 10M REUSE
AUTOEXTEND ON
NEXT 10M MAXSIZE 200M
CHARACTER SET WE8ISO8859P1;
run catalog.sql
run catproc.sql

-- Create a temporary rollback segment in the SYSTEM tablespace
CREATE ROLLBACK SEGMENT rb_temp STORAGE (INITIAL 100K NEXT 250K);

-- Bring the temporary rollback segment online before proceeding
ALTER ROLLBACK SEGMENT rb_temp ONLINE;

-- Create additional tablespaces ...


-- RBS: For rollback segments
-- USERs: Create user sets this as the default tablespace
-- TEMP: Create user sets this as the temporary tablespace
CREATE TABLESPACE rbs
DATAFILE '/u01/oracle/rbdb1/rbs01.dbf' SIZE 5M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;
CREATE TABLESPACE users
DATAFILE '/u01/oracle/rbdb1/users01.dbf' SIZE 3M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;
CREATE TABLESPACE temp
DATAFILE '/u01/oracle/rbdb1/temp01.dbf' SIZE 2M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;

-- Create rollback segments.


CREATE ROLLBACK SEGMENT rb1 STORAGE(INITIAL 50K NEXT 250K)
tablespace rbs;
CREATE ROLLBACK SEGMENT rb2 STORAGE(INITIAL 50K NEXT 250K)
tablespace rbs;
CREATE ROLLBACK SEGMENT rb3 STORAGE(INITIAL 50K NEXT 250K)
tablespace rbs;
CREATE ROLLBACK SEGMENT rb4 STORAGE(INITIAL 50K NEXT 250K)
tablespace rbs;

-- Bring new rollback segments online and drop the temporary system one
ALTER ROLLBACK SEGMENT rb1 ONLINE;
ALTER ROLLBACK SEGMENT rb2 ONLINE;
ALTER ROLLBACK SEGMENT rb3 ONLINE;
ALTER ROLLBACK SEGMENT rb4 ONLINE;

ALTER ROLLBACK SEGMENT rb_temp OFFLINE;


DROP ROLLBACK SEGMENT rb_temp ;

Example 2:
----------

connect internal
startup nomount pfile=/disk00/oracle/software/7.3.4/dbs/initDB1.ora

create database "DB1"


maxinstances 2
maxlogfiles 32
maxdatafiles 254
character set "US7ASCII"

datafile '/disk02/oracle/oradata/DB1/system01.dbf' size 128M
    autoextend on next 8M maxsize 256M
logfile group 1
    ('/disk03/oracle/oradata/DB1/redo1a.log',
     '/disk04/oracle/oradata/DB1/redo1b.log') size 5M,
        group 2
    ('/disk05/oracle/oradata/DB1/redo2a.log',
     '/disk06/oracle/oradata/DB1/redo2b.log') size 5M

REM * install data dictionary views


@/disk00/oracle/software/7.3.4/rdbms/admin/catalog.sql
@/disk00/oracle/software/7.3.4/rdbms/admin/catproc.sql

create rollback segment SYSROLL tablespace system


storage (initial 2M next 2M minextents 2 maxextents 255);

alter rollback segment SYSROLL online;

create tablespace RBS


datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M
default storage (
initial 500K
next 500K
pctincrease 0
minextents 2 );

create rollback segment RBS01 tablespace RBS
   storage (initial 512K next 512K minextents 50);

create rollback segment RBS02 tablespace RBS


storage (initial 500K next 500K minextents 2 optimal 1M);

etc..

alter rollback segment RBS01 online;


alter rollback segment RBS02 online;

etc..

create tablespace DATA


datafile '/disk05/oracle/oradata/DB1/data01.dbf' size 25M
default storage (
initial 500K
next 500K
pctincrease 0
maxextents UNLIMITED );

etc.. other tablespaces you need


run other scripts you need.

alter user sys temporary tablespace TEMP;


alter user system default tablespace TOOLS temporary tablespace TEMP;

connect system/manager

@/disk00/oracle/software/7.3.4/rdbms/admin/catdbsyn.sql
@/disk00/oracle/software/7.3.4/sqlplus/admin/pupbld.sql
(for PRODUCT_USER_PROFILE, SQLPLUS_USER_PROFILE)

Example 3: on NT/2000, 8i, best example:
----------------------------------------

Suppose you want a second database on a NT/2000 Server:

1. create a service with oradim

oradim -new -sid <SID> -startmode auto -pfile <full path to init.ora>

2. sqlplus /nolog (or use svrmgrl)

startup nomount pfile="G:\oracle\admin\hd\pfile\init.ora"

SVRMGR> CREATE DATABASE hd


LOGFILE 'G:\oradata\hd\redo01.log' SIZE 2048K,
'G:\oradata\hd\redo02.log' SIZE 2048K,
'G:\oradata\hd\redo03.log' SIZE 2048K
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXLOGHISTORY 1
DATAFILE 'G:\oradata\hd\system01.dbf' SIZE 264M REUSE AUTOEXTEND ON NEXT 10240K
MAXDATAFILES 254
MAXINSTANCES 1
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET WE8ISO8859P1;

@catalog.sql
@catproc.sql

Oracle 9i:
----------

Example 1:
----------

CREATE DATABASE mynewdb


USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

9.2 Automatic start of Oracle at system boot:
=============================================

9.2.1 oratab:
-------------

Contents of ORATAB in /etc or /var/opt:

Example:

# $ORACLE_SID:$ORACLE_HOME:[N|Y]
#
ORCL:/u01/app/oracle/product/8.0.5:Y
#

The Oracle scripts to start and stop the database are:

$ORACLE_HOME/bin/dbstart and dbshut,
or startdb and stopdb or something similar. These look in ORATAB to see which
databases must be started.
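
For illustration, a minimal sketch of such a loop (this is not the actual
dbstart source; the oratab location and the use of sqlplus are assumptions):

#!/bin/sh
# minimal sketch: start every instance that is flagged 'Y' in oratab
ORATAB=/etc/oratab

grep -v '^#' $ORATAB | while IFS=: read SID HOME FLAG
do
  if [ "$FLAG" = "Y" ] ; then
    ORACLE_SID=$SID;   export ORACLE_SID
    ORACLE_HOME=$HOME; export ORACLE_HOME
    $ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
startup
EOF
  fi
done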

9.2.2 dbstart and dbshut:
-------------------------

The dbstart script reads oratab and also performs tests to determine the
Oracle version. Apart from that, the core consists of:

starting sqldba, svrmgrl or sqlplus,
then doing a connect,
then issuing the startup command.

A corresponding story applies to dbshut.

9.2.3 init, sysinit, rc:
------------------------

For an automatic start, now add the correct entries in the
/etc/rc2.d/S99dbstart
(or equivalent) file:

During Unix startup, the scripts in /etc/rc2.d that begin with an 'S' are
executed, in alphabetical order.
The Oracle database processes will be started as (one of) the last processes.
The file S99oracle is linked into this directory.

Contents of S99oracle:

su - oracle -c "/path/to/$ORACLE_HOME/bin/dbstart" # Start DBs
su - oracle -c "/path/to/$ORACLE_HOME/bin/lsnrctl start" # Start listener
su - oracle -c "/path/to/$ORACLE_HOME/bin/namesctl start" # Start OraNames (optional)

The dbstart script is a standard Oracle script. It looks in oratab to see
which SIDs are set to 'Y', and will start those databases.

or customized via a custom startdb script:

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN

su - oracle -c "$ORACLE_ADMIN/bin/startdb WPRD 1>$ORACLE_ADMIN/log/WPRD/startWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WTST 1>$ORACLE_ADMIN/log/WTST/startWTST.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WCUR 1>$ORACLE_ADMIN/log/WCUR/startWCUR.$$ 2>&1"

9.3 Stopping Oracle on Unix:
----------------------------

During Unix shutdown (shutdown -i 0), the scripts in the directory /etc/rc2.d
that begin with a 'K' are executed, in alphabetical order.
The Oracle database processes are among the first processes to be shut down.
The file K10oracle is linked to /etc/rc2.d/K10oracle.

# Configuration File: /opt/oracle/admin/bin/K10oracle

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN

su - oracle -c "$ORACLE_ADMIN/bin/stopdb WPRD 1>$ORACLE_ADMIN/log/WPRD/stopWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WCUR 1>$ORACLE_ADMIN/log/WCUR/stopWCUR.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WTST 1>$ORACLE_ADMIN/log/WTST/stopWTST.$$ 2>&1"

9.4 startdb and stopdb:
-----------------------

Startdb [ORACLE_SID]
--------------------

This script is part of the S99oracle script. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/startdb

# Set the general environment

. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
echo $ORACLE_SID

# Set the RDBMS environment

. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Start the database

sqlplus /nolog << EOF
connect / as sysdba
startup
EOF

# Start the listener

lsnrctl start $ORACLE_SID

# Start the intelligent agent for all instances

#lsnrctl dbsnmp_start

Stopdb [ORACLE_SID]
-------------------

This script is part of the K10oracle script. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/stopdb

# Set the general environment

. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
export ORACLE_SID

# Set the RDBMS environment

. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Stop the intelligent agent

#lsnrctl dbsnmp_stop

# Stop the listener

lsnrctl stop $ORACLE_SID

# Stop the database.

sqlplus /nolog << EOF
connect / as sysdba
shutdown immediate
EOF

9.5 Batches:
------------

The batches (jobs) are started by the Unix cron process.

# Batches (Oracle)

# Configuration File: /var/spool/cron/crontabs/root

# Format of lines:
# min hour daymo month daywk cmd
#
# Dayweek 0=sunday, 1=monday...
0 9 * * 6 /sbin/sh /opt/oracle/admin/batches/bin/batches.sh >> /opt/oracle/admin/batches/log/batcheserroroutput.log 2>&1

# Configuration File: /opt/oracle/admin/batches/bin/batches.sh

# Setting ' BL_TRACE=T ; export BL_TRACE ' on the command line shows all
# commands executed.
case $BL_TRACE in
T) set -x ;;
esac

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN


ORACLE_HOME=/opt/oracle/product/8.1.6; export ORACLE_HOME

ORACLE_SID=WCUR ; export ORACLE_SID


su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env;

cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG


@$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1>
$ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

ORACLE_SID=WCON ; export ORACLE_SID


su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env;

cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG


@$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1>
$ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

9.6 Autostart in NT/Win2K:


--------------------------

1) Older versions

Delete the existing instance from the command prompt:

oradim80 -delete -sid SID

Recreate the instance from the command prompt:

oradim -new -sid SID -intpwd <password> -startmode auto -pfile <path\initSID.ora>

Execute the command file from the command prompt:

oracle_home\database\strt<sid>.cmd

Check the log file generated from this execution: oracle_home\rdbmsxx\oradimxx.log


2) NT Registry value

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0\ORA_SID_AUTOSTART REG_EXPAND_SZ TRUE

9.7 Tools:
----------

Relinking Oracle:
-----------------

info:

showrev -p
pkginfo -i

relink:

make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk install
make -f $ORACLE_HOME/svrmgr/lib/ins_svrmgr.mk install
make -f $ORACLE_HOME/network/lib/ins_network.mk install

cd $ORACLE_HOME/bin

relink all

Relinking Oracle

Background: Applications for UNIX are generally not distributed as complete
executables. Oracle, like many application vendors who create products for
UNIX, distributes individual object files, library archives of object files,
and some source files, which then get 'relinked' at the operating system level
during installation to create usable executables. This guarantees a reliable
integration with functions provided by the OS system libraries.
Relinking occurs automatically under these circumstances:
- An Oracle product has been installed with an Oracle provided installer.
- An Oracle patch set has been applied via an Oracle provided installer.

[Step 1] Log into the UNIX system as the Oracle software owner
Typically this is the user 'oracle'.

[STEP 2] Verify that your $ORACLE_HOME is set correctly:


For all Oracle Versions and Platforms, perform this basic environment check
first:
% cd $ORACLE_HOME
% pwd
...Doing this will ensure that $ORACLE_HOME is set correctly in your current
environment.

[Step 3] Verify and/or Configure the UNIX Environment for Proper Relinking:
For all Oracle Versions and UNIX Platforms: The Platform specific environment
variables LIBPATH, LD_LIBRARY_PATH,
& SHLIB_PATH typically are already set to include system library locations like
'/usr/lib'.
In most cases, you need only check what they are set to first, then add the
$ORACLE_HOME/lib directory to them
where appropriate. i.e.:
% setenv LD_LIBRARY_PATH ${ORACLE_HOME}/lib:${LD_LIBRARY_PATH}
(see [NOTE:131207.1] How to Set UNIX Environment Variables for help with setting
UNIX environment variables)

If on SOLARIS (Sparc or Intel) with:


Oracle 7.3.X, 8.0.X, or 8.1.X:
- Ensure that /usr/ccs/bin is before /usr/ucb in $PATH
% which ld ....should return '/usr/ccs/bin/ld'
If using 32bit(non 9i) Oracle,
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib
If using 64bit(non 9i) Oracle,
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib64
Oracle 9.X.X (64Bit) on Solaris (64Bit) OS
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib32
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib
Oracle 9.X.X (32Bit) on Solaris (64Bit) OS
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib

[Step 4] For all Oracle Versions and UNIX Platforms:


Verify that you performed Step 2 correctly:
% env|pg ....make sure that you see the correct absolute path for
$ORACLE_HOME in the variable definitions.

[Step 5] Run the OS Commands to Relink Oracle:


Before relinking Oracle, shut down both the database and the listener.

Oracle 8.1.X or 9.X.X


------------------------
*** NEW IN 8i AND ABOVE ***
A 'relink' script is provided in the $ORACLE_HOME/bin directory.
% cd $ORACLE_HOME/bin
% relink ...this will display all of the command's options. usage:
relink <parameter>
accepted values for parameter:
all Every product executable that has been installed
oracle Oracle Database executable only
network net_client, net_server, cman
client net_client, plsql
client_sharedlib Client shared library
interMedia ctx
ctx Oracle Text utilities
precomp All precompilers that have been installed
utilities All utilities that have been installed
oemagent oemagent
Note: To give the correct permissions to the nmo and nmb
executables,
you must run the root.sh script after relinking oemagent.

ldap ldap, oid

Note: ldap option is available only from 9i. In 8i, you would have to manually
relink ldap.
You can relink most of the executables associated with an Oracle Server
Installation
by running the following command: % relink all
This will not relink every single executable Oracle provides
(you can discern which executables were relinked by checking their timestamp with

'ls -l' in the $ORACLE_HOME/bin directory).


However, 'relink all' will recreate the shared libraries that most executables
rely on and thereby
resolve most issues that require a proper relink.

-or-
Since the 'relink' command merely calls the traditional 'make' commands, you
still have the option of running the 'make' commands independently:
For executables: oracle, exp, imp, sqlldr, tkprof, mig, dbv, orapwd, rman,
svrmgrl, ogms, ogmsctl

% cd $ORACLE_HOME/rdbms/lib
% make -f ins_rdbms.mk install
For executables: sqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus.mk install
For executables: isqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus install_isqlplus
For executables: dbsnmp, oemevent, oratclsh
% cd $ORACLE_HOME/network/lib
% make -f ins_oemagent.mk install
For executables: names, namesctl
% cd $ORACLE_HOME/network/lib
% make -f ins_names.mk install
For executables: osslogin, trcasst, trcroute, onrsd, tnsping
% cd $ORACLE_HOME/network/lib
% make -f ins_net_client.mk install
For executables: tnslsnr, lsnrctl
% cd $ORACLE_HOME/network/lib
% make -f ins_net_server.mk install
For executables related to ldap (for example Oracle Internet Directory):
% cd $ORACLE_HOME/ldap/lib
% make -f ins_ldap.mk install

Note: from the Unix Installation/OS RDBMS Technical Forum:

From: Ray Stell 20-Apr-05 21:43


Subject: solaris upgrade

RDBMS Version: 9.2.0.4


Operating System and Version: Solaris 8
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

solaris upgrade

I need to move a server from solaris 5.8 to 5.9. Does this


require a new oracle 9.2.0 ee server install or relink or
nothing at all? Thanks.

--------------------------------------------------------------------------------

From: Samir Saad 21-Apr-05 03:28


Subject: Re : solaris upgrade

You must relink even if you find that the databases came up after Solaris upgrade
and they seem fine.

As for the existing Oracle installations, they will all be fine.


Samir.

--------------------------------------------------------------------------------

From: Oracle, soumya anand 21-Apr-05 10:59


Subject: Re : solaris upgrade

Hello Ray,

As rightly pointed out by Samir, after an OS upgrade it is sufficient to
relink the executables.

Regards,
Soumya

Note: troubles after relink:


----------------------------

If you see on AIX something that resembles the following:

P522:/home/oracle $lsnrctl
exec(): 0509-036 Cannot load program lsnrctl because of the following errors:
0509-130 Symbol resolution failed for /usr/lib/libc.a[aio_64.o] because:
0509-136 Symbol kaio_rdwr64 (number 0) is not exported from
dependent module /unix.
0509-136 Symbol listio64 (number 1) is not exported from
dependent module /unix.
0509-136 Symbol acancel64 (number 2) is not exported from
dependent module /unix.
0509-136 Symbol iosuspend64 (number 3) is not exported from
dependent module /unix.
0509-136 Symbol aio_nwait (number 4) is not exported from
dependent module /unix.
0509-150 Dependent module libc.a(aio_64.o) could not be loaded.
0509-026 System error: Cannot run a file that does not have a valid
format.
0509-192 Examine .loader section symbols with the
'dump -Tv' command.

If this occurs, you have asynchronous I/O turned off.

To turn on asynchronous I/O:


Run smitty chgaio and set STATE to be configured at system restart from defined to
available.
Press Enter.
Do one of the following:
Restart your system.
Run smitty aio and move the cursor to Configure defined Asynchronous I/O. Then
press Enter.

trace:
------

truss -aef -o /tmp/trace svrmgrl

To trace what a Unix process is doing enter:

truss -rall -wall -p <PID>


truss -p $ lsnrctl dbsnmp_start

NOTE: The "truss" command works on SUN and Sequent. Use "tusc" on HP-UX, "strace"
on Linux,
"trace" on SCO Unix or call your system administrator to find the equivalent
command on your system.
Monitor your Unix system:

Logfiles:
---------

Unix message files record all system problems like disk errors, swap errors, NFS
problems, etc.
Monitor the following files on your system to detect system problems:

tail -f /var/adm/SYSLOG
tail -f /var/adm/messages
tail -f /var/log/syslog

===============
10. CONSTRAINTS:
===============

10.1 index owner and table owner information: DBA_INDEXES
----------------------------------------------------------

set linesize 100

SELECT DISTINCT
substr(owner, 1, 10) as INDEX_OWNER,
substr(index_name, 1, 40) as INDEX_NAME,
substr(tablespace_name,1,40) as TABLE_SPACE,
substr(index_type, 1, 10) as INDEX_TYPE,
substr(table_owner, 1, 10) as TABLE_OWNER,
substr(table_name, 1, 40) as TABLE_NAME,
BLEVEL,NUM_ROWS,STATUS
FROM DBA_INDEXES
order by index_owner;

SELECT DISTINCT
substr(owner, 1, 10) as INDEX_OWNER,
substr(index_name, 1, 40) as INDEX_NAME,
substr(index_type, 1, 10) as INDEX_TYPE,
substr(table_owner, 1, 10) as TABLE_OWNER,
substr(table_name, 1, 40) as TABLE_NAME
FROM DBA_INDEXES
WHERE table_name='HEAT_CUSTOMER';

SELECT
substr(owner, 1, 10) as INDEX_OWNER,
substr(index_name, 1, 40) as INDEX_NAME,
substr(index_type, 1, 10) as INDEX_TYPE,
substr(table_owner, 1, 10) as TABLE_OWNER,
substr(table_name, 1, 40) as TABLE_NAME
FROM DBA_INDEXES
WHERE owner<>table_owner;

10.2 PK and FK constraint relations:
------------------------------------

SELECT
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
SUBSTR(b.column_name, 1, 40) as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER in ('TRIDION_CM','TCMLOGDBUSER','VPOUSERDB')
AND c.constraint_type in ('P', 'R', 'U');

SELECT
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
SUBSTR(b.column_name, 1, 40) as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='RM_LIVE'
AND c.constraint_type in ('P', 'R', 'U');

SELECT distinct
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

-----------------------------------------------------------------------
create table reftables
(TYPE varchar2(32),
TABLE_NAME varchar2(40),
CONSTRAINT_NAME varchar2(40),
REF_KEY varchar2(40),
REF_TABLE varchar2(40));

insert into reftables


(type,table_name,constraint_name,ref_key)
SELECT distinct
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

update reftables
set REF_TABLE=(select distinct table_name from dba_cons_columns
where owner='RM_LIVE' and CONSTRAINT_NAME=REF_KEY);

----------------------------------------------------------------------

SELECT
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

SELECT
c.constraint_type as TYPE,
SUBSTR(c.table_name, 1, 40) as TABLE_NAME,
SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
(select distinct d.table_name from dba_cons_columns d
 where d.constraint_name=c.r_constraint_name) as REF_TABLE
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='RM_LIVE'
AND c.constraint_type in ('R', 'P');

select c.constraint_name, c.constraint_type, c.table_name,
c.r_constraint_name,
o.constraint_name, o.column_name
from dba_constraints c, dba_cons_columns o
where c.constraint_name=o.constraint_name and c.constraint_type='R'
and c.owner='BRAINS';

SELECT 'SELECT * FROM '||c.table_name||' WHERE '||b.column_name||' '||


c.search_condition
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE
c.constraint_name=b.constraint_name AND
c.OWNER='BRAINS' AND c.constraint_type = 'C';

SELECT 'ALTER TABLE PROJECTS.'||table_name||' enable constraint '||


constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='PROJECTS' AND constraint_type='R';

SELECT 'ALTER TABLE BRAINS.'||table_name||' disable constraint '||


constraint_name||';'
FROM USER_CONSTRAINTS
WHERE owner='BRAINS' AND constraint_type='R';

10.3 PK and FK constraint information: DBA_CONSTRAINTS
-------------------------------------------------------

-- owner and all foreign key, constraints

SELECT
SUBSTR(owner, 1, 10) as OWNER,
constraint_type as TYPE,
SUBSTR(table_name, 1, 40) as TABLE_NAME,
SUBSTR(constraint_name, 1, 40) as CONSTRAINT_NAME,
SUBSTR(r_constraint_name, 1, 40) as REF_KEY,
DELETE_RULE as DELETE_RULE,
status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS' AND constraint_type in ('R', 'P', 'U');

SELECT
SUBSTR(owner, 1, 10) as OWNER,
constraint_type as TYPE,
SUBSTR(table_name, 1, 30) as TABLE_NAME,
SUBSTR(constraint_name, 1, 30) as CONSTRAINT_NAME,
SUBSTR(r_constraint_name, 1, 30) as REF_KEY,
DELETE_RULE as DELETE_RULE,
status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS' AND constraint_type in ('R');

-- determine the owner and all primary key constraints of a given user,
-- on certain objects

Same query: set OWNER='desired_owner' AND constraint_type='P'

select owner, CONSTRAINT_NAME, CONSTRAINT_TYPE,TABLE_NAME,R_CONSTRAINT_NAME,STATUS

from dba_constraints where owner='FIN_VLIEG'


and constraint_type in ('P','R','U');

10.4 finding the index belonging to a given constraint: DBA_INDEXES,
DBA_CONSTRAINTS
---------------------------------------------------------------------

SELECT
c.constraint_type as Type,
substr(x.index_name, 1, 40) as INDX_NAME,
substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
substr(x.tablespace_name, 1, 40) as TABLESPACE
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE
c.constraint_name=x.index_name AND
c.constraint_name='UN_DEMO1';

SELECT
c.constraint_type as Type,
substr(x.index_name, 1, 40) as INDX_NAME,
substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
substr(c.table_name, 1, 40) as TABLE_NAME,
substr(c.owner, 1, 10) as OWNER
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE
c.constraint_name=x.index_name AND
c.owner='JOOPLOC';

10.5 finding the tablespace of a constraint or constraint owner:
-----------------------------------------------------------------

SELECT
substr(s.segment_name, 1, 40) as Segmentname,
substr(c.constraint_name, 1, 40) as Constraintname,
substr(s.tablespace_name, 1, 40) as Tablespace,
substr(s.segment_type, 1, 10) as Type
FROM DBA_SEGMENTS s, DBA_CONSTRAINTS c
WHERE
s.segment_name=c.constraint_name
AND
c.owner='PROJECTS';

10.6 Retrieving index create statements:
----------------------------------------

DBA_INDEXES
DBA_IND_COLUMNS

SELECT
substr(i.index_name, 1, 40) as INDEX_NAME,
substr(i.index_type, 1, 15) as INDEX_TYPE,
substr(i.table_name, 1, 40) as TABLE_NAME,
substr(c.index_owner, 1, 10) as INDEX_OWNER,
substr(c.column_name, 1, 40) as COLUMN_NAME,
c.column_position as POSITION
FROM DBA_INDEXES i, DBA_IND_COLUMNS c
WHERE i.index_name=c.index_name AND i.owner='SALES';

10.7 Enabling and disabling constraints:
----------------------------------------

-- enable:

alter table tablename enable constraint constraint_name;

-- disable:

alter table tablename disable constraint constraint_name;

-- example:

ALTER TABLE EMPLOYEE DISABLE CONSTRAINT FK_DEPNO;
ALTER TABLE EMPLOYEE ENABLE CONSTRAINT FK_DEPNO;

but this also works:

ALTER TABLE DEMO
ENABLE PRIMARY KEY;

-- Disable all FK constraints of a schema in one go:

SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' disable constraint '||


constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER' AND constraint_type='R'
AND TABLE_NAME LIKE 'MKM%';
SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' enable constraint '||
constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER' AND constraint_type='R'
AND TABLE_NAME LIKE 'MKM%';

10.8 Creating a constraint that is initially disabled:
------------------------------------------------------

This can be handy when, for example, loading a table
in which duplicate values may occur.

ALTER TABLE CUSTOMERS
ADD CONSTRAINT PK_CUST PRIMARY KEY (custid) DISABLE;

If it now turns out, when enabling the constraint, that duplicate records
exist, we can place these duplicate records in the EXCEPTIONS table:

1. create the EXCEPTIONS table:

@ORACLE_HOME\rdbms\admin\utlexcpt.sql

2. enable the constraint:

ALTER TABLE CUSTOMERS
ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS;

Now the EXCEPTIONS table contains the duplicate rows.

3. Which duplicate rows:

SELECT c.custid, c.name
FROM CUSTOMERS c, EXCEPTIONS s
WHERE c.rowid=s.row_id;
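
A possible follow-up (a sketch; it assumes you want to keep one row per
custid): delete the duplicates and then enable the primary key:

DELETE FROM customers c
WHERE c.rowid NOT IN (SELECT MIN(c2.rowid)
                      FROM customers c2
                      GROUP BY c2.custid);

ALTER TABLE customers ENABLE PRIMARY KEY;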

10.9 Using PK and FK constraints:
---------------------------------

10.9.1: Example of normal use with DRI:

create table customers
(
custid number not null,
custname varchar(10),
CONSTRAINT pk_cust PRIMARY KEY (custid)
);

create table contacts
(
contactid number not null,
custid number,
contactname varchar(10),
CONSTRAINT pk_contactid PRIMARY KEY (contactid),
CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid)
);

With this setup you cannot simply delete a row with a given custid from
customers if a row with the same custid exists in contacts.

10.9.2: Example with ON DELETE CASCADE:

create table contacts
(
contactid number not null,
custid number,
contactname varchar(10),
CONSTRAINT pk_contactid PRIMARY KEY (contactid),
CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid) ON DELETE
CASCADE
);

The clause "ON DELETE SET NULL" can also be used.

Now it is possible to delete a row in customers while a matching custid
exists in contacts: the corresponding row in contacts is then deleted as well.
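
A short illustration of the cascade (the inserted values are just samples):

INSERT INTO customers VALUES (1, 'KLM');
INSERT INTO contacts VALUES (10, 1, 'Jansen');

DELETE FROM customers WHERE custid = 1;

SELECT COUNT(*) FROM contacts WHERE custid = 1;   -- returns 0: child row is gone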

10.10 Procedures for insert and delete:
---------------------------------------

As an example, on the customers table:

CREATE OR REPLACE PROCEDURE newcustomer (custid NUMBER, custname VARCHAR)


IS
BEGIN
INSERT INTO customers values (custid,custname);
commit;
END;
/

CREATE OR REPLACE PROCEDURE delcustomer (cust NUMBER)


IS
BEGIN
delete from customers where custid=cust;
commit;
END;
/

10.11 User data dictionary views:
---------------------------------

We have already seen that for constraint information we mainly consult
the following views:

DBA_TABLES
DBA_INDEXES,
DBA_CONSTRAINTS,
DBA_IND_COLUMNS,
DBA_SEGMENTS

These, however, are for the DBA.

Ordinary users can query information from the USER_ and ALL_ views.

USER_ : objects in the user's own schema
ALL_ : objects the user has access to

USER_TABLES, ALL_TABLES
USER_INDEXES, ALL_INDEXES
USER_CONSTRAINTS, ALL_CONSTRAINTS
USER_VIEWS, ALL_VIEWS
USER_SEQUENCES, ALL_SEQUENCES
USER_CONS_COLUMNS, ALL_CONS_COLUMNS
USER_TAB_COLUMNS, ALL_TAB_COLUMNS
USER_SOURCE, ALL_SOURCE

cat
tab
col
dict

10.12 Create and drop index examples:
-------------------------------------

CREATE UNIQUE INDEX HEATCUST0 ON HEATCUST(CUSTTYPE)


TABLESPACE INDEX_SMALL PCTFREE 10
STORAGE(INITIAL 163840 NEXT 163840 PCTINCREASE 0 );

DROP INDEX indexname;

10.13 Check the height of indexes:


---------------------------------

Is an index rebuild necessary?

SELECT index_name, owner, blevel,


decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
FROM dba_indexes
WHERE owner='SALES'
and blevel > 3;

10.14 Make indexes unusable (before a large dataload):


-----------------------------------------------------

-- Make Indexes unusable


alter index HEAT_CUSTOMER_DISCON_DATE unusable;
alter index HEAT_CUSTOMER_EMAIL_ADDRESS unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE unusable;

-- Enable Indexes again


alter index HEAT_CUSTOMER_DISCON_DATE rebuild;
alter index HEAT_CUSTOMER_EMAIL_ADDRESS rebuild;
alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;
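
While the indexes are unusable, sessions that must not fail on them can be
told to skip them; a commonly used companion setting (verify availability on
your version) is:

ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE;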

================================
11. DBMS_JOB and scheduled Jobs:
================================

Used in Oracle 9i and lower versions.

11.1 SNP background processes:
------------------------------

Scheduled jobs are possible when the SNP background processes
are activated. This is configured via the init.ora:

JOB_QUEUE_PROCESSES=1 number of SNP processes (SNP0, SNP1), max 36; used for
replication and job queues
JOB_QUEUE_INTERVAL=60 check interval

11.2 DBMS_JOB package:


----------------------

DBMS_JOB.SUBMIT()
DBMS_JOB.REMOVE()
DBMS_JOB.CHANGE()
DBMS_JOB.WHAT()
DBMS_JOB.NEXT_DATE()
DBMS_JOB.INTERVAL()
DBMS_JOB.RUN()

11.2.1 DBMS_JOB.SUBMIT()
-----------------------

There are actually two versions: SUBMIT() and ISUBMIT().

PROCEDURE DBMS_JOB.SUBMIT
(job OUT BINARY_INTEGER,
what IN VARCHAR2,
next_date IN DATE DEFAULT SYSDATE,
interval IN VARCHAR2 DEFAULT 'NULL',
no_parse IN BOOLEAN DEFAULT FALSE);

PROCEDURE DBMS_JOB.ISUBMIT
(job IN BINARY_INTEGER,
what IN VARCHAR2,
next_date IN DATE DEFAULT SYSDATE,
interval IN VARCHAR2 DEFAULT 'NULL',
no_parse in BOOLEAN DEFAULT FALSE);

The difference between ISUBMIT and SUBMIT is that ISUBMIT specifies a job number,
whereas SUBMIT returns a job number generated by the DBMS_JOB package
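
For illustration, a minimal ISUBMIT sketch (job number 999 is an arbitrary
example value):

begin
DBMS_JOB.ISUBMIT(999, 'test1;', SYSDATE, 'SYSDATE + 1');
commit;
end;
/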

Look for submitted jobs:


------------------------

select job, last_date, next_date, interval, substr(what, 1, 50)


from dba_jobs;

Submit a job:
--------------

The jobnumber (if you use SUBMIT() ) will be derived from the sequence SYS.JOBSEQ

Suppose you have the following procedure:

create or replace procedure test1 is


begin
dbms_output.put_line('Hallo grapjas.');
end;
/

Example 1:
----------

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', Sysdate, 'Sysdate+1');
commit;
end;
/

DECLARE
jobno NUMBER;
BEGIN
DBMS_JOB.SUBMIT
(job => jobno
,what => 'test1;'
,next_date => SYSDATE
,interval => 'SYSDATE+1/24');
COMMIT;
END;
/

So suppose you submit the above job at 08.15h. Then the next, and first time,
that the job will run is at 09.15h.

Example 2:
----------

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1),
'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
commit;
end;
/

Example 3:
----------

VARIABLE jobno NUMBER


BEGIN
DBMS_JOB.SUBMIT(:jobno,
'DBMS_DDL.ANALYZE_OBJECT(''TABLE'',
''CHARLIE'', ''X1'',
''ESTIMATE'', NULL, 50);',
SYSDATE, 'SYSDATE + 1');
COMMIT;
END;
/

PRINT jobno

JOBNO
----------
14144

Example 4: this job is scheduled every hour


-------------------------------------------

DECLARE
jobno NUMBER;
BEGIN
DBMS_JOB.SUBMIT
(job => jobno
,what => 'begin space_logger; end;'
,next_date => SYSDATE
,interval => 'SYSDATE+1/24');
COMMIT;
END;
/

Example 5: Examples of intervals
--------------------------------

'SYSDATE + 7'                  : exactly seven days from the last execution
'SYSDATE + 1/48'               : every half hour
'NEXT_DAY(TRUNC(SYSDATE), ''MONDAY'') + 15/24'
                               : every Monday at 3 PM
'NEXT_DAY(ADD_MONTHS(TRUNC(SYSDATE, ''Q''), 3), ''THURSDAY'')'
                               : first Thursday of each quarter
'TRUNC(SYSDATE + 1)'           : every day at 12:00 midnight
'TRUNC(SYSDATE + 1) + 8/24'    : every day at 8:00 a.m.
'NEXT_DAY(TRUNC(SYSDATE), ''TUESDAY'') + 12/24'
                               : every Tuesday at 12:00 noon
'TRUNC(LAST_DAY(SYSDATE) + 1)' : first day of the month at midnight
'TRUNC(ADD_MONTHS(SYSDATE + 2/24, 3), ''Q'') - 1/24'
                               : last day of the quarter at 11:00 p.m.
'TRUNC(LEAST(NEXT_DAY(SYSDATE,''MONDAY''), NEXT_DAY(SYSDATE,''WEDNESDAY''),
NEXT_DAY(SYSDATE,''FRIDAY''))) + 9/24'
                               : every Monday, Wednesday, and Friday at 9:00 a.m.

---------------------------------------------------------------------------------
Example 6:
----------

You have this testprocedure

create or replace procedure test1 as


id_next number;
begin
select max(id) into id_next from iftest;
insert into iftest
(id)
values
(id_next+1);
commit;
end;
/

Suppose on 16 July at 9:26h you do:

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1),
'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
commit;
end;
/

select job, to_char(this_date,'DD-MM-YYYY;HH24:MI'),
to_char(next_date, 'DD-MM-YYYY;HH24:MI')
from dba_jobs;

JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT


---------- ---------------- ----------------
25 31-07-2004;09:26

Suppose on 16 July at 9:38h you do:

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE)+1,
'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
commit;
end;
/

JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT


---------- ---------------- ----------------
25 31-07-2004;09:26
26 01-08-2004;09:38

Suppose on 16 July at 9:41h you do:

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE,
'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
commit;
end;
/

JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT


---------- ---------------- ----------------
27 31-08-2004;09:41
25 31-07-2004;09:26
26 01-08-2004;09:39

Suppose on 16 July at 9:46h you do:

variable jobno number;


begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE, 'TRUNC(LAST_DAY(SYSDATE + 1/24 ) )');
commit;
end;
/

JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT


--------- ---------------- ----------------
27 31-08-2004;09:41
28 31-07-2004;00:00
25 31-07-2004;09:26
29 31-07-2004;00:00

--------------------------------------------------------------------------------
variable jobno number;
begin
DBMS_JOB.SUBMIT(:jobno, 'test1;', null, 'TRUNC(LAST_DAY(SYSDATE ) + 1)' );
commit;
end;
/

In the job definition, use two single quotation marks around strings.
Always include a semicolon at the end of the job definition.

11.2.2 DBMS_JOB.REMOVE()
------------------------

Removing a job from the job queue

To remove a job from the job queue, use the REMOVE procedure in the DBMS_JOB
package.
The following statements remove job number 14144 from the job queue:

BEGIN
DBMS_JOB.REMOVE(14144);
END;
/

11.2.3 DBMS_JOB.CHANGE()
------------------------

In this example, job number 14144 is altered to execute every three days:

BEGIN
DBMS_JOB.CHANGE(14144, NULL, NULL, 'SYSDATE + 3');
END;
/

If you specify NULL for WHAT, NEXT_DATE, or INTERVAL when you call the
procedure DBMS_JOB.CHANGE, the current value remains unchanged.

11.2.4 DBMS_JOB.WHAT()
----------------------

You can alter the definition of a job by calling the DBMS_JOB.WHAT procedure.
The following example changes the definition for job number 14144:

BEGIN
DBMS_JOB.WHAT(14144,
'DBMS_DDL.ANALYZE_OBJECT(''TABLE'',
''HR'', ''DEPARTMENTS'',
''ESTIMATE'', NULL, 50);');
END;
/

11.2.5 DBMS_JOB.NEXT_DATE()
---------------------------

You can alter the next execution time for a job by calling the
DBMS_JOB.NEXT_DATE procedure, as shown in the following example:

BEGIN
DBMS_JOB.NEXT_DATE(14144, SYSDATE + 4);
END;
/

11.2.6 DBMS_JOB.INTERVAL():
---------------------------

The following example illustrates changing the execution interval


for a job by calling the DBMS_JOB.INTERVAL procedure:

BEGIN
DBMS_JOB.INTERVAL(14144, 'NULL');
END;
/

In this case (interval 'NULL'), the job will not run again after it
successfully executes, and it will be deleted from the job queue.

Another example, setting the interval to every half hour:

execute dbms_job.interval(<job number>,'SYSDATE+(1/48)');

11.2.7 DBMS_JOB.BROKEN():
-------------------------

A job is labeled as either broken or not broken. Oracle does not attempt to run
broken jobs.

Example:

BEGIN
DBMS_JOB.BROKEN(10, TRUE);
END;
/

Example:

The following example marks job 14144 as not broken and sets its
next execution date to the following Monday:

BEGIN
DBMS_JOB.BROKEN(14144, FALSE, NEXT_DAY(SYSDATE, 'MONDAY'));
END;
/

Example:

exec DBMS_JOB.BROKEN( V_JOB_ID, true);

Example:

select JOB into V_JOB_ID from DBA_JOBS


where WHAT like '%SONERA%';

DBMS_SNAPSHOT.REFRESH( 'SONERA', 'C');

DBMS_JOB.BROKEN( V_JOB_ID, false);

fix broken jobs:


----------------

/* Filename on companion disk: job5.sql */


CREATE OR REPLACE PROCEDURE job_fixer
AS
/*
|| calls DBMS_JOB.BROKEN to try and set
|| any broken jobs to unbroken
*/

/* cursor selects user's broken jobs */


CURSOR broken_jobs_cur
IS
SELECT job
FROM user_jobs
WHERE broken = 'Y';

BEGIN
FOR job_rec IN broken_jobs_cur
LOOP
DBMS_JOB.BROKEN(job_rec.job,FALSE);
END LOOP;
END job_fixer;
/

11.2.8 DBMS_JOB.RUN():
----------------------

BEGIN
DBMS_JOB.RUN(14144);
END;
/

11.3 DBMS_SCHEDULER:
--------------------

Used in Oracle 10g.

BEGIN

DBMS_SCHEDULER.create_job (
job_name => 'test_self_contained_job',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''JOHN''); END;',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=hourly; byminute=0',
end_date => NULL,
enabled => TRUE,
comments => 'Job created using the CREATE JOB procedure.');
End;
/

BEGIN
DBMS_SCHEDULER.run_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB',
use_current_session => FALSE);
END;
/

BEGIN
DBMS_SCHEDULER.stop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
END;
/

Jobs can be deleted using the DROP_JOB procedure:

BEGIN
DBMS_SCHEDULER.drop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
DBMS_SCHEDULER.drop_job (job_name => 'test_self_contained_job');
END;
/
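
To check the status of jobs created this way, you can for example query
DBA_SCHEDULER_JOBS:

SELECT job_name, enabled, state, next_run_date
FROM dba_scheduler_jobs;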

Oracle 10g:
-----------

DBMS_JOB has been replaced by DBMS_SCHEDULER.

Views:

V_$SCHEDULER_RUNNING_JOBS
GV_$SCHEDULER_RUNNING_JOBS
DBA_QUEUE_SCHEDULES
USER_QUEUE_SCHEDULES
_DEFSCHEDULE
DEFSCHEDULE
AQ$SCHEDULER$_JOBQTAB_S
AQ$_SCHEDULER$_JOBQTAB_F
AQ$SCHEDULER$_JOBQTAB
AQ$SCHEDULER$_JOBQTAB_R
AQ$SCHEDULER$_EVENT_QTAB_S
AQ$_SCHEDULER$_EVENT_QTAB_F
AQ$SCHEDULER$_EVENT_QTAB
AQ$SCHEDULER$_EVENT_QTAB_R
DBA_SCHEDULER_PROGRAMS
USER_SCHEDULER_PROGRAMS
ALL_SCHEDULER_PROGRAMS
DBA_SCHEDULER_JOBS
USER_SCHEDULER_JOBS
ALL_SCHEDULER_JOBS
DBA_SCHEDULER_JOB_CLASSES
ALL_SCHEDULER_JOB_CLASSES
DBA_SCHEDULER_WINDOWS
ALL_SCHEDULER_WINDOWS
DBA_SCHEDULER_PROGRAM_ARGS
USER_SCHEDULER_PROGRAM_ARGS
ALL_SCHEDULER_PROGRAM_ARGS
DBA_SCHEDULER_JOB_ARGS
USER_SCHEDULER_JOB_ARGS
ALL_SCHEDULER_JOB_ARGS
DBA_SCHEDULER_JOB_LOG
DBA_SCHEDULER_JOB_RUN_DETAILS
USER_SCHEDULER_JOB_LOG
USER_SCHEDULER_JOB_RUN_DETAILS
ALL_SCHEDULER_JOB_LOG
ALL_SCHEDULER_JOB_RUN_DETAILS
DBA_SCHEDULER_WINDOW_LOG
DBA_SCHEDULER_WINDOW_DETAILS
ALL_SCHEDULER_WINDOW_LOG
ALL_SCHEDULER_WINDOW_DETAILS
DBA_SCHEDULER_WINDOW_GROUPS
ALL_SCHEDULER_WINDOW_GROUPS
DBA_SCHEDULER_WINGROUP_MEMBERS
ALL_SCHEDULER_WINGROUP_MEMBERS
DBA_SCHEDULER_SCHEDULES
USER_SCHEDULER_SCHEDULES
ALL_SCHEDULER_SCHEDULES
DBA_SCHEDULER_RUNNING_JOBS
ALL_SCHEDULER_RUNNING_JOBS
USER_SCHEDULER_RUNNING_JOBS
DBA_SCHEDULER_GLOBAL_ATTRIBUTE
ALL_SCHEDULER_GLOBAL_ATTRIBUTE
DBA_SCHEDULER_CHAINS
USER_SCHEDULER_CHAINS
ALL_SCHEDULER_CHAINS
DBA_SCHEDULER_CHAIN_RULES
USER_SCHEDULER_CHAIN_RULES
ALL_SCHEDULER_CHAIN_RULES
DBA_SCHEDULER_CHAIN_STEPS
USER_SCHEDULER_CHAIN_STEPS
ALL_SCHEDULER_CHAIN_STEPS
DBA_SCHEDULER_RUNNING_CHAINS
USER_SCHEDULER_RUNNING_CHAINS
ALL_SCHEDULER_RUNNING_CHAINS

==================
12. Net8 / SQLNet:
==================

In, for example, SQL*Plus you enter:

-----------------
Username: system
Password: manager
Host String: XXX
-----------------

NET8 on the client looks in TNSNAMES.ORA for the first matching entry

XXX= (description.. protocol..host...port.. SERVICE_NAME=Y)

XXX is really an alias and is thus arbitrary, although it naturally tends to
match the instance name or database name you want to connect to.
But it could even be 'pipo'.

If XXX is not found, the client reports:

ORA-12154 TNS: could not resolve SERVICE NAME

Next, NET8 uses the connect descriptor Y to contact the listener
on the server that listens for Y.

If Y is not what the listener expects, the listener reports to the client:

TNS: listener could not resolve SERVICE_NAME in connect descriptor

12.1 sqlnet.ora example:
------------------------

SQLNET.AUTHENTICATION_SERVICES= (NTS)

NAMES.DIRECTORY_PATH= (TNSNAMES)

12.2 tnsnames.ora examples:
---------------------------

Example 1.

DB1=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
)
(CONNECT_DATA=
(SERVICE_NAME=DB1.world)
)
)

Example 2.

DB1.world=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
)
(CONNECT_DATA=(SID=DB1))
)

DB2.world=
(... )

DB3.world=
(... )

etc..

Example 3.

RCAT =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = w2ktest)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = rcat.antapex)
)
)

12.3 listener.ora examples:
---------------------------

Example 1:
----------

LISTENER=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
)
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=DB1.world)
(ORACLE_HOME=D:\oracle8i)
(SID_NAME=DB1)
)
)

Example 2:
----------

############## WPRD #####################################################


LOG_DIRECTORY_WPRD = /opt/oracle/admin/WPRD/network/log
LOG_FILE_WPRD = WPRD.log
TRACE_LEVEL_WPRD = OFF #ADMIN
TRACE_DIRECTORY_WPRD = /opt/oracle/admin/WPRD/network/trace
TRACE_FILE_WPRD = WPRD.trc

WPRD =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=blnl01)(PORT=1521)))))

SID_LIST_WPRD =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = WPRD)
(ORACLE_HOME = /opt/oracle/product/8.1.6)
(SID_NAME = WPRD)))

############## WTST #####################################################


LOG_DIRECTORY_WTST = /opt/oracle/admin/WTST/network/log
LOG_FILE_WTST = WTST.log
TRACE_LEVEL_WTST = OFF #ADMIN
TRACE_DIRECTORY_WTST = /opt/oracle/admin/WTST/network/trace
TRACE_FILE_WTST = WTST.trc

WTST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=blnl01)(PORT=1522)))))

SID_LIST_WTST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = WTST)
(ORACLE_HOME = /opt/oracle/product/8.1.6)
(SID_NAME = WTST)))

Example 3:
----------

# LISTENER.ORA Network Configuration File:


D:\oracle\ora901\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = missrv)(PORT = 1521))
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = o901)
(ORACLE_HOME = D:\oracle\ora901)
(SID_NAME = o901)
)
(SID_DESC =
(SID_NAME = MAST)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = NATOPS)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = VRF)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = DRILLS)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = DDS)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = IVP)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
(SID_DESC =
(SID_NAME = ALBERT)
(ORACLE_HOME = D:\oracle\ora901)
(PROGRAM = hsodbc)
)
)

12.4: CONNECT TIME FAILOVER:


----------------------------

The connect-time failover feature allows clients to connect to another
listener if the initial connection to the first listener fails. Multiple
listener locations are specified in the client's tnsnames.ora file.
If a connection attempt to the first listener fails, a connection request to
the next listener in the list is attempted. This feature increases the
availability of the Oracle service should a listener location be unavailable.
Here is an example of what a tnsnames.ora file looks like with connect-time
failover enabled:

ORCL=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=DBPROD)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=DBFAIL)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
)
)

12.5: CLIENT LOAD BALANCING:


----------------------------

Client Load Balancing is a feature that allows clients to randomly select
from a list of listeners. Oracle Net moves through the list of listeners and
balances the load of connection requests across the available listeners.
Here is an example of the tnsnames.ora entry that allows for load balancing:
Here is an example of the tnsnames.ora entry that allows for load balancing:

ORCL=
(DESCRIPTION=
(LOAD_BALANCE=ON)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1522))
(ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
)
)

Notice the additional parameter LOAD_BALANCE. This enables load balancing
between the two listener locations specified.

12.6: ORACLE SHARED SERVER:


---------------------------

With dedicated server, each server process has a PGA, outside the SGA.
When Shared Server is used, the user global areas (UGA) are kept in the SGA,
in the large pool (if configured).

With a few init.ora parameters, you can configure Shared Server.

1. DISPATCHERS:

The DISPATCHERS parameter defines the number of dispatchers that should start
when the instance is started.
For example, if you want to configure 3 TCP/IP dispatchers and 2 IPC
dispatchers, you set the parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=3)(PRO=IPC)(DIS=2)"

For example, if you have 500 concurrent TCP/IP connections, and you want each
dispatcher to manage
50 concurrent connections, you need 10 dispatchers.
You set your DISPATCHERS parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=10)"

2. SHARED_SERVERS:

The SHARED_SERVERS parameter specifies the minimum number of shared server
processes to start and retain when the Oracle instance is started.

View information about dispatchers and shared servers with the following commands
and queries:

lsnrctl services

SELECT name, status, messages, idle, busy, bytes, breaks


FROM v$dispatcher;
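
You can also monitor the shared servers themselves (a sketch using the
V$SHARED_SERVER view):

SELECT name, status, requests, busy, idle
FROM v$shared_server;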

12.7: Keeping Oracle connections alive through a Firewall:


----------------------------------------------------------

Implementing keep-alive packets (Dead Connection Detection) is done with
SQLNET.EXPIRE_TIME in the server-side sqlnet.ora.
(SQLNET.INBOUND_CONNECT_TIMEOUT is a different parameter: it limits the time
a client may take to complete the connect phase.)
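
A minimal sqlnet.ora sketch (the 10-minute value is just an example):

# server-side sqlnet.ora: probe idle connections every 10 minutes
SQLNET.EXPIRE_TIME = 10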

Notes:
=======

Note 1:
-------
Doc ID: Note:274130.1 Content Type: TEXT/PLAIN
Subject: SHARED SERVER CONFIGURATION Creation Date: 25-MAY-2004
Type: BULLETIN Last Revision Date: 24-JUN-2004
Status: PUBLISHED
PURPOSE
-------

This article discusses about the configuration of shared servers on 9i DB.

SHARED SERVER CONFIGURATION:


===========================

1. Add the parameter shared_servers in the init.ora

SHARED_SERVERS specifies the number of server processes that you want to


create when an instance is started up. If system load decreases,
this minimum number of servers is maintained. Therefore, you should take
care not to set SHARED_SERVERS too high at system startup.

Parameter type Integer


Parameter class Dynamic: ALTER SYSTEM

2. Add the parameter DISPATCHERS in the init.ora

DISPATCHERS configures dispatcher processes in the shared server


architecture.

USAGE:
-----
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"

3. Save the init.ora file.

4. Change the connect string in tnsnames.ora from

ORACLE.IDC.ORACLE.COM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = oracle)
)
)

to

ORACLE.IDC.ORACLE.COM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = SHARED)
(SERVICE_NAME = Oracle)
)
)

Change SERVER=SHARED.

5. Shutdown and startup the database.

6. Make a new connection to database other than SYSDBA.

(NOTE: SYSDBA will always acquire dedicated connection by default.)

7. Check whether the connection is made through a shared server.

> SELECT server FROM v$session;

SERVER
---------
DEDICATED
DEDICATED
DEDICATED
SHARED
DEDICATED

NOTE:
====
The following parameters are optional (if not specified, Oracle selects
defaults):

MAX_DISPATCHERS:
===============
Specifies the maximum number of dispatcher processes that can run
simultaneously.

SHARED_SERVERS:
==============
Specifies the number of shared server processes created when an instance
is started up.

MAX_SHARED_SERVERS:
==================
Specifies the maximum number of shared server processes that can run
simultaneously.

CIRCUITS:
========
Specifies the total number of virtual circuits that are available for
inbound and outbound network sessions.

SHARED_SERVER_SESSIONS:
======================
Specifies the total number of shared server user sessions to allow.
Setting this parameter enables you to reserve user sessions for
dedicated servers.
Other parameters affected by shared server that may require adjustment:

LARGE_POOL_SIZE:
===============
Specifies the size in bytes of the large pool allocation heap. Shared
server may force the default value to be set too high, causing
performance problems or problems starting the database.

SESSIONS:
========
Specifies the maximum number of sessions that can be created in the
system. May need to be adjusted for shared server.

12.7 password for the listener:


-------------------------------

Note 1:

LSNRCTL> set password <password> where <password> is the password you want to use.

To change a password, use "Change_Password" You can also designate a password when
you configure the listener
with the Net8 Assistant. These passwords are stored in the listener.ora file and
although they will not show
in the Net8 Assistant, they are readable in the listener.ora file.

Note 2:

The password can be set either by specifying it through the command


CHANGE_PASSWORD, or through a parameter
in the listener.ora file. We saw how to do that through the CHANGE_PASSWORD
command earlier.
If the password is changed this way, it should not be specified in the
listener.ora file. The password is not
displayed anywhere. When supplying the password in the listener control utility,
you must supply it at the
Password: prompt as shown above. You cannot specify the password in one line as
shown below.

LSNRCTL> set password t0p53cr3t


LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
TNS-01169: The listener has not recognized the password
LSNRCTL>

Note 3:

A more correct method would be to password protect the listener functions.

See the net8 admin guide for info but in short -- you can:

LSNRCTL> change_password
Old password: <just hit enter if you don't have one yet>
New password:
Reenter new password:
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Password changed for LISTENER
The command completed successfully

LSNRCTL> set password


Password:
The command completed successfully

LSNRCTL> save_config
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521)))
Saved LISTENER configuration parameters.
Listener Parameter File /d01/home/oracle8i/network/admin/listener.ora
Old Parameter File /d01/home/oracle8i/network/admin/listener.bak
The command completed successfully
LSNRCTL>

Now, you need to use a password to do various operations (such as STOP) but not
others (such as STATUS)

=============================================
13. Datadictionary queries Rollback segments:
=============================================

13.1 name, location, and status of rollback segments:
------------------------------------------------------

SELECT substr(segment_name, 1, 10), substr(tablespace_name, 1, 20), status,


INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

13.2 impression of the number of active transactions per rollback segment:
---------------------------------------------------------------------------

number of active transactions: V$ROLLSTAT
name of the rollback segment:  V$ROLLNAME

SELECT n.name, s.xacts


FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;
(usn=undo segment number)

13.3 size, name, extents, bytes of the rollback segments:
---------------------------------------------------------

SELECT substr(segment_name, 1, 15), bytes/1024/1024 Size_in_MB, blocks,


extents, substr(tablespace_name, 1, 15)
FROM DBA_SEGMENTS WHERE segment_type='ROLLBACK';

SELECT n.name, s.extents, s.rssize


FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

Create Tablespace RBS
datafile '/db1/oradata/oem/rbs.dbf' SIZE 200M AUTOEXTEND ON NEXT 20M MAXSIZE 500M
LOGGING
DEFAULT STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 2
MAXEXTENTS 100
PCTINCREASE 0
)
ONLINE
PERMANENT;

13.4 The OPTIMAL parameter:


--------------------------

SELECT n.name, s.optsize


FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.5 writes to rollback segments:
---------------------------------

Run the query at the start of the measurement and again at the end,
and look at the difference:

SELECT n.name, s.writes FROM V$ROLLNAME n, V$ROLLSTAT s


WHERE n.usn=s.usn

13.6 Who and which processes use the rollback segments:


-------------------------------------------------------

Query 1: query on v$lock, v$session, v$rollname

column rr heading 'RB Segment' format a15


column us heading 'Username' format a10
column os heading 'OS user' format a10
column te heading 'Terminal' format a15

SELECT R.name rr, nvl(S.username, 'no transaction') us,


S.Osuser os,
S.Terminal te
FROM V$LOCK L, V$SESSION S, V$ROLLNAME R
WHERE L.Sid=S.Sid(+)
AND trunc(L.Id1/65536)=R.usn
AND L.Type='TX'
AND L.Lmode=6
ORDER BY R.name
/

Query 2:

SELECT r.name "RBS", s.sid, s.serial#, s.username "USER", t.status,


t.cr_get, t.phy_io, t.used_ublk, t.noundo,
substr(s.program, 1, 78) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr
AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io
/

13.7 Determining the minimum number of rollback segments:
---------------------------------------------------------

Determine in the init.ora, via "show parameter transactions":

transactions= a (max no of transactions, say 100)
transactions_per_rollback_segment= b (allowed no of concurrent tr/rbs, say 10)

minimum = a/b (100/10 = 10)
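
A small sketch that computes this minimum directly from v$parameter (same
parameter names as above):

SELECT CEIL(MAX(DECODE(name, 'transactions', TO_NUMBER(value))) /
            MAX(DECODE(name, 'transactions_per_rollback_segment',
                       TO_NUMBER(value)))) AS min_rbs
FROM v$parameter
WHERE name IN ('transactions', 'transactions_per_rollback_segment');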

13.8 Determining the minimum size of rollback segments:
-------------------------------------------------------

lts = largest transaction size (normal production, not the occasional batch load)

min_size = minimum size of a rollback segment
min_size = lts * 100 / (100 - (40 {%free} + 15 {iaiu} + 5 {header}))
min_size = lts * 1.67

Suppose lts=700K, then the starting value for the rollback segment = 1400K

=========================================================
14. Data dictionary queries m.b.t. security, permissions:
=========================================================

14.1 user information in the data dictionary
--------------------------------------------

SELECT username, user_id, password


FROM DBA_USERS
WHERE username='Kees';

14.2 default tablespace, account_status of users


------------------------------------------------

SELECT username, default_tablespace, account_status


FROM DBA_USERS;

14.3 tablespace quotas of users


-------------------------------

SELECT tablespace_name, bytes, max_bytes, blocks, max_blocks


FROM DBA_TS_QUOTAS
WHERE username='CHARLIE';

14.4 Querying the system privileges of a user: DBA_SYS_PRIVS
-------------------------------------------------------------
SELECT substr(grantee, 1, 15), substr(privilege, 1, 40),
admin_option
FROM DBA_SYS_PRIVS WHERE grantee='CHARLIE';

SELECT * FROM dba_sys_privs


WHERE grantee='Kees';

14.5 Invalid objects in DBA_OBJECTS:


------------------------------------

SELECT substr(owner, 1, 10), substr(object_name, 1, 40),


substr(object_type, 1, 40), status
FROM DBA_OBJECTS
WHERE status='INVALID';

14.6 session information


------------------------

SELECT sid, serial#, substr(username, 1, 10), substr(osuser, 1, 10),


substr(schemaname, 1, 10),
substr(program, 1, 15), substr(module, 1, 15), status, logon_time,
substr(terminal, 1, 15), substr(machine, 1, 15)
FROM V$SESSION;

14.7 kill a session


-------------------

alter system kill session 'SID, SERIAL#'
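
-- example (the SID and SERIAL# values are samples, taken from the
-- V$SESSION query above):
alter system kill session '130,4711';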

========================
15. INIT.ORA parameters:
========================

15.1 init.ora parameters and ARCHIVE MODE:
------------------------------------------

LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=archcc1_%s.log

10g:

LOG_ARCHIVE_DEST=c:\oracle\oradata\log
LOG_ARCHIVE_FORMAT=arch_%t_%s_%r.dbf

other:

LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2
15.2 init.ora and performance and the SGA:
------------------------------------------

SORT_AREA_SIZE = 65536 (per PGA, max sort area)
SORT_AREA_RETAINED_SIZE = 65536 (size after sort)
PROCESSES = 100 (all processes)
DB_BLOCK_SIZE = 8192
DB_BLOCK_BUFFERS = 3400 (DB_CACHE_SIZE in Oracle 9i)
SHARED_POOL_SIZE = 52428800
LOG_BUFFER = 26214400
LARGE_POOL_SIZE =
DBWR_IO_SLAVES (DB_WRITER_PROCESSES)
DB_WRITER_PROCESSES = 2
LGWR_IO_SLAVES=
DB_FILE_MULTIBLOCK_READ_COUNT = 16 (minimize I/O during table scans; it
specifies the max number of blocks in one I/O operation during a
sequential read)
BUFFER_POOL_RECYCLE =
BUFFER_POOL_KEEP =
TIMED_STATISTICS = TRUE (whether statistics related to time are collected)
OPTIMIZER_MODE = RULE, CHOOSE, FIRST_ROWS, ALL_ROWS

PARALLEL_MIN_SERVERS = 2 (for Parallel Query and parallel recovery)
PARALLEL_MAX_SERVERS = 4

RECOVERY_PARALLELISM = 2 (sets parallel recovery at the database level)

SHARED_POOL_SIZE: in bytes or K or M
SHARED_POOL_SIZE specifies (in bytes) the size of the shared pool. The shared
pool contains shared cursors, stored procedures, control structures, and other
structures. If you set PARALLEL_AUTOMATIC_TUNING to false,
Oracle also allocates parallel execution message buffers from the shared pool.
Larger values improve performance in multi-user systems.
Smaller values use less memory.
You can monitor utilization of the shared pool by querying the view V$SGASTAT.

SHARED_POOL_RESERVED_SIZE:
The parameter was introduced in Oracle 7.1.5 and provides a means of reserving
a portion of the shared pool for large memory allocations. The reserved area
comes out of the shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to
about 10% of SHARED_POOL_SIZE, unless either the shared pool is very large OR
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value.
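
For example (a short sketch; V$SGASTAT and V$SHARED_POOL_RESERVED are the
relevant views):

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'shared pool' AND name = 'free memory';

SELECT free_space, requests, request_misses, request_failures
FROM v$shared_pool_reserved;
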
15.3 init.ora and jobs:
-----------------------

JOB_QUEUE_PROCESSES=1 number of SNP processes (SNP0, SNP1), max 36; used for
replication and job queues
JOB_QUEUE_INTERVAL=60 check interval

15.4 instance name, sid:


------------------------

db_name = CC1
global_names = TRUE
instance_name = CC1
db_domain = antapex.net

15.5 other parameters:
----------------------

OS_AUTHENT_PREFIX = "" (default is OPS$)
REMOTE_OS_AUTHENTICATION = TRUE or FALSE (whether OS authentication
over the network is allowed)
REMOTE_LOGIN_PASSWORDFILE = NONE or EXCLUSIVE

distributed_transactions = 0 or >0 (starts the RECO process)

aq_tm_processes = (advanced queuing, message queues)

mts_servers = (number of shared server processes in


multithreaded server)
mts_max_servers =

audit_file_dest = /dbs01/app/oracle/admin/AMI_PRD/adump
background_dump_dest = /dbs01/app/oracle/admin/AMI_PRD/bdump
user_dump_dest = /dbs01/app/oracle/admin/AMI_PRD/udump
core_dump_dest = /dbs01/app/oracle/admin/AMI_PRD/cdump

resource_limit = true                  (specifies whether resource limits in profiles
                                        are in effect)

license_max_sessions =                 (max number of concurrent user sessions)
license_sessions_warning =             (at this limit, warning in alert log)
license_max_users =                    (maximum number of users that can be created
                                        in the database)
compatible = 8.1.7.0.0
control_files = /dbs04/oradata/AMI_PRD/ctrl/cc1_01.ctl
control_files = /dbs05/oradata/AMI_PRD/ctrl/cc1_02.ctl
control_files = /dbs06/oradata/AMI_PRD/ctrl/cc1_03.ctl

db_files = 150                         (max number of data files opened)
java_pool_size = 0
log_checkpoint_interval = 10000
log_checkpoint_timeout = 1800
max_dump_file_size = 10240
max_enabled_roles = 40
nls_date_format = "DD-MM-YYYY"
nls_language = AMERICAN
nls_territory = AMERICA
o7_dictionary_accessibility = TRUE
open_cursors = 250
optimizer_max_permutations = 1000
optimizer_mode = CHOOSE
parallel_max_servers = 5
pre_page_sga = TRUE
service_names = CC1
utl_file_dir = /app01/oradata/cc1/utl_file

All init.ora parameters:
------------------------

PARAMETER DESCRIPTION
------------------------------ ----------------------------------------
O7_DICTIONARY_ACCESSIBILITY Version 7 Dictionary Accessibility
support [TRUE | FALSE]

active_instance_count          Number of active instances in the
                               cluster database [NUMBER]
aq_tm_processes Number of AQ Time Managers to start [NUMBER]
archive_lag_target Maximum number of seconds of redos the
standby could lose [NUMBER]
asm_diskgroups Disk groups to mount automatically [CHAR]
asm_diskstring Disk set locations for discovery [CHAR]
asm_power_limit Number of processes for disk rebalancing [NUMBER]
audit_file_dest Directory in which auditing files are to reside
['Path']
audit_sys_operations Enable sys auditing [TRUE|FALSE]
audit_trail Enable system auditing [NONE|DB|DB_EXTENDED|OS]

background_core_dump           Core Size for Background Processes [partial |
                               full]
background_dump_dest Detached process dump directory [file_path]
backup_tape_io_slaves BACKUP Tape I/O slaves [TRUE | FALSE]
bitmap_merge_area_size Maximum memory allow for BITMAP MERGE [NUMBER]
blank_trimming Blank trimming semantics parameter [TRUE | FALSE]
buffer_pool_keep Number of database blocks/latches in
keep buffer pool [CHAR: (buffers:n, latches:m)]
buffer_pool_recycle Number of database blocks/latches in
recycle buffer pool [CHAR: (buffers:n,
latches:m)]

circuits                       Max number of virtual circuits [NUMBER]
cluster_database If TRUE startup in cluster database mode [TRUE |
FALSE]
cluster_database_instances Number of instances to use for sizing
cluster db SGA structures [NUMBER]
cluster_interconnects Interconnects for RAC use [CHAR]
commit_point_strength Bias this node has toward not preparing
in a two-phase commit [NUMBER (0-255)]
compatible Database will be completely compatible
with this software version [CHAR: 9.2.0.0.0]
control_file_record_keep_time Control file record keep time in days [NUMBER]
control_files Control file names list [file_path,file_path..]
core_dump_dest Core dump directory [file_path]
cpu_count Initial number of cpu's for this instance
[NUMBER]
create_bitmap_area_size Size of create bitmap buffer for bitmap
index [INTEGER]
cursor_sharing Cursor sharing mode [EXACT | SIMILAR | FORCE]
create_stored_outlines Create stored outlines for DML statements [TRUE |
FALSE | category_name]
cursor_space_for_time Use more memory in order to get faster
execution [TRUE | FALSE]

db_16k_cache_size              Size of cache for 16K buffers [bytes]
db_2k_cache_size Size of cache for 2K buffers [bytes]
db_32k_cache_size Size of cache for 32K buffers [bytes]
db_4k_cache_size Size of cache for 4K buffers [bytes]
db_8k_cache_size Size of cache for 8K buffers [bytes]
db_block_buffers Number of database blocks to cache in memory
[bytes: 8M or NUMBER of blocks (Ora7)]
db_block_checking Data and index block checking [TRUE | FALSE]
db_block_checksum Store checksum in db blocks and check
during reads [TRUE | FALSE]
db_block_size Size of database block [bytes]
db_cache_advice Buffer cache sizing advisory [internal use only]
db_cache_size Size of DEFAULT buffer pool for standard
block size buffers [bytes]
db_create_file_dest Default database location ['Path_to_directory']
db_create_online_log_dest_n Online log/controlfile destination (where n=1-5)
['Path']
db_domain Directory part of global database name
stored with CREATE DATABASE [CHAR]
* db_file_multiblock_read_count Db blocks to be read each IO [NUMBER]
db_file_name_convert Datafile name convert patterns and
strings for standby/clone db [, ]
db_files Max allowable # db files [NUMBER]
db_flashback_retention_target Maximum Flashback Database log retention time in
minutes [NUMBER]
db_keep_cache_size Size of KEEP buffer pool for standard
block size buffers [bytes]
db_name Database name specified in CREATE
DATABASE [CHAR]
db_recovery_file_dest Default database recovery file location [CHAR]
db_recovery_file_dest_size Database recovery files size limit [bytes]
db_recycle_cache_size Size of RECYCLE buffer pool for standard
block size buffers [bytes]
db_unique_name Database Unique Name [CHAR]
db_writer_processes Number of background database writer
processes to start [NUMBER]
dblink_encrypt_login Enforce password for distributed login
always be encrypted [TRUE | FALSE]
dbwr_io_slaves DBWR I/O slaves [NUMBER]
ddl_wait_for_locks Disable NOWAIT DML lock acquisitions [TRUE |
FALSE]
dg_broker_config_file1 Data guard broker configuration file #1 ['Path']
dg_broker_config_file2 Data guard broker configuration file #2 ['Path']
dg_broker_start Start Data Guard broker framework (DMON
process) [TRUE | FALSE]
disk_asynch_io Use asynch I/O for random access devices [TRUE |
FALSE]
dispatchers Specifications of dispatchers
(MTS_dispatchers in Ora 8) [CHAR]
distributed_lock_timeout Number of seconds a distributed transaction
waits for a lock [Internal]
dml_locks Dml locks - one for each table modified
in a transaction [NUMBER]
drs_start Start DG Broker monitor (DMON process)[TRUE |
FALSE]

enqueue_resources              Resources for enqueues [NUMBER]
event Debug event control - default null string [CHAR]

fal_client                     FAL client [CHAR]
fal_server FAL server list [CHAR]
fast_start_io_target Upper bound on recovery reads [NUMBER]
fast_start_mttr_target MTTR target of forward crash recovery
in seconds [NUMBER]
fast_start_parallel_rollback Max number of parallel recovery slaves
that may be used [LOW | HIGH | FALSE]
file_mapping Enable file mapping [TRUE | FALSE]
fileio_network_adapters Network Adapters for File I/O [CHAR]
filesystemio_options IO operations on filesystem files [Internal]
fixed_date Fix SYSDATE value for debugging[NONE or
'2000_12_30_24_59_00']

gc_files_to_locks              RAC/OPS - lock granularity number of
                               global cache locks per file (DFS) [CHAR]
gcs_server_processes Number of background gcs server processes to
start [NUMBER]
global_context_pool_size Global Application Context Pool Size in
Bytes [bytes]
global_names Enforce that database links have same
name as remote database [TRUE | FALSE]

hash_area_size                 Size of in-memory hash work area (Shared
                               Server) [bytes]
hash_join_enabled Enable/disable hash join (CBO) [TRUE | FALSE]
hi_shared_memory_address SGA starting address (high order 32-bits
on 64-bit platforms) [NUMBER]
hs_autoregister Enable automatic server DD updates in HS
agent self-registration [TRUE | FALSE]

ifile                          Include file in init.ora ['path_to_file']
instance_groups List of instance group names [CHAR]
instance_name Instance name supported by the instance [CHAR]
instance_number Instance number [NUMBER]
instance_type Type of instance to be executed
RDBMS or Automated Storage Management [RDBMS |
ASM]

java_max_sessionspace_size     Max allowed size in bytes of a Java
                               sessionspace [bytes]
java_pool_size Size in bytes of the Java pool [bytes]
java_soft_sessionspace_limit Warning limit on size in bytes of a Java
sessionspace [NUMBER]
job_queue_processes Number of job queue slave processes [NUMBER]

large_pool_size                Size in bytes of the large allocation pool
                               [bytes]
ldap_directory_access RDBMS's LDAP access option [NONE | PASSWORD |
SSL]
license_max_sessions Maximum number of non-system user sessions
(concurrent licensing) [NUMBER]
license_max_users Maximum number of named users that can be created
(named user licensing) [NUMBER]
license_sessions_warning Warning level for number of non-system
user sessions [NUMBER]
local_listener Define which listeners instances register with
[CHAR]
lock_name_space Used for generating lock names for
standby/primary database
assign each a unique name space [CHAR]
lock_sga Lock entire SGA in physical memory [Internal]
log_archive_config Log archive config
[SEND|NOSEND] [RECEIVE|NORECEIVE] [ DG_CONFIG]
log_archive_dest Archive logs destination ['path_to_directory']
log_archive_dest_n Archive logging parameters (n=1-10)
Enterprise Edition [CHAR]
log_archive_dest_state_n Archive logging parameter status (n=1-10) [CHAR]
Enterprise Edition [CHAR]
log_archive_duplex_dest Duplex archival destination ['path_to_directory']
log_archive_format Archive log filename format [CHAR: "MyApp%S.ARC"]
log_archive_local_first Establish EXPEDITE attribute default value [TRUE
| FALSE]
log_archive_max_processes Maximum number of active ARCH processes [NUMBER]
log_archive_min_succeed_dest Minimum number of archive destinations
that must succeed [NUMBER]
log_archive_start Start archival process on SGA initialization
[TRUE | FALSE]
log_archive_trace Archive log tracing level [NUMBER]

log_buffer                     Redo circular buffer size [bytes]
log_checkpoint_interval Checkpoint threshold, # redo blocks [NUMBER]
log_checkpoint_timeout Checkpoint threshold, maximum time interval
between
checkpoints in seconds [NUMBER]
log_checkpoints_to_alert Log checkpoint begin/end to alert file [TRUE |
FALSE]
log_file_name_convert Logfile name convert patterns and
strings for standby/clone db [, ]
log_parallelism Number of log buffer strands [NUMBER]
logmnr_max_persistent_sessions Maximum number of threads to mine [NUMBER]

max_commit_propagation_delay   Max age of new snapshot in .01 seconds [NUMBER]
max_dispatchers Max number of dispatchers [NUMBER]
max_dump_file_size Maximum size (blocks) of dump file [UNLIMITED or
bytes]
max_enabled_roles Max number of roles a user can have enabled
[NUMBER]
max_rollback_segments Max number of rollback segments in SGA cache
[NUMBER]
max_shared_servers Max number of shared servers [NUMBER]
mts_circuits Max number of circuits [NUMBER]
mts_dispatchers Specifications of dispatchers [CHAR]
mts_listener_address Address(es) of network listener [CHAR]
mts_max_dispatchers Max number of dispatchers [NUMBER]
mts_max_servers Max number of shared servers [NUMBER]
mts_multiple_listeners Are multiple listeners enabled? [TRUE | FALSE]
mts_servers Number of shared servers to start up [NUMBER]
mts_service Service supported by dispatchers [CHAR]
mts_sessions max number of shared server sessions [NUMBER]

nls_calendar                   NLS calendar system name (Default=GREGORIAN)
                               [CHAR]
nls_comp NLS comparison, Enterprise Edition [BINARY |
ANSI]
nls_currency NLS local currency symbol [CHAR]
nls_date_format NLS Oracle date format [CHAR]
nls_date_language NLS date language name (Default=AMERICAN) [CHAR]
nls_dual_currency Dual currency symbol [CHAR]
nls_iso_currency NLS ISO currency territory name
override the default set by NLS_TERRITORY [CHAR]
nls_language NLS language name (session default) [CHAR]
nls_length_semantics Create columns using byte or char
semantics by default [BYTE | CHAR]
nls_nchar_conv_excp NLS raise an exception instead of
allowing implicit conversion [CHAR]
nls_numeric_characters NLS numeric characters [CHAR]
nls_sort Case-sensitive or insensitive sort [Language]
language may be BINARY, BINARY_CI, BINARY_AI,
GERMAN, GERMAN_CI, etc
nls_territory NLS territory name (country settings) [CHAR]
nls_time_format Time format [CHAR]
nls_time_tz_format Time with timezone format [CHAR]
nls_timestamp_format Time stamp format [CHAR]
nls_timestamp_tz_format Timestamp with timezone format [CHAR]

object_cache_max_size_percent  Percentage of maximum size over optimal
                               of the user session's object cache [NUMBER]
object_cache_optimal_size Optimal size of the user session's
object cache in bytes [bytes]
olap_page_pool_size Size of the olap page pool in bytes [bytes]
open_cursors Max # cursors per session [NUMBER]
open_links Max # open links per session [NUMBER]
open_links_per_instance Max # open links per instance [NUMBER]
optimizer_dynamic_sampling Optimizer dynamic sampling [NUMBER]
optimizer_features_enable Optimizer plan compatibility
(oracle version e.g. 8.1.7) [CHAR]
optimizer_index_caching Optimizer index caching percent [NUMBER]
optimizer_index_cost_adj Optimizer index cost adjustment [NUMBER]
optimizer_max_permutations Optimizer maximum join permutations per
query block [NUMBER]
optimizer_mode Optimizer mode [RULE | CHOOSE | FIRST_ROWS |
ALL_ROWS]
oracle_trace_collection_name Oracle TRACE default collection name [CHAR]
oracle_trace_collection_path Oracle TRACE collection path [CHAR]
oracle_trace_collection_size Oracle TRACE collection file max. size [NUMBER]
oracle_trace_enable Oracle Trace enabled/disabled [TRUE | FALSE]
oracle_trace_facility_name Oracle TRACE default facility name [CHAR]
oracle_trace_facility_path Oracle TRACE facility path [CHAR]
os_authent_prefix Prefix for auto-logon accounts [CHAR]
os_roles Retrieve roles from the operating system [TRUE |
FALSE]

parallel_adaptive_multi_user   Enable adaptive setting of degree for
                               multiple user streams [TRUE | FALSE]
parallel_automatic_tuning Enable intelligent defaults for parallel
execution parameters [TRUE | FALSE]
parallel_execution_message_size Message buffer size for parallel
execution [bytes]
parallel_instance_group Instance group to use for all parallel
operations [CHAR]
parallel_max_servers Maximum parallel query servers per
instance [NUMBER]
parallel_min_percent Minimum percent of threads required for
parallel query [NUMBER]
parallel_min_servers Minimum parallel query servers per
instance [NUMBER]
parallel_server If TRUE startup in parallel server mode [TRUE |
FALSE]
parallel_server_instances Number of instances to use for sizing
OPS SGA structures [NUMBER]
parallel_threads_per_cpu Number of parallel execution threads per
CPU [NUMBER]
partition_view_enabled Enable/disable partitioned views [TRUE | FALSE]
pga_aggregate_target Target size for the aggregate PGA memory
consumed by the instance [bytes]
plsql_code_type PL/SQL code-type [INTERPRETED | NATIVE]
plsql_compiler_flags PL/SQL compiler flags [CHAR]
plsql_debug PL/SQL debug [TRUE | FALSE]
plsql_native_c_compiler plsql native C compiler [CHAR]
plsql_native_library_dir plsql native library dir ['Path_to_directory']
plsql_native_library_subdir_count plsql native library number of
subdirectories [NUMBER]
plsql_native_linker plsql native linker [CHAR]
plsql_native_make_file_name plsql native compilation make file [CHAR]
plsql_native_make_utility plsql native compilation make utility [CHAR]
plsql_optimize_level PL/SQL optimize level [NUMBER]
plsql_v2_compatibility PL/SQL version 2.x compatibility flag [TRUE |
FALSE]
plsql_warnings PL/SQL compiler warnings settings [CHAR]
See also DBMS_WARNING and
DBA_PLSQL_OBJECT_SETTINGS
pre_page_sga Pre-page sga for process [TRUE | FALSE]
processes User processes [NUMBER]

query_rewrite_enabled          Allow rewrite of queries using materialized views
                               if enabled [FORCE | TRUE | FALSE]
query_rewrite_integrity Perform rewrite using materialized views
with desired integrity [STALE_TOLERATED |
TRUSTED | ENFORCED]

rdbms_server_dn                RDBMS's Distinguished Name [CHAR]
read_only_open_delayed If TRUE delay opening of read only files
until first access [TRUE | FALSE]
recovery_parallelism Number of server processes to use for
parallel recovery [NUMBER]
remote_archive_enable Remote archival enable setting [RECEIVE[,SEND] |
FALSE | TRUE]
remote_dependencies_mode Remote-procedure-call dependencies mode
parameter [TIMESTAMP | SIGNATURE]
remote_listener Remote listener [CHAR]
remote_login_passwordfile Use a password file [NONE | SHARED | EXCLUSIVE]
remote_os_authent Allow non-secure remote clients to use
auto-logon accounts [TRUE | FALSE]
remote_os_roles Allow non-secure remote clients to use
os roles [TRUE | FALSE]
replication_dependency_tracking Tracking dependency for Replication
parallel propagation [TRUE | FALSE]
resource_limit Master switch for resource limit [TRUE | FALSE]
resource_manager_plan Resource mgr top plan [Plan_Name]
resumable_timeout Set resumable_timeout, seconds [NUMBER]
rollback_segments Undo segment list [CHAR]
row_locking Row-locking [ALWAYS | DEFAULT | INTENT]
(Default=always)

serial_reuse                   Reuse the frame segments [DISABLE | SELECT |
                               DML | PLSQL | ALL | NULL]
serializable Serializable [Internal]
service_names Service names supported by the instance [CHAR]
session_cached_cursors Number of cursors to save in the session
cursor cache [NUMBER]
session_max_open_files Maximum number of open files allowed per
session [NUMBER]
sessions User and system sessions [NUMBER]
sga_max_size Max total SGA size [bytes]
sga_target Target size of SGA [bytes]
shadow_core_dump Core Size for Shadow Processes [PARTIAL | FULL |
NONE]
shared_memory_address SGA starting address (low order 32-bits
on 64-bit platforms) [NUMBER]
shared_pool_reserved_size Size in bytes of reserved area of shared
pool [bytes]
shared_pool_size Size in bytes of shared pool [bytes]
shared_server_sessions Max number of shared server sessions [NUMBER]
shared_servers Number of shared servers to start up [NUMBER]
skip_unusable_indexes Skip unusable indexes if set to true [TRUE |
FALSE]
sort_area_retained_size Size of in-memory sort work area
retained between fetch calls [bytes]
sort_area_size Size of in-memory sort work area [bytes]
smtp_out_server utl_smtp server and port configuration parameter
[server_clause]
spfile Server parameter file [CHAR]
sp_name Service Provider Name [CHAR]
sql92_security Require select privilege for searched
update/delete [TRUE | FALSE]
sql_trace Enable SQL trace [TRUE | FALSE]
sqltune_category Category qualifier for applying hintsets [CHAR]
sql_version Sql language version parameter for
compatibility issues [CHAR]
standby_archive_dest Standby database archivelog destination
text string ['Path_to_directory']
standby_file_management If auto then files are created/dropped
automatically on standby [MANUAL | AUTO]
star_transformation_enabled Enable the use of star transformation
[TRUE | FALSE | DISABLE_TEMP_TABLE]
statistics_level Statistics level [ALL | TYPICAL | BASIC]
streams_pool_size Size in bytes of the streams pool [bytes]

tape_asynch_io                 Use asynch I/O requests for tape devices [TRUE |
                               FALSE]
thread Redo thread to mount [NUMBER]
timed_os_statistics Internal os statistic gathering interval
in seconds [NUMBER]
timed_statistics Maintain internal timing statistics [TRUE |
FALSE]
trace_enabled Enable KST tracing (Internal parameter) [TRUE |
FALSE]
tracefile_identifier Trace file custom identifier [CHAR]
transaction_auditing Transaction auditing records generated
in the redo log [TRUE | FALSE]
transactions Max. number of concurrent active
transactions [NUMBER]
transactions_per_rollback_segment Number of active transactions per
rollback segment [NUMBER]

undo_management                Instance runs in SMU mode if TRUE, else
                               in RBU mode [MANUAL | AUTO]
undo_retention Undo retention in seconds [NUMBER]
undo_suppress_errors Suppress RBU errors in SMU mode [TRUE | FALSE]
undo_tablespace Use or switch undo tablespace [Undo_tbsp_name]
use_indirect_data_buffers Enable indirect data buffers (very large
SGA on 32-bit platforms [TRUE | FALSE]
user_dump_dest User process dump directory ['Path_to_directory']
utl_file_dir utl_file accessible directories list
utl_file_dir='Path1', 'Path2'..
or
utl_file_dir='Path1' # Must be
utl_file_dir='Path2' # consecutive entries

workarea_size_policy           Policy used to size SQL working areas [MANUAL |
                               AUTO]

db_file_multiblock_read_count:
The db_file_multiblock_read_count initialization parameter determines the maximum
number of database blocks
read in one I/O operation during a full table scan. The setting of this
parameter can reduce
the number of I/O calls required for a full table scan, thus improving
performance.
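
The parameter is dynamic; for example, to test a larger value for the current
session only:

ALTER SESSION SET db_file_multiblock_read_count = 32;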

15.6 9i UNDO or ROLLBACK parameters:
------------------------------------

- UNDO_MANAGEMENT
If AUTO, use automatic undo management mode. If MANUAL, use manual undo
management mode.

- UNDO_TABLESPACE
A dynamic parameter specifying the name of an undo tablespace to use.
- UNDO_RETENTION
A dynamic parameter specifying the length of time to retain undo. Default is 900
seconds.

- UNDO_SUPPRESS_ERRORS
If TRUE, suppress error messages if manual undo management SQL statements are
issued when operating
in automatic undo management mode. If FALSE, issue error message. This is a
dynamic parameter.

If your database is in manual undo management mode, you can still use the following
8i-style parameters:

- ROLLBACK_SEGMENTS
Specifies the rollback segments to be acquired at instance startup

- TRANSACTIONS
Specifies the maximum number of concurrent transactions

- TRANSACTIONS_PER_ROLLBACK_SEGMENT
Specifies the number of concurrent transactions that each rollback segment is
expected to handle

- MAX_ROLLBACK_SEGMENTS
Specifies the maximum number of rollback segments that can be online for any
instance
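
UNDO_RETENTION and UNDO_TABLESPACE are dynamic, so they can be changed on a running
instance, for example (assuming an undo tablespace UNDOTBS2 exists):

ALTER SYSTEM SET undo_retention = 1800;
ALTER SYSTEM SET undo_tablespace = UNDOTBS2;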

15.7 Oracle 9i init file examples:
----------------------------------

Example 1:
----------

# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520

# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300

# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump

# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
"/vobs/oracle/oradata/mynewdb/control02.ctl",
"/vobs/oracle/oradata/mynewdb/control03.ctl")

# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE

# Shared Server
# Uncomment and use first DISPATCHERS parameter below when your listener is
# configured for SSL
# (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
# "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
"(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
(PROTOCOL=TCP)

# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb

# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

# Network Registration
INSTANCE_NAME=mynewdb

# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800

# Processes and Sessions
PROCESSES=150

# Redo Log and Recovery
FAST_START_MTTR_TARGET=300

# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN

# Sort, Hash Joins, Bitmap Indexes
SORT_AREA_SIZE=524288

# Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Example 2:
----------

##############################################################################
# Copyright (c) 1991, 2001 by Oracle Corporation
##############################################################################

###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_cache_size=50331648
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=D:\oracle\admin\iasdb\bdump
core_dump_dest=D:\oracle\admin\iasdb\cdump
timed_statistics=TRUE
user_dump_dest=D:\oracle\admin\iasdb\udump

###########################################
# Distributed, Replication and Snapshot
###########################################
db_domain=missrv.miskm.mindef.nl
remote_login_passwordfile=EXCLUSIVE

###########################################
# File Configuration
###########################################
control_files=("D:\oracle\oradata\iasdb\CONTROL01.CTL",
"D:\oracle\oradata\iasdb\CONTROL02.CTL", "D:\oracle\oradata\iasdb\CONTROL03.CTL")

###########################################
# Job Queues
###########################################
job_queue_processes=4

###########################################
# MTS
###########################################
dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)",
"(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"

###########################################
# Miscellaneous
###########################################
aq_tm_processes=1
compatible=9.0.0
db_name=iasdb

###########################################
# Network Registration
###########################################
instance_name=iasdb

###########################################
# Pools
###########################################
java_pool_size=41943040
shared_pool_size=33554432

###########################################
# Processes and Sessions
###########################################
processes=150

###########################################
# Redo Log and Recovery
###########################################
fast_start_mttr_target=300

###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=33554432
sort_area_size=524288

###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS

==============
17. Snapshots:
==============

Snapshots allow you to replicate data based on column- and/or row-level subsetting,
while multimaster replication requires replication of the entire table.

You need a database link to implement replication.

17.1 Database link:
-------------------

In the local database, where the snapshot copy will reside, issue a statement
such as:

CREATE PUBLIC DATABASE LINK MY_LINK
CONNECT TO HARRY IDENTIFIED BY password
USING 'DB1';

The service name "DB1" is resolved via tnsnames.ora into a connect descriptor,
which supplies the remote server name, the protocol, and the SID of the remote
database.

Now it is possible to query, for example, the employee table in the remote
database "DB1":

SELECT * FROM employee@MY_LINK;

Two-phase commit (2PC) is also implemented:

update employee set amount=amount-100;
update employee@my_link set amount=amount+100;
commit;

17.2 Snapshots:
---------------

There are in general 2 styles of snapshots available

Simple snapshot:

One to one replication of a remote table to a local snapshot (=table).

The refresh of the snapshot can be a complete refresh, with the refresh rate
specified in the "create snapshot" command.
Also a snapshot log can be used at the remote original table in order to
replicate
only the transaction data.

Complex snapshot:

If multiple remote tables are joined in order to create/refresh a local snapshot,
it is a "complex snapshot". Only complete refreshes are possible.
If joins or complex query clauses are used, like group by, one can only
use a "complex snapshot".

-> Example COMPLEX snapshot:

On the local database:

CREATE SNAPSHOT EMP_DEPT_COUNT
pctfree 5
tablespace SNAP
storage (initial 100K next 100K pctincrease 0)
REFRESH COMPLETE
START WITH SYSDATE
NEXT SYSDATE+7
AS
SELECT DEPTNO, COUNT(*) Dept_count
FROM EMPLOYEE@MY_LINK
GROUP BY Deptno;

Because the records in this snapshot will not correspond one to one
with the records in the master table (since the query contains a group by clause)
this is a complex snapshot. Thus the snapshot will be completely recreated
every time it is refreshed.

-> Example SIMPLE snapshot:

On the local database:

CREATE SNAPSHOT EMP_DEPT_COUNT
pctfree 5
tablespace SNAP
storage (initial 100K next 100K pctincrease 0)
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE+7
AS
SELECT * FROM EMPLOYEE@MY_LINK;

In this case the refresh fast clause tells oracle to use a snapshot log to refresh
the local snapshot.
When a snapshotlog is used, only the changes to the master table are sent to the
targets.
The snapshot log must be created in the master database (where the original object
is):

create snapshot log on employee
tablespace data
storage (initial 100K next 100K pctincrease 0);

Snapshot groups:
----------------

A snapshot group in a replication system maintains a partial or complete copy of
the objects at the target master group. Snapshot groups cannot span master group
boundaries.
Figure 3-7 displays the correlation between Groups A and B at the master site and
Groups A and B at the snapshot site.

Group A at the snapshot site (see Figure 3-7) contains only some of the objects in
the corresponding Group A
at the master site. Group B at the snapshot site contains all objects in Group B
at the master site.
Under no circumstances, however, could Group B at the snapshot site contain
objects FROM Group A at the master site.
As illustrated in Figure 3-7, a snapshot group has the same name as the master
group on which the snapshot group is based.
For example, a snapshot group based on a "PERSONNEL" master group is also named
"PERSONNEL."

In addition to maintaining organizational consistency between snapshot sites and
master sites, snapshot groups are required for supporting updateable snapshots.
If a snapshot does not belong to a snapshot group, then it must be a read-only
snapshot.

A snapshot group is used to organize snapshots in a logical manner.

Refresh groups:
---------------

If 2 or more master tables which have a PK-FK relationship are replicated, it is
possible that the 2 corresponding snapshots violate the referential integrity,
because of different refresh times and schedules etc..

Related snapshots can be collected into refresh groups. The purpose of a refresh
group is to coordinate the refresh schedules of its members.

This is achieved via the DBMS_REFRESH package. The procedures in this package are
MAKE, ADD, SUBTRACT, CHANGE, DESTROY, and REFRESH.

A refresh group could contain more than one snapshot group.
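
As a sketch, assuming two existing snapshots SCOTT.SNAP_EMP and SCOTT.SNAP_DEPT,
a refresh group that refreshes both consistently every hour could be created with:

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SCOTT.EMP_DEPT_GROUP',
    list      => 'SCOTT.SNAP_EMP, SCOTT.SNAP_DEPT',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');
END;
/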

Types of snapshots:
-------------------

Primary Key
-----------

Primary key snapshots are the default type of snapshot. They are updateable if the
snapshot was
created as part of a snapshot group and "FOR UPDATE" was specified when defining
the snapshot.
Changes are propagated according to the row-level changes that have occurred, as
identified by
the primary key value of the row (not the ROWID). The SQL statement for creating
an updateable,
primary key snapshot might look like:

CREATE SNAPSHOT sales.customer FOR UPDATE AS
SELECT * FROM sales.customer@dbs1.acme.com;

Primary key snapshots may contain a subquery so that you can create a horizontally
partitioned subset
of data at the remote snapshot site. This subquery may be as simple as a basic
WHERE clause or as
complex as a multilevel WHERE EXISTS clause. Primary key snapshots that contain a
selected class of subqueries
can still be incrementally or fast refreshed. The following is a subquery snapshot
with a WHERE
clause containing a subquery:

CREATE SNAPSHOT sales.orders REFRESH FAST AS
SELECT * FROM sales.orders@dbs1.acme.com o
WHERE EXISTS
(SELECT 1 FROM sales.customer@dbs1.acme.com c
WHERE o.c_id = c.c_id AND zip = 19555);

ROWID
-----

For backwards compatibility, Oracle supports ROWID snapshots in addition to the
default primary key snapshots. A ROWID snapshot is based on the physical row
identifiers (ROWIDs) of the rows in a master table.
ROWID snapshots should be used only for snapshots based on master tables from an
Oracle7 database, and should not be used when creating new snapshots based on
master tables from Oracle release 8.0 or greater databases.

CREATE SNAPSHOT sales.customer REFRESH WITH ROWID AS
SELECT * FROM sales.customer@dbs1.acme.com;

Complex
-------

To be fast refreshed, the defining query for a snapshot must observe certain
restrictions.
If you require a snapshot whose defining query is more general and cannot observe
the restrictions,
then the snapshot is complex and cannot be fast refreshed.

Specifically, a snapshot is considered complex when the defining query of the
snapshot contains:

A CONNECT BY clause

Clauses that do not comply with the requirements detailed in Table 3-1,
"Restrictions for Snapshots with Subqueries"

A set operation, such as UNION, INTERSECT, or MINUS

In most cases, a distinct or aggregate function, although it is possible
to have a distinct or aggregate function in the defining query and still have a
simple snapshot

See Also:
Oracle8i Data Warehousing Guide for more information about complex materialized
views.
"Snapshot" is synonymous with "materialized view" in Oracle documentation, and
"materialized view"
is used in the Oracle8i Data Warehousing Guide.

The following statement is an example of a complex snapshot CREATE statement:

CREATE SNAPSHOT scott.snap_employees AS
SELECT emp.empno, emp.ename FROM scott.emp@dbs1.acme.com
UNION ALL
SELECT new_emp.empno, new_emp.ename FROM scott.new_emp@dbs1.acme.com;

Read Only
---------

Any of the previously described types of snapshots can be made read-only by
omitting the FOR UPDATE clause or disabling the equivalent checkbox in the
Replication Manager interface.
Read-only snapshots use many of the same mechanisms as updateable snapshots,
except that they do not need to belong to a snapshot group.

Snapshot Registration at a Master Site
--------------------------------------

At the master site, an Oracle database automatically registers information about
snapshots based on its master table(s).
The following sections explain more about Oracle's snapshot registration
mechanism.

DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_REFRESH_TIMES dictionary views
-------------------------------------------------------------------------

You can query the DBA_REGISTERED_SNAPSHOTS data dictionary view to list the
following information about a remote snapshot:

- The owner, name, and database that contains the snapshot
- The snapshot's defining query
- Other snapshot characteristics, such as its refresh method (fast or complete)

You can also query the DBA_SNAPSHOT_REFRESH_TIMES view at the master site to
obtain the last refresh times for each snapshot. Administrators can use this
information
to monitor snapshot activity from master sites and coordinate changes to snapshot
sites
if a master table needs to be dropped, altered, or relocated.
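
For example:

SELECT owner, name, snapshot_site FROM dba_registered_snapshots;

SELECT owner, name, last_refresh FROM dba_snapshot_refresh_times;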

Internal Mechanisms
Oracle automatically registers a snapshot at its master database when you create
the snapshot,
and unregisters the snapshot when you drop it.

Caution:
Oracle cannot guarantee the registration or unregistration of a snapshot at
its master site during the creation or drop of the snapshot, respectively.
If Oracle cannot successfully register a snapshot during creation,
Oracle completes snapshot registration during a subsequent refresh of the
snapshot.
If Oracle cannot successfully unregister a snapshot when you drop the snapshot,
the registration information for the snapshot persists in the master database
until
it is manually unregistered. Complex snapshots might not be registered.

Manual registration
-------------------

If necessary, you can maintain registration manually.
Use the REGISTER_SNAPSHOT and UNREGISTER_SNAPSHOT procedures of the
DBMS_SNAPSHOT package at the master site to add, modify, or remove snapshot
registration information.
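
For example, to manually unregister a snapshot at the master site (the owner,
snapshot name, and snapshot site below are illustrative):

EXECUTE DBMS_SNAPSHOT.UNREGISTER_SNAPSHOT('SALES', 'CUSTOMER', 'SNAP1.ACME.COM');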

Snapshot Log
------------

When you create a snapshot log for a master table, Oracle creates an underlying
table
as the snapshot log. A snapshot log holds the primary keys and/or the ROWIDs of
rows
that have been updated in the master table. A snapshot log can also contain filter
columns
to support fast refreshes of snapshots with subqueries.
The name of a snapshot log's table is MLOG$_master_table_name.
The snapshot log is created in the same schema as the target master table.
One snapshot log can support multiple snapshots on its master table.

As described in the previous section, the internal trigger adds change information
to the snapshot log whenever a DML transaction has taken place on the target
master table.

There are three types of snapshot logs:

Primary Key: The snapshot records changes to the master table based on the primary
key of the affected rows.
Row ID: The snapshot records changes to the master table based on the ROWID of the
affected rows.
Combination: The snapshot records changes to the master table based on both the
primary key and the
ROWID of the affected rows. This snapshot log supports both primary key and ROWID
snapshots, which is helpful for mixed environments.

A combination snapshot log works in the same manner as the primary key and ROWID
snapshot log,
except that both the primary key and the ROWID of the affected row are recorded.

Though the difference between snapshot logs based on primary keys and ROWIDs is
small
(one records affected rows using the primary key, while the other records affected
rows using the physical ROWID),
the practical impact is large. Using ROWID snapshots and snapshot logs makes
reorganizing and truncating your master tables
difficult because it prevents your ROWID snapshots from being fast refreshed.
If you reorganize or truncate your master table, your ROWID snapshot must be
COMPLETE refreshed
because the ROWIDs of the master table have changed.
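
To see which snapshot logs exist and what they record, you can query
DBA_SNAPSHOT_LOGS, for example:

SELECT log_owner, master, log_table, rowids, primary_key
FROM dba_snapshot_logs;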

To delete a snapshot log, execute the DROP SNAPSHOT LOG SQL statement in SQL*Plus.

For example, the following statement deletes the snapshot log for a table named
CUSTOMERS in the SALES schema:

DROP SNAPSHOT LOG ON sales.customers;

To truncate the master table and also purge the rows in its snapshot log, use:

truncate table TABLE_NAME purge snapshot log;

=============
18. Triggers:
=============

A trigger is a PL/SQL code block attached to a database table and executed when an
event occurs on that table.
Triggers are implicitly invoked by DML commands. Triggers are stored as text and
compiled at execute time; because of this it is wise not to include much code in
them, but to call out to previously stored procedures or packages, as this will
greatly improve performance.

You may not use COMMIT, ROLLBACK and SAVEPOINT statements within trigger blocks.
Remember that triggers may be executed thousands of times for a large update -
they can seriously affect SQL execution performance.
Triggers may be called BEFORE or AFTER the following events :-

INSERT, UPDATE and DELETE.

Triggers may be STATEMENT or ROW types.

- STATEMENT triggers fire BEFORE or AFTER the execution of the statement
  that caused the trigger to fire.

- ROW triggers fire BEFORE or AFTER any affected row is processed.

An example of a statement trigger follows :-

CREATE OR REPLACE TRIGGER MYTRIG1
BEFORE DELETE OR INSERT OR UPDATE ON JD11.BOOK
BEGIN
  -- 'DY' gives the abbreviated day name ('SAT'); the original 'DAY' format
  -- returns the full padded name and would never match 'sat'/'sun'.
  IF (TO_CHAR(SYSDATE,'DY') IN ('SAT','SUN'))
     OR (TO_CHAR(SYSDATE,'hh24:mi') NOT BETWEEN '08:30' AND '18:30') THEN
    RAISE_APPLICATION_ERROR(-20500,'Table is secured');
  END IF;
END;

After the CREATE OR REPLACE statement is the object identifier (TRIGGER) and the
object name (MYTRIG1).
This trigger specifies that before any data change event on the BOOK table this
PL/SQL code block
will be compiled and executed. The user will not be allowed to update the table
outside of normal working hours.

An example of a row trigger follows :-

CREATE OR REPLACE TRIGGER MYTRIG2
AFTER DELETE OR INSERT OR UPDATE ON JD11.BOOK
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO JD11.XBOOK (PREVISBN, TITLE, DELDATE)
    VALUES (:OLD.ISBN, :OLD.TITLE, SYSDATE);
  ELSIF INSERTING THEN
    INSERT INTO JD11.NBOOK (ISBN, TITLE, ADDDATE)
    VALUES (:NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSIF UPDATING ('ISBN') THEN
    INSERT INTO JD11.CBOOK (OLDISBN, NEWISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSE /* UPDATE TO ANYTHING ELSE THAN ISBN */
    INSERT INTO JD11.UBOOK (ISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.TITLE, SYSDATE);
  END IF;
END;

In this case we have specified that the trigger will be executed after any data
change event on any affected row.
Within the PL/SQL block body we can check which update action is being performed
for the
currently affected row and take whatever action we feel is appropriate. Note that
we can
specify the old and new values of updated rows by prefixing column names with the
:OLD and :NEW qualifiers.

--------------------------------------------------------------------------------

The following statement creates a trigger for the Emp_tab table:

CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE DELETE OR INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Empno > 0)
DECLARE
sal_diff number;
BEGIN
sal_diff := :new.sal - :old.sal;
dbms_output.put('Old salary: ' || :old.sal);
dbms_output.put(' New salary: ' || :new.sal);
dbms_output.put_line(' Difference ' || sal_diff);
END;
/

If you enter a SQL statement such as the following:

UPDATE Emp_tab SET sal = sal + 500.00 WHERE deptno = 10;

then the trigger fires once for each row that is updated,
and it prints the new and old salaries, and the difference.

CREATE OR REPLACE TRIGGER "SALES".HENKILOROOLI_CHECK2


AFTER INSERT OR UPDATE OR DELETE ON AH_HENKILOROOLI

BEGIN
IF INSERTING OR DELETING THEN
handle_delayed_triggers ('AH_HENKILOROOLI', 'HENKILOROOLI_CHECK');
END IF;
IF INSERTING OR UPDATING OR DELETING THEN /* FE */
handle_delayed_triggers('AH_HENKILOROOLI', 'FRONTEND_FLAG'); /* FE */
END IF; /* FE */

END;

A trigger is either a stored PL/SQL block or a PL/SQL, C, or Java procedure
associated with a table,
view, schema, or the database itself. Oracle automatically executes a trigger when
a specified event takes place,
which may be in the form of a system event or a DML statement being issued against
the table.

Triggers can be:

- DML triggers on tables.
- INSTEAD OF triggers on views.
- System triggers on DATABASE or SCHEMA: with DATABASE, triggers fire for each
  event for all users; with SCHEMA, triggers fire for each event for that
  specific user.

BEFORE and AFTER Options

The BEFORE or AFTER option in the CREATE TRIGGER statement specifies exactly when
to fire the
trigger body in relation to the triggering statement that is being run.
In a CREATE TRIGGER statement, the BEFORE or AFTER option is specified just before
the triggering statement.
For example, the PRINT_SALARY_CHANGES trigger in the previous example is a BEFORE
trigger.

INSTEAD OF Triggers

The INSTEAD OF option can also be used in triggers. INSTEAD OF triggers provide a
transparent way
of modifying views that cannot be modified directly through UPDATE, INSERT, and
DELETE statements.
These triggers are called INSTEAD OF triggers because, unlike other types of
triggers,
Oracle fires the trigger instead of executing the triggering statement.
The trigger performs UPDATE, INSERT, or DELETE operations directly on the
underlying tables.

CREATE TABLE Project_tab (
  Prj_level NUMBER,
  Projno NUMBER,
  Resp_dept NUMBER);

CREATE TABLE Emp_tab (
  Empno NUMBER NOT NULL,
  Ename VARCHAR2(10),
  Job VARCHAR2(9),
  Mgr NUMBER(4),
  Hiredate DATE,
  Sal NUMBER(7,2),
  Comm NUMBER(7,2),
  Deptno NUMBER(2) NOT NULL);

CREATE TABLE Dept_tab (
  Deptno NUMBER(2) NOT NULL,
  Dname VARCHAR2(14),
  Loc VARCHAR2(13),
  Mgr_no NUMBER,
  Dept_type NUMBER);

The following example shows an INSTEAD OF trigger for inserting rows into the
MANAGER_INFO view.

CREATE OR REPLACE VIEW manager_info AS
SELECT e.ename, e.empno, d.dept_type, d.deptno, p.prj_level, p.projno
FROM Emp_tab e, Dept_tab d, Project_tab p
WHERE e.empno = d.mgr_no
AND d.deptno = p.resp_dept;

CREATE OR REPLACE TRIGGER manager_info_insert
INSTEAD OF INSERT ON manager_info
REFERENCING NEW AS n -- new manager information
FOR EACH ROW
DECLARE
  rowcnt number;
BEGIN
SELECT COUNT(*) INTO rowcnt FROM Emp_tab WHERE empno = :n.empno;
IF rowcnt = 0 THEN
INSERT INTO Emp_tab (empno,ename) VALUES (:n.empno, :n.ename);
ELSE
UPDATE Emp_tab SET Emp_tab.ename = :n.ename
WHERE Emp_tab.empno = :n.empno;
END IF;
SELECT COUNT(*) INTO rowcnt FROM Dept_tab WHERE deptno = :n.deptno;
IF rowcnt = 0 THEN
INSERT INTO Dept_tab (deptno, dept_type)
VALUES(:n.deptno, :n.dept_type);
ELSE
UPDATE Dept_tab SET Dept_tab.dept_type = :n.dept_type
WHERE Dept_tab.deptno = :n.deptno;
END IF;
SELECT COUNT(*) INTO rowcnt FROM Project_tab
WHERE Project_tab.projno = :n.projno;
IF rowcnt = 0 THEN
INSERT INTO Project_tab (projno, prj_level)
VALUES(:n.projno, :n.prj_level);
ELSE
UPDATE Project_tab SET Project_tab.prj_level = :n.prj_level
WHERE Project_tab.projno = :n.projno;
END IF;
END;

FOR EACH ROW Option

The FOR EACH ROW option determines whether the trigger is a row trigger or a
statement trigger.
If you specify FOR EACH ROW, then the trigger fires once for each row of the table
that is affected
by the triggering statement. The absence of the FOR EACH ROW option indicates that
the trigger fires only once
for each applicable statement, but not separately for each row affected by the
statement.

For example, you define the following trigger:

--------------------------------------------------------------------------------
Note:
You may need to set up the following data structures for certain examples to work:

CREATE TABLE Emp_log (
  Emp_id NUMBER,
  Log_date DATE,
  New_salary NUMBER,
  Action VARCHAR2(20));

--------------------------------------------------------------------------------

CREATE OR REPLACE TRIGGER Log_salary_increase
AFTER UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Sal > 1000)
BEGIN
INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
VALUES (:new.Empno, SYSDATE, :new.SAL, 'NEW SAL');
END;

Then, you enter the following SQL statement:

UPDATE Emp_tab SET Sal = Sal + 1000.0
WHERE Deptno = 20;

If there are five employees in department 20, then the trigger fires five times
when this statement is entered,
because five rows are affected.

The following trigger fires only once for each UPDATE of the Emp_tab table:

CREATE OR REPLACE TRIGGER Log_emp_update
AFTER UPDATE ON Emp_tab
BEGIN
INSERT INTO Emp_log (Log_date, Action)
VALUES (SYSDATE, 'Emp_tab COMMISSIONS CHANGED');
END;

Trigger Size
The size of a trigger cannot be more than 32K.

Valid SQL Statements in Trigger Bodies

The body of a trigger can contain DML SQL statements. It can also contain SELECT
statements,
but they must be SELECT... INTO... statements or the SELECT statement in the
definition of a cursor.

DDL statements are not allowed in the body of a trigger.
Also, no transaction control statements are allowed in a trigger:
ROLLBACK, COMMIT, and SAVEPOINT cannot be used. For system triggers,
{CREATE/ALTER/DROP} TABLE statements and ALTER...COMPILE are allowed.

Recompiling Triggers
Use the ALTER TRIGGER statement to recompile a trigger manually.
For example, the following statement recompiles the PRINT_SALARY_CHANGES trigger:
ALTER TRIGGER Print_salary_changes COMPILE;

Disable/enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one statement for all triggers on a table:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;
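
To check the result, the STATUS column of USER_TRIGGERS shows ENABLED or DISABLED:

SELECT trigger_name, status FROM user_triggers WHERE table_name = 'INVENTORY';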

ALTER DATABASE rename GLOBAL_NAME TO NEW_NAME;

====================================
19 BACKUP RECOVERY, TROUBLESHOOTING:
====================================

19.1 SCN:
--------

The Control files and all datafiles contain the last SCN (System Change Number)
after:

- checkpoint, for example via ALTER SYSTEM CHECKPOINT,
- shutdown normal/immediate/transactional,
- log switch occurs by the system,
- via alter system switch logfile,
- alter tablespace begin backup etc..
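
To inspect this SCN bookkeeping, compare the controlfile checkpoint SCN with the
SCNs in the datafile headers, for example:

SELECT checkpoint_change# FROM v$database;

SELECT file#, checkpoint_change# FROM v$datafile_header;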

at checkpoint the following occurs:
-----------------------------------

- The database writer (DBWR) writes all modified database blocks in the buffer
  cache back to datafiles,
- Log writer (LGWR) or the checkpoint process (CKPT) updates both the controlfile
  and the datafiles to indicate when the last checkpoint occurred (SCN)

Log switching causes a checkpoint, but a checkpoint does not cause a logswitch.

LGWR writes logbuffers to the online redo log:
----------------------------------------------

- at commit
- redolog buffers 1/3 full, or > 1 MB of changes
- before DBWR writes modified blocks to datafiles

LOG_CHECKPOINT_INTERVAL init.ora parameter:
-------------------------------------------

The LOG_CHECKPOINT_INTERVAL init.ora parameter controls how often a checkpoint
operation will be performed based upon the number of operating system blocks
that have been written to the redo log. If this value is larger than the size
of the redo log, then the checkpoint will only occur when Oracle performs a
log switch from one group to another, which is preferred.

NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_INTERVAL will be interpreted
to mean that the incremental checkpoint should not lag the tail of the
log by more than log_checkpoint_interval number of redo blocks.

On most Unix systems the operating system block size is 512 bytes. This means
that setting LOG_CHECKPOINT_INTERVAL to a value of 10,000 (the default
setting), causes a checkpoint to occur after 5,120,000 (5M) bytes are written
to the redo log. If the size of your redo log is 20M, you are taking 4
checkpoints for each log.

LOG_CHECKPOINT_TIMEOUT init.ora parameter:
------------------------------------------

The LOG_CHECKPOINT_TIMEOUT init.ora parameter controls how often a checkpoint
will be performed based on the number of seconds that have passed since the
last checkpoint.

NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_TIMEOUT will be interpreted
to mean that the incremental checkpoint should be at the log position
where the tail of the log was LOG_CHECKPOINT_TIMEOUT seconds ago.

Checkpoint frequency impacts the time required for the database to recover from
an unexpected failure. Longer intervals between checkpoints mean that more time
will be required during database recovery.

LOG_CHECKPOINTS_TO_ALERT init.ora parameter:
--------------------------------------------

The LOG_CHECKPOINTS_TO_ALERT init.ora parameter, when set to a value of TRUE,
allows you to log checkpoint start and stop times in the alert log. This is
very helpful in determining if checkpoints are occurring at the optimal
frequency and gives a chronological view of checkpoints and other database
activities occurring in the background.
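
The parameter is dynamic, for example:

ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;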

It is a misconception that setting LOG_CHECKPOINT_TIMEOUT to a given value
will initiate a log switch at that interval, enabling a recovery window used
for a stand-by database configuration. Log switches cause a checkpoint, but a
checkpoint does not cause a log switch. The only way to cause a log switch is
manually with ALTER SYSTEM SWITCH LOGFILE, or by resizing the redo logs to cause
more frequent log switches.

FAST_START_MTTR_TARGET init.ora parameter:
------------------------------------------

FAST_START_MTTR_TARGET enables you to specify the number of seconds the database
takes to perform crash recovery of a single instance.
It is the number of seconds it takes to recover from crash recovery.
The lower the value, the more often DBWR will write the blocks to disk.
FAST_START_MTTR_TARGET can be overridden by either FAST_START_IO_TARGET or
LOG_CHECKPOINT_INTERVAL.
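
For example, to aim at a 5-minute crash recovery time and then check Oracle's
estimate:

ALTER SYSTEM SET fast_start_mttr_target = 300;

SELECT target_mttr, estimated_mttr FROM v$instance_recovery;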

FAST_START_IO_TARGET init.ora parameter:
----------------------------------------

FAST_START_IO_TARGET (available only with the Oracle Enterprise Edition)
specifies the number of I/Os that should be needed during crash or instance
recovery.

Smaller values for this parameter result in faster recovery times.
This improvement in recovery performance is achieved at the expense of
additional writing activity during normal processing.

ARCHIVE_LAG_TARGET init.ora parameter:
--------------------------------------

The following initialization parameter setting sets the log switch interval
to 30 minutes (a typical value).

ARCHIVE_LAG_TARGET = 1800

Note: More on SCN:
==================

>>>> thread from asktom

You Asked
Tom,
Would you tell me what snapshot too old error. When does it happen? What's the
possible
causes? How to fix it?

Thank you very much.

Jane

and we said...
I think support note <Note:40689.1> covers this topic very well:

ORA-01555 "Snapshot too old" - Detailed Explanation


===================================================

Overview
~~~~~~~~

This article will discuss the circumstances under which a query can return the
Oracle
error ORA-01555 "snapshot too old (rollback segment too small)". The article will
then
proceed to discuss actions that can be taken to avoid the error and finally will
provide
some simple PL/SQL scripts that illustrate the issues discussed.

Terminology
~~~~~~~~~~~

It is assumed that the reader is familiar with standard Oracle terminology such as
'rollback segment' and 'SCN'. If not, the reader should first read the Oracle
Server Concepts manual and related Oracle documentation.

In addition to this, two key concepts are briefly covered below which help in the
understanding of ORA-01555:

1. READ CONSISTENCY:
====================

This is documented in the Oracle Server Concepts manual and so will not be
discussed
further. However, for the purposes of this article this should be read and
understood if
not understood already.

Oracle Server has the ability to have multi-version read consistency which is
invaluable
to you because it guarantees that you are seeing a consistent view of the data (no
'dirty
reads').

2. DELAYED BLOCK CLEANOUT:
==========================

This is best illustrated with an example: consider a transaction that updates a
million row table. This obviously visits a large number of database blocks to
make the change to the data. When the user commits the transaction Oracle does
NOT go back and revisit these blocks to make the change permanent. It is left
for the next transaction that visits any block affected by the update to 'tidy
up' the block (hence the term 'delayed block cleanout').

Whenever Oracle changes a database block (index, table, cluster) it stores a
pointer in the header of the data block which identifies the rollback segment
used to hold the rollback information for the changes made by the transaction.
(This is required if the user later elects to not commit the changes and wishes
to 'undo' the changes made.)

Upon commit, the database simply marks the relevant rollback segment header entry
as
committed. Now, when one of the changed blocks is revisited Oracle examines the
header of
the data block which indicates that it has been changed at some point. The
database needs
to confirm whether the change has been committed or whether it is currently
uncommitted.
To do this, Oracle determines the rollback segment used for the previous
transaction
(from the block's header) and then determines whether the rollback header
indicates
whether it has been committed or not.

If it is found that the block is committed then the header of the data block is
updated
so that subsequent accesses to the block do not incur this processing.

This behaviour is illustrated in a very simplified way below. Here we walk through
the
stages involved in updating a data block.

STAGE 1 - No changes made

Description: This is the starting point. At the top of the data block we have
             an area used to link active transactions to a rollback segment
             (the 'tx' part), and the rollback segment header has a table that
             stores information upon all the latest transactions that have
             used that rollback segment.

             In our example, we have two active transaction slots (01 and 02)
             and the next free slot is slot 03. (Since we are free to
             overwrite committed transactions.)

Data Block 500         Rollback Segment Header 5

+----+--------------+ +----------------------+---------+
| tx | None | | transaction entry 01 |ACTIVE |
+----+--------------+ | transaction entry 02 |ACTIVE |
| row 1 | | transaction entry 03 |COMMITTED|
| row 2 | | transaction entry 04 |COMMITTED|
| ... .. | | ... ... .. | ... |
| row n | | transaction entry nn |COMMITTED|
+-------------------+ +--------------------------------+

STAGE 2 - Row 2 is updated

Description: We have now updated row 2 of block 500. Note that the data block
             header is updated to point to rollback segment 5, transaction
             slot 3 (5.3) and that it is marked uncommitted (Active).

Data Block 500         Rollback Segment Header 5

+----+--------------+ +----------------------+---------+
| tx |5.3uncommitted|-+ | transaction entry 01 |ACTIVE |
+----+--------------+ | | transaction entry 02 |ACTIVE |
| row 1 | +-->| transaction entry 03 |ACTIVE |
| row 2 *changed* | | transaction entry 04 |COMMITTED|
| ... .. | | ... ... .. | ... |
| row n | | transaction entry nn |COMMITTED|
+------------------+ +--------------------------------+

STAGE 3 - The user issues a commit

Description: Next the user hits commit. Note that all this does is update the
             rollback segment header's corresponding transaction slot as
             committed. It does *nothing* to the data block.

Data Block 500         Rollback Segment Header 5

+----+--------------+ +----------------------+---------+
| tx |5.3uncommitted|--+ | transaction entry 01 |ACTIVE |
+----+--------------+ | | transaction entry 02 |ACTIVE |
| row 1 | +--->| transaction entry 03 |COMMITTED|
| row 2 *changed* | | transaction entry 04 |COMMITTED|
| ... .. | | ... ... .. | ... |
| row n | | transaction entry nn |COMMITTED|
+------------------+ +--------------------------------+

STAGE 4 - Another user selects data block 500

Description: Some time later another user (or the same user) revisits data
block 500. We can see that there is an uncommitted change in the data
block according to the data block's header.

Oracle then uses the data block header to look up the corresponding
rollback segment transaction table slot, sees that it has been committed,
and changes data block 500 to reflect the true state of the datablock
(i.e. it performs delayed cleanout).

Data Block 500          Rollback Segment Header 5

+----+--------------+ +----------------------+---------+
| tx | None | | transaction entry 01 |ACTIVE |
+----+--------------+ | transaction entry 02 |ACTIVE |
| row 1 | | transaction entry 03 |COMMITTED|
| row 2 | | transaction entry 04 |COMMITTED|
| ... .. | | ... ... .. | ... |
| row n | | transaction entry nn |COMMITTED|
+------------------+ +--------------------------------+

ORA-01555 Explanation
~~~~~~~~~~~~~~~~~~~~~

There are two fundamental causes of the error ORA-01555 that are a result
of Oracle trying to attain a 'read consistent' image. These are :

o The rollback information itself is overwritten, so that Oracle is unable
to roll back the (committed) transaction entries to attain a sufficiently
old version of the block.

o The transaction slot in the rollback segment's transaction table (stored
in the rollback segment's header) is overwritten, and Oracle cannot roll
back the transaction table sufficiently to derive the original rollback
segment transaction slot.

Note: If the transaction of User A is not committed, the rollback segment
entries will NOT be reused, but if User A commits, the entries become free
for reuse. If a long-running query of User B then "meets" those
overwritten entries, User B gets the error.

Both of these situations are discussed below with the series of steps that
cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is
short for 'Query Environment', which can be thought of as the environment
that existed when a query is first started and to which Oracle is trying
to attain a read consistent image. Associated with this environment is the
SCN (System Change Number) at that time and hence, QENV 50 is the query
environment with SCN 50.
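
As a side note: on Oracle 9i and later the current SCN can be read
directly, which is handy when reasoning about QENVs (a minimal sketch; on
older releases the SCN must be inferred from the SCN-related columns of
the v$ views shown in section 19.7):

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;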

CASE 1 - ROLLBACK OVERWRITTEN

This breaks down into two cases: another session overwriting the rollback
that the current session requires, or the case where the current session
overwrites the rollback information that it requires. The latter is
discussed in this article because this is usually the harder one to
understand.

Steps:

1. Session 1 starts query at time T1 and QENV 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps '3' and '4'.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block's header that it has been changed and
that the change is later than the required QENV (which was 50). Therefore
we need to get an image of the block as of this QENV.
If an old enough version of the block can be found in the buffer cache
then we will use this, otherwise we need to roll back the current block to
generate another version of the block as at the required QENV.

It is under this condition that Oracle may not be able to get the required
rollback information, because Session 1's own changes have generated
rollback information that has overwritten it, and it returns the ORA-1555
error.

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Session 1 starts query at time T1 and QENV 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 commits the changes.
(Now other transactions are free to overwrite this rollback information)

5. A session (Session 1, another session or a number of other sessions)
then uses the same rollback segment for a series of committed
transactions.

These transactions each consume a slot in the rollback segment transaction
table such that it eventually wraps around (the slots are written to in a
circular fashion) and overwrites all the slots. Note that Oracle is free
to reuse these slots since all transactions are committed.

6. Session 1's query then visits a block that has been changed since the
initial QENV was established. Oracle therefore needs to derive an image of
the block as at that point in time.

Next Oracle attempts to look up the rollback segment header's transaction
slot pointed to by the top of the data block. It then realises that this
has been overwritten and attempts to roll back the changes made to the
rollback segment header to get the original transaction slot entry.

If it cannot roll back the rollback segment transaction table sufficiently
it will return ORA-1555, since Oracle can no longer derive the required
version of the data block.

It is also possible to encounter a variant of the transaction slot being
overwritten when using block cleanout. This is briefly described below :
Session 1 starts a query at QENV 50. After this another process updates
the blocks that Session 1 will require. When Session 1 encounters these
blocks it determines that the blocks have changed and have not yet been
cleaned out (via delayed block cleanout). Session 1 must determine whether
the rows in the block existed at QENV 50, or were subsequently changed.

In order to do this, Oracle must look at the relevant rollback segment
transaction table slot to determine the committed SCN. If this SCN is
after the QENV then Oracle must try to construct an older version of the
block, and if it is before then the block just needs cleanout to be good
enough for the QENV.

If the transaction slot has been overwritten and the transaction table
cannot be rolled back to a sufficiently old version, then Oracle cannot
derive the block image and will return ORA-1555.

(Note: Normally Oracle can use an algorithm for determining a block's SCN
during block cleanout even when the rollback segment slot has been
overwritten. But in this case Oracle cannot guarantee that the version of
the block has not changed since the start of the query.)

Solutions
~~~~~~~~~

This section lists some of the solutions that can be used to avoid the
ORA-01555 problems discussed in this article. It addresses the cases where
rollback segment information is overwritten by the same session and when
the rollback segment transaction table entry is overwritten.

It is worth highlighting that if a single session experiences the
ORA-01555 and it is not one of the special cases listed at the end of this
article, then the session must be using an Oracle extension whereby
fetches across commits are tolerated. This does not follow the ANSI model
and in the rare cases where ORA-01555 is returned one of the solutions
below must be used.

CASE 1 - ROLLBACK OVERWRITTEN

1. Increase the size of the rollback segment, which will reduce the
likelihood of overwriting rollback information that is needed.

2. Reduce the number of commits (same reason as 1).

3. Run the processing against a range of data rather than the whole table
(same reason as 1; see the sketch after this list).

4. Add additional rollback segments. This will allow the updates etc. to
be spread across more rollback segments, thereby reducing the chances of
overwriting required rollback information.

5. If fetching across commits, the code can be changed so that this is not
done.

6. Ensure that the outer select does not revisit the same block at
different times during the processing. This can be achieved by :

- Using a full table scan rather than an index lookup

- Introducing a dummy sort so that we retrieve all the data, sort it and
then sequentially visit these data blocks.
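
A minimal PL/SQL sketch of solutions 2, 3 and 5 combined (the bigemp table
and its columns are borrowed from the example scripts later in this
section, purely for illustration): the driving cursor is opened and fully
closed per key range, so no cursor ever fetches across its own commits,
and only one commit is issued per range.

begin
  for range_no in 0..19 loop
    -- the implicit cursor below is fully closed again before the commit
    for rec in (select rowid rid from bigemp where a = range_no) loop
      update bigemp set done = 'Y' where rowid = rec.rid;
    end loop;
    commit;  -- one commit per range, never inside an open fetch loop
  end loop;
end;
/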

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Use any of the methods outlined above except for '6'. This will allow
transactions to spread their work across multiple rollback segments,
therefore reducing the likelihood of rollback segment transaction table
slots being consumed.

2. If it is suspected that the block cleanout variant is the cause, then force
block
cleanout to occur prior to the transaction that returns the ORA-1555. This can be
achieved by issuing the following in SQL*Plus, SQL*DBA or Server Manager :

alter session set optimizer_goal = rule;
select count(*) from table_name;

If indexes are being accessed then the problem may be an index block, and
cleanout can be forced by ensuring that the whole index is traversed. Eg,
if the index is on a numeric column with a minimum value of 25 then the
following query will force cleanout of the index :

select index_column from table_name where index_column > 24;

Examples
~~~~~~~~

Listed below are some PL/SQL examples that can be used to illustrate the
ORA-1555 cases given above. Before these PL/SQL examples will return this
error the database must be configured as follows :

o Use a small buffer cache (db_block_buffers).

REASON: You do not want the session executing the script to be able to
find old versions of the block in the buffer cache which can be used to
satisfy a block visit without requiring the rollback information.

o Use one rollback segment other than SYSTEM.

REASON: You need to ensure that the work being done is generating rollback
information that will overwrite the rollback information required.

o Ensure that the rollback segment is small.

REASON: See the reason for using one rollback segment.

ROLLBACK OVERWRITTEN

rem * 1555_a.sql -
rem * Example of getting ora-1555 "Snapshot too old" by
rem * session overwriting the rollback information required
rem * by the same session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

drop table dummy1;
create table dummy1 (a varchar2(200));

rem * Populate the example tables.

begin
for i in 1..4000 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
insert into dummy1 values ('ssssssssssss');
commit;
end if;
end loop;
commit;
end;
/

rem * Ensure that table is 'cleaned out'.

select count(*) from bigemp;

declare
-- Must use a predicate so that we revisit a changed block at a different
-- time.

-- If another tx is updating the table then we may not need the predicate
cursor c1 is select rowid, bigemp.* from bigemp where a < 20;

begin
for c1rec in c1 loop

    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done='Y' where c1rec.rowid = rowid;
commit;
end loop;
end;
/

ROLLBACK TRANSACTION SLOT OVERWRITTEN

rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              overwriting the transaction slot in the rollback
rem *              segment header. This just uses one session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

rem * Populate demo table.

begin
for i in 1..200 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
commit;
end if;
end loop;
commit;
end;
/

drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;

rem * Cleanout demo table.

select count(*) from bigemp;

declare

cursor c1 is select * from bigemp;

begin

  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
-- out then the update and commit statements can be commented and the
-- script will fail with ORA-1555 for the block cleanout variant.
update bigemp set b = 'aaaaa';
commit;

  for c1rec in c1 loop
    for i in 1..20 loop
update mydual set a=a;
commit;
end loop;
end loop;
end;
/
Special Cases
~~~~~~~~~~~~~

There are other special cases that may result in an ORA-01555. These are given
below but
are rare and so not discussed in this article :

o Trusted Oracle can return this if configured in OS MAC mode. Decreasing
LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the
problem.

o If a query visits a data block that has been changed by using the Oracle
discrete transaction facility then it will return ORA-01555.

o It is feasible that a rollback segment created with the OPTIMAL clause
may cause a query to return ORA-01555 if it has shrunk during the life of
the query, causing rollback segment information required to generate
consistent read versions of blocks to be lost.

Summary
~~~~~~~

This article has discussed the reasons behind the error ORA-01555
"Snapshot too old", has provided a list of possible methods to avoid the
error when it is encountered, and has provided simple PL/SQL scripts that
illustrate the cases discussed.

>>>>> thread about SCN

Do It Yourself (DIY) Oracle replication

Here's a demonstration. First I create a simple table, called TBL_SRC. This is the
table on which
we want to perform change-data-capture (CDC).

create table tbl_src
(
  x number primary key,
  y number
);

Next, I show a couple of CDC tables, and the trigger on TBL_SRC that will load the
CDC tables.

create table trx
(
  trx_id   varchar2(25) primary key,
  SCN      number,
  username varchar2(30)
);

create table trx_detail
( trx_id    varchar(25)
, step_id   number
, step_tms  date
, old_x     number
, old_y     number
, new_x     number
, new_y     number
, operation char(1)
);

alter table trx_detail add constraint xp_trx_detail primary key
( trx_id, step_id );

create or replace trigger b4_src
before insert or update or delete on tbl_src
for each row
DECLARE
l_trx_id VARCHAR2(25);
l_step_id NUMBER;
BEGIN
BEGIN
l_trx_id := dbms_transaction.local_transaction_id;
l_step_id := dbms_transaction.step_id;
INSERT INTO trx VALUES (l_trx_id, userenv('COMMITSCN'), USER);
EXCEPTION
WHEN dup_val_on_index THEN
NULL;
END;
INSERT INTO trx_detail
(trx_id, step_id, step_tms, old_x, old_y, new_x, new_y)
VALUES
(l_trx_id, l_step_id, SYSDATE, :OLD.x, :OLD.y, :NEW.x, :NEW.y);
END;
/

Let's see the magic in action. I'll insert a record. We'll see the 'provisional'
SCN in the TRX table.
Then we'll commit, and see the 'true'/post-commit SCN:

insert into tbl_src values ( 1, 1 );

1 row created.

select * from trx;

TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931665 CIDW

commit;

Commit complete.

select * from trx;

TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931668 CIDW

Notice how the SCN "changed" from 3732931665 to 3732931668. Oracle was doing some
background transactions in between.

And we can look at the details of the transaction:

column step_id format 999,999,999,999,999,999,999;
/

TRX_ID                    STEP_ID                      STEP_TMS       OLD_X      OLD_Y      NEW_X      NEW_Y O
------------------------- ---------------------------- --------- ---------- ---------- ---------- ---------- -
3.4.33402                      4,366,162,821,393,448   11-NOV-06                                 1          1

This approach works back to at least Oracle 7.3.4. Not perfect, because it
only captures DML. A TRUNCATE is DDL, and that's not captured. For the
actual implementation, I stored the before and after values as CSV
strings.

For 9i or later, I'd use built-in Oracle functionality.
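
As a usage example, a consumer of these DIY CDC tables could replay the
captured changes in commit (SCN) order with a query along these lines (a
sketch against the trx/trx_detail tables created above):

select t.scn, t.username, d.step_id, d.old_x, d.old_y, d.new_x, d.new_y
from   trx t, trx_detail d
where  t.trx_id = d.trx_id
order  by t.scn, d.step_id;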

19.2 init.ora parameters and ARCHIVE MODE:
------------------------------------------

LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch
LOG_ARCHIVE_DEST_1=d:\oracle\oradata\arc
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=arc_%s.log

LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2
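
To verify the resulting mode and archiver status, the following standard
commands and views can be used:

SQL> archive log list
SQL> SELECT log_mode FROM v$database;
SQL> SELECT archiver FROM v$instance;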

19.3 Enabling or disabling archive mode:
----------------------------------------

ALTER DATABASE ARCHIVELOG      (database mounted, not open)
ALTER DATABASE NOARCHIVELOG    (database mounted, not open)
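
A typical complete sequence to switch a running database to archive mode
could look like this (a sketch; the database must be cleanly restarted
into the mount state first):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list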

19.4 Implementation of a backup in archive mode via an OS script:
------------------------------------------------------------------
19.4.1 OS backup script in unix
-------------------------------

###############################################
# Example archive log backup script in UNIX: #
###############################################

# Set up the environment to point to the correct database

ORACLE_SID=CC1; export ORACLE_SID
ORAENV_ASK=NO; export ORAENV_ASK
. oraenv

# Backup the tablespaces

svrmgrl <<EOFarch1
connect internal

alter tablespace SYSTEM begin backup;
! tar -cvf /dev/rmt/0hc /u01/oradata/sys01.dbf
alter tablespace SYSTEM end backup;

alter tablespace DATA begin backup;
! tar -rvf /dev/rmt/0hc /u02/oradata/data01.dbf
alter tablespace DATA end backup;
etc
..
..
# Now we backup the archived redo logs before we delete them.
# We must briefly stop the archiving process in order that
# we do not miss the latest files for sure.

archive log stop;
exit
EOFarch1

# Get a listing of all archived files.

FILES=`ls /db01/oracle/arch/cc1/arch*.dbf`; export FILES

# Start archiving again

svrmgrl <<EOFarch2
connect internal
archive log start;
exit
EOFarch2

# Now backup the archived files to tape

tar -rvf /dev/rmt/0hc $FILES

# Delete the backed-up archived files

rm -f $FILES

# Backup the control file

svrmgrl <<EOFarch3
connect internal
alter database backup controlfile to '/db01/oracle/cc1/cc1controlfile.bck';
exit
EOFarch3

tar -rvf /dev/rmt/0hc /db01/oracle/cc1/cc1controlfile.bck

###############################
# End backup script example #
###############################

19.5 Tablespaces and datafiles online/offline in non-archive and archive mode:
-------------------------------------------------------------------------------

Tablespace:

A tablespace can be taken offline in archive mode and in non-archive mode
without media recovery being needed.
This is the case with the NORMAL clause: alter tablespace ... offline normal;
With the IMMEDIATE clause, recovery is needed.

Datafile:

A datafile can be taken offline in archive mode.
When the datafile is brought online again, media recovery must first be
applied.
In non-archive mode a datafile cannot be taken offline.

Backup mode:

When you issue ALTER TABLESPACE .. BEGIN BACKUP, it freezes the datafile header.
This is so that we know what redo logs we need to apply to a given file to make
it consistent. While you are backing up that file hot, we are still writing to
it -- it is logically inconsistent. Some of the backed up blocks could be from
the SCN in place at the time the backup began -- others from the time it ended
and others from various points in between.
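
Some illustrative commands for the above (tablespace and file names are
examples only):

ALTER TABLESPACE users OFFLINE NORMAL;
ALTER TABLESPACE users ONLINE;

ALTER DATABASE DATAFILE '/u02/oradata/data01.dbf' OFFLINE;  -- archive mode only
RECOVER DATAFILE '/u02/oradata/data01.dbf';
ALTER DATABASE DATAFILE '/u02/oradata/data01.dbf' ONLINE;

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles of the tablespace at OS level
ALTER TABLESPACE users END BACKUP;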

19.6 Recovery in archive mode:
------------------------------

19.6.1: Recovery when a current controlfile exists
==================================================

Media recovery after the loss of datafile(s) and the like normally takes
place on the basis of the SCN in the controlfile.

A1: complete recovery:
----------------------
RECOVER DATABASE (database not open)
RECOVER TABLESPACE DATA (database open, except this tablespace)
RECOVER DATAFILE 5 (database open, except this datafile)

A2: incomplete recovery:
------------------------
time based: recover database until time '1999-12-31:23.40.00'
cancel based: recover database until cancel
change based: recover database until change 60747681;

With both types of recovery the archived redo logs are applied.

Always finish an incomplete recovery with "alter database open resetlogs;"
to purge the stale log entries from the online redo files.
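
A complete time-based session could look like this (a sketch; the affected
datafiles are restored from backup first):

SQL> startup mount
SQL> recover database until time '1999-12-31:23.40.00';
SQL> alter database open resetlogs;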

19.6.2: Recovery without a current controlfile
==============================================

Media recovery when no current controlfile exists.

The controlfile then contains an SCN that is too old compared to the SCNs
in the archived redo logs. You must let Oracle know this via

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;

Specifying "using backup controlfile" is effectively telling Oracle that
you've lost your controlfile, and thus SCNs in file headers cannot be
compared to anything. So Oracle will happily keep applying archives until
you tell it to stop (or it runs out).

19.7 Queries to find the SCN:
-----------------------------

Every redo log is associated with a high and a low SCN.

SCNs can be found in V$LOG_HISTORY, V$ARCHIVED_LOG, V$DATABASE,
V$DATAFILE_HEADER and V$DATAFILE:

Queries:
--------

SELECT file#, substr(name, 1, 30), status, checkpoint_change#   -- from the controlfile
FROM V$DATAFILE;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#   -- from the file header
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40)
FROM V$ARCHIVED_LOG;

SELECT recid, first_change#, sequence#, next_change#
FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;   -- which file needs recovery

Find the latest archived redologs:

SELECT name
FROM v$archived_log
WHERE sequence# = (SELECT max(sequence#) FROM v$archived_log
                   WHERE 1699499 >= first_change#);

sequence#          : the number of the archived redo log
first_change#      : first SCN in the archived redo log
next_change#       : last SCN in the archived redo log, and the first SCN
                     of the next log
checkpoint_change# : latest actual SCN
FUZZY              : Y/N; if YES, the file contains changes that are later
                     than the SCN in its header.
A datafile that contains a block whose SCN is more recent than the SCN of
its header is called a fuzzy datafile.

19.8 Archived redo logs needed for recovery:
--------------------------------------------

V$RECOVERY_LOG lists the archived logs that are needed for a recovery.

You can also use V$RECOVER_FILE to determine which files need recovery.

SELECT * FROM v$recover_file;

Here you find the FILE#, which you can then use with v$datafile and
v$tablespace:

SELECT d.name, t.name
FROM v$datafile d, v$tablespace t
WHERE t.ts# = d.ts#
AND d.file# in (14,15,21);   -- use values obtained FROM V$RECOVER_FILE query

19.9 Example: recovery of 1 datafile:
-------------------------------------

Suppose 1 datafile is corrupt. Only that one file needs to be restored,
after which recovery is applied.

SVRMGRL>alter database datafile '/u01/db1/users01.dbf' offline;

$ cp /stage/users01.dbf /u01/db1

SVRMGRL>recover datafile '/u01/db1/users01.dbf';

and Oracle will suggest the archived logfiles to apply

SVRMGRL>alter database datafile '/u01/db1/users01.dbf' online;


19.10 Example: recovery of the database:
----------------------------------------

Suppose multiple datafiles are lost. Restore the backup files.

SVRMGRL>startup mount;
SVRMGRL>recover database;

and Oracle will apply the archived redo logfiles.

media recovery complete

SVRMGRL>alter database open;

19.11 Restore to other disks:
-----------------------------

- alter database backup controlfile to trace;
- restore the files to the new location
- edit the controlfile with the new file locations
- save this as a .sql script and run it:
SVRMGRL>@new.sql

controlfile:

startup nomount
create controlfile reuse database "brdb" noresetlogs archivelog
maxlogfiles 16
maxlogmembers 2
maxdatafiles 100
maxinstances 1
maxloghistory 226

logfile
group 1 ('/disk03/db1/redo/redo01a.dbf', '/disk04/db1/redo/redo01b.dbf') size 2M,
group 2 ('/disk03/db1/redo/redo02a.dbf', '/disk04/db1/redo/redo02b.dbf') size 2M

datafile
'/disk04/oracle/db1/sys01.dbf',
'/disk05/oracle/db1/rbs01.dbf',
'/disk06/oracle/db1/data01.dbf',
'/disk04/oracle/db1/index01.dbf'
character set 'us7ascii'
;
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.12 Copy of a database to another server:
-------------------------------------------

1. copy all files exactly from the one location to the other

2. source server: alter database backup controlfile to trace

3. create a correct init.ora with references to the new server

4. edit the ascii version of the controlfile from step 2 so that all disk
locations point to the target

STARTUP NOMOUNT

CREATE CONTROLFILE REUSE SET DATABASE "FSYS" RESETLOGS NOARCHIVELOG
MAXLOGFILES 8
MAXLOGMEMBERS 4
etc..

ALTER DATABASE OPEN RESETLOGS;

or

CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS ARCHIVELOG
..
#RECOVER DATABASE
ALTER DATABASE OPEN RESETLOGS;

or

CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG


..
..
RECOVER DATABASE
# All logs need archiving AND a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;

5. SVRMGRL>@script

If problems occur: delete the original controlfiles and do not use REUSE.

Example create controlfile:
---------------------------

If you want another database name use CREATE CONTROLFILE SET DATABASE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "O901" RESETLOGS NOARCHIVELOG
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 'D:\ORACLE\ORADATA\O901\REDO01.LOG' SIZE 100M,
GROUP 2 'D:\ORACLE\ORADATA\O901\REDO02.LOG' SIZE 100M,
GROUP 3 'D:\ORACLE\ORADATA\O901\REDO03.LOG' SIZE 100M
DATAFILE
'D:\ORACLE\ORADATA\O901\SYSTEM01.DBF',
'D:\ORACLE\ORADATA\O901\UNDOTBS01.DBF',
'D:\ORACLE\ORADATA\O901\CWMLITE01.DBF',
'D:\ORACLE\ORADATA\O901\DRSYS01.DBF',
'D:\ORACLE\ORADATA\O901\EXAMPLE01.DBF',
'D:\ORACLE\ORADATA\O901\INDX01.DBF',
'D:\ORACLE\ORADATA\O901\TOOLS01.DBF',
'D:\ORACLE\ORADATA\O901\USERS01.DBF'
CHARACTER SET UTF8
;

Example controlfile:
----------------------

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "SALES" NORESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 2
MAXDATAFILES 255
MAXINSTANCES 2
MAXLOGHISTORY 1363
LOGFILE
GROUP 1 (
'/oradata/system/log/log1.log',
'/oradata/dump/log/log1.log'
) SIZE 100M,
GROUP 2 (
'/oradata/system/log/log2.log',
'/oradata/dump/log/log2.log'
) SIZE 100M
DATAFILE
'/oradata/system/system.dbf',
'/oradata/rbs/rollback.dbf',
'/oradata/rbs/rollbig.dbf',
'/oradata/system/users.dbf',
'/oradata/temp/temp.dbf',
'/oradata/data_big/ahp_lkt_data_small.dbf',
'/oradata/data_small/ahp_lkt_data_big.dbf',
'/oradata/data_big/ahp_lkt_index_small.dbf',
'/oradata/index_small/ahp_lkt_index_big.dbf',
'/oradata/data_small/maniin_ah_data_small.dbf',
'/oradata/index_small/maniin_ah_data_big.dbf',
'/oradata/index_big/maniin_ah_index_small.dbf',
'/oradata/index_big/maniin_ah_index_big.dbf',
'/oradata/index_big/fe_heat_data_big.dbf',
'/oradata/data_small/fe_heat_index_big.dbf',
'/oradata/data_small/eksa_data_small.dbf',
'/oradata/data_big/eksa_data_big.dbf',
'/oradata/index_small/eksa_index_small.dbf',
'/oradata/index_big/eksa_index_big.dbf',
'/oradata/data_small/provisioning_data_small.dbf',
'/oradata/data_small/softplan_data_small.dbf',
'/oradata/index_small/provisioning_index_small.dbf',
'/oradata/system/tools.dbf',
'/oradata/index_small/fe_heat_index_small.dbf',
'/oradata/data_small/softplan_data_big.dbf',
'/oradata/index_small/softplan_index_small.dbf',
'/oradata/index_small/softplan_index_big.dbf',
'/oradata/data_small/fe_heat_data_small.dbf'
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.13 PROBLEMS DURING RECOVERY:
-------------------------------

BEGIN BACKUP            END BACKUP               normal business
                                                       |
system=453            switch logfile                   |
users=455                  |                           |        CRASH
tools=459                  |                           |          |
    |                      |                           |          |
------------------------------------------------------------------------------
   t=t0                   t=t1                        t=t2       t=t3

ORA-01194, ORA-01195:
---------------------

-------
Note 1:
-------

Suppose the system comes with:

ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf'

Either you had the database in archive mode or in non archive mode:

archive mode:

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

non-archive mode:

# RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

If you have checked that the SCNs of all files are the same number,
you might try in the init.ora file:

_allow_resetlogs_corruption = true

-------
Note 2:
-------

Problem Description
-------------------
You restored your hot backup and you are trying to do a point-in-time recovery.
When you tried to open your database you received the following error:
ORA-01195: online backup of file <name> needs more recovery to be consistent

Cause: An incomplete recovery session was started, but an insufficient
number of redo logs were applied to make the file consistent.

The reported file is an online backup that must be recovered to the time
the backup ended.
Action: Either apply more redo logs until the file is consistent or
restore the file from an older backup and repeat the recovery.
For more information about online backup, see the index entry
"online backups" in the <Oracle7 Server Administrator's Guide>.
This is assuming that the hot backup completed error free.

Solution Description
--------------------
Continue to apply the requested logs until you are able to open the database.

Explanation
-----------
When you perform hot backups on a file, the file header is frozen. For
example, datafile01 may have a file header frozen at SCN #456. When you
backup the next datafile, the SCN # may be different. For example the file
header for datafile02 may be frozen with SCN #457. Therefore, you must
apply archive logs until you reach the SCN # of the last file that was
backed up. Usually, applying one or two more archive logs will solve the
problem, unless there was a lot of activity on the database during the
backup.
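
To see which files are still marked as being in hot backup mode, and the
SCN each file header is frozen at, a query like this can be used (standard
views):

SELECT b.file#, b.status, b.change#, h.checkpoint_change#
FROM v$backup b, v$datafile_header h
WHERE b.file# = h.file#;

A STATUS of 'ACTIVE' in v$backup means the file is still in backup mode.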

-------
Note 3:
-------

ORA-01194: file 1 needs more recovery to be consistent

I am working with a test server, I can load it again but I would like to
know if this kind of problem could be solved or not. Just to let you know,
I am new in Oracle Database Administration.

I ran a hot backup script, which deleted the old ARCHIVE logs at the end.
After checking the script's log, I realized that the hot backup was not
successful and it deleted the Archives. I tried to startup the database
and an error occurred;
"ORA-01589: must use RESETLOGS or NORESETLOGS option for database open"
I tried to open it with the RESETLOGS option then another error occurred;
"ORA-01195: online backup of file 1 needs more recovery to be consistent"

Just because, it was a test environment, I have never taken any cold backups.
I still have hot backups. I don't know how to recover from those.
If anyone can tell me how to do it from SQLPLUS (SVRMGRL is not loaded),
I would really appreciate it.

Thanks,

Hi Hima,

The following might help. You now have a database that is operating
like it's in noarchive mode since the logs are gone.

1. Mount the database.

2. Issue the following query:

SELECT V1.GROUP#, MEMBER, SEQUENCE#, FIRST_CHANGE#
FROM V$LOG V1, V$LOGFILE V2
WHERE V1.GROUP# = V2.GROUP# ;

This will list all your online redolog files and their respective
sequence and first change numbers.

3. If the database is in NOARCHIVELOG mode, issue the query:

SELECT FILE#, CHANGE# FROM V$RECOVER_FILE;

If the CHANGE# is GREATER than the minimum FIRST_CHANGE# of your logs, the
datafile can be recovered.

4. Recover the datafile after taking it offline. (You cannot take the
SYSTEM datafile offline, which is the file in error in your case.)

RECOVER DATAFILE '<full_path_file_name>'

5. Confirm each of the logs that you are prompted for until you receive
the message "Media recovery complete". If you are prompted for a
non-existing archived log, Oracle probably needs one or more of the online
logs to proceed with the recovery. Compare the sequence number referenced
in the ORA-280 message with the sequence numbers of your online logs. Then
enter the full path name of one of the members of the redo group whose
sequence number matches the one you are being asked for. Keep entering
online logs as requested until you receive the message "Media recovery
complete".

6. Bring the datafile online. (Not needed for SYSTEM.)

7. If the database is still only mounted, open it.

Perform a full closed backup of the existing database.

-------
Note 4:
-------

Recover until time using backup controlfile

Hi,

I am trying to perform an incomplete recovery to an arbitrary point in
time in the past. E.g. I want to go back five minutes.

I have a hot backup of my database. (Tablespaces into hotbackup mode, copy
files, tablespaces out of hotbackup mode, archive current log, backup
controlfile to a file and also to a trace).
(yep im in archivelog mode as well)

I shutdown the current database and blow the datafiles, online redo logs
and controlfiles away.

I restore my backup copy of the database (just the datafiles), startup
nomount and then run an edited controlfile trace backup (with resetlogs).

I then RECOVER DATABASE UNTIL TIME 'whenever' USING BACKUP CONTROLFILE.

I'm prompted for logs in the usual way but the recovery ends with an
ORA-1547 - Recover succeeded but open resetlogs would give the following
error. The next error is that datafile 1 (system ts) would need more
recovery.

Now metalink tells me that this is usually due to backups being restored
that are older than the archive redo logs - this isn't the case. I have
all the archive redo logs I need to cover the time the backup was taken up
to the present. The time specified in the recovery is after the backup as
well.
What am I missing here? Its driving me nuts. I'm off back to the docs
again!

Thanks in advance

Tim

--------------------------------------------------------------------------------

From: Anand Devaraj 15-Aug-02 15:15
Subject: Re : Recover until time using backup controlfile

The error indicates that Oracle requires a few more scns to get all the
datafiles in sync. It is quite possible that those scns are present in the
online redo logfiles which were lost. In such cases when Oracle asks for a
non-existent archive log, you should provide the complete path of the
online log file for the recovery to succeed. Since you dont have an online
log file you should use
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE.

In this case when you exhaust all the archive log files, you issue the
cancel command which will automatically rollback all the incomplete
transactions and get all the datafile headers in sync with the
controlfile.

To do an incomplete recovery using time, you usually require the online
logfiles to be present.

Anand

--------------------------------------------------------------------------------

From: Radhakrishnan paramukurup 15-Aug-02 16:19
Subject: Re : Recover until time using backup controlfile

I am not sure whether you have missed this step or just missed it in the
note. You also need to switch the log at the end of the backup (I do this
as a matter of practice; otherwise you need the next log, which is not
certain to be available in case of a failure). Otherwise some of the
changes needed to reach a consistent state are still in the online log and
you can never open until you reach a consistent state.

Hope this helps ........

--------------------------------------------------------------------------------

From: Mark Gokman 15-Aug-02 16:41
Subject: Re : Recover until time using backup controlfile

To successfully perform incomplete recovery, you need a full db backup
that was completed prior to the point to which you want to recover, plus
you need all archive logs containing all SCNs up to the point to which you
want to recover.
Applying these rules to your case, I have two questions:
- are you recovering to a point in time AFTER the time the successful full
backup was completed?
- is there an archive log that was generated AFTER the time you specify in
until time?
If both answers are yes, then you should have no problems.
I actually recently performed such a recovery several times.

--------------------------------------------------------------------------------

From: Tim Palmer 15-Aug-02 18:02
Subject: Re : Re : Recover until time using backup controlfile

Thanks Guys! I think Mark has hit the nail on the head here. I was being an idiot!

Ive ran this exercise a few more times (with success) and I am convinced
that what I was doing was trying to recover to a point in time that
basically was before the latest scn of any one file in the hot backup set
I was using - convinced myself that I wasnt - but I must have been.....
perhaps I need a holiday!

Thanks again

Tim

--------------------------------------------------------------------------------

From: Oracle, Rowena Serna 16-Aug-02 15:44
Subject: Re : Recover until time using backup controlfile

Thanks to mark for his input for helping you out.

-------
Note 5:
-------

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF'

File number, name and directory may vary depending on Oracle configuration

Details:
Undo tablespace data description

In an Oracle database, Undo tablespace data is an image or snapshot of the
original contents of a row (or rows) in a table. This data is stored in
Undo segments (formerly Rollback segments in earlier releases of Oracle)
in the Undo tablespace. When a user begins to make a change to the data in
a row in an Oracle table, the original data is first written to Undo
segments in the Undo tablespace. The entire process (including the
creation of the Undo data) is recorded in Redo logs before the change is
completed and written in the Database Buffer Cache, and then the data
files via the database writer (DBWn) process.

If the transaction does not complete due to some error or should there be
a user decision to reverse (rollback) the change, this Undo data is
critical for the ability to roll back or undo the changes that were made.
Undo data also ensures a way to provide read consistency in the database.
Read consistency means that if there is a data change in a row of data
that is not yet committed, a new query of this same row or table will not
display any of the uncommitted data to other users, but will use the
information from the Undo segments in the Undo tablespace to actually
construct and present a consistent view of the data that only includes
committed transactions or information.

During recovery, Oracle uses its Redo logs to play forward through
transactions in a database so that all lost transactions (data changes and
their Undo data generation) are replayed into the database. Then, once all
the Redo data is applied to the data files, Oracle uses the information in
the Undo segments to undo or roll back all uncommitted transactions. Once
recovery is complete, all data in the database is committed data, the
System Change Numbers (SCN) on all data files and the control_files match,
and the database is considered consistent.

As for Oracle 9i, the default method of Undo management is no longer
manual, but automatic; there are no Rollback segments in individual user
tablespaces, and all Undo management is processed by the Oracle server,
using the Undo tablespace as the container to maintain the Undo segments
for the user tablespaces in the database. The tablespace that still
maintains its own Rollback segments is the System tablespace, but this
behavior is by design and irrelevant to the discussion here.

If this configuration is left as the default for the database, and the
5.022 or 5.025 version of the VERITAS Backup Exec (tm) Oracle Agent is
used to perform Oracle backups, the Undo tablespace will not be backed up.
If Automatic Undo Management is disabled and the database administrator
(DBA) has modified the locations for the Undo segments (if the Undo data
is no longer in the Undo tablespace), this data may be located elsewhere,
and the issues addressed by this TechNote may not affect the ability to
fully recover the database, although it is still recommended that the
upgrade to the 5.026 Oracle Agent be performed.

Scenario 1

The first scenario would be a recovery of the entire database to a
previous point-in-time. This type of recovery would utilize the RECOVER
DATABASE USING BACKUP CONTROLFILE statement and its customizations to
restore the entire database to a point before the entry of improper or
corrupt data, or to roll back to a point before the accidental deletion of
critical data. In this type of situation, the most common procedure for
the restore is to just restore the entire online backup over the existing
Oracle files with the database shutdown. (See the Related Documents
section for the appropriate instructions on how to restore and recover an
Oracle database to a point-in-time using an online backup.)

In this scenario, where the entire database would be rolled back in time,
an offline restore would include all data files, archived log files, and
the backup control_file from the tape or backup media. Once the RECOVER
DATABASE USING BACKUP CONTROLFILE command was executed, Oracle would begin
the recovery process to roll forward through the Redo log transactions,
and it would then roll back or undo uncommitted transactions.

At the point when the recovery process started on the actual Undo
tablespace, Oracle would see that the SCN of that tablespace was too high
(in relation to the record in the control_file). This would happen simply
because the Undo tablespace wasn't on the tape or backup media that was
restored, so the original Undo tablespace wouldn't have been overwritten,
as were the other data files, during the restore operation. The failure
would occur because the Undo tablespace would still be at its SCN before
the restore from backup (an SCN in the future as related to the restored
backup control_file). All other tablespaces and control_files would be
back at their older SCNs (not necessarily consistent yet), and the Oracle
server would respond with the following error messages:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF'

At this point, the database cannot be opened with the RESETLOGS option,
nor in a normal mode. Any attempt to do so yields the error referenced
above.

SQL> alter database open resetlogs;


alter database open resetlogs
*

Error at line 1:
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\DRTEST\UNDOTBS01.DBF'

The only recourse here is to recover or restore an older backup that
contains an Undo tablespace, whether from an older online backup, or from
a closed or offline backup or copy of the database. Without the ability to
acquire an older Undo tablespace to rerun the recovery operation, it will
not be possible to start the database. At this point, Oracle Technical
Support must be contacted.

Scenario 2

The second scenario would involve the actual corruption or loss of the
Undo tablespace's data files. If the Undo tablespace data is lost or
corrupted due to media failure or other internal logical error or user
error, this data/tablespace must be recovered.

Oracle 9i does offer the ability to create a new Undo tablespace and to
alter the Oracle Instance to use this new tablespace when deemed necessary
by the DBA. One of the requirements to accomplish this change, though, is
that there cannot be any active transactions in the Undo segments of the
tablespace when it is time to actually drop it. In the case of data file
corruption, uncommitted transactions in the database that have data in
Undo segments can be extremely troublesome, because the existence of any
uncommitted transactions will lock the Undo segments holding the data so
that they cannot be dropped. This will be evidenced by an "ORA-01548"
error if this is attempted. This error, in turn, prevents the drop and
recreation of the Undo tablespace, and thus prevents the successful
recovery of the database.
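
When the old Undo tablespace is not pinned by active transactions, the
switch itself is straightforward (a sketch; tablespace name, path and size
are illustrative):

SQL> CREATE UNDO TABLESPACE undotbs2
  2  DATAFILE 'D:\ORACLE\ORADATA\<instance>\UNDOTBS02.DBF' SIZE 500M;
SQL> ALTER SYSTEM SET undo_tablespace = undotbs2;
SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;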

To overcome this problem, the transaction tables of the Undo segments can
be traced to provide details on transactions that Oracle is trying to
recover via rollback, and these traces will also identify the objects that
Oracle is trying to apply the undo to. Oracle Doc ID: 94114.1 may be
referenced to set up a trace on the database startup so that the actual
transactions that are locking the Undo segments can be identified and
dropped. Dropping objects that contain uncommitted transactions that are
holding locks on Undo segments does entail data loss, and the amount of
loss depends on how much uncommitted data was in the Undo segments at the
point of failure.

When utilized, this trace is actually monitoring or dumping data from the
transaction tables in the headers of the Undo segments (where the records
that track the data in the Undo segments are located), but if the Undo
tablespace's data file is actually missing, has been offline dropped, or
if these Undo segment headers have been corrupted, even the ability to
dump the transaction table data is lost and the only recourse at this
point may be to open the database, export, and rebuild. At this point,
Oracle Technical Support must be contacted.

Backup Exec Agent for Oracle 5.022 and 5.025 should be upgraded to 5.026
When using the 5.022 or 5.025 version of the Backup Exec for Windows
Servers Oracle Agent (see the Related Documents section for the
appropriate instructions on how to identify the version of the Oracle
Agent in use), the Oracle Undo tablespace is not available for backup
because the Undo tablespace falls into the type category of Undo, and only
tablespaces with a content type of PERMANENT are located and made
available for backup. Normal full backups with all Oracle components
selected will run without error and will complete with a successful status
since the Undo tablespace is not actually flagged as a selection.

In most Oracle recovery situations, this absence of the Undo tablespace
data for restore would not cause any problem, because the original Undo
tablespace is still available on the database server. Restores of User
tablespaces, which do not require a rollback in time, would proceed
normally, since lost data or changes would be replayed back into the
database, and Undo data would be available to roll back uncommitted
transactions to leave the database in a consistent state and ready for
user access.

However, in certain recovery scenarios (in which a rollback in time or
full database recovery is attempted, or in the case of damaged or missing
Undo tablespace data files) this missing Undo data can result in the
inability to properly recover tablespaces back to a point-in-time, and
could potentially render the database unrecoverable without an offline
backup or the assistance of Oracle Technical Support. The scenarios in
this TechNote describe two examples (this does not necessarily imply that
these are the only scenarios) of how this absence of the Undo tablespace
on tape or backup media, and thus its inability to be restored, can result
in failure of the database to open and can result in actual data loss.

The only solution to the problems referenced within this TechNote is to
upgrade the Backup Exec for Windows Servers Oracle Agent to version 5.026,
and to take new offline (closed database) and then new online (running
database) backups of the entire Oracle 9i database as per the Oracle Agent
documentation in the Backup Exec 9.0 for Windows Servers Administrator's
Guide. Oracle 9i database backups made with the 5.022 and 5.025 Agent that
shipped with Backup Exec 9.0 for Windows Servers build 4367 or build 4454
should be considered suspect in the context of the information provided in
this TechNote.

Note: The 5.022, 5.025, and 5.026 versions of the Oracle Agent are
compatible with Backup Exec 8.6 for Windows NT and Windows 2000, which
includes support for Oracle 9i, as well as Backup Exec 9.0 for Windows
Servers. See the Related Documents section for instructions on how to
identify the version of the Oracle Agent in use.

-------
Note 6:
-------

- Backup

a) Consistent backups
A consistent backup means that all data files and control files are consistent
to a point in time. I.e. they have the same SCN. This is the only method of
backup when the database is in NO Archive log mode.
b) Inconsistent backups
An Inconsistent backup is possible only when the database is in Archivelog mode
and proper Oracle aware software is used. Most default backup software can not
backup open files. Special precautions need to be used and testing needs to be
done. You must apply redo logs to the data files, in order to restore the
database to a consistent state.

c) Database Archive mode
The database can run in either Archivelog mode or noarchivelog mode.
When you first create the database, you specify if it is to be in Archivelog
mode. Then in the init.ora file you set the parameter log_archive_start=true
so that archiving will start automatically on startup.
If the database has not been created with Archivelog mode enabled, you can
issue the command whilst the database is mounted, not open.
SVRMGR> alter database archivelog;
SVRMGR> log archive start
SVRMGR> alter database open
SVRMGR> archive log list
This command will show you the log mode and if automatic archival is set.

d) Backup Methods
Essentially, there are two backup methods, hot and cold, also known as online
and offline, respectively.
A cold backup is one taken when the database is shutdown.
A hot backup is on taken when the database is running.
Commands for a hot backup:
1. Svrmgr>alter database Archivelog
Svrmgr> log archive start
Svrmgr> alter database open
2. Svrmgr> archive log list
--This will show what the oldest online log sequence is. As a precaution,
always keep the all archived log files starting from the oldest online log
sequence.
3. Svrmgr> Alter tablespace tablespace_name BEGIN BACKUP
4. --Using an OS command, backup the datafile(s) of this tablespace.
5. Svrmgr> Alter tablespace tablespace_name END BACKUP
--- repeat step 3, 4, 5 for each tablespace.
6. Svrmgr> archive log list
---do this again to obtain the current log sequence. You will want to make
sure you have a copy of this redo log file.
7. So to force an archived log, issue
Svrmgr> ALTER SYSTEM SWITCH LOGFILE
A better way to force this would be:
svrmgr> alter system archive log current;
8. Svrmgr> archive log list
This is done again to check if the log file had been archived and to find
the latest archived sequence number.

9. Backup all archived log files determined from steps 2 and 8.
Do not backup the online redo logs. These will contain the end-of-backup
marker and can cause corruption if used during recovery.

10. Back up the control file:
Svrmgr> Alter database backup controlfile to 'filename'

e) Incremental backups
These are backups that are taken on blocks that have been modified since the
last backup. These are useful as they don't take up as much space and time.
There are two kinds of incremental backups
Cumulative and Non cumulative.
Cumulative incremental backups include all blocks that were changed since the
last backup at a lower level. This one reduces the work during restoration as
only one backup contains all the changed blocks.
Noncumulative only includes blocks that were changed since the previous backup
at the same or lower level.
Using rman, you issue the command "backup incremental level n"
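
For example, in RMAN (a sketch):

RMAN> backup incremental level 0 database;            # baseline
RMAN> backup incremental level 1 database;            # noncumulative
RMAN> backup incremental level 1 cumulative database; # cumulative
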
f) Support scenarios
When the database crashes, you now have a backup. You restore the backup and
then recover the database. Also, don't forget to take a backup of the control
file whenever there is a schema change.

RECOVERY
=========
There are several kinds of recovery you can perform, depending on the type of
failure and the kind of backup you have. Essentially, if you are not running in
archive log mode, then you can only recover the cold backup of the database and
you will lose any new data and changes made since that backup was taken.
If, however, the database is in Archivelog mode you will be able to restore the
database up to the time of failure.
There are three basic types of recovery:
1. Online Block Recovery.
This is performed automatically by Oracle (PMON). Occurs when a process dies
while changing a buffer. Oracle will reconstruct the buffer using the online
redo logs and writes it to disk.
2. Thread Recovery.
This is also performed automatically by Oracle. Occurs when an instance
crashes while having the database open. Oracle applies all the redo changes
in the thread that occurred since the last time the thread was checkpointed.
3. Media Recovery.
This is required when a data file is restored from backup. The checkpoint
count in the data files here are not equal to the check point count in the
control file.
This is also required when a file was offlined without checkpoint and when
using a backup control file.
Now let's explain a little about Redo vs Rollback.
Redo information is recorded so that all commands that took place can be
repeated during recovery. Rollback information is recorded so that you can undo
changes made by the current transaction but were not committed. The Redo Logs
are used to Roll Forward the changes made, both committed and non- committed
changes. Then from the Rollback segments, the undo information is used to
rollback the uncommitted changes.

Media Failure and Recovery in Noarchivelog Mode
In this case, your only option is to restore a backup of your Oracle
files.
The files you need are all datafiles, and control files.
You only need to restore the password file or parameter files if they are lost
or are corrupted.

Media Failure and Recovery in Archivelog Mode
In this case, there are several kinds of recovery you can perform, depending on
what has been lost. The three basic kinds of recovery are:
1. Recover database - here you use the recover database command and the database
must be closed and mounted. Oracle will recover all datafiles that are online.
2. Recover tablespace - use the recover tablespace command. The database can be
open but the tablespace must be offline.
3. Recover datafile - use the recover datafile command. The database can be
open but the specified datafile must be offline.
Note: You must have all archived logs since the backup you restored from,
or else you will not have a complete recovery.

a) Point in Time recovery:
A typical scenario is that you dropped a table at say noon, and want to recover
it. You will have to restore the appropriate datafiles and do a point-in-time
recovery to a time just before noon.
Note: you will lose any transactions that occurred after noon.
After you have recovered until noon, you must open the database with resetlogs.
This is necessary to reset the log numbers, which will protect the database
from having the redo logs that weren't used be applied.
The four incomplete recovery scenarios all work the same:
Recover database until time '1999-12-01:12:00:00';
Recover database until cancel; (you type in cancel to stop)
Recover database until change n;
Recover database until cancel using backup controlfile;
Note: When performing an incomplete recovery, the datafiles must be online.
Do a select name, status from v$datafile to find out if there are any files
which are offline. If you were to perform a recovery on a database which has
tablespaces offline, and they had not been taken offline in a normal state, you
will lose them when you issue the open resetlogs command. This is because the
data file needs recovery from a point before the resetlogs option was used.

b) Recovery without control file
If you have lost the current control file, or the current control file is
inconsistent with files that you need to recover, you need to recover either by
using a backup control file command or create a new control file. You can also
recreate the control file based on the current one using the
'backup control file to trace' command which will create a script for you to
run to create a new one.
Recover database using backup control file command must be used when using a
control file other that the current. The database must then be opened with
resetlogs option.

c) Recovery of missing datafile with rollback segment
The tricky part here is if you are performing online recovery. Otherwise you
can just use the recover datafile command. Now, if you are performing an
online recovery, you must first ensure that in the init.ora file, you remove
the parameter rollback_segments. Otherwise, oracle will want to use those
rollback segments when opening the database, but can't find them and wont open.
Until you recover the datafiles that contain the rollback segments, you need to
create some temporary rollback segments in order for new transactions to work.
Even if other rollback segments are ok, they will have to be taken offline.
So, all the rollback segments that belong to the datafile need to be recovered.
If all the datafiles belonging to the tablespace rollback_data were lost, you
can now issue a recover tablespace rollback_data.
Next bring the tablespace online and check the status of the rollback segments
by doing a select segment_name, status from dba_rollback_segs;
You will see the list of rollback segments that are in status Need Recovery.
Simply issue alter rollback segment online command to complete.
Don't forget to reset the rollback_segments parameter in the init.ora.
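A sketch of the full sequence, using the names from this scenario (rbtemp and
rb01 are hypothetical segment names):

SQL> create rollback segment rbtemp tablespace system;
SQL> alter rollback segment rbtemp online;
SQL> recover tablespace rollback_data;
SQL> alter tablespace rollback_data online;
SQL> select segment_name, status from dba_rollback_segs;
SQL> alter rollback segment rb01 online;   -- repeat per 'Need Recovery' segment
SQL> alter rollback segment rbtemp offline;
SQL> drop rollback segment rbtemp;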
d) Recovery of missing datafile without rollback segment
There are three ways to recover in this scenario, as mentioned above.
1. recover database
2. recover datafile 'c:\orant\database\usr1orcl.ora'
3. recover tablespace user_data
e) Recovery with missing online redo logs
Missing online redo logs means that somehow you have lost your redo logs before
they had a chance to be archived. This means that crash recovery cannot be
performed, so media recovery is required instead. All datafiles will need to
be restored and rolled forward until the last available archived log file is
applied. This is thus an incomplete recovery, and as such, the recover
database command is necessary
(i.e. you cannot do a datafile or tablespace recovery).
As always, when an incomplete recovery is performed, you must open the database
with resetlogs.
Note: the best way to avoid this kind of a loss, is to mirror your online log
files.
f) Recovery with missing archived redo logs
If your archives are missing, the only way to recover the database is to
restore from your latest backup. You will lose all transactions that occurred
after that backup, since the redo needed to replay them is gone. Again, this is
why Oracle strongly suggests mirroring your online redo logs and keeping
duplicate copies of the archives.
g) Recovery with resetlogs option
The resetlogs option should be the last resort; however, as we have seen above,
it may be required due to incomplete recoveries (recovery using a backup
control file, or a point-in-time recovery). It is imperative that you back
up the database immediately after you have opened the database with resetlogs.
The reason is that Oracle updates the control file and resets log numbers, and
you will not be able to recover from the old logs.
The next concern will be if the database crashes after you have opened the
database with resetlogs, but have not had time to backup the database.
How to recover?
Shut down the database
Backup all the datafiles and the control file
Startup mount
Alter database open resetlogs
This will work, because you have a copy of a control file after the
resetlogs point.
Media failure before a backup after resetlogs.
If a media failure occurs before a backup was made after you opened the
database using resetlogs, you will most likely lose data.
The reason is that restoring a lost datafile from a backup taken prior to the
resetlogs will give an error that the file is from an earlier point in time,
and the redo needed to roll it forward is no longer usable.
h) Recovery with corrupted/missing rollback segments.
If a rollback segment is missing or corrupted, you will not be able to open the
database. The first step is to find out what object is causing the rollback to
appear corrupted. If we can determine that, we can drop that object.
If we can't, we will need to log an iTar to engage support.
So, how do we find out if it's actually a bad object?
1. Make sure that all tablespaces are online and all datafiles are online.
This can be checked through v$datafile, under the status column.
For tablespaces associated with the datafiles, look in dba_tablespaces.
If this doesn't show us anything, i.e., all are online, then
2. Put the following in the init.ora:
event = "10015 trace name context forever, level 10"
This event will generate a trace file that will reveal information about the
transaction Oracle is trying to roll back and most importantly, what object
Oracle is trying to apply the undo to.
Stop and start the database.
3. Check in the directory that is specified by the user_dump_dest parameter
(in the init.ora or show parameter command) for a trace file that was
generated at startup time.
4. In the trace file, there should be a message similar to:
error recovery tx(#,#) object #.
TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects.
5. Use the following query to find out what object Oracle is trying to
perform recovery on.
select owner, object_name, object_type, status
from dba_objects where object_id = <object #>;
6. Drop the offending object so the undo can be released. An export or relying
on a backup may be necessary to restore the object after the corrupted
rollback segment goes away.
7. After dropping the object, put the rollback segment back in the init.ora
parameter rollback_segments, remove the event, and shutdown and startup
the database.
In most cases, the above steps will resolve the problematic rollback segment.
If this still does not resolve the problem, it may be likely that the
corruption is in the actual rollback segment.
If in fact the rollback segment itself is corrupted, we should see if we can
restore from a backup. However, that isn't always possible, there may not be a
recent backup, etc. In this case, we have to force the database open with
unsupported, hidden parameters; you will need to log an iTar to engage support.
Please note that this is potentially dangerous!
When these are used, transaction tables are not read on opening of the database.
Because of this, the typical safeguards associated with the rollback segment
are disabled.
Their status is 'offline' in dba_rollback_segs.
Consequently, there is no check for active transactions before dropping the
rollback segment. If you drop a rollback segment which contains active
transactions then you will have logical corruption. Possibly this corruption
will be in the data dictionary.
If the rollback segment datafile is physically missing, has been offlined
dropped, or the rollback segment header itself is corrupt, there is no way to
dump the transaction table to check for active transactions. So the only thing
to do is get the database open, export and rebuild. Log an iTar to engage support
to help with this process.
If you cannot get the database open, there is no other alternative than
restoring from a backup.
i) Recovery with System Clock change.
You can end up with duplicate timestamps in the datafiles when a system clock
changes.
A solution here is to recover the database until time 'yyyy-mm-dd:00:00:00',
setting the time to be later than when the problem occurred. That way the
recovery will roll forward through the records that were actually performed
later but have an earlier time stamp due to the system clock change.
Performing a complete recovery is optimal, as all transactions will be applied.
j) Recovery with missing System tablespace.
The only option is to restore from a backup.
k) Media Recovery of offline tablespace
When a tablespace is offline, you cannot recover datafiles belonging to this
tablespace using recover database command. The reason is because a recover
database command will only recover online datafiles. Since the tablespace is
offline, it thinks the datafiles are offline as well, so even if you recover
database and roll forward, the datafiles in this tablespace will not be touched.
Instead, you need to perform a recover tablespace command. Alternatively, you
could restore the datafiles from a cold backup, mount the database, and select
from the v$datafile view to see if any of the datafiles are offline. If they
are, bring them online, and then you can perform a recover database command.
l) Recovery of Read-Only tablespaces
If you have a current control file, then recovery of read only tablespaces is
no different than recovering read-write files.
The issues with read-only tablespaces arise if you have to use a backup control
file. If the tablespace is in read-only mode, and hasn't changed to read-write
since the last backup, then you will be able to perform media recovery using a
backup control file by taking the tablespace offline. The reason here is that
when you use a backup control file, you must open the database with resetlogs,
and we know that Oracle won't let you read files from before a resetlogs was
done. However, there is an exception for read-only tablespaces: you will be
able to take the datafiles online after you have opened the database.
When you have tablespaces that switch modes and you don't have a current control
file, you should use a backup control file that recognizes the tablespace in
read-write mode. If you don't have a backup control file, you can create a new
one using the create controlfile command.
Basically, the point here is that you should take a backup of the control file
every time you switch a tablespace's mode.

Related errors you may encounter in these scenarios:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01110: data file <n>: '<filename>'
ORA-01588: must use RESETLOGS option for database open
ORA-00205: error in identifying control file, check alert log for more info
----------

OTHER ERRORS:
=============

1. Control file missing

ORA-00202: controlfile: 'g:\oradata\airm\control03.ctl'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

Sat May 24 20:02:40 2003
ORA-205 signalled during: alter database airm mount...

Solution: just copy one of the remaining control files over the missing one.

ORA-00214
---------

1. One control file is a different version (inconsistent with the others).

Solution: just copy one of the current control files over the out-of-date one.

19.13 recovery FROM failed distributed transactions:
----------------------------------------------------

ORA-2019 ORA-2058 ORA-2068 ORA-2050: FAILED DISTRIBUTED TRANSACTIONS

The above errors indicate that there is a failed distributed transaction that
needs to be manually cleaned up. See <Note 1012842.102> for step by step
instructions on how to proceed.
In some cases, the instance may crash before the solutions are implemented.
If this is the case, issue an 'alter system disable distributed recovery'
immediately after the database starts, to allow the database to run without
having RECO terminate the instance.

19.14 get a tablespace out of backup mode:
------------------------------------------

SVRMGR> connect internal


SVRMGR> startup mount
SVRMGR> SELECT df.name,bk.time FROM v$datafile df,v$backup bk
2> WHERE df.file# = bk.file# and bk.status = 'ACTIVE';
Shows the datafiles currently in a hot backup state.
SVRMGR> alter database datafile
2> '/u03/oradata/PROD/devlPROD_1.dbf' end backup;
Do an "end backup" on those listed hot backup datafiles.
SVRMGR> alter database open;
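On 9i and 10g you can also take all datafiles out of backup mode with a single
statement (a sketch; issue it while the database is mounted):

SQL> alter database end backup;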

19.15 Disk full, corrupt archive log
------------------------------------

A mandatory archive destination in log_archive_dest is unavailable, and it is
impossible to make a full recovery.

Workaround:
Configure log_archive_min_succeed_dest = 2
Do not use log_archive_duplex_dest

19.16 ORA-1578 ORACLE data block corrupted (file # %s, block # %s)
---------------------------------------------------------------

SELECT segment_name, segment_type, owner, tablespace_name
FROM sys.dba_extents
WHERE file_id = &bad_file_id
AND &bad_block_id BETWEEN block_id AND block_id + blocks - 1;
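This identifies the object that owns the corrupt block. For example, for
ORA-01578: ORACLE data block corrupted (file # 7, block # 12345) -- illustrative
numbers -- answer 7 for &bad_file_id and 12345 for &bad_block_id. If the owning
segment turns out to be an index, it can usually just be dropped and rebuilt;
a corrupt table block generally means restoring from backup or salvaging the
data around the bad block.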

19.17 Database does not start (1) SGADEF.DBF LK.DBF
---------------------------------------------------

Note:1034037.6
Subject: ORA-01102: WHEN STARTING THE DATABASE
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 25-JUL-1997
Last Revision Date: 10-FEB-2000

Problem Description:
====================

You are trying to startup the database and you receive the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode
Cause: Some other instance has the database mounted exclusive
or shared.
Action: Shutdown other instance or mount in a compatible mode.
or

scumnt: failed to lock /opt/oracle/product/8.0.6/dbs/lkSALES


Fri Sep 13 14:29:19 2002
ORA-09968: scumnt: unable to lock file
SVR4 Error: 11: Resource temporarily unavailable
Fri Sep 13 14:29:19 2002
ORA-1102 signalled during: alter database mount...
Fri Sep 13 14:35:20 2002
Shutting down instance (abort)

Problem Explanation:
====================

A database is started in EXCLUSIVE mode by default. Therefore, the
ORA-01102 error is misleading and may have occurred due to one of the
following reasons:

- there is still an "sgadef<sid>.dbf" file in the "ORACLE_HOME/dbs"
  directory
- the processes for Oracle (pmon, smon, lgwr and dbwr) still exist
- shared memory segments and semaphores still exist even though the
  database has been shutdown
- there is a "ORACLE_HOME/dbs/lk<sid>" file

Search Words:
=============

ORA-1102, crash, immediate, abort, fail, fails, migration

Solution Description:
=====================

Verify that the database was shutdown cleanly by doing the following:

1. Verify that there is not a "sgadef<sid>.dbf" file in the directory
   "ORACLE_HOME/dbs".

% ls $ORACLE_HOME/dbs/sgadef<sid>.dbf

If this file does exist, remove it.

% rm $ORACLE_HOME/dbs/sgadef<sid>.dbf

2. Verify that there are no background processes owned by "oracle"

% ps -ef | grep ora_ | grep $ORACLE_SID

If background processes exist, remove them by using the Unix
command "kill". For example:

% kill -9 <Process_ID_Number>
3. Verify that no shared memory segments or semaphores owned
   by "oracle" still exist

% ipcs -b

If there are shared memory segments or semaphores owned by "oracle",
remove the shared memory segments

% ipcrm -m <Shared_Memory_ID_Number>

and remove the semaphores

% ipcrm -s <Semaphore_ID_Number>

NOTE: The example shown above assumes that you only have one
database on this machine. If you have more than one
database, you will need to shutdown all other databases
before proceeding with Step 4.

4. Verify that the "$ORACLE_HOME/dbs/lk<sid>" file does not exist

5. Startup the instance

Solution Explanation:
=====================

The "lk<sid>" and "sgadef<sid>.dbf" files are used for locking shared memory.
It seems that even though no memory is allocated, Oracle thinks memory is
still locked. By removing the "sgadef" and "lk" files you remove any knowledge
oracle has of shared memory that is in use. Now the database can start.
.

19.18 Rollback segment missing, active transactions
---------------------------------------------------

Note:1013221.6
Subject: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 16-OCT-1995
Last Revision Date: 18-JUN-2002

Solution 1:
---------------

Error scenario:

1. set transaction use rollback segment rb1;
2. INSERTs into tables...
3. SHUTDOWN ABORT; (simulates a media error)
4. Delete file rb1.ora (tablespace RB1 with segment rb1);
5. Restore a backup of the file.

Recover:

1. Comment out the INIT.ORA ROLLBACK_SEGMENTS parameter, so Oracle does not
   try to find the broken segment rb1.
2. STARTUP MOUNT
3. ALTER DATABASE DATAFILE 'rb1.ora' OFFLINE;
4. ALTER DATABASE OPEN # now we are in business
5. CREATE ROLLBACK SEGMENT rbtemp TABLESPACE SYSTEM;
   # we need a temporary RBS for the further steps
6. ALTER ROLLBACK SEGMENT rbtemp ONLINE;
7. RECOVER TABLESPACE RB1;
8. ALTER TABLESPACE RB1 ONLINE;
9. ALTER ROLLBACK SEGMENT rb1 ONLINE;
10. ALTER ROLLBACK SEGMENT rbtemp OFFLINE;
11. DROP ROLLBACK SEGMENT rbtemp;

Result: uncommitted transactions are successfully rolled back, and the
instance is not left suspect.

Solution 2:
---------------

INTRODUCTION
------------
Rollback segments can be monitored through the data dictionary view,
dba_rollback_segs. There is a status column that describes what state the
rollback segment is currently in. Normal states are either online or offline.
Occasionally, the status of "needs recovery" will appear.

When a rollback segment is in this state, bringing the rollback segment
offline or online, either through the alter rollback segment command or by
removing it FROM the rollback_segments parameter in the init.ora, usually
has no effect.

UNDERSTANDING
-------------
A rollback segment falls into this status of needs recovery whenever
Oracle tries to roll back an uncommitted transaction in its transaction
table and fails.

Here are some examples of why a transaction may need to roll back:

1-A user does a DML transaction and decides to issue a rollback.
2-A shutdown abort occurs and the database needs to do an instance recovery,
  in which case Oracle has to roll back all uncommitted transactions.

When a rollback of a transaction occurs, undo must be applied to the
data block the modified row/s are in. If for whatever reason that data
block is unavailable, the undo cannot be applied. The result is a 'corrupted'
rollback segment with the status of needs recovery.

What could be some reasons a datablock is inaccessible for undo?

1-If a tablespace or a datafile is offline or missing.
2-If the object the datablock belongs to is corrupted.
3-If the datablock that is corrupt is actually in the rollback segment
  itself rather than the object.
HOW TO RESOLVE IT
-----------------
1-MAKE sure that all tablespaces are online and all datafiles are
online. This can be checked through v$datafile, under the
status column. For tablespaces associated with the datafiles,
look in dba_tablespaces.

If that still does not resolve the problem then

2-PUT the following in the init.ora-

   event = "10015 trace name context forever, level 10"

Setting this event will generate a trace file that will reveal the
necessary information about the transaction Oracle is trying to roll
back and most importantly, what object Oracle is trying to apply
the undo to.

3-SHUTDOWN the database (if normal does not work, immediate, if that does
not work, abort) and bring it back up.

Note: An ORA-1545 may be encountered, or other errors. If the database
cannot start up, contact customer support at this point.

4-CHECK in the directory that is specified by the user_dump_dest parameter
(in the init.ora or show parameter command) for a trace file that was
generated at startup time.

5-IN the trace file, there should be a message similar to-

   error recovery tx(#,#) object #.

TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects.

6-USE the following query to find out what object Oracle is trying to
perform recovery on.

SELECT owner, object_name, object_type, status
FROM dba_objects WHERE object_id = <object #>;

7-THIS object must be dropped so the undo can be released. An export or relying
on a backup may be necessary to restore the object after the corrupted
rollback segment goes away.

8-AFTER dropping the object, put the rollback segment back in the init.ora
parameter rollback_segments, remove the event, and shutdown and startup
the database.

In most cases, the above steps will resolve the problematic rollback segment.
If this still does not resolve the problem, it may be likely that the
corruption is in the actual rollback segment.
At this point, if the problem has not been resolved, please contact
customer support.

Solution 3:
---------------

Recovery from the loss of a rollback segment datafile containing active
transactions

How do I recover the datafile containing rollback segments having active
transactions, when the backup is done with RMAN without using a catalog?
I have tried the case study from the Oracle recovery handbook, but
when I tried to open the database after offlining the rollback segment file
I got the following errors:

ORA-00604: error occurred at recursive SQL level 2
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/orabackup/CCD1prod/oradata/rbs01CCD1prod.dbf'

The status of the datafile was "Recover".
Anyhow, shutting down and startup mounting the database allows for the database
or the datafile recovery, but this was done through SVRMGRL.

Here is what's happening:

Simulate the loss of the datafile by removing it from the OS and shut down
abort the database. Mount the database so RMAN can restore the file;
at this point offlining the file succeeds but you cannot open the database.
So the question is: can we offline a rollback segment datafile containing
active transactions and open the database?
How to perform recovery in such a case using an RMAN backup without using
the catalog?
I appreciate any insight and tips into this issue.

Madhukar

FROM: Oracle, Tom Villane 01-May-02 21:04


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi,

The only supported way to recover FROM the loss of a rollback segment datafile
containing
a rollback segment with a potentially active data dictionary transaction is to
restore the datafile
FROM backup and roll forward to a point in time prior to the loss of the datafile
(assuming archivelog mode).

Tom Villane Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 02-May-02 06:46


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi Tom,
What does rolling forward up to a time prior to the loss of the datafile have
to do with the recovery? Are you suggesting this so that the active transaction
is not lost, and is that possible?
Because during the recovery the roll forward is followed by rollback, and all
the active transactions from the rollback segment's transaction table will be
rolled back, isn't it?
My question is: if I have an active transaction in a rollback segment, and the
file containing that rollback segment is lost, and the database crashed or did
a shutdown abort, can we open the database after offlining the datafile and
commenting out the rollback_segments parameter in the init.ora?
I tried to do it and got the errors which I mentioned earlier.
So in this case do I have to do offline recovery only, or what?
Thanks,
madhukar

FROM: Oracle, Tom Villane 02-May-02 16:24


Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile
containing active transactions

Hi,

You won't be able to open the database if you lose a rollback segment datafile
that contains an active transaction.
You will have to:
Restore a good backup of the file
RECOVER DATAFILE '<name>'
ALTER DATABASE DATAFILE '<name>' ONLINE;

The only way you would be able to open the database is if the status of the
rollback were OFFLINE,
any other status requires that you recover as noted before.

As recovering from rollback corruption needs to be done properly,
you may want to log an iTAR if you have additional questions.

Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 03-May-02 07:22


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi Tom,
Thank you for the reply. You said that the only way the database can be opened
is if the status of the rollback segment was offline, but what happens to an
active transaction which was using this rollback segment? Once the database is
opened and the media recovery performed on the datafile, the database will show
values which were part of an active transaction and not committed. Isn't this
logical corruption?

madhukar
FROM: Madhukar Yedulapuram 05-May-02 08:14
Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Tom,
Can I get some response to my questions?

Thank You,
Madhukar

FROM: Oracle, Tom Villane 07-May-02 13:53


Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile
containing active transactions

Hi,

Sorry for the confusion, I should not have said "rolling forward to a point in
time..." in my previous reply.
No, there won't be corruption or inconsistency. The redo logs will contain the
information for both
committed and uncommitted transactions. Since this includes changes made to
rollback segment blocks,
it follows that rollback data is also (indirectly) recorded in the redo log.
To recover from a loss of datafiles in the SYSTEM tablespace or
datafiles with active rollback segments, you must perform closed database
recovery:
-Shutdown the database
-Restore the file from backup
-Recover the datafile
-Open the database.

References:
Oracle8i Backup and Recovery Guide, chapter 6 under "Losing Datafiles in
ARCHIVELOG Mode ".

Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 07-May-02 22:23


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi Tom,
After offlining the rollback segment containing an active transaction you can
open the database and do the recovery, and after that any active transactions
should be rolled back and the data should not show up. But I performed the
following test, and Oracle is showing logical corruption by showing data
which was never committed.

SVRMGR> create tablespace test_rbs datafile '/orabackup/CCD1prod/oradata/test_rbs01.dbf' size 10M
2> default storage (initial 1M next 1M minextents 1 maxextents 1024);
Statement processed.
SVRMGR> create rollback segment test_rbs tablespace test_rbs;
Statement processed.
SVRMGR> create table case5 (c1 number) tablespace tools;
Statement processed.
SVRMGR> set transaction use rollback segment test_rbs;
ORA-01598: rollback segment 'TEST_RBS' is not online
SVRMGR> alter rollback segment test_rbs online;
Statement processed.
SVRMGR> set transaction use rollback segment test_rbs;
Statement processed.
SVRMGR> insert into case5 values (5);
1 row processed.
SVRMGR> alter rollback segment test_rbs offline;
Statement processed.
SVRMGR> shutdown abort
ORACLE instance shut down.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 145981600 bytes
Fixed Size 73888 bytes
Variable Size 98705408 bytes
Database Buffers 26214400 bytes
Redo Buffers 20987904 bytes
Database mounted.
SVRMGR> alter database datafile '/orabackup/CCD1prod/oradata/test_rbs01.dbf'
offline;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR> recover tablespace test_rbs;
Media recovery complete.
SVRMGR> alter tablespace test_rbs online;
Statement processed.
SVRMGR> SELECT * FROM case5;
C1
----------
5
1 row SELECTed.
SVRMGR> alter rollback segment test_rbs online;
Statement processed.
SVRMGR> SELECT * FROM case5;
C1
----------
5
1 row SELECTed.
SVRMGR> drop rollback segment test_rbs;
drop rollback segment test_rbs
*
ORA-01545: rollback segment 'TEST_RBS' specified not available
SVRMGR> SELECT segment_name,status FROM dba_rollback_segs;
SEGMENT_NAME STATUS
------------------------------ ----------------
SYSTEM ONLINE
R0 OFFLINE
R01 OFFLINE
R02 OFFLINE
R03 OFFLINE
R04 OFFLINE
R05 OFFLINE
R06 OFFLINE
R07 OFFLINE
R08 OFFLINE
R09 OFFLINE
R10 OFFLINE
R11 OFFLINE
R12 OFFLINE
BIG_RB OFFLINE
TEST_RBS ONLINE
16 rows SELECTed.

SVRMGR> drop rollback segment test_rbs;


drop rollback segment test_rbs
*
ORA-01545: rollback segment 'TEST_RBS' specified not available

Here I have to bring the rollback segment offline to drop it.

Can this be explained, or is this a bug? Because this caused logical corruption.

FROM: Oracle, Tom Villane 10-May-02 13:19


Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile
containing active transactions

Hi,

What you are showing is expected and normal, and not corruption.
At the time that you issue the "alter rollback segment test_rbs online;" Oracle
does an implicit commit, because any "ALTER" statement is considered DDL and
Oracle issues an implicit COMMIT before and after any data definition
language (DDL) statement.

Regards
Tom Villane
Oracle Support Metalink Analyst

--------------------------------------------------------------------------------

FROM: Madhukar Yedulapuram 14-May-02 20:12


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi Tom,
So what you are saying is: the moment I say
alter rollback segment RBS# online, Oracle will issue an implicit commit.
But if you look at my test, just after performing the tablespace recovery
(the RBS tablespace had only one datafile,
which was offlined before opening the database and doing the recovery),
I brought the tablespace online and did a SELECT from the table which was
holding the active transaction in one of the rollback segments. So this
statement issued an implicit commit, and I could see data which was never
actually committed. Doesn't this contradict Oracle's stance that only
committed data will be shown?
I think this statement is true for instance and crash recovery, not for media
recovery as the case in point proves; but still, if you say Oracle issues an
implicit commit, then the stance of Oracle is consistent.

madhukar

FROM: Oracle, Tom Villane 15-May-02 18:30


Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile
containing active transactions

Hi,

A slight correction to what I posted: I should have said the implicit commit
happened when the rollback segment was altered offline.

Whether it's an implicit commit (before and after a DDL statement like CREATE,
DROP, RENAME, ALTER), or the user did the commit, or the user exits the
application (which forces a commit) -
all of the above are considered commits and the data will be saved.

Regards
Tom Villane
Oracle Support Metalink Analyst

FROM: Madhukar Yedulapuram 16-May-02 23:17


Subject: Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions

Hi Tom,
Thank you very much. So the moment I brought the RBS offline, the transaction
was committed and the data saved in the table, is that what you are saying?
So the data was committed even before performing the recovery, and the
recovery is essentially not applying anything in this case.

madhukar

FROM: Oracle, Tom Villane 17-May-02 12:18


Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafile
containing active transactions

Hi,
Yes, that is what happened.

Regards
Tom Villane
Oracle Support Metalink Analyst

19.19 After backup you increase a datafile.
-------------------------------------------

Problem 2: "the backed up datafile size is smaller, and Oracle won't
accept it for recovery."

This isn't a problem, because we most certainly will accept that file. As a
test you can do this (I just did):

o create a small 1m tablespace with a datafile.
o alter it and begin backup.
o copy the datafile
o alter it and end backup.
o alter the datafile and "autoextend on next 1m" it.
o create a table with initial 2m initial extent. This will
grow the datafile.
o offline the tablespace
o copy the 1m original file back.
o try to online it -- it'll tell you the file that needs
recovery (its already accepted the smaller file at this
point)
o alter database recover datafile 'that file';
o alter the tablespace online again -- all is well.

As for the questions:

1) There is such a command -- "alter database create datafile". Here is an
example I just ran through:

tkyte@TKYTE816> alter tablespace t begin backup;
Tablespace altered.

I copied the single datafile that is in T at this point

tkyte@TKYTE816> alter tablespace t end backup;
Tablespace altered.

tkyte@TKYTE816> alter tablespace t add datafile 'c:\temp\t2.dbf' size 1m;
Tablespace altered.

So, I added a datafile AFTER the backup...

tkyte@TKYTE816> alter tablespace t offline;
Tablespace altered.

At this point, I went out and erased the two datafiles associated with T. I
moved the copy of the one datafile in place...
tkyte@TKYTE816> alter tablespace t online;
alter tablespace t online
*
ERROR at line 1:
ORA-01113: file 9 needs media recovery
ORA-01110: data file 9: 'C:\TEMP\T.DBF'

So, it sees the copy is out of sync...

tkyte@TKYTE816> recover tablespace t;
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10: 'C:\TEMP\T2.DBF'

and now it tells of the missing datafile -- all we need do at this point is:

tkyte@TKYTE816> alter database create datafile 'c:\temp\t2.dbf';
Database altered.

tkyte@TKYTE816> recover tablespace t;
Media recovery complete.
tkyte@TKYTE816> alter tablespace t online;
Tablespace altered.

and we are back in business....

19.22 Setting Trace Events
--------------------------

database level, via init.ora:

EVENT="604 TRACE NAME ERRORSTACK FOREVER"
EVENT="10210 TRACE NAME CONTEXT FOREVER, LEVEL 10"

session level

ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME BLOCKDUMP LEVEL 67109037';
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME CONTROLF LEVEL 10';

system trace dump file:

ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME SYSTEMSTATE LEVEL 10';
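Another frequently used session-level event is 10046, the extended SQL trace
(a sketch; level 12 includes bind values and wait events, and the trace file
appears in user_dump_dest):

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the statements to be traced, then:
ALTER SESSION SET EVENTS '10046 trace name context off';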

19.23 DROP TEMP DATAFILE
------------------------

SVRMGRL>startup mount
SVRMGRL>alter database open;
ora-01157 cannot identify datafile 4 - file not found
ora-01110 data file 4 '/oradata/temp/temp.dbf'
SVRMGRL>alter database datafile '/oradata/temp/temp.dbf' offline drop;
SVRMGRL>alter database open;
SVRMGRL>drop tablespace temp including contents;
SVRMGRL>create tablespace temp datafile '....
19.24 SYSTEM DATAFILE RECOVERY
-----------------------------

- a normal datafile can be taken offline and the database started up.
- the system file can be taken offline but the database cannot start

- restore a backup copy of the system file
- recover the file
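A minimal sketch of the sequence (assuming archivelog mode; file 1 is normally
the SYSTEM datafile, verify with v$datafile):

SQL> startup mount
SQL> recover datafile 1;
SQL> alter database open;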

19.25 Strange processes=.. and database does not start
------------------------------------------------------

Does the PROCESSES initialization parameter of init.ora depend on some other
parameter?
We were getting the error:
maximum no of processes (50) exceeded.....
The value was initially set to 50, so the value was changed to 200, and the
database was restarted; it then gave an error of "end-of-file on communication
channel". The value was reduced to 150 and 100 and the same error was
encountered; when it was set back to 50, the database started.
Can anyone clear this up?

Check out your semaphore settings in /etc/system;
try increasing seminfo_semmns.

19.26 ORA-00600
--------------

I work with ORACLE DB ver. 8.0.5
and received an error in the alert.log:
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [12700], [3383], [41957137], [44], [],
[], [], []

oerr ora 600
00600, 00000, "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
Cause: This is the generic internal error number for Oracle program
       exceptions. This indicates that a process has encountered an
       exceptional condition.
Action: Report as a bug - the first argument is the internal error number.

Number [12700] indicates "invalid NLS parameter value (%s)"
Cause: An invalid or unknown NLS configuration parameter was specified.

19.27 segment has reached its max_extents
-----------------------------------------

Version 7.3 and later:
You can set the MAXEXTENTS storage parameter value to UNLIMITED for any
object.
Rollback Segment
================
ALTER ROLLBACK SEGMENT rollback_segment STORAGE ( MAXEXTENTS UNLIMITED);

Temporary Segment
=================
ALTER TABLESPACE tablespace DEFAULT STORAGE ( MAXEXTENTS UNLIMITED);

Table Segment
=============
ALTER TABLE MANIIN_ASIAKAS STORAGE ( MAXEXTENTS UNLIMITED);

ALTER TABLE MANIIN_ASIAKAS STORAGE ( NEXT 5M );

Index Segment
=============
ALTER INDEX index STORAGE ( MAXEXTENTS UNLIMITED);

Table Partition Segment
=======================
ALTER TABLE table MODIFY PARTITION partition STORAGE (MAXEXTENTS UNLIMITED);

19.28 max logs
--------------

Problem Description
-------------------
In the "alert.log", you find the following warning messages:
kccrsz: denied expansion of controlfile section 9 by 65535 record(s)
the number of records is already at maximum value (65535)
krcpwnc: following controlfile record written over:
RECID #520891 Recno 53663 Record timestamp ...
kccrsz: denied expansion of controlfile section 9 by 65535 record(s)
the number of records is already at maximum value (65535)
krcpwnc: following controlfile record written over:
RECID #520892 Recno 53664 Record timestamp

The database is still running.
The CONTROL_FILE_RECORD_KEEP_TIME init parameter is set to 7.
If you display the records used in the LOG HISTORY section 9 of the controlfile:

SQL> SELECT * FROM v$controlfile_record_section WHERE type='LOG HISTORY';

TYPE          RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID
------------- ------------- ------------ ----------- ---------- ----------
LOG HISTORY           65535        65535       33864      33863     520892

The number of RECORDS_USED has reached the maximum allowed in RECORDS_TOTAL.

Solution Description
--------------------
Set the CONTROL_FILE_RECORD_KEEP_TIME to 0:
* Insert the parameter CONTROL_FILE_RECORD_KEEP_TIME = 0 IN "INIT.ORA"
-OR-
* Set it momentarily if you cannot shut the database down now:

SQL> alter system set control_file_record_keep_time=0;

Explanation
-----------
* The default value for CONTROL_FILE_RECORD_KEEP_TIME is 7 days:

SELECT value FROM v$parameter
WHERE name='control_file_record_keep_time';

VALUE
-----
7

* The MAXLOGHISTORY database parameter has already reached the maximum of
65535 and it cannot be increased any more.

SQL> alter database backup controlfile to trace;

=> in the trace file, MAXLOGHISTORY is 65535

The MAXLOGHISTORY increases dynamically when the
CONTROL_FILE_RECORD_KEEP_TIME is set to a value different from 0,
but does not exceed 65535. Once reached, the message appears in the
alert.log warning you that a controlfile record is written over.

19.29 ORA-470 maxloghistory
---------------------------

Problem Description:
====================
Instance cannot be started because of ORA-470. LGWR has also died
creating a trace file with an ORA-204 error. It is possible that the
maxloghistory limit of 65535 as specified in the controlfile has
been reached.
Diagnostic Required:
====================
The following information should be requested for diagnostics:
1. LGWR trace file produced
2. Dump of the control file - using the command:
ALTER SESSION SET EVENTS 'immediate trace name controlf level 10'
3. Controlfile contents, using the command:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Diagnostic Analysis:
====================
The following observations will indicate that we have the maxloghistory
limit of 65535:
1. The Lgwr trace file should show the following stack trace:
- in 8.0.3 and 8.0.4, OSD skgfdisp returns ORA-27069,
stack:
kcrfds -> kcrrlh -> krcpwnc -> kccroc -> kccfrd -> kccrbl -> kccrbp
- in 8.0.5 kccrbl causes SEGV before the call to skgfdisp
with wrong block number.
stack:
kcrfds -> kcrrlh -> krcpwnc -> kccwnc -> kccfrd -> kccrbl
2. FROM the 'dump of the controlfile':
...
... numerous lines omittted
...
LOG FILE HISTORY RECORDS:
(blkno = 0x13, size = 36, max = 65535, in-use = 65535, last-recid= 188706)
...
the max value of 65535 reconfirms that the limit has been reached.
3. Further confirmation can be seen FROM the controlfile trace:
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 50
MAXINSTANCES 1
MAXLOGHISTORY 65535
...
Diagnostic Solution:
===================
1. Set control_file_record_keep_time = 0 in the init.ora.
This parameter specifies the minimum age of a log history record
in days before it can be reused. With the parameter set to 0,
reusable sections never expand and records are reused immediately
as required.
[NOTE:1063567.6] gives a good description on the use of this parameter.
2. Mount the database and retrieve details of online redo log files for use in
step 6. Because the recovery will need to roll forward through current online
redo logs, a list of online log details is required to indicate which redo
log is current. This can be obtained using the following command:
startup mount
SELECT * FROM v$logfile;
3. Open the database.
This is a very important step. Although the startup will fail, attempting the
open before recreating the controlfile in step 5 enables crash recovery to
repair any incomplete log switch. Without this step it may be impossible
to recover the database.
alter database open
4. Shutdown the database, if it did not already crash in step 3.
5. Using the backup controlfile trace, recreate the controlfile with a smaller
maxloghistory value. The MAXLOGHISTORY section of the current control file
cannot be extended beyond 65536 entries. The value should reflect the amount
of log history that you wish to maintain.
An ORA-219 may be returned when the size of the controlfile, based on the
values of the MAX- parameters, is higher then the maximum allowable size.
[NOTE:1012929.6] gives a good step-by-step guide to recreating the control file.
6. Recover the database.
The database will automatically be mounted due to the recreation of the
controlfile in step 5 :
Recover database using backup controlfile;
At the recovery prompt apply the online logs in sequence by typing the
unquoted full path and file name of the online redo log to apply, as noted
in step 2. After applying the current redo log, you will receive the
message 'Media Recovery Complete'.
7. Once media recovery is complete, open the database as follows:
alter database open resetlogs;

Note: if the message "Control file resized from ..." keeps recurring, as in
this trace excerpt:
> /dbms/tdbaplay/playroca/admin/dump/udump/playroca_ora_1548438.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaplay/ora10g/home
> System name: AIX
> Node name: pl003
> Release: 3
> Version: 5
> Machine: 00CB560D4C00
> Instance name: playroca
> Redo thread mounted by this instance: 1
> Oracle process number: 28
> Unix process pid: 1548438, image: oracle@pl003 (TNS V1-V3)
>
> *** 2008-02-21 12:51:57.587
> *** ACTION NAME:(0000010 FINISHED67) 2008-02-21 12:51:57.583
> *** SERVICE NAME:(SYS$USERS) 2008-02-21 12:51:57.583
> *** SESSION ID:(518.643) 2008-02-21 12:51:57.583
> Control file resized from 454 to 470 blocks
> kccrsd_append: rectype = 28, lbn = 227, recs = 1128

19.30 Compatible init.ora change:
---------------------------------

Database files have the COMPATIBLE version in the file header. If you
set the parameter to a higher value, all the headers will be updated at next
database startup. This means that if you shutdown your database, downgrade the
COMPATIBLE parameter, and try to restart your database, you'll receive an error
message something like:

ORA-00201: control file version 7.3.2.0.0 incompatible with ORACLE version
7.0.12.0.0
ORA-00202: control file: '/usr2/oracle/dbs/V73A/ctrl1V73A.ctl'

In the above case, the database was running with COMPATIBLE 7.3.2.0. I commented
out the parameter in init.ora, that is, the kernel uses the default 7.0.12.0 and
returns an error before mounting, since the kernel cannot read the controlfile
header.

- You may only change the value of COMPATIBLE after a COLD Backup.
- You may only change the value of COMPATIBLE if the database has been
shutdown in NORMAL/IMMEDIATE mode.

This parameter allows you to use a new release, while at the same time
guaranteeing backward
compatibility with an earlier release (in case it becomes necessary to revert to
the earlier release).
This parameter specifies the release with which Oracle7 Server must maintain
compatibility.
Some features of the current release may be restricted. For example, if you are
running release 7.2.2.0
with compatibility set to 7.1.0.0 in order to guarantee compatibility, you will
not be able to use 7.2 features.
When using the standby database and feature, this parameter must have the same
value on the primary
and standby databases, and the value must be 7.3.0.0.0 or higher. This parameter
allows you to immediately
take advantage of the maintenance improvements of a new release in your production
systems
without testing the new functionality in your environment. The default value is
the earliest release with which
compatibility can be guaranteed. Ie: It is not possible to set COMPATIBLE to 7.3
on an Oracle8 database.
-----------------

Hi Tom, I just installed DB 9.0.1. I tried to modify a parameter in the init.ora
file: compatible=9.0.0 (default) to 8.1.0.
After I restarted the 901 DB, I got the error below when I log in to sqlplus:
ERROR: ORA-01033: ORACLE initialization or shutdown in progress
Anything wrong with that? If I change back, everything is ok.

The database could not start up. If you start the database manually, from the
command line -- you would discover this. For example:

idle> startup pfile=initora920.ora
ORACLE instance started.
Total System Global Area 143725064 bytes
Fixed Size 451080 bytes
Variable Size 109051904 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-00402: database changes by release 9.2.0.0.0 cannot be used by release
8.1.0.0.0
ORA-00405: compatibility type "Locally Managed SYSTEM tablespace" .....
Generally, compatible cannot be set DOWN, as in many cases you are already
using new features that are not compatible with the older release.
You would have had to have created the database with 8.1 file formats
(compatible set to 8.1 from the very beginning).
------------------------------

19.31 ORA-27044: unable to write the header block of file:
----------------------------------------------------------

Problem Description:
====================

When you manually switch redo logs, or when the log buffer causes the redo
threads to switch, you see errors similar to the following in your alert log:

...
Fri Apr 24 13:42:00 1998
Thread 1 advanced to log sequence 170
Current log# 4 seq# 170 mem# 0: /.../rdlACPT04.rdl
Fri Apr 24 13:42:04 1998
Errors in file /.../acpt_arch_15973.trc:
ORA-202: controlfile: '/.../ctlACPT01.dbf'
ORA-27044: unable to write the header block of file
SVR4 Error: 48: Operation not supported
Additional information: 3
Fri Apr 24 13:42:04 1998
kccexpd: controlfile resize from 356 to 368 block(s) denied by OS
...

Note: The particular SVR4 error observed may differ in your case and is
irrelevant here.

ORA-00202: "controlfile: '%s'"
Cause: This message reports the name of the file involved in other messages.
Action: See associated error messages for a description of the problem.

ORA-27044: "unable to write the header block of file"
Cause: write system call failed; additional information indicates
       which function encountered the error.
Action: check errno

Solution Description:
=====================

To workaround this problem you can:

1. Use a database blocksize smaller than 16k. This may not be practical
in all cases, and to change the db_block_size of a database
you must rebuild the database.

- OR -

2. Set the init.ora parameter CONTROL_FILE_RECORD_KEEP_TIME equal to
   zero. This can be done by adding the following line to your
   init.ora file:

   CONTROL_FILE_RECORD_KEEP_TIME = 0

The database must be shut down and restarted to have the changed
init.ora file read.

Explanation:
============

This is [BUG:663726], which is fixed in release 8.0.6.

The write of a 16K buffer to a control file seems to fail during an implicit
resize operation on the controlfile that came as a result of adding log
history records (V$LOG_HISTORY) when archiving an online redo log after a log
switch.

Starting with Oracle8 the control file can grow to a much larger size than it
was able to in Oracle7. [BUG:663726] is only reproducible when the control file
needs to grow AND when the db_block_size = 16k. This has been tested on
instances with a smaller database block size and the problem has not been able
to be reproduced.
Records in some sections in the control file are circularly reusable while
records in other sections are never reused. CONTROL_FILE_RECORD_KEEP_TIME
applies to reusable sections. It specifies the minimum age in days that a
record must have before it can be reused. In the event a new record needs to
be added to a reusable section and the oldest record has not aged enough, the
record section expands.

If CONTROL_FILE_RECORD_KEEP_TIME is set to 0, then reusable sections never
expand and records are reused as needed.

19.32 ORA-04031 error shared_pool:
----------------------------------

DIAGNOSING AND RESOLVING ORA-04031 ERROR

For most applications, shared pool size is critical to Oracle performance. The
shared pool holds both the data dictionary cache and the fully parsed or
compiled representations of PL/SQL blocks and SQL statements.
When any attempt to allocate a large piece of contiguous memory in the shared
pool fails, Oracle first flushes all objects that are not currently in use
from the pool, and the resulting free memory chunks are merged.
If there is still not a single chunk large enough to satisfy the request,
ORA-04031 is returned.
The message that you will get when this error appears is the following:
Error: ORA 4031
Text: unable to allocate %s bytes of shared memory (%s,%s,%s)

The ORA-04031 error is usually due to fragmentation in the library cache
or in the shared pool reserved space. Before increasing the shared pool size,
consider tuning the application to use shared SQL, and tuning
SHARED_POOL_SIZE, SHARED_POOL_RESERVED_SIZE, and SHARED_POOL_RESERVED_MIN_ALLOC.

First determine if the ORA-04031 was a result of fragmentation in the library
cache or in the shared pool reserved space by issuing the following query:

SELECT free_space, avg_free_size, used_space,
       avg_used_size, request_failures, last_failure_size
FROM v$shared_pool_reserved;

The ORA-04031 is a result of lack of contiguous space in the shared pool
reserved space if:
REQUEST_FAILURES is > 0 and LAST_FAILURE_SIZE is >
SHARED_POOL_RESERVED_MIN_ALLOC.

To resolve this, consider increasing SHARED_POOL_RESERVED_MIN_ALLOC to lower
the number of objects being cached into the shared pool reserved space, and
increase SHARED_POOL_RESERVED_SIZE and SHARED_POOL_SIZE to increase the
available memory in the shared pool reserved space.

The ORA-04031 is a result of lack of contiguous space in the library cache if:
REQUEST_FAILURES is > 0 and LAST_FAILURE_SIZE is <
SHARED_POOL_RESERVED_MIN_ALLOC
or
REQUEST_FAILURES is 0 and LAST_FAILURE_SIZE is < SHARED_POOL_RESERVED_MIN_ALLOC.
The first step would be to consider lowering SHARED_POOL_RESERVED_MIN_ALLOC to
put more objects into the shared pool reserved space, and to increase
SHARED_POOL_SIZE.

The v$sqlarea view keeps information on every SQL statement and PL/SQL block
executed in the database.
The following SQL can show you statements with literal values, i.e. candidates
for bind variables:

SELECT substr(sql_text,1,40) "SQL",
       count(*),
       sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;
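To see how much shared pool memory is currently free, v$sgastat can also be
queried (a quick check, not a substitute for the reserved-pool query above):

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'shared pool'
AND name = 'free memory';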

19.33 ORA-4030 Out of memory:
-----------------------------

Possibly no memory left in Oracle, or the OS does not grant more memory.
Also inspect the size of any swap file.

The error is also reported if execute permissions are not in place
on some procedure.

19.34 wrong permissions on oracle:
----------------------------------

Hi,

I am in a very confusing situation.
I'm running a database (8.1.7).
My Oracle is installed under ownership of userid "oracle".

When I log in with Unix id "TEST", set the oracle_sid, oracle_home and PATH
variables, and then do sqlplus sys,
after logging in, when I run
"select file#, error from v$datafile_header;"
for some file# I get the error "CAN NOT READ HEADER".

But when I log in through another Unix id and do the same thing,
I'm not getting any error.

This seems very, very confusing.
Could you tell me the reason behind this?

Thanks & Regards,


Atul
Followup:
sounds like you did not run the root.sh during the install and the permissions
on the oracle binaries are wrong.

what does ls -l $ORACLE_HOME/bin/oracle look like. it should look like this:

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 ora920 ora920 51766646 Mar 31 13:03
/usr/oracle/ora920/bin/oracle

with the "s" bits set.

rwsr-s--x 1 oracle dba 494456 Dec 7 1999 lsnrctl

regardless of who I log in as, when you have a setuid program as the oracle
binary is, it'll be running "as the owner"

tell me, what does ipcs -a show you, who is the owner of the shared memory
segments associated with the SGA. If that is not Oracle -- you are "getting
confused" somewhere for the s bit would ensure that Oracle was the owner.

Some connection troubleshooting:
--------------------------------

19.35:
======

ORA-12545:
----------

This one is probably due to the IP or HOSTNAME in tnsnames.ora being wrong.

ORA-12514:
----------

This one is probably due to the SERVICE_NAME in tnsnames.ora being wrong, or
needing to be fully qualified with the domain name.

ORA-12154:
----------

This one is probably due to the alias you have used in the logon dialog box
being wrong.

ORA-12535:
----------

The TNS-12535 or ORA-12535 error is normally a timeout error associated
with firewalls or slow networks.
+ It can also be an incorrect listener.ora parameter setting for the
  CONNECT_TIMEOUT_<listener_name> value specified.
+ In essence, the ORA-12535/TNS-12535 is a timing issue between the client
  and server.

ORA-12505:
----------

TNS:listener does not currently know of SID given in connect descriptor

Note 1:
-------

Symptom:
When trying to connect to Oracle the following error is generated:

ORA-12224: TNS: listener could not resolve SID given in connection description.

Cause:
The SID specified in the connection was not found in the listener's tables.
This error will be returned if the database instance has not registered with
the listener.

Possible Remedy:
Check to make sure that the SID is correct. The SIDs that are currently registered
with the listener can be obtained by typing:

LSNRCTL SERVICES <listener-name>

These SIDs correspond to SID_NAMEs in TNSNAMES.ORA or DB_NAME in the
initialisation file.

Note 2:
-------

ORA-12505: TNS:listener could not resolve SID given in connect descriptor

You are trying to connect to a database, but the SID is not known.

Although it is possible that a tnsping command succeeds, there might still be
a problem with the SID parameter of the connection string.

eg.
C:>tnsping ora920

TNS Ping Utility for 32-bit Windows: Version 9.2.0.7.0 - Production

Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
c:\oracle\ora920\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =
TCP)(HOST = DEV01)(PORT = 2491))) (CONNECT_DATA = (SID = UNKNOWN) (SERVER =
DEDICATED)))
OK (20 msec)

As one can see, this is the connection information stored in a tnsnames.ora file:
ORA920.EU.DBMOTIVE.COM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = DEV01)(PORT = 2491))
)
(CONNECT_DATA =
(SID = UNKNOWN)
(SERVER = DEDICATED)
)
)

However, the SID UNKNOWN is not known by the listener at the database server side.
In order to test the known services by a listener, we can issue following command
at the database server side:
C:>lsnrctl services

LSNRCTL for 32-bit Windows: Version 10.1.0.2.0 - Production

Copyright (c) 1991, 2004, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DEV01)(PORT=1521)))
Services Summary...
Service "ORA10G.eu.dbmotive.com" has 1 instance(s).
Instance "ORA10G", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Service "ORA920.eu.dbmotive.com" has 2 instance(s).
Instance "ORA920", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Instance "ORA920", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:2 refused:0 state:ready
LOCAL SERVER
The command completed successfully

Known services are ORA10G and ORA920.

Changing the SID in our tnsnames.ora to a known service by the listener (ORA920)
solved the problem.

19.36 ORA-12560
---------------

Note 1:
-------

Oracle classifies this as a "generic protocol adapter error". In my experience
it indicates that the Oracle client does not know what instance to connect to,
or what TNS alias to use.

Set the correct ORACLE_HOME and ORACLE_SID variables.
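For example, on Unix (a sketch; the ORACLE_HOME path is hypothetical):

$ export ORACLE_HOME=/u01/app/oracle/product/9.2.0
$ export ORACLE_SID=ORCL
$ sqlplus /nolog

On Windows, check the ORACLE_SID value in the registry or set it with
"set ORACLE_SID=ORCL", and verify that the OracleService<SID> service is
running (see the notes below).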

Note 2:
-------

Doc ID: Note:73399.1
Subject: WINNT: ORA-12560 DB Start via SVRMGRL or SQL*PLUS ORACLE_SID is set
correctly
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 28-JUL-1999
Last Revision Date: 14-JAN-2004

PURPOSE

To assist in resolving ORA-12560 errors on Oracle8i.

SCOPE & APPLICATION

Support Analysts and customers.

RELATED DOCUMENTS

PR:1070749.6
NOTE:1016454.102 TNS 12560 DB CREATE VIA INSTALLATION OR CONFIGURATION
ASSISTANT FAILS
BUG:948671 ORADIM SUCCESSFULLY CREATES AN UNUSABLE SID WITH NON-ALPHANUMERIC
CHARACTER
BUG:892253 ORA-12560 CREATING DATABASE WITH DB CONFIGURATION ASSISTANT IF
SID HAS NON-ALPHA

If you encounter an ORA-12560 error when you try to start Server Manager
or SQL*Plus locally on your Windows NT server, you should first check
the ORACLE_SID value. Make sure the SID is correctly set, either in the
Windows NT registry or in your environment (with a set command). Also, you
must verify that the service is running. See the entries above for more details.
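A minimal sketch of these two checks on Windows NT (the SID ORCL is an
illustrative assumption):

C:\> set ORACLE_SID=ORCL
C:\> net start | find "OracleService"

If the service OracleServiceORCL does not appear in the output, start it:

C:\> net start OracleServiceORCL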

If you have verified that ORACLE_SID is properly set, and the service
is running, yet you still get an ORA-12560, then it is possible that you
have created an instance with a non-alphanumeric character.

The Getting Started Guide for Oracle8i on Windows NT documents that SID
names can contain only alphanumerics, however if you attempt to create a SID
with an underscore or a dash on Oracle8i you are not prevented from doing so.
The service will be created and started successfully, but attempts to connect
will fail with an ORA-12560.

You must delete the instance and recreate it with no special characters -
only alphanumerics are allowed in the SID name.

See BUG#948671, which was logged against 8.1.5 on Windows NT for this issue.

Note 3:
-------

Doc ID: Note:119008.1
Subject: ORA-12560 Connecting to the Server on Unix - Troubleshooting
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 04-SEP-2000
Last Revision Date: 20-MAR-2003
PURPOSE
-------

This note describes some of the possible reasons for ORA-12560 errors
connecting to server on Unix Box. The list below shows some of the
causes, the symptoms and the action to take. It is possible you will hit
a cause not described here, in that case the information above should allow
it to be identified.

SCOPE & APPLICATION
-------------------

Support Analysts and customers alike.

ORA-12560 CONNECTING TO THE SERVER ON UNIX - TROUBLESHOOTING
------------------------------------------------------------

ORA-12560: TNS:protocol adapter error

Cause: A generic protocol adapter error occurred.
Action: Check addresses used for proper protocol specification. Before
        reporting this error, look at the error stack and check for lower
        level transport errors. For further details, turn on tracing and
        re-execute the operation. Turn off tracing when the operation
        is complete.

This is a high-level error that just reports an error in the actual
transport layer. Look at the next error down the stack and process that.

1. ORA-12500 ORA-12560 MAKING MULTIPLE CONNECTIONS TO DATABASE

Problem:
Trying to connect to the database via listener and the ORA-12500 are
prompted. You may see in the listener.log ORA-12500 and ORA-12560:

ORA-12500: TNS:listener failed to start a dedicated server process


Cause: The process of starting up a dedicated server process
failed. The executable could not be found or the
environment maybe set up incorrectly.
Action: Turn on tracing at the ADMIN level and re execute the
operation. Verify that the ORACLE Server executable is
present and has execute permissions enabled. Ensure that
the ORACLE environment is specified correctly in
LISTENER.ORA. If error persists, contact Worldwide
Customer Support.

In many cases ORA-12500 is caused by a leak of resources on the Unix box: if
you are normally able to connect to the database but get the error randomly,
the operating system has reached the maximum value for some resource.
If, on the other hand, you get the error on the first connection, the problem
is probably in the configuration of the system.
Solution:
Finding the resource that is being exhausted can be difficult; Note 2064862.102
gives some suggestions for solving the problem.
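As a first, rough check on the Unix side you can compare current resource usage
against the configured limits (generic commands, not Oracle-specific):

$ ulimit -a                    # per-process limits for the oracle user
$ ps -ef | grep -c oracle      # rough count of oracle processes currently running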

2. ORA-12538/ORA-12560 connecting to the database via SQL*Net

Problem:
Trying to connect to the database via SQL*Net, the error ORA-12538
is raised. In the trace file you can see:

nscall: error exit
nioqper: error from nscall
nioqper: nr err code: 0
nioqper: ns main err code: 12538
nioqper: ns (2) err code: 12560
nioqper: nt main err code: 508
nioqper: nt (2) err code: 0
nioqper: nt OS err code: 0

Solution:
- Check the protocol used in the TNSNAMES.ORA by the connection string
- Ensure that the TNSNAMES.ORA you check is the one that is actually being
used by Oracle. Define the TNS_ADMIN environment variable to point to the
TNSNAMES directory.
- Using the $ORACLE_HOME/bin/adapters command, ensure the protocol is
installed. Run the command without parameters to check if the protocol is
installed, then run the command with parameters to see whether a
particular tool/application contains the protocol symbols e.g.:

1. $ORACLE_HOME/bin/adapters
2. $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/oracle
3. $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/sqlplus

Explanation:
If the protocol is not installed every connection attempting to use it will
fail with ORA-12538 because the executable doesn't contain the required
protocol symbol/s.

Error ORA-12538 may also be caused by an issue with the
'$ORACLE_HOME/bin/relink all' command: 'relink all' does not relink the sqlplus
executable. If you receive error ORA-12538 when making a sqlplus connection, it
may be for this reason.

To relink sqlplus manually:


$ su - oracle
$ cd $ORACLE_HOME/sqlplus/lib
$ make -f ins_sqlplus.mk install
$ ls -l $ORACLE_HOME/bin/sqlplus --> should show a current date/time stamp

3. ORA-12546 ORA-12560 connecting locally to the database

Problem:
Trying to connect to the database locally with a different account than the
software owner, the error ORA-12546 is raised. In the trace file
you can see:
nioqper: error from nscall
nioqper: nr err code: 0
nioqper: ns main err code: 12546
nioqper: ns (2) err code: 12560
nioqper: nt main err code: 516
nioqper: nt (2) err code: 13
nioqper: nt OS err code: 0

Solution:
Make sure the permissions of oracle executable are correct, this should be:

52224 -rwsr-sr-x 1 oracle dba 53431665 Aug 10 11:07 oracle

Explanation:
The problem occurs due to an incorrect setting on the oracle executable.

4. ORA-12541 ORA-12560 TRYING TO CONNECT TO A DATABASE

Problem:
You are trying to connect to a database using SQL*Net and receive the errors
ORA-12541 and ORA-12560 after changing the TCP/IP port in the listener.ora,
while using the parameter USE_CKPFILE_LISTENER in listener.ora.

The following error stack appears in the SQLNET.LOG:

nr err code: 12203

TNS-12203: TNS:unable to connect to destination
ns main err code: 12541
TNS-12541: TNS:no listener
ns secondary err code: 12560
nt main err code: 511
TNS-00511: No listener
nt secondary err code: 239
nt OS err code: 0

Solution:
Check Note 1061927.6 to resolve the problem.

Explanation:
If TCP protocol is listed in the Listener.ora's ADDRESS_LIST section and
the parameter USE_CKPFILE_LISTENER = TRUE, the Listener ignores the TCP
port number defined in the ADDRESS section and listens on a random port.

RELATED DOCUMENTS
-----------------
Note:39774.1  LOG & TRACE Facilities on NET
Note:45878.1  SQL*Net Common Errors & Diagnostic Worksheet
Net8i Admin/Ch.11 Troubleshooting Net8 / Resolving the Most Common
Error Messages
19.37 ORA-12637
---------------

Packet receive failed.

A process was unable to receive a packet from another process. Possible causes
are:
1. The other process was terminated.
2. The machine on which the other process is running went down.
3. Some other communications error occurred.

Note 1:

Just edit the file sqlnet.ora and search for the string
SQLNET.AUTHENTICATION_SERVICES.
When it exists it is set to = (NTS); change this to = (NONE). When it doesn't
exist, add the string
SQLNET.AUTHENTICATION_SERVICES = (NONE)
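A minimal sketch of the resulting line in sqlnet.ora:

# disable external (e.g. NTS) authentication for client connections
SQLNET.AUTHENTICATION_SERVICES = (NONE)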

Note 2:

What does SQLNET.AUTHENTICATION_SERVICES do?

SQLNET.AUTHENTICATION_SERVICES
Purpose
Use the parameter SQLNET.AUTHENTICATION_SERVICES to enable one or more
authentication services.
If authentication has been installed, it is recommended that this parameter be set
to either none or to one
of the authentication methods.

Default
None

Values
Authentication Methods Available with Oracle Net Services:
none for no authentication methods. A valid username and password can be used to
access the database.
all for all authentication methods
nts for Windows NT native authentication
Authentication Methods Available with Oracle Advanced Security:
kerberos5 for Kerberos authentication
cybersafe for Cybersafe authentication
radius for RADIUS authentication
dcegssapi for DCE GSSAPI authentication

See Also:
Oracle Advanced Security Administrator's Guide

Example
SQLNET.AUTHENTICATION_SERVICES=(kerberos5, cybersafe)

Note 3:

ORA-12637 for members of one NT group, using OPS$ login

Being "identified externally", users can work fine until the user is added to a
"wwwauthor" NT group to allow them to publish documents on Microsoft IIS
(intranet) -- then they get ORA-12637 starting the Oracle c/s application
(document management system).
The environment is: Oracle 9.2.0.1.0 on Windows 2000 Advanced Server w. SP4,
Windows 2003 domain controllers
in W2K compatible mode, client workstations with W2K and Win XP.
Any hint will be appreciated.

Problem solved. The specific NT group (wwwauthor) which caused problems had
existed already with specific permissions, then it was dropped and created again
with exactly the same name (but, of course, with a different internal ID).
This situation has been identified as causing some kind of mess.
A completely new group with a different name has been created.

Note 4:

ORA-12637 packet receive failure

I added a second instance to the Oracle server. Since then, on the server and all
clients,
I get ORA-12637 packet receive failure when I try to connect to this database. Why
is this?

Hello

Try commenting out the SQLNET.CRYPTO_SEED and SQLNET.AUTHENTICATION_SERVICES in
the server's SQLNET.ORA and in the client sqlnet.ora file, if they exist.

Please also verify that the server's LISTENER.ORA file contains the following
parameter:
CONNECT_TIMEOUT_LISTENER=0

Note 5:

Workaround is to turn off prespawned server processes in "listener.ora".

In the "listener.ora", comment out or delete the prespawn parameters, i.e.:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = prd)
      (ORACLE_HOME = /raid/app/oracle/product/7.3.4)
#     (PRESPAWN_MAX = 99)
#     (PRESPAWN_LIST =
#       (PRESPAWN_DESC = (PROTOCOL = TCP) (POOL_SIZE = 1) (TIMEOUT = 30))
#     )
    )
  )
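After editing listener.ora the listener must pick up the change; a minimal
sketch, assuming the default listener name LISTENER:

$ lsnrctl stop LISTENER
$ lsnrctl start LISTENER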

Note 6:

Problem Description
-------------------
Connections to Oracle 9.2 using a Cybersafe authenticated user fails on Solaris
2.6 with ORA-12637 and a core dump is generated.
Solution Description
--------------------
1) Shutdown Oracle, the listener and any clients.
2) In $ORACLE_HOME/lib take a backup copy of the file sysliblist
3) Edit sysliblist. Move the -lthread entry to the beginning.
   So change from:  -lnsl -lsocket -lgen -ldl -lsched -lthread
   to:              -lthread -lnsl -lsocket -lgen -ldl -lsched
4) Do $ORACLE_HOME/bin/relink all

Note 7:

fact: Oracle Server - Personal Edition 8.1
fact: MS Windows
symptom: Starting Server Manager (Svrmgrl) Fails
symptom: ORA-12637: Packet Receive Failed
cause: Oracle's installer will set the authentication to (NTS) by default.
       However, if the Windows machine is not in a Domain where there
       is a Windows Domain Controller, it will not be able to contact the
       KDC (Key Distribution Centre) needed for authentication.

fix:

Comment out SQLNET.AUTHENTICATION_SERVICES=(NTS) in sqlnet.ora

19.38 ORA-02058:
================

dba_2pc_pending:
Lists all in-doubt distributed transactions. The view is empty until populated by
an in-doubt transaction.
After the transaction is resolved, the view is purged.

SQL> SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT#
  2  FROM DBA_2PC_PENDING
  3  /

LOCAL_TRAN_ID GLOBAL_TRAN_ID

---------------------- ----------------------------------------------------------
6.31.5950 1145324612.10D447310B5FCE408A296417959EBEEC00000000

SQL> select STATE, MIXED, HOST, COMMIT#
  2  FROM DBA_2PC_PENDING
  3  /

STATE            MIX HOST
---------------- --- ------------------------------------------------------------
forced rollback  no  REBV\PGSS-TST-TCM

SQL> select * from dba_2pc_neighbors;

LOCAL_TRAN_ID          IN_ DATABASE
---------------------- --- --------------------------------------------------
6.31.5950              in  O

SQL> select state, tran_comment, advice from dba_2pc_pending;

STATE TRAN_COMMENT
---------------- ------------------------------------------------------------
prepared

SQL> rollback force '6.31.5950';

Rollback complete.

SQL> commit;

Doc ID: Note:290405.1
Subject: ORA-30019 When Executing Dbms_transaction.Purge_lost_db_entry
Type: PROBLEM
Status: MODERATED
Content Type: TEXT/X-HTML
Creation Date: 11-NOV-2004
Last Revision Date: 16-NOV-2004

The information in this document applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.5
This problem can occur on any platform.

Errors
ORA-30019 Illegal rollback Segment operation in Automatic Undo mode

Symptoms
Attempting to clean up the pending transaction using
DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY, getting ora-30019:

ORA-30019: Illegal rollback Segment operation in Automatic Undo mode


Changes
AUTO UNDO MANAGEMENT is running
Cause
DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY is not supported in AUTO UNDO MANAGEMENT
This is due to fact that "set transaction use rollback segment.." cannot be done
in AUM.

Fix
1.) alter session set "_smu_debug_mode" = 4;
2.) execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('local_tran_id');
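A minimal sketch of the complete fix, reusing the local transaction id
6.31.5950 from the example above (your id will differ):

SQL> alter session set "_smu_debug_mode" = 4;
SQL> execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.31.5950');
SQL> commit;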

19.39. ORA-600 [12850]:
=======================

Doc ID: Note:1064436.6
Subject: ORA-00600 [12850], AND ORA-00600 [15265]: WHEN SELECT OR DESCRIBE ON TABLE
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 14-JAN-1999
Last Revision Date: 29-FEB-2000

Problem Description:
---------------------
You are doing a describe or select on a table and receive:

ORA-600 [12850]:
Meaning: 12850 occurs when it can't find the user who owns the object
from the dictionary.

If you try to delete the table, you receive:

ORA-600 [15625]:
Meaning: The argument 15625 occurs because some index entry for the
table is not found in obj$.

Problem Explanation:
--------------------
The data dictionary is corrupt.

You cannot drop the tables in question because the data dictionary doesn't know
they exist.

Search Words:
-------------
ORA-600 [12850]
ORA-600 [15625]
describe
delete
table

Solution Description:
---------------------
You need to rebuild the database.

Solution Explanation:
---------------------

Since the table(s) cannot be accessed or dropped because of the data dictionary
corruption, rebuilding the database is the only option.

19.40 ORA-01092:
================

-------------------------------------------------------------------------------

Doc ID: Note:222132.1
Subject: ORA-01599 and ORA-01092 while starting database
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 03-DEC-2002
Last Revision Date: 07-AUG-2003
PURPOSE
-------
The purpose of this Note is to fix errors ORA-01599 & ORA-01092 when
received at startup.

SCOPE & APPLICATION
-------------------

All DBAs, Support Analysts.

Symptom(s)
~~~~~~~~~~

Starting the database gives errors similar to:

ORA-01599: failed to acquire rollback segment (20), cache space is
           full (currently has (19) entries)
ORA-01092: ORACLE instance terminated

Change(s)
~~~~~~~~~~

Increased shared_pool_size parameter.
Increased processes and/or sessions parameters.

Cause
~~~~~~~

Low value for max_rollback_segments.
The above changes changed the value for max_rollback_segments internally.

Fix
~~~~

The value for max_rollback_segments is to be calculated as follows:

max_rollback_segments = transactions / transactions_per_rollback_segment,
or 30, whichever is greater.

transactions = sessions * 1.1

sessions = (processes * 1.1) + 5

The default value for transactions_per_rollback_segment = 5.

1. Use these calculations and find out the value for max_rollback_segments.
2. Set it to this value or 30 whichever is greater.
3. Startup database after this correct setting.
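A worked example of the calculation, assuming processes = 100 (illustrative):

sessions              = (100 * 1.1) + 5 = 115
transactions          = 115 * 1.1       = 126.5
max_rollback_segments = 126.5 / 5       = 25.3   -> less than 30, so set it to 30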

Reference info
~~~~~~~~~~~~~~
[BUG:2233336] <ml2_documents.showDocument?p_id=2233336&p_database_id=BUG> -
RDBMS ERRORS AT STARTUP CAN CAUSE ODMA TO OMIT CLEANUP ACTIONS
[NOTE:30764.1] <ml2_documents.showDocument?p_id=30764.1&p_database_id=NOT> -
Init.ora Parameter "MAX_ROLLBACK_SEGMENTS" Reference Note

----------------------------------------------------------------------------------
----------

Doc ID </help/usaeng/Search/search.html>: Note:1038418.6 Content Type:


TEXT/PLAIN
Subject: ORA-01092 STARTING UP ORACLE RDBMS DATABASE Creation Date: 17-
NOV-1997
Type: PROBLEM Last Revision Date: 06-JUL-1999
Status: PUBLISHED

Problem Summary:
================

ORA-01092 starting up Oracle RDBMS database.

Problem Description:
====================

When you startup your Oracle RDBMS database, you receive the following error:

ORA-01092: ORACLE instance terminated. Disconnection forced.

Problem Explanation:
====================

Oracle cannot write to the alert_<SID>.log file because the
ownership and/or permissions on the BACKGROUND_DUMP_DEST directory
are incorrect.

Solution Summary:
=================

Modify the ownership and permissions of directory BACKGROUND_DUMP_DEST.

Solution Description:
=====================

To allow oracle to write to the BACKGROUND_DUMP_DEST directory (which contains
alert_<SID>.log), modify the ownership of directory BACKGROUND_DUMP_DEST
so that the oracle user (software owner) is the owner, and make the
permissions on directory BACKGROUND_DUMP_DEST 755.

Follow these steps:

1. Determine the location of the BACKGROUND_DUMP_DEST parameter
   defined in the init<SID>.ora or config<SID>.ora files.

2. Login as root.

3. Change directory to the location of BACKGROUND_DUMP_DEST.

4. Change the owner of all the files and the directory to the
software owner.
For example:

% chown oracle *

5. Change the permissions on the directory to 755.

% chmod 755 .

Solution Explanation:
=====================

Changing the ownership and permissions of the BACKGROUND_DUMP_DEST
directory enables oracle to write to the alert_<SID>.log file.

---------------------------------------------------------------------------

Doc ID: Note:273413.1
Subject: Database Does not Start, Ora-00604 Ora-25153 Ora-00604 Ora-1092
Type: PROBLEM
Status: MODERATED
Content Type: TEXT/X-HTML
Creation Date: 19-MAY-2004
Last Revision Date: 04-OCT-2004
The information in this article applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 10.1.0.4
This problem can occur on any platform.
Errors
ORA-1092 Oracle instance terminated.
ORA-25153 Temporary Tablespace is Empty
ORA-604 error occurred at recursive SQL level <num>
Symptoms
The database is not opening and in the alert.log the following errors are
reported:

ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
Error 604 happened during db open, shutting down database
USER: terminating instance due to error 604
Instance terminated by USER, pid = xxxxx
ORA-1092 signalled during: alter database open...

You might find SQL in the trace file like:

select distinct d.p_obj#, d.p_timestamp
from sys.dependency$ d, obj$ o
where d.p_obj# >= :1 and d.d_obj# = o.obj#
and o.status != 5

Cause
In the case where there is a locally managed temp tablespace in the database,
after the controlfile is re-created using the statement generated by "alter
database backup controlfile to trace", the database can't be opened again
because it complains that the temp tablespace is empty. However, no tempfiles
can be added to the temp tablespace, nor can the temp tablespace be dropped,
because the database is not yet open.
The query failed because of inadequate sort space (memory + disk).

Fix
We can increase the sort_area_size and sort_area_retained_size to a very high
value so that the query completes.
Then the DB will open and we can take care of the TEMP tablespace.

If the error still persists after increasing sort_area_size and
sort_area_retained_size to a high value,
then the only remaining option is to restore and recover.
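Once the database is open, a minimal sketch of taking care of TEMP (tablespace
name, path and size are illustrative assumptions):

SQL> ALTER TABLESPACE temp
     ADD TEMPFILE '/u01/oradata/TEST/temp01.dbf' SIZE 500M;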

-------------------------------------------------------------------------------

Displayed below are the messages of the selected thread.

Thread Status: Active

From: Ronald Shaffer 17-Mar-05 19:23
Subject: Deleted OUTLN and now I get ORA-1092 and ORA-18008

RDBMS Version: 10G
Operating System and Version: RedHat ES 3
Error Number (if applicable): ORA-1092 and ORA-18008
Product (i.e. SQL*Loader, Import, etc.):
Product Version:
Product Version:

Deleted OUTLN and now I get ORA-1092 and ORA-18008

One of our DBAs dropped the OUTLN user in 10G and now the instance will not start.

We get an ORA-18008 specifying the schema is missing, and an ORA-1092 when it
attempts to OPEN.
Startup mount is as far as we can get. Any experience with this issue out there?

Thanks...

From: Fairlie Rego 23-Mar-05 01:26
Subject: Re : Deleted OUTLN and now I get ORA-1092 and ORA-18008

Hi Ronald,

You are hitting bug 3786479:
AFTER DROPPING THE OUTLN USER/SCHEMA, DB WILL NO LONGER OPEN. ORA-18008

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id
=BUG&p_id=3786479

If this is still an issue file a Tar and get a backport.

Regards,
Fairlie Rego

----------------------------------------------------------------------------------
Displayed below are the messages of the selected thread.

Thread Status: Closed

From: Henry Lau 06-Mar-03 10:38
Subject: ORA-01092 while alter database open

RDBMS Version: 9.0.1.3
Operating System and Version: Linux Redhat 7.1
Error Number (if applicable): ORA-01092
Product (i.e. SQL*Loader, Import, etc.): ORACLE DATABASE
Product Version: 9.0.1.3

ORA-01092 while alter database open

Hi,

Since our undotbs is very large, we are following Doc ID: 157278.1 and trying
to change the undotbs to a new one.

We try to:
1. Create UNDO tablespace undotb2 datafile $ORACLE_HOME/oradata/undotb2.dbf size 300M
2. ALTER SYSTEM SET undo_tablespace=undotb2;
3. Change undo = undotb2;
4. Restart the database;
5. alter tablespace undotbs offline;
6. When we restart the database, it shows the following error.

SQL> startup mount pfile=$ORACLE_HOME/admin/TEST/pfile/init.ora
ORACLE instance started.

Total System Global Area  386688540 bytes
Fixed Size                   280092 bytes
Variable Size             318767104 bytes
Database Buffers           67108864 bytes
Redo Buffers                 532480 bytes
Database mounted.
SQL> alter database nomount;
alter database nomount
*
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced

I have checked the Log file as follow:

/u01/oracle/product/9.0.1/admin/TEST/udump/ora_29151.trc
Oracle9i Release 9.0.1.3.0 - Production
JServer Release 9.0.1.3.0 - Production
ORACLE_HOME = /u01/oracle/product/9.0.1
System name: Linux
Node name: utxrho01.unitex.com.hk
Release: 2.4.2-2smp
Version: #1 SMP Sun Apr 8 20:21:34 EDT 2001
Machine: i686
Instance name: TEST
Redo thread mounted by this instance: 1
Oracle process number: 9
Unix process pid: 29151, image: oracle@utxrho01.unitex.com.hk (TNS V1-V3)

*** SESSION ID:(8.3) 2003-03-06 17:25:38.615
Evaluating checkpoint for thread 1 sequence 8 block 2
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '/u01/oracle/product/9.0.1/oradata/TEST/undotbs01.dbf'
Please help to check what the problem is ??
Thank you !!

Regards,
Henry

From: Oracle, Pravin Sheth 07-Mar-03 09:31
Subject: Re : ORA-01092 while alter database open

Hi Henry,
What you are seeing is bug 2360088, which is fixed in Oracle 9.2.0.2.
I suggest that you log an iSR (formerly iTAR) for a quicker solution for the
problem.
Regards
Pravin

----------------------------------------------------------------------------------
-

19.41 ORA-600 [qerfxFetch_01]
=============================

Note 1:
-------

Doc ID: Note:255881.1
Subject: ORA-600 [qerfxFetch_01]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 10-NOV-2003
Last Revision Date: 12-NOV-2004
<Internal_Only>

This note contains information that has not yet been reviewed by the
PAA Internals group or DDR.

As such, the contents are not necessarily accurate and care should be
taken when dealing with customers who have encountered this error.

If you are going to use the information held in this note then please
take whatever steps are needed to in order to confirm that the
information is accurate. Until the article has been set to EXTERNAL, we
do not guarantee the contents.

Thanks. PAA Internals Group

(Note - this section will be deleted as the note moves to publication)

</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article represents a partially published OERI note.

It has been published because the ORA-600 error has been
reported in at least one confirmed bug.

Therefore, the SUGGESTIONS section of this article may help
in terms of identifying the cause of the error.

This specific ORA-600 error may be considered for full publication
at a later date. If/when fully published, additional information
will be available here on the nature of this error.

<Internal_Only>
PURPOSE:
This article discusses the internal error "ORA-600 [qerfxFetch_01]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [qerfxFetch_01]

VERSIONS:
versions 9.2

DESCRIPTION:

During database operations, user interrupts need to be handled correctly.

ORA-600 [qerfxFetch_01] is raised when an interrupt has been trapped
but has not been handled correctly.

FUNCTIONALITY:
Fixed table row source.
IMPACT:
NON CORRUPTIVE - No underlying data corruption.

</Internal_Only>
SUGGESTIONS:

If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle
Support Services for further analysis.

Known Issues:

Bug# 2306106  See Note 2306106.8
  OERI:[qerfxFetch_01] possible - affects OEM
  Fixed: 9.2.0.2, 10.1.0.2


Note 2:
-------

Doc ID: Note:2306106.8
Subject: Support Description of Bug 2306106
Type: PATCH
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 13-AUG-2003
Last Revision Date: 14-AUG-2003

See Note 245840.1 for details of sections in this note.
Bug 2306106 OERI:[qerfxFetch_01] possible - affects OEM
This note gives a brief overview of bug 2306106.
Affects:
  Product (Component):         Oracle Server (RDBMS)
  Versions believed affected:  >= 9.2 but < 10G
  Versions confirmed affected: 9.2.0.1
  Platforms affected:          Generic (all / most platforms affected)

Fixed:
  This issue is fixed in 9.2.0.2 (Server Patch Set) and the 10G Production Base Release.

Symptoms:
  Error may occur
  Internal Error may occur (ORA-600)
  ORA-600 [qerfxFetch_01]

Related To:
  (None Specified)

Description:
  ORA-600 [qerfxFetch_01] possible - affects OEM

Note 3:
-------

Bug 2306106 is fixed in the 9.2.0.2 patchset. This bug is not published and thus
cannot be viewed externally
in MetaLink. All it says on this bug is 'ORA-600 [qerfxFetch_01] possible -
affects OEM'.

19.42 Undo corruption:
======================

Note 1:
-------

Doc ID: Note:2431450.8
Subject: Support Description of Bug 2431450
Type: PATCH
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 08-AUG-2003
Last Revision Date: 05-JAN-2004
See Note 245840.1 for details of sections in this note.

Bug 2431450 SMU Undo corruption possible on instance crash

This note gives a brief overview of bug 2431450.

Affects:
  Product (Component):         (Rdbms)
  Versions believed affected:  >= 9 but < 10G
  Versions confirmed affected: 9.0.1.4, 9.2.0.3
  Platforms affected:          Generic (all / most platforms affected)

Fixed:
  This issue is fixed in 9.0.1.5 (iAS Patch Set), 9.2.0.4 (Server Patch Set)
  and the 10g Production Base Release.

Symptoms:
  Corruption (Physical)
  Internal Error may occur (ORA-600)
  ORA-600 [kteuPropTime-2] / ORA-600 [4191]

Related To:
  System Managed Undo

Description:
  SMU (System Managed Undo) Undo corruption possible on instance crash.
  This can result in subsequent ORA-600 errors due to the undo corruption.

Note 2:
-------

Doc ID: Note:233864.1
Subject: ORA-600 [kteuproptime-2]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 28-MAR-2003
Last Revision Date: 07-APR-2005

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [kteuproptime-2]",
what it means and possible actions. The information here is only
applicable to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [kteuproptime-2]

VERSIONS:
versions 9.0 to 9.2

DESCRIPTION:

Oracle has encountered an error propagating Extent Commit Times in
the Undo Segment Header / Extent Map Blocks, for System Managed Undo
Segments.

The extent being referenced is not valid.

FUNCTIONALITY:
UNDO EXTENTS

IMPACT:
INSTANCE FAILURE
POSSIBLE PHYSICAL CORRUPTION

SUGGESTIONS:

If the instance is down and fails to restart due to this error, then set the
following parameter, which will gather additional information to
assist support in identifying the cause:

# Dump Undo Segment Headers during transaction recovery
event="10015 trace name context forever, level 10"

Restart the instance and submit the trace files and alert.log to
Oracle Support Services for further analysis.

Do not set any other undo/rollback_segment parameters without direction
from Support.

Known Issues:

Bug# 2431450  See Note 2431450.8
  SMU Undo corruption possible on instance crash
  Fixed: 9.2.0.4, 10.1.0.2

Note 3:
-------
Hi,

apply patchset 9.2.0.2; bug 2431450, which made SMU (System Managed Undo)
undo corruption possible on instance crash, is fixed there.

It's a very rare scenario:

This will only cause a problem if there was an instance crash after a
transaction committed but before it propagated the extent commit times to all
its extents, AND there was a shrink of extents before the transaction could
be recovered.

But still, this bug was not published (not for any particular reason
except it was found internal).

Greetings,

Note 4:
-------

From: Oracle, Ken Robinson 21-Feb-03 17:44
Subject: Re : ORA-600 kteuPropTime-2

Forgot to mention the second bug for this....bug 2689239.

Regards,
Ken Robinson
Oracle Server EE Analyst

ORA-600 [4191] possible on shrink of system managed undo segment.

Note 5:
-------

BUGBUSTER - System-managed undo segment corruption

Affects Versions: 9.2.0.1.0, 9.2.0.2.0, 9.2.0.3.0
Fixed in: Patch 2431450, 9.2.0.4.0
BUG# (if recognised): 2431450
This info. correct on: 31-AUG-2003

Symptoms

Oracle instance crashes and details of the ORA-00600 error are written to the
alert.log
ORA-00600: internal error code, arguments: [kteuPropTime-2], [], [], []

Followed by
Fatal internal error happened while SMON was doing active transaction recovery.

Then
SMON: terminating instance due to error 600
Instance terminated by SMON, pid = 22972
This occurs as Oracle encounters an error when propagating Extent Commit Times in
the Undo Segment Header Extent Map Blocks.
It could be because SMON is over-enthusiastic in shrinking extents in SMU
segments. As a result, extent commit times
do not get written to all the extents and SMON causes the instance to crash,
leaving one or more of the undo segments
corrupt.

When opening the database following the crash, Oracle tries to perform crash
recovery and encounters problems
recovering committed transactions stored in the corrupt undo segments. This leads
to more ORA-00600 errors
and a further instance crash. The net result is that the database cannot be
opened:

"Error 600 happened during db open, shutting down database"

Workaround

Until the corrupt undo segment can be identified and offlined then unfortunately
the database will not open.
Identify the corrupt undo segment by setting the following parameters in the
init.ora file:

_smu_debug_mode=1
event="10015 trace name context forever, level 10"
event="10511 trace name context forever, level 2"

_smu_debug_mode simply collects diagnostic information for support purposes.
Event 10015 is the undo segment recovery tracing event. Use this to identify
corrupted rollback/undo segments when a database cannot be started.

With these parameters set, an attempt to open the database will still cause a
crash, but Oracle will write
vital information about the corrupt rollback/undo segments to a trace file in
user_dump_dest.
This is an extract from such a trace file, revealing that undo segment number 6
(_SYSSMU6$) is corrupt.
Notice that the information stored in the segment header about the number of
extents was inconsistent
with the extent map.

Recovering rollback segment _SYSSMU6$
UNDO SEG (BEFORE RECOVERY): usn = 6 Extent Control Header
-----------------------------------------------------------------
Extent Header:: spare1: 0 spare2: 0 #extents: 7 #blocks: 1934
last map 0x00805f89 #maps: 1 offset: 4080
Highwater:: 0x0080005b ext#: 0 blk#: 1 ext size: 7
#blocks in seg. hdr's freelists: 0
#blocks below: 0
mapblk 0x00000000 offset: 0
Unlocked
Map Header:: next 0x00805f89 #extents: 5 obj#: 0 flag: 0x40000000
Extent Map
-----------------------------------------------------------------
0x0080005a length: 7
0x00800061 length: 8
0x0081ac89 length: 1024
0x00805589 length: 256
0x00805a89 length: 256

Retention Table
-----------------------------------------------------------
Extent Number:0 Commit Time: 1060617115
Extent Number:1 Commit Time: 1060611728
Extent Number:2 Commit Time: 1060611728
Extent Number:3 Commit Time: 1060611728
Extent Number:4 Commit Time: 1060611728

Comment out the parameters undo_management and undo_tablespace and set the
undocumented _corrupted_rollback_segments parameter to tell Oracle to ignore
any corruptions and force the database open:

_corrupted_rollback_segments=(_SYSSMU6$)

This time, Oracle will start and open OK, which will allow you to check the status
of the undo segments
by querying DBA_ROLLBACK_SEGS.

select segment_id, segment_name, tablespace_name, status
from dba_rollback_segs
where owner='PUBLIC';

SEGMENT_ID SEGMENT_NAME TABLESPACE_NAME STATUS
---------- ------------ --------------- ----------------
1 _SYSSMU1$ UNDOTS OFFLINE
2 _SYSSMU2$ UNDOTS OFFLINE
3 _SYSSMU3$ UNDOTS OFFLINE
4 _SYSSMU4$ UNDOTS OFFLINE
5 _SYSSMU5$ UNDOTS OFFLINE
6 _SYSSMU6$ UNDOTS NEEDS RECOVERY
7 _SYSSMU7$ UNDOTS OFFLINE
8 _SYSSMU8$ UNDOTS OFFLINE
9 _SYSSMU9$ UNDOTS OFFLINE
10 _SYSSMU10$ UNDOTS OFFLINE

SMON will complain every 5 minutes by writing entries to the alert.log as long
as there are undo segments in need of recovery:

SMON: about to recover undo segment 6
SMON: mark undo segment 6 as needs recovery

At this point, you must either download and apply patch 2431450 or create private
rollback segments.

Note 6:
-------

Repair UNDO log corruption - Don Burleson

In rare cases (usually DBA error) the Oracle UNDO tablespace can become corrupted.

This manifests with this error: ORA-00376: file xx cannot be read at this time

In cases of UNDO log corruption, you must:

- Change the undo_management parameter from "AUTO" to "MANUAL"
- Create a new UNDO tablespace
- Drop the old UNDO tablespace

Dropping the corrupt UNDO tablespace can be tricky and you may get the message:

ORA-00376: file string cannot be read at this time

To drop a corrupt UNDO tablespace:

1 � Identify the bad segment:

select
   segment_name,
   status
from
   dba_rollback_segs
where
   tablespace_name = 'UNDOTBS_CORRUPT'
and
   status = 'NEEDS RECOVERY';

SEGMENT_NAME STATUS
------------------------------ ----------------
_SYSSMU22$ NEEDS RECOVERY

2. Bounce the instance with the hidden parameter "_offline_rollback_segments",
   specifying the bad segment name:

   _OFFLINE_ROLLBACK_SEGMENTS=_SYSSMU22$

3. Bounce the database, then drop the corrupt segment and tablespace:

SQL> drop rollback segment "_SYSSMU22$";
Rollback segment dropped.

SQL> drop tablespace undotbs including contents and datafiles;
Tablespace dropped.
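To finish, a minimal sketch of recreating the undo configuration (tablespace
name, path and size are illustrative assumptions):

SQL> CREATE UNDO TABLESPACE undotbs2
     DATAFILE '/u01/oradata/db/undotbs2_01.dbf' SIZE 500M;

-- then, in the init.ora, switch back to automatic undo on the new tablespace:
--   undo_management=AUTO
--   undo_tablespace=UNDOTBS2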

Note 7:
-------

Sometimes there can be trouble with an undo segment; the actual problem may lie
with a normal object:

PUT the following in the init.ora:

event = "10015 trace name context forever, level 10"

Setting this event will generate a trace file that will reveal the
necessary information about the transaction Oracle is trying to
rollback and, most importantly, what object Oracle is trying to apply
the undo to.

USE the following query to find out what object Oracle is trying to
perform recovery on.

select owner, object_name, object_type, status
from dba_objects where object_id = <object #>;

THIS object must be dropped so the undo can be released. An export, or
relying on a backup, may be necessary to restore the object after the corrupted
rollback segment goes away.

19.43 ORA-1653
==============

Note 1:
-------

Doc ID: Note:151994.1
Subject: Overview Of ORA-01653: Unable To Extend Table %s.%s By %s In Tablespace %s
Type: TROUBLESHOOTING
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 12-JUL-2001
Last Revision Date: 15-JUN-2004
PURPOSE
-------
This bulletin is an overview of the ORA-1653 error message for dictionary
managed tablespaces.

SCOPE & APPLICATION
------------------
It is for users requiring further information on ORA-01653 error message.

When looking to resolve the error by using any of the solutions suggested, please
consult the DBA for assistance.

Error: ORA-01653
Text: unable to extend table %s.%s by %s in tablespace %s
-------------------------------------------------------------------------------
Cause: Failed to allocate an extent for table segment in tablespace.
Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
files to the tablespace indicated.

Explanation:
------------
This error does not necessarily indicate whether or not you have enough space
in the tablespace; it merely indicates that Oracle could not find a large
enough area of free contiguous space in which to fit the next extent.

Diagnostic Steps:
-----------------
1. In order to see the free space available for a particular tablespace, you must
   use the view DBA_FREE_SPACE. Within this view, each record represents one
   fragment of space. How the view DBA_FREE_SPACE can be used to determine
   the space available in the database is described in:
   [NOTE:121259.1] Using DBA_FREE_SPACE

2. The DBA_TABLES view describes the size of the next extent (NEXT_EXTENT) and
   the percentage increase (PCT_INCREASE) for all tables in the database.
   The "next_extent" size is the size of the extent that is trying to be
   allocated (and for which you have the error).

   When the extent is allocated:

      next_extent = next_extent * (1 + (pct_increase/100))

   The algorithm used to allocate extents for a segment is described in the
   Concepts Guide, chapter "Data Blocks, Extents, and Segments - How Extents
   Are Allocated".

3. Look to see if any users have the tablespace in question as their temporary
   tablespace.
   This can be checked by looking at DBA_USERS (TEMPORARY_TABLESPACE), as in
   the query below.
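A minimal sketch of that check (the tablespace name is a placeholder):

SELECT username, temporary_tablespace
FROM dba_users
WHERE temporary_tablespace = '<tablespace name>';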

Possible solutions:
-------------------
- Manually Coalesce Adjacent Free Extents:
     ALTER TABLESPACE <tablespace name> COALESCE;
  The extents must be adjacent to each other for this to work.

- Add a Datafile:
     ALTER TABLESPACE <tablespace name>
     ADD DATAFILE '<full path and file name>' SIZE <integer> <k|m>;

- Resize the Datafile:
     ALTER DATABASE DATAFILE '<full path and file name>' RESIZE <integer> <k|m>;

- Enable autoextend:
ALTER DATABASE DATAFILE '<full path and file name>' AUTOEXTEND ON
MAXSIZE UNLIMITED;

- Defragment the Tablespace:

- Lower "next_extent" and/or "pct_increase" size:
     ALTER <segment_type> <segment_name>
     STORAGE ( next <integer> <k|m> pctincrease <integer>);

- If the tablespace is being used as a temporary tablespace, temporary segments
  may still be holding the space.

References:
-----------
[NOTE:1025288.6] How to Diagnose and Resolve ORA-01650, ORA-01652, ORA-01653,
                 ORA-01654, ORA-01688 : Unable to Extend < OBJECT > by %S in Tablespace
[NOTE:1020090.6] Script to Report on Space in Tablespaces
[NOTE:1020182.6] Script to Detect Tablespace Fragmentation
[NOTE:1012431.6] Overview of Database Fragmentation
[NOTE:121259.1]  Using DBA_FREE_SPACE
[NOTE:61997.1]   SMON - Temporary Segment Cleanup and Free Space Coalescing

Note 2:
-------

Doc ID: Note:1025288.6
Subject: How to Diagnose and Resolve ORA-01650, ORA-01652, ORA-01653, ORA-01654,
         ORA-01688 : Unable to Extend < OBJECT > by %S in Tablespace %S
Type: TROUBLESHOOTING
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 02-JAN-1997
Last Revision Date: 10-JUN-2004
PURPOSE
-------

This document can be used to diagnose and resolve space management errors:
ORA-1650, ORA-1652, ORA-1653, ORA-1654 and ORA-1688.

SCOPE & APPLICATION
-------------------
You are working with the database and have encountered one of the
following errors:

ORA-01650: unable to extend rollback segment %s by %s in tablespace %s
Cause: Failed to allocate extent for the rollback segment in tablespace.
Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or more
        files to the specified tablespace.

ORA-01652: unable to extend temp segment by %s in tablespace %s
Cause: Failed to allocate an extent for temp segment in tablespace.
Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
        files to the tablespace indicated or create the object in another
        tablespace.

ORA-01653: unable to extend table %s.%s by %s in tablespace %s
Cause: Failed to allocate extent for table segment in tablespace.
Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or more
        files to the specified tablespace.

ORA-01654: unable to extend index %s.%s by %s in tablespace %s
Cause: Failed to allocate extent for index segment in tablespace.
Action: Use the ALTER TABLESPACE ADD DATAFILE statement to add one or more
        files to the specified tablespace.
ORA-01688: unable to extend table %s.%s partition %s by %s in tablespace %s
Cause: Failed to allocate an extent for table segment in tablespace.
Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more files
to the tablespace indicated.

How to Solve the Following Errors About UNABLE TO EXTEND
--------------------------------------------------------

An "unable to extend" error is raised when there is insufficient contiguous
space available to extend the object.

A. In order to address the UNABLE TO EXTEND issue, you need to get the following
information:

1. The largest contiguous space available for the tablespace

SELECT max(bytes)
FROM dba_free_space
WHERE tablespace_name = '<tablespace name>';

The above query returns the largest available contiguous chunk of space.

Please note that if the tablespace you are concerned with is of type TEMPORARY,
then please refer to [NOTE:188610.1].

If this query is done immediately after the failure, it will show that the
largest contiguous space in the tablespace is smaller than the next extent
the object was trying to allocate.

2. => "next_extent" for the object
   => "pct_increase" for the object
   => The name of the tablespace in which the object resides

   Use the "next_extent" size with "pct_increase" in the following formula to
   determine the size of extent that is trying to be allocated.

   extent size = next_extent * (1 + (pct_increase/100))

   next_extent  = 512000
   pct_increase = 50
   => extent size = 512000 * (1 + (50/100)) = 512000 * 1.5 = 768000

ORA-01650 Rollback Segment
==========================

SELECT next_extent, pct_increase, tablespace_name
FROM dba_rollback_segs
WHERE segment_name = '<rollback segment name>';

Note: pct_increase is only needed for early versions of Oracle; by
default in later versions pct_increase for a rollback segment is 0.

ORA-01652 Temporary Segment
===========================

SELECT next_extent, pct_increase, tablespace_name
FROM dba_tablespaces
WHERE tablespace_name = '<tablespace name>';

Temporary segments take the default storage clause of the tablespace
in which they are created.

If this error is caused by a query, then try and ensure that the query
is tuned to perform its sorts as efficiently as possible.

To find the owner of a sort, please refer to [NOTE:1069041.6].

ORA-01653 Table Segment
=======================

SELECT next_extent, pct_increase, tablespace_name
FROM dba_tables
WHERE table_name = '<table name>' AND owner = '<owner>';

ORA-01654 Index Segment
=======================

SELECT next_extent, pct_increase, tablespace_name
FROM dba_indexes
WHERE index_name = '<index name>' AND owner = '<owner>';

ORA-01688 Table Partition
=========================

SELECT next_extent, pct_increase, tablespace_name
FROM dba_tab_partitions
WHERE partition_name = '<partition name>' AND table_owner = '<owner>';

B. Possible Solutions

There are several options for solving errors due to failure to extend:

a. Manually Coalesce Adjacent Free Extents
   ---------------------------------------

   ALTER TABLESPACE <tablespace name> COALESCE;

   The extents must be adjacent to each other for this to work.

b. Add a Datafile
   --------------

   ALTER TABLESPACE <tablespace name>
   ADD DATAFILE '<full path and file name>' SIZE <integer> <k|m>;

c. Lower "next_extent" and/or "pct_increase" size
   ----------------------------------------------

   For a non-temporary and non-partitioned segment problem:

   ALTER <segment_type> <segment_name>
   STORAGE ( next <integer> <k|m> pctincrease <integer>);

   For a non-temporary and partitioned segment problem:

   ALTER TABLE <table_name> MODIFY PARTITION <partition_name>
   STORAGE ( next <integer> <k|m> pctincrease <integer>);

   For a temporary segment problem:

   ALTER TABLESPACE <tablespace name>
   DEFAULT STORAGE (initial <integer> next <integer> <k|m> pctincrease <integer>);

d. Resize the Datafile
   -------------------

   ALTER DATABASE DATAFILE '<full path and file name>'
   RESIZE <integer> <k|m>;

e. Defragment the Tablespace
   -------------------------

   If you would like more information on fragmentation, the following
   documents are available from Oracle WorldWide Support
   (this is not a comprehensive list):

   [NOTE:1020182.6] Script to Detect Tablespace Fragmentation
   [NOTE:1012431.6] Overview of Database Fragmentation
   [NOTE:30910.1]   Recreating Database Objects

Related Documents:
==================

[NOTE:15284.1]   Understanding and Resolving ORA-01547
[NOTE:151994.1]  Overview Of ORA-01653 Unable To Extend Table %s.%s By %s In Tablespace %s
[NOTE:146595.1]  Overview Of ORA-01654 Unable To Extend Index %s.%s By %s In Tablespace %s
[NOTE:188610.1]  DBA_FREE_SPACE Does not Show Information about Temporary Tablespaces
[NOTE:1069041.6] How to Find Creator of a SORT or TEMPORARY SEGMENT or Users
                 Performing Sorts for Oracle8 and 9

Search Words:
=============

ORA-1650 ORA-1652 ORA-1653 ORA-1654 ORA-1688
ORA-01650 ORA-01652 ORA-01653 ORA-01654 ORA-01688
1650 1652 1653 1654 1688

19.44: Other ORA- errors on 9i:
===============================

Doc ID: Note:201342.1
Subject: Top Internal Errors - Oracle Server Release 9.2.0
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 27-JUN-2002
Last Revision Date: 24-MAY-2004

Top Internal Errors - Oracle Server Release 9.2.0

Additional information or documentation on ORA-600 errors not listed here
may be available from the ORA-600 Lookup tool: Note:153788.1

Note:189908.1  Oracle9i Release 2 (9.2) Support Status and Alerts

ORA-600 [KSLAWE:!PWQ]
Possible bugs / fixed in:
Bug:3566420  BACKGROUND PROCESS GOT OERI:KSLAWE:!PWQ AND INSTANCE CRASHES
             Fixed: 9.2.0.6, 10G

References:
Note:271084.1  ALERT: ORA-600[KSLAWE:!PWQ] RAISED IN V92040 OR V92050 ON SUN
               64BIT ORACLE

ORA-600 [ksmals]
Possible bugs / fixed in:
Bug:2662683  ORA-7445 & HEAP CORRUPTION WHEN RUNNING APPS PROGRAM THAT DOES
             HEAVY INSERTS
             Fixed: 9.2.0.4

References:
Note:247822.1  ORA-600 [ksmals]

ORA-600 [4000]
Possible bugs / fixed in:
Bug:2959556  STARTUP after an ORA-701 fails with OERI[4000]
             Fixed: 9.2.0.5, 10G
Bug:1371820  OERI:4506 / OERI:4000 possible against transported tablespace
             Fixed: 8.1.7.4, 9.0.1.4, 9.2.0.1

References:
Note:47456.1  ORA-600 [4000] "trying to get dba of undo segment header block
              from usn"

ORA-600 [4454]
Possible bugs / fixed in:
Bug:1402161  OERI:4411/OERI:4454 on long running job
             Fixed: 8.1.7.3, 9.0.1.3, 9.2.0.1

References:
Note:138836.1  ORA-600 [4454]

ORA-600 [kcbgcur_9]
Possible bugs / fixed in:
Bug:2722809  OERI:kcbgcur_9 on direct load into AUTO space managed segment
             Fixed: 9.2.0.4, 10G
Bug:2392885  Direct path load may fail with OERI:kcbgcur_9 / OERI:ktfduedel2
             Fixed: 9.2.0.4, 10G
Bug:2202310  OERI:KCBGCUR_9 possible from SMON dropping a rollback segment in
             locally managed tablespace
             Fixed: 9.0.1.4, 9.2.0.1
Bug:2035267  OERI:KCBGCUR_9 possible during TEMP space operations
             Fixed: 9.0.1.3, 9.2.0.1
Bug:1804676  OERI:KCBGCUR_9 possible from ONLINE REBUILD INDEX with concurrent DML
             Fixed: 8.1.7.3, 9.0.1.3, 9.2.0.1
Bug:1785175  OERI:kcbgcur_9 from CLOB TO CHAR or BLOB TO RAW conversion
             Fixed: 9.2.0.2, 10G

References:
Note:114058.1  ORA-600 [kcbgcur_9] "Block class pinning violation"

ORA-600 [qerrmOFBu1], [1003]
Possible bugs / fixed in:
Bug:2308496  SQL*PLUS CRASH IN TTC LOGGING INTO ORACLE 7.3.4 DATABASE

References:
Note:209363.1  ORA-600 [qerrmOFBu1] - Error during remote row fetch operation
Note:207319.1  ALERT: Connections from Oracle 9.2 to Oracle7 are Not Supported

ORA-600 [ktsgsp5] or ORA-600 [kdddgb2]
Possible bugs / fixed in:
Bug:2384289  ORA-600 [KDDDGB2] [435816] [2753588] & PROBABLE INDEX CORRUPTION
             Fixed: 9.2.0.2

References:
Note:139037.1  ORA-600 [kdddgb2]
Note:139180.1  ORA-600 [ktsgsp5]
Note:197737.1  ALERT: Corruption / Internal Errors possible after Upgrading
               to 9.2.0.1

19.45: ADJUST SCN:
==================

Note 1 Adjust SCN:
------------------
Doc ID: Note:30681.1
Subject: EVENT: ADJUST_SCN - Quick Reference
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 20-OCT-1997
Last Revision Date: 04-AUG-2000
Language: USAENG

ADJUST_SCN Event
~~~~~~~~~~~~~~~~
*** WARNING ***
This event should only ever be used under the guidance
of an experienced Oracle analyst.
If an SCN is ahead of the current database SCN, this indicates
some form of database corruption. The database should be rebuilt
after bumping the SCN.
****************

The ADJUST_SCN event is useful in some recovery situations where the
current SCN needs to be incremented by a large value to ensure it
is ahead of the highest SCN in the database. This is typically
required if either:
a. An ORA-600 [2662] error is signalled against database blocks
or
b. ORA-1555 errors keep occuring after forcing the database open
or ORA-604 / ORA-1555 errors occur during database open.
(Note: If startup reports ORA-704 & ORA-1555 errors together
then the ADJUST_SCN event cannot be used to bump the
SCN as the error is occuring during bootstrap.
Repeated startup/shutdown attempts may help if the SCN
mismatch is small)
or
c. If a database has been forced open used _ALLOW_RESETLOGS_CORRUPTION
(See <Parameter:Allow_Resetlogs_Corruption> )

The ADJUST_SCN event acts as described below.

**NOTE: You can check that the ADJUST_SCN event has fired as it
should write a message to the alert log in the form
"Debugging event used to advance scn to %s".
If this message is NOT present in the alert log the event
has probably not fired.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the database will NOT open:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Take a backup.
You can use event 10015 to trigger an ADJUST_SCN on database open:

startup mount;

alter session set events '10015 trace name adjust_scn level 1';

(NB: You can only use IMMEDIATE here on an OPEN database. If the
database is only mounted use the 10015 trigger to adjust SCN,
otherwise you get ORA 600 [2251], [65535], [4294967295] )

alter database open;

If you get an ORA 600:2256 shutdown, use a higher level and reopen.

Do *NOT* set this event in init.ora or the instance will crash as soon
as SMON or PMON try to do any clean up. Always use it with the
"alter session" command.

~~~~~~~~~~~~~~~~~~~~~~~~~~
If the database *IS* OPEN:
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can increase the SCN thus:

alter session set events 'IMMEDIATE trace name ADJUST_SCN level 1';

LEVEL: Level 1 is usually sufficient - it raises the SCN to 1 billion
       (1024*1024*1024).
       Level 2 raises it to 2 billion, etc...

If you try to raise the SCN to a level LESS THAN or EQUAL to its
current setting you will get <OERI:2256> - See below.
Ie: The event steps the SCN to known levels. You cannot use
the same level twice.

Calculating a Level from 600 errors:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To get a LEVEL for ADJUST_SCN:

a) Determine the TARGET scn:
   ora-600 [2662]  See <OERI:2662>  Use TARGET >= block's SCN
   ora-600 [2256]  See <OERI:2256>  Use TARGET >= Current SCN

b) Multiply the TARGET wrap number by 4. This will give you the level
to use in the adjust_scn to get the correct wrap number.
c) Next, add the following value to the level to get the desired base
value as well :

   Add to Level     Base
   ~~~~~~~~~~~~     ~~~~~~~~~~~~
        0            0
        1            1073741824
        2            2147483648
        3            3221225472
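A worked example of the calculation (the target SCN is an illustrative
assumption): suppose the TARGET SCN has wrap 1 and base 900000000. Step b)
gives level = 1 * 4 = 4; since the base must be at least 900000000, step c)
adds 1 (base 1073741824). The level to use is therefore 4 + 1 = 5, which sets
the SCN to wrap 1, base 1073741824 - safely ahead of the target.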

Note 2: Adjust SCN
------------------

Subject: OERR: 600 2662 Block SCN is ahead of Current SCN
Creation Date: 21-OCT-1997

ORA-600 [2662] [a] [b] [c] [d] [e]
Versions: 7.0.16 - 8.0.5     Source: kcrf.h
===========================================================================
Meaning:
There are two forms of this error.

4/5 argument form -
The SCN found on a block (dependent SCN) was ahead of the
current SCN. See below for this.

1 argument form (before 7.2.3):
Oracle is in the process of writing a block to a log file.
If the calculated block checksum is less than or equal to 1
(0 and 1 are reserved) ORA-600 [2662] is returned.
This is a problem generating an offline immediate log marker
(kcrfwg).
*NOT DOCUMENTED HERE*

---------------------------------------------------------------------------
Argument Description:

Until version 7.2.3 this internal error can be logged for two separate
reasons, which we will refer to as type I and type II. The two types can
be distinguished by the number of arguments:
Type I has four or five arguments after the [2662].
Type II has one argument after the [2662].
From 7.2.3 onwards type II no longer exists.

Type I
~~~~~~
a. Current SCN WRAP
b. Current SCN BASE
c. dependent SCN WRAP
d. dependent SCN BASE
e. Where present this is the DBA where the dependent SCN came from.
From kcrf.h:
If the SCN comes from the recent or current SCN then a dba
of zero is saved. If it comes from undo$ because the undo segment is
not available then the undo segment number is saved, which looks like
a block from file 0. If the SCN is for a media recovery redo (i.e.
block number == 0 in change vector), then the dba is for block 0
of the relevant datafile. If it is from another database for a
distributed transaction then dba is DBAINF(). If it comes from a TX lock
then the dba is really usn<<16+slot.
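
For the TX-lock case the pseudo-dba can be unpacked again. A small sketch
(the value 262155 is made up for illustration):

-- usn = dba >> 16, slot = low 16 bits:
select trunc(262155/65536) usn, mod(262155,65536) slot from dual;
-- gives usn 4, slot 11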

Type II
~~~~~~~
a. checksum -> log block checksum - zero if none (thread # in old format)

---------------------------------------------------------------------------

Diagnosis:
~~~~~~~~~~
In addition to the different basic types from above, there are different
situations and contexts where ORA-600 [2662] type 'I' can be raised.

For diagnosis we can split these into startup issues and non-startup issues.
Usually the startup issues are more critical.

Getting started:
~~~~~~~~~~~~~~~~
(1) is the error raised during normal database operations (i.e. when the
database is up) or during startup of the database?
(2) what is the SCN difference [d]-[b] ( subtract argument 'b' from arg 'd')?
(3) is there a fifth argument [e] ?
If so convert the dba to file# block# (a sketch follows this list)
Is it a data dictionary object? (file#=1)
If so find out object name with the help of a reference dictionary
from a second database
(4) What is the current SQL statement? (see trace)
Which table is referred to?
Does the table match the object you found in the previous step?
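
For step (3), a hedged sketch of converting a dba to file# and block# with
DBMS_UTILITY (these functions exist in 8i/9i/10g), then locating the segment;
substitute the fifth ORA-600 argument for &dba:

select dbms_utility.data_block_address_file(&dba) file#,
       dbms_utility.data_block_address_block(&dba) block#
from dual;

select segment_name, segment_type from dba_extents
where file_id = &file
and &block between block_id and block_id + blocks - 1;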

Be careful at this point:
there may be no relationship between DBA in [e] and real source of
problem (blockdump).

Deeper analysis:
~~~~~~~~~~~~~~~~
- investigate trace file
this will be a user trace file normally but could be an smon trace too

- search for: 'buffer'
("buffer dba" in Oracle7 dumps, "buffer tsn" in Oracle8 dumps)
this will bring you to a blockdump which usually represents the
'real' source of OERI:2662
WARNING: There may be more than one buffer pinned to the process
so ensure you check out all pinned buffers.

-> does the blockdump match the dba from e.?
-> what kind of blockdump is it?
(a) rollbacksegment header
(b) datablock
(c) other

SEE BELOW for EXAMPLES which demonstrate the sort of output you may
see in trace files and the things to check.

Check list and possible causes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- If Parallel Server check both nodes are using the same lock manager
instance & point at the same control files.

- If not Parallel Server check that 2 instances haven't mounted the
same database (Is there a second PMON process around ?? - shut
down any other instances to be sure)

Possible causes:
- doing an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled
- a hardware problem, like a faulty controller, resulting in a failed
write to the control file or the redo logs
- restoring parts of the database from backup and not doing the
appropriate recovery
- restoring a control file and not doing a RECOVER DATABASE USING BACKUP
CONTROLFILE
- having _DISABLE_LOGGING set during crash recovery
- problems with the DLM in a parallel server environment
- a bug

Solutions:
- if the SCNs in the error are very close:
Attempting a startup several times will bump up the dscn every time we
open the database even if open fails. The database will open when
dscn=scn.

- ** You can bump the SCN on open using <Event:ADJUST_SCN>
See [NOTE:30681.1]
Be aware that you should really rebuild the database if you use this
option.

- Once this has occurred you would normally want to rebuild the
database via exp/rebuild/imp as there is no guarantee that some
other blocks are not ahead of time.

Articles:
~~~~~~~~~
Solutions:
[NOTE:30681.1] Details of the ADJUST_SCN Event
[NOTE:1070079.6] alter system checkpoint

Possible Causes:
[NOTE:1021243.6] CHECK INIT.ORA SETTING _DISABLE_LOGGING
[NOTE:74903.1] How to Force the Database Open (_ALLOW_RESETLOGS_CORRUPTION)
[NOTE:41399.1] Forcing the database open with `_ALLOW_RESETLOGS_CORRUPTION`
[NOTE:851959.9] OERI:2662 DURING CREATE SNAPSHOT AT MASTER SITE

Known Bugs:
~~~~~~~~~~~

Fixed In.  Bug No.      Description
---------+------------+----------------------------------------------------
7.0.14 BUG:153638
7.1.5 BUG:229873
7.1.3 Bug:195115 Miscalculation of SCN on startup for distributed TX ?
7.1.6.2.7 Bug:297197 Port specific Solaris OPS problem
7.3 Bug:336196 Port specific IBM SP AIX problem -> dlm issue
7.3.4.5 Bug:851959 OERI:2662 possible from distributed OPS select

---------------------------------------------------------------------------
---------------------------------------------------------------------------

Examples:
~~~~~~~~
Below are some examples of this type of error and the information
you will see in the trace files.

~~~~~~~~~~
CASE (a)
~~~~~~~~~~
blockdump should look like this:

***
buffer dba: 0x05000002 inc: 0x00000001 seq: 0x0001a9c6
ver: 1 type: 1=KTU UNDO HEADER

Extent Control Header
-----------------------------------------------------------------
Extent Control:: inc#: 716918 tsn: 4 object#: 0
***

-> interpret:
dba: 0x05000002 -> 83886082 = file 5, block 2
tsn: 4 -> this rollback segment is in tablespace 4

ORA-00600: internal error code, arguments:
[2662], [0], [71183], [0], [71195], [83886082], [], []

-> [e] > 0 and represents dba from block which is in trace
-> [d]-[b] = 71195 - 71183 = 12

-> convert [d] to hex: 71195 = 0x1161B
so this value can be found in blockdump:

***
TRN TBL::

index state cflags wrap# uel scn dba
------------------------------------------------------------------
...
0x4e 9 0x00 0x00d6 0xffff 0x0000.0001161b 0x00000000
...
***

-> possible cause:
so in this case the CURRENT SCN is LOWER than the SCN on this transaction
ie: The current SCN looks like it has decreased !!
This could happen if the database is opened with the
_allow_resetlogs_corruption parameter

-> If some recovery steps have just been performed review these steps
as the mismatch may be due to open resetlogs with
_allow_resetlogs_corruption enabled or similar.
See <Parameter:Allow_Resetlogs_corruption> for information on this
parameter.
------------------------------------------------------------------

~~~~~~~~~~
CASE (b)
~~~~~~~~~~
blockdump looks like this:

***
buffer dba: 0x0100012f inc: 0x00000815 seq: 0x00000d48
ver: 1 type: 6=trans data

Block header dump: dba: 0x0100012f
Object id on Block? Y
seg/obj: 0xe csc: 0x00.5fed6 itc: 2 flg: O typ: 1 - DATA
fsl: 0 fnx: 0x0

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0000.00b.0000036c 0x0100261c.0138.04 --U- 1 fsc 0x0000.0005fed7
0x02 0x0000.00a.0000037b 0x0100261d.0138.01 --U- 1 fsc 0x0000.0005fed4

data_block_dump
===============
...
***
interpret:
dba: 0x0100012f -> 16777519 = file 1, block 303 (0x1, 0x12f)

***
SVRMGR> SELECT SEGMENT_NAME, SEGMENT_TYPE FROM DBA_EXTENTS
2> WHERE FILE_ID = 1 AND 303 BETWEEN BLOCK_ID AND
3> BLOCK_ID + BLOCKS - 1;
SEGMENT_NAME SEGMENT_TYPE
---------------------------------------------------------- -----------------
UNDO$ TABLE
1 row selected.
***

-> current sql-statement (trace):
***
update undo$ set
name=:2,file#=:3,block#=:4,status$=:5,user#=:6,
undosqn=:7,xactsqn=:8,scnbas=:9,scnwrp=:10,inst#=:11 where us#=:1

ksedmp: internal or fatal error
ORA-00600: internal error code, arguments:
[2662], [0], [392916], [0], [392919], [0], [], []
***

-> e. = 0 info not available
-> d-b = 392919 - 392916 = 3
-> dba from blockdump matches the object from current sql statement
-> convert d. to hex: 392919 = 0x5FED7
so this value can be found in blockdump -> see ITL slot 0x01!

---------------------------------------------------------------------------
---------------------------------------------------------------------------
---------------------------------------------------------------------------

Some more internals:
~~~~~~~~~~~~~~~~~~~~

I will try to give another example in order to answer the question whether
the current SCN decreased or the dependent SCN increased.

hypothesis:
current SCN decreased

Evidence:
reproduced ORA-600 [2662] by aborting a tx and using _allow_resetlogs_corruption
during open resetlogs. Check the database SCN before!

Prerequisites: _allow_resetlogs_corruption = true in init<SID>.ora
shutdown/startup db

*** BEGIN TESTCASE

SVRMGR> drop table tx;
Statement processed.
SVRMGR> create table tx (scn# number);
Statement processed.
SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392942
1 row selected.

************ another session **************
SQL> connect scott/tiger
Connected.
SQL> update emp set sal=sal+1;
13 rows processed.
SQL>
-- no commit here
*******************************************

SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392942
392943
2 rows selected.

-- so current SCN will be 392943

SVRMGR> shutdown abort
ORACLE instance shut down.

-- this breaks tx

SVRMGR> startup mount pfile=e:\jv734\initj734.ora
ORACLE instance started.
Total System Global Area 11018952 bytes
Fixed Size 35760 bytes
Variable Size 7698200 bytes
Database Buffers 3276800 bytes
Redo Buffers 8192 bytes
Database mounted.

SVRMGR> recover database until cancel;
ORA-00279: Change 392925 generated at 10/26/99 17:13:03 needed for thread 1
ORA-00289: Suggestion : e:\jv734\arch\arch_2.arc
ORA-00280: Change 392925 for thread 1 is in sequence #2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

cancel
Media recovery cancelled.
SVRMGR> alter database open resetlogs;
alter database open resetlogs
*
ORA-00600: internal error code, arguments:
[2662], [0], [392928], [0], [392931], [0], [], []

*** END TESTCASE

because we knew the current SCN before (392943), we see that the current SCN
has decreased.

after solving the problem with shutdown abort / startup (-> works):

SVRMGR> drop table tx;
Statement processed.
SVRMGR> create table tx (scn# number);
Statement processed.
SVRMGR> insert into tx values( userenv('COMMITSCN') );
1 row processed.
SVRMGR> select * from tx;
SCN#
----------
392943
1 row selected.

so we have exactly reached the current SCN from before the 'shutdown abort'.
So the current SCN was bumped up from 392928 to 392942.

Note 3: Adjust SCN
------------------

Doc ID: Note:28929.1 Content Type: TEXT/X-HTML
Subject: ORA-600 [2662] "Block SCN is ahead of Current SCN"
Creation Date: 21-OCT-1997
Type: REFERENCE Last Revision Date: 15-OCT-2004
Status: PUBLISHED
<Internal_Only>

This note contains information that was not reviewed by DDR.

As such, the contents are not necessarily accurate and care should be
taken when dealing with customers who have encountered this error.

Thanks. PAA Internals Group

</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [2662]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [2662] [a] [b] [c] [d] [e]

VERSIONS:
versions 6.0 to 10.1

DESCRIPTION:

A data block SCN is ahead of the current SCN.

The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN
stored in a UGA variable.

If the SCN is less than the dependent SCN then we signal the ORA-600 [2662]
internal error.

ARGUMENTS:
Arg [a] Current SCN WRAP
Arg [b] Current SCN BASE
Arg [c] dependent SCN WRAP
Arg [d] dependent SCN BASE
Arg [e] Where present this is the DBA where the dependent SCN came from.

FUNCTIONALITY:
File and IO buffer management for redo logs

IMPACT:
INSTANCE FAILURE
POSSIBLE PHYSICAL CORRUPTION

SUGGESTIONS:

There are different situations where ORA-600 [2662] can be raised.

It can be raised on startup or during database operation.

If not using Parallel Server, check that 2 instances have not mounted
the same database.

Check for SMON traces and have the alert.log and trace files ready
to send to support.

Check the SCN difference [argument d]-[argument b].

If the SCNs in the error are very close, then try to shutdown and startup
the instance several times.

In some situations, the SCN increment during startup may permit the
database to open. Keep track of the number of times you attempted a
startup.

If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle
Support Services for further analysis.
Known Issues:
Bug# 2899477 See Note 2899477.8
Minimise risk of a false OERI[2662]
Fixed: 9.2.0.5, 10.1.0.2

Bug# 2764106 See Note 2764106.8
False OERI[2662] possible on SELECT which can crash the instance
Fixed: 9.2.0.5, 10.1.0.2

Bug# 2054025 See Note 2054025.8
OERI:2662 possible on new TEMPORARY index block
Fixed: 9.0.1.3, 9.2.0.1

Bug# 851959 See Note 851959.8
OERI:2662 possible from distributed OPS select
Fixed: 7.3.4.5

Bug# 647927 P See Note 647927.8
Digital Unix ONLY: OERI:2662 could occur under heavy load
Fixed: 8.0.4.2, 8.0.5.0

<Internal_Only>

INTERNAL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS
========================================================================

There were 2 forms of this error until 7.2.3:

Type I: 4/5 argument forms -
The SCN found on a block (dependent SCN) is ahead of the
current SCN. See below for this

Type II: 1 Argument (before 7.2.3 only):
Oracle is in the process of writing a block to a log file.
If the calculated block checksum is less than or equal to 1
(0 and 1 are reserved) ORA-600 [2662] is returned.
This is a problem generating an offline immediate log marker
(kcrfwg).
*NOT DOCUMENTED HERE*

Type I
~~~~~~
a. Current SCN WRAP
b. Current SCN BASE
c. dependent SCN WRAP
d. dependent SCN BASE
e. Where present this is the DBA where the dependent SCN came from.
From kcrf.h:
If the SCN comes from the recent or current SCN then a dba
of zero is saved. If it comes from undo$ because the undo segment is
not available then the undo segment number is saved, which looks like
a block from file 0. If the SCN is for a media recovery redo (i.e.
block number == 0 in change vector), then the dba is for block 0
of the relevant datafile. If it is from another database for a
distributed transaction then dba is DBAINF(). If it comes from a TX
lock then the dba is really usn<<16+slot.

Type II
~~~~~~~
a. checksum -> log block checksum - zero if none (thread # in old format)

---------------------------------------------------------------------------

Diagnosis:
~~~~~~~~~~
In addition to different basic types from above, there are different
situations where ORA-600 [2662] type I can be raised.

Getting started:
~~~~~~~~~~~~~~~~
(1) is the error raised during normal database operations (i.e. when the
database is up) or during startup of the database?
(2) what is the SCN difference [d]-[b] ( subtract argument 'b' from arg 'd')?
(3) is there a fifth argument [e] ?
If so convert the dba to file# block#
Is it a data dictionary object? (file#=1)
If so find out object name with the help of reference dictionary
from second database
(4) What is the current SQL statement? (see trace)
Which table is referred to?
Does the table match the object you found in previous step?

Be careful at this point: there may be no relationship between DBA in [e]
and the real source of problem (blockdump).

Deeper analysis:
~~~~~~~~~~~~~~~~
(1) investigate trace file:
this will be a user trace file normally but could be an smon trace too
(2) search for: 'buffer'
("buffer dba" in Oracle7 dumps, "buffer tsn" in Oracle8/Oracle9 dumps)
this will bring you to a blockdump which usually represents the
'real' source of OERI:2662

WARNING: There may be more than one buffer pinned to the process
so ensure you check out all pinned buffers.

-> does the blockdump match the dba from e.?
-> what kind of blockdump is it?
(a) rollback segment header
(b) datablock
(c) other

Check list and possible causes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If Parallel Server check both nodes are using the same lock manager
instance & point at the same control files.

Possible causes:

(1) doing an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled
(2) a hardware problem, like a faulty controller, resulting in a failed
write to the control file or the redo logs
(3) restoring parts of the database from backup and not doing the
appropriate recovery
(4) restoring a control file and not doing a RECOVER DATABASE USING BACKUP
CONTROLFILE
(5) having _DISABLE_LOGGING set during crash recovery
(6) problems with the DLM in a parallel server environment
(7) a bug

Solutions:

(1) if the SCNs in the error are very close, attempting a startup several
times will bump up the dscn every time we open the database even if
open fails. The database will open when dscn=scn.

(2) You can bump the SCN either on open or while the database is open
using <Event:ADJUST_SCN> (see Note 30681.1).
Be aware that you should rebuild the database if you use this
option.

Once this has occurred you would normally want to rebuild the
database via exp/rebuild/imp as there is no guarantee that some
other blocks are not ahead of time.

Articles:
~~~~~~~~~
Solutions:
Note 30681.1 Details of the ADJUST_SCN Event
Note 1070079.6 Alter System Checkpoint

Possible Causes:
Note 1021243.6 CHECK INIT.ORA SETTING _DISABLE_LOGGING
Note 41399.1 Forcing the database open with `_ALLOW_RESETLOGS_CORRUPTION`
Note 851959.9 OERI:2662 DURING CREATE SNAPSHOT AT MASTER SITE

Known Bugs:
~~~~~~~~~~~

Fixed In.  Bug No.      Description
---------+------------+----------------------------------------------------
7.1.5      Bug 229873
7.1.3      Bug 195115    Miscalculation of SCN on startup for distributed TX ?
7.1.6.2.7  Bug 297197    Port specific Solaris OPS problem
7.3        Bug 336196    Port specific IBM SP AIX problem -> dlm issue
7.3.4.5    Bug 851959    OERI:2662 possible from distributed OPS select
Not fixed  Bug 2216823   OERI:2662 reported when reusing tempfile with restored DB
8.1.7.4    Bug 2177050   OERI:729 space leak possible (with tags "define var
                         info"/"oactoid info") can corrupt UGA and cause OERI:2662

---------------------------------------------------------------------------

</Internal_Only>

19.47: _allow_read_only_corruption
==================================

If you have a media failure and for some reason (such as having lost an archived
log file) you cannot perform
a complete recovery on some datafiles, then you might need this parameter. It is
new for 8i. Previously there
was only _allow_resetlogs_corruption which allowed you to do a RESETLOGS open of
the database
in such situations. Of course, a database forced open in this way would be in a
crazy state
because the current SCN would reflect the extent of the incomplete recovery, but
some datafiles
would have blocks in the future, which would lead to lots of nasty ORA-00600
errors
(although there is an ADJUST_SCN event that could be used for relief). Once in
this position,
the only thing to do would be to do a full database export, rebuild the database,
import and then assess the damage.

The new _allow_read_only_corruption provides a much cleaner solution to the
same problem.
You should only use it if all other recovery options have been exhausted, and you
cannot open
the database read/write. Once again, the intent is to export, rebuild and import.
Not pleasant, but sometimes
better than going back to an older usable backup and performing incomplete
recovery to a consistent state.
Also, the read only open allows you to assess better which recovery option you
want to take without committing
you to either.
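
A sketch of how this parameter is typically used, assuming all normal recovery
options really are exhausted (remove the parameter again after the salvage
export):

_allow_read_only_corruption = true      # in init.ora

SQL> startup mount
SQL> alter database open read only;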

19.48: _allow_resetlogs_corruption
==================================

log problem:

Try this approach to solve problems with redolog files:


1. create a backup of all datafiles, redolog files and controlfiles.
2. set the following initialization parameter in init.ora:

_allow_resetlogs_corruption = true

3. startup the database and try to open it
4. if the database can't be opened, then mount it and try to issue:

alter session set events '10015 trace name adjust_scn level 1';

# or, if the previous doesn't work, increase the level (e.g. to 4096):

alter session set events '10015 trace name adjust_scn level 4096';

5. alter database open

You can try with recover database until cancel and then open it with the
resetlogs option.

With this procedure I successfully recovered from losing my redolog files.

Using event 10015 you are forcing an SCN jump that will eventually synchronize
the SCN values from your datafiles and controlfiles.
The level controls how much the SCN will be incremented by. In the case of a
9.0.1 database I had, it worked only with 4096; however it may be that even a
level of 1 to 3 would make the SCN jump 1 million.
So you have to dump those headers and compare the SCNs inside before and after
the event 10015 (a sketch follows below).
I was also successful in opening a db after losing the controlfile and online
redo logs; however Oracle support made it pretty clear that the only usage for
the database afterwards is to do a full export and recreate it from that. It
would be better if Oracle support walks you through this procedure.
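
To compare the SCNs before and after firing event 10015, the datafile and
controlfile headers can be dumped to trace. A minimal sketch using the standard
dump events (database mounted; the dumps land in user_dump_dest):

alter session set events 'immediate trace name file_hdrs level 10';
alter session set events 'immediate trace name controlf level 10';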

19.49: ORA-01503: CREATE CONTROLFILE failed
===========================================

ORA-01503: CREATE CONTROLFILE failed
ORA-01161: database name PEGACC in file header does not match given name of
PEGSAV
ORA-01110: data file 1: '/u02/oradata/pegsav/system01.dbf'

Note 1:
=======

Problem:

You are attempting to recreate a controlfile with a 'create controlfile'
script and the script fails with the following error when it tries to access
one of the datafiles:

ORA-1161, database name <name> in file header does not match given name
You are certain that the file is good and that it belongs to that database.

Solution:

Check the file's properties in Windows Explorer and verify that it is not
a "Hidden" file.

Explanation:

If you have set the "Show All Files" option under Explorer, View, Options,
you are able to see 'hidden' files that other users and/or applications
cannot. If any or all datafiles are marked as 'hidden' files, Oracle does
not see them when it tries to recreate the controlfile.

You must change the properties of the file by right-clicking on the file
in Windows Explorer and then deselecting the check box marked "Hidden" under
the General tab. You should then be able to create the controlfile.

References:

Note 1084048.6 ORA-01503, ORA-01161: on Create Controlfile.

Note 2:
=======

This message may result, if the db_name in the init.ora does not match with the
set "db_name" given
while creating the controlfile.

Also, remove any old controlfiles present in the specified directory.

Thanks,

Note 3:
=======

We ran into a similar problem when trying to create a new instance with datafiles
from another database.
The error comes in the create controlfile statement. Oracle uses REUSE as the
default option when you do the alter database backup controlfile to trace.
If you replace REUSE DATABASE "<old_name>" with SET DATABASE "<new_name>", the
new database name will be written to the headers of all the database datafiles
and you will be able to start up the instance.
Hope this helps.

Note 4:
=======

Try this command "CREATE CONTROLFILE SET DATABASE..." instead of "CREATE
CONTROLFILE REUSE DATABASE...".
I think it would be better.
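
A hedged sketch of the SET variant (all names and paths below are illustrative;
take the real ones from your own backup-controlfile trace script):

CREATE CONTROLFILE SET DATABASE "PEGSAV" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    LOGFILE
      GROUP 1 '/u02/oradata/pegsav/redo01.log' SIZE 50M
    DATAFILE
      '/u02/oradata/pegsav/system01.dbf';

SET stamps the new database name into every file header, whereas REUSE expects
the name already present in the headers to match.
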
19.50. ORA-01031
================

Note 1:
-------

The 'OSDBA' and 'OSOPER' groups are chosen at installation time and usually
both default to the group 'dba'. These groups are compiled into the 'oracle'
executable and so are the same for
all databases running from a given ORACLE_HOME directory. The actual groups being
used for OSDBA and OSOPER
can be checked thus:
cd $ORACLE_HOME/rdbms/lib
cat config.[cs]
The line '#define SS_DBA_GRP "group"' should name the chosen OSDBA group.
The line '#define SS_OPER_GRP "group"' should name the chosen OSOPER group.
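
If the compiled-in group is wrong, the usual fix is to correct it and relink;
a sketch (the make targets below are the standard ones for 8i/9i, verify for
your release):

cd $ORACLE_HOME/rdbms/lib
# edit config.c / config.s so that SS_DBA_GRP names the correct group, then:
make -f ins_rdbms.mk config.o ioracle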

Note 2:
-------

Doc ID: Note:69642.1 Content Type: TEXT/PLAIN
Subject: UNIX: Checklist for Resolving Connect AS SYSDBA Issues
Creation Date: 20-APR-1999
Type: TROUBLESHOOTING Last Revision Date: 31-DEC-2004
Status: PUBLISHED
Introduction:
~~~~~~~~~~~~~
This bulletin lists the documented causes of getting

---> prompted for a password when trying to CONNECT as SYSDBA
---> errors such as ORA-01031, ORA-01034, ORA-06401, ORA-03113, ORA-09925,
ORA-09817, ORA-12705, ORA-12547

a) SQLNET.ORA Checks:
---------------------
1. The "sqlnet.ora" can be found in the following locations (listed by search
order):

$TNS_ADMIN/sqlnet.ora
$HOME/sqlnet.ora
$ORACLE_HOME/network/admin/sqlnet.ora

Depending upon your operating system, it may also be located in:

/var/opt/oracle/sqlnet.ora
/etc/sqlnet.ora

A corrupted "sqlnet.ora" file, or one with security options set, will cause
a 'connect internal' request to prompt for a password.
To determine if this is the problem, locate the "sqlnet.ora" that is being
used.
The one being used will be the first one found according to the search order
listed above.
Next, move the file so that it will not be found by this search:

% mv sqlnet.ora sqlnet.ora_save

Try to connect internal again.


If it still fails, search for other "sqlnet.ora" files according to the search
order listed
above and repeat using the move command until you are sure there are no other
"sqlnet.ora" files being used.
If this does not resolve the issue, use the move command to put all the
"sqlnet.ora" files back where they were before you made the change:

% mv sqlnet.ora_save sqlnet.ora

If moving the "sqlnet.ora" resolves the issue, then verify the contents of the
file:

a) SQLNET.AUTHENTICATION_SERVICES

If you are not using database links, comment this line out or try setting it
to:

SQLNET.AUTHENTICATION_SERVICES = (BEQ,NONE)

b) SQLNET.CRYPTO_SEED

This should not be set in a "sqlnet.ora" file on UNIX.
If it is, comment the line out. (This setting is added to the "sqlnet.ora"
if it is built by one of Oracle's network configuration products shipped with
client products)

c) AUTOMATIC_IPC

If this is set to "ON" it can force a "TWO_TASK" connection.
Try setting this to "OFF":

AUTOMATIC_IPC = OFF

2. Set the permissions correctly in the "TNS_ADMIN" files.


The environment variable TNS_ADMIN defines the directory where the
"sqlnet.ora",
"tnsnames.ora", and "listener.ora" files reside.
These files must contain the correct permissions, which are set when "root.sh"
runs
during installation.
As root, run "root.sh" or edit the permissions on the "sqlnet.ora",
"tnsnames.ora",
and "listener.ora" files by hand as follows:

$ cd $TNS_ADMIN
$ chmod 644 sqlnet.ora tnsnames.ora listener.ora
$ ls -l sqlnet.ora tnsnames.ora listener.ora

-rw-r--r-- 1 oracle dba  1628 Jul 12 15:25 listener.ora
-rw-r--r-- 1 oracle dba   586 Jun  1 12:07 sqlnet.ora
-rw-r--r-- 1 oracle dba 82274 Jul 12 15:23 tnsnames.ora

b) Software and Operating System Issues:
----------------------------------------
1. Be sure $ORACLE_HOME is set to the correct directory and does not have any
typing mistakes:

% cd $ORACLE_HOME
% pwd

If this returns a location other than your "ORACLE_HOME" or is invalid, you
will need to reset the value of this environment variable:

sh or ksh:
----------
$ ORACLE_HOME=<path_to_ORACLE_HOME>
$ export ORACLE_HOME

Example:
$ ORACLE_HOME=/u01/app/oracle/product/7.3.3
$ export ORACLE_HOME

csh:
----
% setenv ORACLE_HOME <path_to_ORACLE_HOME>

Example:
% setenv ORACLE_HOME /u01/app/oracle/product/7.3.3

If your "ORACLE_HOME" contains a link or the instance was started with the
"ORACLE_HOME" set to another value, the instance may try to start using the
memory location that another instance is using.
An example of this might be:

You have "ORACLE_HOME" set to "/u01/app/oracle/product/7.3.3" and start the


instance.
Then you do something like:

% ln -s /u01/app/oracle/product/7.3.3 /u01/app/oracle/7.3.3
% setenv ORACLE_HOME /u01/app/oracle/7.3.3
% svrmgrl

SVRMGR> connect internal

If this prompts for a password then most likely the combination of your
"ORACLE_HOME" and "ORACLE_SID" hash to the same shared memory address of
another running instance. Otherwise you may be able to connect internal
but you will receive an ORA-01034 "Oracle not available" error.

In most cases using a link as part of your "ORACLE_HOME" is fine as long as
you are consistent.
Oracle recommends that links not be used as part of the "ORACLE_HOME", but
their use is supported.

2. Check that $ORACLE_SID is set to the correct SID (including capitalization),
and does not have any typos:

% echo $ORACLE_SID

Refer to Note:1048876.6 for more information.

3. Ensure $TWO_TASK is not set.


To check if "TWO_TASK" is set, do the following:

sh, ksh or on HP/UX only csh:
-----------------------------
env |grep -i two
- or -
echo $TWO_TASK

csh:
----
setenv |grep -i two

If any lines are returned such as:

TWO_TASK=
- or -
TWO_TASK=PROD

You will need to unset the environment variable "TWO_TASK":

sh or ksh:
----------
unset TWO_TASK

csh:
----
unsetenv TWO_TASK

Example :

$ TWO_TASK=V817
$ export TWO_TASK
$ sqlplus /nolog

SQL*Plus: Release 8.1.7.0.0 - Production on Fri Dec 31 10:12:25 2004


(c) Copyright 2000 Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

$ unset TWO_TASK
$ sqlplus /nolog
SQL> conn / as sysdba
Connected.

If you are running Oracle release 8.0.4, and upon starting "svrmgrl" you
receive an ORA-06401 "NETCMN: invalid driver designator" error, you should
also unset two_task.
The login connect string may be getting its value from the TWO_TASK
environment variable if this is set for the user.

4. Check the permissions on the Oracle executable:

% cd $ORACLE_HOME/bin
% ls -l oracle ('ls -n oracle' should work as well)

The permissions should be rwsr-s--x, or 6751.
If the permissions are incorrect, do the following as the "oracle"
software owner:

% chmod 6751 oracle

If you receive an ORA-03113 "end-of-file on communication" error followed
by a prompt for a password, then you may also need to check the ownership
and permissions on the dump directories.
These directories must belong to Oracle, group dba, (or the appropriates names
for your installation).
This error may occur while creating a database.

Permissions should be: 755 (drwxr-xr-x)

Also, the alert.log must not be greater than 2 Gigabytes in size.
When you start up "nomount" an Oracle pseudo process will try to write the
"alert.log" file in "udump".
When Oracle cannot do this (either because of permissions or because of the
"alert.log" being greater than 2 Gigabytes in size), it will issue the
ORA-03113 error.

5. "osdba" group checks:

a. Make sure the operating system user issuing the CONNECT INTERNAL belongs
to the "osdba" group as defined in the "$ORACLE_HOME/rdbms/lib/config.s"
or "$ORACLE_HOME/rdbms/lib/config.c". Typically this is set to "dba".
To verify the operating system groups the user belongs to, do the following:

% id
uid=1030(oracle) gid=1030(dba)

The "gid" here is "dba" so the "config.s" or "config.c" may contain an


entry such as:

/* 0x0008 15 */ .ascii "dba\0"

If these do not match, you either need to add the operating system user
to the group as it is seen in the "config" file, or modify the "config"
file and relink the "oracle" binary.

Refer to entry [NOTE:50507.1] section 3 for more details.

b. Be sure you are not logged in as the "root" user and that the environment
variables "USER", "USERNAME", and "LOGNAME" are not set to "root".
The "root" user is a special case and cannot connect to Oracle as the
"internal" user unless the effective group is changed to the "osdba" group,
which is typically "dba".
To do this, either modify the "/etc/password" file (not recommended) or
use the "newgrp" command:
# newgrp dba

"newgrp" always opens a new shell, so you cannot issue "newgrp" from
within a shell script.
Keep this in mind if you plan on executing scripts as the "root" user.

c. Verify that the "osdba" group is only listed once in the "/etc/group" file:

% grep dba /etc/group
dba::1010:
dba::1100:

If more than one line starting with the "osdba" group is returned, you
need to remove the ones that are not correct.
It is not possible to have more than one group use a group name.

d. Check that the oracle user uid and gid are matching with /etc/passwd and
/etc/group :

$ id
uid=500(oracle) gid=235(dba)

$ grep oracle /etc/passwd
oracle:x:500:235:oracle:/home/oracle:/bin/bash
^^^
$ grep dba /etc/group
dba:x:253:oracle
^^^
The mismatch also causes an ORA-1031 error.

6. Verify that the file system is not mounted nosuid:

% mount
/u07 on /dev/md/dsk/d7 nosuid/read/write

If the filesytem is mounted "nosuid", as seen in this example, you will need
to unmount the filesystem and mount it without the "nosuid" option.
Consult your operating system documentation or your operating system vendor
for instruction on modifying mount options.

7. Please read the following warning before you attempt to use the information
in this step:

******************************************************************
* *
* WARNING: If you remove segments that belong to a running *
* instance you will crash the instance, and this may *
* cause database corruption. *
* *
* Please call Oracle Support Services for assistance *
* if you have any doubts about removing shared memory *
* segments. *
* *
******************************************************************

If an instance crashed or was killed off using "kill" there may be shared
memory segments hanging around that belong to the down instance.
If there are no other instances running on the machine you can issue:

% ipcs -b

T ID KEY MODE OWNER GROUP SEGSZ
Shared Memory:
m 0 0x50000ffe --rw-r--r-- root root 68
m 1601 0x0eedcdb8 --rw-r----- oracle dba 4530176

In this case the "ID" of "1601" is owned by "oracle" and if there are no
other instances running in most cases this can safely be removed:

% ipcrm -m 1601

If your SGA is split into multiple segments you will have to remove all
segments associated with the instance. If there are other instances
running, and you are not sure which memory segments belong to the failed
instance, you can do the following:

a. Shut down all the instances on the machine and remove whatever shared
memory still exists that is owned by the software owner.
b. Reboot the machine.
c. If your Oracle software is release 7.3.3 or newer, you can connect into
each instance that is up and identify the shared memory owned by that
instance:

% svrmgrl
SVRMGR> connect internal
SVRMGR> oradebug ipc

In Oracle8:
-----------
Area #0 `Fixed Size', containing Subareas 0-0
Total size 000000000000b8c0, Minimum Subarea size 00000000
Subarea Shmid Size Stable Addr
0 7205 000000000000c000 80000000

In Oracle7:
-----------

-------------- Shared memory --------------


Seg Id Address Size
2016 80000000 4308992
Total: # of segments = 1, size = 4308992

Note the "Shmid" for Oracle8 and "Seg Id" for Oracle7 for each running
instance.
By process of elimination find the segments that do not belong to an
instance and remove them.

8. If you are prompted for a password and then receive error ORA-09925 "unable
to create audit trail file" or error ORA-09817 "write to audit file failed",
along with "SVR4 Error: 28: No space left on device", do the following:

Check your "pfile". It is typically in the "$ORACLE_HOME/dbs" directory


and will be named "init<your_sid>.ora, where "<your_sid>" is the value of
"ORACLE_SID" in your environment. If the "init<your_sid>.ora" file has
the "ifile" parameter set, you will also have to check the included file
as well. You are looking for the parameter "audit_file_dest".

If "audit_file_dest" is set, change to that directory; otherwise change to


the "$ORACLE_HOME/rdbms/audit" directory, as this is the default location
for audit files. If the directory does not exist, create it.
Ensure that you have enough space to create the audit file.
The audit file is generally 600 bytes in size.
If it does exist, verify you can write to the directory:

% touch afile

If the file called "afile" could not be created, you need to change the permissions
on your audit directory:

% chmod 751

9. If connect internal prompts you for a password and then you receive an
ORA-12705 "invalid or unknown NLS parameter value specified" error, you
need to verify the settings for "ORA_NLS", "ORA_NLS32", "ORA_NLS33" or
"NLS_LANG".
You will need to consult your Installation and Configuration Guide for the
proper settings for these environment variables.

10. If you have installed Oracle software and are trying to connect with
Server Manager to create or start the database, and receive a TNS-12571
"packet writer failure" error, please refer to Note:1064635.6

11. If in SVRMGRL (Server Manager line mode), you are running the "startup.sql"
script and receive the following error:

ld.so.1: oracle_home/bin/svrmgrl fatal relocation error:
symbol not found: kgffiop

RDBMS v7.3.2 is installed.
RDBMS v8.0.4 is a separate "oracle_home", and you are attempting to have
it coexist.
This is due to the wrong version of the client shared library "libclntsh.so.1"
being used at runtime.


Verify environment variable settings.

You need to ensure that "ORACLE_HOME" and "LD_LIBRARY_PATH" are set correctly.

For C-shell, type:

% setenv LD_LIBRARY_PATH $ORACLE_HOME/lib
% setenv ORACLE_HOME /u01/app/oracle/product/8.0.4

For Bourne or Korn shell, type:

$ LD_LIBRARY_PATH=$ORACLE_HOME/lib
$ export LD_LIBRARY_PATH
$ ORACLE_HOME=/u01/app/oracle/product/8.0.4
$ export ORACLE_HOME

12. Ensure that the disk the instance resides on has not reached 100% capacity.

% df -k

If it has reached 100% capacity, this may be the cause of 'connect internal'
prompting for a password.
Additional disk space will need to be made available before 'connect internal'
will work.

For additional information refer to Note:97849.1

13. Delete process.dat and regid.dat files in $ORACLE_HOME/otrace/admin directory.

Oracle Trace is enabled by default on 7.3.2 and 7.3.3 (depends on platform).
This can cause high disk space usage by these files and a number of
apparently mysterious side effects.
See Note:45482.1 for more details.

14. When you get ora-1031 "Insufficient privileges" on connect internal after you
supply a valid password, and you have multiple instances running from the same
ORACLE_HOME, be sure that if an instance has REMOTE_LOGIN_PASSWORDFILE set to
exclusive, the file $ORACLE_HOME/dbs/orapw<sid> does exist; otherwise it
defaults to the use of the file orapw, which consequently causes access problems
for any other database that has the parameter set to shared.
Set the parameter REMOTE_LOGIN_PASSWORDFILE to shared for all instances that
share the common password file, and create an exclusive orapw<sid> password
file for any instances that have this set to exclusive.
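
A sketch of creating a per-instance password file with the standard orapwd
utility (pick your own password and entries count):

cd $ORACLE_HOME/dbs
orapwd file=orapw$ORACLE_SID password=<sys_password> entries=5

and set REMOTE_LOGIN_PASSWORDFILE=exclusive for that instance.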

15. Check permissions on the /etc/passwd file (Unix only).
If Oracle cannot open /etc/passwd, connect internal fails with
ORA-1031, since Oracle is not able to verify whether the user trying to connect
is indeed in the dba group.
Example:
--------
# chmod 711 /etc/passwd
# ls -ltr passwd
-rwx--x--x 1 root sys 901 Sep 21 14:26 passwd

$ sqlplus '/ as sysdba'

SQL*Plus: Release 9.2.0.1.0 - Production on Sat Sep 21 16:21:18 2002

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

ERROR:
ORA-01031: insufficient privileges

Trussing sqlplus will show also the problem:

25338: munmap(0xFF210000, 8192) = 0
25338: lwp_mutex_wakeup(0xFF3E0778) = 0
25338: lwp_mutex_lock(0xFF3E0778) = 0
25338: time() = 1032582594
25338: open("/etc/passwd", O_RDONLY) Err#13 EACCES
25338: getrlimit(RLIMIT_NOFILE, 0xFFBE8B28) = 0

c) Operating System Specific checks:
------------------------------------
1. On OpenVMS, check that the privileges have been granted at the Operating
System level:

$ SET DEFAULT SYS$SYSTEM:
$ RUN AUTHORIZE

If the list returned by AUTHORIZE does not contain ORA_<SID>_DBA or ORA_DBA,
then you do not have the correct OS privileges to issue a connect internal.
If ORA_<SID>_DBA was added AFTER ORA_DBA, then ORA_DBA needs to be removed
and granted again to be updated.
Please refer to Note:1010852.6 for more details.

2. On Windows NT, check if DBA_AUTHORIZATION is set to BYPASS in the registry.

3. On Windows NT, if you are able to connect internally but then startup fails
for some reason, successive connect internal attempts might prompt for a
password. You may also receive errors such as:

ORA-12705: invalid or unknown NLS parameter value specified
ORA-01012: not logged on
LCC-00161: Oracle error (possible syntax error)
ORA-01031: insufficient privileges

Refer to entry Note:1027964.6 for suggestions on how to resolve this problem

4. If you are using Multi-Threaded Server (MTS), make sure you are using a
dedicated server connection.
A dedicated server connection is required to start up or shutdown the database.
Unless the database alias in the "TNSNAMES.ORA" file includes a parameter to
make a dedicated server connection, it will make a shared connection to a
dispatcher.
See Note:1058680.6 for more details.

5. On Solaris, if the file "/etc/.name_service_door" has incorrect permissions,
Oracle cannot read the file. You will receive a message that the Oracle
user cannot access "/etc/.name_service_door" (permission denied).
This file is a flavor of IPC specific to Solaris which the Oracle software uses.
This can also cause connect internal problems. See entry Note:1066589.6

6. You are on Digital Unix, running SVRMGRL (Server Manager line mode), and you
receive an ORA-12547 "TNS:lost contact" error and a password prompt.

This problem occurs when using Parallel Server and the True Cluster software
together.
If Parallel Server is not linked in, svrmgrl works as expected.
Oracle V8.0.5 requires an Operating System patch which previous versions of
Oracle did not require.
The above patch allows svrmgrl to communicate with the TCR software.

You can determine if the patch is applied by running:

% nm /usr/ccs/lib/libssn.a | grep adjust

If this returns nothing, then you need to:

1. Obtain the patch for TCR 1.5 from Digital.
This patch is for the MC SCN and adds the symbol "adjustSequenceNumber"
to the library /usr/ccs/lib/libssn.a.
2. Apply the patch.
3. Relink Oracle

Another possibility is that you need to raise the value of the kernel parameter
per-proc-stack-size; increasing it from its default value of 2097152 to
83886080 has resolved this problem.

7. You are on version 6.2 of the Silicon Graphics UNIX (IRIX) operating system
and you have recently installed RDBMS release 8.0.3.
If you are logged on as "oracle/dba" and an attempt to log in to Server Manager
using "connect/internal" prompts you for a password, you should refer to entry
Note:1040607.6

8. On AIX 4.3.3 after applying ML5 or higher you can no longer connect as
internal, or on 9.x '/ as sysdba' does not work either.
This is a known AIX bug and it occurs on all RS6000 ports including SP2.
There are two workarounds and one solution. They are as follows:

1) Use the mkpasswd command to remove the index.
This is valid until a new user is added to "/etc/passwd" or modified:

# mkpasswd -v -d

2) Touch the "/etc/passwd" file.
If the "/etc/passwd" file is newer than the index it will not use the
password file index:

# touch /etc/passwd

3) Obtain APAR IY22458 from IBM.
Any questions about this APAR should be directed to IBM.

d) Additional Information:
--------------------------
1. In the "Oracle7 Administrator's Reference for UNIX", there is a note that
states:

If REMOTE_OS_AUTHENT is set to true, users who are members of the dba group
on the remote machine are able to connect as INTERNAL without a password.
However, if you are connecting remotely, that is connecting via anything
except the bequeath adapter, you will be prompted for a password regardless
of the value of "REMOTE_OS_AUTHENT".
Refer to bug 644988

References:
~~~~~~~~~~~
[NOTE:1048876.6] UNIX: Connect internal prompts for password after install
[NOTE:1064635.6] ORA-12571: PACKET WRITER FAILURE WHEN STARTING SVRMGR
[NOTE:1010852.6] OPENVMS: ORA-01031: WHEN ISSUING "CONNECT INTERNAL" IN SQL*DBA
OR SERVER MANAGER
[NOTE:1027964.6] LCC-00161 AND ORA-01031 ON STARTUP
[NOTE:1058680.6] ORA-00106 or ORA-01031 ERROR when trying to STARTUP or SHUTDOWN
DATABASE
[NOTE:1066589.6] UNIX: Connect Internal asks for password when TWO_TASK is set
[NOTE:1040607.6] SGI: ORA-01012 ORA-01031: WHEN USING SRVMGR AFTER 8.0.3 INSTALL
[NOTE:97849.1] Connect internal Requires Password
[NOTE:50507.1] SYSDBA and SYSOPER Privileges in Oracle8 and Oracle7
[NOTE:18089.1] UNIX: Connect INTERNAL / AS SYSBDA Privilege on Oracle 7/8
[BUG:644988] REMOTE_OS_AUTHENT=TRUE: NOT ALLOWING USERS TO CONNECT INTERNAL
WITHOUT PASSWORD

Search Words:
~~~~~~~~~~~~~
svrmgrm sqldba sqlplus sqlnet
remote_login_passwordfile

Note 3:
-------

ORA-01031: insufficient privileges


Cause: An attempt was made to change the current username or password without the
appropriate privilege.
This error also occurs if attempting to install a database without the necessary
operating system privileges.
Action: Ask the database administrator to perform the operation or grant the
required privileges.

Note 4:
-------

ORA-01031: insufficient privileges


In most cases, the user receiving this error lacks a privilege to create an
object (such as a table, view,
procedure and the like). Grant the required privilege like so:
grant create table to user_lacking_privilege;
Startup
If someone receives this error while trying to startup the instance, the logged on
user must belong
to the ora_dba group on Windows or dba group on Unix.

Note 5:
-------
I am not sure it is the same, but I got this error today on Windows when
SQLNET.AUTHENTICATION_SERVICES in sqlnet.ora was set to NONE.
Changing it to NTS solved the problem.

19.51: ORA-00600: internal error code, arguments: [17059]
=========================================================

Note 1:
-------

Doc ID: Note:138554.1 Content Type: TEXT/PLAIN
Subject: ORA-600 [17059] Creation Date: 02-APR-2001
Type: REFERENCE Last Revision Date: 09-DEC-2004
Status: PUBLISHED
Note: For additional ORA-600 related information please read [NOTE:146580.1]

PURPOSE:
This article discusses the internal error "ORA-600 [17059]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [17059] [a]

VERSIONS:
versions 7.1 to 10.1

DESCRIPTION:

While building a table to hold the list of child cursor dependencies
relating to a given parent cursor, we exceed the maximum possible size
of the table.

ARGUMENTS:
Arg [a] Object containing the table

FUNCTIONALITY:
Kernel Generic Library cache manager

IMPACT:
PROCESS FAILURE
NON CORRUPTIVE - No underlying data corruption.

SUGGESTIONS:

One symptom of this error is that the session will appear to hang for a
period of time prior to this error being reported.

If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle
Support Services for further analysis.

Issuing this SQL as SYS (SYSDBA) may help show any problem
objects in the dictionary:

select do.obj#,
po.obj# ,
p_timestamp,
po.stime ,
decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
from sys.obj$ do, sys.dependency$ d, sys.obj$ po
where P_OBJ#=po.obj#(+)
and D_OBJ#=do.obj#
and do.status=1 /*dependent is valid*/
and po.status=1 /*parent is valid*/
and po.stime!=p_timestamp /*parent timestamp not match*/
order by 2,1
;

Normally the above select would return no rows. If any rows are
returned the listed dependent objects may need recompiling.

Known Issues:

Bug# 3555003 See [NOTE:3555003.8]
View compilation hangs / OERI:17059 after DBMS_APPLY_ADM.SET_DML_HANDLER
Fixed: 9.2.0.6

Bug# 2707304 See [NOTE:2707304.8]
OERI:17059 / OERI:kqlupd2 / PLS-907 after adding partitions to Partitioned IOT
Fixed: 9.2.0.3, 10.1.0.2

Bug# 2636685 See [NOTE:2636685.8]
Hang / OERI:[17059] after adding a list value to a partition
Fixed: 9.2.0.3, 10.1.0.2

Bug# 2626347 See [NOTE:2626347.8]
OERI:17059 accessing view after ADD / SPLIT PARTITION
Fixed: 9.2.0.3, 10.1.0.2

Bug# 2306331 See [NOTE:2306331.8]
Hang / OERI[17059] on view after SET_KEY or SET_DML_INVOKATION on base table
Fixed: 9.2.0.2

Bug# 1115424 See [NOTE:1115424.8]
Cursor authorization and dependency lists too long - can impact shared pool / OERI:17059
Fixed: 8.0.6.2, 8.1.6.2, 8.1.7.0

Bug# 631335 See [NOTE:631335.8]
OERI:17059 from extensive re-use of a cursor
Fixed: 8.0.4.2, 8.0.5.0, 8.1.5.0

Bug# 558160 See [NOTE:558160.8]
OERI:17059 from granting privileges multiple times
Fixed: 8.0.3.2, 8.0.4.0, 8.1.5.0

Note 2:
-------

Doc ID: Note:234457.1 Content Type: TEXT/X-HTML
Subject: ORA-600 [17059] Error When Compiling A Package
Creation Date: 19-FEB-2003
Type: PROBLEM Last Revision Date: 24-AUG-2004
Status: PUBLISHED

fact:
fact: Oracle Server - Enterprise Edition

fact: Partitioned Tables / Indexes

symptom: ORA-600 [17059] Error When Compiling A Package

symptom: When Compiling a Package

symptom: The Package Accesses a Partitioned Table

symptom: ORA-00600: internal error code, arguments: [%s], [%s], [%s], [%s],
[%s], [%s], [%s]

symptom: internal error code, arguments: [17059], [352251864]

symptom: Calling Location kglgob

symptom: Calling Location kgldpo

symptom: Calling Location kgldon

symptom: Calling Location pkldon

symptom: Calling Location pkloud

symptom: Calling Location - phnnrl_name_resolve_by_loading

cause: This is due to Bug:2073948,
fixed in 10g, and occurs when accessing a
partitioned table via a dblink within the package, where DDL (such as
adding/dropping partitions) is performed on the table.

fix:
This is fixed in 9.0.1.4, 9.2.0.2 & 10g. One-off patches are available
for 8.1.7.4. A workaround is to flush the shared pool.

Note 3:
-------

Doc ID: Note:239796.1 Content Type: TEXT/PLAIN
Subject: ORA-600 [17059] when querying dba_tablespaces, dba_indexes,
dba_ind_partitions etc
Creation Date: 28-MAY-2003
Type: PROBLEM Last Revision Date: 13-AUG-2004
Status: PUBLISHED
Problem:
~~~~~~~~

The information in this article applies to:

Internal Error ORA-600 [17059] when querying Data dictionary views like
dba_tablespaces,
dba_indexes, dba_ind_partitions etc

Symptom(s)
~~~~~~~~~~
While querying Data dictionary views like dba_tablespaces,
dba_indexes, dba_ind_partitions etc, getting internal error ORA-600 [17059]

Change(s)
~~~~~~~~~~
You probably altered some objects or executed some cat*.sql scripts.

Cause
~~~~~~~
Some SYS objects are INVALID.

Fix
~~~~
Connect SYS
run $ORACLE_HOME/rdbms/admin/utlrp.sql and make sure all the objects are valid.
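
A sketch of the recompile-and-verify sequence:

SQL> connect / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
SQL> select owner, object_name, object_type
     from dba_objects where status = 'INVALID';

The last query should return no rows once everything is valid.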

19.52: ORA-00600: internal error code, arguments: [17003]
=========================================================

Note 1:
-------

The information in this article applies to:


Oracle Forms - Version: 9.0.2.7 to 9.0.2.12
Oracle Server - Enterprise Edition - Version: 9.2
This problem can occur on any platform.

Errors
ORA 600 "internal error code, arguments: [%s],[%s],[%s], [%s], [%s],

Symptoms
The following error occurs when compiling a form or library ( fmb / pll ) against
RDBMS 9.2

PL/SQL ERROR 0 at line 0, column 0
ORA-00600: internal error code, arguments: [17003], [0x11360BC], [275], [1],
[], [], [], []

The error reproduces everytime.

Triggers / local program units in the form / library contain calls to stored
database procedures and / or functions.

The error does not occur when compiling against RDBMS 9.0.1 or lower.
Cause
This is a known bug / issue. The compilation error occurs when the form contains a
call to a stored database
function / procedure which has two DATE IN variables receiving DEFAULT values such
as SYSDATE.
Reference:
<Bug:2713384> Abstract: INTERNAL ERROR [1401] WHEN COMPILE FUNCTION WITH 2
DEFAULT DATE VARIABLES ON 9.2
Fix
The bug is fixed in Oracle Forms 10g (9.0.4). There is no backport fix available
for
Forms 9i (9.0.2)

To work around, modify the offending calls to the stored database procedures/
functions so that DEFAULT parameter values are not passed directly.
For example, pass the DEFAULT value SYSDATE indirectly to the stored database
procedure/function by first assigning it to a local variable in the form.
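
A minimal sketch of that workaround (package, function and parameter names are
made up for illustration):

-- Problematic: both DATE parameters left to default to SYSDATE:
--   v_result := my_pkg.calc_value;
DECLARE
  v_now    DATE := SYSDATE;  -- take the default into a local variable first
  v_result NUMBER;
BEGIN
  v_result := my_pkg.calc_value(p_from => v_now, p_to => v_now);
END;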

Note 2:
-------

Doc ID: Note:138537.1 Content Type: TEXT/PLAIN
Subject: ORA-600 [17003] Creation Date: 02-APR-2001
Type: REFERENCE Last Revision Date: 15-OCT-2004
Status: PUBLISHED
Note: For additional ORA-600 related information please read [NOTE:146580.1]

PURPOSE:
This article discusses the internal error "ORA-600 [17003]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [17003] [a] [b] [c]

VERSIONS:
versions 7.0 to 10.1

DESCRIPTION:
The error indicates that we have tried to lock a library cache object by
using the dependency number to identify the target object and have found
that no such dependency exists.

Under this situation we will raise an ORA-600 [17003] if the dependency
number that we are using exceeds the number of entries in the dependency
table or the dependency entry is not marked as invalidated.

ARGUMENTS:
Arg [a] Library Cache Object Handle
Arg [b] Dependency number
Arg [c] 1 or 2 (indicates where the error was raised internally)

FUNCTIONALITY:
Kernel Generic Library cache manager

IMPACT:
PROCESS MEMORY FAILURE
NO UNDERLYING DATA CORRUPTION.

SUGGESTIONS:

A common condition where this error is seen is problematic upgrades.

If a patchset has recently been applied, please confirm that there were
no errors associated with this upgrade.

Specifically, there are some XDB related bugs which can lead to this error
being reported.

Known Issues:
Bug# 2611590 See [NOTE:2611590.8]
OERI:[17003] running XDBRELOD.SQL
Fixed: 9.2.0.3, 10.1.0.2

Bug# 3073414
XDB may not work after applying a 9.2 patch set
Fixed: 9.2.0.5

19.53: ORA-00600: internal error code, arguments: [qmxiUnpPacked2], [121], [], [], [], [], [], []
=================================================================================================

Note 1.
-------

Doc ID: Note:222876.1 Content Type: TEXT/PLAIN


Subject: ORA-600 [qmxiUnpPacked2] Creation Date: 09-DEC-2002
Type: REFERENCE Last Revision Date: 15-OCT-2004
Status: PUBLISHED
Note: For additional ORA-600 related information please read [NOTE:146580.1]
PURPOSE:
This article discusses the internal error "ORA-600 [qmxiUnpPacked2]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [qmxiUnpPacked2] [a]

VERSIONS:
versions 9.2 to 10.1

DESCRIPTION:

When unpickling an XOB or an array of XOBs an unexpected datatype was found.

Generally due to XMLType data that has not been successfully upgraded from
a previous version.

ARGUMENTS:
Arg [a] Type of XOB

FUNCTIONALITY:
Qernel xMl support Xob to/from Image

IMPACT:
PROCESS FAILURE
NON CORRUPTIVE - No underlying data corruption.

SUGGESTIONS:

Please review the following article on Metalink :

[NOTE:235423.1] How to resolve ORA-600 [qmxiUnpPacked2] during upgrade

If you still encounter the error having tried the suggestions in the
above article, or the article isn't applicable to your environment, then
ensure that the upgrade to the current version was completed successfully
without error.
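
One hedged way to confirm that the upgrade completed cleanly (9.2 onwards) is
to check the component registry; every installed component should show the
expected version with status VALID:

SQL> SELECT comp_id, version, status FROM dba_registry;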

If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle
Support Services for further analysis.

Known Issues:
Bug# 2607128 See [NOTE:2607128.8]
OERI:[qmxiUnpPacked2] if CATPATCH.SQL/XDBPATCH.SQL fails
Fixed: 9.2.0.3

Bug# 2734234
CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2] DURING CATPATCH.SQL 9.2.0.2

Note 2.
-------
Doc ID: Note:235423.1 Content Type: TEXT/X-HTML
Subject: How to resolve ORA-600 [qmxiUnpPacked2] during upgrade Creation
Date: 14-APR-2003
Type: HOWTO Last Revision Date: 18-MAR-2005
Status: PUBLISHED

The information in this article applies to:

Oracle 9.2.0.2
Multiple Platforms, 64-bit

Symptom(s)
~~~~~~~~~~

ORA-600 [qmxiUnpPacked2] []

Cause
~~~~~

If the error is seen after applying 9.2.0.2 on a 9.2.0.1 database, or if
using DBCA in 9.2.0.2 to create a new database (which is using the 9.2.0.1
seed database), then it is very likely that either shared_pool_size or
java_pool_size was too small when catpatch.sql was executed.

Error is generally seen as

ORA-600: internal error code, arguments: [qmxiUnpPacked2], [121]

There are 3 options to proceed from here:-

Fix
~~~~

Option 1
========

If your shared_pool_size and java_pool_size are less than 150Mb then do the
following :-

1/ Set your shared_pool_size and java_pool_size to 150Mb each (a sketch of
doing this follows these steps). In some cases you may need to use larger
pool sizes.

2/ Get the xdbpatch.sql script from Note 237305.1

3/ Copy xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql, having taken a
backup of the original file first

4/ Restart the instance with:

startup migrate;

5/ spool catpatch
@?/rdbms/admin/catpatch.sql
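
A minimal sketch of checking and raising the pools beforehand (this assumes
an spfile is in use; with a pfile, edit the init.ora instead):

SQL> show parameter shared_pool_size
SQL> show parameter java_pool_size
SQL> alter system set shared_pool_size=150M scope=spfile;
SQL> alter system set java_pool_size=150M scope=spfile;
SQL> shutdown immediate
SQL> startup migrate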

Option 2
========

If you already have shared_pool_size and java_pool_size set at greater than
150Mb, then the problem may be that the shared memory allocated during
the JVM upgrade is not released properly. In that case do the following :-

1/ Set your shared_pool_size and java_pool_size to 150Mb each. In some cases
you may need to use larger pool sizes.

2/ Get the xdbpatch.sql script from Note 237305.1

3/ Edit the xdbpatch.sql script and add the following as the first line in
the script:-

alter system flush shared_pool;

4/ Copy xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql, having taken a
backup of the original file first

5/ Restart the instance with:

startup migrate;

6/ spool catpatch
@?/rdbms/admin/catpatch.sql

Option 3
========

If XDB is NOT in use and there are NO registered XML Schemas, an alternative
is to drop, and maybe re-install, XDB (a verification sketch follows these
steps) :-

1/ To drop the XDB subsystem connect as sys and run

@?/rdbms/admin/catnoqm.sql

2/ You can then run catpatch.sql to perform the upgrade

startup migrate;

@?/rdbms/admin/catpatch.sql

3/ Once complete you may choose to re-install the XDB subsystem; if so,
connect as sys and run catqm.sql

@?/rdbms/admin/catqm.sql <XDB_PASSWD> <TABLESPACE> <TEMP_TABLESPACE>
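
Before choosing Option 3, one hedged way to verify that XDB is really not in
use and that no XML schemas are registered (these views exist in 9.2 when XDB
is installed):

SQL> SELECT comp_id, version, status FROM dba_registry WHERE comp_id = 'XDB';
SQL> SELECT count(*) FROM dba_xml_schemas;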

If the error is seen during normal database operation, ensure that the upgrade
to the current version was completed successfully without error. Once this is
confirmed, attempt to reproduce the error; if successful, forward the ALERT.LOG,
trace files and full error stack to Oracle Support Services for further
analysis.
References
~~~~~~~~~~~

Bug 2734234 CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2] DURING CATPATCH.SQL 9.2.0.2
Note 237305.1 Modified xdbpatch.sql

19.54 ORA-00600: internal error code, arguments: [kcbget_37], [1], [], [], [], [], [], []
=========================================================================================

ORA-00600: internal error code, arguments: [kcbso1_1], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [kcbget_37], [1], [], [], [], [], [], []

Doc ID: Note:2652771.8


Subject: Support Description of Bug 2652771
Type: PATCH
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 13-AUG-2003
Last Revision Date: 14-AUG-2003


Bug 2652771 AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption


This note gives a brief overview of bug 2652771.

Affects:
Product (Component) Oracle Server (RDBMS)
Range of versions believed to be affected Versions < 10G
Versions confirmed as being affected 8.1.7.4
9.2.0.2

Platforms affected Aix 64bit 5L / Aix 64bit 433

Fixed:
This issue is fixed in 9.2.0.3 (Server Patch Set)

Symptoms:
Memory Corruption
Internal Error may occur (ORA-600)
ORA-600 [1100] / ORA-600 [kcbget_37]

Known Issues: Bug# 2652771 P See [NOTE:2652771.8]
AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption Fixed: 9.2.0.3

19.55 ORA-00600: internal error code, arguments: [kcbzwb_4], [], [], [], [], [], [], []
========================================================================================

Doc ID: Note:4036717.8


Subject: Bug 4036717 - Truncate table in exception handler can cause
OERI:kcbzwb_4
Type: PATCH
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 25-FEB-2005
Last Revision Date: 09-MAR-2005


Bug 4036717 Truncate table in exception handler can cause OERI:kcbzwb_4


This note gives a brief overview of bug 4036717.

Affects:
Product (Component) PL/SQL (Plsql)
Range of versions believed to be affected Versions < 10.2
Versions confirmed as being affected 10.1.0.3

Platforms affected Generic (all / most platforms affected)

Fixed:
This issue is fixed in 9.2.0.7 (Server Patch Set)
10.1.0.4 (Server Patch Set)
10g Release 2 (future version)

Symptoms:
Internal Error May Occur (ORA-600)
ORA-600 [kcbzwb_4]

Related To:
PL/SQL
Truncate

Description
Truncate table in exception handler can cause OERI:kcbzwb_4
with the fix for bug 3768052 installed.

Workaround:
Turn off or deinstall the fix for bug 3768052.
Note that the procedure containing the affected transactional commands
will have to be recompiled after backing out the bug fix.


19.56 ORA-00600: internal error code, arguments: [kcbgtcr_6], [], [], [], [], [], [], []
=========================================================================================

Doc ID: Note:248874.1


Subject: ORA-600 [kcbgtcr_6]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 18-SEP-2003
Last Revision Date: 25-MAR-2004

<Internal_Only>

This note contains information that has not yet been reviewed by DDR.

As such, the contents are not necessarily accurate and care should be
taken when dealing with customers who have encountered this error.
Thanks. PAA Internals Group

</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [kcbgtcr_6]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [kcbgtcr_6] [a]

VERSIONS:
versions 8.0 to 10.1

DESCRIPTION:

Two buffers have been found in the buffer cache that are both current
and for the same DBA (Data Block Address).

We should not have two 'current' buffers for the same DBA in the cache,
if this is the case then this error is raised.

ARGUMENTS:
Arg [a] Buffer class

Note that for Oracle release 9.2 and earlier there are no additional
arguments reported with this error.

FUNCTIONALITY:
Kernel Cache Buffer management

IMPACT:
PROCESS FAILURE
POSSIBLE INSTANCE FAILURE
NON CORRUPTIVE - No underlying data corruption.

SUGGESTIONS:

Retry the operation.

Does the error still occur after an instance bounce?

If using 64bit AIX then ensure that minimum version in use is 9.2.0.3
or patch for Bug 2652771 has been applied.

If the Known Issues section below does not help in terms of identifying
a solution, please submit the trace files and alert.log to Oracle
Support Services for further analysis.

Known Issues:
Bug 2652771 Shared data structures corrupted around latch code on 64bit
AIX ports.
Fixed 9.2.0.3
backports available for older versions (8.1.7) from Metalink.
<Internal_Only>

ORA-600 [kcbgtcr_6]
Versions: 8.0.5 - 10.1 Source: kcb.c

Meaning:

We have two 'CURRENT' buffers for the same DBA.

Argument Description:

None

---------------------------------------------------------------------------
Explanation:

We have identified two 'CURRENT' buffers for the same DBA in the cache,
this is incorrect, and this error will be raised.

---------------------------------------------------------------------------
Diagnosis:

Check the trace file, this will show the buffers i.e :-

BH (0x70000003ffe9800) file#: 39 rdba: 0x09c131e6 (39/78310) class 1 ba: 0x70000003fcf0000
  set: 6 dbwrid: 0 obj: 11450 objn: 11450
  hash: [70000000efa9b00,70000004d53a870] lru: [70000000efa9b68,700000006fb8d68]
  ckptq: [NULL] fileq: [NULL]
  st: XCURRENT md: NULL rsop: 0x0 tch: 1
  LRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [255] RRBA: [0x0.0.0]

BH (0x70000000efa9b00) file#: 39 rdba: 0x09c131e6 (39/78310) class 1 ba: 0x70000000e4f6000
  set: 6 dbwrid: 0 obj: 11450 objn: 11450
  hash: [70000004d53a870,70000003ffe9800] lru: [700000012fbaf68,70000003ffe9868]
  ckptq: [NULL] fileq: [NULL]
  st: XCURRENT md: NULL rsop: 0x0 tch: 2
  LRBA: [0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [255] RRBA: [0x0.0.0]

Here it is clear that we have two current buffers for the dba.

Most likely cause for this is 64bit AIX Bug 2652771.

If this isn't the case, check whether the error reproduces consistently after
bouncing the instance.

Via SQL*Plus? What level of concurrency is needed to reproduce? Is a testcase
available?

Check OS memory for errors.
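
A hedged way to look for the same symptom from SQL, rather than from the trace
file, is to check v$bh for more than one current copy of the same block (the
case of the status value may vary by version):

SQL> SELECT file#, block#, count(*)
     FROM   v$bh
     WHERE  lower(status) = 'xcur'
     GROUP BY file#, block#
     HAVING count(*) > 1;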

---------------------------------------------------------------------------
Known Bugs:

Bug 2652771 Shared data structures corrupted around latch code on 64bit
AIX ports.
- Fixed 9.2.0.3, backports available for older versions.

19.57 ORA-00600: internal error code, arguments: [1100], [0x7000002FDF83F40], [0x7000002FDF83F40], [], [], [], [], []
=====================================================================================================================

Doc ID: Note:138123.1


Subject: ORA-600 [1100]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 28-MAR-2001
Last Revision Date: 08-FEB-2005

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [1100]", what
it means and possible actions. The information here is only applicable
to the versions listed and is provided only for guidance.

ERROR:
ORA-600 [1100] [a] [b] [c] [d] [e]

VERSIONS:
versions 6.0 to 9.2

DESCRIPTION:

This error relates to the management of standard double-linked (forward
and backward) lists.

Generally, if the list is damaged, an attempt to repair the links is
performed.

Additional information will accompany this internal error. A dump of the
link and often a core dump will coincide with this error.

This is a problem with a linked list structure in memory.

FUNCTIONALITY:
GENERIC LINKED LISTS

IMPACT:
PROCESS FAILURE
POSSIBLE INSTANCE FAILURE IF DETECTED BY PMON PROCESS
No underlying data corruption.

SUGGESTIONS:

Known Issues:

Bug# 3724548 See Note 3724548.8
OERI[kglhdunp2_2] / OERI[1100] under high load
Fixed: 9.2.0.6, 10.1.0.4, 10.2

Bug# 3691672 + See Note 3691672.8
OERI[17067] / OERI[26599] / dump (kgllkdl) from JavaVM / OERI:1100 from PMON
Fixed: 10.1.0.4, 10.2

Bug# 2652771 P See Note 2652771.8
AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption
Fixed: 9.2.0.3

Bug# 1951929 See Note 1951929.8
ORA-7445 in KQRGCU/kqrpfr/kqrpre possible
Fixed: 8.1.7.3, 9.0.1.2, 9.2.0.1

Bug# 959593 See Note 959593.8
CTRL-C During a truncate crashes the instance
Fixed: 8.1.6.3, 8.1.7.0

<Internal_Only>

INTERNAL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS

No internal information at the present time.


</Internal_Only>

Note 2:
-------

Doc ID: Note:3724548.8


Subject: Bug 3724548 - OERI[kglhdunp2_2] / OERI[1100] under high load
Type: PATCH
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 24-SEP-2004
Last Revision Date: 13-JAN-2005


Bug 3724548 OERI[kglhdunp2_2] / OERI[1100] under high load

This note gives a brief overview of bug 3724548.

Affects:
Product (Component) Oracle Server (Rdbms)
Range of versions believed to be affected Versions < 10.2
Versions confirmed as being affected 9.2.0.4
9.2.0.5

Platforms affected Generic (all / most platforms affected)

Fixed:
This issue is fixed in 9.2.0.6 (Server Patch Set)
10.1.0.4 (Server Patch Set)
10g Release 2 (future version)

Symptoms:
Memory Corruption
Internal Error May Occur (ORA-600)
ORA-600 [kglhdunp2_2]
ORA-600 [1100]

Related To:
(None Specified)

Description
When an instance is under high load it is possible for sessions to get
ORA-600[KGLHDUNP2_2] and ORA-600 [1100] errors. This can also show
as a corrupt linked list in the SGA.


19.58 Compilation problems DBI DBD:
===================================

We upgraded Oracle from 8.1.6 to 9.2.0.5 and I tried to rebuild the
DBD::Oracle module but it threw errors like:

.
gcc: unrecognized option `-q64'
ld: 0711-736 ERROR: Input file /lib/crt0_64.o:
XCOFF64 object files are not allowed in 32-bit mode.
collect2: ld returned 8 exit status
make: 1254-004 The error code from the last command is 1.
Stop.

After some digging I found out that this is because the machine is AIX 5.2
running in 32-bit mode and it is looking at Oracle's lib directory, which
has 64-bit libraries. So after running "perl Makefile.PL", I edited the
Makefile:
1. changing the references to Oracle's ../lib to ../lib32,
2. changing crt0_64.o to crt0_r.o,
3. removing the -q32 and/or -q64 options from the list of libraries to link
with.
Now when I ran "make" it went smoothly, so did make test and make install.
I ran my own simple perl testfile which connects to the Oracle and gets
some info and it works fine.
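
The same Makefile edits can be scripted; a hedged sketch (the substitution
patterns are assumptions based on the steps above, so verify them against
your generated Makefile before use):

perl Makefile.PL
# 1) point at lib32 instead of lib, 2) crt0_64.o -> crt0_r.o, 3) drop -q32/-q64
perl -pi -e 's{/9\.2\.0/lib\b}{/9.2.0/lib32}g;
             s{crt0_64\.o}{crt0_r.o}g;
             s{ -q(32|64)\b}{ }g' Makefile
make && make test && make install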

Now I have an application which can be customised to call perl scripts and
when I call this test script from that application it fails with:

install_driver(Oracle) failed: Can't load
'/usr/local/perl/lib/site_perl/5.8.5/aix/auto/DBD/Oracle/Oracle.so' for module
DBD::Oracle: 0509-022 Cannot load module
/usr/local/perl/lib/site_perl/5.8.5/aix/auto/DBD/Oracle/Oracle.so.
0509-150 Dependent module /u00/oracle/product/9.2.0/lib/libclntsh.a(shr.o)
could not be loaded.
0509-103 The module has an invalid magic number.
0509-022 Cannot load module /u00/oracle/product/9.2.0/lib/libclntsh.a.
0509-150 Dependent module /u00/oracle/product/9.2.0/lib/libclntsh.a
could not be loaded. at /usr/local/perl/lib/5.8.5/aix/DynaLoader.pm line 230.
at (eval 3) line 3
Compilation failed in require at (eval 3) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /opt/dscmdevc/src/udps/test_oracle_dbd.pl line 45

What's happening here is that the application sets its own LIBPATH to
include oracle's lib (instead of lib32) at the beginning, and that makes
perl look in the wrong place for the file libclntsh.a. Unfortunately it
will take too long for the application developers to change this in their
application and I am looking for a quick solution. The test script is
something like:

use Env;
use strict;
use lib qw( /opt/harvest/common/perl/lib ) ;
#use lib qw( $ORACLE_HOME/lib32 ) ;
use DBI;
my $connect_string="dbi:Oracle:";
my $datasource="d1ach2";
$ENV{'LIBPATH'} = "${ORACLE_HOME}/lib32:$ENV{'LIBPATH'}" ;
.
.
my $dbh = DBI->connect($connect_string, $dbuser, $dbpwd)
or die "Can't connect to $datasource: $DBI::errstr";
.
.

Adding 'use lib' or using '$ENV{LIBPATH}' to change the LIBPATH is not
working, because I need to make this work in this perl script and the "use
DBI" is run (or whatever the term is) in the compile phase, before the
LIBPATH is set in the run phase.

I have a workaround for it: write a wrapper ksh script which exports the
LIBPATH and then calls the perl script. That works fine, but I was
wondering if there is a way to set the libpath or do something else inside
the current perl script so that it knows where to look for the right
library files in spite of the wrong LIBPATH?

Or did I miss something when I changed the Makefile and did not install
everything right? Is there any way I can check this? (the make install did
not throw any errors)

Any help or thoughts on this would be much appreciated.

Thanks!
Rachana.
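
The wrapper mentioned above could be as simple as this hedged sketch (paths
taken from the error output; adjust them for your installation):

#!/bin/ksh
# Put the 32-bit Oracle client libraries first, so that the 32-bit perl
# loading DBD::Oracle resolves libclntsh.a from lib32 instead of lib.
export ORACLE_HOME=/u00/oracle/product/9.2.0
export LIBPATH=$ORACLE_HOME/lib32:$LIBPATH
exec perl /opt/dscmdevc/src/udps/test_oracle_dbd.pl "$@"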

note 12:
--------

P550:/ # find . -name "libclnt*" -print
./apps/oracle/product/9.2/lib/libclntst9.a
./apps/oracle/product/9.2/lib/libclntsh.a
./apps/oracle/product/9.2/lib32/libclntst9.a
./apps/oracle/product/9.2/lib32/libclntsh.a
./apps/oracle/oui/bin/aix/libclntsh.so.9.0
P550:/ #

19.59 Listener problem: IBM/AIX RISC System/6000 Error: 13: Permission denied
-----------------------------------------------------------------------------

When starting the listener:

LSNRCTL> start listener

TNS-12546: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00516: Permission denied
IBM/AIX RISC System/6000 Error: 13: Permission denied

Note 1:

'TNS-12531: TNS:cannot allocate memory' may be misleading; it seems to be a
permission problem (see also IBM/AIX RISC System/6000 Error: 13: Permission
denied). A possible reason is: Oracle (more specifically the listener) is
unable to read /etc/hosts because of permission problems,
so host name resolution is not possible.

..
..
The problem really was in the permissions of /etc/hosts on node2. It was
-rw-r----- (640). Now it is -rw-rw-r-- (664) and everything works ok.
Thank you!
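
A hedged check and fix (as root; 644 is the usual world-readable setting for
/etc/hosts):

# ls -l /etc/hosts          <-- -rw-r----- would not be readable by oracle
# chmod 644 /etc/hosts
# su - oracle -c "lsnrctl start"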

BUGS WITH REGARDS TO PRO*COBOL ON 9i:


19.60 Listener problem: IBM/AIX RISC System/6000 Error: 79: Connection refused
-------------------------------------------------------------------------------

d0planon@zb121l01:/data/oracle/d0planon/admin/home/$ lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production on 12-OCT-2007 08:29:14

Copyright (c) 1991, 2006, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> status
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
IBM/AIX RISC System/6000 Error: 79: Connection refused

Answer 1:

Check if the oracle user can read /etc/hosts

Answer 2:

Maybe there are multiple listeners configured, so try the following:

LSNRCTL> status <listener_name>

You might then get a correct response.

19.61: 64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX
--------------------------------------------------------------------------------


Bug No. 2859282


Filed 19-MAR-2003 Updated 01-NOV-2003
Product Precompilers Product Version 9.2.0.3
Platform AIX5L Based Systems (64-bit) Platform Version 5.*
Database Version 9.2.0.3 Affects Platforms Port-Specific
Severity Severe Loss of Service Status Closed, Duplicate Bug
Base Bug 2440385 Fixed in Product Version No Data

Problem statement:

64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX
*** 03/19/03 10:13 am ***
2889686.996
.
=========================
PROBLEM:
.
1. Clear description of the problem encountered:
.
cst. has upgraded from 9.2.0.2 to 9.2.0.3 on a AIX 5L 64-Bit Box and is not
seeing the 64-bit Procob executable. Actually the same problem existed when
upgraded from 9.2.0.1 to 9.2.0.2, but the one-off patch has been provided in
the Bug#2440385 to resolve the issue. As per the Bug, problem has been fixed
in 9.2.0.3. But My Cst. is facing the same problem on 9.2.0.3 also.
.
This is what the Cst. says
============================
This is the original bug # 2440385. The fix provides 64 bit versions of
Pro*Cobol.There are two versions of the patch for the bug: one is for the
9.2.0.1 RDBMS and the other is for 9.2.0.2. So the last time I hit this
issue, I applied the 9.2.0.2 RDBMS patch to the 9.2.0.1 install. The 9.2.0.2
patch also experienced the relinking problem on rtsora just like the 9.2.0.1
install did. I ignored the error to complete the patch application. Then I
used the patch for the 2440385 bug to get 64 bit procob/rtsora executables
(the patch actually provides executables rather than performing a successful
relinking) to get the Pro*Cobol 1.8.77 precompiler to work with the
MicroFocus Server Express 2.0.11 (64 bit) without encountering "bad magic
number" error.
.
For the current install that I am performing, I haven't downloaded the Oracle
9.2.0.3 Pro*Cobol capability fix either, so the rtsora relinking fails as
well. Thus I don't have a working Pro*Cobol precompiler to allow me to
generate our Cobol programs against the database.
.
2. Pertinent configuration information (MTS/OPS/distributed/etc)
.
3. Indication of the frequency and predictability of the problem
.
4. Sequence of events leading to the problem
.
5. Technical impact on the customer. Include persistent after effects.
.
=========================
DIAGNOSTIC ANALYSIS:
.
One-off patch should be provided on top of 9.2.0.3 as provided on top of
9.2.0.2/9.2.0.1
.
=========================
WORKAROUND:
.
.
=========================
RELATED BUGS:
.
2440385
.
=========================
REPRODUCIBILITY:
.
1. State if the problem is reproducible; indicate where and predictability
.
2. List the versions in which the problem has reproduced
.
9.2.0.3
.
3. List any versions in which the problem has not reproduced

Further notes on PRO*COBOL:


===========================

Note 1:
=======
9201,9202,9203,9204,9205
32 bit cobol: procob32 or procob18_32.
64 bit cobol: procob or procob18

PATCHES:

1. Patch 2663624: (Cobol patch for 9202 AIX 5L)
-----------------------------------------------

PSE FOR BUG2440385 ON 9.2.0.2 FOR AIX5L PORT 212
Patchset Exception: 2663624 / Base Bug 2440385
#-------------------------------------------------------------------------
#
# DATE: November 26, 2002
# -----------------------
# Platform Patch for : AIX Based Systems (Oracle 64bit) for 5L
# Product Version # : 9.2.0.2
# Product Patched : RDBMS
#
# Bugs Fixed by this patch:
# -------------------------
# 2440385 : PLEASE PROVIDE THE PATCH FOR SUPPORTING 64BIT PRO*COBOL
#
# Patch Installation Instructions:
# --------------------------------
# To apply the patch, unzip the PSE container file;
#
# % unzip p2440385_9202_AIX64-5L.zip
#
# Set your current directory to the directory where the patch
# is located:
#
# % cd 2663624
#
# Ensure that the directory containing the opatch script appears in
# your $PATH; then enter the following command:
#
# % opatch apply

2. Patch 2440385:
-----------------

Results for Platform : AIX5L Based Systems (64-bit)

Patch    Description                                      Release  Updated      Size
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.3  27-APR-2003  34M
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.2  26-NOV-2002  17M
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.1  01-OCT-2002  17M

3. Patch 3501955 9205:
----------------------

Also includes 2440385. Provide the patch for supporting 64-bit Pro*COBOL.

Note 2:
=======

Problem precompiling Cobol program under Oracle 9i......

Hi, we recently upgraded to 9i. However, we still have 32 bit Cobol, so we're
using the procob18_32 precompiler
to compile our programs. Some of my compiles have worked successfully. However,
I'm receiving the follow error
in one of my compiles:

1834 183400 01 IB0-STATUS PIC 9. 7SA 350
1834 ...................................^
PCC-S-0018: Expected "PICTURE clause", but found "9" at line 1834 in file

What's strange is that if I compile the program against the same DB using procob
instead of procob18_32,
it compiles cleanly. I noticed in my compile that failed using procob18_32, it had
the following message:

System default option values taken from: /u01/app/oracle/product/9.2.0.4/precomp/admin/pcccob.cfg

Yet, when I used procob, it had this message:

System default option values taken from: /u01/app/oracle/product/9.2.0.4/precomp/admin/pcbcfg.cfg

..
..

Hi, I started using procob32 instead of procob18_32, and that resolved my problem.

Thanks for any help you may have already started to provide.

Note 3:
=======

Doc ID: Note:257934.1 Content Type: TEXT/X-HTML
Subject: Pro*COBOL Application Fails in Runtime When Using Customized old Make
Files With Signal 11 (MF Error 114) Creation Date: 20-NOV-2003
Type: PROBLEM Last Revision Date: 04-APR-2005
Status: MODERATED
The information in this article applies to:
Precompilers - Version: 9.2.0.4
This problem can occur on any platform.
Symptoms
After upgrading from Oracle server and Pro*COBOL 9.2.0.3.0 to 9.2.0.4.0,
applications are failing with cobol runtime error 114 when using 32-bit builds.
The platform is AIX 4.3.3, which does not support 64-bit builds with Micro
Focus Server Express 2.0.11.

Execution error : file 'sample1'
error code: 114, pc=0, call=1, seg=0
114 Attempt to access item beyond bounds of memory (Signal 11)
Changes
Upgraded from 9.2.0.3.0 to 9.2.0.4.0.
Cause
The customized old make files for building 32-bit applications invoked the 64-bit
precompilers procob or procob18 instead of procob32 or procob18_32.
Fix
Use the Oracle-supplied make templates, or change the customized old make
files used for 32-bit application builds. The templates
$ORACLE_HOME/precomp/demo/procob2/demo_procob_32.mk,
$ORACLE_HOME/precomp/demo/procob/demo_procob_32.mk and
$ORACLE_HOME/precomp/demo/procob/demo_procob18_32.mk
invoke the wrong precompiler.

To fix the problem add the following to
$ORACLE_HOME/precomp/demo/procob2/demo_procob_32.mk:

PROCOB=procob32

Using $ORACLE_HOME/precomp/demo/procob/demo_procob_32.mk:

PROCOB_32=procob32

Using $ORACLE_HOME/precomp/demo/procob/demo_procob18_32.mk

PROCOB18_32=procob18_32

The change can be added to the bottom of the make file.
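
A quick hedged sanity check that the 32-bit precompiler is now the one being
invoked (running it without arguments just prints a banner and its options):

$ whence procob32     # should resolve to $ORACLE_HOME/bin/procob32
$ procob32            # banner should read "Pro*COBOL: Release 9.2.0.x ..."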


References
Bug 3220095 - Procobol App Fails 114 Attempt To Access Item Beyond Bounds Of
Memory (Signal 11)

Note 4:
=======



Thread Status: Closed
From: Jean-Daniel DUMAS 23-Nov-04 16:39
Subject: PROCOB18_32 Problem at execution ORA-00933

PROCOB18_32 Problem at execution ORA-00933

We are trying to migrate from Oracle 8.1.7.4 to Oracle 9.2.0.5.
We've got problems with a lot of procobol programs using host table variables
in PL/SQL blocks like:

EXEC SQL EXECUTE


BEGIN
FOR nIndice IN 1..:WI-NB-APPELS-TFO009S LOOP
UPDATE tmp_edition_erreur
SET mon_nb_dec = :WTI-S2-MON-NB-DEC (nIndice)
WHERE mon_cod = :WTC-S2-MON-COD (nIndice)
AND run_id = :WC-O-RUN-ID;
END LOOP;
END;
END-EXEC

At execution, we've got "ORA-00933 SQL command not properly ended".
The problem seems to appear only if the host table variable is used inside a
SELECT, UPDATE or DELETE command.
For the INSERT VALUES command, it seems that we've got no problem.

A workaround consists of assigning the host table variables to Oracle table
variables, and replacing the host table variables inside the SQL command with
those Oracle table variables.
But, as we've got a lot of programs like this, we don't enjoy doing this.
Does somebody have another idea?

jddumas@eram.fr

From: Oracle, Amit Joshi 05-Jan-05 06:26


Subject: Re : PROCOB18_32 Problem at execution ORA-00933

Hi

Please refer to bug 3802067 on Metalink.

From the details provided , it seems you are hitting the same.

Best Regards
Amit Joshi

Note 5:
=======

Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Wayne (and Panos)

Apologies if you're aware of some of this already, but I just wanted to
clarify the steps involved in creating and executing a Pro*COBOL application
with Micro Focus Server Express on UNIX.

When installing Pro*COBOL on UNIX (as part of the main Oracle installation),
you need to have your COBOL environment set up, in order for the installer to
relink a COBOL RTS containing the Oracle support libraries
(rtsora/rtsora32/rtsora64).

The 64-bit edition of Oracle 9i on AIX 5.x creates rtsora -- the 64-bit
version of the run-time -- and rtsora32 -- the 32-bit version of the
run-time.

It's imperative that you use the correct edition of Server Express, i.e.
32-bit or 64-bit -- note well, that these are separate products on this
platform -- for the mode in which you wish to use Oracle. In addition, you
need to ensure that LIBPATH is set to point to the correct Oracle 'lib'
directory -- $ORACLE_HOME/lib32 for 32-bit, or $ORACLE_HOME/lib for 64-bit

If you wish to recreate those executables, say if you've updated your COBOL
environment since installing Oracle, then from looking at the makefiles --
ins_precomp.mk and env_precomp.mk -- then the effective commands to use to
re-link the run-time correctly are as follows (logged in under your Oracle
user ID) :

either mode:
<set up COBDIR, ORACLE_HOME, ORACLE_BASE, ORACLE_SID as appropriate for your
installation>
export PATH=$COBDIR/bin:$ORACLE_HOME/bin:$PATH

32-bit :
export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib32:$LIBPATH
cd $ORACLE_HOME/precomp/lib
make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32 rtsora32

64-bit:
export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH
cd $ORACLE_HOME/precomp/lib
make -f ins_precomp.mk rtsora

Regarding precompiling your application, Oracle provides two versions of
Pro*COBOL. Again, you need to use the correct one depending on whether
you're creating a 32-bit or 64-bit application, as the precompiler will
generate different code.

If invoking Pro*COBOL directly, you need to use :

32-bit : procob32 / procob18_32 , e.g.
procob32 myapp.pco
cob -it myapp.cob
rtsora32 myapp.int

or
64-bit : procob / procob18 , e.g.
procob myapp.pco
cob -it myapp.cob
rtsora myapp.int

If you're using Server Express 2.2 SP1 or later, you can also compile using
the Cobsql preprocessor, which will invoke the correct version of Pro*COBOL
under the covers, allowing for a single precompile-compile step, e.g.
cob -ik myapp.pco -C "p(cobsql) csqlt==oracle8 endp"

This method also aids debugging, as you will see the original source code
while animating, rather than the output from the precompiler. See the Server
Express Database Access manual. Prior to SX 2.2 SP1, Cobsql only supported
the creation of 32-bit applications.

I hope this helps -- if you're still having problems, please let me know.

Regards,
SimonT.

Re: Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Simon (and anyone else)

Thanks for that. We still seem to be getting a very unusual error with our
compiles in our makes.

A bit of background: we are "upgrading" from Oracle8i, SAS6, Solaris, MF
COBOL 4.5 to AIX 5L, Oracle9i, SAS8 and MF Server Express COBOL.

When we attempt to compile our COBOL it works fine. However if the COBOL has
embedded Oracle SQL our procomp makes try to access ADA. We do not use ADA.
I thought this must have been included by accident; but can find no flag or
install option for it. So can you give us any clues as to why we are
suffering an ADA plague :-))

Wayne

Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Wayne.

On the surface, it appears as if you're not picking up the correct Pro*COBOL
binary.

If you invoke 'procob' from the command line, you should see something along
the lines of :

Pro*COBOL: Release 9.2.0.4.0 - Production on Mon Apr 19 13:38:07 2004

followed by a list of Pro*COBOL options.

Do you see this, or do you see a different banner (say, Pro*ADA, or
Pro*Fortran)? Assuming you see something other than a Pro*COBOL banner, then
if you invoke 'whence procob', does it show procob as being picked up from
your Oracle bin directory (/home/oracle/9.2.0/bin/procob in my case)?

If you're either not seeing the correct Pro*COBOL banner, or it's not
located in the correct directory, I'd suggest rebuilding the procob and
procob32 binaries. Logged in under your Oracle user ID, with the Oracle
environment set up :

cd $ORACLE_HOME/precomp/lib
make -f ins_precomp.mk procob32 procob
and then try your compilation process again.

Regards,
SimonT.

Re: Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2
Hi Simon

Firstly, thanks for all your help, it was greatly appreciated.

We have the solution to our problem:

The problem is resolved by modifying the line in the job from:

make -f $SRC_DIR/procob.mk COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob CNTLGET.cob" EXE=$SRC_DIR/PFEM025A

to

make -f $SRC_DIR/procob.mk build COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob CNTLGET.cob" EXE=$SRC_DIR/PFEM025A

It appears this (build keyword) is not a requirement for the job to run on
Solaris but is for AIX.

All is working fine.

Cheers

Wayne

Note 6:
=======

Doc ID: Note:2440385.8 Content Type: TEXT/X-HTML


Subject: Support Description of Bug 2440385 Creation Date: 08-AUG-2003
Type: PATCH Last Revision Date: 15-AUG-2003
Status: PUBLISHED
Bug 2440385 AIX: Support for 64 bit ProCobol
This note gives a brief overview of bug 2440385.
Affects:

Product (Component) Precompilers (Pro*COBOL)
Range of versions believed to be affected Versions >= 7 but < 10G
Versions confirmed as being affected 9.2.0.3
Platforms affected Aix 64bit 5L
Fixed:
This issue is fixed in 9.2.0.4 (Server Patch Set)

Symptoms:
(None Specified)

Related To:
Pro* Precompiler

Description
Add support for 64 bit ProCobol

Note 7:
=======


Thread Status: Closed

From: Cathy Agada 18-Sep-03 21:40


Subject: How do I relink rtsora for 64 bit processing

How do I relink rtsora for 64 bit processing

I have the following error while relinking "rtsora" on AIX 5L/64bit platform on
oracle 9.2.0.3
(I believe my patch is up-to-date). Our Micro Focus compiler version is 2.0.11

$> make -f ins_precomp.mk relink EXENAME=rtsora
/bin/make -f ins_precomp.mk LIBDIR=lib32
EXE=/app/oracle/product/9.2.0/precomp/lib/rtsora rtsora32
Linking /app/oracle/product/9.2.0/precomp/lib/rtsora
cob64: bad magic number: /app/oracle/product/9.2.0/precomp/lib32/cobsqlintf.o
make: 1254-004 The error code from the last command is 1.
Stop.
make: 1254-004 The error code from the last command is 2.

My environment variables are as follows:

COBDIR=/usr/lpp/cobol
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/app/oracle/product/9.2.0/network/lib
SHLIB_PATH=$ORACLE_HOME/lib64:/app/oracle/product/9.2.0/lib32

I added 'define=bit64' on precomp config file.

Any ideas on what could be wrong. Thanks.

From: Oracle, Amit Chitnis 19-Sep-03 05:26


Subject: Re : How do I relink rtsora for 64 bit processing

Cathy,

Support for 64 bit Pro*Cobol 9.2.0.3 on AIX 5.1 was provided through a
one-off patch for bug 2440385.

You will need to download and apply the patch for bug 2440385.

==OR==

You can dowload and apply the latest 9.2.0.4 patchset where the bug is fixed.
Thanks,
Amit Chitnis.

Note 8:
=======

Doc ID: Note:215279.1 Content Type: TEXT/X-HTML


Subject: Building Pro*COBOL Programs Fails With "cob64: bad magic number:"
Creation Date: 08-APR-2003
Type: PROBLEM Last Revision Date: 15-APR-2003
Status: PUBLISHED

fact: Pro*COBOL 9.2.0.2

fact: Pro*COBOL 9.2.0.1

fact: AIX-Based Systems (64-bit)

symptom: Building Pro*COBOL programs fails

symptom: cob64: bad magic number: %s

symptom: /oracle/product/9.2.0/precomp/lib32/cobsqlintf.o

cause: Bug 2440385 AIX: Support for 64 bit ProCobol

fix:

This is fixed in Pro*COBOL 9.2.0.3


One-Off patch for Pro*COBOL 9.2.0.2 has been provided in Metalink Patch Number
2440385

Reference:

How to Download a Patch from Oracle

Note 9:
=======

If you wish to recreate those executables, say if you've updated your COBOL
environment since installing Oracle, then from looking at the makefiles --
ins_precomp.mk and env_precomp.mk -- then the effective commands to use to
re-link the run-time correctly are as follows (logged in under your Oracle
user ID) :

either mode:
<set up COBDIR, ORACLE_HOME, ORACLE_BASE, ORACLE_SID as appropriate for your
installation>
export PATH=$COBDIR/bin:$ORACLE_HOME/bin:$PATH

32-bit :
export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib32:$LIBPATH
cd $ORACLE_HOME/precomp/lib
make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32 rtsora32

64-bit:
export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH
cd $ORACLE_HOME/precomp/lib
make -f ins_precomp.mk rtsora

Note 10:
========

On 9.2.0.5, try to get the pro cobol patch for 9203. Then just copy the procobol
files
to the cobol directory.

19.62: ORA-12170:
=================

Connection Timeout.

Doc ID: Note:274303.1 Content Type: TEXT/X-HTML


Subject: Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT
Creation Date: 26-MAY-2004
Type: BULLETIN Last Revision Date: 10-FEB-2005
Status: MODERATED


PURPOSE
-------

To specify the time, in seconds, for a client to connect with the database server
and provide the necessary authentication information.

Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT
-------------------------------------------------------
This parameter was introduced in the 9i version.
It has to be configured in the sqlnet.ora file.

Use the SQLNET.INBOUND_CONNECT_TIMEOUT parameter to specify the time,
in seconds, for a client to connect with the database server
and provide the necessary authentication information.

If the client fails to establish a connection and complete authentication
in the time specified, then the database server terminates the connection.
In addition, the database server logs the IP address of the client
and an ORA-12170: TNS:Connect timeout occurred error message to the sqlnet.log
file. The client receives either an ORA-12547: TNS:lost contact or
an ORA-12637: Packet receive failed error message.

Without this parameter, a client connection to the database server can stay
open indefinitely without authentication. Connections without authentication
can introduce possible denial-of-service attacks, whereby malicious clients
attempt to flood database servers with connect requests that consume
resources.

To protect both the database server and the listener,
Oracle Corporation recommends setting this parameter in combination with the
INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora file.
When specifying values for these parameters,
consider the following recommendations:

* Set both parameters to an initial low value.
* Set the value of the INBOUND_CONNECT_TIMEOUT_listener_name parameter to a
  lower value than the SQLNET.INBOUND_CONNECT_TIMEOUT parameter.

For example, you can set INBOUND_CONNECT_TIMEOUT_listener_name to 2 seconds
and the INBOUND_CONNECT_TIMEOUT parameter to 3 seconds.
If clients are unable to complete connections within the specified time
due to system or network delays that are normal for the particular
environment, then increment the time as needed.

By default it is set to None.

Example
SQLNET.INBOUND_CONNECT_TIMEOUT=3
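
Putting the recommendation above together, a minimal sketch for a listener
named LISTENER:

# sqlnet.ora on the database server:
SQLNET.INBOUND_CONNECT_TIMEOUT = 3

# listener.ora:
INBOUND_CONNECT_TIMEOUT_LISTENER = 2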

RELATED DOCUMENTS
-----------------

Oracle9i Net Services Reference Guide, Release 2 (9.2), Part Number A96581-02

SQLNET.EXPIRE_TIME:
-------------------

Purpose:
Determines time interval to send a probe to verify the session is alive

See Also: Oracle Advanced Security Administrator's Guide

Default:
None

Minimum Value:
0 minutes

Recommended Value:
10 minutes

Example:
sqlnet.expire_time=10

sqlnet.expire_time
Enables dead connection detection; that is, after the specified time (in
minutes) the server checks if the client is still connected.
If not, the server process exits. This parameter must be set on the server.

PROBLEM:
Long query (20 minutes) returns ORA-01013 after about a minute.

SOLUTION:
The SQLNET.ORA parameter SQLNET.EXPIRE_TIME was set to a one(1).
The parameter was changed to...
SQLNET.EXPIRE_TIME=2147483647
This allowed the query to complete.
This is documented in the Oracle Troubleshooting manual on page 324.
The manual part number is A54757.01.

Keywords:

SQLNET.EXPIRE_TIME,SQLNET.ORA,ORA-01013

sqlnet.expire_time should be set on the server. The server sends keep alive
traffic over connections
that have already been established. You won't need to change your firewall.

sqlnet.expire_time is actually intended to test connections, in order to allow
oracle to clean up resources from connections that abnormally terminated.

The architecture to do that means that the server will send a probe packet to
the client. That probe packet is viewed by most firewalls as traffic on the
line, which in effect resets the idle timers on the firewall.
If you happen to get disconnects from idle timers then it may help.
It was not intended as a feature for that, but it is a byproduct of the design.

19.63: Tracing SQLNET:
======================

Note 1:
-------

Doc ID: Note:219968.1


Subject: SQL*Net, Net8, Oracle Net Services - Tracing and Logging at a Glance
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 20-NOV-2002
Last Revision Date: 26-AUG-2003

TITLE
-----
SQL*Net, Net8, Oracle Net Services - Tracing and Logging at a Glance.

PURPOSE
-------

The purpose of Oracle Net tracing and logging is to provide detailed
information to track and diagnose Oracle Net problems such as connectivity
issues, abnormal disconnection and connection delay. Tracing provides varying
degrees of information that describe connection-specific internal operations
during Oracle Net usage. Logging reports summary, status and error messages.

Oracle Net Services is the replacement name for the Oracle Networking product
formerly known as SQL*Net (Oracle7 [v2.x]) and Net8 (Oracle8/8i [v8.0/8.1]).
For consistency, the term Oracle Net is used thoughout this article and refers
to all Oracle Net product versions.

SCOPE & APPLICATION
-------------------

The aim of this document is to overview SQL*Net, Net8, Oracle Net Services
tracing and logging facilities. The intended audience includes novice Oracle
users and DBAs alike. Although only basic information on how to enable and
disable tracing and logging features is described, the document also serves
as a quick reference. The document provides the reader with the minimum
information necessary to generate trace and log files with a view to
forwarding them to Oracle Support Services (OSS) for further diagnosis. The
article does not intend to describe trace/log file contents or explain how to
interpret them.

LOG & TRACE PARAMETER OVERVIEW
------------------------------

The following is an overview of Oracle Net trace and log parameters.

TRACE_LEVEL_[CLIENT|SERVER|LISTENER] = [0-16|USER|ADMIN|SUPPORT|OFF]
TRACE_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>
TRACE_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY>
TRACE_UNIQUE_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]
TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE] #Oracle8i+
TRACE_FILELEN_[CLIENT|SERVER|LISTENER] = <SIZE in KB> #Oracle8i+
TRACE_FILENO_[CLIENT|SERVER|LISTENER] = <NUMBER> #Oracle8i+

LOG_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>
LOG_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY NAME>
LOGGING_LISTENER = [ON|OFF]

TNSPING.TRACE_LEVEL = [0-16|USER|ADMIN|SUPPORT|OFF]
TNSPING.TRACE_DIRECTORY = <DIRECTORY>

NAMES.TRACE_LEVEL = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMES.TRACE_FILE = <FILE NAME>
NAMES.TRACE_DIRECTORY = <DIRECTORY>
NAMES.TRACE_UNIQUE = [ON|OFF]
NAMES.LOG_FILE = <FILE NAME>
NAMES.LOG_DIRECTORY = <DIRECTORY>
NAMES.LOG_UNIQUE = [ON|OFF]

NAMESCTL.TRACE_LEVEL = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMESCTL.TRACE_FILE = <FILE NAME>
NAMESCTL.TRACE_DIRECTORY = <DIRECTORY>
NAMESCTL.TRACE_UNIQUE = [ON|OFF]

Note: With the exception of parameters suffixed with LISTENER, all other
parameter suffixes and prefixes [CLIENT|NAMES|NAMESCTL|SERVER|TNSPING]
are fixed and cannot be changed. For parameters suffixed with LISTENER,
the suffix name should be the actual Listener name. For example, if
the Listener name is PROD_LSNR, an example trace parameter name would
be TRACE_LEVEL_PROD_LSNR=OFF.

CONFIGURATION FILES
-------------------

Files required to enable Oracle Net tracing and logging features include:

Oracle Net Listener        LISTENER.ORA                 LISTENER.TRC
Oracle Net - Client        SQLNET.ORA on client         SQLNET.TRC
Oracle Net - Server        SQLNET.ORA on server         SQLNET.TRC
TNSPING Utility            SQLNET.ORA on client/server  TNSPING.TRC
Oracle Name Server         NAMES.ORA                    NAMES.TRC
Oracle NAMESCTL            SQLNET.ORA on server
Oracle Connection Manager  CMAN.ORA

CONSIDERATIONS WHEN USING LOGGING/TRACING
-----------------------------------------

1. Verify which Oracle Net configuration files are in use.
   By default, Oracle Net configuration files are sought and resolved from
   the following locations:

   TNS_ADMIN environment variable (incl. Windows Registry Key)
   /etc or /var/opt/oracle (Unix)
   $ORACLE_HOME/network/admin (Unix)
   %ORACLE_HOME%/Network/Admin or %ORACLE_HOME%/Net80/Admin (Windows)

   Note: User-specific Oracle Net parameters may also reside in the
   $HOME/sqlnet.ora file.
   An Oracle Net server installation is also a client.

2. Oracle Net tracing and logging can consume vast quantities of disk space.
Monitor for sufficient disk space when tracing is enabled.
On some Unix operating systems, /tmp is used for swap space.
Although generally writable by all users, this is not an ideal location for
trace/log file generation.

3. Oracle Net tracing should only be enabled for the duration of the issue at
hand. Oracle Net tracing should always be disabled after problem resolution.

4. Large trace/log files place an overhead on the processes that generate them.
In the absence of issues, the disabling of tracing and/or logging will
improve Oracle Net overall efficiency.
   Alternatively, regularly truncating log files will also improve efficiency.

5. Ensure that the target trace/log directory is writable by the connecting
   user, Oracle software owner and/or user that starts the Net Listener.

LOG & TRACE PARAMETERS
----------------------

This section provides a detailed description of each trace and log parameter.

TRACE LEVELS

TRACE_LEVEL_[CLIENT|SERVER|LISTENER] = [0-16|USER|ADMIN|SUPPORT|OFF]
Determines the degree to which Oracle Net tracing is provided.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Level 0 is disabled - level 16 is the most verbose tracing level.
Listener tracing requires the Net Listener to be reloaded or restarted
after adding trace parameters to LISTENER.ORA.
Oracle Net (client/server) tracing takes immediate effect after tracing
parameters are added to SQLNET.ORA.
By default, the trace level is OFF.

OFF (equivalent to 0) disabled - provides no tracing.
USER (equivalent to 4) traces to identify user-induced error conditions.
ADMIN (equivalent to 6) traces to identify installation-specific problems.
SUPPORT (equivalent to 16) trace information required by OSS for
troubleshooting.

TRACE FILE NAME

TRACE_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>
Determines the trace file name.
Any valid operating system file name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Trace file is automatically appended with '.TRC'.
Default trace file name is SQLNET.TRC, LISTENER.TRC.

TRACE DIRECTORY

TRACE_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY>
Determines the directory in which trace files are written.
Any valid operating system directory name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Directory should be writable by the connecting user and/or Oracle software
owner.
Default trace directory is $ORACLE_HOME/network/trace.

UNIQUE TRACE FILES

TRACE_UNIQUE_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]
Allows generation of unique trace files per connection.
Trace file names are automatically appended with '_<PID>.TRC'.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Unique tracing is ideal for sporadic issues/errors that occur infrequently
or randomly.
Default value is OFF

TRACE TIMING
TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE]
A timestamp in the form of [DD-MON-YY 24HH:MI;SS] is recorded against each
operation traced by the trace file.
Configuration file is SQLNET.ORA, LISTENER.ORA
Suitable for hanging or slow connection issues.
Available from Oracle8i onwards.
Default value is is OFF.

MAXIMUM TRACE FILE LENGTH

TRACE_FILELEN_[CLIENT|SERVER|LISTENER] = <SIZE>
Determines the maximum trace file size in Kilobytes (Kb).
Configuration file is SQLNET.ORA, LISTENER.ORA.
Available from Oracle8i onwards.
Default value is UNLIMITED.

TRACE FILE CYCLING

TRACE_FILENO_[CLIENT|SERVER|LISTENER] = <NUMBER>
Determines the maximum number of trace files through which to perform
cyclic tracing.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Suitable when disk space is limited or when tracing is required to be
enabled for long periods.
Available from Oracle8i onwards.
Default value is 1 (file).

LOG FILE NAME

LOG_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME>
Determines the log file name.
May be any valid operating system file name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Log file is automatically appended with '.LOG'.
Default log file name is SQLNET.LOG, LISTENER.LOG.

LOG DIRECTORY

LOG_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY NAME>
Determines the directory in which log files are written.
Any valid operating system directory name.
Configuration file is SQLNET.ORA, LISTENER.ORA.
Directory should be writable by the connecting user or Oracle software
owner.
Default directory is $ORACLE_HOME/network/log.

DISABLING LOGGING

LOGGING_LISTENER = [ON|OFF]
Disables Listener logging facility.
Configuration file is LISTENER.ORA.
Default value is ON.

ORACLE NET TRACE/LOG EXAMPLES
-----------------------------

CLIENT (SQLNET.ORA)
trace_level_client = 16
trace_file_client = cli
trace_directory_client = /u01/app/oracle/product/9.0.1/network/trace
trace_unique_client = on
trace_timestamp_client = on
trace_filelen_client = 100
trace_fileno_client = 2
log_file_client = cli
log_directory_client = /u01/app/oracle/product/9.0.1/network/log
tnsping.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
tnsping.trace_level = admin

SERVER (SQLNET.ORA)

trace_level_server = 16
trace_file_server = svr
trace_directory_server = /u01/app/oracle/product/9.0.1/network/trace
trace_unique_server = on
trace_timestamp_server = on
trace_filelen_server = 100
trace_fileno_server = 2
log_file_server = svr
log_directory_server = /u01/app/oracle/product/9.0.1/network/log

namesctl.trace_level = 16
namesctl.trace_file = namesctl
namesctl.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
namesctl.trace_unique = on

LISTENER (LISTENER.ORA)

trace_level_listener = 16
trace_file_listener = listener
trace_directory_listener = /u01/app/oracle/product/9.0.1/network/trace
trace_timestamp_listener = on
trace_filelen_listener = 100
trace_fileno_listener = 2
logging_listener = off
log_directory_listener = /u01/app/oracle/product/9.0.1/network/log
log_file_listener=listener

NAMESERVER TRACE (NAMES.ORA)

names.trace_level = 16
names.trace_file = names
names.trace_directory = /u01/app/oracle/product/9.0.1/network/trace
names.trace_unique = off

CONNECTION MANAGER TRACE (CMAN.ORA)

tracing = yes

RELATED DOCUMENTS
-----------------
Note 16658.1 (7) Tracing SQL*Net/Net8
Note 111916.1 SQLNET.ORA Logging and Tracing Parameters
Note 39774.1 Log & Trace Facilities on Net v2
Note 73988.1 How to Get Cyclic SQL*Net Trace Files when Disk Space is Limited
Note 1011114.6 SQL*Net V2 Tracing
Note 1030488.6 Net8 Tracing

Note 2:
-------

Doc ID: Note:39774.1


Subject: LOG & TRACE Facilities on NET v2.
Type: FAQ
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 25-JUL-1996
Last Revision Date: 31-JAN-2002

LOG AND TRACE FACILITIES ON SQL*NET V2
======================================

This article describes the log and trace facilities that can be used to
examine application connections that use SQL*Net. This article is based on
usage of SQL*NET v2.3. It explains how to invoke the trace facility and how
to use the log and trace information to diagnose and resolve operating problems.
Following topics are covered below:

o What the log facility is

o What the trace facility is

o How to invoke the trace facility

o Logging and tracing parameters

o Sample log output

o Sample trace output

Note: Information in this section is generic to all operating system
environments. You may require further information from the Oracle
operating system-specific documentation for some details of your specific
operating environment.

________________________________________

1. What is the Log Facility?
============================

All errors encountered in SQL*Net are logged to a log file for evaluation by a
network or database administrator. The log file provides additional information
for an administrator when the error on the screen is inadequate to understand
the failure. The log file, by way of the error stack, shows the state of the
TNS software at various layers. The properties of the log file are:
o Error information is appended to the log file when an error occurs.

o Generally, a log file can only be replaced or erased by an administrator,
although client log files can be deleted by the user whose application
created them. (Note that in general it is bad practice to delete these
files while the program using them is still actively logging.)

o Logging of errors for the client, server, and listener cannot be
disabled. This is an essential feature that ensures all errors are
recorded.

o The Navigator and Connection Manager components of the MultiProtocol
Interchange may have logging turned on or off. If on, logging includes
connection statistics.

o The Names server may have logging turned on or off. If on, a Names
server's operational events are written to a specified logfile. You set
logging parameters using the Oracle Network Manager.

________________________________________

2. What is the Trace Facility?
==============================

The trace facility allows a network or database administrator to obtain more
information on the internal operations of the components of a TNS network
than is provided in a log file. Tracing an operation produces a detailed
sequence of statements that describe the events as they are executed. All
trace output is directed to trace output files which can be evaluated after
the failure to identify the events that lead up to an error. The trace
facility is typically invoked during the occurrence of an abnormal
condition, when the log file does not provide a clear indication of the
cause.

Attention: The trace facility uses a large amount of disk space and may have
a significant impact upon system performance. Therefore, you are
cautioned to turn the trace facility ON only as part of a diagnostic
procedure and to turn it OFF promptly when it is no longer necessary.

Components that can be traced using the trace facility are:

o Network listener
o SQL*Net version 2 components
- SQL*Net client
- SQL*Net server
o MultiProtocol Interchange components
- the Connection Manager and pumps
- the Navigator
o Oracle Names
- Names server
- Names Control Utility

The trace facility can be used to identify the following types of problems:
- Difficulties in establishing connections
- Abnormal termination of established connections
- Fatal errors occurring during the operation of TNS network
components

________________________________________

3. What is the Difference between Logging and Tracing?
======================================================

While logging provides the state of the TNS components at the time of an
error, tracing provides a description of all software events as they occur,
and therefore provides additional information about events prior to an
error. There are three levels of diagnostics, each providing more
information than the previous level. The three levels are:

1. The reported error from Oracle7 or tools; this is the single error that
is commonly returned to the user.

2. The log file containing the state of TNS at the time of the error. This
can often uncover low level errors in interaction with the underlying
protocols.

3. The trace file containing English statements describing what the TNS
software has done from the time the trace session was initiated until the
failure is recreated.

When an error occurs, a simple error message is displayed and a log file is
generated. Optionally, a trace file can be generated for more information.
(Remember, however, that using the trace facility has an impact on your
system performance.)

In the following example, the user failed to use Oracle Network Manager to
create a configuration file, and misspelled the word "PORT" as "POT" in the
connect descriptor. It is not important that you understand in detail the
contents of each of these results; this example is intended only to provide
a comparison.

Reported Error (On the screen in SQL*Forms):

ERROR: ORA-12533: Unable to open message file (SQL-02113)

Logged Error (In the log file, SQLNET.LOG):

****************************************************************
Fatal OSN connect error 12533, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)
(KEY=bad_port))(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521))))

VERSION INFORMATION:
TNS for SunOS: Version 2.0.14.0.0 - Developer's Release
Oracle Bequeath NT Protocol Adapter for SunOS: Version
2.0.14.0.0 - Developer's Release
Unix Domain Socket IPC NT Protocol Adaptor for SunOS: Version
2.0.14.0.0 - Developer's Release
TCP/IP NT Protocol Adapter for SunOS: Version 2.0.14.0.0 -
Developer's Release
Time: 07-MAY-93 17:38:50
Tracing to file: /home/ginger/trace_admin.trc
Tns error struct:
nr err code: 12206
TNS-12206: TNS:received a TNS error while doing navigation
ns main err code: 12533
TNS-12533: TNS:illegal ADDRESS parameters
ns secondary err code: 12560
nt main err code: 503
TNS-00503: Illegal ADDRESS parameters
nt secondary err code: 0
nt OS err code: 0

Example of Trace of Error
-------------------------

The trace file, SQLNET.TRC at the USER level, contains the
following information:

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_user.trc"
New trace level is 4
--- TRACE CONFIGURATION INFORMATION ENDS ---

--- PARAMETER SOURCE INFORMATION FOLLOWS ---
Attempted load of system pfile source
/private1/oracle/network/admin/sqlnet.ora
Parameter source was not loaded
Error stack follows:
NL-00405: cannot open parameter file

Attempted load of local pfile source /home/ginger/.sqlnet.ora
Parameter source loaded successfully

-> PARAMETER TABLE LOAD RESULTS FOLLOW <-
Some parameters may not have been loaded
See dump for parameters which loaded OK
-> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <-
TRACE_DIRECTORY_CLIENT = /private1/oracle
trace_level_client = USER
TRACE_FILE_CLIENT = trace_user
--- PARAMETER SOURCE INFORMATION ENDS ---

--- LOG CONFIGURATION INFORMATION FOLLOWS ---
Attempted open of log stream "/tmp_mnt/home/ginger/sqlnet.log"
Successful stream open
--- LOG CONFIGURATION INFORMATION ENDS ---

Unable to get data from navigation file tnsnav.ora
local names file is /home/ginger/.tnsnames.ora
system names file is /etc/tnsnames.ora
-<ERROR>- failure, error stack follows
-<ERROR>- NL-00427: bad list
-<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING

Calling address:
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging
er)))
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))(ADDRESS=(PROTOCOL=tcp
)(HOST
Getting local community information
Looking for local addresses setup by nrigla
No addresses in the preferred address list
TNSNAV.ORA is not present. No local communities entry.
Getting local address information
Address list being processed...
No community information so all addresses are "local"
Resolving address to use to call destination or next hop
Processing address list...
No community entries so iterate over address list
This a local community access
Got routable address information
Making call with following address information:
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port)))
Calling with outgoing connect data
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging
er)))
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521))))
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port)))
KEY = bad_port
connecting...
opening transport...
-<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=2, resnt[2]=0
-<ERROR>- unable to open transport
-<ERROR>- nsres: id=0, op=1, ns=12541, ns2=12560; nt[0]=511, nt[1]=2,
nt[2]=0
connect attempt failed
Call failed...
Call made to destination
Processing address list so continuing
Getting local community information
Looking for local addresses setup by nrigla
No addresses in the preferred address list
TNSNAV.ORA is not present. No local communities entry.
Getting local address information
Address list being processed...
No community information so all addresses are "local"
Resolving address to use to call destination or next hop
Processing address list...
No community entries so iterate over address list
This a local community access
Got routable address information
Making call with following address information:
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521)))
Calling with outgoing connect data
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging
er)))
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=521))))
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521)))

-<FATAL?>- failed to recognize: POT

-<ERROR>- nsres: id=0, op=13, ns=12533, ns2=12560; nt[0]=503, nt[1]=0,
nt[2]=0
Call failed...
Exiting NRICALL with following termination result -1
-<ERROR>- error from nricall
-<ERROR>- nr err code: 12206
-<ERROR>- ns main err code: 12533
-<ERROR>- ns (2) err code: 12560
-<ERROR>- nt main err code: 503
-<ERROR>- nt (2) err code: 0
-<ERROR>- nt OS err code: 0
-<ERROR>- Couldn't connect, returning 12533

In the trace file, note that unexpected events are preceded with an
-<ERROR>- stamp. These events may represent serious errors, minor errors, or
merely unexpected results from an internal operation. More serious and
probably fatal errors are stamped with the -<FATAL?>- prefix.

In this example trace file, you can see that the root problem, the
misspelling of "PORT," is indicated by the trace line: -<FATAL?>- failed to
recognize: POT

Most tracing is very similar to this. If you have a basic understanding of
the events the components will perform, you can identify the probable cause
of an error in the text of the trace.
________________________________________

4. Log File Names
=================

Log files produced by different components have unique names. The default
file names are:

SQLNET.LOG    Contains client and/or server information

LISTENER.LOG  Contains listener information

INTCHG.LOG    Contains Connection Manager and pump information

NAVGATR.LOG   Contains Navigator information

NAMES.LOG     Contains Names server information

You can control the name of the log file. For each component, any valid
string can be used to create a log file name. The parameters are of the
form:

LOG_FILE_component = string

For example:

LOG_FILE_LISTENER = TEST

Some platforms have restrictions on the properties of a file name. See your
Oracle operating system specific manuals for platform specific restrictions.
_____________________________________

5. Using Log Files
==================

Follow these steps to track an error using a log file:

1. Browse the log file for the most recent error that matches the error
number you have received from the application. This is almost always the
last entry in the log file. Notice that an entry or error stack in the log
file is usually many lines in length. In the example earlier in this
chapter, the error number was 12207.

2. Starting at the bottom, look up to the first non-zero entry in the error
report. This is usually the actual cause. In the example earlier in this
chapter, the last non-zero entry is the "ns" error 12560.

3. Look up the first non-zero entry in later chapters of this book for its
recommended cause and action. (For example, you would find the "ns" error
12560 under ORA-12560.) To understand the notation used in the error report,
see the previous chapter, "Interpreting Error Messages."

4. If that error does not provide the desired information, move up the error
stack to the second to last error and so on.

5. If the cause of the error is still not clear, turn on tracing and
re-execute the statement that produced the error message. The use of the
trace facility is described in detail later in this chapter. Be sure to turn
tracing off after you have re-executed the command.

________________________________________

6. Using the Trace Facility
===========================

The steps used to invoke tracing are outlined here. Each step is fully
described in subsequent sections.

1. Choose the component to be traced from the list:

o Client
o Server
o Listener
o Connection Manager and pump (cmanager)
o Navigator (navigator)
o Names server
o Names Control Utility

2. Save the existing trace file if you need to retain its information. By default,
most trace files will overwrite an existing one. The TRACE_UNIQUE parameter needs
to be included in the appropriate configuration files if unique trace files are
required; this appends the process id to each file name.
For Example:
For Names server tracing, NAMES.TRACE_UNIQUE=ON needs to be set in the
NAMES.ORA file. For the Names Control Utility, NAMESCTL.TRACE_UNIQUE=TRUE needs
to be in SQLNET.ORA, and TRACE_UNIQUE_CLIENT=ON in SQLNET.ORA for client
tracing.
3. For any component, you can invoke the trace facility by editing the
component configuration file that corresponds to the component traced. The
component config files are SQLNET.ORA, LISTENER.ORA, INTCHG.ORA, and NAMES.ORA.

4. Execute or start the component to be traced. If the trace component
configuration files are modified while the component is running, the
modified trace parameters will take effect the next time the component is
invoked or restarted. Specifically for each component:

CLIENT: Set the trace parameters in the client-side SQLNET.ORA and invoke
a client application, such as SQL*Plus, a Pro*C application, or
any application that uses the Oracle network products.

SERVER: Set the trace parameters in the server-side SQLNET.ORA. The next
process started by the listener will have tracing enabled. The
trace parameters must be created or edited manually.

LISTENER: Set the trace parameters in the LISTENER.ORA

CONNECTION MANAGER:
Set the trace parameters in INTCHG.ORA and start the Connection
Manager from the Interchange Control Utility or command line. The
pumps are started automatically with the Connection Manager, and
their trace files are controlled by the trace parameters for the
Connection Manager.

NAVIGATOR: Again, set the trace parameters in INTCHG.ORA and start the
          Navigator.

NAMES SERVER:
          Set the trace parameters in NAMES.ORA and start the Names
          server.

NAMES CONTROL UTILITY:
          Set the trace parameters in SQLNET.ORA and start the Names Control
          Utility.

5. Be sure to turn tracing off when you do not need it for a specific
diagnostic purpose.
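
As an illustration (my own sketch, not part of the original note; the directory
is an assumption), a minimal client-side SQLNET.ORA tracing block could look
like this:

  # client tracing on -- remember to set TRACE_LEVEL_CLIENT=OFF afterwards
  TRACE_LEVEL_CLIENT     = ADMIN
  TRACE_FILE_CLIENT      = client
  TRACE_DIRECTORY_CLIENT = /u01/app/oracle/network/trace
  TRACE_UNIQUE_CLIENT    = ON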

________________________________________

7. Setting Trace Parameters
===========================

The trace parameters are defined in the same configuration files as the log
parameters. Table below shows the configuration files for different network
components and the default names of the trace files they generate.
--------------------------------------------------------
| Trace Parameters | Configuration | |
| Corresponding to | File | Output Files |
|-------------------|-----------------|------------------|
| | | |
| Client | SQLNET.ORA | SQLNET.TRC |
| Server | | SQLNET.TRC |
| TNSPING Utility | | TNSPING.TRC |
| Names Control | | |
| Utility | | NAMESCTL.TRC |
|-------------------|-----------------|------------------|
| Listener | LISTENER.ORA | LISTENER.TRC |
|-------------------|-----------------|------------------|
| Interchange | INTCHG.ORA | |
| Connection | | |
| Manager | | CMG.TRC |
| Pumps | | PMP.TRC |
| Navigator | | NAV.TRC |
|-------------------|-----------------|------------------|
| Names server | NAMES.ORA | NAMES.TRC |
|___________________|_________________|__________________|

The configuration files for each component are located on the computer
running that component.

The trace characteristics for two or more components of an Interchange are
controlled by different parameters in the same configuration file. For
example, there are separate sets of parameters for the Connection Manager
and the Navigator that determine which components will be traced, and at
what level.

Similarly, if there are multiple listeners on a single computer, each
listener is controlled by parameters that include the unique listener name
in the LISTENER.ORA file.

For each component, the configuration files contain the following
information:

o A valid trace level to be used (default is OFF)
o The trace file name (optional)
o The trace file directory (optional)

________________________________________

7a. Valid SQLNET.ORA Diagnostic Parameters
==========================================

The SQLNET.ORA caters for:

o Client Logging & Tracing
o Server Logging & Tracing
o TNSPING utility
o NAMESCTL program

------------------------------------------------------------------------------
| | | |
| PARAMETERS | VALUES | Example (DOS client, UNIX server) |
| | | |
|------------------------|----------------|------------------------------------|
|Parameters for Client |
|===================== |
|------------------------------------------------------------------------------|
| | | |
| TRACE_LEVEL_CLIENT | OFF/USER/ADMIN | TRACE_LEVEL_CLIENT=USER |
| | | |
| TRACE_FILE_CLIENT | string | TRACE_FILE_CLIENT=CLIENT |
| | | |
| TRACE_DIRECTORY_CLIENT | valid directory| TRACE_DIRECTORY_CLIENT=c:\NET\ADMIN|
| | | |
| TRACE_UNIQUE_CLIENT | OFF/ON | TRACE_UNIQUE_CLIENT=ON |
| | | |
| LOG_FILE_CLIENT | string | LOG_FILE_CLIENT=CLIENT |
| | | |
| LOG_DIRECTORY_CLIENT | valid directory| LOG_DIRECTORY_CLIENT=c:\NET\ADMIN |
|------------------------------------------------------------------------------|
|Parameters for Server |
|===================== |
|------------------------------------------------------------------------------|
| | | |
| TRACE_LEVEL_SERVER | OFF/USER/ADMIN | TRACE_LEVEL_SERVER=ADMIN |
| | | |
| TRACE_FILE_SERVER | string | TRACE_FILE_SERVER=unixsrv_2345.trc |
| | | |
| TRACE_DIRECTORY_SERVER | valid directory| TRACE_DIRECTORY_SERVER=/tmp/trace |
| | | |
| LOG_FILE_SERVER | string | LOG_FILE_SERVER=unixsrv.log |
| | | |
| LOG_DIRECTORY_SERVER | valid directory| LOG_DIRECTORY_SERVER=/tmp/trace |
|------------------------------------------------------------------------------|

---(SQLNET.ORA Cont.)---------------------------------------------------------
| | | |
| PARAMETERS | VALUES | Example (DOS client, UNIX server) |
| | | |
|------------------------|----------------|------------------------------------|
|Parameters for TNSPING |
|====================== |
|------------------------------------------------------------------------------|
| | | |
| TNSPING.TRACE_LEVEL | OFF/USER/ADMIN | TNSPING.TRACE_LEVEL=user |
| | | |
| TNSPING.TRACE_DIRECTORY| directory |TNSPING.TRACE_DIRECTORY= |
| | | /oracle7/network/trace |
| | | |
|------------------------------------------------------------------------------|
|Parameters for Names Control Utility |
|==================================== |
|------------------------------------------------------------------------------|
| | | |
| NAMESCTL.TRACE_LEVEL | OFF/USER/ADMIN |NAMESCTL.TRACE_LEVEL=user |
| | | |
| NAMESCTL.TRACE_FILE | file |NAMESCTL.TRACE_FILE=nc_south.trc |
| | | |
| NAMESCTL.TRACE_DIRECTORY| directory |NAMESCTL.TRACE_DIRECTORY=/o7/net/trace|
| | | |
| NAMESCTL.TRACE_UNIQUE | TRUE/FALSE |NAMESCTL.TRACE_UNIQUE=TRUE or ON/OFF|
| | | |
------------------------------------------------------------------------------

Note: You control log and trace parameters for the client through Oracle
Network Manager. You control log and trace parameters for the server by
manually adding the desired parameters to the SQLNET.ORA file.

Parameters for the Names Control Utility & TNSPING Utility need to be added
manually to the SQLNET.ORA file. You cannot create them using Oracle Network
Manager.
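
As a usage reminder (my own example; the alias PROD is hypothetical), with
TNSPING.TRACE_LEVEL set in SQLNET.ORA you simply run the utility against a
tnsnames alias, optionally with a repeat count:

  tnsping PROD 3

The trace is then written as TNSPING.TRC in the configured trace directory.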

________________________________________

7b. Valid LISTENER.ORA Diagnostic Parameters
============================================

The following table shows the valid LISTENER.ORA parameters used in logging
and tracing of the listener.

 ------------------------------------------------------------------------------
| PARAMETERS               | VALUES          | Example (DOS client, UNIX server)  |
|--------------------------|-----------------|------------------------------------|
| TRACE_LEVEL_LISTENER     | OFF/USER/ADMIN  | TRACE_LEVEL_LISTENER=OFF           |
| TRACE_FILE_LISTENER      | string          | TRACE_FILE_LISTENER=LISTENER       |
| TRACE_DIRECTORY_LISTENER | valid directory | TRACE_DIRECTORY_LISTENER=$ORA_SQLNETV2 |
| LOG_FILE_LISTENER        | string          | LOG_FILE_LISTENER=LISTENER         |
| LOG_DIRECTORY_LISTENER   | valid directory | LOG_DIRECTORY_LISTENER=$ORA_ERRORS |
 ------------------------------------------------------------------------------

________________________________________

7c. Valid INTCHG.ORA Diagnostic Parameters
==========================================

The following table shows the valid INTCHG.ORA parameters used in logging
and tracing of the Interchange.
 ---------------------------------------------------------------------------------
| PARAMETERS                 | VALUES (default)     | Example (DOS client, UNIX server) |
|----------------------------|----------------------|-----------------------------------|
| TRACE_LEVEL_CMANAGER       | OFF/USER/ADMIN       | TRACE_LEVEL_CMANAGER=USER         |
| TRACE_FILE_CMANAGER        | string (CMG.TRC)     | TRACE_FILE_CMANAGER=CMANAGER      |
| TRACE_DIRECTORY_CMANAGER   | valid directory      | TRACE_DIRECTORY_CMANAGER=C:\ADMIN |
| LOG_FILE_CMANAGER          | string (INTCHG.LOG)  | LOG_FILE_CMANAGER=CMANAGER        |
| LOG_DIRECTORY_CMANAGER     | valid directory      | LOG_DIRECTORY_CMANAGER=C:\ADMIN   |
| LOGGING_CMANAGER           | OFF/ON               | LOGGING_CMANAGER=ON               |
| LOG_INTERVAL_CMANAGER      | no. of minutes (60)  | LOG_INTERVAL_CMANAGER=60          |
| TRACE_LEVEL_NAVIGATOR      | OFF/USER/ADMIN       | TRACE_LEVEL_NAVIGATOR=ADMIN       |
| TRACE_FILE_NAVIGATOR       | string (NAV.TRC)     | TRACE_FILE_NAVIGATOR=NAVIGATOR    |
| TRACE_DIRECTORY_NAVIGATOR  | valid directory      | TRACE_DIRECTORY_NAVIGATOR=C:\ADMIN|
| LOG_FILE_NAVIGATOR         | string (NAVGATR.LOG) | LOG_FILE_NAVIGATOR=NAVIGATOR      |
| LOG_DIRECTORY_NAVIGATOR    | valid directory      | LOG_DIRECTORY_NAVIGATOR=C:\ADMIN  |
| LOGGING_NAVIGATOR          | OFF/ON               | LOGGING_NAVIGATOR=OFF             |
| LOG_LEVEL_NAVIGATOR        | ERRORS/ALL (ERRORS)  | LOG_LEVEL_NAVIGATOR=ERRORS        |
 ---------------------------------------------------------------------------------

Note: The pump component shares the trace parameters of the Connection
Manager, but it generates a separate trace file with the unchangeable
default name PMPpid.TRC.

________________________________________

7d. Valid NAMES.ORA Diagnostic Parameters
=========================================

The following table shows the valid NAMES.ORA parameters used in logging and
tracing of the Names server.

------------------------------------------------------------------------------
| | | |
| PARAMETERS | VALUES | Example (DOS client, UNIX server) |
| | (default)| |
|------------------------|----------------|------------------------------------|
| | | |
| NAMES.TRACE_LEVEL | OFF/USER/ADMIN | NAMES.TRACE_LEVEL=ADMIN |
| | | |
| NAMES.TRACE_FILE | file(names.trc)| NAMES.TRACE_FILE=nsrv3.trc |
| | | |
| NAMES.TRACE_DIRECTORY | directory | NAMES.TRACE_DIRECTORY=/o7/net/trace|
| | | |
| NAMES.TRACE_UNIQUE | TRUE/FALSE | NAMES.TRACE_UNIQUE=TRUE or ON/OFF |
| | | |
| NAMES.LOG_FILE | file(names.log)| NAMES.LOG_FILE=nsrv1.log |
| | | |
| NAMES.LOG_DIRECTORY | directory | NAMES.LOG_DIRECTORY= /o7/net/log |
| | | |
 ------------------------------------------------------------------------------

________________________________________

8. Example of a Trace File
==========================
In the following example, the SQLNET.ORA file includes the following line:

TRACE_LEVEL_CLIENT = ADMIN

The following trace file is the result of a connection attempt that failed
because the hostname is invalid.
The trace output is a combination of debugging aids for Oracle specialists
and English information for network administrators. Several key events can
be seen by analyzing this output from beginning to end:

(A) The client describes the outgoing data in the connect
    descriptor used to contact the server.

(B) An event is received (connection request).

(C) A connection is established over the available transport
    (in this case TCP/IP).

(D) The connection is refused by the application, which is the
    listener.

(E) The trace file shows the problem, as follows:

-<FATAL?>- ***hostname lookup failure! ***

(F) Error 12545 is reported back to the client.

If you look up Error 12545 in Chapter 3 of this Manual, you will find the
following description:

ORA-12545 TNS:Name lookup failure

Cause: A protocol specific ADDRESS parameter cannot be resolved.

Action: Ensure the ADDRESS parameters have been entered correctly;
the most likely incorrect value is the node name.

++++++ NOTE: TRACE FILE EXTRACT +++++++

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_admin.trc"
New trace level is 6
--- TRACE CONFIGURATION INFORMATION ENDS ---

++++++ NOTE: Loading Parameter files now. +++++++

--- PARAMETER SOURCE INFORMATION FOLLOWS ---
Attempted load of system pfile source
/private1/oracle/network/admin/sqlnet.ora
Parameter source was not loaded
Error stack follows:
NL-00405: cannot open parameter file

Attempted load of local pfile source /home/ginger/.sqlnet.ora
Parameter source loaded successfully

-> PARAMETER TABLE LOAD RESULTS FOLLOW <-
Some parameters may not have been loaded
See dump for parameters which loaded OK
-> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <-
TRACE_DIRECTORY_CLIENT = /private1/oracle
trace_level_client = ADMIN
TRACE_FILE_CLIENT = trace_admin
--- PARAMETER SOURCE INFORMATION ENDS ---

++++++ NOTE: Reading Parameter files. +++++++

--- LOG CONFIGURATION INFORMATION FOLLOWS ---
Attempted open of log stream "/private1/oracle/sqlnet.log"
Successful stream open
--- LOG CONFIGURATION INFORMATION ENDS ---

Unable to get data from navigation file tnsnav.ora
local names file is /home/ginger/.tnsnames.ora
system names file is /etc/tnsnames.ora
initial retry timeout for all servers is 500 csecs
max request retries per server is 2
default zone is [root]
Using nncin2a() to build connect descriptor for (possibly remote)
database.
initial load of /home/ginger/.tnsnames.ora
-<ERROR>- failure, error stack follows
-<ERROR>- NL-00405: cannot open parameter file
-<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING

initial load of /etc/tnsnames.ora
-<ERROR>- failure, error stack follows
-<ERROR>- NL-00427: bad list
-<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING

Inserting IPC address into connect descriptor returned from nncin2a().

++++++ NOTE: Looking for Routing Information. +++++++

Calling address:
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc
(KEY=bad_host))(ADDRESS=(PROTOCOL=tcp)(HOST=lavender)
(PORT=1521))))
Getting local community information
Looking for local addresses setup by nrigla
No addresses in the preferred address list
TNSNAV.ORA is not present. No local communities entry.
Getting local address information
Address list being processed...
No community information so all addresses are "local"
Resolving address to use to call destination or next hop
Processing address list...
No community entries so iterate over address list
This a local community access
Got routable address information

++++++ NOTE: Calling first address (IPC). +++++++

Making call with following address information:
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host)))
Calling with outgoing connect data
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
(HOST=lavender)(PORT=1521))))
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host)))
KEY = bad_host
connecting...
opening transport...
-<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=2, resnt[2]=0
-<ERROR>- unable to open transport
-<ERROR>- nsres: id=0, op=1, ns=12541, ns2=12560; nt[0]=511, nt[1]=2,
nt[2]=0
connect attempt failed
Call failed...
Call made to destination
Processing address list so continuing

++++++ NOTE: Looking for Routing Information. +++++++

Getting local community information
Looking for local addresses setup by nrigla
No addresses in the preferred address list
TNSNAV.ORA is not present. No local communities entry.
Getting local address information
Address list being processed...
No community information so all addresses are "local"
Resolving address to use to call destination or next hop
Processing address list...
No community entries so iterate over address list
This a local community access
Got routable address information

++++++ NOTE: Calling second address (TCP/IP). +++++++

Making call with following address information:
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)
(HOST=lavender)(PORT=1521)))
Calling with outgoing connect data
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
(HOST=lavender) (PORT=1521))))
(DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)
(HOST=lavender)(PORT=1521)))
port resolved to 1521
looking up IP addr for host: lavender

-<FATAL?>- *** hostname lookup failure! ***

-<ERROR>- nsres: id=0, op=13, ns=12545, ns2=12560; nt[0]=515, nt[1]=0,
nt[2]=0
Call failed...
Exiting NRICALL with following termination result -1
-<ERROR>- error from nricall
-<ERROR>- nr err code: 12206
-<ERROR>- ns main err code: 12545
-<ERROR>- ns (2) err code: 12560
-<ERROR>- nt main err code: 515
-<ERROR>- nt (2) err code: 0
-<ERROR>- nt OS err code: 0
-<ERROR>- Couldn't connect, returning 12545

Most tracing is very similar to this. If you have a basic understanding of
the events the components will perform, you can identify the probable cause
of an error in the text of the trace.

19.64 ORA-01595: error freeing extent (2) of rollback segment (9)):
===================================================================

Note 1:

ORA-01595, 00000, "error freeing extent (%s) of rollback segment (%s))"
Cause: Some error occurred while freeing inactive rollback segment extents.
Action: Investigate the accompanying error.

Note 2:

Two factors are necessary for this to happen:

- A rollback segment has extended beyond OPTIMAL.
- There are two or more transactions sharing the rollback segment at the time
  of the shrink.

What happens is that the first process gets to the end of an extent, notices the
need to shrink, and begins the recursive transaction to do so. But the next
transaction blunders past the end of that extent before the recursive transaction
has been committed. The preferred solution is to have sufficient rollback segments
to eliminate the sharing of rollback segments between processes. Look in
V$RESOURCE_LIMIT for the high-water-mark of transactions; that is the number of
rollback segments you need. The alternative solution is to raise OPTIMAL to reduce
the risk of the error.
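
To find that high-water-mark, a query along these lines should work (a sketch;
standard V$RESOURCE_LIMIT columns):

SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name = 'transactions';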

Note 3:

This error is harmless. You can (and probably should) set OPTIMAL to null and
MAXEXTENTS to unlimited, which might minimize the frequency of these errors.

These errors sometimes happen when Oracle is shrinking the rollback segments back
to the OPTIMAL size. The undo data for the shrink is also kept in the rollback
segments, so when Oracle attempts to shrink the same rollback segment where it is
trying to write that undo, it throws this warning.

It is not a failure per se, since Oracle will retry and succeed.
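
For example, a sketch of those storage changes (rbs01 is a hypothetical rollback
segment name):

ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL NULL MAXEXTENTS UNLIMITED);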

19.65: OUI-10022: oraInventory cannot be used because it is in an invalid state
===============================================================================

Note 1:
-------

If there are other products installed through the OUI, create a copy of the
oraInst.loc file (depending on the UNIX system, possibly in /etc or
/var/opt/oracle).

Modify the inventory_loc parameter to point to a different location for
the OUI to create the oraInventory directory.

Run the installer using the -invPtrLoc parameter
(eg: runInstaller -invPtrLoc /PATH/oraInst.loc).

This will retain the existing oraInventory directory and create a new
one for use by the new product.
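
For instance (my own sketch; all paths and the group name are assumptions):

  # copy of /etc/oraInst.loc (or /var/opt/oracle/oraInst.loc), edited:
  inventory_loc=/u01/app/oraInventory_new
  inst_group=oinstall

  ./runInstaller -invPtrLoc /home/oracle/oraInst.loc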

19.66: Failure to extend rollback segment because of 30036 condition
====================================================================

Not a serious problem. Do some undo tuning.

19.67: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
===================================================================================

Note 1:

Hi,

I am having a strange problem with an ORA-06502 error I am getting and don't
understand why. I would expect this error to be quite easy to fix: it suggests
that a variable is not large enough to cope with a value being assigned to it.
But I'm fairly sure that isn't the problem. Anyway, I have a stored procedure
similar to the following:

PROCEDURE myproc(a_user IN  VARCHAR2,
                 p_1    OUT <my_table>.<my_first_column>%TYPE,
                 p_2    OUT <my_table>.<my_second_column>%TYPE)
IS
BEGIN
  SELECT my_first_column,
         my_second_column
  INTO   p_1,
         p_2
  FROM   my_table
  WHERE  user_id = a_user;
END;
/
The procedure is larger than this, but using error_position variables I have
tracked it down to one SQL statement. I don't understand why I'm getting the
ORA-06502, because the variables I am selecting into are defined as the same
types as the columns I'm selecting. The variable I am selecting into is in fact
a VARCHAR2(4), but if I replace the SQL statement with p_1 := 'AB'; it still
fails. It succeeds if I do p_1 := 'A';

Has anyone seen this before, or anything similar that might help me, please?

Thanks,

mtae.

-- Answer 1:

It is the code from which you are calling it that has the problem, e.g.

DECLARE
  v1 varchar2(1);  -- smaller than the VARCHAR2(4) column: ORA-06502 on assignment
  v2 varchar2(1);
BEGIN
  myproc('USER', v1, v2);
END;
/

-- Answer 2

try this:

PROCEDURE myproc(a_user IN  VARCHAR2,
                 p_1    OUT varchar2,
                 p_2    OUT varchar2)
IS
  v_1 <my_table>.<my_first_column>%TYPE;
  v_2 <my_table>.<my_second_column>%TYPE;
BEGIN
  SELECT my_first_column,
         my_second_column
  INTO   v_1,
         v_2
  FROM   my_table
  WHERE  user_id = a_user;
  p_1 := v_1;
  p_2 := v_2;
END;
/

Comment from mtae (07/28/2004):

It was the size of the variable that was being used as the actual parameter being
passed in. Feeling very silly, but thanks; sometimes you can look at a problem too long.
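
In other words, the fix belongs in the caller: size the actual parameters to
match the columns or, better, anchor them with %TYPE. A minimal sketch (my own,
reusing the hypothetical my_table from this thread):

DECLARE
  v1 my_table.my_first_column%TYPE;   -- always matches the column size
  v2 my_table.my_second_column%TYPE;
BEGIN
  myproc('USER', v1, v2);
END;
/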

19.68 ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
==========================================================================================================

thread:

I see this error every time I shut down a 10gR3 Grid Control database on a 10.2.0.3
RDBMS, even though all opmn and OMS processes are down. So far, I have not seen any
problems, apart from the annoying shutdown warning.

Note 365103.1 seems to indicate it can be ignored:

Cause
This is due to unpublished Bug 4483084 'ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE]'

This is a bug in that an ORA-600 error is reported when it is found that something
is still going
on during shutdown. It does not indicate any damage or a problem in the system.

Solution

At the time of writing, it is likely that the fix will be to report a more
meaningful external error, although this
has not been finalised.

The error is harmless so it is unlikely that this will be backported to 10.2.

The error can be safely ignored as it does not indicate a problem with the
database.

thread:

ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
14-DEC-06 05:15:35 GMT

Hi,

There is no patch available for the bug 4483084.

You need to ignore this error, as there is absolutely no impact to the database
due to this error.

Thanks,
Ram

19.69: ORA-12518 TNS: Listener could not hand off:
--------------------------------------------------

>>>> thread 1:

Q:

ORA-12518 TNS: Listener could not hand off client connection
Posted: May 31, 2007 2:02 AM

Dear experts,

Please tell me how I can resolve "ORA-12518: TNS:listener could not hand off
client connection".

A:

Your server is probably running out of memory and needs to swap memory to disk.
One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and
restart the listener:
DIRECT_HANDOFF_TTC_LISTENER=OFF

You might need to increase the value of large_pool_size.

Regards.

>>>> thread 2:

Q:

Hi All,

I'm using Oracle 10g on a Windows XP system. Java programmers will be
accessing the database. Frequently they get the "ORA-12518:
TNS:listener could not hand off" error, and through sqlplus I also
get this error. But after some time it works fine. I checked the
tnsnames.ora and listener.ora file entries; they seem to be ok. I have
used the system name itself for the HOST setting instead of the IP
address, but I'm still getting this error.

Can anybody tell me what might be the problem?

Thanks,

A:

From Oracle's error messages docco, we see

--------
TNS-12518 TNS:listener could not hand off client connection

Cause: The process of handing off a client connection to another process
failed.

Action: Turn on listener tracing and re-execute the operation. Verify
that the listener and database instance are properly configured for direct
handoff. If the problem persists, contact Oracle Support Services.
--------

So what does the listener trace indicate?

A:

Did you by any chance upgrade with SP2? If so, you could
be running into firewall problems - 1521 is open, the initial
contact made, but the handoff to a random (blocked!) port
fails...
--

Regards,
Frank van Bortel

>>>> thread 3:

Q:

I installed Oracle9i and Oracle8i on a Win2000 Server, using the 9i listener. My
database is based on Oracle8i. I get the error "ORA-12518: TNS:listener could not
hand off client connection" when I log on to the database. If I restart the
database and the listener it runs, but after a few minutes it fails again. Can
you help me?

A:

Are you using MTS?

First start the listener and then the database (both the databases).
Now check the status of listener.
if nothing works, try
DIRECT_HANDOFF_TTC_<listener name> = OFF in listener.ora.

>>>> thread 4

Q:

This weekend I installed Oracle Enterprise 10g release 2 on Windows 2003 server.
The server is a dual-processor Xeon, 2.5GHz each, with 3GB RAM and a 300GB hard
disk on RAID 1.

The installation was fine; I then installed our application on it, and that went
smoothly as well. I had 3 users logged in to test the installation, and everything
was ok.

This morning we had 100 users trying to log in; some got access, but the majority
got the ORA error above and had no access. I checked the tnsnames.ora file, the
sqlnet.ora file, and the service on the database; all looks ok.

I also restarted the listener service on the server, but I still get this error
message. I've also increased the number of sessions to 1000.

Has anyone ever come across an issue like this in Oracle 10g?
Regards

A:

I think I've resolved the problem. The majority of my users are away on Easter
break, so when they return I will know whether this tweak has paid off or not.

Basically my SGA settings were quite high, so 60% of RAM was being used by the SGA
and 40% by Windows. I reduced the total SGA to 800 MB and I've had no connection
problems ever since.

>>>> thread 5

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk.
One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and
restart the listener:
DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server connections, you might need to
increase the value of large_pool_size.
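
Summarizing the workarounds from these threads as concrete settings (a sketch;
the 64M value is an assumption, size it to your system):

  # listener.ora, then reload/restart the listener:
  DIRECT_HANDOFF_TTC_LISTENER = OFF

  -- for shared server (MTS) configurations, from SQL*Plus as SYSDBA:
  ALTER SYSTEM SET large_pool_size = 64M SCOPE=SPFILE;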

19.70: Private strand flush not complete:
-----------------------------------------

-- thread:

Q:

I just upgraded to Oracle 10g release 2 and I keep getting this error in my alert
log

Thread 1 cannot allocate new log, sequence 509
Private strand flush not complete
Current log# 2 seq# 508 mem# 0: /usr/local/o1_mf_2_2cx5wnw5_.log
Current log# 2 seq# 508 mem# 1: /usr/local/o1_mf_2_2cx5wrjk_.log

What causes the "private strand flush not complete" message?

A:

This is not a bug, it's the expected behavior in 10gr2. The "private strand flush
not complete" is a "noise" error,
and can be disregarded because it relates to internal cache redo file management.

Oracle Metalink note 372557.1 says that a "strand" is a new 10gr2 term for redo
latches. It notes that a strand is a new mechanism
to assign redo latches to multiple processes, and it's related to the
log_parallelism parameter. The note says that the number of
strands depends on the cpu_count.

When you switch redo logs you will see this alert log message since all private
strands have to be flushed to the current redo log.
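
Since the message appears at log switch time, it can still be worth checking how
big your online logs are and how often you switch; a quick look (a sketch using
the standard V$LOG view):

SELECT group#, thread#, sequence#, bytes/1024/1024 AS mb, status
FROM   v$log;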

-- thread:

Q:

HI,

I'm using the Oracle 10g R2 in a server with Red Hat ES 4.0, and i received the
following message in alert log
"Private strand flush not complete", somebody knows this error?

The part of log, where I found this error is:

Fri Feb 10 10:30:52 2006
Thread 1 advanced to log sequence 5415
Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log
Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log
Fri Feb 10 10:31:21 2006
Thread 1 cannot allocate new log, sequence 5416
Private strand flush not complete
Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log
Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log
Thread 1 advanced to log sequence 5416
Current log# 13 seq# 5416 mem# 0: /db/oradata/bioprd/redo131.log
Current log# 13 seq# 5416 mem# 1: /u02/oradata/bioprd/redo132.log

Thanks,

A:

Hi,

Note:372557.1 has brief explanation of this message.

Best Regards,

-- thread:

Q:

Hi,
I'm seeing this info in the alert log file... maybe some ideas or info...
Private strand flush not complete
What could this possibly mean?
Thu Feb 9 22:03:44 2006
Thread 1 cannot allocate new log, sequence 387
Private strand flush not complete
Current log# 2 seq# 386 mem# 0: /path/redo02.log
Thread 1 advanced to log sequence 387
Current log# 3 seq# 387 mem# 0: /path/redo03.log

Thanks

A:

see http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref4478

regards

log file switch (private strand flush incomplete)

User sessions trying to generate redo wait on this event when LGWR waits for DBWR
to complete flushing redo from IMU buffers into the log buffer; when DBWR is
complete, LGWR can then finish writing the current log and then switch log files.

Wait Time: 1 second

Parameters: None
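
To see how much time an instance has spent on this and related waits, something
like the following should do (a sketch; event names as in 10gR2):

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'log file switch%';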

Error message: Thread 1 cannot allocate new log
-----------------------------------------------

Note 1:
-------

Q:

Hi,
I am getting the error message "Thread 1 cannot allocate new log", sequence 40994.

Can anyone help me out with how to overcome this problem?
Regards

A:

Perhaps this will provide some guidance.

Rick

Sometimes you can see the following corresponding messages in your alert.log file:

Thread 1 advanced to log sequence 248
Current log# 2 seq# 248 mem# 0: /prod1/oradata/logs/redologs02.log
Thread 1 cannot allocate new log, sequence 249
Checkpoint not complete

This message indicates that Oracle wants to reuse a redo log file, but the
corresponding checkpoint is not yet complete. In this case, Oracle must wait
until the checkpoint finishes. This situation is encountered particularly when
transactional activity is heavy.

This situation may also be checked by tracing two statistics in the
BSTAT/ESTAT report.txt file. The two statistics are:

- Background checkpoints started.
- Background checkpoints completed.

These two statistics should differ by at most one. If this is not true, your
database hangs on checkpoints: LGWR is unable to continue writing the next
transactions until the checkpoints complete.

Three reasons may explain this difference:

- A checkpoint frequency which is too high.
- Checkpoints that are starting but not completing.
- A DBWR which writes too slowly.

The number of checkpoints completed and started as indicated by these statistics
should be weighed against the duration of the bstat/estat report. Keep in mind
the goal of only one log switch per hour, which ideally should equate to one
checkpoint per hour as well.

The way to resolve incomplete checkpoints is through tuning checkpoints and logs:

1) Give the checkpoint process more time to cycle through the logs
   - add more redo log groups (see the SQL sketch after this list)
   - increase the size of the redo logs
2) Reduce the frequency of checkpoints
   - increase LOG_CHECKPOINT_INTERVAL
   - increase the size of the online redo logs
3) Improve the efficiency of checkpoints by enabling the CKPT process
   with CHECKPOINT_PROCESS=TRUE
4) Set LOG_CHECKPOINT_TIMEOUT = 0. This disables checkpointing based on
   a time interval.
5) Another means of solving this error is for DBWR to quickly write the dirty
   buffers to disk. The parameter linked to this task is
   DB_BLOCK_CHECKPOINT_BATCH. It specifies the number of blocks which are
   dedicated inside the batch size for writing checkpoints. When you want to
   accelerate the checkpoints, it is necessary to increase this value.
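
A sketch of options 1) and 2) in SQL (the group number, file names, and the 200M
size are assumptions):

ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/prod/redo04a.log',
   '/u02/oradata/prod/redo04b.log') SIZE 200M;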

Note 2:
-------

Q:

Hi All,
Let's generate a good discussion thread for this database performance issue.
Sometimes this message is found in the alert log:
Thread 1 advanced to log sequence xxx
Current log# 2 seq# 248 mem# 0: /df/sdfds
Thread 1 cannot allocate new log, sequence xxx
Checkpoint not complete
I would appreciate a discussion on the following:
1. What are the basic reasons for this warning?
2. What are the preventive measures to be taken / methods to detect its
occurrence?
3. What are the post-occurrence measures/solutions for this?

Regards

A:

Increase the size of your redo logs.

A:

Amongst other reasons, this happens when redo logs are not sized properly.
A checkpoint could not be completed because a new log is trying to be
allocated while it is still in use (or hasn't been archived yet).
This can happen if you are running very long transactions that are
producing large amounts of redo (which you did not anticipate) and the redo
logs are too small to handle it.
If you are not archiving, increasing the size of your logfiles should help
(each log group should have at least 2 members on separate disks).
Also, be aware of what type of hardware you are using. Typically, raid-5 is
slower for writes than raid-1.
If you are archiving and have increased the size of the redo logs, also try
adding an additional arch process.

I have read plenty of conflicting documentation on how to resolve this
problem. One of the "solutions" is to increase the size of your log buffer. I
have not found this to be helpful (for my particular databases).

In the future, make sure to monitor the ratio of redo log entries to redo log
space requests (it should be around 5000 to 1). If it slips below this ratio,
you may want to consider adding additional members to your log groups and
increasing their size.
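
The two statistics behind that ratio can be checked with a query like this
(a sketch; these V$SYSSTAT statistic names exist in 8i through 10g):

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo entries', 'redo log space requests');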

A:

Configuring redo logs is an art, and you may never achieve a state where there
is never any waiting for available log files.

But in my opinion, the best bet for your situation is to add one (or more) redo
logs instead of increasing the size of the redo logs. Even if your redo logs are
huge, if your disk controller is slow, a large transaction (for example, data
loading) may use up all three redo logs before the first redo log completes the
archive and becomes available; Oracle will then halt until the archive is
completed.

19.71: tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10):
------------------------------------------------------------------------

-- thread 1:

Q:

tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x1)
tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x1)

Most reports speak of a harmless message.
Some reports refer to a bug affecting Oracle versions up to 10.2.0.2.

19.72 ORA-600 12333
-------------------

thread 1:

ORA-600 [12333] is reported with three additional numeric values when a request
is being received from a network packet and the request code in the packet is
not recognized. The three additional values report the invalid request values
received.

The error may have a number of different root causes. For example, a network
error may have caused bad data to be received, or the client application may
have sent wrong data, or the data in the network buffer may have been
overwritten. Since there are many potential causes of this error, it is
essential to have a reproducible testcase to correctly diagnose the underlying
cause. If operating system network logs are available, it is advisable to check
them for evidence of network failures which may indicate network transmission
problems.

thread 2:

We just found out that it was related to the block option "DML Returning Value"
in Forms 4.5. We set it to NO, and the problem was solved.

Thanks anyway

thread 3:

From: Oracle, Kalpana Malligere 05-Oct-99 22:09
Subject: Re: ORA-00600: internal error code, arguments: [12333], [0], [3], [81], [], [], []

Hello,

An ORA-600 12333 occurs because there has been a client/server
protocol violation. There can be many reasons for this: network errors, network
hardware problems, etc. Where do you see or when do you get this error? Do you
have any idea what was going on at the time of this error? Which process
received it, i.e., was it a background or user process? Were you running
sql*loader? Does this error have any adverse impact on the application or
database?

We cannot generally progress unless there is a reproducible test case or
reproducible environment. There are many bugs logged for this error
which are closed as 'could not reproduce'. In one such bug, the
developer indicated that "The problem does not normally have any bad
side effects." So suggest you try to isolate what is causing it as much as
possible. The error can be due to underlying network problems as well. It is
not indicative of a problem with the database itself.

19.73: SMON: Parallel transaction recovery tried:
-------------------------------------------------

Note 1:
-------

Q:

I was inserting 2,000,000 records into a table and the connection was killed.
In my alert file I found the following message: "SMON: Parallel transaction
recovery tried".

Here is the content of the SMON trace file:

Redo thread mounted by this instance: 1
Oracle process number: 6
Windows thread id: 2816, image: ORACLE.EXE

*** 2006-06-29 21:33:05.484
*** SESSION ID:(5.1) 2006-06-29 21:33:05.453
*** 2006-06-29 21:33:05.484
SMON: Restarting fast_start parallel rollback
*** 2006-06-30 02:50:54.695
SMON: Parallel transaction recovery tried

A:

Hi,

This is an expected message when cleanup is occurring and you have
fast_start_parallel_rollback set, to clean up rollback segments
after a failed transaction.

Note 2:
-------

You get this message if SMON failed to generate the slave servers necessary to
perform a parallel rollback of a transaction.
Check the value for the parameter, FAST_START_PARALLEL_ROLLBACK (default is LOW).
LOW limits the number of rollback processes to 2 * CPU_COUNT.
HIGH limits the number of rollback processes to 4 * CPU_COUNT.
You may want to set the value of this parameter to FALSE.
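
For example (a sketch; FALSE disables parallel rollback, and recovery progress
can be watched in V$FAST_START_TRANSACTIONS):

ALTER SYSTEM SET fast_start_parallel_rollback = FALSE;

SELECT usn, state, undoblocksdone, undoblockstotal
FROM   v$fast_start_transactions;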

Note 3:
-------

Q:

SMON: Parallel transaction recovery tried

We found the above message in the alert_SID.log file.

A:

No need to worry about it; it is an informational message.
SMON tried to start recovery in parallel, but failed and did it in serial mode.

Note 4:
-------

The system monitor process (SMON) performs recovery, if necessary, at instance
startup. SMON is also responsible
for cleaning up temporary segments that are no longer in use and for coalescing
contiguous free extents
within dictionary managed tablespaces. If any terminated transactions were skipped
during instance recovery
because of file-read or offline errors, SMON recovers them when the tablespace or
file is brought back online.
SMON checks regularly to see whether it is needed. Other processes can call SMON
if they detect a need for it.

With Real Application Clusters, the SMON process of one instance can perform
instance recovery for a failed CPU or instance.

19.74: KGX Atomic Operation:
============================

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /dbms/tdbaaccp/ora10g/home
System name: AIX
Node name: pl003
Release: 3
Version: 5
Machine: 00CB560D4C00
Instance name: accptrid
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 2547914, image: oracle@pl003 (TNS V1-V3)

*** 2008-03-20 07:22:28.571
*** SERVICE NAME:(SYS$USERS) 2008-03-20 07:22:28.570
*** SESSION ID:(161.698) 2008-03-20 07:22:28.570
KGX cleanup...
KGX Atomic Operation Log 700000036eb4350
Mutex 70000003f9adcf8(161, 0) idn 0 oper EXAM
Cursor Parent uid 161 efd 5 whr 26 slp 0
oper=DEFAULT pt1=700000039ce1c30 pt2=700000039ce1e18 pt3=700000039ce2338
pt4=0 u41=0 stt=0

Note 1:
-------

Q:

Hi there,
Oracle has started using mutexes, and it is said that they are more efficient
compared to latches. Questions:
1) What is a mutex? I know mutexes are mutual exclusions and they are a concept
from multithreading. What I want to know is how this concept is implemented in
the Oracle database.
2) How are they better than latches? Both are used for low-level locking, so
how is one better than the other?
Any input is welcome.
Thanks and regards,
Aman
A:

1) Simply put, mutexes are memory structures used to serialize access to shared
structures. IMHO they have two important characteristics. First, they can be
taken in shared or exclusive mode. Second, getting a mutex can be done in wait
or no-wait mode.

2) The main advantages over latches are that mutexes require less memory and
are faster to get and release.

A:

In Oracle, latches and mutexes are different things and managed using different
modules.
KSL* modules for latches and KGX* for mutexes.

As Chris said, general mutex operations require fewer CPU instructions than
latch operations (as they aren't as sophisticated as latches and don't maintain
get/miss counts as latches do).

But the main scalability benefit comes from that there's a mutex structure in each
child cursor handle and the mutex
itself acts as cursor pin structure. So if you have a cursor open (or cached in
session cursor cache) you don't need
to get the library cache latch (which was previously needed for changing cursor
pin status), but you can modify the cursor's
mutex refcount directly (with help of pointers in open cursor state area in
sessions UGA).

Therefore you have much higher scalability when pinning/unpinning cursors (no
library cache latching needed,
virtually no false contention) and no separate pin structures need to be
allocated/maintained.

Few notes:
1) library cache latching is still needed for parsing etc, the mutexes address
only the pinning issue in library cache
2) mutexes are currently used for library cache cursors (not other objects like
PL/SQL stored procs, table defs etc)
3) As mutexes are a generic mechanism (not library cache specific) they're used in
V$SQLSTATS underlying structures too
4) When mutexes are enabled, you won't see cursor pins from X$KGLPN anymore (as
X$KGLPN is a fixed table based on the KGL pin array
- which wouldn't be used for cursors anymore)
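
On 10.2 you can get a feel for mutex contention with a query like this (a
sketch; V$MUTEX_SLEEP exists from 10.2 onwards):

SELECT mutex_type, location, sleeps, wait_time
FROM   v$mutex_sleep
ORDER  BY sleeps DESC;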

19.75: ktsmgtur(): TUR was not tuned for 361 secs:
==================================================

[pl101][tdbaprod][/dbms/tdbaprod/prodrman/admin/dump/bdump] cat
prodrman_mmnl_1011950.trc
/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_mmnl_1011950.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /dbms/tdbaprod/ora10g/home
System name: AIX
Node name: pl101
Release: 3
Version: 5
Machine: 00CB85FF4C00
Instance name: prodrman
Redo thread mounted by this instance: 1
Oracle process number: 12
Unix process pid: 1011950, image: oracle@pl101 (MMNL)

*** 2008-03-25 06:58:08.841
*** SERVICE NAME:(SYS$BACKGROUND) 2008-03-25 06:58:08.811
*** SESSION ID:(105.1) 2008-03-25 06:58:08.811
ktsmgtur(): TUR was not tuned for 361 secs

What does this mean?

Note 1:
-------

TUR (Test Unit Ready) is a path checker; if a SAN connection is lost, TUR will complain.

19.76: tkcrrpa: (WARN) Failed initial attempt to send ARCH message:
===================================================================

> *** SERVICE NAME:() 2008-03-22 14:56:43.590
> *** SESSION ID:(221.1) 2008-03-22 14:56:43.590
> Maximum redo generation record size = 132096 bytes
> Maximum redo generation change vector size = 98708 bytes
> tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10)
> tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x10)

No good answer yet.

19.77: Weird errors 1:
======================

In a trace file of an Oracle 10.2.0.3 db on AIX 5.3 we can find:

>>>> DATABASE CALLED PRODTRID:

> OS pid = 3907726
> loadavg : 1.12 1.09 1.13
> swap info: free_mem = 49.16M rsv = 24.00M
> alloc = 2078.75M avail = 6144.00M swap_free = 4065.25M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME
TTY TIME CMD
> 240001 A tdbaprod 3907726 1 0 60 20 1cfff7400 90692
06:00:39 - 0:00 ora_m000_prodtrid
> open: Permission denied
> 3907726: ora_m000_prodtrid
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x7000000, 0x0, 0x0, 0x15ab3c, 0x28284288,
0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001002c8fb0 ksvrdp() + 0x550
> 0x00000001041c8c34 opirip(??, ??, ??) + 0x554
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 06:01:43.294

At other instances we find:

>>>> DATABASE CALLED PRODRMAN

06:01:41 - Check for changes since lastscan in file:


/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc

Warning: Errors detected in file


/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc

> OS pid = 3997922


> loadavg : 1.00 1.09 1.17
> swap info: free_mem = 62.76M rsv = 24.00M
> alloc = 2087.91M avail = 6144.00M swap_free = 4056.09M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME
TTY TIME CMD
> 240001 A tdbaprod 3997922 1 4 62 20 1322c8400 91516
05:43:28 - 0:00 ora_j000_prodrman
> open: Permission denied
> 3997922: ora_j000_prodrman
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15ab10, 0x1,
0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:46:23.170
05:46:20 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc

Warning: Errors detected in file


/dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc

> /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name: AIX
> Node name: pl101
> Release: 3
> Version: 5
> Machine: 00CB85FF4C00
> Instance name: prodrman
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 1003754, image: oracle@pl101 (CJQ0)
>
> *** 2008-04-01 05:46:17.709
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:44:28.394
> *** SESSION ID:(107.1) 2008-04-01 05:44:28.394
> Waited for process J000 to initialize for 60 seconds
> *** 2008-04-01 05:46:17.709
> Dumping diagnostic information for J000:

>>>> DATABASE CALLED ACCPROSS

06:01:26 - Check for changes since lastscan in file:


/dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc

Warning: Errors detected in file


/dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc

> /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaaccp/ora10g/home
> System name: AIX
> Node name: pl003
> Release: 3
> Version: 5
> Machine: 00CB560D4C00
> Instance name: accpross
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 1970272, image: oracle@pl003 (CJQ0)
>
> *** 2008-04-01 06:01:21.210
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 06:00:48.099
> *** SESSION ID:(217.1) 2008-04-01 06:00:48.099
> Waited for process J001 to initialize for 60 seconds
> *** 2008-04-01 06:01:21.210
> Dumping diagnostic information for J001:
> OS pid = 3645448
> loadavg : 1.28 1.18 1.16
> swap info: free_mem = 107.12M rsv = 24.00M
> alloc = 3749.61M avail = 6144.00M swap_free = 2394.39M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME
TTY TIME CMD
> 240001 A tdbaaccp 3645448 1 8 64 20 7566c510 91844
05:59:48 - 0:00 ora_j001_accpross
> open: Permission denied
> 3645448: ora_j001_accpross
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x16656c, 0x1,
0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 06:01:26.792

>>>> DATABASE CALLED PRODROSS

05:15:00 - Check for changes since lastscan in file:


/dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc

Warning: Errors detected in file


/dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc

> /dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name: AIX
> Node name: pl101
> Release: 3
> Version: 5
> Machine: 00CB85FF4C00
> Instance name: prodross
> Redo thread mounted by this instance: 1
> Oracle process number: 10
> Unix process pid: 2068516, image: oracle@pl101 (CJQ0)
>
> *** 2008-04-01 05:13:52.362
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:11:46.862
> *** SESSION ID:(217.1) 2008-04-01 05:11:46.861
> Waited for process J000 to initialize for 60 seconds
> *** 2008-04-01 05:13:52.362
> Dumping diagnostic information for J000:
> OS pid = 1855710
> loadavg : 1.08 1.15 1.20
> swap info: free_mem = 63.91M rsv = 24.00M
> alloc = 2110.61M avail = 6144.00M swap_free = 4033.39M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME
TTY TIME CMD
> 240001 A tdbaprod 1855710 1 4 66 22 1cb2f5400 92672
05:10:46 - 0:00 ora_j000_prodross
> open: Permission denied
> 1855710: ora_j000_prodross
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15aab2, 0x1,
0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:13:59.017

06:01:42 - Check for changes since lastscan in file:


/dbms/tdbaprod/prodroca/admin/dump/bdump/prodroca_cjq0_757946.trc

Warning: Errors detected in file


/dbms/tdbaprod/prodroca/admin/dump/bdump/prodroca_cjq0_757946.trc

> OS pid = 1867996


> loadavg : 1.00 1.09 1.17
> swap info: free_mem = 66.71M rsv = 24.00M
> alloc = 2087.91M avail = 6144.00M swap_free = 4056.09M
> F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME
TTY TIME CMD
> 240001 A tdbaprod 1867996 1 3 65 22 1078c5400 92656
05:44:06 - 0:00 ora_j000_prodroca
> open: Permission denied
> 1867996: ora_j000_prodroca
> 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ??
> 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94
> 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640
> 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24
> 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15ab10, 0x1,
0xfffffff, 0x7000000) + 0x214
> 0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84
> 0x00000001021d4fcc kkjsexe() + 0x32c
> 0x00000001021d5d58 kkjrdp() + 0x478
> 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448
> 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150
> 0x00000001000006d8 main(??, ??) + 0x98
> 0x0000000100000360 __start() + 0x90
> *** 2008-04-01 05:46:23.398

06:01:42 - Check for changes since lastscan in file:


/dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc

Warning: Errors detected in file


/dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc

> /dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaprod/ora10g/home
> System name: AIX
> Node name: pl101
> Release: 3
> Version: 5
> Machine: 00CB85FF4C00
> Instance name: prodtrid
> Redo thread mounted by this instance: 1
> Oracle process number: 11
> Unix process pid: 921794, image: oracle@pl101 (MMON)
>
> *** 2008-04-01 06:01:39.797
> *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 06:01:39.385
> *** SESSION ID:(106.1) 2008-04-01 06:01:39.385
> Waited for process m000 to initialize for 60 seconds
> *** 2008-04-01 06:01:39.797
> Dumping diagnostic information for m000:

06:01:42 - Check for changes since lastscan in file:


/dbms/tdbaprod/prodrman/admin/dump/bdump/alert_prodrman.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodrman/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodroca/admin/dump/bdump/alert_prodroca.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodroca/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodross/admin/dump/bdump/alert_prodross.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodross/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodslot/admin/dump/bdump/alert_prodslot.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodslot/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodtrid/admin/dump/bdump/alert_prodtrid.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/prodtrid/admin/dump/udump/sbtio.log
06:01:42 - Check for changes since lastscan in file:
/dbms/tdbaprod/ora10g/home/network/log/listener.log
File /dbms/tdbaprod/ora10g/home/network/log/listener.log is changed,
but no errors detected

Note 1:
-------

Q:

Hi, we're running oracle 10 on AIX 5.3 TL04.


We're experiencing some troubles with paging space.
We've got 7 GB real mem and 10 GB paging space, and sometimes the paging space
occupation increases and it "freezes" the server
(no telnet nor console connection).

We've seen oracle has shown this error:

CODE
*** 2007-06-18 11:16:49.696
Dump diagnostics for process q002 pid 786600 which did not start after 120
seconds:
(spawn_time:x10BF1F175 now:x10BF3CB36 diff:x1D9C1)
*** 2007-06-18 11:16:54.668
Dumping diagnostic information for q002:
OS pid = 786600
loadavg : 0.07 0.27 0.28
swap info: free_mem = 9.56M rsv = 40.00M
alloc = 4397.23M avail = 10240.00M swap_free = 5842.77M
skgpgpstack: fgets() timed out after 60 seconds
skgpgpstack: pclose() timed out after 60 seconds
ERROR: process 786600 is not alive
*** 2007-06-18 11:19:41.152
*** 2007-06-18 11:27:36.403
Process startup failed, error stack:
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3

So we think it's oracle's fault, but we're not sure. We're AIX guys, not oracle,
so we're not sure about this.
Can anyone confirm if this is caused by oracle?

A:

Looks like a bug. We are running on a Windows 2003 Server Standard edition. I had
the same problem.
Server was not responding anymore after the following errors:

ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997


ORA-27301: OS failure message: Overlapped I/O operation is in progress.
ORA-27302: failure occurred at: skgpspawn

And later:
O/S-Error: (OS 1450) Insufficient system resources exist to complete the requested
service.

We are running the latest patchset 10.2.0.2 because of a big problem in 10.2.0.1
(wrong parsing causes client memory problems: Pro*COBOL, PL/SQL Developer etc. crash
because Oracle made mistakes skipping the parse process, going directly to execute
and returning corrupted data to the client).

Tomorrow I will raise a level 1 TAR indicating we had a crash. The server is now
running normally.

A:

Oracle finally admitted there was a bug: BUG 5607984 -
ORACLE DOES NOT CLOSE TCP CONNECTIONS. REMAINS IN CLOSE_WAIT STATE. [On Windows 32-bit].

Patch 10 (patch number 5639232) is supposed to solve the problem for 10.2.0.2.0.
We applied it monday morning and everything is fine up to now.

This bug is also supposed to be solved in the 10.2.0.3.0 patchset that is available
on the Metalink site.

Note 2:
-------

Q:

question:
-----------------------------------------------------------

My bdump directory received two error message traces this morning. One of the traces
displays a lot of detail, mainly as:
*** SESSION ID:(822.1) 2007-02-11 00:35:06.147
Waited for process J000 to initialize for 60 seconds
*** 2007-02-11 00:35:20.276
Dumping diagnostic information for J000:
OS pid = 811172
loadavg : 0.55 0.42 0.44
swap info: free_mem = 3.77M rsv = 24.50M
alloc = 2418.36M avail = 6272.00M swap_free = 3853.64M
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
240001 A oracle 811172 1 0 60 20 5bf12400 86396 00:34:32 - 0:00 ora_j000_BAAN
open: The file access permissions do not allow the specified action.

Then a whole bunch of pointers and something like this: "0x0000000100055800
kghbshrt(??, ??, ??, ??, ??, ??) + 0x80".
How do I find out what really went wrong? This error occurred about 10 minutes after
I did an export pump of the DB.
This is the first time I have seen such errors, and the export pump has been running
for a year.
My system is Oracle 10g R2 on AIX 5.3L.

Note 3:
-------

At least here you have an explanation about the Oracle processes:

pmon
The process monitor performs process recovery when a user process fails. PMON is
responsible for cleaning up
the cache and freeing resources that the process was using. PMON also checks on
the dispatcher processes
(described later in this table) and server processes and restarts them if they
have failed.

mman
Used for internal database tasks.

dbw0
The database writer writes modified blocks from the database buffer cache to the
datafiles. Oracle Database allows a maximum
of 20 database writer processes (DBW0-DBW9 and DBWa-DBWj). The initialization
parameter DB_WRITER_PROCESSES specifies
the number of DBWn processes. The database selects an appropriate default setting
for this initialization parameter
(or might adjust a user specified setting) based upon the number of CPUs and the
number of processor groups.

lgwr
The log writer process writes redo log entries to disk. Redo log entries are
generated in the redo log buffer
of the system global area (SGA), and LGWR writes the redo log entries sequentially
into a redo log file.
If the database has a multiplexed redo log, LGWR writes the redo log entries to a
group of redo log files.

ckpt
At specific times, all modified database buffers in the system global area are
written to the datafiles by DBWn.
This event is called a checkpoint. The checkpoint process is responsible for
signalling DBWn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent
checkpoint.

smon
The system monitor performs recovery when a failed instance starts up again. In a
Real Application Clusters database,
the SMON process of one instance can perform instance recovery for other instances
that have failed. SMON also cleans up
temporary segments that are no longer in use and recovers dead transactions
skipped during system failure and instance recovery
because of file-read or offline errors. These transactions are eventually
recovered by SMON when the tablespace or file is brought back online.

reco
The recoverer process is used to resolve distributed transactions that are pending
due to a network or system failure
in a distributed database. At timed intervals, the local RECO attempts to connect
to remote databases and automatically complete
the commit or rollback of the local portion of any pending distributed
transactions.
cjq0
Job Queue Coordinator (CJQ0)
Job queue processes are used for batch processing. The CJQ0 process dynamically
spawns job queue slave processes (J000...J999) to run the jobs.

d000
Dispatchers are optional background processes, present only when the shared server
configuration is used.

s000
A shared server process, present only when the shared server configuration is used.

qmnc
Queue monitor background process. A queue monitor process which monitors the message
queues. Used by Oracle Streams Advanced Queuing.

mmon
Performs various manageability-related background tasks.

mmnl
Performs frequent and light-weight manageability-related tasks, such as session
history capture and metrics computation.

j000
A job queue slave. (See cjq0)
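
To check which of these background processes are actually running on your own
instance, you can query V$BGPROCESS; a common sketch (a process slot is in use when
its PADDR is not zero):

SELECT name, description
FROM v$bgprocess
WHERE paddr <> '00'
ORDER BY name;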

Addition:
---------

Sep 13, 2006


Oracle Background Processes, incl. 10gR2

-------------------

--New in 10gR2

-------------------

PSP0 (new in 10gR2) - Process SPawner - to create and manage other Oracle
processes.
NOTE: There is no documentation currently in the Oracle Documentation set on this
process.

LNS1 (new in 10gR2) - a network server process used in a Data Guard (primary) database.

Further explanation from "What's New in Oracle Data Guard?" in the Oracle Data
Guard Concepts and Administration 10g Release 2 (10.2):

"During asynchronous redo transmission, the network server (LNSn) process


transmits redo data out of the online redo log
files on the primary database and no longer interacts directly with the log writer
process. This change in behavior allows
the log writer (LGWR) process to write redo data to the current online redo log
file and continue processing the next request
without waiting for inter-process communication or network I/O to complete."

-------------------

--New in 10gR1

-------------------

MMAN - Memory MANager - it serves as SGA Memory Broker and coordinates the sizing
of the memory components,
which keeps track of the sizes of the components and pending resize operations.
Used by Automatic Shared Memory Management feature.

RVWR -Recovery Writer - which is responsible for writing flashback logs which
stores pre-image(s) of data blocks.
It is used by Flashback database feature in 10g, which provides a way to quickly
revert an entire Oracle database to the state
it was in at a past point in time.
- This is different from traditional point in time recovery.
- One can use Flashback Database to back out changes that:
- Have resulted in logical data corruptions.
- Are a result of user error.
- This feature is not applicable for recovering the database in case of media
failure.
- The time required for flashbacking a database to a specific time in past is
DIRECTLY PROPORTIONAL to the number of changes made and not on the size
of the database.

Jnnn - Job queue processes which are spawned as needed by CJQ0 to complete
scheduled jobs. This is not a new process.

CTWR - Change Tracking Writer (CTWR) which works with the new block changed
tracking features in 10g for fast RMAN incremental backups.

MMNL - Memory Monitor Light process - which works with the Automatic Workload
Repository new features (AWR) to write out
full statistics buffers to disk as needed.

MMON - Memory MONitor (MMON) process - is associated with the Automatic Workload
Repository new features used for automatic problem
detection and self-tuning. MMON writes out the required statistics for AWR on a
scheduled basis.

M000 - MMON background slave (m000) processes.

CJQn - Job Queue monitoring process - which is initiated with the
job_queue_processes parameter. This is not new.

RBAL - It is the ASM related process that performs rebalancing of disk resources
controlled by ASM.

ARBx - These processes are managed by the RBAL process and are used to do the
actual rebalancing of ASM controlled disk resources.
The number of ARBx processes invoked is directly influenced by the asm_power_limit
parameter.

ASMB - is used to provide information to and from the Cluster Synchronization
Services used by ASM to manage the disk resources.
It is also used to update statistics and provide a heartbeat mechanism.

Changes about Queue Monitor Processes

The QMON processes are optional background processes for Oracle Streams Advanced
Queueing (AQ) which monitor and maintain all
the system and user owned AQ objects. These optional processes, like the job_queue
processes, do not cause the instance to fail
on process failure. They provide the mechanism for message expiration, retry, and
delay, maintain queue statistics,
remove processed messages from the queue table and maintain the dequeue IOT.

QMNx - Pre-10g QMON Architecture

The number of queue monitor processes is controlled via the dynamic initialisation
parameter AQ_TM_PROCESSES.
If this parameter is set to a non-zero value X, Oracle creates that number of QMNx
processes, named ora_qmn0_<dbname>
(where <dbname> is the identifier of the database) up to ora_qmnX_<dbname>; if the
parameter is not specified or is set to 0,
then QMON processes are not created. There can be a maximum of 10 QMON processes
running on a single instance.
For example the parameter can be set in the init.ora as follows

aq_tm_processes=1 or set dynamically via alter system set aq_tm_processes=1;

QMNC & Qnnn - 10g QMON Architecture

Beginning with release 10.1, the architecture of the QMON processes has been changed
to an automatically controlled coordinator slave architecture. The Queue Monitor
Coordinator, ora_qmnc_<dbname>, dynamically spawns slaves named ora_qXXX_<dbname>,
depending on the system load, up to a maximum of 10 in total.

For version 10.01.XX.XX onwards it is no longer necessary to set AQ_TM_PROCESSES
when Oracle Streams AQ or Streams is used.
However, if you do specify a value, then that value is taken into account.
However, the number of qXXX processes can be different
from what was specified by AQ_TM_PROCESSES. If AQ_TM_PROCESSES is not specified in
versions 10.1 and above, QMNC only runs
when you have AQ objects in your database.

19.78: ORA-00600: internal error code, arguments: [13080], [], [], [], [], [], [], []:
======================================================================================

When running the statement ALTER TABLE ... ENABLE CONSTRAINT, this ORA-00600 error appears.

19.79: WARNING: inbound connection timed out (ORA-3136):
========================================================

Note 1:

Q:

WARNING: inbound connection timed out (ORA-3136) - this error is appearing in the alert log.
Please explain the following:
1. How to overcome this error?
2. Is there any adverse effect in the long run?
3. Is it required to shut down the database to solve it?

A:

A good discussion at freelists.org:

http://www.freelists.org/archives/oracle-l/08-2005/msg01627.html

In 10gR2, the SQLNET.INBOUND_CONNECT_TIMEOUT parameter was given a default of 60 (seconds).

Set the parameters SQLNET.INBOUND_CONNECT_TIMEOUT and
INBOUND_CONNECT_TIMEOUT_listenername to 0 (indefinite).

A:

What the error is telling you is that a connection attempt was made, but the
session authentication was not provided
before SQLNET.INBOUND_CONNECT_TIMEOUT seconds.

As far as adverse effects in the long run, you have a user or process that is
unable to connect to the database.
So someone is unhappy about the database/application.

Before setting SQLNET.INBOUND_CONNECT_TIMEOUT, verify that there is not a firewall
or Network Address Translation (NAT)
between the client and server. Those are common causes for ORA-3136.

Q:

Subject: WARNING: inbound connection timed out (ORA-3136)

I have been getting like 50 of these error messages a day in my alert_log the past
couple of days. Anybody know what they mean?

WARNING: inbound connection timed out (ORA-3136)

A:

Yep this is annoying, especially if you have alert log monitors :(. I had
these when I first went to 10G... make these changes to get rid of them:
Listener.ora:

INBOUND_CONNECT_TIMEOUT_<LISTENER_NAME>=0
.. for every listener

Sqlnet.ora:

SQLNET.INBOUND_CONNECT_TIMEOUT=0

Then the errors stop...

Note 2:

SQLNET.INBOUND_CONNECT_TIMEOUT

Purpose
Use the SQLNET.INBOUND_CONNECT_TIMEOUT parameter to specify the time, in seconds,
for a client to connect with the database server
and provide the necessary authentication information.

If the client fails to establish a connection and complete authentication in the
time specified, then the database server terminates the connection. In addition,
the database server logs the IP address of the client
and an ORA-12170: TNS:Connect timeout occurred error message to the sqlnet.log
file. The client receives either an
ORA-12547: TNS:lost contact or an ORA-12637: Packet receive failed error message.

Without this parameter, a client connection to the database server can stay open
indefinitely without authentication.
Connections without authentication can introduce possible denial-of-service
attacks, whereby malicious clients attempt to
flood database servers with connect requests that consume resources.

To protect both the database server and the listener, Oracle Corporation
recommends setting this parameter in combination
with the INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora file.
When specifying values for these parameters,
consider the following recommendations:

Set both parameters to an initial low value.


Set the value of the INBOUND_CONNECT_TIMEOUT_listener_name parameter to a lower
value than the SQLNET.INBOUND_CONNECT_TIMEOUT parameter.
For example, you can set INBOUND_CONNECT_TIMEOUT_listener_name to 2 seconds and
INBOUND_CONNECT_TIMEOUT parameter to 3 seconds.
If clients are unable to complete connections within the specified time due to
system or network delays that are normal
for the particular environment, then increment the time as needed.
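
As a sketch, the recommended combination from the note above would look like this
(LISTENER is an assumed listener name; substitute your own):

# listener.ora:
INBOUND_CONNECT_TIMEOUT_LISTENER = 2

# sqlnet.ora on the server:
SQLNET.INBOUND_CONNECT_TIMEOUT = 3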

19.80 How to insert special symbols:
====================================

Note 1:
-------
Q:

Hi,
Is there anyone who knows how to insert a value containing "&" into a table?
Something like this:

insert into test_tab (test_field) values ('&test');

I tried ''&test' and many more but none of them works:-(

As far as I know Oracle tries to bind a value when it encounters '&sth'...

thanks in advance

A:

Try:
set define off
Then execute your insert.
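
A minimal SQL*Plus sketch using the table and value from the question; besides
SET DEFINE OFF you can also build the ampersand with CHR(38), so the substitution
mechanism never sees it:

SQL> set define off
SQL> insert into test_tab (test_field) values ('&test');
SQL> set define on

SQL> -- alternative: CHR(38) is the ASCII code for '&'
SQL> insert into test_tab (test_field) values (chr(38) || 'test');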

19.81: SGA POLICY: Cache below reserve getting from component1:
===============================================================

19.82: AUTO SGA: Not free:
==========================

Q:

Hi,
We have 10gr2 on windows server 2003 standard edition.

The below errors are generated in mman trace files every now and then.

AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE795C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7A10, 4, 1, 0
AUTO SGA: Not free 0x2DFE7AC4, 4, 1, 0
AUTO SGA: Not free 0x2DFE7B78, 4, 1, 0
AUTO SGA: Not free 0x2DFE7C2C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7CE0, 4, 1, 0
AUTO SGA: Not free 0x2DFE7D94, 4, 1, 0
AUTO SGA: Not free 0x2DFF2708, 4, 1, 0

Metalink doesn't give much info either (BUG 5201883 for your reference).
Did anybody happen to come across this issue and possibly resolve it?
Any comments are appreciated.

A:

This can be safely ignored. Since ASMM (Automatic Shared Memory Management) is
enabled at instance level, you might be hitting this bug.
Check Metalink note 394026.1.

Adding the Metalink note.

A:

As stated in the bug description,

either
1) ignore the messages and delete generated trace files periodically
and/or
2) wait for patchset 10.2.0.4
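
Since these messages come from ASMM resize activity, it can help to see what the
memory broker has been doing. A sketch using the 10g dynamic SGA views:

SELECT component, current_size, min_size, max_size
FROM v$sga_dynamic_components;

SELECT component, oper_type, initial_size, target_size, final_size, status
FROM v$sga_resize_ops
ORDER BY start_time;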

=====================
20. DATABASE TRACING:
=====================

20.2 Oracle 10g:
================

20.2.1 Tracing a session in 10g:
--------------------------------

The current state of database and instance trace is reported in the data
dictionary view DBA_ENABLED_TRACES.

SQL> desc DBA_ENABLED_TRACES
Name Null? Type
----------------------------------------- -------- ----------------------------
TRACE_TYPE VARCHAR2(21)
PRIMARY_ID VARCHAR2(64)
QUALIFIER_ID1 VARCHAR2(48)
QUALIFIER_ID2 VARCHAR2(32)
WAITS VARCHAR2(5)
BINDS VARCHAR2(5)
INSTANCE_NAME VARCHAR2(16)
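
For example, to check which traces are currently active (a quick sketch against the
view described above):

SELECT trace_type, primary_id, qualifier_id1, waits, binds
FROM dba_enabled_traces;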

Note 1: 10g tracing quick start:
--------------------------------

Oracle has released a few new facilities to help with tracing in 10g; here's a real
quick wrap-up of the most significant:

>>>> Using the new client identifier:

You can tag database sessions with a session identifier that can later be used to
identify sessions to trace.
You can set the identifier like this:
begin
  dbms_session.set_identifier('GUY1');
end;
/

You can set this from a login trigger if you don't have access to the source code
(see the sketch at the end of this subsection). To set trace on for a matching
client id, you use DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE:

BEGIN
  DBMS_MONITOR.client_id_trace_enable (client_id => 'GUY1',
                                       waits     => TRUE,
                                       binds     => FALSE);
END;
/

You can add waits and/or bind variables to the trace file using the flags shown.
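
As mentioned above, a logon trigger can set the identifier when you cannot touch
the application code. A minimal sketch (the trigger name and the USER test are
assumptions; adapt the criterion to your own situation):

create or replace trigger trg_set_client_id
after logon on database
begin
  if user = 'BILL' then  -- hypothetical selection criterion
    dbms_session.set_identifier('GUY1');
  end if;
end;
/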

>>>> Tracing by Module and/or action:

Many Oracle-aware applications set Module and action properties and you can use
these to enable tracing as well.
The serv_mod_act_trace_enable method allows you to set tracing on for sessions
matching a particular service, module, action
and (for clusters) instance identifier. You can see current values for these
using the following query:

SELECT DISTINCT instance_name, service_name, module, action
FROM gv$session JOIN gv$instance USING (inst_id);

INSTANCE_NAME    SERVICE_NA MODULE                         ACTION
---------------- ---------- ------------------------------ ------------
ghrac11 SYS$USERS
ghrac11 ghrac1 SQLNav5.exe
ghrac11 ghrac1 Spotlight On Oracle, classic 4.0
ghrac13 SYS$USERS racgimon@mel601416.melquest.de
v.mel.au.qsft (TNS

ghrac13 ghrac1 Spotlight On Oracle, classic 4.0


ghrac12 ghrac1 SQL*Plus
ghrac12 SYS$USERS racgimon@mel601416.melquest.de
v.mel.au.qsft (TNS

So to generate traces for all SQL*plus sessions that connect to the cluster from
any instance,
I could issue the following command:

BEGIN
  DBMS_MONITOR.serv_mod_act_trace_enable
               (service_name  => 'ghrac1',
                module_name   => 'SQL*Plus',
                action_name   => DBMS_MONITOR.all_actions,
                waits         => TRUE,
                binds         => FALSE,
                instance_name => NULL
               );
END;
/

>>>> Tracing using sid and serial

DBMS_MONITOR can enable traces for specific sid and serial as you would expect:

SELECT instance_name, SID, serial#, module, action
FROM gv$session JOIN gv$instance USING (inst_id)
WHERE username = 'SYSTEM';

INSTANCE_NAME    SID        SERIAL#    MODULE       ACTION
---------------- ---------- ---------- ------------ ------------
ghrac11 184 13179 SQL*Plus
ghrac11 181 3353 SQLNav5.exe
ghrac13 181 27184 SQL*Plus
ghrac13 180 492 SQL*Plus
ghrac12 184 18601 SQL*Plus

BEGIN
dbms_monitor.session_trace_enable (session_id => 180,
serial_num => 492,
waits => TRUE,
binds => TRUE
);
END;
/

The sid and serial need to be current now; unlike the other methods, this does not
set up a permanent trace request
(simply because the sid and serial# will never be repeated). Also, you need to
issue this from the same instance
if you are in a RAC cluster.

Providing NULLs for sid and serial# traces the current session.

>>>> Finding and analyzing the trace:

This hasn't changed much in 10g; the traces are in the USER_DUMP_DEST directory,
and you can analyze them using tkprof.
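
If you don't know where USER_DUMP_DEST points on a given instance, just ask the
instance:

SELECT value
FROM v$parameter
WHERE name = 'user_dump_dest';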

The trcsess utility is a new addition that allows you to generate a trace based
on multiple input files and several other conditions.

trcsess [output=<output file name>]
        [session=<session ID>]
        [clientid=<clientid>]
        [service=<service name>]
        [action=<action name>]
        [module=<module name>]
        <trace file names>

To generate a single trace file combining all the entries from the SQL*Plus
sessions I traced earlier,
then to feed them into tkprof for analysis, I would issue the following commands:

[oracle@mel601416 udump]$ trcsess module='SQL*Plus' *.trc output=sqlplus.trc
[oracle@mel601416 udump]$ tkprof sqlplus.trc sqlplus.prf

TKPROF: Release 10.2.0.1.0 - Production on Wed Sep 27 14:47:51 2006

Note 2:
-------

Setting Up Tracing with DBMS_MONITOR

The DBMS_MONITOR package has routines for enabling and disabling statistics
aggregation as well as for tracing by session ID,
or tracing based upon a combination of service name, module name, and action name.
(These three are associated hierarchically:
you can't specify an action without specifying the module and the service name,
but you can specify only the service name,
or only the service name and module name.) The module and action names, if
available, come from within the application code.
For example, Oracle E-Business Suite applications provide module and action names
in the code, so you can identify these
by name in any of the Oracle Enterprise Manager pages. (PL/SQL developers can
embed calls into their applications by using the
DBMS_APPLICATION_INFO package to set module and action names.)

Note that setting the module, action, and other parameters such as client_id no
longer causes a round-trip to the database;
these routines now piggyback on all calls from the application.

The service name is determined by the connect string used to connect to a service.
User sessions not associated with a specific
service are handled by sys$users (sys$background is the default service for the
background processes). Since we have a service
and a module name, we can turn on tracing for this module as follows:

SQL> exec dbms_monitor.serv_mod_act_trace_enable (service_name=>'testenv', module_name=>'product_update');

PL/SQL procedure successfully completed.

We can turn on tracing for the client:

SQL> exec dbms_monitor.client_id_trace_enable (client_id=>'kimberly');

PL/SQL procedure successfully completed.


Note that all of these settings are persistent: all sessions associated with the
service and module will be traced, not just the current sessions.

To trace the SQL based on the session ID, look at the Oracle Enterprise Manager
Top Sessions page, or query the V$SESSION view as you likely currently do.

SQL> select sid, serial#, username from v$session;
SID SERIAL# USERNAME
------ ------- ------------
133 4152 SYS
137 2418 SYSMAN
139 53 KIMBERLY
140 561 DBSNMP
141 4 DBSNMP
. . .
168 1
169 1
170 1
28 rows selected.

With the session ID (SID) and serial number, you can use DBMS_MONITOR to enable
tracing for just this session:

SQL> exec dbms_monitor.session_trace_enable(139);

exec dbms_monitor.session_trace_enable(81);

PL/SQL procedure successfully completed.

The serial number defaults to the current serial number for the SID (unless
otherwise specified), so if that's the session
and serial number you want to trace, you need not look any further. Also, by
default, WAITS are set to true and BINDS to false,
so the syntax above is effectively the same as the following:

SQL> exec dbms_monitor.session_trace_enable(session_id=>139, serial_num=>53, waits=>true, binds=>false);

Note that WAITS and BINDS are the same parameters that you might have set in the
past using DBMS_SUPPORT and the 10046 event.

If you're working in a production environment, at this point you'd rerun the
errant SQL or application, and the trace files would be created accordingly.

Note 3: DBMS_MONITOR:
---------------------

The DBMS_MONITOR package let you use PL/SQL for controlling additional tracing and
statistics gathering.

The chapter contains the following topics:

Subprogram                           Description
CLIENT_ID_STAT_DISABLE Procedure     - Disables statistic gathering previously enabled for a given Client Identifier
CLIENT_ID_STAT_ENABLE Procedure      - Enables statistic gathering for a given Client Identifier
CLIENT_ID_TRACE_DISABLE Procedure    - Disables the trace previously enabled for a given Client Identifier globally for the database
CLIENT_ID_TRACE_ENABLE Procedure     - Enables the trace for a given Client Identifier globally for the database
DATABASE_TRACE_DISABLE Procedure     - Disables SQL trace for the whole database or a specific instance
DATABASE_TRACE_ENABLE Procedure      - Enables SQL trace for the whole database or a specific instance
SERV_MOD_ACT_STAT_DISABLE Procedure  - Disables statistic gathering enabled for a given combination of Service Name, MODULE and ACTION
SERV_MOD_ACT_STAT_ENABLE Procedure   - Enables statistic gathering for a given combination of Service Name, MODULE and ACTION
SERV_MOD_ACT_TRACE_DISABLE Procedure - Disables the trace for ALL enabled instances for a given combination of Service Name, MODULE and ACTION name globally
SERV_MOD_ACT_TRACE_ENABLE Procedure  - Enables SQL tracing for a given combination of Service Name, MODULE and ACTION globally unless an instance_name is specified
SESSION_TRACE_DISABLE Procedure      - Disables the previously enabled trace for a given database session identifier (SID) on the local instance
SESSION_TRACE_ENABLE Procedure       - Enables the trace for a given database session identifier (SID) on the local instance

--------------------------------------------------------------------------------

-- CLIENT_ID_STAT_ENABLE Procedure
This procedure enables statistic gathering for a given Client Identifier.
Statistics gathering is global for the database
and persistent across instance starts and restarts. That is, statistics are
enabled for all instances of the same database,
including restarts. Statistics are viewable through V$CLIENT_STATS views.

Syntax

DBMS_MONITOR.CLIENT_ID_STAT_ENABLE(
client_id IN VARCHAR2);
Parameters

Table 60-3 CLIENT_ID_STAT_ENABLE Procedure Parameters

Parameter Description
client_id
The Client Identifier for which statistic aggregation is enabled.

Examples

To enable statistic accumulation for a client with a given client ID:

EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE('janedoe');

EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE('edp$jvl');
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE('edp$jvl');
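
The aggregated statistics can then be read back from V$CLIENT_STATS, for example:

SELECT stat_name, value
FROM v$client_stats
WHERE client_identifier = 'janedoe';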

-- CLIENT_ID_STAT_DISABLE Procedure
This procedure will disable statistics accumulation for all instances and remove
the accumulated results
from V$CLIENT_STATS view enabled by the CLIENT_ID_STAT_ENABLE Procedure.

Syntax

DBMS_MONITOR.CLIENT_ID_STAT_DISABLE(
client_id IN VARCHAR2);
Parameters

Parameter Description
client_id
The Client Identifier for which statistic aggregation is disabled.

Examples

To disable accumulation:

EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE('janedoe');
--------------------------------------------------------------------------------

-- CLIENT_ID_TRACE_DISABLE Procedure
This procedure will disable tracing enabled by the CLIENT_ID_TRACE_ENABLE
Procedure.

Syntax

DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(
client_id IN VARCHAR2);
Parameters

Table 60-4 CLIENT_ID_TRACE_DISABLE Procedure Parameters

Parameter Description
client_id
The Client Identifier for which SQL tracing is disabled.

Examples

EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('janedoe');
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('edp$jvl');

-- CLIENT_ID_TRACE_ENABLE Procedure
This procedure will enable the trace for a given client identifier globally for
the database.

Syntax

DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(
client_id IN VARCHAR2,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE);
Parameters

Table 60-5 CLIENT_ID_TRACE_ENABLE Procedure Parameters

Parameter Description
client_id
Database Session Identifier for which SQL tracing is enabled.

waits
If TRUE, wait information is present in the trace.

binds
If TRUE, bind information is present in the trace.

Usage Notes

The trace will be written to multiple trace files because more than one Oracle
shadow process can work
on behalf of a given client identifier.
The tracing is enabled for all instances and persistent across restarts.

Examples

EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('janedoe', TRUE, FALSE);

EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('albert');
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('albert');

--------------------------------------------------------------------------------

-- SERV_MOD_ACT_STAT_DISABLE Procedure
This procedure will disable statistics accumulation and remove the accumulated
results from V$SERV_MOD_ACT_STATS view.
Statistics disabling is persistent for the database. That is, service statistics
are disabled for instances of the same database
(plus dblinks that have been activated as a result of the enable).

Syntax

DBMS_MONITOR.SERV_MOD_ACT_STAT_DISABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2,
action_name IN VARCHAR2 DEFAULT ALL_ACTIONS);
Parameters

Table 60-8 SERV_MOD_ACT_STAT_DISABLE Procedure Parameters

Parameter Description
service_name
Name of the service for which statistic aggregation is disabled.

module_name
Name of the MODULE. An additional qualifier for the service. It is a required
parameter.

action_name
Name of the ACTION. An additional qualifier for the Service and MODULE name.
Omitting the parameter
(or supplying ALL_ACTIONS constant) means enabling aggregation for all Actions for
a given Server/Module combination.
In this case, statistics are aggregated on the module level.

-- SERV_MOD_ACT_STAT_ENABLE Procedure
This procedure enables statistic gathering for a given combination of Service
Name, MODULE and ACTION. Calling this procedure enables
statistic gathering for a hierarchical combination of Service name, MODULE name,
and ACTION name on all instances for the same database.
Statistics are accessible by means of the V$SERV_MOD_ACT_STATS view.

Syntax

DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2,
action_name IN VARCHAR2 DEFAULT ALL_ACTIONS);
Parameters

Table 60-9 SERV_MOD_ACT_STAT_ENABLE Procedure Parameters

Parameter Description
service_name
Name of the service for which statistic aggregation is enabled.

module_name
Name of the MODULE. An additional qualifier for the service. It is a required
parameter.

action_name
Name of the ACTION. An additional qualifier for the Service and MODULE name.
Omitting the parameter
(or supplying ALL_ACTIONS constant) means enabling aggregation for all Actions for
a given Server/Module combination.
In this case, statistics are aggregated on the module level.

Usage Notes

Enabling statistic aggregation for the given combination of Service/Module/Action
names is slightly complicated by the fact
that the Module/Action values can be empty strings, which are indistinguishable
from NULLs. For this reason, we adopt the following conventions:

A special constant (unlikely to be a real action name) is defined:

ALL_ACTIONS constant VARCHAR2 := '###ALL_ACTIONS';

Using ALL_ACTIONS for a module specification means that aggregation is enabled for
all actions with a given module name,
while using NULL (or empty string) means that aggregation is enabled for an action
whose name is an empty string.

Examples

To enable statistic accumulation for a given combination of Service name and MODULE:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE('APPS1','PAYROLL');

To enable statistic accumulation for a given combination of Service name, MODULE
and ACTION:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE('APPS1','GLEDGER','DEBIT_ENTRY');

If both of the preceding commands are issued, statistics are accumulated as follows:

For the APPS1 service, because accumulation for each Service Name is the default.

For all actions in the PAYROLL Module.

For the DEBIT_ENTRY Action within the GLEDGER Module.
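
The accumulated figures can then be read back from V$SERV_MOD_ACT_STATS, for example:

SELECT aggregation_type, module, action, stat_name, value
FROM v$serv_mod_act_stats
WHERE service_name = 'APPS1';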

--------------------------------------------------------------------------------

-- DATABASE_TRACE_ENABLE Procedure
This procedure enables SQL trace for the whole database or a specific instance.

Syntax

DBMS_MONITOR.DATABASE_TRACE_ENABLE(
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE,
instance_name IN VARCHAR2 DEFAULT NULL);
Parameters

Table 60-7 DATABASE_TRACE_ENABLE Procedure Parameters

Parameter Description
waits
If TRUE, wait information will be present in the trace

binds
If TRUE, bind information will be present in the trace

instance_name
If set, restricts tracing to the named instance

EXECUTE dbms_monitor.database_trace_enable
EXECUTE dbms_monitor.database_trace_disable
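
A sketch of the named-parameter form, restricting the trace to a single instance
(the instance name RAC1 is an assumption):

EXECUTE DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits=>TRUE, binds=>FALSE, instance_name=>'RAC1');
EXECUTE DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name=>'RAC1');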

-- DATABASE_TRACE_DISABLE Procedure
This procedure disables SQL trace for the whole database or a specific instance.

Syntax

DBMS_MONITOR.DATABASE_TRACE_DISABLE(
instance_name IN VARCHAR2 DEFAULT NULL);
Parameters

Table 60-6 DATABASE_TRACE_DISABLE Procedure Parameters

Parameter Description
instance_name
Disables tracing for the named instance

--------------------------------------------------------------------------------

SERV_MOD_ACT_TRACE_DISABLE Procedure
This procedure will disable the trace at ALL enabled instances for a given
combination of Service Name, MODULE, and ACTION name globally.

Syntax

DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2,
action_name IN VARCHAR2 DEFAULT ALL_ACTIONS,
instance_name IN VARCHAR2 DEFAULT NULL);
Parameters

Table 60-10 SERV_MOD_ACT_TRACE_DISABLE Procedure Parameters

Parameter Description
service_name
Name of the service for which tracing is disabled.

module_name
Name of the MODULE. An additional qualifier for the service.

action_name
Name of the ACTION. An additional qualifier for the Service and MODULE name.

instance_name
If set, this restricts tracing to the named instance_name.

Usage Notes

Specifying NULL for the module_name parameter means that statistics will no longer
be accumulated for the sessions which do not set the MODULE attribute.

Examples

To enable tracing for a Service named APPS1:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1',
DBMS_MONITOR.ALL_MODULES, DBMS_MONITOR.ALL_ACTIONS,TRUE,
FALSE,NULL);
To disable tracing specified in the previous step:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE('APPS1');
To enable tracing for a given combination of Service and MODULE (all ACTIONs):

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1','PAYROLL',
DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);
To disable tracing specified in the previous step:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE('APPS1','PAYROLL');

--------------------------------------------------------------------------------

SERV_MOD_ACT_TRACE_ENABLE Procedure
This procedure will enable SQL tracing for a given combination of Service Name,
MODULE and ACTION globally unless an instance_name is specified.

Syntax

DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2 DEFAULT ANY_MODULE,
action_name IN VARCHAR2 DEFAULT ANY_ACTION,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE,
instance_name IN VARCHAR2 DEFAULT NULL);
Parameters

Table 60-11 SERV_MOD_ACT_TRACE_ENABLE Procedure Parameters

Parameter Description
service_name
Name of the service for which tracing is enabled.

module_name
Name of the MODULE. An optional additional qualifier for the service.

action_name
Name of the ACTION. An optional additional qualifier for the Service and MODULE
name.

waits
If TRUE, wait information is present in the trace.

binds
If TRUE, bind information is present in the trace.

instance_name
If set, this restricts tracing to the named instance_name.

Usage Notes

The procedure enables a trace for a given combination of Service, MODULE and
ACTION name. The specification is strictly hierarchical: Service Name or Service
Name/MODULE, or Service Name, MODULE, and ACTION name must be specified. Omitting
a qualifier behaves like a wild-card, so that not specifying an ACTION means all
ACTIONs. Using the ALL_ACTIONS constant achieves the same purpose.

This tracing is useful when an application MODULE and optionally known ACTION is
experiencing poor service levels.

By default, tracing is enabled globally for the database. The instance_name
parameter is provided to restrict tracing to named instances that are known,
for example, to exhibit poor service levels.

Tracing information is present in multiple trace files and you must use the
trcsess tool to collect it into a single file.

Specifying NULL for the module_name parameter means that statistics will be
accumulated for the sessions which do not set the MODULE attribute.

Examples

To enable tracing for a Service named APPS1:

EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1',
DBMS_MONITOR.ALL_MODULES, DBMS_MONITOR.ALL_ACTIONS,TRUE,
FALSE,NULL);
To enable tracing for a given combination of Service and MODULE (all ACTIONs):
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1','PAYROLL',
DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);

--------------------------------------------------------------------------------

SESSION_TRACE_DISABLE Procedure
This procedure will disable the trace for a given database session at the local
instance.

Syntax

DBMS_MONITOR.SESSION_TRACE_DISABLE(
session_id IN BINARY_INTEGER DEFAULT NULL,
serial_num IN BINARY_INTEGER DEFAULT NULL);
Parameters

Table 60-12 SESSION_TRACE_DISABLE Procedure Parameters

Parameter Description
session_id
Name of the service for which SQL trace is disabled.

serial_num
Serial number for this session.

Usage Notes

If serial_num is NULL but session_id is specified, a session with a given
session_id is no longer traced irrespective of its serial number. If both
session_id and serial_num are NULL, the current user session is no longer traced.
It is illegal to specify a NULL session_id and non-NULL serial_num. In addition,
the NULL values are defaults and can be omitted.

Examples

To enable tracing for a client with a given client session ID:

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(7, 4634, TRUE, FALSE);

To disable tracing specified in the previous step:

EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(7, 4634);

--------------------------------------------------------------------------------

SESSION_TRACE_ENABLE Procedure
This procedure enables a SQL trace for the given Session ID on the local instance

Syntax

DBMS_MONITOR.SESSION_TRACE_ENABLE(
session_id IN BINARY_INTEGER DEFAULT NULL,
serial_num IN BINARY_INTEGER DEFAULT NULL,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE)
Parameters
Table 60-13 SESSION_TRACE_ENABLE Procedure Parameters

Parameter Description
session_id
Database Session Identifier for which SQL tracing is enabled. Specifying NULL
means that my current session should be traced.

serial_num
Serial number for this session. Specifying NULL means that any session which
matches session_id (irrespective of serial number) should be traced.

waits
If TRUE, wait information is present in the trace.

binds
If TRUE, bind information is present in the trace.

Usage Notes

The procedure enables a trace for a given database session, and is still useful
for client/server applications.
The trace is enabled only on the instance to which the caller is connected, since
database sessions do not span instances.
This tracing is strictly local to an instance.

If serial_num is NULL but session_id is specified, a session with a given
session_id is traced irrespective of its serial number.
If both session_id and serial_num are NULL, the current user session is traced.
It is illegal to specify a NULL session_id and non-NULL serial_num. In addition,
the NULL values are defaults and can be omitted.

Examples

To enable tracing for a client with a given client session ID:

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(7, 4634, TRUE, FALSE);

To disable tracing specified in the previous step:

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(82,30962);
EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(82,30962);
Either

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(5);
or

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(5, NULL);

traces the session with session ID of 5, while either

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE();
or

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL);

traces the current user session. Also,
EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL, TRUE, TRUE);
traces the current user session including waits and binds. The same can be also
expressed using keyword syntax:

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(binds=>TRUE);

Note 4:
-------

End-to-End Tracing

A common approach to diagnosing performance problems is to enable sql_trace to
trace database calls and then analyze the output later using a tool such as
tkprof. However, the approach has a serious limitation in databases with shared
server architecture.
In this configuration, several shared server processes are created to service the
requests from the users.
When user BILL connects to the database, the dispatcher passes the connection to
an available shared server.
If none is available, a new one is created. If this session starts tracing, the
calls made by the shared server process are traced.

Now suppose that BILL's session becomes idle and LORA's session becomes active. At
that point the shared server originally
servicing BILL is assigned to LORA's session. At this point, the tracing
information emitted is not from BILL's session,
but from LORA's. When LORA's session becomes inactive, this shared server may be
assigned to another active session,
which will have completely different information.

In 10g, this problem has been effectively addressed through the use of end-to-end
tracing. In this case, tracing is not done only
by session, but by an identifiable name such as a client identifier. A new package
called DBMS_MONITOR is available for this purpose.

For instance, you may want to trace all sessions with the identifier
account_update. To set up the tracing, you would issue:
exec DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('account_update');

This command enables tracing on all sessions with the identifier account_update.
When BILL connects to the database,
he can issue the following to set the client identifier:
exec DBMS_SESSION.SET_IDENTIFIER ('account_update')

Tracing is active on the sessions with the identifier account_update, so the above
session will be traced and a trace file
will be generated on the user dump destination directory. If another user connects
to the database and sets her client identifier
to account_update, that session will be traced as well, automatically, without
setting any other command inside the code.
All sessions with the client identifier account_update will be traced until the
tracing is disabled by issuing:
exec DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('account_update');

The resulting trace files can be analyzed by tkprof. However, each session
produces a different trace file. For proper problem
diagnosis, we are interested in the consolidated trace file; not individual ones.
How do we achieve that?

Simple. Using a tool called trcsess, you can extract information relevant to
client identifier account_update to a single file
that you can run through tkprof. In the above case, you can go in the user dump
destination directory and run:
trcsess output=account_update_trc.txt clientid=account_update *

This command creates a file named account_update_trc.txt that looks like a regular
trace file but has information on only
those sessions with client identifier account_update. This file can be run through
tkprof to get the analyzed output.

Contrast this approach with the previous, more difficult method of collecting
trace information. Furthermore, tracing is enabled
and disabled by some variable such as client identifier, without calling alter
session set sql_trace = true from that session.
Another procedure in the same package, SERV_MOD_ACT_TRACE_ENABLE, can enable
tracing in other combinations such as for a
specific service, module, or action, which can be set by dbms_application_info
package.

Note 5:
-------

Generating SQL Trace Files

Oracle Tips by Burleson Consulting

The following Tip is from the outstanding book "Oracle PL/SQL Tuning: Expert
Secrets for High Performance Programming"
by Dr. Tim Hall, Oracle ACE of the year, 2006:

There are numerous ways to enable, disable and vary the contents of this trace.
The following methods have been available
for several versions of the database.

-- All versions.

SQL> ALTER SESSION SET sql_trace=TRUE;
SQL> ALTER SESSION SET sql_trace=FALSE;

SQL> EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
SQL> EXEC DBMS_SESSION.set_sql_trace(sql_trace => FALSE);

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
SQL> ALTER SESSION SET EVENTS '10046 trace name context off';

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE);
SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);

SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>8, nm=>' ');
SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>0, nm=>' ');

-- All versions, requires the DBMS_SUPPORT package to be loaded.

SQL> EXEC DBMS_SUPPORT.start_trace(waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_SUPPORT.stop_trace;

SQL> EXEC DBMS_SUPPORT.start_trace_in_session(sid=>123, serial=>1234, waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_SUPPORT.stop_trace_in_session(sid=>123, serial=>1234);

The dbms_support package is not present by default, but can be loaded as the SYS
user by executing the @$ORACLE_HOME/rdbms/admin/dbmssupp.sql script.

For methods that require tracing levels, the following are valid values:

0 - No trace. Like switching sql_trace off.

2 - The equivalent of regular sql_trace.

4 - The same as 2, but with the addition of bind variable values.

8 - The same as 2, but with the addition of wait events.

12 - The same as 2, but with both bind variable values and wait events.

The same combinations are possible for those methods with boolean parameters for
waits and binds.
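
For example, to capture both bind values and wait events in one go with the
event syntax (a sketch; level 12 = 4 + 8, as per the list above):

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
SQL> ALTER SESSION SET EVENTS '10046 trace name context off';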

With the advent of Oracle 10g, the SQL tracing options have been centralized and
extended using the dbms_monitor package.
The examples below show a few possible variations for enabling and disabling SQL
trace in Oracle 10g.

-- Oracle 10g

SQL> EXEC DBMS_MONITOR.session_trace_enable;
SQL> EXEC DBMS_MONITOR.session_trace_enable(waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_disable;

SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>1234, serial_num=>1234);
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>1234, serial_num=>1234, waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_disable(session_id=>1234, serial_num=>1234);

SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall');
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.client_id_trace_disable(client_id=>'tim_hall');

SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');

The package provides the conventional session level tracing along with two new
variations. First, tracing can be enabled on multiple sessions based on the value
of the client_identifier column of the v$session view, set using the dbms_session
package.

Second, tracing can be activated for multiple sessions based on various
combinations of the service_name, module, action columns in the v$session view,
set using the dbms_application_info package, along with the instance_name in RAC
environments. With all the possible permutations and default values, this provides
a high degree of flexibility.

trcsess

Activating trace on multiple sessions means that trace information is spread
throughout many trace files. For this reason Oracle 10g introduced the trcsess
utility, allowing trace information from multiple trace files to be identified
and consolidated into a single trace file. The trcsess usage is listed below.

trcsess [output=<output file name>] [session=<session id>] [clientid=<clientid>]
        [service=<service name>] [action=<action name>] [module=<module name>]
        <trace file names>

output=<output file name>  output destination, default being standard output.
session=<session id>       session to be traced. The session id is a combination
                           of session index & session serial number, e.g. 8.13.
clientid=<clientid>        client id to be traced.
service=<service name>     service to be traced.
action=<action name>       action to be traced.
module=<module name>       module to be traced.
<trace file names>         Space-separated list of trace files, with wild card
                           '*' supported.

With all these options, the consolidated trace file can be as broad or as specific
as needed.
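
For instance, a sketch consolidating all trace information for session 8.13
(the file names are examples only):

trcsess output=ses_8_13.trc session=8.13 *.trc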

tkprof

The SQL trace files produced by the methods discussed previously can be read in
their raw form, or they can be translated
by the tkprof utility into a more human readable form. The output below lists the
usage notes from the tkprof utility in Oracle 10g.

$ tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
[print= ] [insert= ] [sys= ] [sort= ]
table=schema.tablename Use 'schema.tablename' with 'explain=' option.
explain=user/password Connect to ORACLE and issue EXPLAIN PLAN.
print=integer List only the first 'integer' SQL statements.
aggregate=yes|no
insert=filename List SQL statements and data inside INSERT statements.
sys=no TKPROF does not list SQL statements run as user SYS.
record=filename Record non-recursive statements found in the trace file.
waits=yes|no Record summary for any wait events found in the trace file.
sort=option Set of zero or more of the following sort options:

prscnt  number of times parse was called
prscpu  cpu time parsing
prsela  elapsed time parsing
prsdsk  number of disk reads during parse
prsqry  number of buffers for consistent read during parse
prscu   number of buffers for current read during parse
prsmis  number of misses in library cache during parse
execnt  number of times execute was called
execpu  cpu time spent executing
exeela  elapsed time executing
exedsk  number of disk reads during execute
exeqry  number of buffers for consistent read during execute
execu   number of buffers for current read during execute
exerow  number of rows processed during execute
exemis  number of library cache misses during execute
fchcnt  number of times fetch was called
fchcpu  cpu time spent fetching
fchela  elapsed time fetching
fchdsk  number of disk reads during fetch
fchqry  number of buffers for consistent read during fetch
fchcu   number of buffers for current read during fetch
fchrow  number of rows fetched
userid  userid of user that parsed the cursor

The waits parameter was only added in Oracle 9i, so prior to this version wait
information had to be read from the raw trace file.
The values of bind variables must be read from the raw files as they are not
displayed in the tkprof output.
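
As a worked example (a sketch; the trace file name, output name and sort option
are placeholders):

$ tkprof mydb_ora_1234.trc mydb_ora_1234.prf waits=yes sys=no sort=fchela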

20.2 OLDER ORACLE Versions 8, 8i, 9i:
=====================================

20.2.1 Trace a session:
-----------------------

Examples:
---------

exec DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE);


exec DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(23, 54071, TRUE);

DBMS_SYSTEM has some mysterious and apparently dangerous procedures in it.
Obtaining any information about SET_EV and READ_EV was very difficult and
promises to be more difficult in the future since the package header is no
longer exposed in Oracle 8.0.

In spite of Oracle's desire to keep DBMS_SYSTEM "under wraps," I feel strongly
that the SET_SQL_TRACE_IN_SESSION procedure is far too valuable to be hidden
away in obscurity. DBAs and developers
need to find out exactly
what is happening at runtime when a user is experiencing unusual performance
problems,
and the SQL trace facility is one of the best tools available for discovering what
the database
is doing during a user's session. This is especially useful when investigating
problems with software packages
where source code (including SQL) is generally unavailable.

So how can we get access to the one program in DBMS_SYSTEM we want without
exposing those other dangerous
elements to the public? The answer, of course, is to build a package of our own to
encapsulate DBMS_SYSTEM
and expose only what is safe. In the process, we can make DBMS_SYSTEM easier to
use as well.
Those of us who are "keyboard-challenged" (or just plain lazy) would certainly
appreciate
not having to type a procedure name with 36 characters.

I've created a package called trace to cover DBMS_SYSTEM and provide friendlier
ways to set SQL tracing on or off
in other user's sessions. Here is the package specification:

/* Filename on companion disk: trace.sql */


CREATE OR REPLACE PACKAGE trace
IS

  TYPE rr_rec IS RECORD (
    v_sid    NUMBER,
    v_serial NUMBER
  );

  r_rec rr_rec;

/*
|| Exposes DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION
|| with easier to call programs
||
|| Author: John Beresniewicz, Savant Corp
|| Created: 07/30/97
||
|| Compilation Requirements:
|| SELECT on SYS.V_$SESSION
|| EXECUTE on SYS.DBMS_SYSTEM (or create as SYS)
||
|| Execution Requirements:
||
*/

  /* turn SQL trace on by session id */
  PROCEDURE Xon(sid_IN IN NUMBER);

  /* turn SQL trace off by session id */
  PROCEDURE off(sid_IN IN NUMBER);

  /* turn SQL trace on by username */
  PROCEDURE Xon(user_IN IN VARCHAR2);

  /* turn SQL trace off by username */
  PROCEDURE off(user_IN IN VARCHAR2);

END trace;

The trace package provides ways to turn SQL tracing on or off by session id or
username.
One thing that annoys me about DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION is having to
figure out and pass
a session serial number into the procedure. There should always be only one
session per sid at any time
connected to the database, so trace takes care of figuring out the appropriate
serial number behind the scenes.

Another improvement (in my mind) is replacing the potentially confusing BOOLEAN
parameter sql_trace with two distinct procedures whose names indicate what is
being done. Compare the
following commands,
either of which might be used to turn SQL tracing off in session 15 using
SQL*Plus:

SQL> execute trace.off(sid_IN=>15);

SQL> execute SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(15,4567,FALSE);

The first method is both more terse and easier to understand.

The xon and off procedures are both overloaded on the single IN parameter, with
versions accepting
either the numeric session id or a character string for the session username.
Allowing session selection
by username may be easier than by sids. Why? Because sids are transient and must
be looked up at runtime,
whereas username is usually permanently associated with an individual. Beware,
though, that multiple sessions
may be concurrently connected under the same username, and invoking trace.xon by
username will turn tracing on
in all of them.
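
For example (a sketch; SCOTT stands in for any username):

SQL> execute trace.xon('SCOTT');   -- enables tracing in ALL of SCOTT's current sessions
SQL> execute trace.off('SCOTT');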

Let's take a look at the trace package body:

/* Filename on companion disk: trace.sql */


CREATE OR REPLACE PACKAGE BODY trace
IS

/*
|| Use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to turn tracing on
|| or off by either session id or username. Affects all sessions
|| that match non-NULL values of the user and sid parameters.
*/
PROCEDURE set_trace
(sqltrace_TF BOOLEAN
,user IN VARCHAR2 DEFAULT NULL
,sid IN NUMBER DEFAULT NULL)
IS
BEGIN
/*
|| Loop through all sessions that match the sid and user
|| parameters and set trace on in those sessions. The NVL
|| function in the cursor WHERE clause allows the single
|| SELECT statement to filter by either sid OR user.
*/
FOR sid_rec IN
(SELECT sid,serial#
FROM v$session S
WHERE S.type='USER'
AND S.username = NVL(UPPER(user),S.username)
AND S.sid = NVL(sid,S.sid) )
LOOP
SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION
(sid_rec.sid, sid_rec.serial#, sqltrace_TF);
END LOOP;
END set_trace;

/*
|| The programs exposed by the package all simply
|| call set_trace with different parameter combinations.
*/
PROCEDURE Xon(sid_IN IN NUMBER)
IS
BEGIN
set_trace(sqltrace_TF => TRUE, sid => sid_IN);
END Xon;

PROCEDURE off(sid_IN IN NUMBER)
IS
BEGIN
   set_trace(sqltrace_TF => FALSE, sid => sid_IN);
END off;

PROCEDURE Xon(user_IN IN VARCHAR2)
IS
BEGIN
   set_trace(sqltrace_TF => TRUE, user => user_IN);
END Xon;

PROCEDURE off(user_IN IN VARCHAR2)
IS
BEGIN
   set_trace(sqltrace_TF => FALSE, user => user_IN);
END off;

END trace;

All of the real work done in the trace package is contained in a single private
procedure called set_trace.
The public procedures merely call set_trace with different parameter combinations.
This is a structure
that many packages exhibit: private programs with complex functionality exposed
through public programs
with simpler interfaces.

One interesting aspect of set_trace is the cursor used to get session
identification data from V_$SESSION. I wanted to identify sessions for tracing
by either session id or username. I
could have just defined
two cursors on V_$SESSION with some conditional logic deciding which cursor to
use, but that just did
not seem clean enough. After all, less code means fewer bugs. The solution I
arrived at:
make use of the NVL function to have a single cursor effectively ignore either the
sid or the user parameter
when either is passed in as NULL. Since set_trace is always called with either sid
or user, but not both,
the NVLs act as a kind of toggle on the cursor. I also supplied both the sid and
user parameters to set_trace
with the default value of NULL so that only the parameter being used for selection
needs be passed in the call.

Once set_trace was in place, the publicly visible procedures were trivial.

A final note about the procedure name "xon": I wanted to use the procedure name
"on," but ran afoul of the
PL/SQL compiler since ON is a reserved word in SQL and PL/SQL.

You can also try:

Alter system set sql_trace=true;

Note that a trace file must first have been generated (for instance by setting
sql_trace=true) before tkprof has anything to format.

-- TRACING a session:
-----------------------

Enable tracing a session to generate a trace file.
This file can be formatted with TKPROF.

6.1.
The following INIT.ORA parameters must be set:
#SQL_TRACE = TRUE
USER_DUMP_DEST = <preferred directory for the trace output>
TIMED_STATISTICS = TRUE
MAX_DUMP_FILE_SIZE = <optional, determines trace output file size>

6.2
To enable the SQL trace facility for your current session, enter:

ALTER SESSION SET SQL_TRACE = TRUE;

or use

DBMS_SUPPORT.START_TRACE_IN_SESSION( SID, SERIAL# );
DBMS_SUPPORT.STOP_TRACE_IN_SESSION( SID, NULL );
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION( SID, SERIAL#, TRUE );

DBMS_SUPPORT.START_TRACE_IN_SESSION(86, 43326);

To enable the SQL trace facility for your instance, set the value of the
SQL_TRACE initialization parameter to TRUE. Statistics will be collected for all
sessions.

Once the SQL trace facility has been enabled for the instance,
you can disable it for an individual session by entering:
ALTER SESSION SET SQL_TRACE = FALSE;
6.3

Examples of TKPROF

TKPROF ora53269.trc ora53269.prf SORT=(PRSDSK,EXEDSK,FCHDSK) PRINT=10

To analyze the sql statements:

1. tkprof ora_11598.trc myfilename
2. tkprof ora_11598.trc /tmp/myfilename
3. tkprof ora_11598.trc /tmp/myfilename explain=ap/ap
4. tkprof ora_23532.trc myfilename explain=po/po sort=execpu

7 STATSPACK:
------------

Statspack is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection,
automation, storage, and viewing of performance data.
The installation script (statscre.sql) calls several other scripts in order
to create the entire Statspack environment. (Note: You should run only the
installation script, not the base scripts that statscre.sql invokes.)
All the scripts you need for installing and running Statspack are in the
ORACLE_HOME/rdbms/admin directory for UNIX platforms and in
%ORACLE_HOME%\rdbms\admin for Microsoft Windows NT systems.

The simplest interactive way to take a snapshot is to log in to SQL*Plus
as the owner perfstat and execute the statspack.snap procedure:

SQL> connect perfstat/perfstat
SQL> execute statspack.snap;

You can use dbms_job to automate statistics collection.
The file statsauto.sql contains an example of how to do this,
scheduling a snapshot every hour. When you create a job by using dbms_job,
Oracle assigns the job a unique number that you can use for changing or removing
the job. In order to use dbms_job to schedule snapshots automatically, you must
set the job_queue_processes initialization parameter to greater than 0 in the
init.ora file:

# Set to enable the job-queue process to start.
# This allows dbms_job to schedule automatic
# statistics collection, using Statspack
job_queue_processes=1
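
A minimal sketch along the lines of statsauto.sql, submitting an hourly snapshot
job (the schedule is just an example; statsauto.sql itself adds more detail):

variable jobno number;
begin
  -- take a snapshot every hour, on the hour
  dbms_job.submit(:jobno, 'statspack.snap;',
                  trunc(sysdate+1/24,'HH'), 'trunc(SYSDATE+1/24,''HH'')');
  commit;
end;
/
print jobno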

Change the interval of statistics collection by using the dbms_job.interval
procedure:

execute dbms_job.interval(<job number>, 'SYSDATE+(1/48)');

In this case, 'SYSDATE+(1/48)' causes the statistics to be gathered each 1/48
of a day, i.e. every half hour.
To stop and remove the automatic-collection job:

execute dbms_job.remove(<job number>);

Install Statspack:

CREATE USER perfstat IDENTIFIED BY perfstat
DEFAULT TABLESPACE TOOLS TEMPORARY TABLESPACE TEMP;

GRANT CREATE SESSION TO PERFSTAT;
GRANT CONNECT TO PERFSTAT;
GRANT RESOURCE TO PERFSTAT;
GRANT UNLIMITED TABLESPACE TO PERFSTAT;

sqlplus sys
--
-- Install Statspack
-- Enter tablespace names when prompted
--
@?/rdbms/admin/spcreate.sql
--
-- Drop Statspack
-- Reverse of spcreate.sql
--
-- @?/rdbms/admin/spdrop.sql
--

The spcreate.sql install script automatically calls 3 other scripts needed:

spcusr - creates the user and grants privileges
spctab - creates the tables
spcpkg - creates the package

Check each of the three output files produced (spcusr.lis, spctab.lis, spcpkg.lis)
by the installation to ensure no errors were encountered, before continuing on to
the next step.

Using Statspack (gathering data):

sqlplus perfstat
--
-- Take a performance snapshot
--
execute statspack.snap;
--
-- Get a list of snapshots
--
column snap_time format a21
SELECT snap_id, to_char(snap_time,'MON dd, yyyy hh24:mi:ss') snap_time
FROM stats$snapshot;
--

NOTE: To include important timing information, set the init.ora parameter
timed_statistics to true.

To examine the change in instancewide statistics between two time periods, the
SPREPORT.SQL file is run
while connected to the PERFSTAT user. The SPREPORT.SQL command file is located in
the rdbms/admin directory
of the Oracle home.

You are prompted for the following:

- The beginning snapshot ID
- The ending snapshot ID
- The name of the report text file to be created
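
For instance (a sketch; the snapshot IDs and report name are examples, and the
prompt texts are paraphrased):

SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/spreport.sql
Enter value for begin_snap: 10
Enter value for end_snap:   12
Enter value for report_name: sp_10_12.lst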

==================
21. Miscellaneous:
==================

21.1 NLS:
=========

On the server:

1. character set specification in CREATE DATABASE
2. The server can load multiple locales at runtime from files specified in
   $ export ORA_NLSxx=$ORACLE_HOME/ocommon/nls/admin/data
3. NLS init.ora parameters for the user sessions.

If clients using different character sets will access the database, then choose a
superset that includes all client character sets. Otherwise, character conversions
may be necessary at the cost of increased overhead and potential data loss.

On the client:

1. the client has a local NLS environment setting.
2. the client connects to the database, a session is created, and the NLS
   environment is built from the NLS init.ora parameters.
   If the NLS_LANG environment variable is set on the client, the client
   communicates it to the server session, so both are the same.
   If there is no NLS_LANG, the init.ora NLS parameters apply to the server
   session.
3. The session NLS settings can be changed via ALTER SESSION. This only affects
   the PL/SQL and SQL statements executed on the server.

init.ora parameters on the server   : affect the sessions on the server
environment variables on the client : locale on the client, overrides session
ALTER SESSION statement             : changes the session, overrides init.ora
explicit in the SQL statement       : overrides everything

Example of an override:

in init.ora:   NLS_SORT=ENGLISH
on the client: ALTER SESSION SET NLS_SORT=FRENCH;

Examples:
---------

Example 1:
----------

ALTER SESSION SET nls_date_format = 'dd/mm/yy';

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';

ALTER SESSION SET NLS_LANGUAGE='ENGLISH';

ALTER SESSION SET NLS_LANGUAGE='NEDERLANDS';

export NLS_NUMERIC_CHARACTERS=',.'
ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.'

ALTER SESSION SET NLS_TERRITORY=France;


ALTER SESSION SET NLS_TERRITORY=America;

In SQL functions:

NLS parameters can be used explicitly to hardcode NLS behavior within a SQL
function.
Doing so will override the default values that are set for the session in the
initialization parameter file,
set for the client with environment variables, or set for the session by the ALTER
SESSION statement.
For example:

TO_CHAR(hiredate, 'DD/MON/YYYY', 'nls_date_language = FRENCH')

SELECT last_name FROM employees WHERE hire_date >
  TO_DATE('01-JAN-1999','DD-MON-YYYY', 'NLS_DATE_LANGUAGE = AMERICAN');

Example 2:
----------

SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.'


2 ;

Session altered.

SQL> select * from ap2;

NAME SAL
---------- ----------
ap 12,53
piet 89,7
SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS='.,';

Session altered.

SQL> select * from ap2;

NAME SAL
---------- ----------
ap 12.53
piet 89.7

priority:
---------

1. explicit in SQL
2. ALTER SESSION
3. environment variable
4. init.ora

NLS parameters, settable via:

NLS_CALENDAR              init.ora, env, alter session
NLS_COMP                  init.ora, env, alter session
NLS_CREDIT                -       , env, -
NLS_CURRENCY              init.ora, env, alter session
NLS_DATE_FORMAT           init.ora, env, alter session
NLS_DATE_LANGUAGE         init.ora, env, alter session
NLS_DEBIT                 -       , env, -
NLS_ISO_CURRENCY          init.ora, env, alter session
NLS_LANG                  -       , env, -
NLS_LANGUAGE              init.ora, -  , alter session
NLS_LIST_SEPARATOR        -       , env, -
NLS_MONETARY_CHARACTERS   -       , env, -
NLS_NCHAR                 -       , env, -
NLS_NUMERIC_CHARACTERS    init.ora, env, alter session
NLS_SORT                  init.ora, env, alter session
NLS_TERRITORY             init.ora, -  , alter session
NLS_DUAL_CURRENCY         init.ora, env, alter session

DATA DICTIONARY VIEWS:
----------------------

Applications can check the session, instance, and database NLS parameters by
querying
the following data dictionary views:

NLS_SESSION_PARAMETERS shows the NLS parameters and their values for the session
that is querying
the view. It does not show information about the character set.

NLS_INSTANCE_PARAMETERS shows the current NLS instance parameters that have been
explicitly set
and the values of the NLS instance parameters.
NLS_DATABASE_PARAMETERS shows the values of the NLS parameters that were used when
the database was created.
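
For instance, a quick check of a few current session settings (a sketch):

SQL> SELECT parameter, value
     FROM   nls_session_parameters
     WHERE  parameter IN ('NLS_SORT','NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS');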

Example:
--------

SQL> desc ap1;


Name Null? Type
----------------------------------------- -------- -------------
NAME VARCHAR2(10)
SAL VARCHAR2(10)

SQL> select * from ap1;

NAME SAL
---------- ----------
ap 12,53
piet 89,7

SQL> desc ap2;


Name Null? Type
----------------------------------------- -------- ----------------------------
NAME VARCHAR2(10)
SAL NUMBER

SQL> select * from ap2;

NAME SAL
---------- ----------
ap 12.53
piet 89.7

SQL> insert into ap2


2 select * from ap1;
select * from ap1
*
ERROR at line 2:
ORA-01722: invalid number

SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.';

Session altered.

SQL> insert into ap2


2 select * from ap1;

2 rows created.

21.2 More on AL32UTF8, AL16UTF16, UTF8:
=======================================

1) What is the National Character Set?
--------------------------------------
The National Character set (NLS_NCHAR_CHARACTERSET) is a character set which is
defined
in addition to the (normal) database character set and is used for data stored in

NCHAR, NVARCHAR2 and NCLOB columns. Your current value for the
NLS_NCHAR_CHARACTERSET can be found
with this select: select value from NLS_DATABASE_PARAMETERS where
parameter='NLS_NCHAR_CHARACTERSET';
You cannot have more than 2 charactersets defined in Oracle:
The NLS_CHARACTERSET is used for CHAR, VARCHAR2, CLOB columns;
The NLS_NCHAR_CHARACTERSET is used for NCHAR, NVARCHAR2, NCLOB columns.
NLS_NCHAR_CHARACTERSET is defined when the database is created and specified with
the
CREATE DATABASE command. The NLS_NCHAR_CHARACTERSET defaults to AL16UTF16 if
nothing is specified.

From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values:
UTF8 or AL16UTF16, which are Unicode character sets.
See Note 260893.1 Unicode character sets in the Oracle database for more info
about the difference between them. A lot of people think that they *need* to use
the NLS_NCHAR_CHARACTERSET to have UNICODE support in Oracle, but this is not
true: the NLS_NCHAR_CHARACTERSET (NCHAR, NVARCHAR2) is in 9i always Unicode, but
you can perfectly well use "normal" CHAR and VARCHAR2 columns for storing unicode
in a database that has an AL32UTF8 / UTF8 NLS_CHARACTERSET.
See also point 15. When trying to use another
NATIONAL characterset, the CREATE DATABASE command will fail with "ORA-12714
invalid national character set specified".
The character set identifier is stored with the column definition itself.

2) Which datatypes use the National Character Set?
--------------------------------------------------

There are three datatypes which can store data in the national character set:

NCHAR     - a fixed-length national character set character string.
            The length of the column is ALWAYS defined in characters
            (it always uses CHAR semantics).

NVARCHAR2 - a variable-length national character set character string.
            The length of the column is ALWAYS defined in characters
            (it always uses CHAR semantics).

NCLOB     - stores national character set data of up to four gigabytes.
            Data is always stored in UCS2 or AL16UTF16, even if the
            NLS_NCHAR_CHARACTERSET is UTF8.
This has very limited impact, for more info about this please see:
Note 258114.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=258114.1>
Possible action for CLOB/NCLOB storage after 10g upgrade
and if you use DBMS_LOB.LOADFROMFILE see
Note 267356.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=267356.1>
Character set conversion when using DBMS_LOB
If you don't know what CHAR semantics is, then please read
Note 144808.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=144808.1> Examples
and limits of BYTE and CHAR semantics usage

If you use N-types, DO use the (N'...') syntax when coding it so that Literals
are
denoted as being in the national character set by prepending letter 'N', for
example:

create table test(a nvarchar2(100));


insert into test values(N'this is a NLS_NCHAR_CHARACTERSET string');

3) How to know if I use N-type columns?
---------------------------------------

This select list all tables containing a N-type column:

select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB');

On a 9i database created without (!) the "sample" schema you will see these rows
(or less) returned:

OWNER TABLE_NAME
------------------------------ ------------------------------
SYS ALL_REPPRIORITY
SYS DBA_FGA_AUDIT_TRAIL
SYS DBA_REPPRIORITY
SYS DEFLOB
SYS STREAMS$_DEF_PROC
SYS USER_REPPRIORITY
SYSTEM DEF$_LOB
SYSTEM DEF$_TEMP$LOB
SYSTEM REPCAT$_PRIORITY

9 rows selected.

These SYS and SYSTEM tables may contain data if you are using:

* Fine Grained Auditing -> DBA_FGA_AUDIT_TRAIL
* Advanced Replication -> ALL_REPPRIORITY, DBA_REPPRIORITY, USER_REPPRIORITY,
  DEF$_LOB, DEF$_TEMP$LOB and REPCAT$_PRIORITY
* Advanced Replication or Deferred Transactions functionality -> DEFLOB
* Oracle Streams -> STREAMS$_DEF_PROC

If you have created the database with the DBCA and included
the sample schema then you will typically see:

OWNER TABLE_NAME
------------------------------------------------------------
OE BOMBAY_INVENTORY
OE PRODUCTS
OE PRODUCT_DESCRIPTIONS
OE SYDNEY_INVENTORY
OE TORONTO_INVENTORY
PM PRINT_MEDIA
SYS ALL_REPPRIORITY
SYS DBA_FGA_AUDIT_TRAIL
SYS DBA_REPPRIORITY
SYS DEFLOB
SYS STREAMS$_DEF_PROC
SYS USER_REPPRIORITY
SYSTEM DEF$_LOB
SYSTEM DEF$_TEMP$LOB
SYSTEM REPCAT$_PRIORITY

15 rows selected.

The OE and PM tables contain just sample data and can be dropped if needed.

4) Should I worry when I upgrade from 8i or lower to 9i or 10g?
---------------------------------------------------------------

* When upgrading from version 7:

The National Character Set did not exist in version 7,
so you cannot have N-type columns.
Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET
declaration and the standard sys/system tables.
So there is nothing to worry about...

* When upgrading from version 8 and 8i:

- If you have only the SYS / SYSTEM tables listed in point 3)
  then you don't have USER data using N-type columns.

  Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET
  declaration after the upgrade and the standard sys/system tables.
  So there is nothing to worry about...

  We recommend that you follow this note:
  Note 159657.1
  <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
  Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

- If you have more tables than the SYS / SYSTEM tables listed in point 3)
  (and they are also not the "sample" tables) then there are two possible cases:

  * Again, the next two points are *only* relevant when you DO have N-type USER
    data *

a) Your current 8 / 8i NLS_NCHAR_CHARACTERSET is in this list:

   JA16SJISFIXED, JA16EUCFIXED, JA16DBCSFIXED, ZHT32TRISFIXED,
   KO16KSC5601FIXED, KO16DBCSFIXED, US16TSTFIXED, ZHS16CGB231280FIXED,
   ZHS16GBKFIXED, ZHS16DBCSFIXED, ZHT16DBCSFIXED, ZHT16BIG5FIXED,
   ZHT32EUCFIXED

   Then the new NLS_NCHAR_CHARACTERSET will be AL16UTF16
   and your data will be converted to AL16UTF16 during the upgrade.

   We recommend that you follow this note:
   Note 159657.1
   <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
   Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

b) Your current 8 / 8i NLS_NCHAR_CHARACTERSET is UTF8:

   Then the new NLS_NCHAR_CHARACTERSET will be UTF8
   and your data will not be touched during the upgrade.

   We still recommend that you follow this note:
   Note 159657.1
   <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
   Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

c) Your current 8 / 8i NLS_NCHAR_CHARACTERSET is NOT in the list of point a)
   and is NOT UTF8:

   Then you will need to export your data and drop it before upgrading.
   We recommend that you follow this note:
   Note 159657.1
   <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
   Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

For more info about the National Character Set in Oracle8 see Note 62107.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=62107.1>

5) The NLS_NCHAR_CHARACTERSET is NOT changed to UTF8 or AL16UTF16 after
   upgrading to 9i.
-------------------------------------------------------------------------

That may happen if you have not set the ORA_NLS33 environment parameter correctly
to the 9i Oracle_Home during the upgrade.
Note 77442.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=77442.1> ORA_NLS
(ORA_NLS32, ORA_NLS33, ORA_NLS10) Environment Variables explained.

We recommend that you follow this note for the upgrade:
Note 159657.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

Strongly consider restoring your backup and doing the migration again,
or log a TAR, refer to this note and ask to assign the TAR to the
NLS/globalization team. That team can then assist you further.
However please do note that not all situations can be corrected,
so you might be asked to do the migration again...

6) Can I change the AL16UTF16 to UTF8 / I hear that there are problems with
   AL16UTF16.
----------------------------------------------------------------------------

a) If you do *not* use N-types then there is NO problem at all with AL16UTF16,
   because you are simply not using it, and we strongly advise you to keep
   the default AL16UTF16 NLS_NCHAR_CHARACTERSET.

b) If you *do* use N-types then there will be a problem with 8i clients and
   lower accessing the N-type columns (note that you will NOT have a problem
   selecting from "normal" non-N-type columns).
   More info about that is found here:
   Note 140014.1
   <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=140014.1> ALERT
   Oracle8/8i to Oracle9i/10g using New "AL16UTF16" National Character Set
   Note 236231.1
   <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=236231.1> New
   Character Sets Not Supported For Use With Developer 6i And Older Versions

   If this is a situation you find yourself in, we recommend to simply use UTF8
   as NLS_NCHAR_CHARACTERSET, or to create a second 9i db using UTF8 as NCHAR
   and use this as "in-between" between the 8i and the 9i db: you can create
   views in this new database that select from the AL16UTF16 9i db; the data
   will then be converted from AL16UTF16 to UTF8 in the "in-between" database,
   and that can be read by Oracle 8i.

   This is one of the 2 reasons why you should use UTF8 as NLS_NCHAR_CHARACTERSET.
   If you are NOT using N-type columns with pre-9i clients then there is NO reason
   to go to UTF8.

c) If you want to change to UTF8 because you are using transportable tablespaces
   from an 8i database, then check if you are using N-types in the 8i database
   that are included in the tablespaces that you are transporting:

   select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
   where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB');

   If yes, then you have the second reason to use UTF8 as
   NLS_NCHAR_CHARACTERSET.

   If not, then leave it at AL16UTF16 and log a TAR for the solution of the
   ORA-19736, and refer to this document.

d) You are in one of the 2 situations where it's really needed to change from
   AL16UTF16 to UTF8; log a TAR so that we can assist you.

Provide:

1) the output from:

   select distinct OWNER, TABLE_NAME, COLUMN_NAME, CHAR_LENGTH
   from DBA_TAB_COLUMNS where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB');

2) a CSSCAN output

IMPORTANT:
Please *DO* install version 1.2 or higher from TechNet for your version:
http://technet.oracle.com/software/tech/globalization/content.html
and use this.

copy all scripts and executables found in the zip file you downloaded
to your oracle_home overwriting the old versions.
Then run csminst.sql using these commands and SQL statements:

cd $ORACLE_HOME/rdbms/admin
set oracle_sid=<your SID>
sqlplus "sys as sysdba"
SQL>set TERMOUT ON
SQL>set ECHO ON
SQL>spool csminst.log
SQL> START csminst.sql

Check the csminst.log for errors.

Then run CSSCAN

csscan FULL=Y FROMNCHAR=AL16UTF16 TONCHAR=UTF8 LOG=Ncharcheck CAPTURE=Y

( note the usage of fromNchar and toNchar )

Upload the 3 resulting files and the output of the select while creating the TAR.

IMPORTANT:

Do NOT use the N_SWITCH.SQL script, this will corrupt your NCHAR data!

7) Is the AL32UTF8 problem the same as the AL16UTF16 / do I need the same patches?
----------------------------------------------------------------------------------
No, they may look similar but are 2 different issues.

For information about the possible AL32UTF8 issue please see
Note 237593.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=237593.1>
Problems connecting to AL32UTF8 databases from older versions (8i and lower)

8) But I still want <characterset> as NLS_NCHAR_CHARACTERSET, like I had in 8(i)!
---------------------------------------------------------------------------------

This is simply not possible.

From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values: UTF8 or
AL16UTF16.

Both UTF8 and AL16UTF16 are Unicode character sets, so they can
store whatever <characterset> you had as NLS_NCHAR_CHARACTERSET in 8(i).

If you are not using N-types then keep the default AL16UTF16 or use UTF8,
it doesn't matter if you don't use the types.

There is one condition in which this "limitation" can have an undesired effect:
when you are importing an Oracle8i Transportable Tablespace into Oracle9i
you can run into an ORA-19736 (with AL16UTF16 as well as with UTF8).
In that case log a TAR, refer to this note and ask to assign the TAR to the
NLS/globalization team. That team can then assist you to work around this
issue.

9) Do I need to set NLS_LANG to AL16UTF16 when creating/using the
   NLS_NCHAR_CHARACTERSET?
------------------------------------------------------------------

As clearly stated in
Note 158577.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=158577.1>
NLS_LANG Explained (How does Client-Server Character Conversion Work?)
point "1.2 What is this NLS_LANG thing anyway?"

* NLS_LANG is used to let Oracle know what character set your client's OS is
using, so that Oracle can do (if needed) conversion from the client's character
set to the database character set.

NLS_LANG is a CLIENT parameter and has no influence on the database side.
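
For example (a sketch; the value must match the actual character set of the
client OS, the one below is just an illustration):

$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1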

10) I try to use AL32UTF8 as NLS_NCHAR_CHARACTERSET but it fails with ORA-12714
--------------------------------------------------------------------------------

From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values:
UTF8 or AL16UTF16.

UTF8 is possible so that you can use it (when needed) for 8.x backwards
compatibility. In all other conditions AL16UTF16 is the preferred and best value.
AL16UTF16 has the same Unicode revision as AL32UTF8,
so there is no need for AL32UTF8 as NLS_NCHAR_CHARACTERSET.

11) I have the message "( possible ncharset conversion )" during import.
------------------------------------------------------------------------

in the import log you see something similar to this:

Import: Release 9.2.0.4.0 - Production on Fri Jul 9 11:02:42 2004


Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production


JServer Release 9.2.0.4.0 - Production
Export file created by EXPORT:V08.01.07 via direct path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)

This is normal and is not a error condition.

- If you do not use N-types then this is a pure informative message.

- But even in the case that you use N-types like NCHAR or NCLOB then this is not a
problem:

* the database will convert from the "old" NCHAR characterset to the new one
automatically.
(and - unlike the "normal" characterset - the NLS_LANG has no impact on this
conversion
during exp/imp)

* AL16UTF16 and UTF8 (the only 2 possible values in 9i) are Unicode character
  sets and so can store any character... So no data loss is to be expected.

12) Can I use AL16UTF16 as NLS_CHARACTERSET?
--------------------------------------------

No, AL16UTF16 can only be used as NLS_NCHAR_CHARACTERSET in 9i and above.
Trying to create a database with an AL16UTF16 NLS_CHARACTERSET will fail.

13) I'm inserting <special character> in an NCHAR or NVARCHAR2 column but it
    comes back as ? or ? ...
------------------------------------------------------------------------------

see point 13 in Note 227330.1


<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=227330.1>
Character Sets & Conversion - Frequently Asked Questions

14) Do I need to change the NLS_NCHAR_CHARACTERSET in 8i to UTF8 BEFORE
    upgrading to 9i/10g?
-------------------------------------------------------------------------

No, see point 4) in this note.

15) Having a UTF8 NLS_CHARACTERSET db, is there an advantage to using AL16UTF16
    N-types?
---------------------------------------------------------------------------------

There might be 2 reasons:

a) one possible advantage is storage (disk space).

UTF8 uses 1 up to 3 bytes, AL16UTF16 always 2 bytes.


If you have a lot of non-western data (cyrillic, Chinese, Japanese, Hindi
languages..)
then i can be advantageous to use N-types for those columns.
For western data (english, french, spanish, dutch, german, portuguese etc...)
UTF8 will use in most cases less disk space then AL16UTF16.

Note 260893.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=260893.1>
Unicode character sets in the Oracle database

This is not true for (N)CLOB; they are both encoded in an internal fixed-width
Unicode character set,
Note 258114.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=258114.1>
Possible action for CLOB/NCLOB storage after 10g upgrade
so they will use the same amount of disk space.

b) The other possible advantage is extending the limits of CHAR semantics.

   For a single-byte character set encoding, the character and byte length are
   the same. However, multi-byte character set encodings do not correspond to
   the bytes, making sizing the column more difficult.

   Hence the reason why CHAR semantics was introduced. However, we still have
   some physical underlying byte-based limits, and development has chosen to
   allow the full usage of those underlying limits. This results in the
   following table, giving the maximum amount of CHARacters occupying the MAX
   datalength that can be stored for a certain datatype in 9i and up.

   The MAX column is the MAXIMUM amount of CHARACTERS that can be stored
   occupying the MAXIMUM data length. Seeing that UTF8 and AL32UTF8 are varying
   width character sets, this means that a string of X chars can be X to X*3
   (or X*4 for AL32UTF8) bytes.

   The MIN column is the maximum size that you can *define* and that Oracle can
   store if all data is the MINIMUM datalength (1 byte for AL32UTF8 and UTF8)
   for that character set.

   N-types (NVARCHAR2, NCHAR) are *always* defined in CHAR semantics, you cannot
   define them in BYTE.

   All numbers are CHAR definitions:

              UTF8 (1-3 bytes)   AL32UTF8 (1-4 bytes)   AL16UTF16 (2 bytes)
              MIN      MAX       MIN      MAX           MIN      MAX
   CHAR       2000     666       2000     500           N/A      N/A
   VARCHAR2   4000     1333      4000     1000          N/A      N/A
   NCHAR      2000     666       N/A      N/A           1000     1000
   NVARCHAR2  4000     1333      N/A      N/A           2000     2000

   (N/A means not possible)

   This means that if you try to store more than 666 characters
   that occupy 3 bytes in UTF8 in a CHAR UTF8 column, you will still get an
   ORA-01401: inserted value too large for column
   (or from 10g onwards: ORA-12899: value too large for column)
   error, even if you have defined the column as CHAR (2000 CHAR).
   So here it might be a good idea to define that column as NCHAR,
   which will raise the MAX to 1000 chars...

Note 144808.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=144808.1> Examples
and limits of BYTE and CHAR semantics usage
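
A small sketch illustrating the byte limit behind CHAR semantics (assuming a
UTF8 NLS_CHARACTERSET; the euro sign U+20AC takes 3 bytes in UTF8, and the table
name is made up):

SQL> create table sem_test (a char(2000 char));
SQL> insert into sem_test values ( rpad(unistr('\20AC'), 666, unistr('\20AC')) );
-- 666 * 3 bytes = 1998 bytes: fits within the 2000-byte limit of CHAR.
SQL> insert into sem_test values ( rpad(unistr('\20AC'), 667, unistr('\20AC')) );
-- 667 * 3 bytes = 2001 bytes: fails with ORA-01401 (ORA-12899 from 10g),
-- despite the CHAR(2000 CHAR) definition.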

Disadvantages of using N-types:

* You might have some problems with older clients if using AL16UTF16,
  see point 6) b) in this note.
* Be sure that you use (AL32)UTF8 as NLS_CHARACTERSET, otherwise you will run
  into point 13 of this note.
* Do not expect a higher *performance* by using AL16UTF16; it might be faster
  on some systems, but that has more to do with I/O than with the database
  kernel.
* If you use N-types, DO use the (N'...') syntax when coding, so that literals
  are denoted as being in the national character set by prepending the letter
  'N', for example:

  create table test(a nvarchar2(100));
  insert into test values(N'this is an NLS_NCHAR_CHARACTERSET string');

Normally you will choose to use VARCHAR2 (using an (AL32)UTF8 NLS_CHARACTERSET)
for simplicity, to avoid confusion and possible other limitations which might be
imposed by your application or programming language on the usage of N-types.

16) I have a message about the NCHAR type when running DBUA (Database Upgrade
    Assistant), upgrading from 8i.

AL16UTF16
The default Oracle character set for the SQL NCHAR data type, which is used for
the national character set.
It encodes Unicode data in the UTF-16 encoding.

AL32UTF8
An Oracle character set for the SQL CHAR data type, which is used for the database
character set.
It encodes Unicode data in the UTF-8 encoding.

Unicode
Unicode is a universal encoded character set that allows information from any
language to be stored by using a single character set. Unicode provides a unique
code value for every character, regardless of the platform, program, or language.

Unicode database
A database whose database character set is UTF-8.

Unicode code point
A 16-bit binary value that can represent a unit of encoded text for processing
and interchange. Every point between U+0000 and U+FFFF is a code point.

Unicode datatype
A SQL NCHAR datatype (NCHAR, NVARCHAR2, and NCLOB). You can store Unicode
characters in columns
of these datatypes even if the database character set is not Unicode.

unrestricted multilingual support
The ability to use as many languages as desired. A universal character set, such
as Unicode, helps to provide unrestricted multilingual support because it
supports a very large character repertoire, encompassing most modern languages
of the world.

UTFE
A Unicode 3.0 UTF-8 Oracle database character set with 6-byte supplementary
character support.
It is used only on EBCDIC platforms.

UTF8
The UTF8 Oracle character set encodes characters in one, two, or three bytes.
It is for ASCII-based platforms. The UTF8 character set supports Unicode 3.0.
Although specific supplementary characters were not assigned code points in
Unicode until
version 3.1, the code point range was allocated for supplementary characters in
Unicode 3.0.
Supplementary characters are treated as two separate, user-defined characters that
occupy 6 bytes.

UTF-8
The 8-bit encoding of Unicode. It is a variable-width encoding. One Unicode
character can
be 1 byte, 2 bytes, 3 bytes, or 4 bytes in UTF-8 encoding. Characters from the
European scripts
are represented in either 1 or 2 bytes. Characters from most Asian scripts are
represented in
3 bytes. Supplementary characters are represented in 4 bytes.

UTF-16
The 16-bit encoding of Unicode. It is an extension of UCS-2 and supports the
supplementary characters
defined in Unicode 3.1 by using a pair of UCS-2 code points.
One Unicode character can be 2 bytes or 4 bytes in UTF-16 encoding.
Characters (including ASCII characters) from European scripts and most Asian
scripts are
represented in 2 bytes. Supplementary characters are represented in 4 bytes.

wide character
A fixed-width character format that is useful for extensive text processing
because it allows data to be processed in consistent, fixed-width chunks. Wide
characters are intended to support internal character processing

Oracle started supporting Unicode based character sets in Oracle7.
Here is a summary of the Unicode character sets supported in Oracle:

+------------+---------+-----------------+
| Charset | RDBMS | Unicode version |
+------------+---------+-----------------+
| AL24UTFFSS | 7.2-8.1 | 1.1 |
| | | |
| UTF8 | 8.0-10g | 2.1 (8.0-8.1.7) |
| | | 3.0 (8.1.7-10g) |
| | | |
| UTFE | 8.0-10g | 2.1 (8.0-8.1.7) |
| | | 3.0 (8.1.7-10g) |
| | | |
| AL32UTF8 | 9.0-10g | 3.0 (9.0) |
| | | 3.1 (9.2) |
| | | 3.2 (10.1) |
| | | |
| AL16UTF16 | 9.0-10g | 3.0 (9.0) |
| | | 3.1 (9.2) |
| | | 3.2 (10.1) |
+------------+---------+-----------------+

AL24UTFFSS
AL24UTFFSS was the first Unicode character set supported by Oracle. It was
introduced in Oracle 7.2. The AL24UTFFSS encoding scheme was based on the
Unicode 1.1 standard, which is now obsolete. AL24UTFFSS has been desupported
from Oracle9i. The migration path for existing AL24UTFFSS databases is to
upgrade the database to 8.0 or 8.1, then upgrade the character set to UTF8
before upgrading the database further to 9i or 10g.
[NOTE:234381.1]
<http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=234381.
1&p_database_id=NOT> Changing AL24UTFFSS to UTF8 - AL32UTF8 with ALTER DATABASE
CHARACTERSET

UTF8
UTF8 was the UTF-8 encoded character set in Oracle8 and 8i. It followed the
Unicode 2.1 standard between Oracle 8.0 and 8.1.6, and was upgraded to Unicode
version 3.0 for versions 8.1.7, 9i and 10g. To maintain compatibility with
existing installations this character set will remain at Unicode 3.0 in future
Oracle releases. Although specific supplementary characters were not assigned
to Unicode until version 3.1, the allocation for these characters were already
defined in 3.0. So if supplementary characters are inserted in a UTF8 database,
it will not corrupt the actual data inside the database. They will be treated as
2 separate undefined characters, occupying 6 bytes in storage. We recommend that
customers switch to AL32UTF8 for full supplementary character support.

UTFE
This is the UTF8 database character set for the EBCDIC platforms. It has the
same properties as UTF8 on ASCII based platforms. The EBCDIC Unicode
transformation format is documented in Unicode Technical Report #16 UTF-EBCDIC.
Which can be found at http://www.unicode.org/unicode/reports/tr16/

AL32UTF8
This is the UTF-8 encoded character set introduced in Oracle9i. AL32UTF8 is the
database character set that supports the latest version (3.2 in 10g) of the
Unicode standard. It also provides support for the newly defined supplementary
characters. All supplementary characters are stored as 4 bytes.
AL32UTF8 was introduced because when UTF8 was designed (in the times of Oracle8)
there was no concept of supplementary characters, therefore UTF8 has a maximum
of 3 bytes per character. Changing the design of UTF8 would break backward
compatibility, so a new character set was introduced. The introduction of
surrogate pairs should mean that no significant architecture changes are needed
in future versions of the Unicode standard, so the plan is to keep enhancing
AL32UTF8 as necessary to support future version of the Unicode standard, for
example work is now underway to make sure we support Unicode 4.0 in AL32UTF8
in the release after 10.1.

AL16UTF16
This is the first UTF-16 encoded character set in Oracle. It was introduced in
Oracle9i as the default national character set (NLS_NCHAR_CHARACTERSET).
AL16UTF16 supports the latest version (3.2 in 10g) of the Unicode standard.
It also provides support for the newly defined supplementary characters.
All supplementary characters are stored as 4 bytes.
As with AL32UTF8, the plan is to keep enhancing AL16UTF16 as
necessary to support future version of the Unicode standard.
AL16UTF16 cannot be used as a database character set (NLS_CHARACTERSET),
only as the national character set (NLS_NCHAR_CHARACTERSET).
The database character set is used to identify and to hold SQL,
SQL metadata and PL/SQL source code. It must have either single byte 7-bit ASCII
or single byte EBCDIC as a subset, whichever is native to the deployment
platform. Therefore, it is not possible to use a fixed-width, multi-byte
character set (such as AL16UTF16) as the database character set.
Trying to create a database with AL16UTF16 as character set in 9i and up will give
"ORA-12706: THIS CREATE DATABASE CHARACTER SET IS NOT ALLOWED".

Further reading
---------------
All the above information is taken from the white paper "Oracle Unicode database
support". The paper itself contains much more information and is available from:
http://otn.oracle.com/tech/globalization/pdf/TWP_Unicode_10gR1.pdf

References
----------
The following URLs contain a complete list of hex values and character
descriptions for every Unicode character:
Unicode Version 3.2: http://www.unicode.org/Public/3.2-Update/UnicodeData-
3.2.0.txt
Unicode Version 3.1: http://www.unicode.org/Public/3.1-Update/UnicodeData-
3.1.0.txt
Unicode Version 3.0: http://www.unicode.org/Public/3.0-Update/UnicodeData-
3.0.0.txt
Unicode Versions 2.x:
http://www.unicode.org/unicode/standard/versions/enumeratedversions.html
Unicode Version 1.1: http://www.unicode.org/Public/1.1-Update/UnicodeData-
1.1.5.txt
A description of the file format can be found at:
http://www.unicode.org/Public/UNIDATA/UnicodeData.html
For a glossary of Unicode terms, see:
http://www.unicode.org/glossary/

At the above locations you can find the Unicode standard; all characters are
referenced there with their UCS-2 code point.

Some further notes:
===================

Note 1:
-------

Thanks for the detailed reply.


>
> >Furthermore the use of NLS columns on a utf8 database (al32utf8 would be
> better by the way) is
> >subject to questions. Correct me if I'm wrong but I believe that most
> >asian character sets can be translated into utf8 without loosing any
> >information. The only exception to this statement is for surrogate pairs
> >and that's the only difference between al32utf8 and utf8 in Oracle.
> >al32utf8 supports surrogate pairs.
>
> I found from Oracle documentation that UTF8 supports surrogate pairs but
> requires 6 bytes for surrogate pairs.

I should have clarified : the jdbc drivers don't support these 6-bytes
utf8 surrogate pairs. That's the reason why we introduced al32utf8 as
one of the native character set (ascii, isolatin1, utf8, al32utf8, ucs2,
al24utffss).

Note 2:
-------
> AL32UTF8
> The AL32UTF8 character set encodes characters in one to three bytes.
> Surrogate
> pairs require four bytes. It is for ASCII-based platforms.
>
> UTF8
> The UTF8 character set encodes characters in one to three bytes. Surrogate
> pairs
> require six bytes. It is for ASCII-based platforms.
>
> AL32UTF8
> ---------
> Advantages
> ----------
> 1. Surrogate pair Unicode characters
> are stored in the standard 4 bytes
> representation, and there is no
> data conversion upon retrieval
> and insertion of those surrogate
> characters. Also, the storage for
> those characters requires less disk
> space than that of the same
> characters encoded in UTF8.
>
> Disadvantages
> -------------
> 1. You cannot specify the length of SQL CHAR
> types in the number of characters (Unicode
> code points) for surrogate characters. For
> example, surrogate characters are treated as
> one code point rather than the standard of two
> code points.
> 2. The binary order for SQL CHAR columns is
> different from that of SQL NCHAR columns
> when the data consists of surrogate pair
> Unicode characters. As a result, CHAR columns
> NCHAR columns do not always have the same
> sort for identical strings.
>
> UTF8
> ----
> Advantages
> ----------
> 1. You can specify the length of SQL
> CHAR types as a number of
> characters.
> 2. The binary order on the SQL CHAR
> columns is always the same as
> that of the SQL NCHAR columns
> when the data consists of the same
> surrogate pair Unicode characters.
> As a result, CHAR columns and
> NCHAR columns have the same
> sort for identical strings.
>
> Disadvantages
> -------------
> 1. Surrogate pair Unicode characters are stored
> as 6 bytes instead of the 4 bytes defined by the
> Unicode standard. As a result, Oracle has to
> convert data for those surrogate characters.
>
> I dont understand the 1st disadvantage of AL32UTF8 encoding !! If surrogate
> characters are considered 1 codepoint, then if I declare a CHAR column as of
> length 40 characters (codepoints) , then I can enter 40 surrogate
> characters.

Note 3:
-------

Universal Character Sets
========================

Character Set  Description                                  Comments          Language, Country or Region
=============  ===========================================  ================  ===========================
AL16UTF16      Unicode 3.1 UTF-16 Universal character set   MB, EURO, FIXED   Universal Unicode
AL32UTF8       Unicode 3.1 UTF-8 Universal character set    MB, ASCII, EURO   Universal Unicode
UTF8           Unicode 3.0 UTF-8 Universal character set    MB, ASCII, EURO   Universal Unicode,
                                                                              CESU-8 compliant
UTFE           EBCDIC form of Unicode 3.0 UTF-8             MB, EURO          Universal Unicode
               Universal character set

Note 4:
-------

WE8ISO is a single byte character set. It has 255 characters.

Korean data requires a multi-byte character set -- each character could be 1, 2,
3 or more bytes. It is a variable length encoding scheme. It has more than,
way more than, 255 characters. I don't see it fitting into we8iso unless they
use RAW, in which case it is just bytes, not characters at all.

Note 5:
-------

Hi Tom,

We migrated our DB 8.1.7 to 9.2. In 8.1.7 we used the UTF8 character set. It
remains the same in 9.2.
We know that Oracle 9.2 doesn't have UTF8 but AL32UTF8.
Can we keep this UTF8 or have to change to AL32UTF8.
If we need to change, may we do it by :
alter database character set AL32UTF8
or
we must use exp/imp utility?

Regards

Followup:
what do you mean -- utf8 is still a valid character set?

Note 6:
-------

Hi Tom,

We are migrating from oracle 8.1.6 to oracle 9 R2. We have about 14 oracle
instances. All instances have the WE8ISO88591P1 character set. Our company is
expanding globally so we are thinking of using a unicode character set with
oracle 9.
I have few questions on this issue.

1) What is the difference between UTF-8 and UTF-16?
   Are AL32UTF8 and UTF-8 the same character set or are they different?
   Are UTF-16 and AL16UTF16 the same character set or different?

2) Which character set is a superset of all character sets?
   If there is any, does oracle support that character set?

3) Do we have to change our pl/sql procedures if we move to a unicode database?
   The reason for this question is our developers are using ascii characters for
   carriage return and line feed, like chr(10) and chr(13), and some other ascii
   characters.

4) What is impact on CLOB ?

5) What will be the size of the database? Our production DB size is currently
50GB. What it would be in unicode?

Thanks

basically utf8 is unicode 3.0 support, utf16 is unicode 3.1

there is no super super "top" set.

Your plsql routines may well have to change -- your data model may well have to
change.

You'll find that in utf, european characters (except ascii -- 7bit data) all
take 2 bytes. That varchar2(80) you have in your database? It might only hold
40 characters of european data (or even less of other kinds of data). It is 80
bytes (you can use the new 9i syntax varchar2( N char ) -- it'll allocate in
characters, not bytes).

So, you could find your 80 character description field cannot hold 80
characters.
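
A quick sketch of the two length semantics (table and column names are just
illustrative):

CREATE TABLE t_bytes (descr VARCHAR2(80));        -- 80 BYTES (the default)
CREATE TABLE t_chars (descr VARCHAR2(80 CHAR));   -- 80 CHARACTERS, whatever their byte length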

You might find that x := a || b; fails -- with "string too long" in your plsql
code due to the increased size.

You might find that your string intensive routines run slower (substr(x,1,80) is
no longer byte 1 .. byte 80 -- Oracle has to look through the string to find
where characters start and stop -- it is more complex).

chr(10) and chr(13) should work fine, they are simple ASCII.
On clob -- same impact as on varchar2, same issues.

Your database could balloon to 200gb, but it will be somewhere between 50 and
200. As unicode is a VARYING WIDTH encoding scheme, it is impossible to be
precise -- it is not a fixed width scheme, so we don't know how big your strings
will get to be.

21.3 Oracle Rowid's
-------------------

Rowid's: Every table row has an internal rowid which contains information about
object_id, block_id, file#.
Also you can query on the "logical" number rownum.

SQL> SELECT * FROM charlie.xyz;

ID NAME
--------- --------------------
1 joop
2 gerrit

SQL> SELECT rownum FROM charlie.xyz;

ROWNUM
---------
1
2

SQL> SELECT rowid FROM SALES.xyz;

ROWID
------------------
AAAI92AAQAAAFXbAAA
AAAI92AAQAAAFXbAAB

- DBMS_ROWID:

Every row has a rowid. Every row also has an associated
logical "rownum" on which you can query.

The rowid is an 18-character structure that identifies the
block WHERE the row is stored.

The old format is the restricted format of Oracle 7.
The new format is the extended format of Oracle 8, 8i.

format: OOOOOOFFFBBBBBBRRR

OOOOOO=data object_id
FFF=relative datafile number
BBBBBB=block_id
RRR=row in block

The dbms package DBMS_ROWID has several functions to convert FROM
one format to the other.

DBMS_ROWID EXAMPLES:
--------------------

SELECT DBMS_ROWID.ROWID_TO_EXTENDED(ROWID,null,null,0),
DBMS_ROWID.ROWID_TO_RESTRICTED(ROWID,0), rownum
FROM CHARLIE.XYZ;

SELECT dbms_rowid.rowid_block_number(rowid)
FROM emp
WHERE ename = 'KING';

SELECT dbms_rowid.rowid_block_number(rowid)
FROM TCMLOGDBUSER.EVENTLOG
WHERE id = 5;

This example returns the ROWID for a row in the TCMLOGDBUSER.EVENTLOG table,
extracts the data object number
FROM the ROWID, using the ROWID_OBJECT function in the DBMS_ROWID package, then
displays the object number:

DECLARE
object_no INTEGER;
row_id ROWID;
BEGIN
SELECT ROWID INTO row_id FROM TCMLOGDBUSER.EVENTLOG
WHERE id=5;
object_no := dbms_rowid.rowid_object(row_id);
dbms_output.put_line('The obj. # is '|| object_no);
END;
/

PL/SQL procedure successfully completed.

SQL> set serveroutput on
SQL> /
The obj. # is 28954

PL/SQL procedure successfully completed.

SQL> select * from dba_objects where object_id=28954;

OWNER
------------------------------
OBJECT_NAME
-----------------------------------------------------------
SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
OBJECT_TYPE CREATED LAST_DDL_ TIMESTAMP
------------------ --------- --------- -------------------
STATUS T G S
------- - - -
TCMLOGDBUSER
EVENTLOG
28954 28954
TABLE 05-DEC-04 05-DEC-04 2004-12-05:22:26:10
VALID N N N

21.4 HETEROGENEOUS SERVICES:
----------------------------

Generic connectivity is intended for low-end data integration solutions
requiring the ad hoc query capability to connect from Oracle8i to non-Oracle
database systems. Generic connectivity is enabled by Oracle Heterogeneous
Services, allowing you to connect to non-Oracle systems with improved
performance and throughput.
Generic connectivity is implemented as a Heterogeneous Services ODBC agent.
An ODBC agent is included as part of your Oracle8i system.

To access the non-Oracle data store using generic connectivity, the agent works
with an ODBC driver. Oracle8i provides support for the ODBC driver interface.
The driver that you use must be on the same machine as the agent.
The non-Oracle data stores can reside on the same machine as Oracle8i or a
different machine.

Agent processes are usually started when a user session makes its first
non-Oracle system access through a database link. These connections are made
using Oracle's remote data access software, Oracle Net Services, which enables
both client-server and server-server communication. The agent process continues
to run until the user session is disconnected or the database link is
explicitly closed.

Multithreaded agents behave slightly differently. They have to be explicitly
started and shut down by a database administrator instead of automatically
being spawned by Oracle Net Services.

Oracle has Generic Connectivity agents for ODBC and OLE DB that enable you to use
ODBC and OLE DB drivers to access non-Oracle systems that have an ODBC or an
OLE DB interface.

Setup:
------

1. HS data dictionary
---------------------

To install the data dictionary tables and views for Heterogeneous Services, you
must run a script
that creates all the Heterogeneous Services data dictionary tables, views, and
packages.
On most systems the script is called caths.sql and resides in
$ORACLE_HOME/rdbms/admin.

Check for the existence of the Heterogeneous Services data dictionary views.

All normal standard preparations for HS need to be in place in Oracle 9i.
To recap this here, if you must install HS from scratch:
- run caths.sql as SYS on the Ora9i DB Server.
- The HS Agent will be installed as part of the 9i DB install.
  It will be started as part of the listener.
- On NT/2000, the agent works with an OLE DB or ODBC driver to connect
  to the target db.
- The DB Server will connect to the agent through NET8, which is why
  a tnsnames.ora and a listener.ora entry need to be set up.

You can also check on the HS installation. Just check on the existence of the
HS% views in the SYS schema, for example, SYS.HS_FDS_CLASS.

2. tnsnames.ora and listener.ora
--------------------------------

To initiate a connection to the non-Oracle system, the Oracle9i server starts an
agent process through the Oracle Net listener. For the Oracle9i server to be
able to connect to the agent, you must configure tnsnames.ora and listener.ora.

------------------------------------------------------------------------------

tnsnames examples:

Sybase_sales= (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)
(HOST=dlsun206) -- local machine
(PORT=1521)
)
(CONNECT_DATA = (SERVICE_NAME=SalesDB)
)
(HS = OK)
)

TNSNAMES.ORA:

hsmsql =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = tcp)(host=winhost)(port=1521)))   -- local machine
    (CONNECT_DATA =
      (SID = msql))        -- needs to match the sid in listener.ora
    (HS=OK)
  )

TG4MSQL.WORLD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ukp15340)(PORT = 1528) )
(CONNECT_DATA = (SID = tg4msql)
)
(HS = OK)
)
-------------------------------------------------------------------------------
listener.ora examples:

LISTENER =
(ADDRESS_LIST =
(ADDRESS= (PROTOCOL=tcp)
(HOST = dlsun206)
(PORT = 1521)
)
)
...
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC = (SID_NAME=SalesDB)
(ORACLE_HOME=/home/oracle/megabase/9.0.1)
(PROGRAM=tg4mb80)
(ENVS=LD_LIBRARY_PATH=non_oracle_system_lib_directory)
)
)

LISTENER.ORA:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = winhost)(PORT = 1521)))))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = msql)         <== needs to match the sid in tnsnames.ora
      (ORACLE_HOME = E:\Ora816)
      (PROGRAM = hsodbc)        <== hsodbc is the executable
    )
  )

3. create the initialization file:
----------------------------------

Create the initialization file. Oracle supplies a sample initialization file
named "inithsodbc.ora", which is stored in the $ORACLE_HOME\hs\admin directory.
To create an initialization file, copy the appropriate sample file and rename
the copy to init<HS_SID>.ora. In this example the SID noted in the listener and
tnsnames is msql, so our new initialization file is called initmsql.ora.

INITMSQL.ORA
# HS init parameters
#
HS_FDS_CONNECT_INFO = msql          <= odbc data_source_name
HS_FDS_TRACE_LEVEL = 0              <= trace levels 0 - 4 (4 is verbose)
HS_FDS_TRACE_FILE_NAME = hsmsql.trc <= trace file name
#
# Environment variables required for the non-Oracle system
# set <envvar>=<value>

HS_FDS_SHAREABLE_NAME
Default value: none
Range of values: not applicable
Specifies the full path name to the ODBC library. This parameter is required
when you are using generic connectivity to access data from an ODBC provider
on a UNIX machine.
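
On UNIX the init file might then look like this (the DSN name and library path
are assumptions for illustration):

# init<HS_SID>.ora on a UNIX machine -- example values only
HS_FDS_CONNECT_INFO = my_odbc_dsn
HS_FDS_TRACE_LEVEL = 0
HS_FDS_SHAREABLE_NAME = /usr/local/lib/libodbc.so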

4. create a database link:
--------------------------

CREATE DATABASE LINK sales
USING 'Sybase_sales';
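
With the listener, init file and link in place, a query through the link should
reach the non-Oracle system. A sketch (the table name is hypothetical):

SELECT * FROM customers@sales;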

Common Errors:
--------------

Related executables/scripts: agtctl (AGTCTL.exe), hsodbc.exe, caths.sql.
Typical agtctl errors: ORA-28591 unable to access parameter file,
ORA-28592 agent SID not set.

(Open question in these notes: what is the difference between agtctl and
lsnrctl dbsnmp_start?)

Error: ORA-28591 Text: agent control utility: unable to access parameter file
---------------------------------------------------------------------------
Cause:  The agent control utility was unable to access its parameter file.
        This could be because it could not find its admin directory or because
        permissions on the directory were not correctly set.
Action: The agent control utility puts its parameter file in either the
        directory pointed to by the environment variable AGTCTL_ADMIN or in the
        directory pointed to by the environment variable TNS_ADMIN. Make sure
        that at least one of these environment variables is set and that it
        points to a directory that the agent has access to.

SET AGTCTL_ADMIN=\OPT\ORACLE\ORA81\HS\ADMIN

Error: ORA-28592 Text: agent control utility: agent SID not set
---------------------------------------------------------------------------
Cause:  The agent needs to know the value of the AGENT_SID parameter before it
        can process any commands. If it does not have a value for AGENT_SID
        then all commands will fail.
Action: Issue the command SET AGENT_SID <value> and then retry the command
        that failed.

Error:
------

fix:
Set the HS_FDS_TRACE_FILE_NAME to a filename:
HS_FDS_TRACE_FILE_NAME = test.log

or comment it out:

#HS_FDS_TRACE_FILE_NAME

Error: incorrect characters
---------------------------

Change the HS_LANGUAGE to a correct NLS setting,
like AMERICAN_AMERICA.WE8MSWIN1252.

Error: ORA-02085
----------------

HS_FDS_CONNECT_INFO = <SystemDSN_name>
HS_FDS_TRACE_LEVEL = 0
HS_FDS_TRACE_FILE_NAME = c:\hs.log
HS_DB_NAME = exhsodbc -- case sensitive
HS_DB_DOMAIN = ch.oracle.com -- case sensitive

ERROR: ORA-02085
----------------

SET GLOBAL_NAMES TRUE

ERROR: ORA-02068 and ORA-28511
------------------------------

LD_LIBRARY_PATH=/u06/home/oracle/support/network/ODBC/lib
If the LD_LIBRARY_PATH does not contain the path to the ODBC library,
add the ODBC library path and start the listener with this environment.

LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH

When the listener launches the agent hsodbc, the agent inherits the
environment from the listener and needs to have the ODBC library path in order
to access the ODBC shareable file. The shareable file is defined in
the init<sid>.ora file located in the $ORACLE_HOME/hs/admin directory.
HS_FDS_SHAREABLE_NAME=/u06/home/oracle/support/network/ODBC/lib/libodbc.so

21.5 SET EVENTS:
----------------

Note 1:
-------

- What is a database EVENT and how does one set it?

Oracle trace events are useful for debugging the Oracle database server. The
following two examples
are simply to demonstrate syntax. Refer to later notes on this page for an
explanation of what these
particular events do.
Events can be activated either by adding them to the INIT.ORA parameter file, e.g.

event='1401 trace name errorstack, level 12'

... or by issuing an ALTER SESSION SET EVENTS command, e.g.

alter session set events '10046 trace name context forever, level 4';

The alter session method only affects the user's current session, whereas changes
to the INIT.ORA file will affect all sessions once the database has been
restarted.

- What database events can be set?

The following events are frequently used by DBAs and Oracle Support to diagnose
problems:
10046 trace name context forever, level 4
  Trace SQL statements and show bind variables in trace output.

10046 trace name context forever, level 8
  This shows wait events in the SQL trace files.

10046 trace name context forever, level 12
  This shows both bind variable names and wait events in the SQL trace files.

1401 trace name errorstack, level 12
1401 trace name errorstack, level 4
1401 trace name processstate
  Dumps out trace information if an ORA-1401 "inserted value too large for
  column" error occurs. The 1401 can be replaced by any other Oracle Server
  error code that you want to trace.

60 trace name errorstack level 10
  Shows where in the code Oracle gets a deadlock (ORA-60), and may help to
  diagnose the problem.
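
To switch event 10046 off again in the current session, the usual counterpart is:

alter session set events '10046 trace name context off';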

- The following list of events are examples only. They might be version specific,
so please call Oracle before using them:

10210 trace name context forever, level 10
10211 trace name context forever, level 10
10231 trace name context forever, level 10
  These events prevent database block corruptions.

10049 trace name context forever, level 2
  Memory protect cursor

10210 trace name context forever, level 2
  Data block check

10211 trace name context forever, level 2
  Index block check

10235 trace name context forever, level 1
  Memory heap check

10262 trace name context forever, level 300
  Allow a 300 byte memory leak per connection

- How can one dump internal database structures?

The following (mostly undocumented) commands can be used to obtain information
about internal database structures.

-- Dump control file contents
alter session set events 'immediate trace name CONTROLF level 10'
/

-- Dump file headers
alter session set events 'immediate trace name FILE_HDRS level 10'
/

-- Dump redo log headers
alter session set events 'immediate trace name REDOHDR level 10'
/

-- Dump the system state
-- NOTE: Take 3 successive SYSTEMSTATE dumps, with 10 minute intervals
alter session set events 'immediate trace name SYSTEMSTATE level 10'
/

-- Dump the process state
alter session set events 'immediate trace name PROCESSSTATE level 10'
/

-- Dump Library Cache details
alter session set events 'immediate trace name library_cache level 10'
/

-- Dump optimizer statistics whenever a SQL statement is parsed
-- (hint: change the statement or flush the shared pool)
alter session set events '10053 trace name context forever, level 1'
/

-- Dump a database block (file/block must first be converted to a DBA address)
-- Convert file and block number to a DBA (database block address). Eg:
variable x number
exec :x := dbms_utility.make_data_block_address(1,12);
print x
alter session set events 'immediate trace name blockdump level 50360894'
/

ALTER SESSION SET EVENTS '1652 trace name errorstack level 1 ';

or
alter system set events '1652 trace name errorstack level 1 ';
alter system set events '1652 trace name errorstack off ';

Note 2:
-------

Doc ID: Note:218105.1            Content Type: TEXT/PLAIN
Subject: Introduction to ORACLE Diagnostic EVENTS
Creation Date: 11-NOV-2002       Type: BULLETIN
Last Revision Date: 20-NOV-2002  Status: PUBLISHED

PURPOSE
-------

This document describes the different types of Oracle EVENT that exist to help
customers and Oracle Support Services when investigating Oracle RDBMS related
issues.

This note will only provide information of a general nature.

Specific information on the usage of a given event should be provided by
Oracle Support Services or the Support related article that is suggesting the
use of a given event. This note will not provide that level of detail.

SCOPE & APPLICATION
-------------------

The information held here is of use to Oracle DBAs, developers and Oracle
Support Services.

Introduction to ORACLE Diagnostic EVENTS
----------------------------------------

Before proceeding, please review the following note as it contains some
important additional information on Events.

[NOTE:75713.1] "Important Customer information about using Numeric Events"

EVENTS are primarily used to produce additional diagnostic information
when insufficient information is available to resolve a given problem.

EVENTS are also used to work around or resolve problems by changing Oracle's
behaviour or enabling undocumented features.

*WARNING* Do not use an Oracle Diagnostic Event unless directed to do so by
Oracle Support Services or via a Support related article on Metalink.
Incorrect usage can result in disruptions to the database services.

Setting EVENTS
--------------

There are a number of ways in which events can be set.

How you set an event depends on the nature of the event and the circumstances
at the time. As stated above, specific information on how you set a given event
should be provided by Oracle Support Services or the Support related article
that is suggesting the use of a given event.

Most events can be set using more than one of the following methods :

o As INIT parameters
o In the current session
o From another session using a Debug tool

INIT Parameters
~~~~~~~~~~~~~~~

Syntax:

EVENT = "<event_name> <action>"

Reference:

[NOTE:160178.1] How to set EVENTS in the SPFILE
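
A sketch of setting an event in an SPFILE instance (event 10046 used as an
example; the instance must be restarted for it to take effect):

ALTER SYSTEM SET EVENT='10046 trace name context forever, level 12'
  SCOPE=SPFILE;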

Current Session
~~~~~~~~~~~~~~~

Syntax:

ALTER SESSION SET EVENTS '<event_name> <action>';

From another Session using a Debug tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a number of debug tools :

o ORADEBUG
o ORAMBX (VMS only)

ORADEBUG :
========

Syntax:

Prior to Oracle 9i,

SVRMGR> oradebug event <event_name> <action>

Oracle 9i and above :

SQL> oradebug event <event_name> <action>

Reference:

[NOTE:29786.1] "SUPTOOL: ORADEBUG 7.3+ (Server Manager/SQLPLUS Debug Commands)"
[NOTE:1058210.6] "HOW TO ENABLE SQL TRACE FOR ANOTHER SESSION USING ORADEBUG"

ORAMBX : on OpenVMS is still available and described under :
======

[NOTE:29062.1] "SUPTOOL: ORAMBX (VMS) - Quick Reference"

This note will not enter into additional details on these tools.

EVENT Categories
----------------
The most commonly used events fall into one of four categories :

o Dump diagnostic information on request
o Dump diagnostic information when an error occurs
o Change Oracle's behaviour
o Produce trace diagnostic information as the instance runs

Dump diagnostic information on request (Immediate Dump)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An immediate dump Event will result in information immediately being
written to a trace file.

Some common immediate dump Events include :

SYSTEMSTATE, ERRORSTACK, CONTROLF, FILE_HDRS and REDOHDR

These type of events are typically set in the current session.

For example:

ALTER SESSION SET EVENTS 'IMMEDIATE trace name ERRORSTACK level 3';

Dump Diagnostic information when an error occurs (On-Error Dump)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The on-error dump Event is similar to the immediate dump Event with the
difference being that the trace output is only produced when the given
error occurs.

You can use virtually any standard Oracle error to trigger this type of
event.

For example, an ORA-942 "table or view does not exist" error does not include
the name of the problem table or view. When this is not obvious from the
application (due to its complexity), then it can be difficult to investigate
the source of the problem. However, an On-Error dump against the 942 error can
help narrow the search.

These type of events are typically set as INIT parameters.

For example, using the 942 error :

EVENT "942 trace name ERRORSTACK level 3"

Once established, the next time a session encounters an ORA-942 error, a
trace file will be produced that shows (amongst other information) the current
SQL statement being executed. This current SQL can now be checked and the
offending table or view more easily discovered.

Change Oracle's behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~

Instance behaviour can be changed or hidden features can be enabled using
these types of Event.

A common event in this category is 10262, which is discussed in
[NOTE:21235.1] EVENT: 10262 "Do not check for memory leaks"

These types of events are typically set as INIT parameters.

For example:

EVENT = "10262 trace name context forever, level 4000"

Produce trace diagnostic information as the instance runs (Trace Events)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Trace events produce diagnostic information as processes are running.
They are used to gather additional information about a problem.

A common event in this category is 10046, which is discussed in
[NOTE:21154.1] EVENT: 10046 "enable SQL statement tracing (including binds/waits)"

These type of events are typically set as INIT parameters.

For example:

EVENT = "10046 trace name context forever, level 12"

Summary
-------

EVENT usage and syntax can be very complex and due to the possible impact on
the database, great care should be taken when dealing with them.

Oracle Support Services (or a Support article) should provide information
on the appropriate method to be adopted and syntax to be used when
establishing a given event.

If it is possible to do so, test an event against a development system
prior to doing the same thing on a production system.

The misuse of events can lead to a loss of service.

RELATED DOCUMENTS
-----------------

[NOTE:75713.1]   Important Customer information about using Numeric Events
[NOTE:21235.1]   EVENT: 10262 "Do not check for memory leaks"
[NOTE:21154.1]   EVENT: 10046 "enable SQL statement tracing (including binds/waits)"
[NOTE:160178.1]  How to set EVENTS in the SPFILE
[NOTE:1058210.6] HOW TO ENABLE SQL TRACE FOR ANOTHER SESSION USING ORADEBUG
[NOTE:29786.1]   SUPTOOL: ORADEBUG 7.3+ (Server Manager/SQLPLUS Debug Commands)
[NOTE:29062.1]   SUPTOOL: ORAMBX (VMS) - Quick Reference

======================
22. DBA% and v$ views
======================

NLS:
----

VIEW_NAME OWNER
------------------------------ ------------------------------
NLS_DATABASE_PARAMETERS SYS
NLS_INSTANCE_PARAMETERS SYS
NLS_SESSION_PARAMETERS SYS

DBA:
----

VIEW_NAME OWNER
------------------------------ ------------------------------
DBA_2PC_NEIGHBORS SYS
DBA_2PC_PENDING SYS
DBA_ALL_TABLES SYS
DBA_ANALYZE_OBJECTS SYS
DBA_ASSOCIATIONS SYS
DBA_AUDIT_EXISTS SYS
DBA_AUDIT_OBJECT SYS
DBA_AUDIT_SESSION SYS
DBA_AUDIT_STATEMENT SYS
DBA_AUDIT_TRAIL SYS
DBA_CACHEABLE_OBJECTS SYS
DBA_CACHEABLE_TABLES SYS
DBA_CACHEABLE_TABLES_BASE SYS
DBA_CATALOG SYS
DBA_CLUSTERS SYS
DBA_CLUSTER_HASH_EXPRESSIONS SYS
DBA_CLU_COLUMNS SYS
DBA_COLL_TYPES SYS
DBA_COL_COMMENTS SYS
DBA_COL_PRIVS SYS
DBA_CONSTRAINTS SYS
DBA_CONS_COLUMNS SYS
DBA_CONTEXT SYS
DBA_DATA_FILES SYS
DBA_DB_LINKS SYS
DBA_DEPENDENCIES SYS
DBA_DIMENSIONS SYS
DBA_DIM_ATTRIBUTES SYS
DBA_DIM_CHILD_OF SYS
DBA_DIM_HIERARCHIES SYS
DBA_DIM_JOIN_KEY SYS
DBA_DIM_LEVELS SYS
DBA_DIM_LEVEL_KEY SYS
DBA_DIRECTORIES SYS
DBA_DMT_FREE_SPACE SYS
DBA_DMT_USED_EXTENTS SYS
DBA_ERRORS SYS
DBA_EXP_FILES SYS
DBA_EXP_OBJECTS SYS
DBA_EXP_VERSION SYS
DBA_EXTENTS SYS
DBA_FREE_SPACE SYS
DBA_FREE_SPACE_COALESCED SYS
DBA_FREE_SPACE_COALESCED_TMP1 SYS
DBA_FREE_SPACE_COALESCED_TMP2 SYS
DBA_FREE_SPACE_COALESCED_TMP3 SYS
DBA_IAS_CONSTRAINT_EXP SYS
DBA_IAS_GEN_STMTS SYS
DBA_IAS_GEN_STMTS_EXP SYS
DBA_IAS_OBJECTS SYS
DBA_IAS_OBJECTS_BASE SYS
DBA_IAS_OBJECTS_EXP SYS
DBA_IAS_POSTGEN_STMTS SYS
DBA_IAS_PREGEN_STMTS SYS
DBA_IAS_SITES SYS
DBA_IAS_TEMPLATES SYS
DBA_INDEXES SYS
DBA_INDEXTYPES SYS
DBA_INDEXTYPE_OPERATORS SYS
DBA_IND_COLUMNS SYS
DBA_IND_EXPRESSIONS SYS
DBA_IND_PARTITIONS SYS
DBA_IND_SUBPARTITIONS SYS
DBA_INTERNAL_TRIGGERS SYS
DBA_JAVA_POLICY SYS
DBA_JOBS SYS
DBA_JOBS_RUNNING SYS
DBA_LIBRARIES SYS
DBA_LMT_FREE_SPACE SYS
DBA_LMT_USED_EXTENTS SYS
DBA_LOBS SYS
DBA_LOB_PARTITIONS SYS
DBA_LOB_SUBPARTITIONS SYS
DBA_METHOD_PARAMS SYS
DBA_METHOD_RESULTS SYS
DBA_MVIEWS SYS
DBA_MVIEW_AGGREGATES SYS
DBA_MVIEW_ANALYSIS SYS
DBA_MVIEW_DETAIL_RELATIONS SYS
DBA_MVIEW_JOINS SYS
DBA_MVIEW_KEYS SYS
DBA_NESTED_TABLES SYS
DBA_OBJECTS SYS
DBA_OBJECT_SIZE SYS
DBA_OBJECT_TABLES SYS
DBA_OBJ_AUDIT_OPTS SYS
DBA_OPANCILLARY SYS
DBA_OPARGUMENTS SYS
DBA_OPBINDINGS SYS
DBA_OPERATORS SYS
DBA_OUTLINES SYS
DBA_OUTLINE_HINTS SYS
DBA_PARTIAL_DROP_TABS SYS
DBA_PART_COL_STATISTICS SYS
DBA_PART_HISTOGRAMS SYS
DBA_PART_INDEXES SYS
DBA_PART_KEY_COLUMNS SYS
DBA_PART_LOBS SYS
DBA_PART_TABLES SYS
DBA_PENDING_TRANSACTIONS SYS
DBA_POLICIES SYS
DBA_PRIV_AUDIT_OPTS SYS
DBA_PROFILES SYS
DBA_QUEUES SYS
DBA_QUEUE_SCHEDULES SYS
DBA_QUEUE_TABLES SYS
DBA_RCHILD SYS
DBA_REFRESH SYS
DBA_REFRESH_CHILDREN SYS
DBA_REFS SYS
DBA_REGISTERED_SNAPSHOTS SYS
DBA_REGISTERED_SNAPSHOT_GROUPS SYS
DBA_REPAUDIT_ATTRIBUTE SYS
DBA_REPAUDIT_COLUMN SYS
DBA_REPCAT SYS
DBA_REPCATLOG SYS
DBA_REPCAT_REFRESH_TEMPLATES SYS
DBA_REPCAT_TEMPLATE_OBJECTS SYS
DBA_REPCAT_TEMPLATE_PARMS SYS
DBA_REPCAT_TEMPLATE_SITES SYS
DBA_REPCAT_USER_AUTHORIZATIONS SYS
DBA_REPCAT_USER_PARM_VALUES SYS
DBA_REPCOLUMN SYS
DBA_REPCOLUMN_GROUP SYS
DBA_REPCONFLICT SYS
DBA_REPDDL SYS
DBA_REPFLAVORS SYS
DBA_REPFLAVOR_COLUMNS SYS
DBA_REPFLAVOR_OBJECTS SYS
DBA_REPGENERATED SYS
DBA_REPGENOBJECTS SYS
DBA_REPGROUP SYS
DBA_REPGROUPED_COLUMN SYS
DBA_REPGROUP_PRIVILEGES SYS
DBA_REPKEY_COLUMNS SYS
DBA_REPOBJECT SYS
DBA_REPPARAMETER_COLUMN SYS
DBA_REPPRIORITY SYS
DBA_REPPRIORITY_GROUP SYS
DBA_REPPROP SYS
DBA_REPRESOLUTION SYS
DBA_REPRESOLUTION_METHOD SYS
DBA_REPRESOLUTION_STATISTICS SYS
DBA_REPRESOL_STATS_CONTROL SYS
DBA_REPSCHEMA SYS
DBA_REPSITES SYS
DBA_RGROUP SYS
DBA_ROLES SYS
DBA_ROLE_PRIVS SYS
DBA_ROLLBACK_SEGS SYS
DBA_RSRC_CONSUMER_GROUPS SYS
DBA_RSRC_CONSUMER_GROUP_PRIVS SYS
DBA_RSRC_MANAGER_SYSTEM_PRIVS SYS
DBA_RSRC_PLANS SYS
DBA_RSRC_PLAN_DIRECTIVES SYS
DBA_RULESETS SYS
DBA_SEGMENTS SYS
DBA_SEQUENCES SYS
DBA_SNAPSHOTS SYS
DBA_SNAPSHOT_LOGS SYS
DBA_SNAPSHOT_LOG_FILTER_COLS SYS
DBA_SNAPSHOT_REFRESH_TIMES SYS
DBA_SOURCE SYS
DBA_STMT_AUDIT_OPTS SYS
DBA_SUBPART_COL_STATISTICS SYS
DBA_SUBPART_HISTOGRAMS SYS
DBA_SUBPART_KEY_COLUMNS SYS
DBA_SUMMARIES SYS
DBA_SUMMARY_AGGREGATES SYS
DBA_SUMMARY_DETAIL_TABLES SYS
DBA_SUMMARY_JOINS SYS
DBA_SUMMARY_KEYS SYS
DBA_SYNONYMS SYS
DBA_SYS_PRIVS SYS
DBA_TABLES SYS
DBA_TABLESPACES SYS
DBA_TAB_COLUMNS SYS
DBA_TAB_COL_STATISTICS SYS
DBA_TAB_COMMENTS SYS
DBA_TAB_HISTOGRAMS SYS
DBA_TAB_MODIFICATIONS SYS
DBA_TAB_PARTITIONS SYS
DBA_TAB_PRIVS SYS
DBA_TAB_SUBPARTITIONS SYS
DBA_TEMP_FILES SYS
DBA_TRIGGERS SYS
DBA_TRIGGER_COLS SYS
DBA_TS_QUOTAS SYS
DBA_TYPES SYS
DBA_TYPE_ATTRS SYS
DBA_TYPE_METHODS SYS
DBA_UNUSED_COL_TABS SYS
DBA_UPDATABLE_COLUMNS SYS
DBA_USERS SYS
DBA_USTATS SYS
DBA_VARRAYS SYS
DBA_VIEWS SYS

V_$:
----

VIEW_NAME OWNER
------------------------------ ------------------------------
V_$ACCESS SYS
V_$ACTIVE_INSTANCES SYS
V_$AQ SYS
V_$AQ1 SYS
V_$ARCHIVE SYS
V_$ARCHIVED_LOG SYS
V_$ARCHIVE_DEST SYS
V_$ARCHIVE_PROCESSES SYS
V_$BACKUP SYS
V_$BACKUP_ASYNC_IO SYS
V_$BACKUP_CORRUPTION SYS
V_$BACKUP_DATAFILE SYS
V_$BACKUP_DEVICE SYS
V_$BACKUP_PIECE SYS
V_$BACKUP_REDOLOG SYS
V_$BACKUP_SET SYS
V_$BACKUP_SYNC_IO SYS
V_$BGPROCESS SYS
V_$BH SYS
V_$BSP SYS
V_$BUFFER_POOL SYS
V_$BUFFER_POOL_STATISTICS SYS
V_$CIRCUIT SYS
V_$CLASS_PING SYS
V_$COMPATIBILITY SYS
V_$COMPATSEG SYS
V_$CONTEXT SYS
V_$CONTROLFILE SYS
V_$CONTROLFILE_RECORD_SECTION SYS
V_$COPY_CORRUPTION SYS
V_$DATABASE SYS
V_$DATAFILE SYS
V_$DATAFILE_COPY SYS
V_$DATAFILE_HEADER SYS
V_$DBFILE SYS
V_$DBLINK SYS
V_$DB_CACHE_ADVICE SYS
V_$DB_OBJECT_CACHE SYS
V_$DB_PIPES SYS
V_$DELETED_OBJECT SYS
V_$DISPATCHER SYS
V_$DISPATCHER_RATE SYS
V_$DLM_ALL_LOCKS SYS
V_$DLM_CONVERT_LOCAL SYS
V_$DLM_CONVERT_REMOTE SYS
V_$DLM_LATCH SYS
V_$DLM_LOCKS SYS
V_$DLM_MISC SYS
V_$DLM_RESS SYS
V_$DLM_TRAFFIC_CONTROLLER SYS
V_$ENABLEDPRIVS SYS
V_$ENQUEUE_LOCK SYS
V_$EVENT_NAME SYS
V_$EXECUTION SYS
V_$FAST_START_SERVERS SYS
V_$FAST_START_TRANSACTIONS SYS
V_$FILESTAT SYS
V_$FILE_PING SYS
V_$FIXED_TABLE SYS
V_$FIXED_VIEW_DEFINITION SYS
V_$GLOBAL_BLOCKED_LOCKS SYS
V_$GLOBAL_TRANSACTION SYS
V_$HS_AGENT SYS
V_$HS_PARAMETER SYS
V_$HS_SESSION SYS
V_$INDEXED_FIXED_COLUMN SYS
V_$INSTANCE SYS
V_$INSTANCE_RECOVERY SYS
V_$KCCDI SYS
V_$KCCFE SYS
V_$LATCH SYS
V_$LATCHHOLDER SYS
V_$LATCHNAME SYS
V_$LATCH_CHILDREN SYS
V_$LATCH_MISSES SYS
V_$LATCH_PARENT SYS
V_$LIBRARYCACHE SYS
V_$LICENSE SYS
V_$LOADCSTAT SYS
V_$LOADISTAT SYS
V_$LOADPSTAT SYS
V_$LOADTSTAT SYS
V_$LOCK SYS
V_$LOCKED_OBJECT SYS
V_$LOCKS_WITH_COLLISIONS SYS
V_$LOCK_ACTIVITY SYS
V_$LOCK_ELEMENT SYS
V_$LOG SYS
V_$LOGFILE SYS
V_$LOGHIST SYS
V_$LOGMNR_CONTENTS SYS
V_$LOGMNR_DICTIONARY SYS
V_$LOGMNR_LOGS SYS
V_$LOGMNR_PARAMETERS SYS
V_$LOG_HISTORY SYS
V_$MAX_ACTIVE_SESS_TARGET_MTH SYS
V_$MLS_PARAMETERS SYS
V_$MTS SYS
V_$MYSTAT SYS
V_$NLS_PARAMETERS SYS
V_$NLS_VALID_VALUES SYS
V_$OBJECT_DEPENDENCY SYS
V_$OBSOLETE_PARAMETER SYS
V_$OFFLINE_RANGE SYS
V_$OPEN_CURSOR SYS
V_$OPTION SYS
V_$PARALLEL_DEGREE_LIMIT_MTH SYS
V_$PARAMETER SYS
V_$PARAMETER2 SYS
V_$PQ_SESSTAT SYS
V_$PQ_SLAVE SYS
V_$PQ_SYSSTAT SYS
V_$PQ_TQSTAT SYS
V_$PROCESS SYS
V_$PROXY_ARCHIVEDLOG SYS
V_$PROXY_DATAFILE SYS
V_$PWFILE_USERS SYS
V_$PX_PROCESS SYS
V_$PX_PROCESS_SYSSTAT SYS
V_$PX_SESSION SYS
V_$PX_SESSTAT SYS
V_$QUEUE SYS
V_$RECOVERY_FILE_STATUS SYS
V_$RECOVERY_LOG SYS
V_$RECOVERY_PROGRESS SYS
V_$RECOVERY_STATUS SYS
V_$RECOVER_FILE SYS
V_$REQDIST SYS
V_$RESERVED_WORDS SYS
V_$RESOURCE SYS
V_$RESOURCE_LIMIT SYS
V_$ROLLNAME SYS
V_$ROLLSTAT SYS
V_$ROWCACHE SYS
V_$ROWCACHE_PARENT SYS
V_$ROWCACHE_SUBORDINATE SYS
V_$RSRC_CONSUMER_GROUP SYS
V_$RSRC_CONSUMER_GROUP_CPU_MTH SYS
V_$RSRC_PLAN SYS
V_$RSRC_PLAN_CPU_MTH SYS
V_$SESSION SYS
V_$SESSION_CONNECT_INFO SYS
V_$SESSION_CURSOR_CACHE SYS
V_$SESSION_EVENT SYS
V_$SESSION_LONGOPS SYS
V_$SESSION_OBJECT_CACHE SYS
V_$SESSION_WAIT SYS
V_$SESSTAT SYS
V_$SESS_IO SYS
V_$SGA SYS
V_$SGASTAT SYS
V_$SHARED_POOL_RESERVED SYS
V_$SHARED_SERVER SYS
V_$SORT_SEGMENT SYS
V_$SORT_USAGE SYS
V_$SQL SYS
V_$SQLAREA SYS
V_$SQLTEXT SYS
V_$SQLTEXT_WITH_NEWLINES SYS
V_$SQL_BIND_DATA SYS
V_$SQL_BIND_METADATA SYS
V_$SQL_CURSOR SYS
V_$SQL_SHARED_CURSOR SYS
V_$SQL_SHARED_MEMORY SYS
V_$STATNAME SYS
V_$SUBCACHE SYS
V_$SYSSTAT SYS
V_$SYSTEM_CURSOR_CACHE SYS
V_$SYSTEM_EVENT SYS
V_$SYSTEM_PARAMETER SYS
V_$SYSTEM_PARAMETER2 SYS
V_$TABLESPACE SYS
V_$TARGETRBA SYS
V_$TEMPFILE SYS
V_$TEMPORARY_LOBS SYS
V_$TEMPSTAT SYS
V_$TEMP_EXTENT_MAP SYS
V_$TEMP_EXTENT_POOL SYS
V_$TEMP_PING SYS
V_$TEMP_SPACE_HEADER SYS
V_$THREAD SYS
V_$TIMER SYS
V_$TRANSACTION SYS
V_$TRANSACTION_ENQUEUE SYS
V_$TYPE_SIZE SYS
V_$VERSION SYS
V_$WAITSTAT SYS
V_$_LOCK SYS

==========
23 TUNING:
==========

1. init.ora settings
--------------------

background_dump_dest = /var/opt/oracle/SALES/bdump
control_files = ( /oradata/arc/control/ctrl1SALES.ctl
, /oradata/temp/control/ctrl2SALES.ctl
, /oradata/rbs/control/ctrl3SALES.ctl)

db_block_size = 16384
db_name = SALES
db_block_buffers = 17500
db_block_checkpoint_batch = 16
db_files = 255
db_file_multiblock_read_count = 10
license_max_users = 170
#core_dump_dest = /var/opt/oracle/SALES/cdump
core_dump_dest = /oradata/rbs/cdump
distributed_transactions = 40
dml_locks = 1000
job_queue_processes = 2
log_archive_buffers = 20
log_archive_buffer_size = 256
log_archive_dest = /oradata/arc
log_archive_format = arcSALES_%s.arc
log_archive_start = true
log_buffer = 163840
log_checkpoint_interval = 1250
log_checkpoint_timeout = 1800
log_simultaneous_copies = 4
max_dump_file_size = 100240
max_enabled_roles = 50
oracle_trace_enable = true
open_cursors = 2000
open_links = 20
processes = 200
remote_os_authent = true
rollback_segments = (r1, r2, r3, rbig,rbig2)
sequence_cache_entries = 30
sequence_cache_hash_buckets = 23
shared_pool_size = 750M
sort_area_retained_size = 15728640
sort_area_size = 15728640
sql_trace = false
timed_statistics = true
resource_limit = true
user_dump_dest = /var/opt/oracle/SALES/udump
utl_file_dir = /var/opt/oracle/utl
utl_file_dir = /var/opt/oracle/utl/frontend

SORT_AREA_SIZE = 65536                (per PGA, max sort area)
SORT_AREA_RETAINED_SIZE = 65536       (size after sort)
PROCESSES = 100                       (all processes)
DB_BLOCK_SIZE = 8192
DB_BLOCK_BUFFERS = 3400               (DB_CACHE_SIZE in Oracle 9i)
SHARED_POOL_SIZE = 52428800
LOG_BUFFER = 26215400                 (other example values: 4194304, 8388608)
LARGE_POOL_SIZE =
DBWR_IO_SLAVES                        (see also DB_WRITER_PROCESSES)
DB_WRITER_PROCESSES = 2
LGWR_IO_SLAVES =
DB_FILE_MULTIBLOCK_READ_COUNT = 16    (minimize io during table scans; it
                                       specifies the max number of blocks in
                                       one io operation during a sequential
                                       read)
BUFFER_POOL_RECYCLE =
BUFFER_POOL_KEEP =
TIMED_STATISTICS = TRUE               (whether statistics related to time are
                                       collected or not)
OPTIMIZER_MODE = RULE, CHOOSE, FIRST_ROWS, ALL_ROWS

PARALLEL_MIN_SERVERS = 2              (for Parallel Query and parallel recovery)
PARALLEL_MAX_SERVERS = 4

RECOVERY_PARALLELISM = 2              (sets parallel recovery at the database
                                       level)

2. UTLBSTAT and UTLESTAT
------------------------

- if wanted, change the default tablespace of SYS to TOOLS
- set timed_statistics=true
- in $ORACLE_HOME/rdbms/admin you find utlbstat.sql and utlestat.sql

To create the performance tables and insert the baseline, run utlbstat.
Let the database run for some time to gather statistics, then run utlestat,
which drops the tables and generates report.txt.

3. STATSPACK:
-------------

Available as of 8.1.6
installation:

- connect internal
- @$ORACLE_HOME/rdbms/admin/statscre.sql

It will create the user PERFSTAT, who owns the new statistics tables.
You will be prompted for TEMP and DEFAULT tablespaces

Gather statistices:

- connect perfstat/perfstat
- execute statspack.snap

Or use DBMS_JOB to schedule the generation of snapshots
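
A sketch of such a job (assuming the PERFSTAT schema is installed), taking a
snapshot every hour:

variable jobno number
begin
  dbms_job.submit(:jobno, 'statspack.snap;', sysdate, 'sysdate + 1/24');
  commit;
end;
/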

Create report:

- connect perfstat/perfstat
- @ORACLE_HOME/rdbms/admin/statsrep.sql

This will ask for beginning snapshot id and ending snapshot id.
Then you can enter the filename for the report.

4. QUERIES:
-----------

-- 4.1 HIT RATIO buffercache

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';
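
For example, with physical reads = 10,000, db block gets = 40,000 and
consistent gets = 160,000, the ratio is (1 - 10000/200000) * 100 = 95%
(illustrative numbers only).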

-- 4.2 redo noWait ratio

SELECT (req.value*5000)/entries.value
FROM v$sysstat req, v$sysstat entries
WHERE req.name ='redo log space requests'
AND entries.name='redo entries';

-- 4.3 Library cache and shared pool

Overview memory:

SELECT * FROM V$SGA;

Free memory shared pool:

SELECT * FROM v$sgastat
WHERE name = 'free memory';

How often an object has to be reloaded into the cache once it has been loaded:

SELECT sum(pins) Executions, sum(reloads) Misses, sum(reloads)/sum(pins) Ratio
FROM v$librarycache;

SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT sum(sharable_mem) FROM v$db_object_cache;

-- 4.4 TABLE OR INDEX REBUILD NECESSARY?

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 100
AND owner not in ('SYS','SYSTEM');

SELECT index_name, blevel,
       decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
              2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
FROM dba_indexes
WHERE owner='SALES';
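
If an index shows up here with a high blevel, a rebuild is the usual remedy.
A sketch (the index name is hypothetical):

ALTER INDEX SALES.MY_SKEWED_IDX REBUILD;
-- or, rebuild online so DML can continue:
ALTER INDEX SALES.MY_SKEWED_IDX REBUILD ONLINE;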

EXAMPLE OF A SCRIPT THAT YOU MIGHT SCHEDULE ONCE A DAY:
-------------------------------------------------------

-- report 1.

set linesize 500
set pagesize 500
set serveroutput on
set trimspool on
spool d:\logs\

exec dbms_output.put_line('DAILY REPORT SALES DATABASE ON SERVER SUPER');
exec dbms_output.put_line('RUNTIME: '||to_char(SYSDATE, 'DD-MM-YYYY;HH24:MI'));
exec dbms_output.put_line('Please read all sections carefully, takes only 1 minute.');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 1: OBJECTS AND USERS');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('1.1 INVALID OBJECTS AS FOUND RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT substr(object_name, 1, 30), substr(object_type, 1, 20), owner, status
FROM dba_objects WHERE status='INVALID';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: If invalid objects are found intervention is required.');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('1.2 TABLE/INDEX REACHING MAX NO OF EXTENTS:');
exec dbms_output.put_line(' ');

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 50
AND owner not in ('SYS','SYSTEM');

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: If objects are found intervention is required.');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('1.3 SKEWED or BAD INDEXES with blevel > 3:');
exec dbms_output.put_line(' ');

SELECT index_name, owner, blevel,
       decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
              2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
FROM dba_indexes
WHERE owner in ('SALES','FRONTEND')
and blevel > 3;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: If indexes are found rebuild is required.');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('1.4. NEW OBJECTS CREATED SINCE YESTERDAY:');
exec dbms_output.put_line(' ');

SELECT owner, substr(object_name, 1, 30), object_type, created,
       last_ddl_time, status
FROM dba_objects
WHERE created > SYSDATE-5;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('1.5. NEW ORACLE USERS CREATED SINCE YESTERDAY:');
exec dbms_output.put_line(' ');

SELECT substr(username, 1, 20), account_status,
       default_tablespace, temporary_tablespace, created
FROM dba_users WHERE created > SYSDATE -10;

exec dbms_output.put_line(' ');

exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 2: TABLESPACES, DATAFILES, ROLLBACK SEGS');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('2.1 FREE/USED SPACE OF TABLESPACES RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT Total.name "Tablespace Name",
       Free_space, (total_space-Free_space) Used_space, total_space
FROM
  (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
   FROM sys.dba_free_space
   GROUP BY tablespace_name
  ) Free,
  (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
   FROM sys.v_$datafile a, sys.v_$tablespace B
   WHERE a.ts# = b.ts#
   GROUP BY b.name
  ) Total
WHERE Free.Tablespace_name = Total.name;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('REMARK: FOR MONTHLY INTERNET BILLING AT LEAST 50MB SPACE MUST');
exec dbms_output.put_line('BE AVAILABLE IN EACH OF THE MANIIN% TABLESPACES. ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('2.2 STATUS DATABASE FILES RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT substr(file_name, 1, 50), tablespace_name, status
FROM dba_data_files;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: status of all files should be available ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('2.3 STATUS ROLLBACK SEGMENTS RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 3: PERFORMANCE STATS SINCE DATABASE STARTUP');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.1 ORACLE MEMORY (SGA LAYOUT):');
exec dbms_output.put_line(' ');

SELECT * FROM V$SGA;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.2 FREE MEMORY SHARED POOL:');
exec dbms_output.put_line(' ');

SELECT * FROM v$sgastat
WHERE name = 'free memory';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.3 LIBRARY (pl/sql) HIT RATIO:');
exec dbms_output.put_line(' ');

SELECT sum(pins) Executions, sum(reloads) Misses, sum(reloads)/sum(pins) Ratio
FROM v$librarycache;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: above Ratio should be low ');
exec dbms_output.put_line(' ');

exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.4 DATABASE BUFFERS HIT RATIO:');
exec dbms_output.put_line(' ');

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: above Ratio should be high ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.5 REDO BUFFERS WAITS:');
exec dbms_output.put_line(' ');

SELECT (req.value*5000)/entries.value
FROM v$sysstat req, v$sysstat entries
WHERE req.name ='redo log space requests'
AND entries.name='redo entries';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: above Ratio should be very low ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 4: LOCKS');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('4.1 OBJECT LOCKS RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT l.object_id object_id,
       l.session_id session_id,
       substr(l.oracle_username, 1, 10) username,
       substr(l.os_user_name, 1, 30) osuser,
       l.process process,
       l.locked_mode lockmode,
       substr(o.object_name, 1, 20) objectname
FROM v$locked_object l, dba_objects o
WHERE l.object_id=o.object_id;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('4.2 PERSISTENT LOCKS SINCE YESTERDAY:');
exec dbms_output.put_line(' ');

SELECT OBJECT_ID,SESSION_ID,USERNAME,OSUSER,PROCESS,LOCKMODE,
OBJECT_NAME, to_char(DATUM, 'DD-MM-YYYY;HH24:MI')
FROM PROJECTS.LOCKLIST
WHERE DATUM > SYSDATE-2
ORDER BY DATUM;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('4.3 BLOCKED SESSIONS RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT s.sid sid,
       substr(s.username, 1, 10) username,
       substr(s.schemaname, 1, 10) schemaname,
       substr(s.osuser, 1, 10) osuser,
       substr(s.program, 1, 30) program,
       s.command command,
       l.lmode lockmode,
       l.block blocked
FROM v$session s, v$lock l
WHERE s.sid=l.sid and schemaname not in ('SYS','SYSTEM');

exec dbms_output.put_line(' ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 5: ONLY NEEDED FOR oracle-dba ');
exec dbms_output.put_line(' INFO NEEDED FOR RECOVERY ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('scn datafiles: ');
exec dbms_output.put_line('scn controlfiles: ');
exec dbms_output.put_line('latest 20 archived redo: ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line(' ');

exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('END REPORT 1');
exec dbms_output.put_line('Thanks a lot for reading this report !!!');
exit
/

========
24 RMAN:
========

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

===============
24.1: RMAN 10g:
===============

24.1.1 Create the catalog and register target database:
-------------------------------------------------------

10g example:
------------

The Oracle 10.2 target database is test10g.
The Oracle 10.2 rman catalog database is RMAN.

Set up the catalog and register the target:

RMAN> create catalog tablespace "RMAN";

recovery catalog created

RMAN> exit

Recovery Manager complete.

C:\oracle>rman catalog=rman/rman@rman target=system/vga88nt@test10g

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Feb 27 21:31:02 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TEST10G (DBID=899275577)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

24.1.2 Backup and recovery examples 10g RMAN:
---------------------------------------------

Good Examples using RMAN on 10g:
--------------------------------

>>>> Full Backup

First we configure several persistant parameters for this instance:

RMAN> configure retention policy to recovery window of 5 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format 'C:\Oracle\Admin\W2K2\Backup%d_DB_%u_%s_%p';

Next we perform a complete database backup using a single command:

RMAN> run {
2> backup database plus archivelog;
3> delete noprompt obsolete;
4> }

The recovery catalog should be resynchronized on a regular basis so that changes
to the database structure and the presence of new archive logs are recorded.
Some commands perform partial and full resyncs implicitly, but if you are in
doubt you can perform a full resync using the following command:

RMAN> resync catalog;

>>>> Restore & Recover The Whole Database

If the controlfiles and online redo logs are still present a whole database
recovery can be achieved
by running the following script:

run {
shutdown immediate; # use abort if this fails
startup mount;
restore database;
recover database;
alter database open;
}

This will result in all datafiles being restored then recovered. RMAN will apply
archive logs as necessary until the recovery is complete. At that point the
database is opened.

If the tempfiles are still present you can issue a command like the following
for each of them:

sql "ALTER TABLESPACE temp ADD
     TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf''
     REUSE";

If the tempfiles are missing they must be recreated as follows:

sql "ALTER TABLESPACE temp ADD
     TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf''
     SIZE 100M
     AUTOEXTEND ON NEXT 64K";

>>>> Restore & Recover A Subset Of The Database

A subset of the database can be restored in a similar fashion:

run {
sql 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
restore tablespace users;
recover tablespace users;
sql 'ALTER TABLESPACE users ONLINE';
}

Recovering a Tablespace in an Open Database

The following example takes tablespace TBS_1 offline, restores and recovers it,
then brings it back online:

run {
allocate channel dev1 type 'sbt_tape';
sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE";
restore tablespace tbs_1;
recover tablespace tbs_1;
sql "ALTER TABLESPACE tbs_1 ONLINE";
}

Recovering Datafiles Restored to New Locations

The following example allocates one disk channel and one media management
channel to use datafile copies on disk and backups on tape, and restores one
of the datafiles in tablespace TBS_1 to a different location:

run {
allocate channel dev1 type disk;
allocate channel dev2 type 'sbt_tape';
sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE";
set newname for datafile 'disk7/oracle/tbs11.f'
to 'disk9/oracle/tbs11.f';
restore tablespace tbs_1;
switch datafile all;
recover tablespace tbs_1;
sql "ALTER TABLESPACE tbs_1 ONLINE";
}

>>>> Example backup to sbt (here via the Tivoli TDPO media library):

run {
  allocate channel t1 type 'sbt_tape' parms
    'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  allocate channel t2 type 'sbt_tape' parms
    'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';

  backup full database;

  backup (spfile) (current controlfile);

  sql 'alter system archive log current';

  backup archivelog all delete input;

  release channel t1;
  release channel t2;
}

>>>> Incomplete Recovery

As you would expect, RMAN allows incomplete recovery to a specified time, SCN or
sequence number:

run {
shutdown immediate;
startup mount;
set until time 'Nov 15 2000 09:00:00';
# set until scn 1000; # alternatively, you can specify SCN
# set until sequence 9923; # alternatively, you can specify log sequence number
restore database;
recover database;
alter database open resetlogs;
}

The incomplete recovery requires the database to be opened using the RESETLOGS
option.
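
If the time string does not match the session's NLS_DATE_FORMAT, an explicit
TO_DATE avoids ambiguity -- a sketch:

run {
  set until time "to_date('15-NOV-2000 09:00:00','DD-MON-YYYY HH24:MI:SS')";
  restore database;
  recover database;
  alter database open resetlogs;
}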

>>>> Disaster Recovery

In a disaster situation where all files are lost you can only recover to the last
SCN in the archived redo logs.
Beyond this point the recovery would have to make reference to the online redo
logs which are not present.
Disaster recovery is therefore a type of incomplete recovery. To perform disaster
recovery connect to RMAN:

C:>rman catalog=rman/rman@w2k1 target=sys/password@w2k2

Once in RMAN do the following:

startup nomount;
restore controlfile;
alter database mount;

From SQL*Plus as SYS get the last archived SCN using:

SQL> SELECT archivelog_change#-1 FROM v$database;

ARCHIVELOG_CHANGE#-1
--------------------
1048438

1 row selected.

Back in RMAN, do the following:

run {
set until scn 1048438;
restore database;
recover database;
alter database open resetlogs;
}

If the "until scn" were not set the following type of error would be produced once
a redo log was referenced:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/18/2003 09:33:19
RMAN-06045: media recovery requesting unknown log: thread 1 scn 1048439

With the database open, all missing tempfiles must be replaced:

sql "ALTER TABLESPACE temp ADD


TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf''
SIZE 100M
AUTOEXTEND ON NEXT 64K";
Once the database is fully recovered a new backup should be perfomed.

The recovered database will be registered in the catalog as a new incarnation. The
current incarnation
can be listed and altered using the following commands:

list incarnation;
reset database to incarnation x;Lists And Reports
RMAN has extensive listing and reporting functionality allowing you to monitor you
backups and maintain
the recovery catalog. Here are a few useful commands:

>>>> Restoring a datafile to another location:

For example, if you restore datafile ?/oradata/trgt/tools01.dbf to its default
location, then RMAN restores the file ?/oradata/trgt/tools01.dbf and overwrites
any file that it finds with the same filename.
If you run a SET NEWNAME command before you restore a file, then RMAN creates a
datafile copy with the name that you specify. For example, assume that you run
the following commands:

SET NEWNAME FOR DATAFILE '?/oradata/trgt/tools01.dbf' TO '/tmp/tools01.dbf';
RESTORE DATAFILE '?/oradata/trgt/tools01.dbf';

In this case, RMAN creates a datafile copy of ?/oradata/trgt/tools01.dbf named
/tmp/tools01.dbf and records it in the repository.
To change the name for datafile ?/oradata/trgt/tools01.dbf to /tmp/tools01.dbf
in the control file, run a SWITCH command so that RMAN considers the restored
file as the current database file. For example:

SWITCH DATAFILE '/tmp/tools01.dbf' TO DATAFILECOPY '?/oradata/trgt/tools01.dbf';

The SWITCH command is the RMAN equivalent of the SQL statement ALTER DATABASE
RENAME FILE.

>>>> Archive logs

What is the purpose of, and what are the differences between,
"ALTER SYSTEM ARCHIVE LOG CURRENT" and
"ALTER SYSTEM ARCHIVE LOG ALL"?

# When the database is open, run the following SQL statement to force Oracle to
switch out of the current log and archive it, as well as all other unarchived
logs:
ALTER SYSTEM ARCHIVE LOG CURRENT;

# When the database is mounted, open, or closed, you can run the following SQL
statement to force Oracle to archive all noncurrent redo logs:
ALTER SYSTEM ARCHIVE LOG ALL;

A log switch does not mean that the redo is archived.
When you execute "alter system archive log current" you force the current
log to be archived, so it is safe: you are sure to have all the needed
archived logs.

alter system archive log all:
This command will archive all filled redo logs, but will not archive the
current log because it will not be full.

>>>> LIST AND REPORT COMMANDS:

=============
LIST COMMAND:
=============

List commands query the catalog or control file, to determine which backups or
copies are available.
List commands provide for basic information.
Report commands can provide for much more detail.

About RMAN Reports Generated by the LIST Command

You can control how the output is displayed by using the BY BACKUP and BY FILE
options of the LIST command, and by choosing between the SUMMARY and VERBOSE
options.

-- Example 1: Query on the incarnations of the target database

RMAN> list incarnation of database;

RMAN-03022: compiling command: list

List of Database Incarnations
DB Key  Inc Key  DB Name  DB ID       CUR  Reset SCN  Reset Time
------  -------  -------  ----------  ---  ---------  ----------
1       2        AIRM     2092303715  YES  1          24-DEC-02

-- Example 2: Query on tablespace backups

You can ask for lists of tablespace backups, as shown in the
following example:

RMAN> list backup of tablespace users;

-- Example 3: Query on database backups

RMAN> list backup of database;

-- Example 4: Query on backup of archivelogs:

RMAN> list backup of archivelog all;

The primary purpose of the LIST command is to determine which backups are
available. For example, you can list:
. Backups and proxy copies of a database, tablespace, datafile, archived redo log,
or control file
. Backups that have expired
. Backups restricted by time, path name, device type, tag, or recoverability
. Incarnations of a database

By default, RMAN lists backups by backup, which means that it serially lists each
backup or proxy copy
and then identifies the files included in the backup. You can also list backups by
file.

By default, RMAN lists in verbose mode. You can also list backups in a summary
mode if the verbose mode
generates too much output.

Listing Backups by Backup

To list backups by backup, connect to the target database and recovery catalog
(if you use one), and then execute the LIST BACKUP command. Specify the desired
objects with the listObjList clause. For example, you can enter:

LIST BACKUP; # lists backup sets, image copies, and proxy copies
LIST BACKUPSET; # lists only backup sets and proxy copies
LIST COPY; # lists only disk copies

Example:

RMAN> LIST BACKUP OF DATABASE;

By default the LIST output is detailed, but you can also specify that RMAN display
the output in summarized form.
Specify the desired objects with the listObjList or recordSpec clause. If you
do not specify an object,
then LIST BACKUP displays all backups.

After connecting to the target database and recovery catalog (if you use one),
execute LIST BACKUP,
specifying the desired objects and options. For example:

LIST BACKUP SUMMARY; # lists backup sets, proxy copies, and disk copies

You can also specify the EXPIRED keyword to identify those backups that were not
found during a crosscheck:

LIST EXPIRED BACKUP SUMMARY;

# Show all backup details
list backup;

================
Report commands:
================
RMAN>report schema;

Shows the physical structure of the target database.

RMAN> report obsolete;

RMAN-03022: compiling command: report


RMAN-06147: no obsolete backups found

-- REPORT COMMAND:
-- ---------------

About Reports of RMAN Backups


Reports enable you to confirm that your backup and recovery strategy is in fact
meeting your requirements
for database recoverability. The two major forms of REPORT used to determine
whether your database
is recoverable are:

RMAN> REPORT NEED BACKUP;

Reports which database files need to be backed up to meet a configured or
specified retention policy.
Use the REPORT NEED BACKUP command to determine which database files need backup
under a specific retention policy.
With no arguments, REPORT NEED BACKUP reports which objects need backup under the
currently configured retention policy.
The output for a configured retention policy of REDUNDANCY 1 is similar to this
example:

REPORT NEED BACKUP;

RMAN retention policy will be applied to the command


RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
2 0 /oracle/oradata/trgt/undotbs01.dbf

RMAN> REPORT UNRECOVERABLE;

Reports which database files require backup because they have been affected by
some NOLOGGING operation
such as a direct-path insert

You can report backup sets, backup pieces and datafile copies that are obsolete,
that is, not needed
to meet a specified retention policy, by specifying the OBSOLETE keyword. If you
do not specify any
other options, then REPORT OBSOLETE displays the backups that are obsolete
according to the current
retention policy, as shown in the following example:
RMAN> REPORT OBSOLETE;

In the simplest case, you could crosscheck all backups on disk, tape or both,
using any one
of the following commands:

RMAN> CROSSCHECK BACKUP DEVICE TYPE DISK;
RMAN> CROSSCHECK BACKUP DEVICE TYPE SBT;
RMAN> CROSSCHECK BACKUP; # crosschecks all backups on all devices

The REPORT SCHEMA command lists and displays information about the database files.

After connecting RMAN to the target database and recovery catalog (if you use
one), issue REPORT SCHEMA
as shown in this example:

RMAN> REPORT SCHEMA;

# Show items that need 7 days worth of
# archivelogs to recover completely
report need backup days = 7 database;
report need backup;

# Show/Delete items not needed for recovery
report obsolete;
delete obsolete;

# Show/Delete items not needed for point-in-time
# recovery within the last week
report obsolete recovery window of 7 days;
delete obsolete recovery window of 7 days;

RMAN> REPORT OBSOLETE REDUNDANCY 2;


RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 5 DAYS;

RMAN displays backups that are obsolete according to those retention policies,
regardless of the actual configured retention policy.

# Show/Delete items with more than 2 newer copies available
report obsolete redundancy = 2 device type disk;
delete obsolete redundancy = 2 device type disk;

# Show datafiles that cannot currently be recovered
report unrecoverable database;
report unrecoverable tablespace 'USERS';

24.1.3 More on Backup and recovery 10g RMAN:
--------------------------------------------

24.1.3.1 About RMAN Backups:
----------------------------

When you execute the BACKUP command in RMAN, you create one or more backup sets or
image copies. By default,
RMAN creates backup sets regardless of whether the destination is disk or a media
manager.

>>>About Image Copies

An image copy is an exact copy of a single datafile, archived redo log file, or
control file.
Image copies are not stored in an RMAN-specific format. They are identical to the
results of copying a file
with operating system commands. RMAN can use image copies during RMAN restore and
recover operations,
and you can also use image copies with non-RMAN restore and recovery techniques.

To create image copies and have them recorded in the RMAN repository, run the RMAN
BACKUP AS COPY command
(or, alternatively, configure the default backup type for disk as image copies
using
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY before performing a backup).
A database server session is used to create the copy, and the server session also
performs actions such as
validating the blocks in the file and recording the image copy in the RMAN
repository.
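
A minimal sketch of both approaches (the datafile number and FORMAT path are just
assumed examples):

RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '/backups/users01_%U.cpy';

# or make image copies the default for disk, then back up as usual:
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY;
RMAN> BACKUP DATABASE;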

You can also use an operating system command such as the UNIX dd command to create
image copies,
though these will not be validated, nor are they recorded in the RMAN repository.
You can use the CATALOG command
to add image copies created with native operating system tools in the RMAN
repository.

>>>Using RMAN-Created Image Copies

If you run a RESTORE command, then by default RMAN restores a datafile or control
file to its original location
by copying an image copy backup to that location. Image copies are chosen over
backup sets because of the
extra overhead of reading through an entire backup set in search of files to be
restored.

However, if you need to restore and recover a current datafile, and if you have an
image copy of the datafile
available on disk, then you do not actually need to have RMAN copy the image copy
back to its old location.
You can instead have the database use the image copy in place, as a replacement
for the datafile to be restored.
The SWITCH command updates the RMAN repository to indicate that the image copy
should now be treated as
the current datafile. Issuing the SWITCH command in this case is equivalent to
issuing the SQL statement
ALTER DATABASE RENAME FILE. You can then perform recovery on the copy.
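
A hedged sketch of using an image copy in place (datafile 4 is just an example;
the datafile must be offline before the SWITCH):

RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;    # repository now points at the image copy
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';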

>>>User-Managed Image Copies

RMAN can use image copies created by mechanisms outside of RMAN, such as native
operating system file copy commands
or third-party utilities that leave image copies of files on disk. These copies
are known as user-managed copies
or operating system copies.

The RMAN CATALOG command causes RMAN to inspect an existing image copy and enter
its metadata into the RMAN repository.
Once cataloged, these files can be used like any other backup with the RESTORE or
SWITCH commands.

Some sites store their datafiles on mirrored disk volumes, which permit the
creation of image copies by breaking
a mirror. After you have broken the mirror, you can notify RMAN of the existence
of a new user-managed copy,
thus making it a candidate for a backup operation. You must notify RMAN when the
copy is no longer available,
by using the CHANGE ... UNCATALOG command. In this example, before resilvering the
mirror (not including other
copies of the broken mirror), you must use a CHANGE ... UNCATALOG command to
update the recovery catalog
and indicate that this copy is no longer available.
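
A minimal sketch of both commands (the file name is just an assumed example of a
copy made with dd or by breaking a mirror):

RMAN> CATALOG DATAFILECOPY '/backups/users01_os.cpy';

# and when the copy is no longer available:
RMAN> CHANGE DATAFILECOPY '/backups/users01_os.cpy' UNCATALOG;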

>>>Storage of Backups on Disk and Tape

RMAN can create backups on disk or a third-party media device such as a tape
drive. If you specify
DEVICE TYPE DISK, then your backups are created on disk, in the file name space of
the target instance
that is creating the backup. You can make a backup on any device that can store a
datafile.

To create backups on non-disk media, such as tape, you must use third-party media
management software,
and allocate channels with device types, such as SBT, that are supported by that
software.

>>>Backups of Archived Logs

There are several features of RMAN backups specific to backups of archived redo
logs.

Deletion of Archived Logs After Backups


RMAN can delete one or all copies of archived logs from disk after backing them up
to backup sets.
If you specify the DELETE INPUT option, then RMAN backs up exactly one copy of
each specified log sequence number
and thread from an archive destination to tape, and then deletes the specific file
it backed up while leaving
the other copies on disk. If you specify the DELETE ALL INPUT option, then RMAN
backs up exactly one copy of each
specified log sequence number and thread, and then deletes that log from all
archive destinations.
Note that there are special considerations related to deletion of archived redo
logs in standby database configurations.
See Oracle Data Guard Concepts and Administration for details.
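
For example (assuming an sbt channel is configured):

RMAN> BACKUP DEVICE TYPE sbt ARCHIVELOG ALL DELETE INPUT;     # deletes only the copies it backed up
RMAN> BACKUP DEVICE TYPE sbt ARCHIVELOG ALL DELETE ALL INPUT; # deletes from all archive destinations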

>>>Backups of Backup Sets


The RMAN BACKUP BACKUPSET command backs up previously created backup sets. Only
backup sets that were created
on device type DISK can be backed up, and they can be backed up to any available
device type.

Note:
RMAN issues an error if you attempt to run BACKUP AS COPY BACKUPSET.

The BACKUP BACKUPSET command uses the default disk channel to copy backup sets
from disk to disk.
To back up from disk to tape, you must either configure or manually allocate a
non-disk channel.

Uses for Backups of Backup Sets


The BACKUP BACKUPSET command is a useful way to spread backups among multiple
media. For example,
you can execute the following BACKUP command weekly as part of the production
backup schedule:

# makes backup sets on disk
BACKUP DEVICE TYPE DISK AS BACKUPSET DATABASE PLUS ARCHIVELOG;
BACKUP DEVICE TYPE sbt BACKUPSET ALL; # copies backup sets on disk to tape

In this way, you ensure that all your backups exist on both disk and tape. You can
also duplex backups
of backup sets, as in this example:

BACKUP COPIES 2 DEVICE TYPE sbt BACKUPSET ALL;

(Again, control file autobackups are never duplexed.)

You can also use BACKUP BACKUPSET to manage backup space allocation. For example,
to keep more recent backups
on disk and older backups only on tape, you can regularly run the following
command:

BACKUP DEVICE TYPE sbt BACKUPSET COMPLETED BEFORE 'SYSDATE-7' DELETE INPUT;

This command backs up backup sets that were created more than a week ago from disk
to tape, and then deletes
them from disk. Note that DELETE INPUT here is equivalent to DELETE ALL INPUT;
RMAN deletes all
existing copies of the backup set. If you duplexed a backup to four locations,
then RMAN deletes
all four copies of the pieces in the backup set.

>>> Restoring Files with RMAN

Use the RMAN RESTORE command to restore the following types of files from disk or
other media:

- Database (all datafiles)
- Tablespaces
- Control files
- Archived redo logs
- Server parameter files

Because a backup set is in a proprietary format, you cannot simply copy it as you
would a backup database file
created with an operating system utility; you must use the RMAN RESTORE command to
extract its contents.
In contrast, the database can use image copies created by the RMAN BACKUP AS COPY
command
without additional processing.

RMAN automates the procedure for restoring files. You do not need to go into the
operating system,
locate the backup that you want to use, and manually copy files into the
appropriate directories.
When you issue a RESTORE command, RMAN directs a server session to restore the
correct backups to either:

- The default location, overwriting the files with the same name currently there
- A new location, which you can specify with the SET NEWNAME command

To restore a datafile, either mount the database or keep it open and take the
datafile to be restored offline.
When RMAN performs a restore, it creates the restored files as datafile image
copies and records them
in the repository. The following table describes the behavior of the RESTORE, SET
NEWNAME, and SWITCH commands.
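
A minimal sketch of restoring a datafile to a new location (file number and path
are assumed examples; take the datafile offline first, or mount the database):

run {
  set newname for datafile 4 to '/newdisk/users01.dbf';
  restore datafile 4;
  switch datafile all;   # records the new name in the control file
  recover datafile 4;
}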

>>>Datafile Media Recovery with RMAN

The concept of datafile media recovery is the application of online or archived
redo logs or incremental backups to a restored datafile in order to update it to
the current time
or some other specified time. Use the RMAN RECOVER command to perform media
recovery and
apply logs or incremental backups automatically.

RMAN Media Recovery: Basic Steps


If possible, make the recovery catalog available to perform the media recovery. If
it is not available,
or if you do not maintain a recovery catalog, then RMAN uses metadata from the
target database control file.
If both the control file and recovery catalog are lost, then you can still recover
the database
--assuming that you have backups of the datafiles and at least one autobackup of
the control file.

The generic steps for media recovery using RMAN are as follows:

-Place the database in the appropriate state: mounted or open. For example, mount
the database
when performing whole database recovery, or open the database when performing
online tablespace recovery.
-To perform incomplete recovery, use the SET UNTIL command to specify the time,
SCN,
or log sequence number at which recovery terminates. Alternatively, specify the
UNTIL clause
on the RESTORE and RECOVER commands.
-Restore the necessary files with the RESTORE command.
-Recover the datafiles with the RECOVER command.
-Place the database in its normal state. For example, open it or bring recovered
tablespaces online.

RESTORE DATABASE;
RECOVER DATABASE;
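
And a hedged sketch of incomplete (point-in-time) recovery using SET UNTIL; the
timestamp is just an example, and the database must be mounted:

run {
  set until time "to_date('2008-05-17 20:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
alter database open resetlogs;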

>>> Corrupt Block recovery

Although datafile media recovery is the principal form of recovery, you can also
use the RMAN BLOCKRECOVER
command to perform block media recovery. Block media recovery recovers an
individual corrupt datablock
or set of datablocks within a datafile. In cases when a small number of blocks
require media recovery,
you can selectively restore and recover damaged blocks rather than whole
datafiles.

For example, you may discover the following messages in a user trace file:

ORA-01578: ORACLE data block corrupted (file # 7, block # 3)


ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
ORA-01578: ORACLE data block corrupted (file # 2, block # 235)
ORA-01110: data file 2: '/oracle/oradata/trgt/undotbs01.dbf'

You can then specify the corrupt blocks in the BLOCKRECOVER command as follows:

BLOCKRECOVER
DATAFILE 7 BLOCK 3
DATAFILE 2 BLOCK 235;

>>> After a Database Restore and Recover, RMAN gives the error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 03/03/2008 11:13:06
RMAN-06059: expected archived log not found, lost of archived log compromises
recoverability
ORA-19625: error identifying file
/dbms/tdbaeduc/educroca/recovery/archive/arch_1_870_617116679.arch
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory

Note 1:

If you no longer have a particular archivelog file, you can let the RMAN catalog
know this by issuing the following command at the rman prompt after
connecting to the rman catalog and the target database -

change archivelog all crosscheck ;

This will check the archivelog folder and then make the catalog agree with
what is actually available.

rman> DELETE EXPIRED ARCHIVELOG ALL;


Oracle Error :: RMAN-20011
target database incarnation is not current in recovery catalog

Cause
the database incarnation that matches the resetlogs change# and time of the
mounted target database
control file is not the current incarnation of the database

Action
If "reset database to incarnation <key>" was used to make an old incarnation
current then restore the
target database from a backup that matches the incarnation and mount it. You will
need to do "startup nomount"
before you can restore the control file using RMAN.
Otherwise use "reset database to incarnation <key>" make the intended incarnation
current in the recovery catalog.
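
For example (the incarnation key 2 is just an assumed value taken from the LIST
output):

RMAN> LIST INCARNATION OF DATABASE;
RMAN> RESET DATABASE TO INCARNATION 2;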

>>> Note about rman and tape sbt and recovery window:

Suppose you have a retention policy defined in rman, for example
CONFIGURE RETENTION POLICY TO REDUNDANCY 3
This means that 3 backups need to be maintained by rman, and older backups are
considered "obsolete".
But backups beyond the retention policy are not expired or otherwise unusable.
If they are still present, you can use them in a recovery.

Besides this, it cannot be known beforehand how the tape subsystem will deal with
rman commands like "delete obsolete". The tape subsystem probably has its own
retention period, and you need much more detail about all systems involved
before you know what's going on.

=============================================
24.1.3.2 ABOUT RMAN ERRORS / troubleshooting:
=============================================

Err 1: Missing archived redolog:
================================

Problem: If an archived redo is missing, you might get a message similar like
this:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 03/05/2008 07:44:35
RMAN-06059: expected archived log not found, lost of archived log compromises
recoverability
ORA-19625: error identifying file
/dbms/tdbaeduc/educroca/recovery/archive/arch_1_817_617116679.arch
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Solution:

If archived redo logs are (wrongly) deleted/moved/compressed from disk without
being backed up, the rman catalog will not know this has happened, and will keep
attempting to back up the missing archived redo logs. That will cause rman
archived redo log backups to fail altogether with an error like:

RMAN-06059: expected archived log not found, lost of archived log compromises
recoverability

If you can, you should bring back the missing archived redo logs to their original
location and name, and let rman back them up.
But if that is impossible, the workaround is to 'crosscheck archivelog all', like:

rman <<e1
connect target /
connect catalog username/password@catalog
run {
allocate channel c1 type disk ;
crosscheck archivelog all ;
release channel c1 ;
}
e1

Or just go into rman and run the command:

RMAN> crosscheck archivelog all;

You'll get output like this:

validation succeeded for archived log
archive log filename=D:\REDOARCH\ARCH_1038.DBF recid=1017 stamp=611103638

for every archived log, as they are all checked on disk.
That should fix the catalog; run an archivelog backup to make sure.

Err 2: online redo logs listed as archives:
===========================================

Testcase: a 10g 10.2.0.3 database shows the following in v$archived_log after a
recovery with resetlogs. It looks as if it will stay there forever:

SEQ# FIRST NEXT NAME DIFF STATUS


814 17311773 17311785 12 D
815 17311785 17354662 42877 D
816 17354662 17354674 12 D
817 17354674 17402531 47857 D
2 17415287 2.81E+14 redo01.log 2.8147E+14 A
0 0 0 redo02.log 0 A
0 0 0 redo03.log 0 A
0 0 0 redo04.log 0 A
1 -->17402532 17415287 redo05.log 12755 A
1 17402532 17404154 1622 D
2 17404154 17404165 11 D

FIRST_CHANGE# NEXT_CHANGE# SEQUENCE# RESETLOGS_CHANGE#


------------- ------------ ---------- -----------------
17311785 17354662 815 1
17354662 17354674 816 1
17354674 17402531 817 1
-->17402532 17404154 1 -->17402532
17404154 17404165 2 17402532
17404165 17415733 3 17402532

We don't know what is going on here.

Err 3: High-level overview of RMAN error codes
==============================================

RMAN error codes are summarized in the table below.

0550-0999 Command-line interpreter


1000-1999 Keyword analyzer
2000-2999 Syntax analyzer
3000-3999 Main layer
4000-4999 Services layer
5000-5499 Compilation of RESTORE or RECOVER command
5500-5999 Compilation of DUPLICATE command
6000-6999 General compilation
7000-7999 General execution
8000-8999 PL/SQL programs
9000-9999 Low-level keyword analyzer
10000-10999 Server-side execution
11000-11999 Interphase errors between PL/SQL and RMAN
12000-12999 Recovery catalog packages
20000-20999 Miscellaneous RMAN error messages

Err 4: RMAN-03009 accompanied by an ORA- error:
===============================================

Q:

Here is my problem; when trying to delete obsolete RMAN backupsets, I
get an error:

RMAN> change backupset 698, 702, 704, 708 delete;

List of Backup Pieces


BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
698 698 1 1 AVAILABLE SBT_TAPE df_546210555_706_1
702 702 1 1 AVAILABLE SBT_TAPE df_546296605_709_1
704 704 1 1 AVAILABLE SBT_TAPE df_546383776_712_1
708 708 1 1 AVAILABLE SBT_TAPE df_546469964_715_1

Do you really want to delete the above objects (enter YES or NO)? YES
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of delete command on ORA_MAINT_SBT_TAPE_1 channel at
03/02/2005 16:27:06
ORA-27191: sbtinfo2 returned error
Additional information: 2

What in the world does "Additional information: 2" mean? I can't find
any more useful detail than this.

A:

Oracle Error :: ORA-27191


sbtinfo2 returned error

Cause
sbtinfo2 returned an error. This happens while retrieving backup file information
from the media manager"s catalog.

Action
This error is returned from the media management software which is linked with
Oracle. There should be additional messages
which explain the cause of the error. This error usually requires contacting the
media management vendor.

A:

---> ORA-27191

John Clarke:
My guess is that "2" is an O/S return code, and in
/usr/sys/include/errno.h, you'll see that error# 2 is "no such file or
directory. Accompanied with ORA-27191, I'd guess that your problem is
that your tape library doesn't currently have the tape(s) loaded and/or
can't find them.

Mladen Gogala:
Additional information 2 means that OS returned status 2. That is a
"file not found" error. In plain Spanglish, you cannot
delete files from tape, only from the disk drives.

Niall Litchfield:
The source error is the ora-27191 error
(http://download-
west.oracle.com/docs/cd/B14117_01/server.101/b10744/e24280.htm#ORA-27191)
which suggests a tape library issue to me. You can search for RMAN
errors using the error search page as well
http://otn.oracle.com/pls/db10g/db10g.error_search?search=rman-03009, for example

A:

---> RMAN-03009
RMAN-03009: failure of delete command on ORA_MAINT_SBT_TAPE_1 channel at date/time
RMAN-03009: failure of allocate command on t1 channel at date/time
RMAN-03009: failure of backup command on t1 channel at date/time
etc..

-> Means most of the time that you have Media Management Library problems
-> Can also mean that there is a problem with backup destination (disk not found,
no space, tape not loaded etc..)

ERR 5: Test your Media Management API:
======================================

Testing the Media Management API


On specified platforms, Oracle provides a diagnostic tool called "sbttest". This
utility performs a simple test of the
tape library by acting as the Oracle database server and attempting to communicate
with the media manager.

Obtaining the Utility


On UNIX, the sbttest utility is located in $ORACLE_HOME/bin.

Obtaining Online Documentation


For online documentation of sbttest, issue the following on the command line:

% sbttest

The program displays the list of possible arguments for the program:

Error: backup file name must be specified


Usage: sbttest backup_file_name # this is the only required parameter
<-dbname database_name>
<-trace trace_file_name>
<-remove_before>
<-no_remove_after>
<-read_only>
<-no_regular_backup_restore>
<-no_proxy_backup>
<-no_proxy_restore>
<-file_type n>
<-copy_number n>
<-media_pool n>
<-os_res_size n>
<-pl_res_size n>
<-block_size block_size>
<-block_count block_count>
<-proxy_file os_file_name bk_file_name
[os_res_size pl_res_size block_size block_count]>

The display also indicates the meaning of each argument. For example, following is
the description for two optional parameters:

Optional parameters:
-dbname specifies the database name which will be used by SBT
to identify the backup file. The default is "sbtdb"
-trace specifies the name of a file where the Media Management
software will write diagnostic messages.

Using the Utility

Use sbttest to perform a quick test of the media manager. The following table
explains how to interpret the output:

If sbttest returns... Then...

0
The program ran without error. In other words, the media manager is installed and
can accept a data stream and
return the same data when requested.

non-0
The program encountered an error. Either the media manager is not installed or it
is not configured correctly.

To use sbttest:

Make sure the program is installed, included in your system path, and linked with
Oracle by typing sbttest at the command line:

% sbttest

If the program is operational, you should see a display of the online
documentation.

Execute the program, specifying any of the arguments described in the online
documentation. For example, enter the following
to create test file some_file.f and write the output to sbtio.log:

% sbttest some_file.f -trace sbtio.log

You can also test a backup of an existing datafile. For example, this command
tests datafile tbs_33.f of database PROD:

% sbttest tbs_33.f -dbname prod

Examine the output. If the program encounters an error, it provides messages
describing the failure.
For example, if Oracle cannot find the library, you see:

libobk.so could not be loaded. Check that it is installed properly, and that the
LD_LIBRARY_PATH environment variable (or its equivalent on your platform) includes
the directory where this file can be found. Here is some additional information
on the cause of this error:

ld.so.1: sbttest: fatal: libobk.so: open failed: No such file or directory

ERR 6: RMAN-12004
=================

Hi,

I'm facing this problem; any pointers will be of great help...


1.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 12/16/2003 02:46:31
RMAN-10035: exception raised in RPC:
RMAN-10031: ORA-19624 occurred during call to
DBMS_BACKUP_RESTORE.BACKUPPIECECREATE
RMAN-03015: error occurred in stored script backup_db_full
RMAN-03015: error occurred in stored script backup_del_all_al
RMAN-03007: retryable error occurred during execution of command: backup
RMAN-12004: unhandled exception during command execution on channel t1
RMAN-10035: exception raised in RPC: ORA-19506: failed to create sequential
file, name="l0f93ro5_1_1", parms=""
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: Error received from media manager layer, error text:
sbtbackup: Failed to process backup file.
RMAN-10031: ORA-19624 occurred during call to
DBMS_BACKUP_RESTORE.BACKUPPIECECREATE

OR another errorstack

RMAN-12004: unhandled exception during command execution on channel disk13


RMAN-10035: exception raised in RPC: ORA-19502: write error on file
"/db200_backup/archive_log03/EDPP_ARCH0_21329_1_492222998",
blockno 612353 (blocksize=1024)
ORA-27072: skgfdisp: I/O error
HP-UX Error: 2: No such file or directory
Additional information: 612353
RMAN-10031: ORA-19624 occurred during call to
DBMS_BACKUP_RESTORE.BACKUPPIECECREATE

OR another errorstack

RMAN-12004: unhandled exception during command execution on channel ch00


RMAN-10035: exception raised in RPC: ORA-19599: block number 691 is corrupt in
controlfile C:\ORACLE\ORA90\DATABASE\SNCFSUMMITDB.ORA
RMAN-10031: ORA-19583 occurred during call to
DBMS_BACKUP_RESTORE.BACKUPPIECECREATE

OR another errorstack

Have managed to create a job to backup my db, but I can't restore. I get the
following:
RMAN-03002: failure during compilation of command
RMAN-03013: command type: restore
RMAN-03006: non-retryable error occurred during execution of command: IRESTORE
RMAN-07004: unhandled exception during command execution on channel BackupTest
RMAN-10035: exception raised in RPC: ORA-19573: cannot obtain exclusive enqueue
for datafile 1
RMAN-10031: ORA-19583 occurred during call to
DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE

Seems to relate to corrupt or missing Oracle files.

$$$$

ERR 7: ORA-27211
================

Q:

Continue to get ORA-27211 Failed to load media management library

A:

I had a remarkably similar experience a few months ago with Legato NetWorker and
performed all of the steps
you listed with the same results. The problem turned out to be very simple. The
SA installed the 64-bit version
of the Legato Networker client because it is a 64-bit server. However, we were
running a 32-bit version of Oracle on it.
Installing the 32-bit client solved the problem.

A:

Cause: User-supplied SBT_LIBRARY or libobk.so could not be loaded. Call to dlopen
for media library returned error.
See Additional information for error code.
Action: Retry the command with proper media library. Or re-install Media
management module for Oracle.

A:

Exact Error Message


ORA-27211: Failed to load Media Management Library on HP-UX system

Details:
Overview:

The Oracle return code ORA-27211 implies a failure to load a shared object library
into process space.
Oracle Recovery Manager (RMAN) backups will fail with a message "ORA-27211: Failed
to load Media Management Library"
if the SBT_LIBRARY keyword is defined and points to an incorrect library name. The
SBT_LIBRARY keyword must be set
in the PARMS clause of the ALLOCATE CHANNEL statement in the RMAN script. This
keyword is not valid with the SEND command
and is new to Oracle 9i. If this value is set, it overrides the default search
path for the libobk library.
By default, SBT_LIBRARY is not set.

Troubleshooting:

If an ORA-27211 error is seen for an Oracle RMAN backup, it is necessary to review
the Oracle RMAN script and verify that SBT_LIBRARY is either not set or is set
correctly. If set, the filename should be libobk.sl for HP-UX 10, 11.00 and 11.11,
but libobk.so for HP-UX 11.23 (ia64) clients.

Example of an invalid entry for HP-UX 11.23 (ia64) clients:


PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.sl'

Example of a correct entry for HP-UX 11.23 (ia64) clients:


PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so'
Master Server Log Files: n/a

Media Server Log Files: n/a

Client Log Files:

The RMAN log file on the client will show the following error message:
RMAN-00571: ===========================================
RMAN-00569: ======= ERROR MESSAGE STACK FOLLOWS =======
RMAN-00571: ===========================================
RMAN-03009: failure of allocate command on ch00 channel at 05/21/2005 16:39:17
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 25

Resolution:

The Oracle return code ORA-27211 implies a failure to load a shared object library
into process space.
Oracle RMAN backups will fail with a message "ORA-27211: Failed to load Media
Management Library" if the SBT_LIBRARY keyword
is defined and points to an incorrect library name.

To manually set the SBT_LIBRARY path, follow the steps described below:

1. Modify the RMAN ALLOCATE CHANNEL statement in the backup script to reference
the HP-UX 11.23 library file directly:

PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so'

Note: This setting would be added to each ALLOCATE CHANNEL statement. A restart of
the Oracle instance is not needed for this change to take effect.

2. Run a test backup or wait for the next scheduled backup of the Oracle database
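
For instance, a minimal run block showing the SBT_LIBRARY setting from step 1
(the channel name and the backup command itself are just examples):

run {
  allocate channel ch00 type 'sbt_tape'
    parms 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so';
  backup database;
  release channel ch00;
}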

ERR8: More on DBMS_BACKUP_RESTORE:
==================================

Note 1:

The dbms_backup_restore package is used as a PL/SQL command-line interface for
replacing native RMAN commands, and it has very little documentation.

The Oracle docs note how to install and configure the dbms_backup_restore package:

"The DBMS_BACKUP_RESTORE package is an internal package created by the
dbmsbkrs.sql and prvtbkrs.plb scripts.
This package, along with the target database version of DBMS_RCVMAN, is
automatically installed in every Oracle database
when the catproc.sql script is run. This package interfaces with the Oracle
database server and the operating system
to provide the I/O services for backup and restore operations as directed by
RMAN."
The docs also note that "The DBMS_BACKUP_RESTORE package has a PL/SQL procedure to
normalize filenames on Windows NT platforms."

Oracle DBA John Parker gives this example of dbms_backup_restore to recover a
controlfile:

declare
devtype varchar2(256);
done boolean;
begin
devtype:=dbms_backup_restore.deviceallocate( type=>'sbt_tape',
params=>'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=rdcs,OB2BARLIST=ORA_RDCS_WEEKLY)',
ident=>'t1');
dbms_backup_restore.restoresetdatafile;
dbms_backup_restore.restorecontrolfileto('D:\oracle\ora81\dbs\CTL1rdcs.ORA');
dbms_backup_restore.restorebackuppiece(
'ORA_RDCS_WEEKLY<rdcs_6222:596513521:1>.dbf', DONE=>done );
dbms_backup_restore.restoresetdatafile;
dbms_backup_restore.restorecontrolfileto('D:\DBS\RDCS\CTL2RDCS.ORA');
dbms_backup_restore.restorebackuppiece(
'ORA_RDCS_WEEKLY<rdcs_6222:596513521:1>.dbf', DONE=>done );
dbms_backup_restore.devicedeallocate('t1');
end;

Here are some other examples of using dbms_backup_restore:

DECLARE
devtype varchar2(256);
done boolean;
BEGIN
devtype := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN');
dbms_backup_restore.RestoreSetDatafile;
dbms_backup_restore.RestoreDatafileTo(dfnumber => 1,toname =>
'D:\ORACLE_BASE\datafiles\SYSTEM01.DBF');
dbms_backup_restore.RestoreDatafileTo(dfnumber => 2,toname =>
'D:\ORACLE_BASE\datafiles\UNDOTBS.DBF');
--dbms_backup_restore.RestoreDatafileTo(dfnumber => 3,toname =>
'D:\ORACLE_BASE\datafiles\MYSPACE.DBF');
dbms_backup_restore.RestoreBackupPiece(done => done,handle =>
'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_DF_BCK05H2LLQP_1_1', params => null);
dbms_backup_restore.DeviceDeallocate;
END;
/

--restore archived redolog


DECLARE
devtype varchar2(256);
done boolean;
BEGIN
devtype := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN');
dbms_backup_restore.RestoreSetArchivedLog(destination=>'D:\ORACLE_BASE\achive\');
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>1);
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>2);
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>3);
dbms_backup_restore.RestoreBackupPiece(done => done,handle =>
'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_LOG_BCK0DH1JGND_1_1', params => null);
dbms_backup_restore.DeviceDeallocate;
END;
/

Note 2:
-------

--restore controlfile
DECLARE
devtype varchar2(256);
done boolean;
BEGIN
devtype := dbms_backup_restore.DeviceAllocate(type => '',ident => 'FUN');
dbms_backup_restore.RestoresetdataFile;
dbms_backup_restore.RestoreControlFileto('D:\ORACLE_BASE\controlfiles\CONTROL01.CT
L');
dbms_backup_restore.RestoreBackupPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1
JBVA_1_1',done => done);
dbms_backup_restore.RestoresetdataFile;
dbms_backup_restore.RestoreControlFileto('D:\ORACLE_BASE\controlfiles\CONTROL02.CT
L');
dbms_backup_restore.RestoreBackupPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1
JBVA_1_1',done => done);
dbms_backup_restore.RestoresetdataFile;
dbms_backup_restore.RestoreControlFileto('D:\ORACLE_BASE\controlfiles\CONTROL03.CT
L');
dbms_backup_restore.RestoreBackupPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1
JBVA_1_1',done => done);
dbms_backup_restore.DeviceDeallocate;
END;
/

--restore datafile
DECLARE
devtype varchar2(256);
done boolean;
BEGIN
devtype := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN');
dbms_backup_restore.RestoreSetDatafile;
dbms_backup_restore.RestoreDatafileTo(dfnumber => 1,toname =>
'D:\ORACLE_BASE\datafiles\SYSTEM01.DBF');
dbms_backup_restore.RestoreDatafileTo(dfnumber => 2,toname =>
'D:\ORACLE_BASE\datafiles\UNDOTBS.DBF');
--dbms_backup_restore.RestoreDatafileTo(dfnumber => 3,toname =>
'D:\ORACLE_BASE\datafiles\MYSPACE.DBF');
dbms_backup_restore.RestoreBackupPiece(done => done,handle =>
'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_DF_BCK05H2LLQP_1_1', params => null);
dbms_backup_restore.DeviceDeallocate;
END;
/

--restore archived redolog


DECLARE
devtype varchar2(256);
done boolean;
BEGIN
devtype := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN');
dbms_backup_restore.RestoreSetArchivedLog(destination=>'D:\ORACLE_BASE\achive\');
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>1);
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>2);
dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>3);
dbms_backup_restore.RestoreBackupPiece(done => done,handle =>
'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_LOG_BCK0DH1JGND_1_1', params => null);
dbms_backup_restore.DeviceDeallocate;
END;
/

ERR 9: RMAN-00554 initialization of internal recovery manager package failed:
=============================================================================

connected to target database: PLAYROCA (DBID=575215626)


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04004: error from recovery catalog database: ORA-03135: connection lost
contact

keys:
RMAN-00554
RMAN-04004
ORA-03135
ORA-3136

>>>> In the alert log of the rman catalog database, we can find:

WARNING: inbound connection timed out (ORA-3136)


Thu Mar 13 23:09:54 2008

>>>> In Net logs sqlnet.log we can find:

Warning: Errors detected in file /dbms/tdbaplay/ora10g/home/network/log/sqlnet.log

> ***********************************************************************
> Fatal NI connect error 12170.
>
> VERSION INFORMATION:
> TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production
> TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version
10.2.0.3.0 - Production
> Oracle Bequeath NT Protocol Adapter for IBM/AIX RISC System/6000:
Version 10.2.0.3.0 - Production
> Time: 18-MAR-2008 23:01:43
> Tracing not turned on.
> Tns error struct:
> ns main err code: 12535
> TNS-12535: TNS:operation timed out
> ns secondary err code: 12606
> nt main err code: 0
> nt secondary err code: 0
> nt OS err code: 0
> Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=57.232.4.123)(PORT=35844))

Note 1:
-------

RMAN-00554: initialization of internal recovery manager package failed

This is a general error code. You must turn your attention to the codes underneath
this one.
For example:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06003: ORACLE error from target database:
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/devel/dev02/dev10g/standbyctl.ctl'

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database: ORA-01017: invalid
username/password;

Note 2:
-------

RMAN-04004: error from recovery catalog database: ORA-03135: connection lost contact

ERR 10: RMAN-00554 initialization of internal recovery manager package failed:
==============================================================================

Starting backup at 17-MAY-08


released channel: t1
released channel: t2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 05/17/2008 23:30:13
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20242:
specification does not match any archive log in the recovery catalog

Note 1:
-------

Oracle Error :: RMAN-20242
specification does not match any archivelog in the recovery catalog

Cause
No archive logs in the specified archive log range could be found.

Action
Check the archive log specifier.

Note 2:
-------

Some of the common RMAN errors are:

RMAN-20242: Specification does not match any archivelog in the recovery catalog.

Add to RMAN script: sql 'alter system archive log current';

Note 3:
-------

Q:

RMAN-20242: specification does not match any archive log in the recovery catalog

A couple of archive log files were deleted from the OS. They still show up in the
list of archive logs in Enterprise Manager.
I want to fix this because now whenever I try to run a crosscheck command, I get
the message:

RMAN-20242: specification does not match any archive log in the recovery catalog

I also tried to uncatalog those files, but got the same message.

Any suggestions on what to do?

Thanks!

A:

hi,
from rman run the command

list expired archivelog;

if the archives are in this list they will show; then I think you should do a

crosscheck archivelog all;

then you should be able to delete them.

regards

Note 4:
-------

The RMAN error number would be helpful, but this is a common problem - RMAN-20242
- and is addressed in detail in MetaLink notes.
Either the name specification (the one you entered) is wrong, or you could be
using mismatched versions
between RMAN and the database (don't know since you didn't provide any version
details).

Note 5:
-------

Q:

Hi there!

We are having problems with an Oracle backup. The compiling of the backup
command fails with the error message: RMAN-20242: specification does not
match any archivelog in the recovery catalog

But RMAN is only supposed to backup any archived logs that are there and
then insert them in the catalog...
Did anybody experience anything similar?

This is 8.1.7 on HP-UX with Legato Networker

Thanks,

A:

If I ask rman to back up archivelogs that are more than 2 days old and there are
none, that's not an error.
That is when I see it the most. Most companies will force a log switch after a
set amount of time during the day, so in a DR scenario
you don't lose days worth of redo that might still be sitting in an online redo
log if it gets lost.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$

Now we will do some test of RMAN on a testsystem with Oracle 10g R2

Test Case 1:
============

10g Database test10g:

TEST10G:
startup mount pfile=c:\oracle\admin\test10g\pfile\init.ora
alter database archivelog;
archive log start;
alter database force logging;
alter database add supplemental log data;
alter database open;

Files and tablespaces:

>>> User albert creates table TEST

CREATE TABLE test (
  id   number,
  name varchar2(10));

insert into test values (1,'test1');

commit;

>>> make full RMAN backup BACKUP 1:

>>> Some time later, albert inserts second record

insert into test values (2,'test2');

commit;

>>> make full RMAN backup BACKUP 2:

>>> Now investigate some SCN's:

SQL> select CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN


from v$database;

CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN


------------------ --------------------- --------------- -----------
888889 1745 889087 889154

SQL> select
CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN,archivelog_ch
ange# from v$database;

CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN


ARCHIVELOG_CHANGE#
------------------ --------------------- --------------- -----------
------------------
889090 1748 889087 890538
889090

SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual;

DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
-----------------------------------------
890599

SQL> select
CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN,archivelog_ch
ange# from v$database;

CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN


ARCHIVELOG_CHANGE#
------------------ --------------------- --------------- -----------
------------------
889090 1748 889087 890678
889090

SQL> select
file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHANGE#,NAME from
v$datafile;

FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE# ONLINE_CHANGE# NAME


---------- ------------------ ------------ --------------- --------------
--------------------------
1 888936 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSTEM01.DBF
2 888936 534906 534907
C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.DBF
3 888936 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF
4 888936 534906 534907
C:\ORACLE\ORADATA\TEST10G\USERS01.DBF
5 888936 0 0
C:\ORACLE\ORADATA\TEST10G\EXAMPLE01.DBF
6 888936 0 0
C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF

6 rows selected.

SQL> select RL_SEQUENCE#,RL_FIRST_CHANGE#,RL_NEXT_CHANGE# from V$BACKUP_FILES


......
151 888889 889090
151 888889 889090

SQL> select SEQUENCE#,FIRST_CHANGE#, STATUS from v$log;

SEQUENCE# FIRST_CHANGE# STATUS


---------- ------------- ----------------
152 889090 CURRENT
150 887780 INACTIVE
151 888889 INACTIVE

SQL> select SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$log_history


.........
147 880266 882166
148 882166 882431
149 882431 887780
150 887780 888889
151 888889 889090
>>> Some time later, albert inserts third record

insert into test values (3,'test3');

commit;

>>> shutdown database

>>> delete a datafile

data file 6: 'C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF'

>>> startup database

SQL> alter database open;


alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: 'C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF'

>>> RECOVER WITH RMAN

RMAN> RESTORE DATABASE;


RMAN> RECOVER DATABASE;

>>> logon as albert

SQL> select * from test;

ID NAME
---------- ----------
3 test3
1 test1
2 test2

>>> logon as system

SQL> select CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN


from v$database;

CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN


------------------ --------------------- --------------- -----------
891236 1780 889087 891702

SQL> select
file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHANGE#,NAME from
v$datafile;

FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE# ONLINE_CHANGE# NAME


---------- ------------------ ------------ --------------- --------------
--------------------------
1 891236 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSTEM01.DBF
2 891236 534906 534907
C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.DBF
3 891236 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF
4 891236 534906 534907
C:\ORACLE\ORADATA\TEST10G\USERS01.DBF
5 891236 0 0
C:\ORACLE\ORADATA\TEST10G\EXAMPLE01.DBF
6 891236 0 0
C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF

6 rows selected.

SQL> select
CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN,archivelog_ch
ange#
from v$database;

CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN


ARCHIVELOG_CHANGE#
------------------ --------------------- --------------- -----------
------------------
893124 1785 889087 893131
889090

SQL> select
file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHANGE#,NAME from
v$datafil
e;

FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE# ONLINE_CHANGE# NAME


---------- ------------------ ------------ --------------- --------------
--------------------------
1 893124 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSTEM01.DBF
2 893124 534906 534907
C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.DBF
3 893124 534906 534907
C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF
4 893124 534906 534907
C:\ORACLE\ORADATA\TEST10G\USERS01.DBF
5 893124 0 0
C:\ORACLE\ORADATA\TEST10G\EXAMPLE01.DBF
6 893124 0 0
C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF

6 rows selected.

select THREAD#,SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$log_history;

....
1 149 882431 887780
1 150 887780 888889
1 151 888889 889090
1 152 889090 893499
1 153 893499 895665
1 154 895665 896834
1 155 896834 898275
1 156 898275 899008

select THREAD#,SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$archived_log;

1 149 882431 887780


1 149 882431 887780
1 150 887780 888889
1 150 887780 888889
1 151 888889 889090
1 151 888889 889090
1 152 889090 893499
1 152 889090 893499
1 153 893499 895665
1 153 893499 895665
1 154 895665 896834
1 154 895665 896834
1 155 896834 898275
1 155 896834 898275
1 156 898275 899008
1 156 898275 899008

END TESTCASE 1:
===============

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$

----------------------------------

The V$RMAN_OUTPUT memory-only view shows the output of a currently executing RMAN
job, whereas the
V$RMAN_STATUS control file view indicates the status of both executing and
completed RMAN jobs.
The V$BACKUP_FILES view provides access to the information used as the basis of the
LIST BACKUP and REPORT OBSOLETE commands.

Best views to obtain backup information are:

V$RMAN_STATUS
V$BACKUP_FILES
v$archived_log
v$log_history
v$database;

You can also list backups by querying V$BACKUP_FILES and the RC_BACKUP_FILES
recovery catalog view.
These views provide access to the same information as the LIST BACKUPSET command.
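
For example, a quick status query against V$RMAN_STATUS:

SELECT operation, status, object_type,
       to_char(start_time,'DD-MM-YYYY HH24:MI') started,
       to_char(end_time,'DD-MM-YYYY HH24:MI')   ended
FROM   v$rman_status
ORDER  BY start_time;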

----------------------------------
Enhanced Reporting: RESTORE PREVIEW

The PREVIEW option to the RESTORE command can now tell you which backups will be
accessed during a RESTORE operation.
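
For example:

RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
RMAN> RESTORE TABLESPACE users PREVIEW;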
----------------------------------

>> To run RMAN commands interactively, start RMAN and then type commands into the
command-line interface.
For example, you can start RMAN from the UNIX command shell and then execute
interactive commands as follows:

% rman TARGET SYS/oracle@trgt CATALOG rman/cat@catdb


% rman TARGET=SYS/oracle@trgt CATALOG=rman/cat@catdb

----------------------------------

>> Command files


In this example, a sample RMAN script is placed into a command file called
commandfile.rcv.
You can run this file from the operating system command line and write the output
into the log file
outfile.txt as follows:

% rman TARGET / CATALOG rman/cat@catdb CMDFILE commandfile.rcv LOG outfile.txt

----------------------------------

Run the CONFIGURE DEFAULT DEVICE TYPE command to specify a default device type for
automatic channels.
For example, you may make backups to tape most of the time and only occasionally
make a backup to disk.
In this case, configure channels for disk and tape devices, but make sbt the
default device type:

CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # configure device disk
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;  # configure device sbt
CONFIGURE DEFAULT DEVICE TYPE TO sbt;

Now, RMAN will, by default, use sbt channels for backups. For example, if you run
the following command:

BACKUP TABLESPACE users;

RMAN only allocates channels of type sbt during the backup because sbt is the
default device.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$

24.2 Older RMAN stuff: 8,8i,9i:
===============================

24.1 Introduction:
------------------
Recovery Manager (RMAN) is an Oracle tool that allows you to back up,
copy, restore, and recover datafiles, control files, and archived redo logs.
It is included with the Oracle server and does not require separate installation.
You can invoke RMAN as a command line utility from the operating system (O/S)
prompt
or use the GUI-based Enterprise Manager Backup Manager.

RMAN users "server sessions" to automate many of the backup and recovery tasks
that
were formerly performed manually. For example, instead of requiring you to
locate appropriate backups for each datafile, copy them to the correct place using

operating system commands, and choose which archived logs to apply,


RMAN manages these tasks automatically.

RMAN stores metadata about its backup and recovery operations in the recovery
catalog,
which is a centralized repository of information, or exclusively in the control
file.
Typically, the recovery catalog is stored in a separate database.
If you do not use a recovery catalog, RMAN uses the control file as its repository
of metadata.

RMAN can be used on a database in archive mode or no archive mode.

!!!! But, for open backups, the database MUST BE in ARCHIVE MODE.
That's true for Oracle 8, 8i, 9i and 10g.

RMAN doesn't do a "begin backup". It is not necessary when you use RMAN.
RMAN does an intelligent copy of the database blocks (as opposed to a simple OS
copy) and it ensures we do not copy a fractured block. The whole purpose of the
begin backup (of the OS type of backup) is to record more info into the redo logs
in the event an OS copy
copies a "fractured block" - where the head and tail do not match (can happen
since we are WRITING to the database at the same time the backup would be
reading). When RMAN hits such a block -- it re-reads it to get a clean copy.

How to start RMAN?

- You can call from unix, or cmd prompt, the RMAN utility:

$ rman

RMAN>

Once started you will see the RMAN> prompt.

- Or you can give command line paramaters along with the rman call

% rman target sys/sys_pwd@prod1 catalog rman/rman@rcat

24.2 Types of commands, and interactive mode or batch mode:
-----------------------------------------------------------

RMAN uses two basic types of commands: stand-alone commands and job commands.
- The job commands always appear within the brackets of a run command.
- The stand-alone command can be issued right after the RMAN prompt.

You can run RMAN in interactive mode or batch mode

- examples of interactive mode:

RMAN> run {
2> allocate channel d1 type disk;
3> backup database;
4> }

RMAN> run {
allocate channel c1 type disk;
copy datafile 6 to 'F:\oracle\backups\oem01.cpy';
release channel c1;
}

RMAN> run {
allocate channel c1 type disk;
backup format 'F:\oracle\backups\oem01.rbu' ( datafile 6 );
release channel c1;
}

RMAN> run {
allocate channel c1 type 'sbt_tape';
restore database;
recover database;
}

Note about 'channel':

You must allocate a 'channel" before you execute backup and recovery commands.
Each allocated channel establishes a connection from RMAN to a target database
by starting a server session on the instance. This server session performs
the backup and recovery operations.
Only one RMAN session communicates with the allocated server sessions.

You can allocate multiple channels, thus allowing a single RMAN command
to read or write multiple backups or image copies in parallel.
Thus, the number of channels that you allocate affects the degree of parallelism
within a command.
When backing up to tape you should allocate one channel for each physical device,
but when backing up to disk you can allocate as many channels
as necessary for maximum throughput.

The simplest way to determine whether RMAN encountered an error is to examine its
return code.
RMAN returns 0 to the operating system if no errors occurred, 1 otherwise.
For example, if you are running UNIX and using the C shell,
RMAN outputs the return code into a shell variable called $status.

The second easiest way is to search the Recovery Manager output for the
string RMAN-00569, which is the message number for the error stack banner.
All RMAN errors are preceded by this error message.
If you do not see an RMAN-00569 message in the output, then there are no errors.
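
A minimal Bourne-shell sketch combining both checks (the file names reuse the
batch-mode example further down):

rman target / catalog rman/rman@rcat @b_whole_l0.rcv log rman_log.f
if [ $? -ne 0 ]; then
  echo "RMAN returned a non-zero exit code, check rman_log.f"
fi
grep RMAN-00569 rman_log.f && echo "error stack found in rman_log.f"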

- example of batch mode:

You can type RMAN commands into a file, and then run the command file
by specifying its name on the command line.
The contents of the command file should be identical
to commands entered at the command line. Suppose the commandfile is
called 'b_whole_l0.rcv', then the rman call could be as in the following example:

$ rman target / catalog rman/rman@rcat @b_whole_l0.rcv log rman_log.f

Another example:

c:> rman target xxx/yyy@target rcvcat aaa/bbb@catalog cmdfile bkdb.scr msglog bkdb.log

24.3. Recovery Manager Repository or RMAN Catalog:
--------------------------------------------------

Storage of the RMAN Repository in the Recovery Catalog, or exclusively in the
target database controlfile:

The RMAN repository is the collection of metadata about your target databases
that RMAN uses to conduct its backup, recovery, and maintenance operations.
You can either create a recovery catalog in which to store this information,
or let RMAN store it exclusively in the target database control file.
Although RMAN can conduct all major backup and recovery operations using
just the control file, some RMAN commands function only when you use a recovery
catalog.

The recovery catalog is maintained solely by RMAN; the target database never
accesses it directly. RMAN propagates information about the database structure,
archived redo logs, backup sets, and datafile copies
into the recovery catalog from the target database's control file.

A single recovery catalog is able to store information for multiple target
databases.

What is in the recovery catalog?
--------------------------------

-Datafile and archived redo log backup sets and backup pieces.
-Datafile copies.
-Archived redo logs and their copies.
-Tablespaces and datafiles on the target database.
-Stored scripts, which are named user-created sequences of RMAN and SQL commands.

Resynchronization of the Recovery Catalog
-----------------------------------------

The recovery catalog obtains crucial RMAN metadata from the target database
control file. Resynchronization of the recovery catalog ensures that the
metadata that RMAN obtains from the control file stays current.

Resynchronizations can be full or partial. In a partial resynchronization,
RMAN reads the current control file to update changed data, but does not
resynchronize metadata about the database physical schema: datafiles,
tablespaces, redo threads, rollback segments (only if the database is open),
and online redo logs. In a full resynchronization, RMAN updates all changed
records, including schema records.

When you issue certain commands in RMAN, the program automatically detects
when it needs to perform a full or partial resynchronization and executes
the operation as needed.
You can also force a full resynchronization by issuing a 'resync catalog' command.

It is a good idea to run RMAN once a day or so and issue the resync catalog
command to ensure that the catalog stays current. Because the control file
employs a circular reuse system, backup and copy records eventually get
overwritten.
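
A minimal daily job could look like this (scheduled from cron, for example;
the catalog connect string rman/rman@rcat is the one used elsewhere in this
section):

rman target / catalog rman/rman@rcat <<EOF
resync catalog;
exit;
EOF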


24.4 Media Manager:
-------------------

To utilize tape storage for your database backups, RMAN requires a media manager.
A media manager is a utility that loads, labels,
and unloads sequential media such as tape drives for the purpose of backing up and
recovering data.
Note that Oracle does not need to connect to the media management
library (MML) software when it backs up to disk.

Software that is compliant with the MML interface enables an Oracle server
session to issue commands to the media manager to back up or restore a file.
The media manager responds to the command by loading, labeling, or unloading
the requested tape.

24.5 Backups:
-------------

When you execute the backup command, you create one or more backup sets.
A backup set, which is a logical construction, contains one or more physical
backup pieces.
Backup pieces are operating system files that contain the backed up datafiles,
control files, or archived redo logs. You cannot split a file across different
backup sets
or mix archived redo logs and datafiles into a single backup set.

A backup set is a complete set of backup pieces that constitute a full or
incremental backup of the objects specified in the backup command. Backup sets
are in an RMAN-specific format; image copies, in contrast, are available for
use without additional processing.

So, for example:

You can have a backupset 'backupset 1' containing just 1 datafile.
You can have a backupset 'backupset 2' containing many datafiles, as blocks.
You can have a backupset 'backupset 3' containing archived redologs.

You can either let RMAN determine a unique name for the backup piece or use
the format parameter to specify a name. If you do not specify a filename,
RMAN uses the %U substitution variable to guarantee a unique name. The backup
command provides substitution variables that allow you to generate unique
filenames.
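
For example (a sketch; the format string is arbitrary, using the substitution
variables that also appear in the scripts later in this section: %d = database
name, %t = timestamp, %s = backup set number, %p = piece number):

run {
allocate channel d1 type disk;
backup
format '/backup/df_%d_t%t_s%s_p%p'
(database);
}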

24.6 Starting RMAN Sessions:
----------------------------

Example 1: connect to target database
-------------------------------------

$ ORACLE_SID=brdb;export ORACLE_SID

$rman
RMAN>connect target sys/password
RMAN .. connected

Example 2: connect to catalog database
--------------------------------------

$rman
RMAN>connect catalog rman/rman
RMAN .. connected

Starting and stopping target database

$ ORACLE_SID=brdb;export ORACLE_SID

$rman
RMAN>connect target sys/password
RMAN .. connected

RMAN>startup -- will start the target database

RMAN>shutdown -- will stop the target database

Example 3: starting RMAN with command parameters:
-------------------------------------------------

$ ORACLE_SID=brdb;export ORACLE_SID

$ rman target sys/password@prod1 catalog rman/rman@rcat
$ rman target sys/cactus@playroca catalog rman/cactus@playrman

24.7 Creating the Recovery Catalog:
-----------------------------------

- create a database for the Recovery Catalog, for example rcdb

- create the user that will hold the catalog, rman with password rman

create user rman identified by rman
default tablespace rman
temporary tablespace temp;

- give the right permissions:

grant connect, resource, recovery_catalog_owner to rman;

- create the catalog in database rcdb

In 8.0, to set up the Recovery Catalog, you can run
$ORACLE_HOME/rdbms/admin/catrman.sql while connected to the RMAN database.

In 8.1 and later, to setup the Recovery Catalog, use the create catalog command.

$ rman
RMAN>connect catalog rman/rman

RMAN-06008 connected to recovery catalog database
RMAN-06428 recovery catalog is not installed

RMAN>create catalog tablespace rman;

RMAN-06431 recovery catalog created

You can expect something like the following to exist in the rcdb database:

SQL> select table_name, tablespace_name, owner
  2  from dba_tables where owner='RMAN';

TABLE_NAME                     TABLESPACE_NAME                OWNER
------------------------------ ------------------------------ ------
AL DATA RMAN
BCB DATA RMAN
BCF DATA RMAN
BDF DATA RMAN
BP DATA RMAN
BRL DATA RMAN
BS DATA RMAN
CCB DATA RMAN
CCF DATA RMAN
CDF DATA RMAN
CKP DATA RMAN
CONFIG DATA RMAN
DB DATA RMAN
DBINC DATA RMAN
DF DATA RMAN
DFATT DATA RMAN
OFFR DATA RMAN
ORL DATA RMAN
RCVER DATA RMAN
RLH DATA RMAN
RR DATA RMAN
RT DATA RMAN
SCR DATA RMAN
SCRL DATA RMAN
TS DATA RMAN
TSATT DATA RMAN
XCF DATA RMAN
XDF DATA RMAN

28 rows selected.

SQL> select view_name, owner
  2  from dba_views where owner='RMAN';

8, 8i:
------

VIEW_NAME OWNER
------------------------------ -----
RC_ARCHIVED_LOG RMAN
RC_BACKUP_CONTROLFILE RMAN
RC_BACKUP_CORRUPTION RMAN
RC_BACKUP_DATAFILE RMAN
RC_BACKUP_PIECE RMAN
RC_BACKUP_REDOLOG RMAN
RC_BACKUP_SET RMAN
RC_CHECKPOINT RMAN
RC_CONTROLFILE_COPY RMAN
RC_COPY_CORRUPTION RMAN
RC_DATABASE RMAN
RC_DATABASE_INCARNATION RMAN
RC_DATAFILE RMAN
RC_DATAFILE_COPY RMAN
RC_LOG_HISTORY RMAN
RC_OFFLINE_RANGE RMAN
RC_PROXY_CONTROLFILE RMAN
RC_PROXY_DATAFILE RMAN
RC_REDO_LOG RMAN
RC_REDO_THREAD RMAN
RC_RESYNC RMAN
RC_STORED_SCRIPT RMAN
RC_STORED_SCRIPT_LINE RMAN
RC_TABLESPACE RMAN

24 rows selected.

The recovery catalog is now installed in the database rcdb.

10g:
----
SQL> select view_name from dba_views where view_name like '%RMAN%';

VIEW_NAME
------------------------------
V_$RMAN_CONFIGURATION
GV_$RMAN_CONFIGURATION
V_$RMAN_STATUS
V_$RMAN_OUTPUT
GV_$RMAN_OUTPUT
V_$RMAN_BACKUP_SUBJOB_DETAILS
V_$RMAN_BACKUP_JOB_DETAILS
V_$RMAN_BACKUP_TYPE
MGMT$HA_RMAN_CONFIG
RC_RMAN_OUTPUT
RC_RMAN_BACKUP_SUBJOB_DETAILS
RC_RMAN_BACKUP_JOB_DETAILS
RC_RMAN_BACKUP_TYPE
RC_RMAN_CONFIGURATION
RC_RMAN_STATUS

15 rows selected.

SQL> select view_name, owner
  2  from dba_views where owner='RMAN';

VIEW_NAME
------------------------------
RC_RMAN_OUTPUT
RC_BACKUP_FILES
RC_RMAN_BACKUP_SUBJOB_DETAILS
RC_RMAN_BACKUP_JOB_DETAILS
RC_BACKUP_SET_DETAILS
RC_BACKUP_PIECE_DETAILS
RC_BACKUP_COPY_DETAILS
RC_PROXY_COPY_DETAILS
RC_PROXY_ARCHIVELOG_DETAILS
RC_BACKUP_DATAFILE_DETAILS
RC_BACKUP_CONTROLFILE_DETAILS
RC_BACKUP_ARCHIVELOG_DETAILS
RC_BACKUP_SPFILE_DETAILS
RC_BACKUP_SET_SUMMARY
RC_BACKUP_DATAFILE_SUMMARY
RC_BACKUP_CONTROLFILE_SUMMARY
RC_BACKUP_ARCHIVELOG_SUMMARY
RC_BACKUP_SPFILE_SUMMARY
RC_BACKUP_COPY_SUMMARY
RC_PROXY_COPY_SUMMARY
RC_PROXY_ARCHIVELOG_SUMMARY
RC_UNUSABLE_BACKUPFILE_DETAILS
RC_RMAN_BACKUP_TYPE
RC_DATABASE
RC_DATABASE_INCARNATION
RC_RESYNC
RC_CHECKPOINT
RC_TABLESPACE
RC_DATAFILE
RC_TEMPFILE
RC_REDO_THREAD
RC_REDO_LOG
RC_LOG_HISTORY
RC_ARCHIVED_LOG
RC_BACKUP_SET
RC_BACKUP_PIECE
RC_BACKUP_DATAFILE
RC_BACKUP_CONTROLFILE
RC_BACKUP_SPFILE
RC_DATAFILE_COPY
RC_CONTROLFILE_COPY
RC_BACKUP_REDOLOG
RC_BACKUP_CORRUPTION
RC_COPY_CORRUPTION
RC_OFFLINE_RANGE
RC_STORED_SCRIPT
RC_STORED_SCRIPT_LINE
RC_PROXY_DATAFILE
RC_PROXY_CONTROLFILE
RC_RMAN_CONFIGURATION
RC_DATABASE_BLOCK_CORRUPTION
RC_PROXY_ARCHIVEDLOG
RC_RMAN_STATUS

53 rows selected.

Compatibility:
---------------

If you use an 8.1.6 RMAN executable to execute the "create catalog" command,
then the recovery catalog is created as a release 8.1.6 recovery catalog.
Compatibility=8.1.6
You cannot use the 8.1.6 catalog with a pre-8.1.6 release of the RMAN executable.

If you use an 8.1.6 RMAN executable to execute the "upgrade catalog" command,
then the recovery catalog is upgraded from a pre-8.1.6 release to a release 8.1.6
catalog.
Compatibility=8.0.4
The 8.1.6 catalog is backwards compatible with older releases of the RMAN
executable.

To view compatibility:

SQL> SELECT value FROM config WHERE name='compatible';

Use an older RMAN to create the catalog.
Use the newer RMAN to upgrade the catalog.

You can always do:

RMAN> configure compatible = 8.1.5;

*** EXTRA: different RMAN CATALOGS in 1 DATABASE ***


Different versions in one database:
-----------------------------------

In general, the rules of RMAN compatibility are as follows:

- The RMAN catalog schema version (tables/views) should be greater than or equal
to the catalog database version.
- The RMAN catalog is backwards compatible with target databases from earlier
releases.
- The versions of the RMAN executable and the target database should be the same.

- RMAN cannot create release 8.1 or later catalog schemas in 8.0 catalog
databases.

Suppose you have 8.0.5 and 9i target databases.

- create one 9i database rcdb
- create 2 tablespaces: RCAT80 and RCAT9I
- create corresponding rman users

Create the 8.0.5 catalog in the 9.2.0 catalog database.

# sql syntax for creating logical catalog 8.0.5 structure.
create tablespace RCAT80 datafile
'/export/home/dfreneuil/D817F/DATAFILES/rcat80_01.dbf' size 20M ;

Create the 9.2.0 catalog in the 9.2.0 catalog database.

# sql syntax for creating logical catalog 9i structure.
create tablespace RCAT9I datafile
'/export/home/dfreneuil/D920F/DATAFILES/rcat9i_01.dbf' size 20M ;

# sql syntax for creating catalog 8.0.5 user owner.
create user RMAN80 identified by rman80
default tablespace RCAT80
temporary tablespace temp
quota unlimited on RCAT80 ;

grant connect, resource,recovery_catalog_owner to rman80 ;

# sql syntax for creating catalog 9i user owner.
create user RMAN9I identified by rman9i
default tablespace RCAT9I
temporary tablespace temp
quota unlimited on RCAT9I ;

grant connect, resource,recovery_catalog_owner to rman9i ;

- make tnsnames.ora OK

- Create the 2 catalogs:

9.2.0 catalog views creation.

$ rman catalog rman9i/rman9i          -- to connect locally
or
$ rman catalog rman9i/rman9i@alias    -- to connect through NET8

RMAN> create catalog ;

8.0.5 catalog views creation.

Since the catalog database is an 8.1.7 database, connect to the 8.0.5 catalog
via 8.0.5 SQL*Plus.

$ sqlplus rman80/rman80@alias_to_rcat80
--> connect from the target machine to the 8.0.5 catalog.
SQL> @?/rdbms/admin/catrman.sql

Backup an 8.0.5 database with 8.0.5 RMAN into an 8.0.5 catalog in a 9.2.0
catalog database.

$ rman rcvcat rman80/rman80@V817

8.0.5 db ----> 8.0.5 RMAN ----> 8.0.5 catalog in 9.2.0 db
9.2.0 db ----> 9.2.0 RMAN ----> 9.2.0 catalog in 9.2.0 db

*** END EXTRA ***

24.8 Registering and un-registering the target database:
--------------------------------------------------------

Register:
---------

Now we must 'register' the target database.
Suppose the target database is called 'airm'.

Connect to the target and the catalog:

$ rman target / catalog rman/rman@rcdb

or

$ rman system/passw@airm catalog rman/rman@rcdb

RMAN-06005 connected to target database: AIRM
RMAN-06008 connected to recovery catalog database

RMAN>register database;

And the airm database will be registered in the catalog.
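
You can verify the registration, for example with report schema, which should
now list the datafiles of airm as recorded in the repository:

RMAN> report schema;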

If you connect to rcdb and run the following query before and after
registering airm, you can see the new record appear:

SQL> connect system/manager@rcdb
Connected.

before registering:
SQL> select * from rman.db;
no rows selected

after registering:
SQL> select * from rman.db;

    DB_KEY      DB_ID CURR_DBINC_KEY
---------- ---------- --------------
         1 2092303715              2

Unregister:
-----------

It's best to unregister the backups from the catalog first:

RMAN> list backup of database;

RMAN-03022: compiling command: list

This shows the available backupsets with their numbers, for example 989.

RMAN> allocate channel for maintenance type disk;
RMAN> change backupset 989 delete;

Next we un-register the target database. You will not use rman, but a special
procedure. You must use this procedure with the DB_KEY and DB_ID parameters
as values.

In SQL*Plus:

SQL>execute dbms_rcvcat.unregisterdatabase(1,2092303715)

and the airm database will be unregistered.
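
If you no longer know the DB_KEY and DB_ID values, you can first look them up
in the catalog (RC_DATABASE is a standard recovery catalog view):

SQL> connect rman/rman@rcdb
SQL> select db_key, db_id, name from rc_database;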

24.9 Reset of the catalog:
--------------------------

If you have opened the target database with the 'RESETLOGS' option,
you have in fact created a new 'incarnation' of the database.

This information must be 'told' to the recovery catalog via the
'reset database' command:

$ rman target sys/passw catalog rman/rman@rcdb

RMAN>reset database;

-- VALIDATE:
-- ---------
You can use the VALIDATE option of the BACKUP command to verify that database
files exist and are in the correct locations,
and have no physical or logical corruptions that would prevent RMAN from creating
backups of them.
When performing a BACKUP... VALIDATE, RMAN reads the files to be backed up in
their entirety, as it would during
a real backup. It does not, however, actually produce any backup sets or image
copies.

If the backup validation discovers corrupt blocks, then RMAN updates the
V$DATABASE_BLOCK_CORRUPTION view
with rows describing the corruptions. You can repair corruptions using block media
recovery, documented in
Oracle Database Backup and Recovery Advanced User's Guide. After a corrupt block
is repaired,
the row identifying this block is deleted from the view.
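
For example, after a validate run you can list any corrupt blocks that were
recorded:

SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;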

For example, you can validate that all database files and archived logs can be
backed up by running a command as follows:

BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

The RMAN client displays the same output that it would if it were really backing
up the files.
If RMAN cannot validate the backup of one or more of the files, then it issues an
error message.
For example, RMAN may show output similar to the following:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/29/2002 14:33:47
ORA-19625: error identifying file /oracle/oradata/trgt/arch/archive1_6.dbf
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

-- CONTROLFILE AUTOBACKUP
-- ----------------------

Configuring Control File and Server Parameter File Autobackup


RMAN can be configured to automatically back up the control file and server
parameter file whenever
the database structure metadata in the control file changes and whenever a backup
record is added.
The autobackup enables RMAN to recover the database even if the current control
file, catalog, and server
parameter file are lost.

Because the filename for the autobackup uses a well-known format, RMAN can search
for it without access
to a repository, and then restore the server parameter file. After you have
started the instance with the
restored server parameter file, RMAN can restore the control file from an
autobackup. After you mount
the control file, the RMAN repository is available and RMAN can restore the
datafiles and find
the archived redo log.

You can enable the autobackup feature by running this command:

CONFIGURE CONTROLFILE AUTOBACKUP ON;

You can disable the feature by running this command:

CONFIGURE CONTROLFILE AUTOBACKUP OFF;
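
In 9i and later you can verify the current setting with the show command:

RMAN> SHOW CONTROLFILE AUTOBACKUP;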

Backing Up Control Files with RMAN


You can back up the control file when the database is mounted or open. RMAN uses a
snapshot control file
to ensure a read-consistent version. If CONFIGURE CONTROLFILE AUTOBACKUP is ON (by
default it is OFF),
then RMAN automatically backs up the control file and server parameter file after
every backup
and after database structural changes. The control file autobackup contains
metadata about the previous backup,
which is crucial for disaster recovery.

If the autobackup feature is not set, then you must manually back up the control
file in one of the following ways:

- Run BACKUP CURRENT CONTROLFILE
- Include a backup of the control file within any backup by using the
  INCLUDE CURRENT CONTROLFILE option of the BACKUP command
- Back up datafile 1, because RMAN automatically includes the control file
  and SPFILE in backups of datafile 1

Note:

If the control file block size is not the same as the block size for datafile 1,
then the control file
cannot be written into the same backup set as the datafile. RMAN writes the
control file into a backup set
by itself if the block size is different.
A manual backup of the control file is not the same as a control file autobackup.
In manual backups,
only RMAN repository data for backups within the current RMAN session is in the
control file backup,
and a manually backed-up control file cannot be automatically restored.

24.11 Create scripts:
---------------------

If you are connected to the target and the catalog,
you can create and store scripts in the catalog.

Example:

RMAN> create script complet_bac1 {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> backup database;
5> sql 'ALTER SYSTEM ARCHIVE LOG ALL';
6> backup archivelog all;
7> }

RMAN-03022: compiling command: create script
RMAN-03023: executing command: create script
RMAN-08085: created script complet_bac1

To run such a script:

$ rman target sys/passw@airm catalog rman/rman@rcdb

RMAN>run { execute script complet_bac1; }

You can also replace a script:

RMAN>replace script b_whole_l0 {
# back up whole database and archived logs
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
backup
incremental level 0
tag b_whole_l0
filesperset 6
format '/dev/backup/prod1/df/df_t%t_s%s_p%p' -- name of the backup piece
(database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
filesperset 20
format '/dev/backup/prod1/al/al_t%t_s%s_p%p'
(archivelog all
delete input);
}

RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'controlfile_%F';
RMAN> BACKUP AS COPY DATABASE;
RMAN> RUN {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/tmp/%F.bck';
BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE;
}

24.12 Parallelization:
--------------------

RMAN executes commands serially; that is, it completes the current command
before starting the next one. Parallelism is exploited only within the context
of a single command. Consequently, if you want 5 datafile copies,
issue a single copy command specifying all 5 copies rather than 5 separate copy
commands.

In the following example, you allocate 5 channels and then issue 5 separate
copy commands. So, all copy commands are performed one after the other.

run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel c5 type disk;
copy datafile 22 to '/dev/prod/backup1/prod_tab5_1.dbf';
copy datafile 23 to '/dev/prod/backup1/prod_tab5_2.dbf';
copy datafile 24 to '/dev/prod/backup1/prod_tab5_3.dbf';
copy datafile 25 to '/dev/prod/backup1/prod_tab5_4.dbf';
copy datafile 26 to '/dev/prod/backup1/prod_tab6_1.dbf';
}

To get the copy commands to run in parallel, use the following command:

run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel c5 type disk;
copy datafile 22 to '/dev/prod/backup1/prod_tab5_1.dbf',
datafile 23 to '/dev/prod/backup1/prod_tab5_2.dbf',
datafile 24 to '/dev/prod/backup1/prod_tab5_3.dbf',
datafile 25 to '/dev/prod/backup1/prod_tab5_4.dbf',
datafile 26 to '/dev/prod/backup1/prod_tab6_1.dbf';
}

24.13 Creating backups:
-----------------------

1. Image copy and Backup set:
-----------------------------

- you can make 'image copies', which are actual complete copies of database
  files, controlfiles, or archived redologs, to disk.
  These are not stored in the special RMAN format, and can be used outside
  of rman if necessary.

- you can make, for example, backups of database files in a 'backup set',
  which is in the special rman format.
  You must use rman to process them.

Examples:

- image copy, using the copy command:

RMAN>run { allocate channel c1 type disk;
copy
datafile 1 to '/staging/system01.dbf',
datafile 2 to '/staging/data01.dbf',
datafile 3 to '/staging/users01.dbf',
current controlfile to '/staging/control1.ctl'; }

RMAN> run {
2> allocate channel c1 type disk;
3> copy datafile 1 to 'df1.bak';
4> }

- backup set, using the backup command:

RMAN> run
{ allocate channel c1 type disk;
backup tablespace users
including current controlfile; }

RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace system;
4> }

RMAN>

This example backs up the tablespace to its default backup location, which is
port-specific:
on UNIX systems the location is $ORACLE_HOME/dbs. Because you do not specify the
format parameter,
RMAN automatically assigns the backup a unique filename.

2. Archive mode and No archive mode:
------------------------------------

If the database is in ARCHIVELOG mode, then the target database can be open or
closed;
you do not need to close the database cleanly (although Oracle recommends
you do, so that the backup is consistent).

If the database is in NOARCHIVELOG mode, then you must close it cleanly
prior to taking a backup.

The following example shows that a tablespace backup does not work if the
database is open and in NOARCHIVELOG mode.

RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace users;
4> }

RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: c1
RMAN-08500: channel c1: sid=17 devtype=DISK

RMAN-03022: compiling command: backup
RMAN-03023: executing command: backup
RMAN-08008: channel c1: starting full datafile backupset
RMAN-08502: set_count=2 set_stamp=482962114 creation_time=10-JAN-03
RMAN-08010: channel c1: specifying datafile(s) in backupset
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03007: retryable error occurred during execution of command: backup
RMAN-07004: unhandled exception during command execution on channel c1
RMAN-10035: exception raised in RPC: ORA-19602: cannot backup or copy active file
in NOARCHIVELOG mode
RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE.BACKUPDATAFILE

3. Names and sizes:
-------------------

Filenames for Backup Pieces:

You can either let RMAN determine a unique name for the backup piece or use
the format parameter to specify a name. If you do not specify a filename,
RMAN uses the %U substitution variable to guarantee a unique name.
The backup command provides substitution variables that allow you to generate
unique filenames.

Number and Size of Backup Set:

Use the backupSpec clause to list what you want to back up as well as specify
other useful options. The number and size of backup sets depends on:

- The number of backupSpec clauses that you specify.
- The number of input files specified or implied in each backupSpec clause.
- The number of channels that you allocate.
- The filesperset parameter, which limits the number of files for a backup set.
- The setsize parameter, which limits the overall size in bytes of a backup set.

The most important rules in the algorithm for backup set creation are:

Each allocated channel that performs work in the backup job--that is,
that is not idle--generates at least one backup set.
By default, this backup set contains one backup piece.

RMAN always tries to divide the backup load so that all allocated channels have
roughly
the same amount of work to do.

The maximum upper limit for the number of files per backup set is determined by
the
filesperset parameter of the backup command.
The maximum upper limit for the size in bytes of a backup set is determined
by the setsize parameter of the backup command.

The filesperset parameter limits the number of files that can go in a backup set.
The default value of this parameter is calculated by RMAN as follows:
RMAN compares the value 64 to the rounded-up ratio of number of files / number
of channels, and sets filesperset to the lower value. For example, if you back
up 70 files with one channel, RMAN divides 70/1, compares this value to 64,
and sets filesperset to 64 because it is the lower value.

The number of backup sets produced by RMAN is the rounded-up ratio of number of
datafiles / filesperset. For example, if you back up 70 datafiles and filesperset
is 64,
then RMAN produces 2 backup sets.

setsize: Sets the maximum size in bytes of the backup set without
specifying a limit to the number of files in the set.

filesperset: Sets a limit to the number of files in the backup set without
specifying a maximum size in bytes of the set.
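
As a sketch (the channel name and format string are arbitrary), the following
forces backup sets of at most 4 files each:

run {
allocate channel d1 type disk;
backup
filesperset 4
format '/backup/df_%d_t%t_s%s_p%p'
(database);
}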

4. Examples:
------------

- Backup and Recovery Database:
-------------------------------

Other Examples:
---------------

$ rman target / catalog rman/rman@rcat

To write the output to a log file, specify the file at startup. For example,
enter:

$ rman target / catalog rman/rman@rcat log /oracle/log/mlog.f

Allocate one or more channels of type disk or type 'sbt_tape'.


This example backs up all the datafiles as well as the control file.
It does not specify a format parameter, so RMAN gives each backup piece
a unique name automatically and stores it in the port-specific
default location ($ORACLE_HOME/dbs on UNIX).

Whole database backups automatically include the current control file,
but the current control file does not contain a record of the whole
database backup.
To obtain a control file backup with a record of the whole database backup,
make a backup of the control file after executing the whole database backup.
Include a backup of the control file within any backup by specifying
the include current controlfile option.

Optionally, use the set duplex command to create multiple identical backupsets.

run {
allocate channel ch1 type disk;
backup database;
  sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';  # archives current redo log as well
                                           # as all unarchived logs
}

Optionally, use the format parameter to specify a filename for the backup piece.
For example, enter:

run {
allocate channel ch1 type disk;
backup database
format '/oracle/backup/%U'; # %U generates a unique filename
}

Optionally, use the tag parameter to specify a tag for the backup. For example,
enter:

run {
allocate channel ch1 type 'sbt_tape';
backup database
tag = 'weekly_backup'; # gives the backup a tag identifier
}

This script backs up the database and the archived redo logs:

RMAN> run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
backup database;
sql 'ALTER SYSTEM ARCHIVE LOG ALL';
backup archivelog all;
}

RMAN> run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
backup format 'i:\backup\full_db.bck' (database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup archivelog all;
}

- Backup tablespace:
--------------------

run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
allocate channel ch3 type disk;
backup filesperset = 3
tablespace inventory, sales
include current controlfile;
}

- Backup datafiles:
-------------------

run {
allocate channel ch1 type disk;
backup
(datafile 1,2,3,4,5,6
filesperset 3)
datafilecopy '/oracle/copy/tbs_1_c.f';
}

RMAN> run {
allocate channel c1 type disk;
copy datafile 6 to 'F:\oracle\backups\oem01.cpy';
release channel c1;
}

RMAN> run {
allocate channel c1 type disk;
backup format 'F:\oracle\backups\oem01.rbu' ( datafile 6 );
release channel c1;
}

RMAN> run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
allocate channel ch3 type disk;
backup
(datafile 1,2,3 filesperset = 1 channel ch1)
(datafilecopy '/oracle/copy/cf.f' filesperset = 2 channel ch2)
(archivelog from logseq 100 until logseq 102 thread 1 filesperset = 3
channel ch3);
}

- Backup archived redologs:
---------------------------

To back up archived logs, issue backup archivelog with the desired filtering
options:

run {
allocate channel ch1 type 'sbt_tape';
backup archivelog all # Backs up all archived redo logs.
delete input; # Optionally, delete the input logs
}

You can also specify a range of archived redo logs by time, SCN, or log sequence
number.
This example backs up all archived logs created more than 7 and less than 30 days
ago:

run {
allocate channel ch1 type disk;
backup archivelog
from time 'SYSDATE-30' until time 'SYSDATE-7';
}

- Incremental backups:
----------------------

This example makes a level 0 backup of the database:

run {
allocate channel ch1 type disk;
backup
incremental level = 0
database;
}

This example makes a level 1 backup of the database:

run {
allocate channel ch1 type disk;
backup
incremental level = 1
database;
}

Further examples:
------------------

Your database has to be in archive log mode for this script to work.

RMAN> run {
2> # backup the database to disk
3> allocate channel d1 type disk;
4> backup
5> full
6> tag full_db
7> format '/backups/db_%t_%s_p%p'
8> (database);
9> release channel d1;
10> }

----

This script will backup all archive logs. Your database has to be
in archive log mode for this script to work.

RMAN> run {
2> allocate channel d1 type disk;
3> backup
4> format '/backups/log_t%t_s%s_p%p'
5> (archivelog all);
6> release channel d1;
7> }
----

This script will backup all the datafiles.

resync catalog;
run {
allocate channel c1 type disk;
copy datafile 1 to 'C:\rman1.dbf';
copy datafile 2 to 'C:\rman2.dbf';
copy datafile 3 to 'C:\rman3.dbf';
copy datafile 4 to 'C:\rman4.dbf';
copy datafile 5 to 'C:\rman5.dbf';
}

exit
echo exiting after successful hot backup using RMAN

-----

run {
sql 'alter database close';
allocate channel d1 type disk;
backup full
tag full_offline_backup
format 'c:\backup\db_t%t_s%s_p%p'
(database);
release channel d1;
sql 'alter database open';
}

5. Complete Examples:
---------------------

***************************************************************

L=0 BACKUP

run {
allocate channel d1 type disk;
backup
incremental level = 0
tag db_whole_l0
format 'i:\backup\l0_%d_t%t_s%s_p%p' (database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
format 'i:\backup\log_%d_t%t_s%s_p%p' (archivelog all);
}

or

run {
allocate channel d1 type disk;
allocate channel d2 type disk;
backup
incremental level = 0
tag db_whole_l0
format 'i:\backup\l0_%d_t%t_s%s_p%p' (database channel d1);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
format 'i:\backup\log_%d_t%t_s%s_p%p' (archivelog all channel d2);
}

L=1 BACKUP

run {
allocate channel d1 type disk;
backup
incremental level = 1
tag db_whole_l1
format 'i:\backup\l1_%d_t%t_s%s_p%p' (database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
format 'i:\backup\log_%d_t%t_s%s_p%p' (archivelog all);
}

*****************************************************************

RMAN>create script db_whole_l0 {
# back up whole database and archived logs
allocate channel d1 type disk;
backup
incremental level 0
tag db_whole_l0
filesperset 15
format 'i:\backup\l0_%d_t%t_s%s_p%p' -- name of the backup piece
(database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
filesperset 20
format 'i:\backup\log_%d_t%t_s%s_p%p'
(archivelog all
delete input);
}

RMAN>create script db_whole_l1 {
# back up whole database and archived logs
allocate channel d1 type disk;
backup
incremental level 1
tag db_whole_l1
filesperset 15
format 'i:\backup\l1_%d_t%t_s%s_p%p' -- name of the backup piece
(database);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup
filesperset 20
format 'i:\backup\log_%d_t%t_s%s_p%p'
(archivelog all
delete input);
}

On sunday : schedule RMAN>run { execute script db_whole_l0; }
Other days: schedule RMAN>run { execute script db_whole_l1; }
**********************************************

replace script backup_all_archives {
execute script alloc_all_disks;
backup
filesperset 50
format '/bkup/SID/%d_al_t%t_s%s_p%p'
(archivelog all delete input);
execute script rel_all_disks;
}

# Incremental level 0 (whole) database backup
# The control file is automatically included each time file 1 of the
# system tablespace is backed up.
# replace script backup_db_level_0_disk {
# execute script alloc_all_disks;
# set maxcorrupt for datafile 1 to 0;
run {
allocate channel c2 type disk;
backup
incremental level = 0
tag backup_db_level_0
# The skip inaccessible clause ensures the backup will continue
# if any of the datafiles are inaccessible.
skip inaccessible
filesperset 9
format 'i:\backup\L0_%d.bck'
(database);
sql 'alter system archive log current';
execute script backup_all_archives;
}

*************************************************************

-- SUNDAY LEVEL 0 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 0 cumulative
skip inaccessible
tag sunday_level_0
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\sunday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- MONDAY LEVEL 2 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 2 cumulative
skip inaccessible
tag monday_level_2
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\monday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- TUESDAY LEVEL 2 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 2 cumulative
skip inaccessible
tag tueday_level_2
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\tuesday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- WEDNESDAY LEVEL 2 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 2 cumulative
skip inaccessible
tag wednesday_level_2
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\wednesday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- THURSDAY LEVEL 1 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 1 cumulative
skip inaccessible
tag thursday_level_1
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\thursday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- FRIDAY LEVEL 2 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 2 cumulative
skip inaccessible
tag friday_level_2
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\friday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}
-- SATURDAY LEVEL 2 BACKUP
run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0;
backup
incremental level 2 cumulative
skip inaccessible
tag saturday_level_2
format 'c:\temp\df_t%t_s%s_p%p'
database;
copy current controlfile to 'c:\temp\saturday.ctl';
sql 'alter system archive log current';
backup
format 'c:\temp\al_t%t_s%s_p%p'
archivelog all
delete input;
release channel d1;
}

6. Third Party:
---------------
You can use rman in combination with third party storage managers.
In this case, rman is used with a MML library and possibly some API
that uses its own configuration files, for example:

backup.scr script:

run
{
allocate channel t1 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=c:\RMAN\scripts\tdpo.opt)';
allocate channel t2 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=c:\RMAN\scripts\tdpo.opt)';

backup
filesperset 5
format 'df_%t_%s_%p'
(database);

release channel t1;
release channel t2;
}

run {
allocate channel d1 type 'sbt_tape' connect 'internal/manager@scdb2' parms
'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
allocate channel d2 type 'sbt_tape' connect 'internal/manager@scdb1' parms
'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
backup
format 'ctl_t%t_s%s_p%p'
tag cf
(current controlfile);
backup
full
filesperset 8
format 'db_t%t_s%s_p%p'
tag fulldb
(database);
release channel d1;
release channel d2;
}

The PARMS parameter sends instructions to the media manager. For example, the
following
vendor-specific PARMS setting instructs the media manager to back up to
a volume pool called oracle_tapes:

PARMS='ENV=(NSR_DATA_VOLUME_POOL=oracle_tapes)'
parms='ENV=(DSMO_FS=oracle)'

Another example:

RUN
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt
PARMS='ENV=(NSR_SERVER=tape_srv,NSR_GROUP=oracle_tapes)';
}
If you do not receive an error message, then Oracle successfully loaded the
shared library. However, channel allocation can fail with the ORA-27211 error.

To delete an old backup:

run
{
allocate channel for delete type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=c:\RMAN\scripts\tdpo.opt)';

change backupset primary_key delete;
}

To schedule scripts:
--------------------

orcschedppim.cmd

rem ==================================================
rem orcsched.cmd
rem ==================================================

rem ==================================================
rem set rman executable
rem ==================================================
set ora_exe=d:\oracle\ora81\bin\rman

rem ==================================================
rem set script and log directory
rem ==================================================
rem set ora_script_dir=d:\oracle\scripts\
set ora_script_dir=c:\progra~1\tivoli\tsm\agentoba\
rem ==================================================
rem run the backup script
rem ==================================================

%ora_exe% target system/manager@ppim rcvcat rman_db1/rman_db1@orcl cmdfile %ora_script_dir%bkdbppim.scr msglog %ora_script_dir%bkdbppim.log

bkdbppim.scr

run
{
allocate channel t1 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=C:\Progra~1\Tivoli\TSM\AgentOBA\tdpoppim.opt)';
allocate channel t2 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=C:\Progra~1\Tivoli\TSM\AgentOBA\tdpoppim.opt)';
backup
filesperset 5
format 'df_%t_%s_%p'
(database);

release channel t1;
release channel t2;
}

------------------------------------

Remarks:
--------

The following is what needs to be changed.

- Old Way

allocate channel for maintenance type 'sbt_tape' parms
'ENV=(DSMO_NODE=tora,
DSMI_ORC_CONFIG=/opt/tivoli/tsm/client/oracle/bin/dsm.opt)'

allocate channel t1 type 'sbt_tape' parms
'ENV=(DSMO_NODE=rx_r50,
DSMI_CONFIG=/usr/tivoli/tsm/client/ba/bin/dsm.opt,
DSMO_PSWDPATH=/usr/tivoli/tsm/client/oracle/bin,
DSMI_DIR=/usr/tivoli/tsm/client/ba/bin,
DSMO_AVG_SIZE#00)';

- New Way
allocate channel for maintenance type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)'

Contents of tdpo.opt

DSMI_ORC_CONFIG /opt/tivoli/tsm/client/oracle/bin/dsm.opt
DSMI_LOG /opt/tivoli/tsm/client/oracle/bin/tdpoerror.log

TDPO_FS rman_fs
TDPO_NODE tora
*TDPO_OWNER
TDPO_PSWDPATH /opt/tivoli/tsm/client/oracle/bin

*TDPO_DATE_FMT 1
*TDPO_NUM_FMT 1
*TDPO_TIME_FMT 1

*TDPO_MGMT_CLASS2 mgmtclass2
*TDPO_MGMT_CLASS3 mgmtclass3
*TDPO_MGMT_CLASS4 mgmtclass4

It is recommended that TDP_NUM_BUFFERS be set to a value of 1 only.

7. Recovery:
------------

A restore can be as easy as:

RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

Or a single tablespace:

Restore the tablespace or datafile with the RESTORE command, and recover it with
the RECOVER command.
(Use configured channels, or if desired, use a RUN block and allocate channels to
improve performance
of the RESTORE and RECOVER commands.)

RMAN> RESTORE TABLESPACE users;

RMAN> RECOVER TABLESPACE users;

If RMAN reported no errors during the recovery, then bring the tablespace back
online:

RMAN> SQL 'ALTER TABLESPACE users ONLINE';

Use the RMAN restore command to restore datafiles, control files, or archived redo
logs
from backup sets or image copies.
RMAN restores backups from disk or tape, but image copies only from disk.

Restore files to either:

- The default location, which overwrites the files with the same name.
- A new location specified by the set newname command.

Restoring the Database to its Default Location
----------------------------------------------

If you do not specify set newname commands for the datafiles during a restore
job, the database must be closed or the datafiles must be offline.

RMAN> run {
allocate channel c1 type 'sbt_tape';
restore database;
recover database;
}

run {
set until logseq 5 thread 1;
allocate auxiliary channel dupdb1 type disk;
duplicate target database to dupdb;
}

Examples: Restoring the Database to a point in time (same incarnation)
-----------------------------------------------------------------------

Example 1:
----------

RMAN> run
2 {
3 set until time '23-DEC-2006 13:45:00';
4 restore database;
5 recover database;
6 }

Example 2:
----------

To recover the database until a specified time, SCN, or log sequence number:

After connecting to the target database and, optionally, the recovery catalog
database,
ensure that the database is mounted. If the database is open, shut it down and
then mount it:

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

Determine the time, SCN, or log sequence that should end recovery. For example, if
you discover
that a user accidentally dropped a tablespace at 9:02 a.m., then you can recover
to 9 a.m.
--just before the drop occurred. You will lose all changes to the database made
after that time.

You can also examine the alert.log to find the SCN of an event and recover to a
prior SCN.
Alternatively, you can determine the log sequence number that contains the
recovery termination SCN,
and then recover through that log. For example, query V$LOG_HISTORY to view the
logs that you have archived.

     RECID      STAMP    THREAD#  SEQUENCE# FIRST_CHAN FIRST_TIM NEXT_CHANG
---------- ---------- ---------- ---------- ---------- --------- ----------
1 344890611 1 1 20037 24-SEP-02 20043
2 344890615 1 2 20043 24-SEP-02 20045
3 344890618 1 3 20045 24-SEP-02 20046

Perform the following operations within a RUN command:

- Set the end recovery time, SCN, or log sequence. If specifying a time, then
  use the date format specified in the NLS_LANG and NLS_DATE_FORMAT
  environment variables.
- If automatic channels are not configured, then manually allocate one or
  more channels.
- Restore and recover the database.
The following example performs an incomplete recovery until November 15 at 9 a.m.

RUN
{
SET UNTIL TIME 'Nov 15 2002 09:00:00';
# SET UNTIL SCN 1000; # alternatively, specify SCN
# SET UNTIL SEQUENCE 9923; # alternatively, specify log sequence number
RESTORE DATABASE;
RECOVER DATABASE;
}

If recovery was successful, then open the database and reset the online logs:
ALTER DATABASE OPEN RESETLOGS;

Moving the Target Database to a New Host with the Same File System
------------------------------------------------------------------

A media failure may force you to move a database by restoring a backup from
one host to another. You can perform this procedure so long as you have
a valid backup and a recovery catalog or control file.

Because your restored database will not have the online redo logs of your
production database,
you will need to perform incomplete recovery up to the lowest SCN of the most
recently
archived redo log in each thread and then open the database with the RESETLOGS
option.

To restore the database from HOST_A to HOST_B with a recovery catalog:

Copy the initialization parameter file for HOST_A to HOST_B using an operating
system utility.
Connect to the HOST_B target instance and HOST_A recovery catalog. For example,
enter:

% rman target sys/change_on_install@host_b catalog rman/rman@rcat

Start the instance without mounting it:

startup nomount

Restore and mount the control file. Execute a run command with the following
sub-commands:

- Allocate at least one channel.
- Restore the control file.
- Mount the control file.

run {
allocate channel ch1 type disk;
restore controlfile;
alter database mount;
}

Because there may be multiple threads of redo, use change-based recovery.
Obtain the SCN for recovery termination by finding the lowest SCN among the
most recent archived redo logs for each thread.

Start SQL*Plus and use the following query to determine the necessary SCN:

SELECT min(scn)
FROM (SELECT max(next_change#) scn
FROM v$archived_log
GROUP BY thread#);

Execute a run command with the following sub-commands:

- Set the SCN for recovery termination using the value obtained from the
  previous step.
- Allocate at least one channel.
- Restore the database.
- Recover the database.
- Open the database with the RESETLOGS option.

run {
set until scn = 500; # use appropriate SCN for incomplete recovery
allocate channel ch1 type 'sbt_tape';
restore database;
recover database;
alter database open resetlogs;
}

Moving the Target Database to a New Host with a different File System
---------------------------------------------------------------------

Follow the procedure as above, but now use the 'set newname' command.

run {
set until scn 500; # use appropriate SCN for incomplete recovery
allocate channel ch1 type disk;
set newname for datafile 1 to '/disk1/%U'; # rename each datafile manually
set newname for datafile 2 to '/disk1/%U';
set newname for datafile 3 to '/disk1/%U';
set newname for datafile 4 to '/disk1/%U';
set newname for datafile 5 to '/disk1/%U';
set newname for datafile 6 to '/disk2/%U';
set newname for datafile 7 to '/disk2/%U';
set newname for datafile 8 to '/disk2/%U';
set newname for datafile 9 to '/disk2/%U';
set newname for datafile 10 to '/disk2/%U';
alter database mount;
restore database;
switch datafile all; # points the control file to the renamed datafiles
recover database;
alter database open resetlogs;
}

Warning:

Restore with a catalog:
If you issue switch commands, RMAN considers the restored database as the
target database, and the recovery catalog becomes corrupted. If you do not
issue switch commands, RMAN considers the restored datafiles as image copies
that are candidates for future restore operations.

Restore with no catalog:
If you issue switch commands, RMAN considers the restored database as the
target database. If you do not issue switch commands, the restore operation
has no effect on the repository.

Restoring a tablespace:
-----------------------

Suppose tablespace DATA_BIG has become unusable.

run {
allocate channel ch1 type disk;
restore tablespace data_big;
}

run {
allocate channel ch1 type disk;
recover tablespace data_big;
}

This script will perform datafile recovery:

RMAN> run {
2> allocate channel d1 type disk;
3> sql "alter tablespace users offline immediate";
4> restore datafile 5;
5> recover datafile 5;
6> sql "alter tablespace users online";
7> release channel d1;
8> }

RMAN> run {
allocate channel ch1 type disk;
restore database;
recover database;
alter database open resetlogs;
}

Duplicating the Target Database to a New Host:
----------------------------------------------

- create instance on second host
- create init.ora, password file etc..
- create similar directories on second host
- make sure net8 works from target and rman to second host
- startup nomount
- necessary archived redologs are present on second host

$ rman target sys/target_pwd@target_str catalog rman/cat_pwd@cat_str auxiliary sys/aux_pwd@aux_str

run {
allocate auxiliary channel ch1 type 'sbt_tape';
duplicate target database to dupdb
nofilenamecheck;
}

run {
# allocate at least one auxiliary channel of type disk or tape
allocate auxiliary channel dupdb1 type 'sbt_tape';
. . .
# set new filenames for the datafiles
set newname for datafile 1 TO '$ORACLE_HOME/dbs/dupdb_data_01.f';
set newname for datafile 2 TO '$ORACLE_HOME/dbs/dupdb_data_02.f';
. . .
# issue the duplicate command
duplicate target database to dupdb
# create at least two online redo log groups
logfile
group 1 ('$ORACLE_HOME/dbs/dupdb_log_1_1.f',
'$ORACLE_HOME/dbs/dupdb_log_1_2.f') size 200K,
group 2 ('$ORACLE_HOME/dbs/dupdb_log_2_1.f',
'$ORACLE_HOME/dbs/dupdb_log_2_2.f') size 200K;
}

24.14 Common RMAN errors:
-------------------------

What are the common RMAN errors (with solutions)?
Some of the common RMAN errors are:

PROBLEM 1.
----------

RMAN-20242: Specification does not match any archivelog in the recovery catalog.

Add to RMAN script: sql 'alter system archive log current';

PROBLEM 2.
----------

RMAN-06089: archived log xyz not found or out of sync with catalog

Execute from RMAN: change archivelog all validate;

PROBLEM 3.
----------

fact: Oracle Server - Enterprise Edition 8
fact: Oracle Server - Enterprise Edition 9
fact: Recovery Manager (RMAN)
symptom: RMAN backup fails
symptom: RMAN-10035: exception raised in RPC
symptom: ORA-19505: failed to identify file <file>
symptom: ORA-27037: unable to obtain file status
symptom: SVR4 error:2:no such file or directory
cause: Datafile existed in previous backup set, but has been subsequently
removed or renamed.

fix:

Resync the RMAN Catalog:
$ rman target sys/<passwd>@target catalog rman/<passwd>@catalog
RMAN> resync catalog;
Or
Validate the backup pieces.
$ rman target sys/<passwd>@target catalog rman/<passwd>@catalog
RMAN> allocate channel for maintenance type disk;
RMAN> crosscheck backup;
RMAN> resync catalog;

PROBLEM 4.
----------

RMAN> connect target sys/change_on_install@TARGETDB

RMAN-00569: ================error message stack follows
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied

Problem Explanation:

Recovery Manager automatically requests a connection to the target database
as SYSDBA.

Solution Description:

Recovery Manager automatically requests a connection to the target database
as SYSDBA. In order to connect to the target database as SYSDBA, you must
either:

1. Be part of the operating system DBA group with respect to the target
database. This means that you have the ability to CONNECT INTERNAL
to the target database without a password.

- or -

2. Have a password file setup. This requires the use of the "orapwd" command
and the initialization parameter "remote_login_passwordfile". See Chapter 1
of the Oracle8(TM) Server Administrator's Guide, Release 8.0 for details.
Note that changes to the password file will not take effect until after
the database is shutdown and restarted.

For Unix, also ensure TWO_TASK is _not_ set.
e.g. % env | grep -i two
If set, unset it.
% unsetenv TWO_TASK

PROBLEM 5.
---------

RMAN cannot connect to the target database through a multi-threaded server (MTS)
dispatcher:
it requires a dedicated server process

Create a net service name in the tnsnames.ora file that connects to the non-shared
SID.
For example, enter:

inst1_ded =
(description=
(address=(protocol=tcp)(host=inst1_host)(port=1521))
(connect_data=(service_name=inst1)(server=dedicated))
)

$ rman target sys/oracle@inst1_ded catalog rman/rman@rcat

PROBLEM 6.
---------

No MML library found.

RMAN will:

1. Attempts to load the library indicated by the SBT_LIBRARY parameter in the
ALLOCATE CHANNEL or CONFIGURE CHANNEL command. If the SBT_LIBRARY parameter
is not specified, then Oracle proceeds to the next step.

2. Attempts to load the default media management library. The filename of the
default library
is operating system specific. On UNIX, the library filename is
$ORACLE_HOME/lib/libobk.so,
with the extension name varying according to platform: .so, .sl, .a, and so forth.

On Windows NT the library is named %ORACLE_HOME%\bin\orasbt.dll.

If Oracle is unable to locate the MML library, then RMAN issues an ORA-27211
error and exits.
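
As a sketch (the library path is hypothetical), you can point a channel at a
specific media management library via the PARMS string:

run {
allocate channel t1 type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/mml/lib/libobk.so';
backup current controlfile;
release channel t1;
}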

Whenever channel allocation fails, Oracle writes a trace file to the
USER_DUMP_DEST directory. The following shows sample output:

SKGFQ OSD: Error in function sbtinit on line 2278
SKGFQ OSD: Look for SBT Trace messages in file /oracle/rdbms/log/sbtio.log
SBT Initialize failed for /oracle/lib/libobk.so

24.15 RMAN 10g Notes:
---------------------

==========================
25. UPGRADE AND MIGRATION:
==========================
25.1 Version and release numbers:
---------------------------------

Oracle 7     -> 8,8i,9i
Oracle 8     -> 8i
Oracle 8.1.x -> 8.1.y
Oracle 8,8i  -> 9i

Upgrade: move upward from one release in the same version to a higher release
within the same base version, for example 8.1.6 -> 8.1.7
Migration: move to a different version, for example 7.4.3 -> 8.1.5
Patches : bugfixes
Patchset : smaller patches combined to latest patchset

Example version:

8.1.6.2 ->
8=version,1=release number,6=maintenance release number,2=patch number
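
To see the full version of a running database, you can for example query
v$version:

SQL> select banner from v$version;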

Exp Imp matrix:
---------------

1. Migration to Oracle9i release 1 - 9.0.1.x :
----------------------------------------------
Direct migration with a full database export and full database import
is only supported if the source database is:
- Oracle7 : 7.3.4
- Oracle8 : 8.0.6
- Oracle8i: 8.1.5 or 8.1.6 or 8.1.7

2. Migration to Oracle9i release 2 - 9.2.0.x :
----------------------------------------------
Direct migration with a full database export and full database import
is only supported if the source database is:
- Oracle7 : 7.3.4
- Oracle8 : 8.0.6
- Oracle8i: 8.1.7
- Oracle9i: 9.0.1

Tools that can be used to migrate from one version to another:
--------------------------------------------------------------

- exp/imp
- MIG Migration Utility
- ODMA Oracle Data Migration Assistant

There also exists the "Migration Workbench" for migrating
Access, SQL Server etc.. to Oracle.

25.2 Migration From 7 to 8,8i:
------------------------------

Take into account the following:

- Changed standard directories of init, alert, dump
- Changed and obsolete init.ora parameters
- Changed and obsolete sqlnet.ora, tnsnames.ora and listener.ora parameters
- Rowid values have changed from "restricted" to "extended" format

Obsolete init.ora parameters:

init_sql_files
lm_domains
lm_non_fault_tolerant
parallel_default_max_scans
parallel_default_scansize
sequence_cache_hash_buckets
serializable
session_cached_cursors
v733_plans_enabled

Changed init.ora parameters:

compatible
snapshot_refresh_interval -> job_queue_interval
snapshot_refresh_process -> job_queue_processes
db_writers -> dbwr_io_slaves
user_dump_dest, background_dump_dest, ifile

Three main tools:

- exp/imp

OWNER= or FULL exp/imp
In case of a full exp/imp you must run catalog.sql of the new database

- Migration utility

This is a command line utility.
From 7 to 8 or higher: the Rowid will not be changed automatically.
The Migration utility will create a "conversion file" instance_name.dbf.
Move this file to the /dbs directory of Oracle 8,9.
Start svrmgrl or sqlplus:
alter database convert;
alter database open resetlogs;

- ODMA

This tool uses a GUI.

25.3 Example Upgrade of 8.1.6 to 9 using ODMA:
----------------------------------------------

1. Install the Oracle 9i software in its own ORACLE_HOME.
2. Prepare the original init.ora:
   DB_DOMAIN=correct domain
   JOB_QUEUE_PROCESSES=0
   AQ_TM_PROCESSES=0
   REMOTE_LOGIN_PASSWORDFILE=NONE
3. Resize the SYSTEM tablespace to have more than 100M free
4. Prepare the system rollbacksegment to be big enough
alter rollback segment system storage(maxextents 505 optimal null next 1M);
5. Verify that SYSTEM is the default tablespace for SYS and SYSTEM
6. Make sure there is no user MIGRATE. ODMA will use a user called MIGRATE.
7. Shutdown the database cleanly.
8. Make a backup

9. Set up the environment variables for the 9i software.
   Also, ODMA uses the Java GUI, just like the OUI.
10. Start ODMA

$ cd $ORACLE_HOME/bin
$ odma

11. Basically, follow the instructions.

ODMA will ask you for the instance that must be upgraded.
On Unix, this is read from the oratab file.

Then it will ask you to confirm both the old and new ORACLE_HOME.
It will also ask for the location of the init.ora file.

Then it will proceed with the upgrade.
The upgrade is primarily about the data dictionary.

12. When ODMA is ready, do the following (see the sketch below):
    Check the alert log and other logs.
    Also check oratab, and optionally run utlrp.sql to automatically
    rebuild any invalid objects.
    Check for invalid objects and check indexes.
    Analyze all tables plus indexes.
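
A minimal sketch of these checks in SQL*Plus (the SCOTT schema is only an illustration):

SELECT owner, object_type, object_name
FROM   dba_objects
WHERE  status = 'INVALID';

@?/rdbms/admin/utlrp.sql

EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');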

25.4 Example Upgrade of 8.1.6 to 8.1.7:
---------------------------------------

1. Install the new Oracle software in a different $ORACLE_HOME.
   For example:
$ cd $ORACLE_BASE
$ cd product
$ ls
8.1.6 8.1.7

Backup and shutdown the 8.1.6 database, and stop the listener

2. Set the correct env variables for 8.1.7


3. Create a softlink in the new $ORACLE_HOME/dbs to the init.ora
   in the $ORACLE_BASE/admin/sid/pfile directory, as in the sketch below.

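A minimal sketch, assuming OFA naming and a SID of ORCL (adjust paths and SID):

$ cd $ORACLE_HOME/dbs
$ ln -s $ORACLE_BASE/admin/ORCL/pfile/initORCL.ora initORCL.ora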

4. Startup the database using the new Oracle software


   sqlplus internal (or via svrmgrl)
   startup restrict;
5. Run the upgrade script $ORACLE_HOME/rdbms/admin/u0801060.sql.
   This will also rebuild the data dictionary (catalog, catproc).
6. Optionally run utlrp.sql to automatically rebuild any invalid objects.
7. Change on unix oratab for new $ORACLE_HOME
8. Change listener.ora for $ORACLE_HOME value
9. Set COMPATIBLE in init.ora
10. Checks:
check the alert log and other logs
Also check oratab, optionally run utlrp.sql to automatically
rebuild any invalid objects.
Check for invalid objects and check indexes.
Analyze all tables plus indexes.

=====================
26. Some info on Rdb:
=====================

Rdb is most often seen on Digital Unix, OpenVMS VAX, or OpenVMS Alpha,
but a port to NT / 2000 exists as well.

Samples directory:
------------------

- Digital Unix: /usr/lib/dbs/vnn/examples
- OpenVMS:      SQL$EXAMPLE

In Digital Unix, to create a sample database:
$/usr/lib/dbs/sql/vnn/examples/personnel <database-form> <dir>
<database-form>: S, M, MSDB
<dir>: enter a directory where you want the database to be created.
$/usr/lib/dbs/sql/vnn/examples/personnel m /tmp/

Invoking SQL:
--------------

- In OpenVMS, create a symbol:

$ SQL:==$SQL$

$ SQL
SQL>

- In Digital Unix:
$ SQL
SQL>

Attach to database:
-------------------

SQL>ATTACH 'FILENAME mf_personnel';

SQL>ATTACH 'FILENAME DISK$1:[GERALDO.DB]SUPPLIES MULTISCHEMA IS OFF'

Detach from database:
---------------------

SQL>exit
$

or
SQL>DISCONNECT DEFAULT;
SQL>

Editing a SQL Statement:
------------------------

SQL>EDIT

...

EXIT

OpenVMS: Defining a Logical name for a database:
------------------------------------------------

$ DEFINE SQL$DATABASE DISK01:[FIELDMAN.DBS]mf_personnel

You do not need to attach to the database anymore.

Digital Unix: Defining a configuration parameter:
-------------------------------------------------

$ SQL_DATABASE /usr/fieldman/dbs/mf_personnel

SHOW Statements:
----------------

SQL> SHOW TABLES               -- shows all tables
SQL> SHOW TABLE *
SQL> SHOW ALL TABLES
SQL> SHOW TABLE WORK_STATUS -- displays info about table WORK_STATUS
SQL> SHOW VIEWS -- shows all views
SQL> SHOW VIEW CURRENT_SALARY -- shows info about this view only
SQL> SHOW DOMAINS -- display all domains
SQL> SHOW DOMAIN DATE_DOM
SQL> SHOW INDEXES
SQL> SHOW INDEXES ON SALARY_HISTORY
SQL> SHOW INDEX DEG_EMP_ID
SQL> SHOW DATABASE -- returns the database name
SQL> SHOW STORAGE AREAS

Single file or multifile database:
----------------------------------

A database that stores tables in one file (file type .rdb) is a
single file database. Alternatively, you can have a database in which
system information is stored in a database root file (.rdb) and the data
and metadata are stored in one or more storage area files (type .rda).

Single file:
- a database root file which contains all user data and information
about the status of all database operations.
- a snapshot file (.snp file) which contains copies of rows (before images) that
  are being modified by users updating the database.

Multifile:
- a database root file which contains information about the status of all database
operations.
- a storage area file, .rda file, for the system tables (RDB$SYSTEM)
- one or more .rda files for user data.
- snapshot files for each .rda file and for the database root file.

Create multifile database example:
----------------------------------

$ SQL
SQL> CREATE DATABASE FILENAME mf_personnel_test
cont> ALIAS MF_PERS
cont> RESERVE 6 JOURNALS
cont> RESERVE 15 STORAGE AREAS
cont> DEFAULT STORAGE AREA default_area
cont> SYSTEM INDEX COMPRESSION IS ENABLED
cont> CREATE STORAGE AREA default_area FILENAME default_area
cont> CREATE STORAGE AREA RDB$SYSTEM FILENAME pers_system_area;

Datatypes:
----------

Rdb                                  Oracle
---------------------------------------------------
CHAR                                 CHAR, NCHAR
VARCHAR                              VARCHAR2, NVARCHAR2
SMALLINT (16 bits)                   NUMBER(L,P)
INTEGER (32 bits)                    NUMBER(L,P)
  (can be used with a scale factor, e.g. INTEGER(2))
BIGINT (64 bits)                     NUMBER(L,P)
BYTE VARYING                         RAW, LONG, LONG RAW
DATE ANSI (year, month, day)
TIME
INTERVAL
TIMESTAMP (year, month, day,         DATE
           hours, min, sec)
DATE VMS                             DATE

ODBC for Rdb:
-------------

---------------
The current driver version is 3.00.02.05, which
doesn't work, and the older driver version (which does
work) is 2.10.17.00 (DriverConf1 outputs attached).

---------------
I am trying to run a DTS job to import data from an Oracle 7.3 Rdb (DEC) platform
into SQL Server 2000.
I have an ODBC connection set up and I am using it in MS Access 2000 to view the
table that I want to import.
When I create the job in SQL Server, I can preview the data and everything looks
fine, as in the Access table,
but when I try and run the job I get an:
[ORACLE][ODBC]Function Sequence Error
error message. Any experience with these types of errors and Rdb?
Thanks,
John Campbell

This can - I understand - occur where the version of the ODBC drivers on the NT
box with SQL Server running
is incompatible with the services running on the VMS box.
I can't remember the various numbers I'm afraid (or even where I found the stuff -
it was some time ago).

We're running VMS 7.2-1 and Oracle 7.3 and found that this produced a similar
error with the most recent version
of the Oracle ODBC Drivers for RdB - but we have no problems running the v2.10
drivers (v2.10.17 to be exact).

HTH
---------------

The ODBC driver for Rdb uses SQSAPI32.ini.

JInitiator:
-----------

Oracle has adapted this standard specifically for running Webforms.
These adaptations concern stability (bugfixes) and performance improvements,
such as JAR file caching, incremental JAR file loading, and applet caching.
With JInitiator, Oracle Forms can be run in a browser (Webforms).

JInitiator is not a JVM, but an extension to the JVM standard that allows Oracle Webforms
to run in a browser in a stable and supported way.
JInitiator is only available for the Windows platform. At the moment it is not possible
to run Webforms in the standard Microsoft JVM. JInitiator will not return in the
next release; Webforms will be certified on the standard Java Plugin.
The Microsoft JVM also conforms to this standard (without certification), so that in time
Webforms will be able to run in a standard Microsoft Internet Explorer browser.
However, this can only be stated with certainty after thorough testing.

Installing JInitiator:
JInitiator is automatically downloaded from the Application Server on first use.
Alternatively, JInitiator can also be installed manually on the client machines.

============================
27. Some info on IFS
============================

First some remarks about IFS in versions 9.0.2 and 9.0.3:

9.0.2
=====

In version 9.0.2, IFS (Internet File System) is a separate product.

9.0.3
=====

In version 9.0.3, CM SDK runs in conjunction with Oracle9i Application Server and
an Oracle9i database.
The Oracle Content Management SDK (Oracle CM SDK) is the new name for the product
formerly known as the
Oracle Internet File System (Oracle 9iFS). This new naming is official as of
version 9.0.3.
Oracle CM SDK runs in conjunction with Oracle9i Application Server and an Oracle9i
database.
Written entirely in Java, Oracle CM SDK is an extensible content management system
with file server convenience.

27.1 IFS 9.0.2
--------------

We will first turn our attention to iFS 9.0.2:
----------------------------------------------

The Oracle 9i database stores all content that comprises the filesystem,
from the files themselves to metadata like owners and group information.

On most occasions, 9iFS stores the files' contents as LOBs in the database.

Tools:
------

- Oracle 9iFS Configuration Assistant.
  Allows you to create a new 9iFS Domain, add nodes, etc.

- Oracle 9iFS Credential Manager Configuration Assistant.
  To change the default credential manager to be applied to each user.

- OEM for 9iAS website (9iAS Home Page)
  You can manage 9iFS from the 9iAS OEM website.

- OEM console (Oracle Enterprise Manager)
  You can manage 9iFS from the OEM console.

- Oracle 9iFS Manager
  Graphical Java-based interface on iFS.

- Webinterface iFS manager

- Command line utilities
  ifsshell etc.

- Import/Export utility
  The Import/Export utility exports Oracle 9iFS objects (content and users)
  into an export file.

Domain:
-------

9iFS is organized in a Domain concept, with an administrative
Domain controller and possibly other nodes as members in the Domain.

Repository:
-----------

All data managed by 9iFS resides in a 9i database schema, called
the 9iFS repository. You specify the database instance and schema name
during installation of 9iFS.

Commands:
---------

Stop IFS:

Oracle Internet File System 1.1.x:
  ORACLE_HOME\ifs1.1\bin\ifsstop.bat

Oracle 9iFS 9.0.1 (and higher):
  ORACLE_HOME\9ifs\bin\ifsstopdomain.bat

Start the iFS OC4J instance:
  Windows NT or 2K: > ifsstartoc4j.bat

Start up the iFS domain controller process:
  Windows NT or 2K: > ifslaunchdc.bat

Start iFS node processes:
  Windows NT or 2K: > ifslaunchnode.bat

Activate the iFS domain controller and nodes:
  Windows NT or 2K: > ifsstartdomain.bat

Here is a script example to run on windows NT or 2K:

StartIfs902.bat
===============

D:\ora902\9ifs\bin\ifsstartoc4j.bat
start D:\ora902\9ifs\bin\ifslaunchdc.bat
start D:\ora902\9ifs\bin\ifslaunchdomain.bat
D:\ora902\9ifs\bin\ifsstartdomain -s myifshost:53140 ifssys
echo "iFS 902 started"

- Home:

Oracle CM SDK must be installed in the Oracle9i Application Server, Release 2 home.
Make sure to select the file location carefully;
once installed, the Oracle CM SDK software cannot be moved without deinstalling
and reinstalling.

Oracle 9iFS requires an Oracle 9.0.2 home, which means you must install and
configure
Oracle9i Application Server, Release 2 in an Oracle home separate from that of the
database.
The Oracle home can be on the same machine (resources allowing), or on a different
machine.

- Install with Oracle Universal Installer.

Installation and configuration of Oracle 9iFS starts from the Oracle Universal
Installer,
the graphical user interface wizard that copies all necessary software to the
Oracle home
on the target machine.

The Oracle 9iFS Configuration tool launches automatically at the end of the Oracle
Universal Installer process
and guides you through the process of identifying the Oracle database to be used
for the
Oracle Internet File System schema; selecting the type of authentication to use
(native Oracle 9iFS credential manager or Oracle Internet Directory for credential
management);
and various other configuration tasks. The specific configuration tasks vary,
depending on the type
of deployment (new Oracle 9iFS domain vs. additional Oracle 9iFS nodes, for
example)

- Starting install wizard again:

ORACLE_HOME\ifs\cmsdk\bin\ifsca.bat

- connect to database:

The Oracle CM SDK Configuration Assistant attempts to make a connection as SYS AS
SYSDBA using a database string,
and therefore needs the database to be configured with a password file.
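
A minimal sketch of creating such a password file (file name and password are illustrative):

$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=secret entries=5

and in the init.ora of that database:

REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE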

- Directory service:

Select either CMSDK Directory Service or Oracle Internet Directory Service for
user authentication.

The default Oracle Internet Directory super user name/password is
cn=orcladmin/welcome1.
The default Oracle Internet Directory root Oracle context is set to
cn=OracleContext.

- Launch Internet File System Manager from a Web browser:

http://hostname.mycompany.com:7778/cmsdk/admin

Access paths and directory structure:
-------------------------------------

- Oracle FileSync Client Software:

In addition to using the networking protocols or client applications native to the
Windows operating system,
Windows users can install and use Oracle FileSync to keep local directories on a
desktop machine and folders in Oracle CM SDK synchronized.

Double-click Setup.exe to run the installation program,
or run O:\ifs\clients\filesync\setup.exe from the Windows Start...Run Menu.

- CUP (Command-line Utilities Protocol) Client

The Oracle Command-line Utilities Protocol server enables administrators and
developers to perform a variety of tasks quickly and easily
from a Windows command-line or a UNIX shell.

copy /ifs/clients/cmdline/win32
to a local directory.

============================
28. Some info on 9iAS rel. 2
============================

28.1 General Information:
=========================

Oracle9i Application Server (Oracle9iAS) is a part of the Oracle9i platform,
a complete and integrated e-business platform. The Oracle9i platform consists of:

- Oracle9i Developer Suite for developing applications

- Oracle9i Application Server for deploying Internet applications

- Oracle9i Database Server for storing content

9iAS is not just a webserver. A webserver is only part of the 9iAS system. 9iAS
offers OC4J
(Oracle Containers for J2EE), portals, webserver and webcache, and
BusinessIntelligence and other components.
OC4J:
-----

The "core" of the AS (thus the application part), is the OC4J architecture. The
OC4J infrastructure supports
EJB, JSP and Servlet applications. Developers can write J2EE applications, like
EJB, Servlet and JSP applications,
that will run on 9iAS.
OC4J itself is written in Java and runs on a Java virtual machine.

BusinessIntelligence:
---------------------

A set of services and client applications that make reports and all types of
analysis possible.
For example, the 'Oracle Reports service' , an application in the middle tier,
uses a queue for
submitted client requests. These request might create reports of a Datawarehouse
in a
Customer database etc...

28.1.1 Components:
------------------

There are 3 install types:

- J2EE and Web Cache
- Portal and Wireless
- BusinessIntelligence and Forms

Note:

The Oracle 9iAS 9.0.2 Concepts and the 9iAS Install guides mention 3 install types,
but the Admin guide Rel. 9.0.2 mentions 4 install types.
The fourth additional one is "Unified Messaging". This Enables you to integrate
different
types of messages into a single framework.
It includes all of the components available in the Business Intelligence and Forms
install type.

Component                       J2EE and Web Cache  Portal and Wireless  BusinessInt. and Forms
Oracle9iAS Web Cache            YES                 YES                  YES
Oracle HTTP Server              YES                 YES                  YES
Oracle9iAS Container for J2EE   YES                 YES                  YES
Oracle EM Web site              YES                 YES                  YES
Oracle9iAS Portal               no                  YES                  YES
Oracle9iAS Wireless             no                  YES                  YES
Oracle9iAS Discoverer           no                  no                   YES
Oracle9iAS Reports Services     no                  no                   YES
Oracle9iAS Clickstream Int.     no                  no                   YES
Oracle9iAS Forms Services       no                  no                   YES
Oracle9iAS Personalization      no                  no                   YES

28.1.2. Need of Oracle9iAS Infrastructure:
------------------------------------------

Prior to installing an instance of the "Portal and Wireless"
or "Business Intelligence and Forms" install type,
you must install and configure the Oracle9iAS Infrastructure
somewhere in your network, optimally on a separate computer.

The J2EE and Web Cache install type does not require Oracle9iAS Infrastructure.

You can install single or multiple instances of the Oracle9iAS install types
(J2EE and Web Cache, Portal and Wireless, and Business Intelligence and Forms)
on the same host, which is not a very realistic scenario.

Multiple instances of different Oracle9iAS install types can use one instance of
Oracle9iAS Infrastructure, and this could be a realistic scenario.

28.1.3. Metadata Repository in the Infrastructure:
--------------------------------------------------

The Oracle9iAS Infrastructure installation consists of:

- Oracle9iAS Metadata Repository:
  Pre-seeded database containing metadata needed to run Oracle9iAS instances.

- Oracle Internet Directory OID:
  Directory service that enables sharing information about dispersed users and
  network resources.
  Oracle Internet Directory implements LDAP v3.

- Oracle9iAS Single Sign-On SSO:
  Creates an enterprise-wide user authentication to access multiple accounts
  and Oracle9iAS applications.

- Oracle Management Server OMS:
  Processes system management tasks and administers the distribution of these tasks
  across the network using the Oracle Enterprise Manager Console.
  The Console and its three-tier architecture can be used with the
  Oracle Enterprise Manager Web site to manage not only Oracle9iAS, but your
  entire Oracle environment.

- J2EE and Web Cache:
  For internal use with Oracle9iAS Infrastructure. Not used for component
  application deployment.

Application server installations and their components use an infrastructure in the
following ways:

-- Components and applications use the Single Sign-on service provided by
   Oracle9iAS Single Sign-On.
-- Application server installations and components store configuration information
   and user and group privileges in Oracle Internet Directory.

-- Components use schemas that reside in the metadata repository.

SSO is required for "Portal and Wireless" and "Business Intelligence and Forms"
install types.
Also required for application server clustering with J2EE and Web Cache install
type.

28.1.4. Customer database:
--------------------------

This could be any database on any host, containing business data.
But the following components require a customer database:

Oracle9iAS Discoverer

Oracle9iAS Personalization

Oracle9iAS Unified Messaging

If you configure any of these components during installation, their setup and
configuration will not be
complete at the end of installation. You need to take additional steps to install
and tune a customer database,
load schemas into the database, and finish configuring the component to use the
customer database.

28.1.5. Oracle Home:
--------------------

Oracle home is the directory in which Oracle software is installed.

Different Oracle versions always get their own Oracle Homes.

Multiple instances of Oracle9iAS install types (J2EE and Web Cache, Business
Intelligence and Forms,
and Portal and Wireless) must be installed in separate Oracle homes on the same
computer.

You must install Oracle9iAS Infrastructure in its own Oracle home directory,
preferably on a separate host.
The Oracle9iAS installation cannot exist in the same Oracle home as the Oracle9iAS
Infrastructure installation.

28.1.6. Oracle9iAS Infrastructure Port Usage:
---------------------------------------------

!! Oracle9iAS Infrastructure requires exclusive use of port 1521

Installation of Oracle9iAS Infrastructure requires exclusive use of port 1521 on
your computer.
If one of your current system applications uses this port,
then complete one of the following actions before installing Oracle9iAS
Infrastructure:

If you have an existing application using port 1521,
then reconfigure the existing application to use another port.

If you have an existing Oracle Net listener and an Oracle9i database, then proceed
with the installation of Oracle9iAS Infrastructure.
Your Oracle9iAS Infrastructure will use the existing Oracle Net listener.

If you have an existing Net8 listener in use by an Oracle8i database,
then you must upgrade to the Oracle9i Net listener version by installing
Oracle9iAS Infrastructure.

28.1.7. Using the Oracle Enterprise Manager Console:
----------------------------------------------------

The Oracle Enterprise Manager console provides a wider view of your Oracle
environment, beyond Oracle9iAS. Use the Console to automatically discover and manage
databases, application servers, and Oracle applications across your entire network.

The Console and its related components are installed with the Oracle Management
Server
as part of the Oracle9iAS Infrastructure installation option.
The Console is part of the Oracle Management Server component of the Oracle9iAS
Infrastructure.
The Management Server, the Console, and Oracle Agent are installed
on the Oracle9iAS Infrastructure host, along with the other infrastructure
components.

28.1.8. Starting and Stopping the Oracle Management Server on Windows:
----------------------------------------------------------------------

On Windows systems, use the Services control panel to start and stop the
management server.
The name of the service is in the following format:

OracleORACLE_HOMEManagementServer

For example:

OracleOraHome902ManagementServer
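
So, from the command line, starting and stopping looks like this (home name as in the
example above):

C:\> net start OracleOraHome902ManagementServer
C:\> net stop OracleOraHome902ManagementServer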

28.1.9. OEM Website:
--------------------

You can verify the Enterprise Manager Web site is started by pointing your browser
to the Web site URL. For example:

console: http://hostname:1810 or http://127.0.0.1:1810
welcome: http://hostname:7777

To start or stop the Enterprise Manager Web site on Windows, use the Services
control panel.
The name of the service is in the following format:

OracleORACLE_HOMEEMwebsite

Or:
Start the Enterprise Manager Web site:
(UNIX)    ORACLE_HOME/bin/emctl start
(Windows) ORACLE_HOME\bin\emctl start

Stop the Enterprise Manager Web site:
emctl stop

Example Services:

Oracleias902Discoverer
Oracleias902ProcessManager
Oracleias902WebCache
Oracleias902WebCacheAdmin
Oracleinfra902Agent = Agent for Management Server
Oracleinfra902EMWebsite = Enterprise Manager Web site
Oracleinfra902InternetDirectory_iasdb
Oracleinfra902ManagementServer = OEM Management Server
Oracleinfra902ProcessManager
OracleOraHome901TNSListener = just the Listener
OracleServiceIASDB = infra structure db
OracleServiceO901 = regular customer db

Note for Oracle 10g RDBMS EM DB console:
========================================

Sites:
------

Enterprise Manager Database Control URL - (dbname):
http://hostname:1158/em
http://127.0.0.1:1810
http://127.0.0.1:1158

The iSQL*Plus URL is:
http://localhost:5561/isqlplus

The iSQL*Plus DBA URL is:
http://localhost:5561/isqlplus/dba

emctl prompt tool:
------------------

C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
http://xpwsora:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.

Logs are generated in directory
C:\ora10g\product\10.2.0\db_1/xpwsora_SPLCONF/sysman/log

Services:
---------

C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>net start | find "Ora"
OracleDBConsolesplconf
OracleOraDb10g_home1iSQL*Plus
OracleOraDb10g_home1TNSListener
OracleServiceSPLCONF

C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>

28.1.10. emctl tool : for controlling EM website:
-------------------------------------------------

The Enterprise Manager homepage http://hostname:1810 can only be accessed if the EM website
is running.

Usage:
emctl start|stop|status
emctl reload | upload
emctl set credentials [<Target_name>[:<Target_Type>]]
emctl gencertrequest
emctl installcert [-ca|-cert] <certificate base64 text file>
emctl set ssl test|on|off|password [<old password> <new password>]
emctl set password <old password> <new password>
emctl authenticate <pwd>
emctl switch home [-silent <new_home>]
emctl config <options>

emctl start                     : Start the Enterprise Manager Web site.
emctl stop                      : Stop the Enterprise Manager Web site (requires the ias_admin password).
emctl status                    : Verify the status of the Enterprise Manager Web site.
emctl set password new_password : Reset the ias_admin password.
emctl authenticate password     : Verify that the supplied password is the ias_admin password.

emctl config options can be listed by typing "emctl config"

emctl status
C:\temp>emctl status
EMD is up and running : 200 OK

28.1.11. OEMCTL tool: for controlling the Management Server:
------------------------------------------------------------

D:\temp>oemctl
"Syntax: OEMCTL START OMS "
" OEMCTL STOP OMS <EM Username>/<EM Password>"
" OEMCTL STATUS OMS <EM Username>/<EM Password>[@<OMS-HostName>]"
" OEMCTL PING OMS "
" OEMCTL START PAGING [BootHost Name] "
" OEMCTL STOP PAGING [BootHost Name] "
" OEMCTL ENABLE EVENTHANDLER"
" OEMCTL DISABLE EVENTHANDLER"
" OEMCTL EXPORT EVENTHANDLER <filename>"
" OEMCTL IMPORT EVENTHANDLER <filename>"
" OEMCTL DUMP EVENTHANDLER"
" OEMCTL IMPORT REGISTRY <filename> <Rep Username>/<Rep Password>@<RepAlias>"
" OEMCTL EXPORT REGISTRY <Rep Username>/<Rep Password>@<RepAlias>"
" OEMCTL CONFIGURE RWS"

28.1.12. The Intelligent Agent:
-------------------------------

The Oracle Intelligent Agent is installed whenever you install Oracle9iAS on a
host computer.
For example, if you select the J2EE and Web Cache installation type, the Oracle
Universal Installer
installs Oracle Enterprise Manager Web site and the Oracle Intelligent Agent,
along with the J2EE and Web Cache software. This means the Intelligent Agent
software
is always available if you decide to use the Console and the Management Server
to manage your Oracle9iAS environment.

The Console and Management Server are installed as part of the Oracle9iAS
Infrastructure.
In most cases, you install the Infrastructure on a dedicated host that can be used
to
centrally manage multiple application server instances. The Infrastructure
includes
Oracle Internet Directory, Single Sign-On, the metadata repository, the
Intelligent Agent,
and Oracle Management Server.

You only need to run the Intelligent Agent if you are using Oracle Management
Server in your enterprise.
In order for Oracle Management Server to detect application server installations
on a host,
you must make sure the Intelligent Agent is started.
Note that one Intelligent Agent is started per host and must be started after
every system boot.

28.1.13. AGENTCTL: for controlling the Intelligent Agent:
---------------------------------------------------------

(UNIX) You can run the following commands in the Oracle home of the primary
installation
(the first installation on the host) to get status and start the Intelligent
Agent:
ORACLE_HOME/bin/agentctl status agent
ORACLE_HOME/bin/agentctl start agent

(Windows) You can check the status and start the Intelligent Agent using the
Services control panel.
The name of the service is in the following format:

OracleORACLE_HOMEAgent (the executable is agntsrvc.exe)

start the Intelligent Agent in the Oracle home of the primary installation:

ORACLE_HOME/bin/agentctl start agent

28.1.14. Backup and Restore:
----------------------------

To ensure that you can make a full recovery from media failures,
you should perform regular backups of the following:

- Application Server and Infrastructure Oracle Homes
- Oracle Internet Directory
- Metadata Repository
- Customer Databases

You should perform regular backups of all files in the Oracle home of each
application server
and infrastructure installation in your enterprise using your preferred method of
filesystem backup.

Oracle Internet Directory offers command-line tools for backing up and restoring
the Oracle Internet Directory schema and subtree.
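
For example, a subtree can be dumped to an LDIF file with the ldifwrite tool
(connect string and base DN are illustrative):

$ ldifwrite connect=iasdb basedn="cn=OracleContext" ldiffile=oid_backup.ldif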

The metadata repository is an Oracle9i Enterprise Edition Database that you can
back up and restore
using several different tools and operating system commands.

The customer databases can be backed up using any standard method, the same way
you would do for any other 9i EE database.

Applications:
=============

28.2 Report services:
---------------------

The client contacts the Reports Server:
- Web, through a URL
- Non-web, via rwclient

- Requests go to a job queue.

- Users with a web browser:
  The HTTP Server must be running, and you use the Reports servlet, a JSP, or CGI
  components on 9iAS.

The reports server must be running.

- By default it is an in-process server:
  httpd -> mod_oc4j {reports servlet} -> Reports Server

- CGI:
  httpd -> CGI -> Reports Server

- Starting from a URL:
  http://machine:port/reports/rwservlet

  Command line:
  rwserver server=machinename

- The servlet is part of the OC4J instance: OC4J_BI_FORMS

- It's possible to make it a service of its own:
  rwserver -install autostart=yes/no

- Verify that the Reports servlet and server are running:

http://missrv/rwservlet/help
(show help page with rwservlet command line arguments)

http://machine:port/reports/rwservlet/showjobs?server=server_name
(show a listing of the jobqueue)

IP:7778/reports/rwservlet/showenv
http://<hostname>:<port>/reports/rwservlet/getserverinfo?
http://<hostname>:<port>/reports/rwservlet/getserverinfo?authid=orcladmin/<passw
ord of ias_admin>
http://machinename/servlet/RWServlet/showmap?server=Rep60_servername

- stopping Reports Server:

commandline:
rwserver server=machinename shutdown=normal/immediate authid=admin/password

Enterprise Manager: stop Reports Server

The reports servlet uses the PORT parameter configured in httpd.conf.

reports_user/welcome1
ias_admin/welcome1
orcladmin /welcome1

Reports Servlet
url : http://missrv:7778/reports/rwservlet
em username : reports_user
em password : welcome1
reports store : d:\reports (change in registry, key is REPORTS_PATH)

- Check the miskm.properties files:

$9ias_home\j2ee\OC4J_iFS_cmsdk\applications\brugpaneel\FrontOffice\WEB-INF\classes.
The following files are involved:

misIfs.properties : parameters of the iFS interface/front office.
miskm.properties  : parameters of the MIS Front Office applications.
XSQLConfig.xml    : XSQL parameters; must point to the mis_owner schema.

JDBC is also used. The settings for this connection are in the file:
$9ias_home\j2ee\OC4J_iFS_cmsdk\applications\brugpaneel\META-INF\data-sources.xml

miskm.properties:
-----------------

# miskm.reports parameters are used in order to display reports that are built
# using Oracle Reports.

# The action of the hidden form.
#miskm.reports.action=http://dgas40.mindef.nl/reports/rwservlet
miskm.reports.action=http://missrv.miskm.mindef.nl:7778/reports/rwservlet

# The schemaname/schemapassword@tns_names entry where the data is stored.
#miskm.reports.connectstring=mis_owner/mis_owner@miskm_demo
miskm.reports.connectstring=mis_owner/mis_owner@miskm_dev

# The name of the Reports Server (after default installation: rep_missrv)
#miskm.reports.repserver=rep_dgas40
miskm.reports.repserver=rep_missrv

# The location where the output is placed on the server.
miskm.reports.destype=cache

# The output of the generated report (e.g. html, pdf, etc.)
#miskm.reports.desformat=pdf
miskm.reports.desformat=rtf&mimetype=application/msword

# The reports server is a partner application, therefore an sso username/password
# is required.
miskm.reports.ssoauthid=reports_user/welcome1

- Reports Server configuration files:

ORACLE_HOME\reports\conf\server_name.conf
ORACLE_HOME\reports\dtd\rwserverconf.dtd
ORACLE_HOME\reports\conf\rwbuilder.conf
ORACLE_HOME\reports\conf\rwservlet.properties (inprocess or standalone)

Also, in ORACLE_HOME/reports/conf:
reports_server_name.conf
cgicmd.dat
jdbcpds.conf
proxyinfo.xml
rwbuilder.conf
rwserver.template
rwservlet.properties
textpds.conf
xmlpds.conf

Reports Servlet 9i

Reports are built with Reports Builder and must be stored in a directory
on the application server (by default this is d:\reports).
To let the Reports servlet know where reports are stored, the Windows
registry key REPORTS_PATH must be extended with the directory where the
reports are stored.

The servlet is part of the OC4J instance OC4J_BI_FORMS, so to use it,
this instance must be started.

The servlet uses Oracle SSO, and therefore an SSO user must be created
that is allowed to use the servlet:
1. Go to http://missrv.miskm.mindef.nl:7777/oiddas
2. Log in as the portal user (default portal/welcome1)
3. Create a new user, for example: reports_user.
4. Grant this user the privilege 'Allow resource management for Oracle
   Reports and Forms'.
5. Verify that this user matches the key miskm.reports.ssoauthid
   in the file miskm.properties.

28.3 Internet Directory and Single Sign-On:
-------------------------------------------

Oracle Internet Directory, an LDAP directory, provides a single repository and
administration for user accounts.

Oracle9iAS Single Sign-On enables users to login to Oracle9iAS and gain access to
those applications for which they
are authorized, without requiring them to re-enter a user name and password for
each application.
It is fully integrated with Oracle Internet Directory, which stores user
information. It supports LDAP-based
user and password management through OID.

Oracle Internet Directory is installed as part of the Oracle9iAS Infrastructure installation.
Oracle9iAS Single Sign-On is installed as part of the Oracle9iAS Infrastructure installation.
SSO is Portal's authentication engine. In 9iAS all applications may use SSO.
Without a functioning SSO, users will not be able to log on and use SSO.
The first test following a failure to authenticate is to log in directly using SSO:

http://servername:port/pls/orasso

Examples:

Single Sign-On Server   : oasdocs.us.oracle.com:7777
Internet Directory      : oasdocs.us.oracle.com:389
Infrastructure database : iasdb.oasdocs.us.oracle.com

missrv.miskm.mindef.nl:1521:iasdb

In a start script, you may find commands like the following to start the OID
server:

%INFRA_BIN%\oidmon start
%INFRA_BIN%\oidctl server=oidldapd instance=1 start

In a stop script, you may notice the following commands to stop the OID server:

%INFRA_BIN%\oidctl server=oidldapd instance=1 stop


%INFRA_BIN%\oidmon stop

When oidctl is executed, it connects to the database as user ODSCOMMON and simply
inserts/updates rows
into a table ODS.ODS_PROCESS depending on the options used in the command. A row
is inserted if the START option
is used, and updated if the STOP or RESTART option is used. So there are no
processes started at this point,
and LDAP server is not started.
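
As a quick check, you can see what oidctl registered with a simple query
(a sketch; run as a DBA user in the metadata repository database):

SQL> SELECT * FROM ods.ods_process;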

Both the listener/dispatcher process and server process are called oidldapd on
unix, and oidldapd.exe on NT.
Oidmon is also a process (called oidmon on unix, oidmon.exe/oidservice.exe on
windows).

To control the processes (servers) we need to have OID Monitor (oidmon) running.
This monitor is often called
daemon or guardian process as well. When oidmon is running, it periodically
connects to the database and reads
the ODS.ODS_PROCESS table in order to start/stop/restart related processes.

NOTE:

Because the only task oidctl has is to insert / update table ODS.ODS_PROCESS in
the database,
it's obvious that the database and listener have to be fully accessible when
oidctl is used.

Also, oidmon connects periodically to the database. So the database and listener
must be
accessible for oidmon to connect.

28.4 Example and default values:
--------------------------------

Information                                 Example Values              Your Information
-----------------------------------------------------------------------------------------
Oracle home location                        D:\ora9ias
Instance Name                               instance1
ias_admin Password                          welcome1
Single Sign-On Server HostName/server       oasdocs.us.oracle.com
Single Sign-On Port Number                  7777
Internet Directory Hostname/server          oasdocs.us.oracle.com
Internet Directory Port Number              389 / 4032
Internet Directory Username                 orcladmin, cn=orcladmin (the Oracle
                                            Internet Directory administrator)
Internet Directory Password                 welcome1
9iAS Metadata Repository                    oasdocs.us.oracle.com
9iAS Reports Services Outgoing Mail Server  oasdocs.us.oracle.com
http Server                                 oasdocs.us.oracle.com:7777
Metadata database connection string         oasdocs.us.oracle.com:1521:iasdb:iasdb.oasdocs.us.oracle.com

Oracle Universal Installer creates a file showing the port assignments during
installation of Oracle9iAS components.
This file is ORACLE_HOME\install\portlist.ini
It contains entries like the following default values:

Oracle HTTP Server port            = 7777
Oracle HTTP Server SSL port        = 4443
Oracle HTTP Server listen port     = 7778
Oracle HTTP Server SSL listen port = 4444
Oracle HTTP Server Jserv port      = 8007
Enterprise Manager Servlet port    = 1810

The OID username and password are defined in Oracle Internet Directory as either:

- orcladmin (root user)
- a user who is a member of the IASAdmins group in Oracle Internet Directory

The SSO schema is now 'ORASSO' and the ORASSO user is registered with OID after an
infrastructure install.
The default user is 'orcladmin' with a login of your ias_admin password.

EM Website: http://<hostname.domain>:<port>
(port 1810 assigned by default)
You will login using the 'ias_admin' username and the password you entered
during the Infrastructure installation.

SSO Login Page: http://<hostname.domain>:<port>/pls/orasso
You will login using the 'orcladmin' username and the password for the 'ias_admin'.
The port will be the HTTP Server port of your Infrastructure (port 7777 by default).
http://missrv.miskm.mindef.nl:7777/pls/orasso

OID_DAS Page: http://<hostname.domain>:<port>/oiddas
You will login using the 'orcladmin' username and the password for the 'ias_admin'.
The port will be the HTTP Server port of your Infrastructure (port 7777 by default).
The OC4J_DAS component must be UP for this test to succeed.

28.5 Management tools:
----------------------

28.5.1. OEM Website:
--------------------

You can access the Welcome Page by pointing your browser to the HTTP Server URL
for your installation.
For example, the default HTTP Server URL is:

http://hostname:7777

This page offer many options to explore features of 9iAS.

You can also go directly to the Oracle Enterprise Manager Web site using the
following instructions:

http://hostname:1810

The Enterprise Manager homepage http://hostname:1810 can only be accessed if the EM website
is running.
This corresponds to a service like "Oracleinfra902EMWebsite".

The username for the administrator user is ias_admin.
The password is defined during the installation of Oracle9iAS. The default
password is welcome1.

Depending upon the options you have installed, the Administration section of the
Oracle9iAS Instance Home Page
provides additional features that allow you to perform the following tasks:

- Associate the current instance with an existing Oracle9iAS Infrastructure.
- Configure additional Oracle9iAS components that have been installed, but not configured.
- Change the password or default schema for a component.

Start or stop on NT/W2K:

To start or stop the Enterprise Manager Web site on Windows, use the Services
control panel.
The name of the service is in the following format:

OracleORACLE_HOMEEMwebsite

For example, if the name of the Oracle Home is OraHome902, the service name is:
OracleOraHome902EMWebsite

You can also use:
net start OracleOraHome902EMWebsite
net stop  OracleOraHome902EMWebsite

Start or stop on UNIX:

Start the Enterprise Manager Web site: emctl start
Stop the Enterprise Manager Web site : emctl stop
Or use the kill command if it does not respond.

Changing the ias_admin Password:

1. Using Oracle Enterprise Manager Web Site:

Navigate to the Instance Home Page. Select Preferences in the top right corner.

This displays the Change Password Page.

Enter the new password and new password confirmation. Click OK.
This resets the ias_admin password for all application server installations on
the host.

Restart the Oracle Enterprise Manager Web site.

2. Using the emctl Command-Line Tool:

To change the ias_admin user password using a command-line tool:

Enter the following command in the Oracle home of the primary installation
(the first installation on the host):

(UNIX)    ORACLE_HOME/bin/emctl set password new_password
(Windows) ORACLE_HOME\bin\emctl set password new_password

For example:

(UNIX)    ORACLE_HOME/bin/emctl set password m5b8r5
(Windows) ORACLE_HOME\bin\emctl set password m5b8r5

Restart the Enterprise Manager Web site.

The Enterprise Manager Web site relies on various technologies to discover, monitor,
and administer the Oracle9iAS environment. These technologies include:

- Oracle Dynamic Monitoring Service (DMS)
  The Enterprise Manager Web site uses DMS to gather performance data about your
  Oracle9iAS components.

- Oracle HTTP Server and Oracle Containers for J2EE (OC4J)
  The Enterprise Manager Web site also uses HTTP Server and OC4J to deploy its
  management components.

- Oracle Process Management Notification (OPMN)
  OPMN manages Oracle HTTP Server and OC4J processes within an application server
  instance.
  It channels all events from different component instances to all components
  interested in receiving them.

- Distributed Configuration Management (DCM)
  This will be used with clusters or farms.
  DCM manages configurations among application server instances
  that are associated with a common Infrastructure (members of an Oracle9iAS farm).
  It enables Oracle9iAS cluster-wide deployment so you can deploy an application
  to an entire cluster,
  or make a single host or instance configuration change applicable across all
  instances in a cluster.

28.5.2 OEM Console:
-------------------

The console is a non Web, Java tool, and part of the 3-tier OMS architecture.
See also section 28.1.

The Oracle Enterprise Manager console provides a wider view of your Oracle
environment, beyond Oracle9iAS. Use the Console to automatically discover and manage
databases, application servers, and Oracle applications across your entire network.

The Console and its related components are installed with the Oracle Management
Server
as part of the Oracle9iAS Infrastructure installation option.
The Console is part of the Oracle Management Server component of the Oracle9iAS
Infrastructure.
The Management Server, the Console, and Oracle Agent are installed
on the Oracle9iAS Infrastructure host, along with the other infrastructure
components.

The Console offers advanced management features, such as an Event system to notify
administrators
of changes in your environment and a Job system to automate standard and
repetitive tasks,
such as executing a SQL script or executing an operating system command.

The Console and Management Server are installed as part of the Oracle9iAS
Infrastructure.

Use the OEMCTL commandline tool for controlling OMS. See section 28.1.11.

29. Starting and stopping 9iAS and components:
==============================================

29.1 Starting a simple Webcache/J2EE installation:
--------------------------------------------------

Start the Enterprise Manager Web site.
Even though you are not using the Web site, this ensures that the processes to
support the dcmctl command-line tool are started. To start the Web site, execute
the following command in the Oracle home of the primary installation on your host:

(UNIX)    ORACLE_HOME/bin/emctl start
(Windows) ORACLE_HOME\bin\emctl start

Start Oracle HTTP Server and OC4J (the rest of the commands in this section should
be executed in the Oracle home of the J2EE and Web Cache instance):

(UNIX)    ORACLE_HOME/dcm/bin/dcmctl start
(Windows) ORACLE_HOME\dcm\bin\dcmctl start

If Web Cache is configured, start Web Cache:

(UNIX)    ORACLE_HOME/bin/webcachectl start
(Windows) ORACLE_HOME\bin\webcachectl start

29.2 Starting and stopping advanced 9iAS installations
------------------------------------------------------

Start/Stop Enterprise:
----------------------

Starting an Application Server Enterprise:
The order in which to start the pieces of an application server enterprise is as follows:

1. Start the infrastructure.
   If your enterprise contains more than one infrastructure, start the primary
   infrastructure first.

2. Start customer databases.
   If your enterprise contains customer databases, you can start them using
   several methods, including SQL*Plus and Oracle Enterprise Manager Console.
   Remember that iFS could also be installed into the customer database.

3. Start application server instances.
   You can start application server instances in any order.
   If instances are part of a cluster, start them as part of starting the cluster.

The order in which to stop the pieces of an application server enterprise is as follows:

1. Stop application server instances.
   You can stop application server instances in any order.
   If instances are part of a cluster, stop them as part of stopping the cluster.

2. Stop customer databases.
   If your enterprise contains customer databases, you can stop them using
   several methods, including SQL*Plus and Oracle Enterprise Manager Console.

3. Stop the infrastructure.
   If your enterprise contains more than one infrastructure, stop the primary
   infrastructure last.

Start/Stop Instance:
--------------------

Start:

First you have started the infrastructure instance, and customer database
instance.

1. Preliminary:

- Enterprise Manager Web Site (Required):

The first step before starting an application server instance is to ensure that
the Enterprise Manager Web site
is running on the host. The Web site provides underlying processes required to run
an application server instance
and must be running even if you intend to use command-line tools to start your
instance.

There is one Enterprise Manager Web site per host. It resides in the primary
installation (or first installation)
on that host. The primary installation can be an application server installation
or an infrastructure.
This Web site usually listens on port 1810 and provides services to all
application server instances
and infrastructures on that host.

To verify the status of the Enterprise Manager Web site, run the following command
in the Oracle home of the
primary installation:

(UNIX)    ORACLE_HOME/bin/emctl status
(Windows) ORACLE_HOME\bin\emctl status

To start the Enterprise Manager Web site, run the following command in the Oracle
home of the primary installation:

(UNIX)    ORACLE_HOME/bin/emctl start
(Windows) ORACLE_HOME\bin\emctl start

Or on NT/W2K: net start OracleORACLE_HOMEEMwebsite


- Intelligent Agent (Optional)

You only need to run the Intelligent Agent if you are using Oracle Management
Server in your enterprise.
In order for Oracle Management Server to detect application server installations
on a host,
you must make sure the Intelligent Agent is started. Note that one Intelligent
Agent is started per host
and must be started after every system boot.

(UNIX) You can run the following commands in the Oracle home of the primary
installation
(the first installation on the host) to get status and start the Intelligent
Agent:

ORACLE_HOME/bin/agentctl status agent
ORACLE_HOME/bin/agentctl start agent

(Windows) You can check the status and start the Intelligent Agent using the
Services control panel.
The name of the service is in the following format:

OracleORACLE_HOMEAgent

2. Start the instance using OEM Website:

You can start, stop, and restart all types of application server instances using
the
Instance Home Page on the Enterprise Manager Web site.

Or...

3. Start the 'J2EE and Web Cache' instance using commands:

Start OEM Website: ORACLE_HOME\bin\emctl start
   or: net start OracleORACLE_HOMEEMwebsite

Start Oracle HTTP Server and OC4J: ORACLE_HOME\dcm\bin\dcmctl start

If Web Cache is configured, start Web Cache: ORACLE_HOME\bin\webcachectl start

4. Stop the 'J2EE and Web Cache' instance using commands:

ORACLE_HOME\bin\webcachectl stop
ORACLE_HOME\dcm\bin\dcmctl stop

Start/Stop components:
----------------------

You can start, stop, and restart individual components using the Instance Home
Page or the component home page
on the Enterprise Manager Web site. You can also start and stop some components
using command-line tools.
Oracle HTTP Server
Start: ORACLE_HOME\dcm\bin\dcmctl start -ct ohs
Stop : ORACLE_HOME\dcm\bin\dcmctl stop -ct ohs

Individual OC4J Instances
Start: ORACLE_HOME\dcm\bin\dcmctl start -co instance_name
Stop : ORACLE_HOME\dcm\bin\dcmctl stop -co instance_name

All OC4J Instances
Start: ORACLE_HOME\dcm\bin\dcmctl start -ct oc4j
Stop : ORACLE_HOME\dcm\bin\dcmctl stop -ct oc4j

Web Cache
Start: ORACLE_HOME\bin\webcachectl start
Stop : ORACLE_HOME\bin\webcachectl stop

Reports
Start: ORACLE_HOME\bin\rwserver server=name
Stop : ORACLE_HOME\bin\rwserver server=name shutdown=yes

You cannot start or stop some components. The radio buttons in the Select column
on the Instance Home Page
are disabled for these components, and their component home pages do not have
Start, Stop, or Restart buttons.

Start/Stop the Infrastructure:
------------------------------

No matter which procedure you use, starting an infrastructure involves performing
the following steps in order:

Start the Metadata Repository (= infrastructure database)
Start OID, Oracle Internet Directory
Start the Enterprise Manager Web site
Start OHS, Oracle HTTP Server
Start the OC4J_DAS instance
Start Web Cache (optional)
Start Oracle Management Server and Intelligent Agent (optional)

No matter which procedure you use, stopping an infrastructure involves performing
the following steps in order:

Stop all middle-tier application server instances that use the infrastructure.
Stop Oracle Management Server and Intelligent Agent (optional)
Stop Web Cache (optional)
Stop OC4J instances
Stop Oracle HTTP Server
Stop Oracle Internet Directory
Stop the Metadata Repository

The next section describes how to start an infrastructure using command-line tools
on Windows.
Except where noted, all commands should be run in the Oracle home of the
infrastructure.
-- ---------------------------------------------------------------------

-Start the metadata repository listener:

ORACLE_HOME\bin\lsnrctl start

- Set the ORACLE_SID environment variable to the metadata repository system
  identifier (default is iasdb).

You can set the ORACLE_SID system variable using the System Properties control
panel.

-Start the metadata repository instance using SQL*Plus:

ORACLE_HOME\bin\sqlplus /nolog
sql> connect sys/password_for_sys as sysdba
sql> startup
sql> quit

-- ---------------------------------------------------------------------
- Start Oracle Internet Directory.

Make sure the ORACLE_SID is set to the metadata repository system identifier
(refer to previous step).
Start the Oracle Internet Directory monitor:

ORACLE_HOME\bin\oidmon start

-Start the Oracle Internet Directory server:

ORACLE_HOME\bin\oidctl server=oidldapd configset=0 instance=n start

where n is any instance number (1, 2, 3...) that is not in use. For example:
ORACLE_HOME\bin\oidctl server=oidldapd configset=0 instance=1 start

-- ---------------------------------------------------------------------
- Start the Enterprise Manager Web site.

Even though you are using command-line, the Web site is required because it
provides underlying support
for the command-line tools. The Web site must be started after every system
boot.
You can check the status and start the Enterprise Manager Web site using the
Services control panel.
The name of the service is in the following format: OracleORACLE_HOMEEMwebsite

You can also start the service using the following command line:

net start WEB_SITE_SERVICE_NAME

-- ---------------------------------------------------------------------
-Start Oracle HTTP Server.

ORACLE_HOME\dcm\bin\dcmctl start -ct ohs

Note that starting Oracle HTTP Server also makes Oracle9iAS Single Sign-On
available.

-- ---------------------------------------------------------------------
- Start the OC4J_DAS instance.

ORACLE_HOME\dcm\bin\dcmctl start -co OC4J_DAS

Note that the infrastructure instance contains other OC4J instances, such as
OC4J_home and OC4J_Demos,
but these do not need to be started; their services are not required and incur
unnecessary overhead.

-- ---------------------------------------------------------------------
-Start Web Cache (optional).

Web Cache is not configured in the infrastructure by default, but if you have
configured it, start it as follows:

ORACLE_HOME\bin\webcachectl start
-- ---------------------------------------------------------------------
- Start Oracle Management Server and Intelligent Agent (optional).

Perform these steps only if you have configured Oracle Management Server.
Start Oracle Management Server:

ORACLE_HOME\bin\oemctl start oms

-- ---------------------------------------------------------------------

Start the Intelligent Agent.

In order for Oracle Management Server to detect the infrastructure and any other
application server
installations on this host, you must make sure the Intelligent Agent is started.
Note that one Intelligent Agent
is started per host and must be started after every reboot.

You can check the status and start the Intelligent Agent using the Services
control panel.
The name of the service is in the following format:

OracleORACLE_HOMEAgent

30. Creating a Database Access Descriptor (DAD) for mod_plsql:
---------------------------------------------------------------

Oracle HTTP Server contains the mod_plsql module, which provides support for
building PL/SQL-based applications on the Web.
PL/SQL stored procedures retrieve data from a database and generate HTTP responses
containing data and code
to display in a Web browser.

In order to use mod_plsql you must install the PL/SQL Web Toolkit into a database
and create a Database Access Descriptor (DAD) which provides mod_plsql with
connection information for the database.
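
A minimal sketch of a DAD entry in dads.conf (location, schema, and connect string
are illustrative):

<Location /pls/mydad>
  SetHandler                 pls_handler
  PlsqlDatabaseUsername      scott
  PlsqlDatabasePassword      tiger
  PlsqlDatabaseConnectString myhost:1521:orcl
  PlsqlAuthenticationMode    Basic
</Location>

After editing dads.conf, notify DCM with dcmctl updateConfig (see the next section).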

31. Configuring HTTP Server, OC4J, and Web Cache:
--------------------------------------------------

You can use the OEM website in order to configure components as HTTP Server, OC4J,
and Web Cache,
or
you can manually edit configuration files.

If you edit Oracle HTTP Server or OC4J configuration files manually, instead of
using the Enterprise Manager Web site,
you must use the DCM command-line utility dcmctl to notify the DCM repository of
the changes. Otherwise,
your changes will not go into effect and will not be reflected in the Enterprise
Manager Web site.

Note that the dcmctl tool is located in:
(UNIX)    ORACLE_HOME/dcm/bin/dcmctl
(Windows) ORACLE_HOME\dcm\bin\dcmctl

To notify DCM of changes made to: Use this command:

Oracle HTTP Server configuration files: dcmctl updateConfig -ct ohs

OC4J configuration files : dcmctl updateConfig -ct oc4j

All configuration files : dcmctl updateConfig
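
For example, after hand-editing httpd.conf, a typical flow would be (a sketch; the
restart step assumes you want the change active immediately):

$ vi $ORACLE_HOME/Apache/Apache/conf/httpd.conf
$ $ORACLE_HOME/dcm/bin/dcmctl updateConfig -ct ohs
$ $ORACLE_HOME/dcm/bin/dcmctl restart -ct ohs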

- HTTP Server:

You can configure Oracle HTTP Server using the Oracle HTTP Server Home Page on the
Oracle Enterprise Manager Web site.
You can perform tasks such as modifying directives, changing log properties,
specifying a port for a listener,
modifying the document root directory, managing client requests, and editing
server configuration files.

You can access the Oracle HTTP Server Home Page in the Name column of the System
Components table on the Instance Home Page.

- OC4J:

You can configure Oracle9iAS Containers for J2EE (OC4J) using the Enterprise
Manager Web site.
You can use the Instance Home Page to create and delete OC4J instances, each of
which has its own OC4J Home Page.
You can use each individual OC4J Home Page to configure the corresponding OC4J
instance and its deployed applications.

Creating an OC4J Instance.

Every application server instance has a default OC4J instance named OC4J_home.
You can create additional instances, each with a unique name, within an
application server instance.
To create a new OC4J instance:

- Navigate to the Instance Home Page on the Oracle Enterprise Manager Web site.
Scroll to the System Components section.
- Click Create OC4J Instance. This opens the Create OC4J Instance Page.
- In the Create OC4J Instance Page, type a unique instance name in the OC4J
instance name field. Click Create.
- A new OC4J instance is created with the name you provided.
- This OC4J instance shows up on the Instance Home Page in the System Components
section.
- The instance is initially in a stopped state and can be started any time after
creation.

Each OC4J instance has its own OC4J Home Page which allows you to configure global
services
and deploy applications to that instance.
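
As an alternative to the EM Web site, an OC4J instance can also be created from
the command line with dcmctl (a sketch; OC4J_Test is a placeholder name):

dcmctl createcomponent -ct oc4j -co OC4J_Test
dcmctl start -co OC4J_Test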

32. 9iAS CONFIG FILES:
-----------------------

---------------------------------------------
32.1 9iAS Rel. 2 most obvious config files:
---------------------------------------------

Oracle HTTP Server:
-------------------
httpd.conf
oracle_apache.conf
access.conf
magic
mime.types
mod_oc4j.conf
srm.conf in ORACLE_HOME/Apache/Apache/conf

JServ:
------
jserv.conf
jserv.properties
zone.properties in ORACLE_HOME/Apache/Jserv/etc

mod_oradav:
-----------
moddav.conf in ORACLE_HOME/Apache/oradav/conf

mod_plsql:
----------
cache.conf
dads.conf in ORACLE_HOME/Apache/modplsql/conf

Oracle9iAS Web Cache:
---------------------
internal.xml
internal_admin.xml
webcache.xml in ORACLE_HOME/webcache

Oracle9iAS Reports Services:
----------------------------
reports_server_name.conf
cgicmd.dat
jdbcpds.conf
proxyinfo.xml
rwbuilder.conf
rwserver.template
rwservlet.properties
textpds.conf
xmlpds.conf in ORACLE_HOME/reports/conf

Oracle9iAS Discoverer:
----------------------
configuration.xml in
ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/WEB-INF/lib
viewer_config.xml in
ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/viewer_files
plus_config.xml in
ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/plus_files
portal_config.xml in
ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/portal
pref.txt in ORACLE_HOME/discoverer902/util
.reg_key.dc in ORACLE_HOME/discoverer902/bin/.reg

---------------------------------------------
32.2 9iAS Rel. 2 list of all .conf files:
---------------------------------------------

Now, as an example, here follows a listing of all .conf configuration files of a
real 9iAS server.

-- -------------------------------------------------------------------
-- BEGIN LISTING FROM A REAL-LIFE 9iAS rel. 9.0.2 Server:
-- -------------------------------------------------------------------

Directory of D:\ORACLE\ias902\Apache\Apache\conf

06/25/2002 10:55p 293 access.conf


12/01/2003 02:07p 46,178 httpd.conf
12/01/2003 02:07p 3,342 mod_oc4j.conf
12/01/2003 02:07p 517 mod_osso.conf
12/01/2003 02:07p 811 oracle_apache.conf
06/25/2002 10:55p 305 srm.conf
12/01/2003 02:07p 551 wireless_sso.conf
7 File(s) 51,997 bytes

Directory of D:\ORACLE\ias902\Apache\Apache\conf\osso

04/23/2003 08:41p 433 osso.conf


1 File(s) 433 bytes

Directory of D:\ORACLE\ias902\Apache\Jserv\conf

04/23/2003 08:38p 10,745 jserv.conf


1 File(s) 10,745 bytes

Directory of D:\ORACLE\ias902\Apache\jsp\conf

12/01/2003 02:07p 594 ojsp.conf


1 File(s) 594 bytes

Directory of D:\ORACLE\ias902\Apache\modplsql\conf

12/01/2003 02:07p 840 cache.conf


12/01/2003 02:07p 2,122 dads.conf
12/01/2003 02:07p 1,598 plsql.conf
3 File(s) 4,560 bytes

Directory of D:\ORACLE\ias902\Apache\oradav\conf

12/01/2003 02:07p 785 moddav.conf


12/01/2003 02:07p 396 oradav.conf
2 File(s) 1,181 bytes

Directory of D:\ORACLE\ias902\click\conf

12/01/2003 02:07p 427 click-apache.conf


1 File(s) 427 bytes

Directory of D:\ORACLE\ias902\click\conf\templates

01/14/2002 11:21p 445 click-apache.conf


1 File(s) 445 bytes

Directory of D:\ORACLE\ias902\dcm\config

02/17/2004 01:31p 186 dcm.conf


1 File(s) 186 bytes

Directory of D:\ORACLE\ias902\dcm\config\plugins\apache

06/27/2002 11:01p 43,623 httpd.conf


1 File(s) 43,623 bytes

Directory of D:\ORACLE\ias902\dcm\repository.install\dcm\config

04/23/2003 08:57p 185 dcm.conf


1 File(s) 185 bytes

Directory of D:\ORACLE\ias902\forms90\server

12/01/2003 02:07p 2,997 forms90.conf


1 File(s) 2,997 bytes

Directory of D:\ORACLE\ias902\ldap\das

12/01/2003 02:07p 165 oiddas.conf


1 File(s) 165 bytes

Directory of D:\ORACLE\ias902\opmn\conf

02/17/2004 01:31p 45 ons.conf


1 File(s) 45 bytes

Directory of D:\ORACLE\ias902\portal\conf

12/01/2003 02:07p 1,407 portal.conf


1 File(s) 1,407 bytes

Directory of D:\ORACLE\ias902\RDBMS\demo

12/01/2003 02:07p 482 aqxml.conf


1 File(s) 482 bytes

Directory of D:\ORACLE\ias902\reports\conf

04/28/2003 02:59p 3,386 Copy (2) of rep_vbas99.conf


05/17/2002 08:45p 7,421 jdbcpds.conf
04/28/2003 02:59p 3,386 rep_vbas99.conf
05/17/2002 08:45p 6,381 textpds.conf
05/17/2002 08:45p 454 xmlpds.conf
5 File(s) 21,028 bytes

Directory of D:\ORACLE\ias902\ultrasearch\webapp\config

12/01/2003 02:07p 320 ultrasearch.conf


1 File(s) 320 bytes

Directory of D:\ORACLE\ias902\xdk\admin

12/01/2003 02:07p 294 xml.conf


1 File(s) 294 bytes

Directory of D:\ORACLE\infra902\Apache\Apache\conf

06/25/2002 10:55p 293 access.conf


04/23/2003 08:23p 46,224 httpd.conf
04/23/2003 08:23p 1,500 mod_oc4j.conf
04/23/2003 08:23p 519 mod_osso.conf
04/23/2003 08:23p 747 oracle_apache.conf
06/25/2002 10:55p 305 srm.conf
6 File(s) 49,588 bytes

Directory of D:\ORACLE\infra902\Apache\Apache\conf\osso

04/23/2003 08:20p 433 osso.conf


1 File(s) 433 bytes

Directory of D:\ORACLE\infra902\Apache\Jserv\conf

04/23/2003 08:04p 10,763 jserv.conf


1 File(s) 10,763 bytes

Directory of D:\ORACLE\infra902\Apache\jsp\conf

04/23/2003 08:23p 598 ojsp.conf


1 File(s) 598 bytes

Directory of D:\ORACLE\infra902\Apache\modplsql\conf
04/23/2003 08:23p 842 cache.conf
04/23/2003 08:23p 1,485 dads.conf
04/23/2003 08:23p 1,606 plsql.conf
3 File(s) 3,933 bytes

Directory of D:\ORACLE\infra902\Apache\oradav\conf

04/23/2003 08:23p 789 moddav.conf


04/23/2003 08:23p 2 oradav.conf
2 File(s) 791 bytes

Directory of D:\ORACLE\infra902\dcm\config

02/17/2004 01:31p 188 dcm.conf


1 File(s) 188 bytes

Directory of D:\ORACLE\infra902\dcm\config\plugins\apache

06/27/2002 11:01p 43,623 httpd.conf


1 File(s) 43,623 bytes

Directory of D:\ORACLE\infra902\dcm\repository.install\dcm\config

04/23/2003 08:24p 187 dcm.conf


1 File(s) 187 bytes

Directory of D:\ORACLE\infra902\ldap\das

04/23/2003 08:23p 165 oiddas.conf


1 File(s) 165 bytes

Directory of D:\ORACLE\infra902\oem_webstage

04/23/2003 08:23p 943 oem.conf


1 File(s) 943 bytes

Directory of D:\ORACLE\infra902\opmn\conf

02/17/2004 01:31p 45 ons.conf


1 File(s) 45 bytes

Directory of D:\ORACLE\infra902\RDBMS\demo

04/23/2003 08:23p 477 aqxml.conf


1 File(s) 477 bytes

Directory of D:\ORACLE\infra902\sqlplus\admin

04/23/2003 08:23p 1,454 isqlplus.conf


1 File(s) 1,454 bytes

Directory of D:\ORACLE\infra902\sso\conf

04/23/2003 08:23p 154 sso_apache.conf


1 File(s) 154 bytes

Directory of D:\ORACLE\infra902\ultrasearch\webapp\config
04/23/2003 08:23p 324 ultrasearch.conf
1 File(s) 324 bytes

Directory of D:\ORACLE\infra902\xdk\admin

04/23/2003 08:23p 291 xml.conf


1 File(s) 291 bytes

Directory of D:\ORACLE\ora901\Apache\Apache\conf

08/20/2001 11:00a 285 access.conf


04/23/2003 07:26p 43,205 httpd.conf
04/23/2003 07:33p 472 oracle_apache.conf
08/20/2001 11:00a 297 srm.conf
4 File(s) 44,259 bytes

Directory of D:\ORACLE\ora901\Apache\Jserv\conf

04/23/2003 07:26p 6,710 jserv.conf


1 File(s) 6,710 bytes

Directory of D:\ORACLE\ora901\Apache\jsp\conf

04/23/2003 07:33p 511 ojsp.conf


1 File(s) 511 bytes

Directory of D:\ORACLE\ora901\Apache\modose\conf

04/23/2003 07:27p 637 ose.conf


1 File(s) 637 bytes

Directory of D:\ORACLE\ora901\Apache\modplsql\cfg

04/23/2003 07:29p 318 plsql.conf


1 File(s) 318 bytes

Directory of D:\ORACLE\ora901\BC4J

04/23/2003 07:33p 121 bc4j.conf


1 File(s) 121 bytes

Directory of D:\ORACLE\ora901\oem_webstage

04/23/2003 07:33p 682 oem.conf


1 File(s) 682 bytes

Directory of D:\ORACLE\ora901\rdbms\demo

04/23/2003 07:26p 326 aqxml.conf


1 File(s) 326 bytes

Directory of D:\ORACLE\ora901\sqlplus\admin

04/23/2003 07:33p 1,476 isqlplus.conf


1 File(s) 1,476 bytes

Directory of D:\ORACLE\ora901\ultrasearch\jsp\admin\config
05/02/2001 08:26p 10,681 mod__ose.conf
1 File(s) 10,681 bytes

Directory of D:\ORACLE\ora901\xdk\admin

04/23/2003 07:33p 253 xml.conf


1 File(s) 253 bytes

Total Files Listed:


71 File(s) 321,045 bytes

33. Deploying J2EE Applications:
----------------------------------

You can deploy J2EE applications using the OC4J Home Page on the Enterprise
Manager Web site.
To navigate to an OC4J Home Page, do the following:

- Navigate to the Instance Home Page where the OC4J instance resides.
  Scroll to the System Components section.

- Select the OC4J instance in the Name column. This opens the OC4J Home Page for
  that OC4J instance.

- Scroll to the Deployed Applications section on the OC4J Home Page.

Clicking Deploy EAR File or Deploy WAR File starts the deployment wizard, which
deploys the application
to the OC4J instance and binds any Web application to a URL context.

Your J2EE application can contain the following modules:

-- Web applications
The Web applications module (WAR files) includes servlets and JSP pages.

-- EJB applications
The EJB applications module (EJB JAR files) includes Enterprise JavaBeans
(EJBs).

-- Client application contained within a JAR file

Now archive the JAR and WAR files that belong to an enterprise Java application
into an EAR file
for deployment to OC4J. The J2EE specifications define the layout for an EAR file.

The internal layout of an EAR file should be as follows:

<appname>-
|--META_INF
| |
| -----application.xml
|
|--EJB JAR file
|
|--WEB WAR file
|
|--Client JAR file
|

When you deploy an application within a WAR file, the application.xml file is
created
for the Web application.
When you deploy an application within an EAR file, you must create the
application.xml
file within the EAR file.
Thus, deploying a WAR file is an easier method for deploying a Web application.
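
As a sketch, assuming the EJB module myejb.jar, the Web module myweb.war and a
hand-written META-INF/application.xml (all placeholder names), the EAR file can
be assembled with the standard jar utility:

jar cvf myapp.ear META-INF/application.xml myejb.jar myweb.war

The resulting myapp.ear can then be deployed with Deploy EAR File on the OC4J
Home Page, as described above.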

-------------
34. Errors:
-------------

-- TROUBLESHOOTING 9iAS Rel. 2
-- Version 2.0
-- 4 July 2004
-- Albert van der Sel

With a 9iAS Release 2 full install (Business Intelligence install), a tremendous
number of errors might be encountered.
Here you will find my own experiences, as well as some threads from metalink.

OPMN = Oracle Process Manager and Notification Server
JAZN / JAAS = Oracle Application Server Java Authentication and Authorization Service
DCM = Distributed Configuration Management

OPMN stands for 'oracle process management notification' and is Oracle's 'high
availability' system.
OPMN monitors processes and brings them up again automatically if they go down.
It is started when you start enterprise manager website with emctl start from the
prompt
in the infrastructure oracle home, and doing this starts 2 opmn processes for each
oracle home.
OPMN consists of two components - Oracle Process Manager and Oracle Notification
System.
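
A minimal sketch of controlling the OPMN-managed processes from the command line
(opmnctl startall/stopall are also used in the start/stop scripts later in this
document; the status subcommand is assumed to be available in this release):

ORACLE_HOME/opmn/bin/opmnctl startall
ORACLE_HOME/opmn/bin/opmnctl status
ORACLE_HOME/opmn/bin/opmnctl stopall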

DCM stands for 'Distributed Configuration Management' and is the framework by which
all IAS R2 components hang together. DCM is a layer that ensures that if something
is changed in one component, others like Enterprise Manager are made aware as well.
It is not a process as such, but rather a generic term for a framework and utilities.
It is controlled directly with the dcmctl command.

DMS = Dynamic Monitoring Services. These processes are started when you start OHS.
DMS basically gathers information on components.

Jserv works in much the same way as in R1, except that Oracle components no longer
use this servlet architecture, but use OC4J instead.

mod_plsql works the same way as in R1.

mod_oradav oradav allows web folders to be shared with clients e.g. PC's and
accessed as if they were NT folders.

OC4J_DAS is used by Portal for the management of users and groups. You access this
via http://machine:port/oiddas

============================
PART 1: GENERAL 9iAS ERRORS:
============================

1. Troubleshooting the targets.xml:
===================================

If you change the HOSTNAME for the repository (infrastructure) database,
then you need to update the ssoServerMachineName property for the oracle SSO target
in INFRA_ORACLE_HOME/sysman/emd/targets.xml

The $ORACLE_HOME/sysman/emd/targets.xml file is created during installation of
9iAS and includes descriptions of all currently known targets.
This file is used as the source of targets for the EM Website.

sample targets.xml:

- <Targets>
- <Target TYPE="oracle_webcache" NAME="ias902dev.missrv.miskm.mindef.nl_Web Cache"
DISPLAY_NAME="Web Cache">
<Property NAME="HTTPPort" VALUE="7778" />
<Property NAME="logFileName" VALUE="webcache.log" />
<Property NAME="authrealm" VALUE="Oracle Web Cache Administrator" />
<Property NAME="AdminPort" VALUE="4003" />
<Property NAME="HTTPProtocol" VALUE="http" />
<Property NAME="logFileDir" VALUE="/sysman/log" />
<Property NAME="HTTPMachine" VALUE="missrv.miskm.mindef.nl" />
<Property NAME="HTTPQuery" VALUE="" />
<Property NAME="controlFile" VALUE="d:\oracle\ias902/bin/webcachectl.exe" />
<Property NAME="MonitorPort" VALUE="4005" />
<Property NAME="HTTPPath" VALUE="/" />
<Property NAME="authpwd" VALUE="98574abda4f0a0cadcfe3e420f09854b"
ENCRYPTED="TRUE" />
<Property NAME="authuser" VALUE="98574abda4f0a0cadcfe3e420f09854b"
ENCRYPTED="TRUE" />
- <CompositeMembership>
<MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef.nl"
ASSOCIATION="null" />
</CompositeMembership>
</Target>
+ <Target TYPE="oracle_clkagtmgr"
NAME="ias902dev.missrv.miskm.mindef.nl_Clickstream" DISPLAY_NAME="Clickstream
Collector" ON_HOST="missrv.miskm.mindef.nl">
- <CompositeMembership>
<MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef.nl" />
</CompositeMembership>
</Target>
..
..
- <Target TYPE="oracle_repserv"
NAME="ias902dev.missrv.miskm.mindef.nl_Reports:rep_missrv"
DISPLAY_NAME="Reports:rep_missrv" VERSION="1.0" ON_HOST="missrv.miskm.mindef.nl">
<Property NAME="OracleHome" VALUE="d:\oracle\ias902" />
<Property NAME="UserName" VALUE="repadmin" />
<Property NAME="Servlet"
VALUE="http://missrv.miskm.mindef.nl:7778/reports/rwservlet" />
<Property NAME="Server" VALUE="rep_missrv" />
<Property NAME="Password" VALUE="ced9a541f77e7df6" ENCRYPTED="TRUE" />
<Property NAME="host" VALUE="missrv.miskm.mindef.nl" />
- <CompositeMembership>
<MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef.nl"
ASSOCIATION="null" />
</CompositeMembership>
</Target>
</Target>
- <Target TYPE="oracle_ifs" NAME="iFS_missrv.miskm.mindef.nl:1521:o901:IFSDP">
<Property NAME="DomainName" VALUE="ifs://missrv.miskm.mindef.nl:1521:o901:IFSDP"
/>
<Property NAME="IfsRootHome" VALUE="d:\oracle\ias902\ifs" />
<Property NAME="SysadminUsername" VALUE="system" />
<Property NAME="SysadminPassword" VALUE="973dc46d050ca537" ENCRYPTED="TRUE" />
<Property NAME="IfsHome" VALUE="d:\oracle\ias902\ifs\cmsdk" />
<Property NAME="SchemaPassword" VALUE="daeffdd4f05cd456" ENCRYPTED="TRUE" />
- <CompositeMembership>
<MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef.nl" />
</CompositeMembership>
</Target>

The above file stores, amongst other things, the encrypted passwords that EM uses
for access to components. Search for oracle_portal, oracle_repserv etc. Although
encrypted, you can change these to a plain-text password as long as you flag it
ENCRYPTED="FALSE". This should only be done for specific bug problems as
recommended by Oracle support.
Do not change these passwords for any other reason!!

The following is a list of things to check when there appears to be a problem with
targets.xml.

1. Check the permissions on the active targets.xml file and restart all the
infrastructure components
(database, listener, oid, emctl in that order).
The targets.xml file should be owned by the user who installed 9iAS and who
starts emctl.
Accidentally starting emctl as root recreates the targets.xml under root
ownership.
Fix this by changing ownership on targets.xml and restarting emctl.

2. Check which targets are listed, to ensure there is information on each
   expected target.

3. Check whether the hosts file and targets.xml have matching hostnames,
and whether both have fully qualified hostnames.

4. What should be done if targets.xml is empty, or missing targets?

a. Restore targets.xml from backup

b. Copy $ORACLE_HOME/sysman/emd/discoveredTargets.xml to
$ORACLE_HOME/sysman/emd/targets.xml,
although it may not be complete if additional targets were installed following
installation.

See EM Website has no Entries for the 9iAS Instances 226226.1
and EM Web Site Fails to Display Application Servers 210552.1
and Login as ias_admin to 9iAS R2 Enterprise Manager, A Blank List is
Displayed for Targets 209540.1

c. Check the amount of disk space available. See Bug 2508930 - TARGETS.XML IS
EMPTY IF WE HAVE NO DISK SPACE.

d. Reinstall. See De-Installing 9iAS Release 2 (9.0.2) From Unix Platforms 218277.1

5. Is there an Infrastructure and Mid-Tier install on the system?

When installing both the infrastructure and a mid-tier on the same server (in
different homes),
the installation of the infrastructure creates the emtab file pointing to its
own home.
During installation of the mid-tier, the mid-tier installation routine uses
the emtab file
pointing to the infrastructure home so it knows where to write configuration
information
required for the infrastructure EM Website, so it can see not only information
concerning itself
but also information related to the mid-tier.

If the emtab file is removed/renamed after installation of the
infrastructure but before installation
of the mid-tier, a new emtab file is created pointing to the mid-tier home.
The configuration file
routines of the mid-tier installation therefore do not know about the
existence of the infrastructure
and write the new configuration information into files in its own home and not
into the files
in the infrastructure home.

In addition to entries in the targets.xml in the infrastructure home, other files
such as the
ias.properties file in the infrastructure home are also updated with information
concerning the mid-tier.
Merging the targets.xml file from both homes may solve some of the display
problems,
though they may not solve control of component issues due to incomplete
configuration files
in the infrastructure home.

References to renaming the emtab file should be disregarded when performing
infrastructure/mid-tier
installs on the same server, and may have in fact been specific to certain
platforms and specific
for certain circumstances.

The EM Web Site is launched as a J2EE application. The configuration files consist
of many XML files
and properties files. Here are some of those files:

targets.xml
emd.properties
logging.properties
iasadmin.properties

2. Cleanly Restarting OID After A 9iAS 9.0.2 Crash:
===================================================

A problem that often seems to happen when Oracle 9iAS 9.0.2 crashes is that you
can't seem
to restart OID using OIDCTL.

For example, a situation might arise when a server is bounced without 9iAS being
shut down cleanly.
When you reboot the PC, and use DCMCTL to check the status of the OC4J instances
prior to starting them,
you get the following error message:

C:\ocs_onebox\infra\dcm\bin>dcmctl getState -V

ADMN-202026
A problem has occurred accessing the Oracle9iAS infrastructure database.
Base Exception:
oracle.ias.repository.schema.SchemaException:Unable to connect to Directory Server
:javax.naming.CommunicationException: markr.plusconsultancy.co.
uk:4032 [Root exception is java.net.ConnectException: Connection refused: connect]
Please, refer to the base exception for resolution, or call Oracle support.

Or, when you watch an iAS start script, at the point OID gets started, you will see

C:\ocs_onebox\infra\bin>oidctl server=oidldapd configset=0 instance=1 start

which should start up an OID instance. However, sometimes this fails to work and
you get the error message:

C:\ocs_onebox\infra\bin>oidctl server=oidldapd configset=0 instance=1 start


*** Instance Number already in use. ***
*** Please try a different Instance number. ***

oidmon is the 'monitor' process. It polls the database (table ODS.ODS_PROCESS)
for new ldap server launch requests (placed there by oidctl as user ODSCOMMON),
and if it finds one, it starts a 'dispatcher/listener' process. As such, oidctl
does not actually start the ldap processes; oidmon spawns the 'dispatcher' and
'server' oidldapd processes.

What actually happens behind the scenes is that a row is inserted or updated in
the ODS.ODS_PROCESS table
that contains the instance name (which must be unique), the process ID, and a flag
called 'state',
which has four values - 0, 1, 2 and 3 - which stand for stop, start, running and
restart. A second process, OIDMON,
polls the ODS.ODS_PROCESS table and when it finds a row with state=0, it reads the
pid and stops the process.
When it finds a state=1, oidmon starts a new process and updates pid with a new
process id. With state=2,
oidmon reads the pid, and checks that the process with the same pid is running. If
it's not, oidmon starts
a new process and updates the pid. Lastly, with state=3, oidmon reads the pid,
stops the process, starts a new one
and updates the pid accordingly. If oidmon can't start the server for some reason,
it retries 10 times, and if
still unsuccessful, it deletes the row from the ODS.ODS_PROCESS table. Therefore,
OIDCTL only inserts or updates
state information, and OIDMON reads rows from ODS.ODS_PROCESS, and performs
specified tasks based on the value of
the state column.
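
A minimal sketch for inspecting these launch requests from SQL*Plus, assuming
the columns are named instance, pid and state as described above:

SQL> select instance, pid, state from ODS.ODS_PROCESS;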

This all works fine except when 9iAS crashes; when this happens, OIDMON exits but
the OIDLDAPD processes are
not killed, and in addition, stray rows are often left in the ODS.ODS_PROCESS
table that are detected when you try
to restart the oidldapd instance after a reboot.

The way to properly deal with this is to take two steps.

1. Kill any stray OIDLDAPD processes still running (if you haven't rebooted the
server since the crash)
2. Delete any rows in the ODS.ODS_PROCESS table

connect to the IASDB database as the ODS user, or as SYSTEM

select * from ODS.ODS_PROCESS;   (there should be at least one row)
delete from ODS.ODS_PROCESS;
commit;

3. Restart the OID instance again, using

C:\ocs_onebox\infra\bin>oidctl server=oidldapd configset=0 instance=1 start


OID uses the configfile:

$INFRA_ORACLE_HOME/network/admin/ldap.ora

Sample:

# LDAP.ORA Network Configuration File: d:\oracle\infra902\network\admin\ldap.ora


# Generated by Oracle configuration tools.

DEFAULT_ADMIN_CONTEXT = ""

DIRECTORY_SERVERS= (missrv.miskm.mindef.nl:4032:4031)

DIRECTORY_SERVER_TYPE = OID
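
After the restart, a quick sanity check of the LDAP server can be done with the
ldapbind utility (a sketch; the host, port and password come from your own
ldap.ora and installation):

ldapbind -h missrv.miskm.mindef.nl -p 4032 -D "cn=orcladmin" -w <orcladmin_password>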

3. Deobfuscate Errors After Reboot, Crash, or Network Change.
=============================================================

This can occur under these scenarios:

* A reboot has just occurred for the first time after 9iAS was installed.
(And, you had to change the /etc/hosts file during installation) OR
* A system crash occurred, and trying to recover. The 9iAS installation
is placed on a machine with the same hostname and IP address as before the
crash occurred. OR
* Hardware changes have occurred to the machine. (ie, CPU, NIC) AND
* Everything was working under the current 9iAS configuration.
(A 9iAS configuration change causing this can be a different problem)

There are different times when this error can occur, but it basically occurs when
a change to the system has been done. This can be after a reboot or a crash, but
there is a difference on the machine before and after the occurrence.
It is usually a network configuration change that has caused the problem.

When you try to start the Oracle HTTP Server, the following error might appear in
the opmn logs:
"Syntax error on line 6 of OH/Apache/Apache/conf/mod_osso.conf: Unable to
deobfuscate the SSO server config file,
OH/Apache/Apache/conf/osso/osso.conf, error Bad padding pattern detected in the
last block."
Most of the Mid-Tier components will fail to connect to the Infrastructure, and
will give the following error:
"oracle.ias.repository.schema.SchemaException:Password could not be retrieved"

Possible solution:

1. Start Infrastructure DB
2. Start the Infrastructure OID
3. Include $ORACLE_HOME/lib in the LD_LIBRARY_PATH, SHLIB_PATH, or LIBPATH
environment variable,
depending on your platform.
   - For AIX:
     LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib64:$LIBPATH; export LIBPATH
   - For HPUX:
     SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:$SHLIB_PATH; export SHLIB_PATH
   - For Solaris, Linux and Tru64:
     LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH

4. Run the command to reset the iAS password. Please use the SAME password, as we
are not attempting
to change the password you enter when signing onto EM. That is done with the emctl
utility.
This command changes it internally, and we want to re-instate the current
obfuscated password:

resetiASpasswd.sh "cn=orcladmin" <orcladminpassword_given_before> <$ORACLE_HOME>

Note: There is a resetiASpasswd.bat on Windows, to be used the same way, just in
case these steps are followed on Windows. The above stated problem is specific to
UNIX, but there may be occasions to run through the same steps.

5. Use the showPassword utility to obtain the password for the orasso user.
Then, re-register the listener, being sure to add this information to the ossoreg
command in Step 6:
-schema orasso
-pass ReplaceWithPassword

6. Run the command to re-register mod_osso.

   * Make sure there are no spaces after the trailing '\'s.
     If on Windows, use all one line, without the "\".
   * Replace the uppercase items with proper values.
   * The following assumes the to-be registered HTTP server is on the mid-tier.
   * If on Windows, use "SYSTEM" instead of "root" for -u.

java -jar $ORACLE_HOME/sso/lib/ossoreg.jar \
  -host $INFRA_HOST \
  -port 1521 \
  -sid iasdb \
  -site_name MID_HOST:MID_PORT \
  -oracle_home_path $ORACLE_HOME \
  -success_url http://MID_HOST:MID_PORT/osso_login_success \
  -logout_url http://MID_HOST:MID_PORT/osso_logout_success \
  -cancel_url http://MID_HOST:MID_PORT/ \
  -home_url http://MID_HOST:MID_PORT/ \
  -config_mod_osso TRUE \
  -u root \
  -sso_server_version v1.2 \
  -schema orasso \
  -pass <ReplaceWithPassword>

NOTE: The following command will not work on 9iAS 9.0.2.0.x, unless a patched
dcm.jar
has previously been applied with a patch (or 9.0.2.1). Since this cannot be run on
previous versions,
just proceed to step 8.

7. Run following commands on the machine where the change occurred, (not the
associated Mid-Tiers):
a. Solaris
i. $ORACLE_HOME/dcm/bin/dcmctl resetHostInformation
ii. $ORACLE_HOME/bin/emctl set password <previous_password>
b. NT
i. Make sure the Oracle9iAS is stopped
ii. Edit %ORACLE_HOME%\sysman\j2ee\config\jazn-data.xml
iii. Search for "ias_admin"
iv. Replace obfuscated text between <credentials> and </credentials> with
"!<password>"
where "<password>" is the password.
Example: <credentials>!welcome1</credentials>
v. Save the file.

8. Continue starting 9iAS, as in Note 200475.1. The next step is:

% dcmctl start -ct ohs

This is what was originally failing. After successfully starting OHS, you may want
to take a backup of the deobfuscated information as described in Note 215955.1.

3. Not able to access the Middle Tier from EM Website.
======================================================

3.1
---

Thread Status: Active


From: Ishaq Baig <mailto:ishaq@alrabie.com>19-Nov-03 10:47
Subject: Enable to Access the Middle Tier Instance from EM Website

RDBMS Version: 8.1.7


Operating System and Version: WIN2K Service Pack3
Product (i.e., OAS, IAS, etc): IAS
Product Version: 9.0.2
JDK Version: 1.3.1.9
Error number:

Enable to Access the Middle Tier Instance from EM Website

Hi,
We have an 9IAS (9.0.2) Infrastructure and Middle Tier
instance running on ONE Box (Win2k),thing we fine until
while trying to implement the Single Sigon after
making the changes as instructed in Note:199072.1
we stopped the HTTP Server so that change could take
effect,but every since we have stopped the HTTP Server
we couldn't gain access to the Middle Instance from the
EM WEB SITE the page just hangs......on the other hand
the INFRASTRUCTURE instance is working fien.We even tried
starting the HTTP server through the DCM UTILITY the following was
the message

Content-Type: text/html
Response: 0 of 1 processes started.

Check opmn log files such as ipm.log and ons.log for detailed.".
Resolve the indicated problem at the Oracle9iAS instance where it occurred
thenresync the instance
Remote Execute Exception 806212
oracle.ias.sysmgmt.exception.ProcessMgmtException: OPMN operation failure
at oracle.ias.sysmgmt.clustermanagement.OpmnAgent.validateOperation(Unknown
Source)
at oracle.ias.sysmgmt.clustermanagement.OpmnAgent.startOHS(Unknown Source)
at oracle.ias.sysmgmt.clustermanagement.StartInput.execute(Unknown Source)
at oracle.ias.sysmgmt.clustermanagement.ClusterManager.execute(Unknown Source)
at oracle.ias.sysmgmt.task.ClusterManagementAdapter.execute(Unknown Source)
at oracle.ias.sysmgmt.task.TaskMaster.execute(Unknown Source)
at oracle.ias.sysmgmt.task.TaskMasterReceiver.process(Unknown Source)
at oracle.ias.sysmgmt.task.TaskMasterReceiver.handle(Unknown Source)
at oracle.ias.sysmgmt.task.TaskMasterReceiver.run(Unknown Source)

is Any Inputs highly appreciated,we need to get it up


as soon as possible.

Regards
Ishaq Baig

From: Oracle, Rhoderick Butial <mailto:rhoderick.butial@oracle.com> 19-Nov-03 14:36
Subject: Re : Enable to Access the Middle Tier Instance from EM Website

Hello,

What type of changes did you alter?


Did you try restarting all of the other components on the mid tier?

There should be some errors generated in the error_log file, please post these
errors
in your next reply. You may want to review the following notes:

Note.236112.1 Wrong user supplied to ossoreg causing ADMN-906025 exception, 806212

Note.223586.1 Starting Oracle HTTP Server gives ADMN-906025 error


Note.222051.1 Starting Oracle HTTP Server gives ADMN-906025 Error

Also, I noticed that you have listed your 9iAS version as 9.0.2, did you apply the
latest patchsets before implementing the changes?

If not, you will need to apply the patchsets first before making the changes.
Please review..

Note.215882.1 9iAS Release 2 Patching Recommendations Within the Version Lifecycle

Thank you,

Rod
Oracle Technical Support

3.2
---

Displayed below are the messages of the selected thread.


Thread Status: Closed
From: Ron Miller <mailto:ron.miller@tccd.edu>28-Oct-03 16:13
Subject: EM Website extremely slow for 9iAS

RDBMS Version:: 9.0.1.3.0


Operating System and Version:: AIX 4.3.3
Product (i.e. Trace, DB Diff, Expert, etc):: Oracle9i Application Server
Product Version:: 9.0.2.2.0
OEM Console Operating System and Version:: Windows 2000

EM Website extremely slow for 9iAS

When I use the EM website to access the components of my 9i App server, the
response time is very slow.
It takes 2 or 3 minutes to go from screen to another. I have found information on
this forum that others
are experiencing the same problem. The response from Oracle support has been that
this is a known problem
and there is a bug, 2756262, which is to be fixed in 9.0.4. However, I cannot find
any information on when
this release will be available. It seems to keep getting pushed back. Does anyone
know a release date?
Has anyone requested a backport of this fix to an earlier release? Thanks for any
response.

From: Oracle, Kathy Ting <mailto:Kathy.Ting@oracle.com>29-Oct-03 05:41


Subject: Re : EM Website extremely slow for 9iAS

The base architecture is being redesign. Due to the redesign, backports are not
being accepted.

Look for a much better improved EM website in future releases.

Thank you for using the MetaLink Forum,


Kathy
Oracle Support.

From: Ron Miller <mailto:ron.miller@tccd.edu>29-Oct-03 14:52


Subject: Re : Re : EM Website extremely slow for 9iAS

Thanks for the reply Kathy. I will look forward to the redesign since the current
product is pretty much useless.

From: Oracle, Kathy Ting <mailto:Kathy.Ting@oracle.com>29-Oct-03 22:04


Subject: Re : Re : Re : EM Website extremely slow for 9iAS

As do we.

Thank you for using the MetaLink Forum,


Kathy
Oracle Support.

4. Explanation of IAS_ADMIN and ORCLADMIN Accounts
==================================================

Note:244161.1
Subject: Explanation of IAS_ADMIN and ORCLADMIN Accounts
Type: BULLETIN
Status: PUBLISHED

PURPOSE
-------

To provide an explanation for the IAS_ADMIN and ORCLADMIN accounts that are
established with Oracle9i Application Server (9iAS) Release 2 (9.0.2.x).

SCOPE & APPLICATION
-------------------

Website Administrators installing and maintaining 9iAS

Explanation of IAS_ADMIN and ORCLADMIN Accounts
------------------------------------------------

There are two users that can create some confusion: ias_admin and orcladmin.
However, the interaction is more or less internally managed. You log into the EM
Website with ias_admin, but use the orcladmin password after initially installing
9iAS. So when changing the orcladmin password, you may not get the results
intended with the ias_admin login.

But, if the obfuscation gets skewed, we found you sometimes need to reinstate
the password obfuscation between the two with the resetiASpasswd script. This
assumes the same password is used, and no resulting changes are noted. The
*change* occurred internally. These changes, and methods, can cause some
confusion.

You can actually change the EM Website login separately with the emctl utility.
Or, change the orcladmin username separately, depending on how you want
to manage this.

IAS_ADMIN Account
-----------------

In EM 9.0.2 and 9.0.3, you will need to use the IAS_ADMIN account to access the
EM Website Home Page. This account is not known within the database or to the
Oracle Management Server. Instead, it is a new account used only for access to
the 9iAS Administration (EM) Web Site. The following note can be used to
supplement the Documentation and Release Notes dealing with modifying this
password:

[NOTE:204182.1]
<http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=204182.
1&p_database_id=NOT>
How to Change the IAS_ADMIN password for Enterprise Manager

NOTE:
If you change the IAS_ADMIN password (as described in Note:204182.1),
the ORCLADMIN password does not change.

ORCLADMIN Account
-----------------

ORCLADMIN is used as a superuser account for administering 9iAS. During the
initial installation of 9iAS, the installer prompts you to create the IAS_ADMIN
password. This password is then also assigned to the ORCLADMIN account.

To reset (not change) the ORCLADMIN password, you must run the script,
ResetiASpasswd.sh.

$ORACLE_HOME/bin/resetiASpasswd.sh "cn=orcladmin" <orcladminpassword_given_before> <$ORACLE_HOME>

Note:
There is a resetiASpasswd.bat on Windows, to be used the same way.

If you suspect that the encryption is skewed, use the SAME password, to *reset*
this. If you desire to change the password you enter when signing onto EM, use
the emctl utility, (as described in Note:204182.1).

If you actually want to change the ORCLADMIN password, you should use the
Oracle Directory Manager, to modify this super user.

- Start the Directory Manager from $ORACLE_HOME/bin/oidadmin

- In the navigator pane, expand Oracle Internet Directory Servers.

- Select a server. The group of tab pages for that server appear in the right
pane.

- Select the System Passwords tab. This page displays the current user names
and passwords for each type of user. Note that passwords are not displayed
in the password fields.

SUMMARY
-------

Is the goal to reset the internally encrypted ias_admin password, change the
actual orcladmin password, or just change the password when logging onto EM?
That's the main question to ask.

1.
To reset the internally encrypted ias_admin password, use the resetiASpasswd
script, and use the same password as previously given.

2.
To change the orcladmin password, it is best to use the Oracle Directory Manager.
Please see the Oracle Internet Directory Administrator's Guide for more
information.

3.
Change the EM website or emctl password:
Within the EM Web Site...Preferences link...top right-hand side of the screen.
Or, on the command line, using emctl.

RELATED DOCUMENTS
-----------------

[NOTE:234712.1]
<http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=234712.
1&p_database_id=NOT> Managing Schemas of the 9iAS Release 2 Metadata Repository

[NOTE:253149.1]
<http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=253149.
1&p_database_id=NOT> Resetting the Single Sign-On password for ORCLADMIN

5. Password for ORASSO Database Schema
======================================

Password for ORASSO Database Schema


goal: What is the password for the ORASSO database schema?
fact: Oracle9i Application Server Enterprise Edition 9.0.2
fact: Oracle9iAS Single Sign-On 9.0.2

fix: During installation a random password is generated for the ORASSO database
schema.
You need to look up this password in the Oracle Internet Directory.
The following text is taken from the Interoperability Patch Readme
(a patch that was mandatory for 9.0.2.0.0 but is no longer needed for 9.0.2.0.1):

If you do not know the password for the orasso schema, you can use the following
procedure to determine the password.

Note: Do not use the "alter user" SQL command to change
the orasso password.
If you need to change the orasso password, use Enterprise Manager so that it can
propagate the password
to all components that need to access orasso.

Start up the Oracle Internet Directory administration tool from the infrastructure
machine.

prompt> $ORACLE_HOME/bin/oidadmin

Log into the oidadmin tool using the OID administrator account (cn=orcladmin) for
the Infrastructure installation.
Username: cn=orcladmin
Password: administrator_password
Server : host running Oracle Internet Directory and port number where Oracle
Internet Directory
is listening
The administrator password is the same as the ias_admin password.
The default port for Oracle Internet Directory is 389 (without SSL).
Navigate the Single Sign-On schema (orasso) entry using the administration tool.

> cn=orcladmin@OID_hostname:OID_port (for example: cn=orcladmin@infra.acme.com:389)
> Entry Management
> cn=OracleContext
> cn=Products
> cn=IAS
> cn=Infrastructure Databases
> orclReferenceName=Single Sign-On database SID:Single Sign-On Server hostname
(for example: orclReferenceName=iasdb:infra.acme.com)
> orclResourceName=ORASSO

Click the above entry and look for the orclpasswordattribute attribute value
on the right panel. This value is the password for the orasso schema.
NOTE: If you have multiple Infrastructures installed using one Oracle Internet
Directory,
ensure that you are looking at the correct Single Sign-On database entry, since all
the infrastructure instances would have an ORASSO schema entry, but only one of
them is actually being used.
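
On Unix, the same lookup can be done directly with ldapsearch. The following is
a sketch based on the Windows script in the next section; replace the host,
port and orcladmin password with your own values:

ldapsearch -h oidhost -p 4032 -D "cn=orcladmin" -w <orcladmin_password> \
  -b "cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" \
  -s sub "orclResourceName=ORASSO" orclpasswordattribute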

6. Windows Script to Determine orasso Password in 9iAS Release 2 (9.0.2)
========================================================================

Note:205984.1
Subject: Windows Script to Determine orasso Password in 9iAS Release 2 (9.0.2)
Type: BULLETIN
Status: PUBLISHED

PURPOSE
-------

The showPassword utility was developed to avoid having to use the oidadmin
tool to look up various OID passwords, by using ldapsearch with Oracle9i
Application Server (9iAS) Release 2 (9.0.2).

As a script, varying on different environments, it is not supported by Oracle
Support Services. It is intended as an example, to aid in the understanding
of the product.

SCOPE & APPLICATION
-------------------

9iAS Administrators and Windows Administrators

Windows Script to Determine orasso Password in 9iAS Release 2 (9.0.2)
---------------------------------------------------------------------

1. Paste the following script in a file named showPassword.bat and copy it into
   a directory. Please also ensure that ldapsearch is in the PATH on your
   Windows machine.

8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<

set OIDHOST=bldel18.in.oracle.com
set OIDPORT=4032
if "%1"== "" goto cont
if "%2"== "" goto cont
ldapsearch -h %OIDHOST% -p %OIDPORT% -D "cn=orcladmin" -w "%1" -b "cn=IAS
Infrastructure
Databases,cn=IAS,cn=Products,cn=OracleContext" -s sub "orclResourceName=%2"
orclpasswordattribute
goto :end
:cont
echo Correct Syntax is
echo showpassword.bat orcladminpassword username
:end

8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<

Note that the "ldapsearch...orclpasswordattribute" commands should be put on
one line.

2. Edit the script and update with your own hostname and OID port
OIDHOST=bldel18.in.oracle.com
OIDPORT=4032

3. Ensure that you have ldapsearch from the correct ORACLE_HOME in the PATH

4. Check that OID is up and running before proceeding.

5. Run the script, and enter the schema name as: orasso, and the password value
is shown.

For example:
(all ONE line...may be easier to copy/paste from Notepad)

C:\> showPassword.bat oracle1 orasso


OrclResourceName=ORASSO,orclReferenceName=iasdb.bldel18.in.oracle.com,cn=IAS Inf
rastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=Gbn3Fd24

The orasso password in this example is Gbn3Fd24.

7. STARTING AND STOPPING 9iAS WITH SCRIPTS.
===========================================

----------------------------------------------------------------
7.1 From metalink:

a) StartInfrastructure.bat:
REM ####################################################
REM ####################################################
REM ## Script to start Infrastructure ##
REM ## ##
REM ####################################################
REM ####################################################
REM ##
REM ## Set environment variables for Infrastructure
REM ####################################################
set ORACLE_HOME=D:\IAS90201I
set ORACLE_SID=IASDB
set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin;%PATH%;
REM #####################################################
REM ## Start Oracle Internet Directory processes
REM #####################################################
echo .....Starting %ORACLE_HOME% Internet Directory ......
oidmon start
oidctl server=oidldapd instance=1 start
timeout 20
REM #####################################################
REM ## Start Oracle HTTP Server and OC4J processes
REM #####################################################
echo .....Starting OHS and OC4J processes.......
call dcmctl start -ct ohs
call dcmctl start -ct oc4j
REM #####################################################
REM ## Check OHS and OC4J processes are running
REM #####################################################
echo .....Checking OHS and OC4J status.....
call dcmctl getstate -v
pause

REM ####################################################
b) StartMidTier.bat:
REM ####################################################
REM ####################################################
REM ## Script to start MidTier ##
REM ## ##
REM ####################################################
REM ####################################################
REM ##
REM ## Set environment variables for Midtier
REM ####################################################
set ORACLE_HOME=D:\IAS90201J
set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin;%PATH%;
REM #####################################################
REM ## Start Oracle HTTP Server and OC4J processes
REM #####################################################
echo .....Starting OHS and OC4J processes.......
call dcmctl start -ct ohs
call dcmctl start -ct oc4j
REM #####################################################
REM ## Check OHS and OC4J processes are running
REM #####################################################
echo .....Checking OHS and OC4J status.....
call dcmctl getstate -v
REM ####################################################
REM ## Start Webcache
REM ####################################################
echo .....Starting Webcache..........
webcachectl start
REM ####################################################
REM ## Start Enterprise Manager Website
REM ####################################################
echo .....Starting EM Website.....
net start Oracleias90201iEMWebsite
echo ....Done
pause
REM ####################################################
c) StopMidTier.bat:
REM ####################################################
REM ####################################################
REM ## Script to stop Midtier ##
REM ## ##
REM ####################################################
REM ####################################################
REM ##
REM ## Set environment variables for Midtier
REM ####################################################
set ORACLE_HOME=D:\IAS90201J
set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin;%PATH%;
REM ####################################################
REM ## Stop Enterprise Manager Website
REM ####################################################
echo .....Stopping EM Website.....
net stop Oracleias90201iEMWebsite
REM ####################################################
REM ## Stop Webcache
REM ####################################################
echo .....Stopping %ORACLE_HOME% Webcache..........
webcachectl stop
REM ####################################################
REM ## Stop Oracle HTTP Server and OC4J processes
REM ####################################################
echo .....Stopping %ORACLE_HOME% OHS and OC4J........
dcmctl shutdown
echo ....Done
pause
REM ####################################################
d)StopInfrastructure.bat:
REM ####################################################
REM ####################################################
REM ## Script to stop Infrastructure ##
REM ## ##
REM ####################################################
REM ####################################################
REM ##
REM ## Set environment variables for Infrastructure
REM ####################################################
set ORACLE_HOME=D:\IAS90201I
set ORACLE_SID=IASDB
set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin;%PATH%;
set EM_ADMIN_PWD=<your_pwd>
REM ####################################################
REM ## Stop Enterprise Manager Website
REM ####################################################
echo .....Stopping EM Website.....
call emctl stop
REM ####################################################
REM ## Stop Oracle HTTP Server and OC4J processes
REM ####################################################
echo .....Stopping %ORACLE_HOME% OHS and OC4J........
call dcmctl shutdown
REM #####################################################
REM ## Stop Oracle Internet Directory processes
REM #####################################################
echo .....Stopping %ORACLE_HOME% Internet Directory ......
oidctl server=oidldapd configset=0 instance=1 stop
timeout 20
oidmon stop
echo ....Done
pause
REM #####################################################

----------------------------------------------------------------
7.2 Our scripts:

Starting:
=========

@ECHO OFF
TITLE Startup all
REM **********************************************************
REM Adjust the following values
set ORACLE_BASE=D:\oracle
set IAS_HOME=%ORACLE_BASE%\ias902
set IAS_BIN=%IAS_HOME%\bin
set INFRA_HOME=%ORACLE_BASE%\infra902
set INFRA_BIN=%INFRA_HOME%\bin
REM **********************************************************

echo **********************************************************
echo Parameters used are:
echo ORACLE_BASE = %ORACLE_BASE%
echo IAS_HOME = %IAS_HOME%
echo IAS_BIN = %IAS_BIN%
echo INFRA_HOME = %INFRA_HOME%
echo INFRA_BIN = %INFRA_BIN%
echo **********************************************************

echo **********************************************************
echo "Starting up infra"
echo **********************************************************

echo "Starting iasdb instance"


echo connect sys/change_on_install as sysdba > $$tmp$$
echo startup >> $$tmp$$
echo exit >> $$tmp$$
%INFRA_BIN%\sqlplus /nolog < $$tmp$$
del $$tmp$$

echo "Starting Oracle Internet Directory..."


%INFRA_BIN%\oidmon start
%INFRA_BIN%\oidctl server=oidldapd instance=1 start
timeout 10

echo "Starting Enterprise manager Services..."


net start Oracleinfra902EMWebsite

echo "Starting OEM ..."


net start Oracleinfra902ManagementServer

rem net start Oracleinfra902TNSListener


net start Oracleinfra902Agent
echo "Starting up infra services..."
%INFRA_HOME%\opmn\bin\opmnctl startall

echo **********************************************************
echo "Done kickin' up infra!"
echo **********************************************************
echo.

echo **********************************************************
echo "Starting all mid tier services..."
echo **********************************************************

%IAS_HOME%\opmn\bin\opmnctl startall

echo "Starting webcache..."


%IAS_BIN%\webcachectl start

echo "Starting all services..."


net start Oracleias902Discoverer
rem net start Oracleias902ProcessManager
rem net start Oracleias902WebCacheAdmin
rem net start Oracleias902WebCache

echo **********************************************************
echo "Done starting up mid tier!"
echo **********************************************************

pause

Stopping:
=========

@ECHO OFF
TITLE Shutdown all
REM **********************************************************
REM Adjust the following values
set ORACLE_BASE=D:\oracle
set IAS_HOME=%ORACLE_BASE%\ias902
set IAS_BIN=%IAS_HOME%\bin
set INFRA_HOME=%ORACLE_BASE%\infra902
set INFRA_BIN=%INFRA_HOME%\bin
REM **********************************************************

echo **********************************************************
echo Parameters used are:
echo ORACLE_BASE = %ORACLE_BASE%
echo IAS_HOME = %IAS_HOME%
echo IAS_BIN = %IAS_BIN%
echo INFRA_HOME = %INFRA_HOME%
echo INFRA_BIN = %INFRA_BIN%
echo **********************************************************

echo **********************************************************
echo "Shutting down mid tier..."
echo **********************************************************
echo "Stopping all mid tier services..."
%IAS_HOME%\opmn\bin\opmnctl stopall

echo "Stopping webcache..."


%IAS_BIN%\webcachectl stop

echo "Stopping Discoverer service..."


net stop Oracleias902Discoverer

echo "Sanity stops for WebCache"


net stop Oracleias902WebCache
net stop Oracleias902WebCacheAdmin

echo **********************************************************
echo "Done shutting down mid tier!"
echo **********************************************************
echo.
echo **********************************************************
echo "Shutting down Infrastructure..."
echo **********************************************************

echo "Stopping Enterprise Manager Website"


call %INFRA_BIN%\emctl stop welcome1

echo "Stopping Enterprise Manager Management Console..."


call %INFRA_BIN%\oemctl stop oms sysman/sysman

echo "Stopping Infra Services..."


%INFRA_HOME%\opmn\bin\opmnctl stopall

echo "Stopping Oracle Internet Directory..."


%INFRA_BIN%\oidctl server=oidldapd instance=1 stop
timeout 10
%INFRA_BIN%\oidmon stop

echo "Stopping infra database..."


echo connect sys/change_on_install as sysdba > $$tmp$$
echo shutdown immediate >> $$tmp$$
echo exit >> $$tmp$$
%INFRA_BIN%\sqlplus /nolog < $$tmp$$
del $$tmp$$

echo "Stopping all Remaining NT Services..."


rem net stop Oracleinfra902TNSListener
net stop Oracleinfra902Agent

echo **********************************************************
echo "Done shutting down infra!"
echo **********************************************************

pause

Starting BI:
============

@echo off
title Starting Oracle Reports
rem ********************************************************************
set IAS_HOME=d:\oracle\ias902
set IAS_BIN=%IAS_HOME%\bin
rem ********************************************************************

echo ********************************************************************
echo Parameters used:
echo.
echo IAS_HOME = %IAS_HOME%
echo IAS_BIN = %IAS_BIN%
echo ********************************************************************
echo.
echo ********************************************************************
echo Bringing up OC4J_BI_Forms (Business Intelligence/Forms)
echo ********************************************************************
call %IAS_HOME%\dcm\bin\dcmctl start -co OC4J_BI_Forms -v
timeout 5

echo Check to see if the instance really started up:


echo.
call %IAS_HOME%\dcm\bin\dcmctl getReturnStatus
echo Done starting up OC4J_BI_FORMS...

pause

Starting CMSDK:
===============

@echo off
title Starting Oracle CM SDK 9.0.3.1.
rem ********************************************************************
set IAS_HOME=d:\oracle\ias902
set IAS_BIN=%IAS_HOME%\bin
rem ********************************************************************

echo ********************************************************************
echo Parameters used:
echo.
echo IAS_HOME = %IAS_HOME%
echo IAS_BIN = %IAS_BIN%
echo ********************************************************************
echo.
echo ********************************************************************
echo Bringing up Domain Controller, note default password is: ifsdp
echo ********************************************************************
call %IAS_HOME%\ifs\cmsdk\bin\ifsctl start
echo Done bringing up Domain Controller
echo.
echo ********************************************************************
echo Bringing up OC4J Instance...
echo ********************************************************************
call %IAS_HOME%\dcm\bin\dcmctl start -co OC4J_iFS_cmsdk -v
timeout 5

echo Check to see if the instance really started up:


echo.
call %IAS_HOME%\dcm\bin\dcmctl getReturnStatus
echo Done starting up OC4J Instance.
echo Done starting up CM SDK.

pause

8. Warning: Stop EMD Before Using DCMCTL Utility.
=================================================

Note:207208.1
Subject: Warning: Stop EMD Before Using DCMCTL Utility
Type: BULLETIN
Status: PUBLISHED

PURPOSE
-------

Issue a warning for the use of the dcmctl utility when administering the Oracle9i
Application Server (9iAS) Release 2 (9.0.2.0.x). There is now a Patch available
which
resolves the issue of running DCM and EM at the same time.

SCOPE & APPLICATION
-------------------

This article is intended for 9iAS Administrators. It gives a general description
of a problem that can occur when dcmctl is used without precautions.

DCMCTL RESTRICTIONS
-------------------

1.
Do not use dcmctl while EMD (Enterprise Manager Console/Website) is running.

The dcmctl utility is issuing DCM commands to control the state of components
in 9iAS. The same is done from the EMD, which is generally reachable at the
following URLs:

http://yourserver:1810/emd/console
http://yourserver:1810/

When the dcmctl utility is used while EMD is running, this may cause out-of-sync
problems with your 9iAS instance. This is caused by only one DCM daemon being
available to 'listen' to requests.

How to Avoid Problems
---------------------

Stop EMD:
$ emctl stop
Issue your command with dcmctl
When you are done, restart EMD:
$ emctl start

2.
If an Infrastructure and Mid-Tier(s) are installed on the same server, EM must be
stopped when issuing dcmctl from either the Infrastructure or the Mid-tier
directories.
This is because EM is common to all 9iAS instances on the server. Stopping multiple
instances of EM across multiple servers is not necessary. The DCM/EM concurrency
conflict will only come into play with instances on a given machine.

3.
Do not issue multiple DCM commands at once, and do not issue a DCM command
while one might still be running.

4.
If you start a component with DCM, it is recommended to also stop it with DCM.
If you start a component with the EM Website, it is recommended stop it with
the EM Website.

SOLUTION
--------

If out-of-sync errors occur because of EM being up while using dcmctl, then a
reinstall may be necessary. Please apply the following patches in order to
prevent this concurrency problem from happening inadvertently:

Patch 2542920 : 9iAS 9.0.2.1 Core Patchset
Patch 2591631 : DCM/EM Concurrency Fix

* The 9.0.2.1 Patchset is a pre-requisite of the DCM Patch.
* Both patches should be applied to all associated 9iAS Tiers.
* Please refer to the readme for important information.
* Future releases (9.0.2.2+) will have this fix included.

9. MISCELLANEOUS:
=================

9.1 Change of hostname:
-----------------------

If you change the HOSTNAME for the repository (infrastructure) database,


then you need to update the ssoServerMachineName property for the oracle SSO
target
in INFRA_ORACLE_HOME/sysman/emd/targets.xml
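
For illustration, a sketch of the relevant entry (the target type/name and the
host value are hypothetical; the <Property NAME=... VALUE=.../> form is the
standard targets.xml convention):

<Target TYPE="oracle_sso_server" NAME="...">
  ...
  <!-- update VALUE to the new hostname -->
  <Property NAME="ssoServerMachineName" VALUE="newhost.example.com"/>
  ...
</Target>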

If you change the PORT for the repository database, Discoverer is affected -
update the port for discodemo in tnsnames.ora.

9.2 Files with IP in the name:
------------------------------
9.3 ldapcheck and ldapsearch examples:
--------------------------------------

List users and or passwords: use ldapcheck and ldapsearch

Example 1:
----------

ldapsearch -h uks799 -p 4032 -D "cn=orcladmin" -w your_ias_or_oid_password \
  -b "cn=Users,dc=uk,dc=oracle,dc=com" -s sub -v "objectclass=*"

rem showpassword.bat - look up the password of a schema registered in OID
rem usage: showpassword.bat orcladminpassword username
set OIDHOST=bldel18.in.oracle.com
set OIDPORT=4032
if "%1"=="" goto cont
if "%2"=="" goto cont
ldapsearch -h %OIDHOST% -p %OIDPORT% -D "cn=orcladmin" -w "%1" -b "cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" -s sub "orclResourceName=%2" orclpasswordattribute
goto :end
:cont
echo Correct Syntax is
echo showpassword.bat orcladminpassword username
:end

C:\> showPassword.bat oracle1 orasso

OrclResourceName=ORASSO,orclReferenceName=iasdb.bldel18.in.oracle.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=Gbn3Fd24

The orasso password in this example is Gbn3Fd24.

Example 2:
----------

9.4 dcmctl commands:
--------------------

On a simple 9iAS webcache/j2ee installation, you might try the following command:

F:\oracle\ias902\dcm\bin>dcmctl getstate -V

Current State for Instance:ias902dev.localhost

Component Type Up Status In Sync Status

===========================================================================
1 home oc4j Up True
2 HTTP Server ohs Up True
3 OC4J_Demos oc4j Up True
4 OC4J_iFS_cmsdk oc4j Up True

dcmctl getstate -ct ohs       - show status of ohs of the current instance ONLY.
dcmctl updateConfig           - attempt to update DCM's view of the world after
                                a manual configuration change.
dcmctl getstate -v            - determine which components aren't starting.
dcmctl resyncInstance -force  - force a resync of the instance.

9.5 Fault tolerance:
====================

217368.1 from Metalink - "Advanced Configurations and Topologies for Enterprise
Deployments of E-Business"

Hot site Oracle disaster recovery configuration
Oracle failover with Oracle standby database
Oracle failover with Oracle9i Dataguard
Oracle failover with Oracle9i TAF (Transparent Application Failover)
Oracle failover with Oracle9i Real Application Clusters (RAC)

|----------------------------------|
|Machine A |
| |
| |-----------------------------| |
| |Instance A | |
| | - Cluster manager | |
| | - Distributed Lock Manager | |
| | - OS Shared Disk Driver | |--------------
| ----------------------------- | |
|----------------------------------| |
| |
| interconnect ------------
| | Shared |
|----------------------------------| | Disk |
|Machine B | | Subsystem|
| | ------------
| |-----------------------------| | |
| |Instance B | | |
| | - Cluster manager | | |
| | - Distributed Lock Manager | | |
| | - OS Shared Disk Driver | |---------------
| ----------------------------- |
|----------------------------------|

Note 1:
-------
Local Clustering Definition
A local cluster is defined as two or more physical machines (nodes) that share
common disk storage and a logical IP address. Clustered nodes exchange cluster
information over heartbeat link(s). The cluster software collects information
and checks the situation on both nodes. On an error condition, the software will
execute a predefined script and switch the clustered services over to a
secondary machine. The Oracle instance, as one of the clustered services, will
be shut down together with the listener process, and restarted on the secondary
(surviving) node.

HA Oracle Agent
The HA Oracle Agent software controls Oracle database activity on Sun Cluster
nodes. The agent performs fault checking using two processes on the local node
and two processes on the remote node, querying the V$SYSSTAT view for active
sessions. If the database has no active sessions, the HA Agent will open a test
transaction (connect and serially execute CREATE, INSERT, UPDATE, and DROP TABLE
commands). Return codes from the HA Agent are validated against a special action
file at the following location:
/etc/opt/SUNWscor/haoracle_config_V1:

# Action file for HA-DBMS Oracle fault monitor
# State DBMS_er proc_di log_msg timeout int_err new_sta action   message
# ----- ------- ------- ------- ------- ------- ------- -------- --------------
co      *       *       *       *       1       *       stop     Internal HA-DBMS Oracle error connecting to db
on      28      *       *       *       *       di      none     Session killed by DBA, will reconnect
*       50      *       *       *       *       di      takeover O/S error occurred while obtaining an enqueue
co      0       *       *       1       0       *       restart  A timeout has occurred during connect

Takeover - cluster software will switch to another node.

Stop - cluster will stop DBMS

None - no action taken

Restart - database restarted locally on the same node

The HA Oracle Agent requires the Oracle configuration files (listener.ora,
oratab and tnsnames.ora) in a unique predefined location, /var/opt/oracle.

Note 2:
-------

You Asked


If I want to use Oracle Fail Safe and Dataguard do the servers have to be
clustered? Right now I have a primary database on one server and a separate
server for the logical standby database. I want automatic failover, but it
looks like Oracle Fail Safe requires clustered servers.

The DATAGUARD manual mentions that you can use ORACLE FAIL SAFE on the windows
platform, but the ORACLE FAIL SAFE documentation doesn't say squat about
DATAGUARD or how to configure for it. Is there any documentation of this
subject that you can refer me to?

and we said...

Fail Safe is a clustering solution.

The two (data guard & failsafe) are complementary but somewhat orthogonal here.

Failsafe is designed to keep the single database up and available -- in a single
data center. As long as that room exists -- failsafe keeps the database up.

data guard is a disaster recovery solution. It is for when the room the data
center is in "goes away" for whatever reason.

Data guard wants the machines to be independent (no clusters) of each other and
separated by some geographic distance.

Failsafe, like 9i RAC, wants the machines to be tethered together - sitting
right next to each other in a cluster.

Failsafe is HA (high availability)
Data guard is DR (disaster recovery)

Failsafe will give you automated failover. As long as the data center exists,
that database is up.

With data guard -- you do not WANT automated failover (many *think* they do but
you don't). Do you really want your DR solution to kick in due to a WAN
failure? No, not really. For DR to take over, you want a human to say "yup,
data center burnt to the ground, let's head for the mountains". You do not want
the DR site to kick in because "it thinks the primary site is gone" -- you need
to tell it "the primary site is gone". In a cluster -- the machines are very
aware of each other and automated failover is "safe".

So, data guard's reference to failsafe is incidental.
That failsafe doesn't talk about data guard is of no real consequence.

They are independent feature/functions.

Note 3: terms:
--------------

Note 4:
-------

FAQ RAC:
Real Application Clusters
General RAC
Is it supported to install CRS and RAC as different users. (09-SEP-04)
I have changed my spfile with alter system set <parameter_name> =....
scope=spfile. The spfile is on
ASM storage and the database will not start. (18-APR-04)
Is it difficult to transition from Single Instance to RAC? (18-JUL-05)
What are the dependencies between OCFS and ASM in Oracle10g ? (05-MAY-05)
What software is necessary for RAC? Does it have a separate installation CD to
order? (05-MAY-05)
Do we have to have Oracle RDBMS on all nodes? (02-APR-04)
What kind of HW components do you recommend for the interconnect? (02-APR-04)
Is rcp and/or rsh required for normal RAC operation ? (06-NOV-03)
Are there any suggested roadmaps for implementing a new RAC installation? (26-NOV-
02)
What is Cache Fusion and how does this affect applications? (26-NOV-02)
Can I use iSCSI storage with my RAC cluster? (13-JUL-05)
Can I use RAC in a distributed transaction processing environment? (16-JUN-05)
Is it a good idea to add anti-virus software to my RAC cluster? (31-JAN-05)
When configuring the NIC cards and switch for a GigE Interconnect should it be set
to FULL or Half duplex in RAC? (05-NOV-04)
What would you recommend to a customer, Oracle clusterware or Vendor Clusterware
(i.e. MC Service Guard, HACMP, Sun Cluster, Veritas etc.) with Oracle Database 10g
Real Application Clusters? (21-OCT-04)
What is Standard Edition RAC? (01-SEP-04)
High Availability
If I use Services with Oracle Database 10g, do I still need to set up Load
Balancing ? (16-JUN-05)
Why do we have a Virtual IP (VIP) in 10g? Why does it just return a dead
connection when its primary node fails? (12-MAR-04)
I am receiving an ORA-29740 error. What should I do? (02-DEC-02)
Can RMAN backup Real Application Cluster databases? (26-NOV-02)
What is Server-side Transparent Application Failover (TAF) and how do I use it?
(07-JUL-05)
What is CLB_GOAL and how should I set it? (16-JUN-05)
Can I use TAF and FAN/FCF? (16-JUN-05)
What clients provide integration with FAN and FCF? (28-APR-05)
What are my options for load balancing with RAC? Why do I get an uneven number of
connections on my instances? (15-MAR-05)
Can our 10g VIP fail over from NIC to NIC as well as from node to node ? (10-DEC-
04)
Can I use ASM as mechanism to mirror the data in an Extended RAC cluster? (18-OCT-
04)
What does the Virtual IP service do? I understand it is for failover but do we
need a separate network card? Can we use the existing private/public cards? What
would happen if we used the public ip? (15-MAR-04)
What do the VIP resources do once they detect a node has failed/gone down? Are the
VIPs automatically acquired, and published, or is manual intervention required?
Are VIPs mandatory? (15-MAR-04)
Scalability
I am seeing the wait events 'ges remote message', 'gcs remote message', and/or
'gcs for action'. What should I do about these? (02-APR-04)
What are the changes in memory requirements from moving from single instance to
RAC? (02-DEC-02)
What is the Load Balancing Advisory? (16-JUN-05)
What is Runtime Connection Load Balancing? (16-JUN-05)
How do I enable the load balancing advisory? (16-JUN-05)
Manageability
How do I stop the GSD? (22-MAR-04)
How should I deal with space management? Do I need to set free lists and free list
groups? (16-JUN-03)
I was installing RAC and my Oracle files did not get copied to the remote node(s).
What went wrong? (26-NOV-02)
What is the Cluster Verification Utility (cluvfy)? (16-JUN-05)
What versions of the database can I use the cluster verification utility (cluvfy)
with? (16-JUN-05)
What are the implications of using srvctl disable for an instance in my RAC
cluster? I want to have it available to start if I need it but at this time to not
want to run this extra instance for this database. (31-MAR-05)
Platform Specific
How many nodes can be had in an HP/Sun/IBM/Compaq/NT/Linux cluster? (21-OCT-04)
Is crossover cable supported as an interconnect with 9iRAC/10gRAC on any
platform ? (21-FEB-05)
Is it possible to run RAC on logical partitions (i.e. LPARs) or virtual separate
servers. (18-MAY-04)
Can the Oracle Database Configuration Assistant (DBCA) be used to create a
database with Veritas DBE / AC 3.5? (10-JAN-03)
How do I check RAC certification? (26-NOV-02)
Where can I find information about how to set up / install RAC on different
platforms? (08-AUG-02)
Is Veritas Storage Foundation 4.0 supported with RAC? (05-OCT-04)
Platform Specific -- Linux
Is 3rd Party Clusterware supported on Linux such as Veritas or Redhat? (11-MAY-05)

Can you have multiple RAC $ORACLE_HOME's on Linux? (19-JUL-05)


After installing patchset 9013 and patch_2313680 on Linux, the startup was very
slow (20-DEC-04)
Is CFS Available for Linux? (20-DEC-04)
Where can I find more information about hangcheck-timer module on Linux ? And how
do we configure hangcheck-timer module ? (20-DEC-04)
Can RAC 10g and 9i RAC be installed and run on the same physical Linux cluster?
(20-DEC-04)
Is the hangcheck timer still needed with Oracle Database 10g RAC? (20-DEC-04)
How to configure bonding on Suse SLES8. (29-NOV-04)
How to configure bonding on Suse SLES9. (29-NOV-04)
Platform Specific -- Solaris
Does RAC run faster with Sun-cluster or Veritas cluster-ware? (these being
alternatives with Sun hardware) Is there some clusterware that would make RAC run
faster? (20-DEC-04)
Platform Specific -- HP-UX
Is HMP supported with 10g on all HP platforms ? (20-DEC-04)
Platform Specific -- Windows
Does the Oracle Cluster File System (OCFS) support network access through NFS or
Windows Network Shares? (27-JAN-05)
Can I run my 9i RAC and RAC 10g on the same Windows cluster? (01-JUL-05)
My customer wants to understand what type of disk caching they can use with their
Windows RAC Cluster, the install guide tells them to disable disk caching? (31-
MAR-05)
Platform Specific -- IBM AIX
Do I need HACMP/GPFS to store my OCR/Voting file on a shared device. (20-DEC-04)
Platform Specific -- IBM-z/OS (Mainframe)
Can I run Oracle RAC 10g on my IBM Mainframe Sysplex environment (z/OS)? (07-JUL-
05)
Diagnosability
What are the cdmp directories in the background_dump_dest used for? (11-AUG-03)
EBusiness Suite with RAC
What is the optimal migration path to be used while migrating the E-Business suite
to RAC? (08-JUL-05)
Is the Oracle E-Business Suite (Oracle Applications) certified against RAC? (04-
JUN-03)
Can I use TAF with e-Business in a RAC environment? (02-APR-03)
How to configure concurrent manager in a RAC environment? (20-SEP-02)
Should functional partitioning be used with Oracle Applications? (20-SEP-02)
Which e-Business version is preferable? (20-SEP-02)
Can I use Automatic Undo Management with Oracle Applications? (20-SEP-02)
Clustered File Systems
Can I use OCFS with SE RAC? (01-SEP-04)
What are the maximum number of nodes under OCFS on Linux ? (06-NOV-03)
Where can I find documentation on OCFS ? (06-NOV-03)
What files can I put on Linux OCFS? (14-AUG-03)
Is Sun QFS supported with RAC? What about Sun GFS? (19-JAN-05)
Is Red Hat GFS(Global File System) is certified by Oracle for use with Real
Application Clusters? (22-NOV-04)
Oracle Clusterware (CRS)
Is it possible to use ASM for the OCR and voting disk? (19-JUL-05)
Is it supported to rerun root.sh from the Oracle Clusterware installation ? (05-
MAY-05)
Is it supported to allow 3rd Party Clusterware to manage Oracle resources
(instances, listeners, etc) and turn off Oracle Clusterware management of these?
(05-MAY-05)
What is the High Availability API? (05-MAY-05)
How to move the OCR location ? (24-MAR-04)
Does Oracle Clusterware support application vips? (11-JUL-05)
Why is the home for Oracle Clusterware not recommended to be subdirectory of the
Oracle base directory? (11-JUL-05)
Can I use Oracle Clusterware to provide cold failover of my 9i or 10g single
instance Oracle Databases? (01-JUL-05)
How do I put my application under the control of Oracle Clusterware to achieve
higher availability? (16-JUN-05)
How do I protect the OCR and Voting in case of media failure? (05-MAY-05)
How do I use multiple network interfaces to provide High Availability for my
interconnect with Oracle Clusterware? (06-APR-05)
How to Restore a Lost Voting Disk used by Oracle Clusterware 10g (02-DEC-04)
With Oracle Clusterware 10g, how do you backup the OCR? (02-DEC-04)
Does the hostname have to match the public name or can it be anything else? (05-
NOV-04)
Is it a requirement to have the public interface linked to ETH0 or does it only
need to be on a ETH lower than the private interface?: - public on ETH1 - private
on ETH2 (05-NOV-04)
How do I restore OCR from a backup? On Windows, can I use ocopy? (27-OCT-04)
What should the permissions be set to for the voting disk and ocr when doing a RAC
Install? (22-OCT-04)
Which processes access to OCR ? (22-OCT-04)
Can I change the name of my cluster after I have created it when I am using Oracle
Database 10g Clusterware? (05-OCT-04)
Can I change the public hostname in my Oracle Database 10g Cluster using Oracle
Clusterware? (05-OCT-04)
During CRS installation, I am asked to define a private node name, and then on the
next screen asked to define which interfaces should be used as private and public
interfaces. What information is required to answer these questions? (24-MAR-04)
Answers
I have changed my spfile with alter system set <parameter_name> =....
scope=spfile. The spfile is on
ASM storage and the database will not start.
How to recover:

In $ORACLE_HOME/dbs

. oraenv <instance_name>

sqlplus "/ as sysdba"

startup nomount

create pfile='recoversp' from spfile
/
shutdown immediate
quit

Now edit the newly created pfile to change the parameter to something sensible.

Then:

sqlplus "/ as sysdba"

startup pfile='recoversp' (or whatever you called it in step one).

create spfile='+DATA/GASM/spfileGASM.ora' from pfile='recoversp'
/
N.B.The name of the spfile is in your original init<instance_name>.ora so adjust
to suit

shutdown immediate
startup
quit

Modified: 18-APR-04 Ref #: ID-5068

--------------------------------------------------------------------------------

Is it supported to install CRS and RAC as different users.


Yes, CRS and RAC can be installed as different users. The CRS user and the RAC
user must both have "oinstall" as their primary group, and the RAC user should be
a member of the OSDBA group.
Modified: 09-SEP-04 Ref #: ID-5769

--------------------------------------------------------------------------------

Do we have to have Oracle RDBMS on all nodes?


Each node of a cluster will typically have the RDBMS and RAC software loaded on
it, but not actual datafiles (these need to be available via shared disk). For
example, if you wish to run RAC on 2 nodes of a 4-node cluster, you would need to
install it on all nodes, but it would only need to be licensed on the two nodes
running the RAC database. Note that using a clustered file system, or NAS storage
can provide a configuration that does not necessarily require the Oracle binaries
to be installed on all nodes.
Modified: 02-APR-04 Ref #: ID-4024
--------------------------------------------------------------------------------

What kind of HW components do you recommend for the interconnect?


The general recommendation for the interconnect is to provide the highest bandwidth
interconnect, together with the lowest latency protocol that is available for a
given platform. In practice, Gigabit Ethernet with UDP has proven sufficient in
every case it has been implemented, and tends to be the lowest common denominator
across platforms.
Modified: 02-APR-04 Ref #: ID-4049

--------------------------------------------------------------------------------

Are there any suggested roadmaps for implementing a new RAC installation?
Yes, Oracle Support recommends the following best practices roadmap to
successfully implement RAC:

A Smooth Transition to Real Application Clusters

The purpose of this document is to provide a best practices road map to
successfully implement Real Application Clusters.

Modified: 26-NOV-02 Ref #: ID-4062

--------------------------------------------------------------------------------

What is Cache Fusion and how does this affect applications?


Cache Fusion is a new parallel database architecture for exploiting clustered
computers to achieve scalability of all types of applications. Cache Fusion is a
shared cache architecture that uses high speed low latency interconnects available
today on clustered systems to maintain database cache coherency. Database blocks
are shipped across the interconnect to the node where access to the data is
needed. This is accomplished transparently to the application and users of the
system. Cache Fusion scales to clusters with a large number of nodes. For more
information about Cache Fusion, see the following links:

Understanding 9i Real Application Clusters Cache Fusion

There is also a whitepaper "Cache Fusion Delivers Scalability" available at
http://otn.oracle.com/products/oracle9i/content.html

Cache Fusion in the Oracle Documentation

Modified: 26-NOV-02 Ref #: ID-4065

--------------------------------------------------------------------------------

Is it difficult to transition from Single Instance to RAC?


If the cluster and the cluster software are not present, these components must be
installed and configured. The RAC option must be added using the Oracle Universal
Installer, which necessitates that the existing DB instance be shut down. There
are no changes necessary on the user data within the database. However, a
shortage of freelists and freelist groups can cause contention with header blocks
of tables and indexes as multiple instances vie for the same block. This may
cause a performance problem and require data partitioning. However, the need for
these changes should be rare.

Recommendation: apply automatic segment space management to perform these changes
automatically. The free space management will replace the freelists and freelist
groups and is better. The database requires one Redo thread and one Undo
tablespace for each instance, which are easily added with SQL commands or with
Enterprise Manager tools.

Datafiles will need to be moved to either a clustered file system (CFS) or raw
devices so that all nodes can access it. Also, the MAXINSTANCES parameter in the
control file must be greater than or equal to the number of instances you will start
in the cluster.

For more detailed information, please see Migrating from single-instance to RAC in
the Oracle Documentation

With Oracle Database 10g Release 2, the $ORACLE_HOME/bin/rconfig tool can be used
to convert a single instance database to RAC. This tool takes an XML input file
and converts the single instance database whose information is provided in the
XML. You can run this tool in "verify only" mode prior to performing the actual
conversion. This is documented in the RAC admin book, and a sample XML can be
found at $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml. Grid
Control 10g Release 2 provides an easy-to-use wizard to perform this function.
Note: Please be aware that you may hit bug 4456047 (shutdown immediate hangs) as
you convert the database. The bug is updated with a workaround, which is release
noted as well.
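
As a sketch of the workflow (the /tmp path is hypothetical; the sample XML
location is as documented above):

# copy the sample and edit it; set the Convert element's verify attribute
# to "ONLY" for a validation-only run, then rerun with it set to "YES"
cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml /tmp/convert.xml
vi /tmp/convert.xml
$ORACLE_HOME/bin/rconfig /tmp/convert.xml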

Modified: 18-JUL-05 Ref #: ID-4101

--------------------------------------------------------------------------------

What are the dependencies between OCFS and ASM in Oracle10g ?


In an Oracle Database 10g RAC environment, there is no dependency between
Automatic Storage Management (ASM)
and Oracle Cluster File System (OCFS).
OCFS is not required if you are using Automatic Storage Management (ASM) for
database files. You can use OCFS
on Windows( Version 2 on Linux ) for files that ASM does not handle - binaries
(shared oracle home),
trace files, etc. Alternatively, you could place these files on local file systems
even though it's not
as convenient given the multiple locations.
If you do not want to use ASM for your database files, you can still use OCFS for
database files in Oracle Database 10g.
Please refer to ASM and OCFS Positioning
Modified: 05-MAY-05 Ref #: ID-4116

--------------------------------------------------------------------------------

Is rcp and/or rsh required for normal RAC operation ?


rcp"" and ""rsh"" are not required for normal RAC operation. However ""rsh"" and
""rcp"" should to be enabled for RAC and patchset installation. In future
releases, ssh will be used for these operations.
Modified: 06-NOV-03 Ref #: ID-4117
--------------------------------------------------------------------------------

What software is necessary for RAC? Does it have a separate installation CD to


order?
Real Application Clusters is an option of Oracle Database and therefore part of
the Oracle Database CD. With Oracle 9i, RAC is part of Oracle9i Enterprise
Edition. If you install 9i EE onto a cluster, and the Oracle Universal Installer
(OUI) recognizes the cluster, you will be provided the option of installing RAC.
Most UNIX platforms require an OSD installation for the necessary clusterware. For
Intel platforms (Linux and Windows), Oracle provides the OSD software within the
Oracle9i Enterprise Edition release.

With Oracle Database 10g, RAC is an option of EE and available as part of SE.
Oracle provides Oracle Clusterware on its own CD included in the database CD pack.

Please check the certification matrix (Note 184875.1) or with the appropriate
platform vendor for more information.


Modified: 05-MAY-05 Ref #: ID-4132

--------------------------------------------------------------------------------

What is Standard Edition RAC?


With Oracle Database 10g, a customer who has purchased Standard Edition is allowed
to use the RAC option within the limitations of Standard Edition(SE). For
licensing restrictions you should read the Oracle Database 10g License Doc. At a
high level this means that you can have a max of 4 cpus in the cluster, you must
use ASM for all database files. Oracle Cluster File System (OCFS) is not supported
for use with SE RAC.
Modified: 01-SEP-04 Ref #: ID-5750

--------------------------------------------------------------------------------

Can I use iSCSI storage with my RAC cluster?


For iSCSI, Oracle has made the statement that, as a block protocol, this
technology does not require validation for single instance database. There are
many early adopter customers of iSCSI running Oracle9i and Oracle Database 10g. As
for RAC, Oracle has chosen to validate the iSCSI technology (not each vendor's
targets) for the 10g platforms - this has been completed for Linux, Unix and
Windows. For Windows we have tested up to 4 nodes - Any Windows iSCSI products
that are supported by the host and storage device are supported by Oracle. No
vendor-specific information will be posted on Certify.
Modified: 13-JUL-05 Ref #: ID-5788

--------------------------------------------------------------------------------

What would you recommend to a customer, Oracle clusterware or Vendor Clusterware
(i.e. MC Service Guard, HACMP,
Sun Cluster, Veritas etc.) with Oracle Database 10g Real Application Clusters?

You will be installing and using Oracle Clusterware whether or not you use the
Vendor Clusterware. The question you need to ask is whether the Vendor
Clusterware gives you something that Oracle Clusterware does not.
Is the RAC database on the same server as the application server? Are there any
other processes on the same server as the database that you require Vendor
Clusterware to fail over to another server in the cluster if the server it is
running on fails? If this is the case, you may want the vendor clusterware; if
not, why spend the extra money when Oracle Clusterware supplies everything you
need for the clustered database, included with your RAC license.
Modified: 21-OCT-04 Ref #: ID-5968

--------------------------------------------------------------------------------

When configuring the NIC cards and switch for a GigE Interconnect should it be set
to FULL or Half duplex in RAC?
You've got to use Full Duplex, regardless of RAC or not, for all network
communication. Half Duplex means you can only either send OR receive at any one
time.
Modified: 05-NOV-04 Ref #: ID-6048

--------------------------------------------------------------------------------

Is it a good idea to add anti-virus software to my RAC cluster?


For customers who choose to run anti-virus (AV) software on their database
servers, they should be aware that the nature of AV software is that disk IO
bandwidth is reduced slightly as most AV software checks disk writes/reads. Also,
as the AV software runs, it will use CPU cycles that would normally be consumed by
other server processes (e.g. your database instance). As such, databases will have
faster performance when not using AV software. As some AV software is known to
lock files while it scans, it is a good idea to exclude the Oracle
datafiles/controlfiles/logfiles from regular AV scans.
Modified: 31-JAN-05 Ref #: ID-6595

--------------------------------------------------------------------------------

Can I use RAC in a distributed transaction processing environment?


YES. Best practice is that all tightly coupled branches of a distributed
transaction running on a RAC database run on the same instance. Between
transactions and between services, transactions can be load balanced across all of
the database instances.
You can use services to manage DTP environments. By defining the DTP property of a
service, the service is guaranteed to run on one instance at a time in a RAC
database. All global distributed transactions performed through the DTP service
are ensured to have their tightly-coupled branches running on a single RAC
instance.
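
As a sketch (the service name is a placeholder; assumes the 10gR2 DBMS_SERVICE
package with its dtp parameter):

execute dbms_service.modify_service (service_name => 'dtpserv.us.oracle.com' -
, dtp => true);
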
Modified: 16-JUN-05 Ref #: ID-6864

--------------------------------------------------------------------------------

Why do we have a Virtual IP (VIP) in 10g? Why does it just return a dead
connection when its primary node fails?
Its all about availability of the application.
When a node fails, the VIP associated with it is supposed to be automatically
failed over to some other node. When this occurs, two things happen. (1) the new
node re-arps the world indicating a new MAC address for the address. For directly
connected clients, this usually causes them to see errors on their connections to
the old address; (2) Subsequent packets sent to the VIP go to the new node, which
will send error RST packets back to the clients. This results in the clients
getting errors immediately.
This means that when the client issues SQL to the node that is now down, or
traverses the address list while connecting, rather than waiting on a very long
TCP/IP time-out (~10 minutes), the client receives a TCP reset. In the case of
SQL, this is ORA-3113. In the case of connect, the next address in tnsnames is
used.
Without using VIPs, clients connected to a node that died will often wait a 10
minute TCP timeout period before getting an error.
As a result, you don't really have a good HA solution without using VIPs.
Modified: 12-MAR-04 Ref #: ID-4609

--------------------------------------------------------------------------------

If I use Services with Oracle Database 10g, do I still need to set up Load
Balancing ?
Yes, Services allow you a granular definition of workload, and the DBA can
dynamically define which instances provide the service. Connection Load Balancing
still needs to be set up to allow the user connections to be balanced across all
instances providing a service.
Modified: 16-JUN-05 Ref #: ID-6731

--------------------------------------------------------------------------------

Can RMAN backup Real Application Cluster databases?


Absolutely. RMAN can be configured to connect to all nodes within the cluster to
parallelize the backup of the database files and archive logs. If files need to be
restored, using set AUTOLOCATE ON alerts RMAN to search for backed up files and
archive logs on all nodes.
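
As a sketch (net aliases and credentials are placeholders; assumes disk
channels), RMAN channels can be spread across the instances like this:

run {
  allocate channel c1 device type disk connect 'sys/pwd@node1';
  allocate channel c2 device type disk connect 'sys/pwd@node2';
  backup database plus archivelog;
}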

RAC with RMAN in the Oracle Documentation

Modified: 26-NOV-02 Ref #: ID-4035

--------------------------------------------------------------------------------

I am receiving an ORA-29740 error. What should I do?


This error can occur when problems are detected on the cluster:

Error: ORA-29740 (ORA-29740)


Text: evicted by member %s, group incarnation %s
---------------------------------------------------------------------------
Cause: This member was evicted from the group by another member of the
cluster database for one of several reasons, which may include a
communications error in the cluster, failure to issue a heartbeat
to the control file, etc.
Action: Check the trace files of other active instances in the cluster
group for indications of errors that caused a reconfiguration.

For more information on troubleshooting this error, see the following Metalink
note:
Note 219361.1
Troubleshooting ORA-29740 in a RAC Environment

Modified: 02-DEC-02 Ref #: ID-4093

--------------------------------------------------------------------------------

What does the Virtual IP service do? I understand it is for failover but do we
need a separate network card? Can we use the existing private/public cards? What
would happen if we used the public ip?
The 10g Virtual IP Address (VIP) exists on every RAC node for public network
communication. All client communication should use the VIPs in their TNS
connection descriptions. The TNS ADDRESS_LIST entry should direct clients to VIPs
rather than using hostnames. During normal runtime, the behaviour is the same as
hostnames, however when the node goes down or is shutdown the VIP is hosted
elsewhere on the cluster, and does not accept connection requests. This results in
a silent TCP/IP error and the client fails immediately to the next TNS address. If
the network interface fails within the node, the VIP can be configured to use
alternate interfaces in the same node. The VIP must use the public interface
cards. There is no requirement to purchase additional public interface cards
(unless you want to take advantage of within-node card failover.)
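
A hypothetical tnsnames.ora entry directing clients to the VIPs (host names,
port and service name are placeholders):

ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA = (SERVICE_NAME = orcl.us.oracle.com))
  )
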
Modified: 15-MAR-04 Ref #: ID-4636

--------------------------------------------------------------------------------

What do the VIP resources do once they detect a node has failed/gone down? Are the
VIPs automatically acquired, and published, or is manual intervention required?
Are VIPs mandatory?
When a node fails, the VIP associated with the failed node is automatically failed
over to one of the other nodes in the cluster. When this occurs, two things
happen:
The new node re-arps the world indicating a new MAC address for this IP address.
For directly connected clients, this usually causes them to see errors on their
connections to the old address;
Subsequent packets sent to the VIP go to the new node, which will send error RST
packets back to the clients. This results in the clients getting errors
immediately.
In the case of existing SQL connections, errors will typically be in the form of
ORA-3113 errors, while a new connection using an address list will select the next
entry in the list. Without using VIPs, clients connected to a node that died will
often wait for a TCP/IP timeout period before getting an error. This can be as
long as 10 minutes or more. As a result, you don't really have a good HA solution
without using VIPs.
Modified: 15-MAR-04 Ref #: ID-4638

--------------------------------------------------------------------------------

What are my options for load balancing with RAC? Why do I get an uneven number of
connections on my instances?
All the types of load balancing available currently (9i-10g) occur at connect
time.
This means that it is very important how one balances connections and what these
connections do on a long term basis.
Since establishing connections can be very expensive for your application, it is
good programming practice to connect once and stay connected. This means one needs
to be careful as to what option one uses. Oracle Net Services provides load
balancing or you can use external methods such as hardware based or clusterware
solutions.
The following options exist:
Random
Either client side load balancing or hardware based methods will randomize the
connections to the instances.
On the negative side, this method is unaware of load on the connections, or even
whether they are up, meaning they might cause waits on TCP/IP timeouts.
Load Based
Server side load balancing (by the listener) redirects connections by default
depending on the RunQ length of each of the instances. This is great for
short-lived connections, but terrible for persistent connections or login
storms. Do not use this method for connections from connection pools or
application servers.
Session Based
Server side load balancing can also be used to balance the number of connections
to each instance. Session count balancing is the method used when you set the
listener parameter prefer_least_loaded_node_<listener_name>=off. Note that the
listener name is the actual name of the listener, which is different on each
node in your cluster and by default is listener_<nodename>.
Session based load balancing takes into account the number of sessions connected
to each node and then distributes new connections to balance the number of
sessions across the different nodes.
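
For example, for a hypothetical listener named LISTENER_NODE1, the listener.ora
entry would be:

PREFER_LEAST_LOADED_NODE_LISTENER_NODE1 = OFF
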
Modified: 15-MAR-05 Ref #: ID-4940

--------------------------------------------------------------------------------

Can I use ASM as mechanism to mirror the data in an Extended RAC cluster?
Yes, but it cannot replicate everything that needs replication.
ASM works well to replicate any object you can put in ASM. But you cannot put the
OCR or Voting Disk in ASM.
In 10gR1 they can either be mirrored using a different mechanism (which could then
be used instead of ASM) or the OCR needs to be restored from backup and the Voting
Disk can be recreated.
In the future we are looking at providing Oracle redundancy for both.
Modified: 18-OCT-04 Ref #: ID-5948

--------------------------------------------------------------------------------

Can our 10g VIP fail over from NIC to NIC as well as from node to node ?
Yes, the 10g VIP implementation is capable of failing over within a node from NIC
to NIC and back if the failed NIC is back online again, and it also fails over
between nodes. The NIC to NIC failover is fully redundant if redundant switches
are installed.
Modified: 10-DEC-04 Ref #: ID-6348

--------------------------------------------------------------------------------

What clients provide integration with FAN and FCF?


With Oracle Database 10g Release 1, JDBC clients (both thick and thin driver) are
integrated with FAN by providing FCF. With Oracle Database 10g Release 2, we have
added ODP.NET and OCI. Other applications can integrate with FAN by using the API
to subscribe to the FAN events.
Modified: 28-APR-05 Ref #: ID-6735
--------------------------------------------------------------------------------

What is CLB_GOAL and how should I set it?


CLB_GOAL is the connection load balancing goal for a service. There are 2 options,
CLB_GOAL_SHORT and CLB_GOAL_LONG (default).
Long is for applications that have long-lived connections. This is typical for
connection pools and SQL*Forms sessions. Long is the default connection load
balancing goal.
Short is for applications that have short-lived connections.
The GOAL for a service can be set with EM or DBMS_SERVICE.
Note: You must still configure load balancing with Oracle Net Services
Modified: 16-JUN-05 Ref #: ID-6854

--------------------------------------------------------------------------------

Can I use TAF and FAN/FCF?


With Oracle Database 10g Release 1, NO. With Oracle Database 10g Release 2, the
answer is YES for OCI and ODP.NET, it is recommended. For JDBC, you should not use
TAF and FCF even with the Thick JDBC driver.
Modified: 16-JUN-05 Ref #: ID-6866

--------------------------------------------------------------------------------

What is Server-side Transparent Application Failover (TAF) and how do I use it?
Oracle Database 10g Release 2, introduces server-side TAF when using services.
After you create a service, you can use the dbms_service.modify_service pl/sql
procedure to define the TAF policy for the service. Only the basic method is
supported. Note this is different than the TAF policy (traditional client TAF)
that is supported by srvctl and EM Services page. If your service has a server
side TAF policy defined, then you do not have to encode TAF on the client
connection string. If the instance where a client is connected, fails, then the
connection will be failed over to another instance in the cluster that is
supporting the service. All restrictions of TAF still apply.
NOTE: both the client and server must be 10.2 and aq_ha_notifications must be set
to true for the service.
Sample code to modify service:
execute dbms_service.modify_service (service_name => 'gl.us.oracle.com' -
, aq_ha_notifications => true -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_long);

Modified: 07-JUL-05 Ref #: ID-6912

--------------------------------------------------------------------------------

I am seeing the wait events 'ges remote message', 'gcs remote message', and/or
'gcs for action'. What should I do about these?
These are idle wait events and can be safely ignored. The 'ges remote message'
might show up in a 9.0.1 statspack report as one of the top wait events. To have
this wait event not show up, you can add this event to the
PERFSTAT.STATS$IDLE_EVENT table so that it is not listed in Statspack reports.
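
A sketch of that change (assumes a standard PERFSTAT installation, where
STATS$IDLE_EVENT has a single EVENT column):

insert into perfstat.stats$idle_event (event) values ('ges remote message');
commit;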

Modified: 02-APR-04 Ref #: ID-4092

--------------------------------------------------------------------------------

What are the changes in memory requirements from moving from single instance to
RAC?
If you are keeping the workload requirements per instance the same, then about 10%
more buffer cache and 15% more shared pool is needed. The additional memory
requirement is due to data structures for coherency management. The values are
heuristic and are mostly upper bounds. Actual resource usage can be monitored by
querying current and maximum columns for the gcs resource/locks and ges
resource/locks entries in V$RESOURCE_LIMIT.

But in general, please take into consideration that memory requirements per
instance are reduced when the same user population is distributed over multiple
nodes. In this case:

Assuming the same user population, N = number of nodes, and M = buffer cache for
a single system, then the per-instance buffer cache is approximately:

(M / N) + ((M / N) * 0.10) [ + extra memory to compensate for failed-over users ]

Thus, for example, with M=2G, N=2, and no extra memory for failed-over users:

= (2G / 2) + ((2G / 2) * 0.10)
= 1G + 100M

Modified: 02-DEC-02 Ref #: ID-4030

--------------------------------------------------------------------------------

What is the Load Balancing Advisory?


To assist in the balancing of application workload across designated resources,
Oracle Database 10g Release 2 provides the Load Balancing Advisory. This Advisory
monitors the current workload activity across the cluster; for each instance
where a service is active, it provides a percentage value of how much of the total
workload should be sent to this instance, as well as a service quality flag. The
feedback is provided as an entry in the Automatic Workload Repository, and a FAN
event is published.
Modified: 16-JUN-05 Ref #: ID-6858

--------------------------------------------------------------------------------

What is Runtime Connection Load Balancing?


Runtime connection load balancing enables the connection pool to route incoming
work requests to the available database connection that will provide it with the
best service. This will provide the best service times globally, and routing
responds fast to changing conditions in the system. Oracle has implemented runtime
connection load balancing with ODP.NET and JDBC connection pools. Runtime
Connection Load Balancing is tightly integrated with the automatic workload
balancing features introduced with Oracle Database 10g, i.e. Services, the
Automatic Workload Repository, and the new Load Balancing Advisory.
Modified: 16-JUN-05 Ref #: ID-6860

--------------------------------------------------------------------------------

How do I enable the load balancing advisory?


The load balancing advisory requires the use of services and Oracle Net connection
load balancing.
To enable it, on the server: set a goal (service_time or throughput) and CLB_GOAL
on your service; for ODP.NET also enable AQ_HA_NOTIFICATIONS=>true.
On the client, you must be using a connection pool.
For JDBC, enable the datasource parameter FastConnectionFailoverEnabled.
For ODP.NET, enable the datasource parameter Load Balancing=true.
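
Putting the server-side pieces together, a hypothetical service definition might
look like this (the service name is a placeholder; assumes the 10gR2
DBMS_SERVICE constants):

execute dbms_service.modify_service (service_name => 'oltp.us.oracle.com' -
, goal => dbms_service.goal_service_time -
, clb_goal => dbms_service.clb_goal_short -
, aq_ha_notifications => true);
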
Modified: 16-JUN-05 Ref #: ID-6862

--------------------------------------------------------------------------------

How do I stop the GSD?


If you are on 9.0 on Unix you would issue:

$ ps -ef | grep jre


$ kill -9 <gsd process>

Stop the OracleGSDService on Windows.

Note: Make sure that this is the process in use by GSD

If you are on 9.2 you would issue:

$ gsdctl stop

Modified: 22-MAR-04 Ref #: ID-4091

--------------------------------------------------------------------------------

How should I deal with space management? Do I need to set free lists and free list
groups?
Manually setting free list groups is a complexity that is no longer required.

We recommend using Automatic Segment Space Management rather than trying to manage
space manually. Unless you are migrating from an earlier database version with OPS
and have already built and tuned the necessary structures, Automatic Segment Space
Management is the preferred approach.

Automatic Segment Space Management is NOT the default, you need to set it.
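
For example (tablespace name, file name and size are placeholders):

create tablespace data_assm
  datafile '/u01/oradata/orcl/data_assm01.dbf' size 500m
  extent management local
  segment space management auto;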

For more information see:

Automatic Space Segment Management in RAC Environments

Modified: 16-JUN-03 Ref #: ID-4074

--------------------------------------------------------------------------------
I was installing RAC and my Oracle files did not get copied to the remote node(s).
What went wrong?
First make sure the cluster is running and is available on all nodes. You should
be able to see all nodes
when running an 'lsnodes -v' command.

If lsnodes shows that all members of the cluster are available, then you may have
an rcp/rsh problem on Unix
or shares have not been configured on Windows.

You can test rcp/rsh on Unix by issuing the following from each node:

[node1]/tmp> touch test.tst
[node1]/tmp> rcp test.tst node2:/tmp

[node2]/tmp> touch test.tst
[node2]/tmp> rcp test.tst node1:/tmp

On Windows, ensure that each node has administrative access to all these
directories within the Windows environment by running the following at the command
prompt:

NET USE \\host_name\C$

Clustercheck.exe also checks for this.

More information can be found in the Step-by-Step RAC notes available on Metalink.
To find these search Metalink for 'Step-by-Step Installation of RAC'.

Modified: 26-NOV-02 Ref #: ID-4094

--------------------------------------------------------------------------------

What are the implications of using srvctl disable for an instance in my RAC
cluster? I want to have it available
to start if I need it but at this time do not want to run this extra instance for
this database.
During node reboot, any disabled resources will not be started by the Clusterware,
therefore this instance
will not be restarted. It is recommended that you leave the vip, ons,gsd enabled
in that node. For example,
VIP address for this node is present in address list of database services, so a
client connecting to these services
will still reach some other database instance providing that service via listener
redirection. Just be aware that by disabling an instance on a node, all that
means is that the instance itself is not starting.
However, if the database was originally created with 3 instances, that means there
are 3 threads of redo.
So, while the instance itself is disabled, the redo thread is still enabled, and
will occasionally cause
log switches. The archived logs for this 'disabled' instance would still be needed
in any potential database
recovery scenario. So, if you are going to disable the instance through srvctl,
you may also want to consider
disabling the redo thread for that instance.
srvctl disable instance -d orcl -i orcl2

SQL> alter database disable public thread 2;

Do the reverse to enable the instance.

SQL> alter database enable public thread 2;

srvctl enable instance -d orcl -i orcl2


Modified: 31-MAR-05 Ref #: ID-6672

--------------------------------------------------------------------------------

What is the Cluster Verification Utility (cluvfy)?


The Cluster Verification Utility (CVU) is a validation tool that you can use to
check all the important components that need to be verified at different stages of
deployment in a RAC environment. The wide domain of deployment of CVU ranges from
initial hardware setup through fully operational cluster for RAC deployment and
covers all the intermediate stages of installation and configuration of various
components. Cluvfy does not take any corrective action following the failure of a
verification task, does not enter into areas of performance tuning or monitoring,
does not perform any cluster or RAC operation, and does not attempt to verify the
internals of cluster database or cluster elements.
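
Typical invocations (node names are placeholders) look like:

cluvfy stage -pre crsinst -n node1,node2 -verbose
cluvfy stage -post crsinst -n all
cluvfy comp nodecon -n all -verbose
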
Modified: 16-JUN-05 Ref #: ID-6850

--------------------------------------------------------------------------------

What versions of the database can I use the cluster verification utility (cluvfy)
with?
The cluster verification utility is released with Oracle Database 10g Release 2
but can also be used with Oracle Database 10g Release 1.
Modified: 16-JUN-05 Ref #: ID-6852

--------------------------------------------------------------------------------

How many nodes can be had in an HP/Sun/IBM/Compaq/NT/Linux cluster?


The number of nodes supported is not limited by Oracle, but more generally by the
clustering software/hardware
in question.

When using Solely Oracle Clusterware: 63 nodes (9i or 10gR1)

When using a third party clusterware:

Sun: 8

HP UX: 16

HP Tru64: 8

IBM AIX:

* 8 nodes for Physical Shared (CLVM) SSA disk
* 16 nodes for Physical Shared (CLVM) non-SSA disk

* 128 nodes for Virtual Shared Disk (VSD)

* 128 nodes for GPFS

* Subject to storage subsystem limitations

Veritas: 8-16 nodes (check w/ Veritas)

Modified: 21-OCT-04 Ref #: ID-4047

--------------------------------------------------------------------------------

Where can I find information about how to set up / install RAC on different
platforms?
There is a roadmap for implementing Real Application Clusters available at:

A Smooth Transition to Real Application Clusters

There are also Step-by-Step notes available for each platform available on the
Metalink 'Top Tech Docs' for RAC:

High Availability - Real Application Clusters Library Page Index

Additional information can be found on OTN:

http://technet.oracle.com/products/oracle9i/content.html --> 'Oracle Real
Application Clusters'

Modified: 08-AUG-02 Ref #: ID-4067

--------------------------------------------------------------------------------

Is it possible to run RAC on logical partitions (i.e. LPARs) or virtual separate
servers?
Yes, it is possible. The E10K and other high end servers can be partitioned into
domains of smaller sizes, each domain with its own CPU(s) and operating system.
Each domain is effectively a virtual server. RAC can be run on a cluster composed
of domains. The benefits of this are similar to a regular cluster: any domain
failure will have little effect on other domains. Besides, the management of the
cluster may be easier since there is only one physical server. Note, however,
that one E10K is still just one server, so there are single points of failure.
Any failure, such as a back plane failure, that brings down the entire server
will shut down the virtual cluster. That is the tradeoff users have to make in
how best to build a cluster database.
Modified: 18-MAY-04 Ref #: ID-4075

--------------------------------------------------------------------------------

How do I check RAC certification?


See the following Metalink note:

Note 184875.1
How To Check The Certification Matrix for Real Application Clusters

Please note that certifications for Real Application Clusters are performed
against the Operating System and Clusterware versions. The corresponding system
hardware is offered by System vendors and specialized Technology vendors. Some
system vendors offer pre-installed, pre-configured RAC clusters. These are
included below under the corresponding OS platform selection within the
certification matrix.

Modified: 26-NOV-02 Ref #: ID-4095

--------------------------------------------------------------------------------

Can the Oracle Database Configuration Assistant (DBCA) be used to create a
database with Veritas DBE / AC 3.5?
DBCA can be used to create databases on raw devices in 9i RAC Release 1 and 9i
Release 2. Standard database creation scripts using SQL commands will work with
file system and raw.

DBCA cannot be used to create databases on file systems on Oracle 9i Release 1.


The user can choose to set up a database on raw devices, and have DBCA output a
script. The script can then be modified to use cluster file systems instead.

With Oracle 9i RAC Release 2 (Oracle 9.2), DBCA can be used to create databases on
a cluster filesystem. If the ORACLE_HOME is stored on the cluster filesystem, the
tool will work directly. If ORACLE_HOME is on local drives on each system, and the
customer wishes to place database files onto a cluster file system, they must
invoke DBCA as follows: dbca -datafileDestination /oradata where /oradata is on
the CFS filesystem. See 9iR2 README and bug 2300874 for more info.

Modified: 10-JAN-03 Ref #: ID-4124

--------------------------------------------------------------------------------

Is crossover cable supported as an interconnect with 9iRAC/10gRAC on any
platform?

NO. CROSS OVER CABLES ARE NOT SUPPORTED.

The requirement is to use a switch:

Detailed Reasons:
1) cross-cabling limits the expansion of RAC to two nodes
2) cross-cabling is unstable:
a) Some NIC cards do not work properly with it.
b) Instability. We have seen different problems, e.g. ORA-29740, at
configurations using crossover cable, and other errors.

Due to the benefits and stability provided by a switch, and their affordability,
this is the only supported configuration.

Please see certify.us.oracle.com as well.

(content consolidated from that of Massimo Castelli, Roland Knapp and others)

Modified: 21-FEB-05 Ref #: ID-4150

--------------------------------------------------------------------------------

Is Veritas Storage Foundation 4.0 supported with RAC?


Veritas Storage Foundation 4.0 is certified on AIX, Solaris and HPUX for 9i RAC
and Oracle Database 10g RAC. Veritas is also in production on Linux, but it is not
certified by Oracle. If customers choose Veritas on Linux, Oracle will support the
Oracle products in the stack, but they do not qualify for Unbreakable Linux
support.
Modified: 05-OCT-04 Ref #: ID-5888

--------------------------------------------------------------------------------

Is 3rd Party Clusterware supported on Linux such as Veritas or Redhat?


No, Oracle RAC 10g does not support 3rd Party clusterware on Linux. This means
that if a cluster file system requires a 3rd party clusterware, the cluster file
system is not supported.
Modified: 11-MAY-05 Ref #: ID-6743

--------------------------------------------------------------------------------

Can you have multiple RAC $ORACLE_HOME's on Linux?


No, there should be only one Oracle Cluster Manager (ORACM) running on each node.
All RAC databases should run out of the $ORACLE_HOME that ORACM is installed in.
Modified: 19-JUL-05 Ref #: ID-6931

--------------------------------------------------------------------------------

After installing patchset 9013 and patch_2313680 on Linux, the startup was very
slow

Please carefully read the following new information about configuring Oracle
Cluster Management on Linux, provided as part of the patch README:

Three parameters affect the startup time:

soft_margin (defined at watchdog module load)

-m (watchdogd startup option)

WatchdogMarginWait (defined in nmcfg.ora).

WatchdogMarginWait is calculated using the formula:

WatchdogMarginWait = soft_margin(msec) + -m + 5000(msec).

[5000(msec) is hardcoded]

Note that the soft_margin is measured in seconds; -m and WatchdogMarginWait are
measured in milliseconds.
Based on benchmarking, it is recommended to set soft_margin between 10 and 20
seconds. Use the same value for -m (converted to milliseconds) as used for
soft_margin. Here is an example:

soft_margin=10 -m=10000 WatchdogMarginWait = 10000+10000+5000=25000

If CPU utilization in your system is high and you experience unexpected node
reboots, check the wdd.log file. If there are any 'ping came too late' messages,
increase the value of the above parameters.

Modified: 20-DEC-04 Ref #: ID-4069

--------------------------------------------------------------------------------

Is CFS Available for Linux?

Yes, OCFS (Oracle Cluster Filesystem) is now available for Linux. The following
Metalink note has information for obtaining the latest version of OCFS:

Note 238278.1 - How to find the current OCFS version for Linux

Modified: 20-DEC-04 Ref #: ID-4089

--------------------------------------------------------------------------------

Where can I find more information about the hangcheck-timer module on Linux? And
how do we configure the hangcheck-timer module?
In releases 9.2.0.2.0 and later, Oracle recommends using a new I/O fencing model
-- the hangcheck-timer module. The hangcheck-timer
module monitors the Linux kernel for long operating system hangs that could affect
the reliability of a RAC node. You can configure the hangcheck-timer module using
3 parameters -- hangcheck_tick, hangcheck_margin and MissCount.

For more details, please review Note 259487.1


Modified: 20-DEC-04 Ref #: ID-4179

--------------------------------------------------------------------------------

Can RAC 10g and 9i RAC be installed and run on the same physical Linux cluster?
Yes - CRS / CSS and oracm can coexist.
Modified: 20-DEC-04 Ref #: ID-4408

--------------------------------------------------------------------------------

Is the hangcheck timer still needed with Oracle Database 10g RAC?
YES! The hangcheck-timer module monitors the Linux kernel for extended operating
system hangs that could affect the reliability
of the RAC node ( I/O fencing) and cause database corruption. To verify the
hangcheck-timer module is running on every node:

as root user:
/sbin/lsmod | grep hangcheck
If the hangcheck-timer module is not listed enter the following command as the
root user:

/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

To ensure the module is loaded every time the system reboots, verify that the
local system startup file (/etc/rc.d/rc.local) contains the command above.
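
A minimal sketch of making the load persistent (parameter values taken from the
example above; adjust them to your environment):

  # as root, append the load command to the local startup script:
  echo '/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180' >> /etc/rc.d/rc.local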

For additional information please review the Oracle RAC Install and Configuration
Guide (5-41).

Modified: 20-DEC-04 Ref #: ID-6208

--------------------------------------------------------------------------------

How to configure bonding on Suse SLES8.


Please see note:291958.1
Modified: 29-NOV-04 Ref #: ID-6288

--------------------------------------------------------------------------------

How to configure bonding on Suse SLES9.


Please see note:291962.1
Modified: 29-NOV-04 Ref #: ID-6290

--------------------------------------------------------------------------------

Does RAC run faster with Sun-cluster or Veritas cluster-ware? (these being
alternatives with Sun hardware) Is there some clusterware that would make RAC run
faster?
RAC scalability and performance are independent of the clusterware. However, we
recommend that the customer use a very
fast memory-based interconnect if one wants to optimize performance. For
example, Sun can use FireLink, a very fast proprietary interconnect which is more
optimal for RAC, while Veritas is limited to using Gigabit Ethernet.

Starting with 10g there is an alternative to Sun Cluster and Veritas Cluster,
namely Oracle CRS/CSS.

Modified: 20-DEC-04 Ref #: ID-4088

--------------------------------------------------------------------------------

Is HMP supported with 10g on all HP platforms ?

- 10g RAC + HMP + PA-RISC = yes

- 10g RAC + HMP + Itanium, "Oracle has no plans and will likely never
support RAC over HMP on IPF."

- 10g RAC + UDP + Itanium = yes (even over Hyperfabric)


"Oracle recommends that HMP not be used. UDP is the recommended interconnect
protocol across all platforms."

Modified: 20-DEC-04 Ref #: ID-5488

--------------------------------------------------------------------------------

Does the Oracle Cluster File System (OCFS) support network access through NFS or
Windows Network Shares?
No, in the current release the Oracle Cluster File System (OCFS) is not supported
for use by network access approaches like NFS or Windows Network Shares.
Modified: 27-JAN-05 Ref #: ID-4122

--------------------------------------------------------------------------------

My customer wants to understand what type of disk caching they can use with their
Windows RAC Cluster, the install guide tells them to disable disk caching?
If the write cache identified is local to the node then that is bad for RAC. If
the cache is visible to all nodes as a 'single cache', typically in the storage
array, and is also 'battery backed' then that is OK.
Modified: 31-MAR-05 Ref #: ID-6670

--------------------------------------------------------------------------------

Can I run my 9i RAC and RAC 10g on the same Windows cluster?
Yes, but the 9i RAC database must have the 9i Cluster Manager and you must run
Oracle Clusterware for the Oracle Database 10g. 9i Cluster Manager can coexist
with Oracle Clusterware 10g.
Modified: 01-JUL-05 Ref #: ID-6889

--------------------------------------------------------------------------------

Do I need HACMP/GPFS to store my OCR/Voting file on a shared device?

The prerequisites doc for AIX clearly says:

"If you are not using HACMP, you must use a GPFS file system to store the Oracle
CRS files" ==>
this is a documentation bug and this will be fixed with 10.1.0.3

-----

On AIX it is important to set reserve_lock=no / reserve_policy=no_reserve
in order to allow AIX to access the devices from more than one node
simultaneously.

Use the /dev/rhdisk devices (character special) for the CRS and voting disk and
change the attribute with the command
chdev -l hdiskn -a reserve_lock=no

(for ESS, EMC, HDS, CLARiiON, and MPIO-capable devices you have to do a chdev -l
hdiskn -a reserve_policy=no_reserve)
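
A short sketch of applying this across the shared disks (the hdisk names are
hypothetical; use the disks that actually back your OCR/voting devices):

  # as root, on every node:
  for d in hdisk2 hdisk3 hdisk4; do
    chdev -l $d -a reserve_lock=no      # or: -a reserve_policy=no_reserve
  done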

Modified: 20-DEC-04 Ref #: ID-5288

--------------------------------------------------------------------------------

Can I run Oracle RAC 10g on my IBM Mainframe Sysplex environment (z/OS)?
YES! There is no separate documentation for RAC on z/OS. What you would call
"clusterware" is built in to the OS
and the native file systems are global. IBM z/OS documentation explains how to set
up a Sysplex Cluster;
once the customer has done that it is trivial to set up a RAC database. The few
steps involved are covered
in Chapter 14 of the Oracle for z/OS System Admin Guide. There is also an Install
Guide for Oracle on z/OS, but I don't think there are any RAC-specific steps in
the installation. By the way,
RAC on z/OS does not use Oracle's clusterware (CSS/CRS/OCR).
Modified: 07-JUL-05 Ref #: ID-6910

--------------------------------------------------------------------------------

What are the cdmp directories in the background_dump_dest used for?


These directories are produced by the diagnosability daemon process (DIAG). DIAG
is a process related to RAC
which, as one of its tasks, performs crash dumping. The DIAG process dumps out
tracing to file when it discovers
the death of an essential process (foreground or background) in the local
instance. A dump directory named something
like cdmp_ is created in the bdump or background_dump_dest directory, and all the
trace dump files DIAG creates are
placed in this directory.
Modified: 11-AUG-03 Ref #: ID-4152

--------------------------------------------------------------------------------

Is the Oracle E-Business Suite (Oracle Applications) certified against RAC?


Yes. (There is no separate certification required for RAC.)
Modified: 04-JUN-03 Ref #: ID-4029

--------------------------------------------------------------------------------

What is the optimal migration path to be used while migrating the E-Business suite
to RAC?
Following is the recommended and most optimal path to migrate your E-Business
suite to a RAC environment:

1. Migrate the existing application to new hardware. (If applicable).

2. Use a Clustered File System for all database files or migrate all database
files to raw devices. (Use dd for Unix or ocopy for NT.)

3. Install/upgrade to the latest available e-Business suite.

4. Upgrade database to Oracle9i (Refer document 216550.1 on Metalink)

5. In step 4, install RAC option while installing Oracle9i and use Installer to
perform install for all the nodes.

6. Clone Oracle Application code tree.

Reference Documents:
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration :
Metalink Note# 279956.1
E-Business Suite 11i on RAC : Configuring Database Load balancing & Failover:
Metalink Note# 294652.1
Oracle E-Business Suite 11i and Database - FAQ : Metalink# 285267.1

Modified: 08-JUL-05 Ref #: ID-4107

--------------------------------------------------------------------------------

How to configure concurrent manager in a RAC environment?


Large clients commonly put the concurrent manager on a separate server now (in the
middle tier) to reduce the load on the database server. The concurrent manager
programs can be "tied" to a specific middle tier (e.g., you can have CMs running
on more than one middle tier box). It is advisable to use specialized CMs. CM
middle tiers are set up to point to the appropriate database instance based on the
product module being used.

Modified: 20-SEP-02 Ref #: ID-4108

--------------------------------------------------------------------------------

Should functional partitioning be used with Oracle Applications?


We do not recommend functional partitioning unless throughput on your server
architecture demands it. Cache Fusion has been optimized to scale well with a
non-partitioned workload.

If your processing requirements are extreme and your testing proves you must
partition your workload in order to reduce internode communications, you can use
Profile Options to designate that sessions for certain applications
Responsibilities are created on a specific middle tier server. That middle tier
server would then be configured to connect to a specific database instance.

To determine the correct partitioning for your installation you would need to
consider several factors like number of concurrent users, batch users, modules
used, workload characteristics etc.

Modified: 20-SEP-02 Ref #: ID-4109

--------------------------------------------------------------------------------

Which e-Business version is preferable?


Versions 11.5.5 onwards are certified with Oracle9i and hence with Oracle9i RAC.
However we recommend the latest available version.

Modified: 20-SEP-02 Ref #: ID-4110

--------------------------------------------------------------------------------

Can I use Automatic Undo Management with Oracle Applications?


Yes. In a RAC environment we highly recommend it.

Modified: 20-SEP-02 Ref #: ID-4111

--------------------------------------------------------------------------------

Can I use TAF with e-Business in a RAC environment?


TAF itself does not work with e-Business suite due to Forms/TAF limitations, but
you can configure the tns failover clause. On instance failure, when the user logs
back into the system, their session will be directed to a surviving instance, and
the user will be taken to the navigator tab. Their committed work will be
available; any uncommitted work must be re-started.

We also recommend you configure the forms error URL to identify a fallback middle
tier server for Forms processes, if no router is available to accomplish switching
across servers.

Modified: 02-APR-03 Ref #: ID-4112

--------------------------------------------------------------------------------

Can I use OCFS with SE RAC?


It is not supported to use OCFS with Standard Edition RAC. All database files must
use ASM (redo logs, recovery area,
datafiles, control files etc). We recommend that the binaries and trace files
(non-ASM supported files) be
replicated on all nodes. This is done automatically by the installer.
Modified: 01-SEP-04 Ref #: ID-5748

--------------------------------------------------------------------------------

What are the maximum number of nodes under OCFS on Linux ?


Oracle 9iRAC on Linux, using OCFS for datafiles, can scale to a maximum of 32
nodes.
Modified: 06-NOV-03 Ref #: ID-4118

--------------------------------------------------------------------------------

Where can I find documentation on OCFS ?


Main Page:   http://oss.oracle.com/projects/ocfs/
User Manual: http://oss.oracle.com/projects/ocfs/documentation/
OCFS Files:  http://oss.oracle.com/projects/ocfs/files/supported/
Modified: 06-NOV-03 Ref #: ID-4119

--------------------------------------------------------------------------------
What files can I put on Linux OCFS?
For optimal performance, you should only put the following files on Linux OCFS:

- Datafiles
- Control Files
- Redo Logs
- Archive Logs
- Shared Configuration File (OCR)
- Quorum / Voting File
- SPFILE

Modified: 14-AUG-03 Ref #: ID-4156

--------------------------------------------------------------------------------

Is Sun QFS supported with RAC? What about Sun GFS?

Sun QFS is supported with Oracle 9i RAC.


Sun is planning to certify QFS with Oracle Database 10g and RAC but as of November
15,2004, this certification is "planned".

For 9i, Software Stack details:

For SVM you need Solaris 9 9/04 (Solaris 9 update 7), SVM Patch 116669-03 (this is
a required SUN patch), Sun Cluster 3.1 Update 3, Oracle 9.2.0.5 + Oracle patch
3366258.

For Shared QFS you need Solaris 9 04/03 and above or Solaris 8 02/02 and above,
QFS 4.2, Sun Cluster 3.1 Update 2 or above, Oracle 9.2.0.5 + Oracle patch 3566420.
In contrast, Sun GFS (Global File System) is supported for Oracle binaries and
archive logs only, but NOT for database files.
Modified: 19-JAN-05 Ref #: ID-6128

--------------------------------------------------------------------------------

Is Red Hat GFS(Global File System) is certified by Oracle for use with Real
Application Clusters?
The Sistina Cluster Filesystem is not part of the standard RedHat kernel and
therefore is not certified under
Unbreakable Linux but falls under a kernel extension. This, however, does not
mean that Oracle RAC is
not certified with it. In fact, Oracle RAC does not certify against a filesystem
per se, but certifies
against an operating system. If, as is the case with the Sistina filesystem, the
filesystem is certified with
the operating system, this only means that the combination does not fall under the
Unbreakable Linux combination,
and Oracle does not provide direct support or fix the filesystem in case of an
error. The customer will have to contact
the filesystem provider for support.
Modified: 22-NOV-04 Ref #: ID-6228

--------------------------------------------------------------------------------
How to move the OCR location?
- Stop the CRS stack on all nodes using "init.crs stop".
- Edit /var/opt/oracle/ocr.loc on all nodes and set ocrconfig_loc=new OCR device.
- Restore from one of the automatic physical backups using ocrconfig -restore.
- Run ocrcheck to verify.
- Reboot to restart the CRS stack.
- Additional information can be found at
  http://st-doc.us.oracle.com/10/101/rac.101/b10765/storage.htm#i1016535
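
A command-level sketch of the above (the new device path and cluster name are
hypothetical; run as root):

  # on ALL nodes:
  /etc/init.d/init.crs stop
  vi /var/opt/oracle/ocr.loc        # set ocrconfig_loc=/dev/raw/raw2

  # on ONE node, restore a recent automatic physical backup:
  ocrconfig -showbackup
  ocrconfig -restore $ORA_CRS_HOME/cdata/mycluster/backup00.ocr
  ocrcheck                          # verify
  # then reboot the nodes to restart the CRS stack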
Modified: 24-MAR-04 Ref #: ID-4728

--------------------------------------------------------------------------------

Is it supported to rerun root.sh from the Oracle Clusterware installation ?


Rerunning root.sh after the initial install is expressly discouraged and
unsupported. We strongly recommend not doing it.
Modified: 05-MAY-05 Ref #: ID-4730

--------------------------------------------------------------------------------

Is it supported to allow 3rd Party Clusterware to manage Oracle resources
(instances, listeners, etc.) and turn off
Oracle Clusterware management of these?
In 10g we do not support using 3rd Party Clusterware for failover and restart of
Oracle resources. Oracle Clusterware
resources should not be disabled.
Modified: 05-MAY-05 Ref #: ID-6528

--------------------------------------------------------------------------------

What is the High Availability API?


An application-programming interface to allow processes to be put under the High
Availability infrastructure that is part of the Oracle Clusterware distributed
with Oracle Database 10g. A user written script defines how Oracle Clusterware
should start, stop and relocate the process when the cluster node status changes.
This extends the high availability services of the cluster to any application
running in the cluster. Oracle Database 10g Real Application Clusters (RAC)
databases and associated Oracle processes (E.G. listener) are automatically
managed by the clusterware.
Modified: 05-MAY-05 Ref #: ID-6741

--------------------------------------------------------------------------------

Is it possible to use ASM for the OCR and voting disk?


No, the OCR and voting disk must be on raw or CFS (cluster filesystem).
Modified: 19-JUL-05 Ref #: ID-6929

--------------------------------------------------------------------------------

During CRS installation, I am asked to define a private node name, and then on the
next screen asked to define which interfaces should be used as private and public
interfaces. What information is required to answer these questions?
The private names on the first screen determine which private interconnect will be
used by CSS.
Provide exactly one name that maps to a private IP address, or just the IP address
itself. If a logical name is used, then the IP address it maps to can be changed
subsequently, but if an IP address is specified CSS will always use that IP
address. CSS cannot use multiple private interconnects for its communication,
hence only one name or IP address can be specified.

The private interconnect enforcement page determines which private interconnect
will be used by the RAC instances.
It's equivalent to setting the CLUSTER_INTERCONNECTS init.ora parameter, but is
more convenient because it is a cluster-wide setting that does not have to be
adjusted every time you add nodes or instances. RAC will use all of the
interconnects listed as private in this screen, and they all have to be up, just
as their IP addresses have to be when specified in the init.ora parameter. RAC
does not fail over between cluster interconnects; if one is down then the
instances using them won't start.

Modified: 24-MAR-04 Ref #: ID-4724

--------------------------------------------------------------------------------

Can I change the name of my cluster after I have created it when I am using Oracle
Database 10g Clusterware?
No, you must properly deinstall CRS and then re-install. To properly deinstall
CRS, you MUST follow the directions in the Installation Guide Chapter 10. This
will ensure the OCR gets cleaned out.
Modified: 05-OCT-04 Ref #: ID-5890

--------------------------------------------------------------------------------

Can I change the public hostname in my Oracle Database 10g Cluster using Oracle
Clusterware?
Hostname changes are not supported in CRS, unless you want to perform a deletenode
followed by a new addnode operation.
Modified: 05-OCT-04 Ref #: ID-5892

--------------------------------------------------------------------------------

What should the permissions be set to for the voting disk and ocr when doing a RAC
Install?
The Oracle Real Application Clusters install guide is correct. It describes the
PRE-INSTALL ownership/permission requirements for the OCR and voting disk. This
step is needed to make sure that the CRS install succeeds. Please don't use those
values to determine what the ownership/permission should be POST INSTALL. The
root script will change the ownership/permission of the OCR and voting disk as
part of the install. The POST INSTALL permissions will end up being:

  OCR         - root:oinstall   - 640
  Voting Disk - oracle:oinstall - 644
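
To verify the post-install state, something like the following can be used (the
raw device paths are hypothetical):

  ls -lL /dev/raw/raw1   # OCR:         expect root:oinstall,   640
  ls -lL /dev/raw/raw2   # voting disk: expect oracle:oinstall, 644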
Modified: 22-OCT-04 Ref #: ID-5988

--------------------------------------------------------------------------------

Which processes access the OCR?


Oracle Cluster Registry (OCR) is used to store the cluster configuration
information among other things. OCR needs to be accessible from all nodes in the
cluster. If OCR became inaccessible the CSS daemon would soon fail, and take down
the node. PMON never needs to write to OCR. To confirm if OCR is accessible, try
ocrcheck from your ORACLE_HOME and ORA_CRS_HOME.
Modified: 22-OCT-04 Ref #: ID-5990

--------------------------------------------------------------------------------

How do I restore OCR from a backup? On Windows, can I use ocopy?


The only recommended way to restore an OCR from a backup is "ocrconfig -restore ".
The ocopy command will not be able to perform the restore action for OCR.
Modified: 27-OCT-04 Ref #: ID-6008

--------------------------------------------------------------------------------

Does the hostname have to match the public name or can it be anything else?
When there is no vendor clusterware, only CRS, then the public node name must
match the host name. When vendor clusterware is present, it determines the public
node names, and the installer doesn't present an opportunity to change them. So,
when you have a choice, always choose the hostname.
Modified: 05-NOV-04 Ref #: ID-6050

--------------------------------------------------------------------------------

Is it a requirement to have the public interface linked to ETH0, or does it only
need to be on an ETH lower than the private interface? E.g.:
- public on ETH1
- private on ETH2
There is no requirement for interface name ordering. You could have:
- public on ETH2
- private on ETH0
Just make sure you choose the correct public interface in
VIPCA, and in the installer's interconnect classification screen.
Modified: 05-NOV-04 Ref #: ID-6052

--------------------------------------------------------------------------------

How to Restore a Lost Voting Disk used by Oracle Clusterware 10g


Please read Note:279793.1 and for OCR Note:268937.1
Modified: 02-DEC-04 Ref #: ID-6308

--------------------------------------------------------------------------------

With Oracle Clusterware 10g, how do you backup the OCR?


There is an automatic backup mechanism for OCR. The default location is :
$ORA_CRS_HOME\cdata\"clustername"\

To display backups : ocrconfig -showbackup


To restore a backup : ocrconfig -restore

The automatic backup mechanism keeps copies up to about a week old. So, if you
want to retain a backup copy longer than that, you should copy that "backup" file
to some other name.

Unfortunately there are a couple of bugs regarding backup file manipulation, and
changing the default backup dir on Windows. These will be fixed in 10.1.0.4. OCR
backups on Windows are absent. The only file in the backup directory is
temp.ocr, which is the last backup. You can restore this most recent backup
by using the command ocrconfig -restore temp.ocr

If you want to take a logical copy of the OCR at any time, use "ocrconfig
-export", and use the -import option to restore the contents back.
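
A short command sketch (the export file name is just an example):

  ocrconfig -showbackup                        # list the automatic backups
  ocrconfig -export /backup/ocr_logical.dmp    # logical copy
  ocrconfig -restore <backup file listed by -showbackup>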

Modified: 02-DEC-04 Ref #: ID-6328

--------------------------------------------------------------------------------

How do I protect the OCR and Voting in case of media failure?


In Oracle Database 10g Release 1 the OCR and Voting device are not mirrored within
Oracle, hence both must be mirrored via a storage vendor method, like RAID 1.
Starting with Oracle Database 10g Release 2, Oracle Clusterware will multiplex the
OCR and Voting Disk (two for the OCR and three for the Voting).
Please read Note:279793.1 and Note:268937.1 regarding backup and restore of a lost
Voting/OCR and FAQ 6238 regarding OCR backup.
Modified: 05-MAY-05 Ref #: ID-6612

--------------------------------------------------------------------------------

How do I use multiple network interfaces to provide High Availability for my
interconnect with Oracle Clusterware?
This needs to be done externally to Oracle Clusterware, usually by some
OS-provided NIC bonding which gives Oracle Clusterware a single IP address for the
interconnect but provides failover across multiple NIC cards. There are several
articles on Metalink on how to do this. For example, for Sun Solaris search for
IPMP. On Linux, read the doc on rac.us:
"Configure Redundant Network Cards / Switches for Oracle Database 10g Release 1
Real Application Cluster on Linux"
Modified: 06-APR-05 Ref #: ID-6680

--------------------------------------------------------------------------------

How do I put my application under the control of Oracle Clusterware to achieve
higher availability?
First write a control agent. It must accept 3 different parameters: start (the
control agent should start the application), check (the control agent should
check the application) and stop (the control agent should stop the application).
Secondly you must create a profile for your application using crs_profile.
Thirdly you must register your application as a resource with Oracle Clusterware
(crs_register). See the RAC Admin and Deployment Guide for details; a minimal
sketch follows below.
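
A minimal sketch of such a control agent and its registration (the script path,
resource name and application commands are hypothetical; check the crs_profile
syntax for your release):

  #!/bin/sh
  # /opt/app/myapp_agent.sh - action script called by Oracle Clusterware
  case "$1" in
    start) /opt/app/bin/myapp start ;;              # start the application
    stop)  /opt/app/bin/myapp stop  ;;              # stop the application
    check) pgrep -f myapp >/dev/null || exit 1 ;;   # non-zero exit = not running
  esac
  exit 0

Then create a profile and register the resource:

  crs_profile -create myapp -t application -a /opt/app/myapp_agent.sh
  crs_register myapp
  crs_start myapp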
Modified: 16-JUN-05 Ref #: ID-6846

--------------------------------------------------------------------------------

Can I use Oracle Clusterware to provide cold failover of my 9i or 10g single
instance Oracle Databases?
Oracle does not provide the necessary wrappers to fail over single-instance
databases using Oracle Clusterware 10g Release 2. But since it's possible for
customers to use Oracle Clusterware to wrap arbitrary applications, it'd be
possible for them to wrap single-instance databases this way.
Modified: 01-JUL-05 Ref #: ID-6891
--------------------------------------------------------------------------------

Does Oracle Clusterware support application vips?


Yes, with Oracle Database 10g Release 2, Oracle Clusterware now supports an
"application" vip. This is to support putting applications under the control of
Oracle Clusterware using the new high availability API and allow the user to use
the same URL or connection string regardless of which node in the cluster the
application is running on. The application vip is a new resource defined to Oracle
Clusterware and is a functional vip. It is defined as a dependent resource to the
application. There can be many vips defined, typically one per user application
under the control of Oracle Clusterware. You must first create a profile
(crs_profile), then register it with Oracle Clusterware (crs_register). The usrvip
script must run as root.
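
A minimal sketch of creating such an application VIP (the interface name, address,
netmask and resource name are placeholders):

  crs_profile -create myapp.vip -t application \
      -a $ORA_CRS_HOME/bin/usrvip \
      -o oi=eth0,ov=192.168.2.100,on=255.255.255.0
  crs_register myapp.vip
  crs_setperm myapp.vip -o root    # the usrvip action script must run as root
  crs_start myapp.vip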
Modified: 11-JUL-05 Ref #: ID-6893

--------------------------------------------------------------------------------

Why is the home for Oracle Clusterware not recommended to be a subdirectory of the
Oracle base directory?
If anyone other than root has write permissions to the parent directories of the
CRS home, then they can give themselves root escalations. This is a security
issue. The CRS home itself is a mix of root and non-root permissions, as
appropriate to the security requirements. Please follow the install docs regarding
which group is your primary group and what other groups you need to create and be
a member of.

Modified: 11-JUL-05 Ref #: ID-6915

--------------------------------------------------------------------------------


9.6 JRE:
========

JRE:
----

Oracle 9.2 uses JRE 1.3.1

- Java Compiler (javac): compiles programs written in the Java programming
  language into bytecodes.

- Java Interpreter (java): executes Java bytecodes. In other words, it runs
  programs written in the Java programming language.

- Java Runtime Interpreter (jre): similar to the Java Interpreter (java), but
  intended for end users who do not require all the development-related options
  available with the java tool.
The PATH statement enables Windows to find the executables (javac, java, javadoc,
etc.)
from any current directory.

The CLASSPATH tells the Java virtual machine and other applications (which are
located in the
"jdk_<version>\bin" directory) where to find the class libraries, such as
classes.zip file
(which is in the lib directory).
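
For example (the install paths are illustrative; point them at your actual JDK
location):

  PATH=/opt/app/oracle/jdk/bin:$PATH
  CLASSPATH=.:/opt/app/oracle/jdk/lib/classes.zip
  export PATH CLASSPATH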

Note 1:
-------

Suppose on a Solaris 5.9 machine with Oracle 9.2, we search for jre:

# find . -name "jre*" -print

./opt/app/oracle/product/9.2/inventory/filemap/jdk/jre
./opt/app/oracle/product/9.2/jdk/jre
./opt/app/oracle/jre
./opt/app/oracle/jre/1.1.8/bin/sparc/native_threads/jre
./opt/app/oracle/jre/1.1.8/bin/jre
./opt/app/oracle/jre/1.1.8/jre_config.txt
./usr/j2se/jre
./usr/iplanet/console5.1/bin/base/jre
./usr/java1.2/jre

Suppose on an AIX 5.2 machine with Oracle 9.2, we search for jre:

./apps/oracle/product/9.2/inventory/filemap/jdk/jre
./apps/oracle/product/9.2/inventory/filemap/jre
./apps/oracle/product/9.2/jdk/jre
./apps/oracle/product/9.2/jre
./apps/oracle/oraInventory/filemap/apps/oracle/jre
./apps/oracle/oraInventory/filemap/apps/oracle/jre/1.3.1/jre
./apps/oracle/jre
./apps/oracle/jre/1.1.8/bin/jre
./apps/oracle/jre/1.1.8/bin/aix/native_threads/jre
./apps/oracle/jre/1.3.1/jre
./apps/ora10g/product/10.2/jdk/jre
./apps/ora10g/product/10.2/jre
./usr/java131/jre
./usr/idebug/jre

Note 2:
-------

jre - The Java Runtime Interpreter (Solaris)


jre interprets (executes) Java bytecodes.
SYNOPSIS
jre [ options ] classname <args>

DESCRIPTION
The jre command executes Java class files. The classname argument is the name of
the class to be executed.
Any arguments to be passed to the class must be placed after the classname on the
command line.
Class paths for the Solaris version of the jre tool can be specified using the
CLASSPATH environment variable
or by using the -classpath or -cp options. The Windows version of the jre tool
ignores the CLASSPATH
environment variable. For both Solaris and Windows, the -cp option is recommended
for specifying class paths
when using jre.

OPTIONS
-classpath path(s)
Specifies the path or paths that jre uses to look up classes. Overrides the
default or the CLASSPATH environment
variable if it is set. If more than one path is specified, they must be separated
by colons.
Each path should end with the directory containing the class file(s) to be
executed.
However, if a file to be executed is a zip or jar file, the path to that file must
end with the file's name.
Here is an example of an argument for -classpath that specifies three paths
consisting of the current directory
and two additional paths:
.:/home/xyz/classes:/usr/local/java/classes/MyClasses.jar

-cp path(s)
Prepends the specified path or paths to the base classpath or path given by the
CLASSPATH environment variable.
If more than one path is specified, they must be separated by colons. Each path
should end with the directory
containing the class file(s) to be executed. However, if a file to be executed is
a zip or jar file,
the path to that file must end with the file's name. Here is an example of an
argument for -cp that specifies
three paths consisting of the current directory and two additional paths:
.:/home/xyz/classes:/usr/local/java/classes/MyClasses.jar

-help
Print a usage message.

-mx x
Sets the maximum size of the memory allocation pool (the garbage collected heap)
to x.
The default is 16 megabytes of memory. x must be greater than or equal to 1000
bytes.
By default, x is measured in bytes. You can specify x in either kilobytes or
megabytes by appending the letter
"k" for kilobytes or the letter "m" for megabytes.

-ms x
Sets the startup size of the memory allocation pool (the garbage collected heap)
to x. The default is 1 megabyte
of memory. x must be > 1000 bytes.
By default, x is measured in bytes. You can specify x in either kilobytes or
megabytes by appending the letter
"k" for kilobytes or the letter "m" for megabytes.

-noasyncgc
Turns off asynchronous garbage collection. When activated no garbage collection
takes place unless
it is explicitly called or the program runs out of memory. Normally garbage
collection runs as an
asynchronous thread in parallel with other threads.

-noclassgc
Turns off garbage collection of Java classes. By default, the Java interpreter
reclaims space for unused
Java classes during garbage collection.

-nojit
Specifies that any JIT compiler should be ignored and instead invokes the default
Java interpreter.

-ss x
Each Java thread has two stacks: one for Java code and one for C code. The -ss
option sets the maximum stack size
that can be used by C code in a thread to x. Every thread that is spawned during
the execution of the program
passed to jre has x as its C stack size. The default units for x are bytes. The
value of x must be greater than
or equal to 1000 bytes.
You can modify the meaning of x by appending either the letter "k" for kilobytes
or the letter "m" for megabytes.
The default stack size is 128 kilobytes ("-ss 128k").

-oss x
Each Java thread has two stacks: one for Java code and one for C code. The -oss
option sets the maximum stack size
that can be used by Java code in a thread to x. Every thread that is spawned
during the execution of the program
passed to jre has x as its Java stack size. The default units for x are bytes. The
value of x must be greater
than or equal to 1000 bytes.
You can modify the meaning of x by appending either the letter "k" for kilobytes
or the letter "m" for megabytes.
The default stack size is 400 kilobytes ("-oss 400k").

-v, -verbose
Causes jre to print a message to stdout each time a class file is loaded.

-verify
Performs byte-code verification on the class file. Beware, however, that java
-verify does not perform
a full verification in all situations. Any code path that is not actually executed
by the interpreter
is not verified. Therefore, java -verify cannot be relied upon to certify class
files unless all code paths
in the class file are actually run.

-verifyremote
Runs the verifier on all code that is loaded into the system via a classloader.
verifyremote is the default
for the interpreter.

-noverify
Turns verification off.

-verbosegc
Causes the garbage collector to print out messages whenever it frees memory.

-DpropertyName=newValue
Defines a property value. propertyName is the name of the property whose value you
want to change and newValue
is the value to change it to. For example, this command line
% jre -Dawt.button.color=green ...

sets the value of the property awt.button.color to "green". jre accepts any number
of -D options on the command line.

ENVIRONMENT VARIABLES
CLASSPATH
You can use the CLASSPATH environment variable to specify the path to the class
file or files that you want to execute.
CLASSPATH consists of a colon-separated list of directories that contain the class
files to be executed. For example:
.:/home/xyz/classes

If the file to be executed is a zip file or a jar file, the path should end with
the file name. For example:
.:/usr/local/java/classes/MyClasses.jar

SEE ALSO
CLASSPATH

Note 3:
-------

Solaris: Installing IBM JRE, Version 1.3.1


To install JRE 1.3.1 on Solaris, follow these steps:

Log on as root.
Insert the IBM Tivoli Access Manager for Solaris CD.
Install the IBM JRE 1.3.1 package:
pkgadd -d /cdrom/cdrom0/solaris -a /cdrom/cdrom0/solaris/pddefault SUNWj3rt
where -d /cdrom/cdrom0/solaris specifies the location of the package and -a
/cdrom/cdrom0/solaris/pddefault
specifies the location of the installation administration script.

Set the PATH environment variable:

PATH=/usr/j2se/jre/bin:$PATH
export PATH
After you install IBM JRE 1.3.1, no configuration is necessary.

###################################################################################

=========
30 LOBS:
=========

30.1 General LOB info:
----------------------

Note 1:
=======

A LOB is a Large Object. LOBs are used to store large, unstructured data, such as
video, audio,
photo images etc. With a LOB you can store up to 4 Gigabytes of data.
They are similar to a LONG or LONG RAW but differ from them in quite a few ways.

LOBs offer more features to the developer than a LONG or LONG RAW. The main
differences between
the data types also indicate why you would use a LOB instead of a LONG or LONG
RAW. These differences
include the following:

- You can have more than one LOB column in a table, whereas you are restricted
  to just one LONG or LONG RAW column per table.
- When you insert into a LOB, the actual value of the LOB is stored in a separate
  segment (except for in-line LOBs) and only the LOB locator is stored in the
  row, thus making it more efficient from a storage as well as a query
  perspective. With LONG or LONG RAW, the entire data is stored in-line with the
  rest of the table row.
- LOBs allow random access to their data, whereas with a LONG you have to go in
  for a sequential read of the data from beginning to end.
- The maximum length of a LOB is 4 Gig as compared to a 2 Gig limit on LONG.
- Querying a LOB column returns the LOB locator and not the entire value of the
  LOB. On the other hand, querying a LONG returns the entire value contained
  within the LONG column.

You can have two categories of LOBs based on their location with respect to the
database. The categories
include internal LOBs and external LOBs. As the names suggest, internal LOBs are
stored within the database,
as table columns. External LOBs are stored outside the database as operating
system files.
Only a reference to the actual OS file is stored in the database. An internal LOB
can also be persistent
or temporary depending on the life of the internal LOB.

An internal LOB can be one of three different data types, as follows:

- CLOB  - A Character LOB. Used to store character data.
- BLOB  - A Binary LOB. Used to store binary, raw data.
- NCLOB - A LOB that stores character data that corresponds to the national
  character set defined for the database.

The only external LOB data type in Oracle 8i is called a BFILE.

- BFILE - Short for Binary File. These hold references to large binary data
  stored as physical files in the OS outside the database.

DBA_LOBS displays the BLOBs and CLOBs contained in all tables in the database.
BFILEs are stored outside the database,
so they are not described by this view. This view's columns are the same as those
in "ALL_LOBS".

NCLOB and CLOB are both encoded in an internal fixed-width Unicode character set.

CLOB  = Character Large Object          4 Gigabytes
NCLOB = National Character Large Object 4 Gigabytes
BLOB  = Binary Large Object             4 Gigabytes
BFILE = pointer to binary file on disk  4 Gigabytes

- A limited number of BFILEs can be open simultaneously per session. The
  initialization parameter SESSION_MAX_OPEN_FILES defines an upper limit on the
  number of simultaneously open files in a session.

  The default value for this parameter is 10. That is, you can open a maximum of
  10 files at the same time per session if the default value is utilized. If you
  want to alter this limit, the database administrator can change the value of
  this parameter in the init.ora file. For example:

  SESSION_MAX_OPEN_FILES=20

  If the number of unclosed files exceeds the SESSION_MAX_OPEN_FILES value then
  you will not be able to open any more files in the session. To close all open
  files, use the FILECLOSEALL call.
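
In PL/SQL this is DBMS_LOB.FILECLOSEALL; a one-line sketch:

  BEGIN
    DBMS_LOB.FILECLOSEALL;  -- closes every BFILE opened in this session
  END;
  /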

- LOB locators
  Regardless of where the value of the internal LOB is stored, a locator is
  stored in the row. You can think of a LOB locator as a pointer to the actual
  location of the LOB value. A LOB locator is a locator to an internal LOB,
  while a BFILE locator is a locator to an external LOB. When the term locator
  is used without an identifying prefix term, it refers to both LOB locators and
  BFILE locators.

- Internal LOB Locators
  For internal LOBs, the LOB column stores a locator to the LOB's value, which
  is stored in a database tablespace. Each LOB column/attribute for a given row
  has its own distinct LOB locator and copy of the LOB value stored in the
  database tablespace.

- LOB Locator Operations
  Setting the LOB column/attribute to contain a locator:
  Before you can start writing data to an internal LOB, the LOB column/attribute
  must be made non-null, that is, it must contain a locator. Similarly, before
  you can start accessing the BFILE value, the BFILE column/attribute must be
  made non-null.

  For internal LOBs, you can accomplish this by initializing the internal LOB to
  empty in an INSERT/UPDATE statement using the functions EMPTY_BLOB() for BLOBs
  or EMPTY_CLOB() for CLOBs and NCLOBs.

  For external LOBs, you can initialize the BFILE column to point to an external
  file by using the BFILENAME() function.
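
A minimal SQL sketch of both initializations (the table, DIRECTORY object and
file names are hypothetical; the DIRECTORY object must already exist for the
BFILE):

  CREATE TABLE demo_lobs
  ( id  NUMBER,
    doc CLOB,
    img BLOB,
    ext BFILE );

  INSERT INTO demo_lobs
  VALUES (1, EMPTY_CLOB(), EMPTY_BLOB(), BFILENAME('DOC_DIR', 'photo1.jpg'));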

Note 2:
=======

From: Oracle, Kalpana Malligere 29-Aug-01 14:50


Subject: Re : What is my best LOB choice

Hello,

There are several articles/discussions available in the MetaLink Repository which
discuss LOBs, including BFILEs.
They are accessible via the Search option, and the following articles should
assist you in making your choice:

66431.1  LOBS - Storage, Redo and Performance Issues
66046.1  Oracle8i: LOBs
107441.1 Comparison between LOBs, and LONG & LONG Raw Datatypes

To find any performance comparison between BFILEs and BLOBs, the best
suggestion is to try a small-scale test. One customer wrote that his rule of
thumb is that a small number
of large LOBs => BFILE, and a large number of small LOBs => BLOB.

The BLOB datatype can store up to 4Gb of data. BLOBs can participate fully in
transactions.
Changes made to a BLOB value by the DBMS_LOB package, PL/SQL, or the OCI can be
committed or rolled back.
The BFILE datatype stores unstructured binary data (such as image files) in
operating-system files
outside the database. A BFILE column or attribute stores a file locator that
points to an external file
containing the data. BFILEs can also store up to 4Gb of data.

However, BFILEs are read-only; you cannot modify them. They support only random
(not sequential) reads,
and they do not participate in transactions. The underlying operating system must
maintain the file integrity
and durability for BFILEs. The database administrator must ensure that the file
exists and that Oracle processes
have operating-system read permissions on the file.

Your application will have an impact on which is preferable. BFILEs will really
help if your application is
web based, because you can access them through an anonymous FTP connection in the
browser by passing
the URL to the HTML. You can also do this through a regular BLOB, but this would
make you drag the
entire image through the Oracle server buffer cache every time it is requested.
The separation of the backup
can be beneficial, especially if the image files are mostly static. This
reduces the backup volume of
the database itself. You also don't need a special program for loading them into
the database.
You just copy the files to the OS and run a DML statement to add them. This way
you also avoid the redo
created by inserting them as an internal BLOB.

On the other side of the coin, you will have to devise a file naming
convention/directory structure to prevent
overwriting the BFILE's.
You may want to do only one backup instead of both. With BLOBs, if you backup the
database,
you have everything needed. You won't be able to update a BFILE through the
database; you will always have to
make modifications through the OS. LOB types can be replicated, but not BFILEs.

The Oracle 8i Application Developer's Guide - Large Objects (LOBs) provides
information on the various
programmatic environments and how to operate on LOB and BFILE data. Questions on
these capabilities
should be posted to the appropriate forum (i.e. Oracle PL/SQL, Oracle Call
Interface, Oracle Precompiler, etc.).

To answer your question, it depends on how you want to use the data.
A LOB is stored in line by default if it is less than 3,960 bytes, whereas an
out-of-line LOB takes about
20 bytes per row. An inline LOB (i.e. one that is actually stored in the row) is
always logged, but an out-of-line
LOB can be made non-logging. The preference is always to DISABLE STORAGE IN ROW,
but if your LOBs are actually very small,
and the way you use them is sufficiently special, then you may want to store them
in line.
But if so, they could probably become simple VARCHAR2(4000) columns.
Note - the minimum size an out-of-line LOB can use is one Oracle block (plus a bit
of extra space in the LOBINDEX).

Thanks!
Kalpana
Oracle Technical Support

Note 3:
=======

Doc ID: Note:66431.1  Content Type: TEXT/PLAIN
Subject: LOBS - Storage, Redo and Performance Issues
Creation Date: 05-NOV-1998  Last Revision Date: 25-JUL-2002
Type: BULLETIN  Status: PUBLISHED

Introduction
~~~~~~~~~~~~
This is a short note on the internal storage of LOBs. The information
here is intended to supplement the documentation and other notes
which describe how to use LOBS. The focus is on the storage characteristics
and configuration issues which can affect performance.

There are 4 types of LOB:

CLOB, BLOB, NCLOB - stored internally to Oracle
BFILE             - stored externally

The note mainly discusses the first 3 types of LOB which as stored INTERNALLY
within the Oracle DBMS. BFILE's are pointers to external files and
are only mentioned briefly.
Examples of handling LOBs can be found in
[NOTE:47740.1]

Attributes
~~~~~~~~~~
There are many attributes associated with LOB columns. The aim here
is to cover the fundamental points about each of the main attributes.
The attributes for each LOB column are specified using the
"LOB (lobcolname) STORE AS ..." syntax.

A table containing LOBs (CLOB, NCLOB and BLOB) creates 2 additional
disk segments per LOB column - a LOBINDEX and a LOBSEGMENT. These
can be viewed, along with the LOB attributes, using the dictionary views:

    DBA_LOBS, ALL_LOBS or USER_LOBS

which give the columns:

    OWNER         Table Owner
    TABLE_NAME    Table name
    COLUMN_NAME   Column name in the table
    SEGMENT_NAME  Segment name of the LOBSEGMENT
    INDEX_NAME    Segment name of the LOBINDEX
    CHUNK         Chunk size (bytes)
    PCTVERSION    PctVersion
    CACHE         Cache option of the LOB Segment (yes/no)
    LOGGING       Logging mode of the LOB segment (yes/no)
    IN_ROW        Whether storage in row is allowed (yes/no)

SELECT
l.table_name as "TABLE",
l.column_name as "COLUMN",
l.segment_name as "SEGMENT",
l.index_name as "INDEX",
l.chunk as "CHUNKSIZE", l.LOGGING, l.IN_ROW, t.tablespace_name
FROM DBA_LOBS l, DBA_TABLES t
WHERE l.table_name=t.table_name AND
l.owner in ('VPOUSERDB','TRIDION_CM');

Storage Parameters
~~~~~~~~~~~~~~~~~~
By default LOB segments are created in the same tablespace as the
base table using the tablespaces default storage details. You can
specify the storage attributes of the LOB segments thus:

Create table DemoLob ( A number, B clob )
LOB(b)
STORE AS lobsegname (
TABLESPACE lobsegts
STORAGE (lobsegment storage clause)
INDEX lobindexname (
TABLESPACE lobidxts
STORAGE ( lobindex storage clause )
)
)
TABLESPACE tables_ts
STORAGE( tables storage clause )
;

CREATE TABLE t_lob
(DOCUMENT_NR NUMBER(16,0) NOT NULL,
DOCUMENT_BLOB BLOB NOT NULL
)
STORAGE
(INITIAL 100k
NEXT 100K
PCTINCREASE 0
MAXEXTENTS 100
)
TABLESPACE system
lob (DOCUMENT_BLOB) store as DOCUMENT_LOB
(tablespace ts storage
(initial 30K next 30K pctincrease 30 maxextents 3)
index (tablespace ts_index storage
(initial 40K next 40K pctincrease 40 maxextents 4)));

In 8.0 the LOB INDEX can be stored separately from the lob segment.
If a tablespace is specified for the LOB SEGMENT then the LOB INDEX
will be placed in the same tablespace UNLESS a different tablespace
is explicitly specified.
Unless you specify names for the LOB segments system generated names
are used.

In ROW Versus Out of ROW
~~~~~~~~~~~~~~~~~~~~~~~~
LOB columns can be allowed to store data within the row or not as detailed
below. Whether in-line storage is allowed or not can ONLY be specified
at creation time.

"STORE AS ( enable storage in row )"


Allows LOB data to be stored in the TABLE segment provided
it is less than about 4000 bytes.

The actual maximum in-line LOB is 3964 bytes.

If the lob value is greater than 3964 bytes then the LOB data is
stored in the LOB SEGMENT (ie: out of line). An out of line
LOB behaves as described under 'disable storage in row' except that
if its size shrinks to 3964 or less the LOB can again be stored
inline.

When a LOB is stored out-of-line in an 'enable storage in row'
LOB column, between 36 and 84 bytes of control data remain in-line
in the row piece.

In-line LOBS are subject to normal chaining and row migration
rules within Oracle. Ie: if you store a 3900 byte LOB in a row
with a 2K block size then the row piece will be chained across
two or more blocks.

Both REDO and UNDO are written for in-line LOBS as they are part
of the normal row data.

"STORE AS ( disable storage in row )"


This option prevents any size of LOB from being stored in-line.

Instead a 20 byte LOB locator is stored in the ROW, which gives
a unique identifier for a LOB in the LOB segment for this column.

The Lob Locator actually gives a key into the LOB INDEX which
contains a list of all blocks (or pages) that make up the LOB.

The minimum storage allocation for an out of line LOB is 1 database
BLOCK per LOB ITEM, and may be more if CHUNK is larger than a
single block.

UNDO is only written for the column locator and LOB INDEX changes.

No UNDO is generated for pages in the LOB SEGMENT.


Consistent Read is achieved by using page versions.
Ie: When you update a page of a LOB the OLD page remains and a
new page is created. This can appear to waste space but
old pages can be reclaimed and reused.

CHUNK size
~~~~~~~~~~
"STORE AS ( CHUNK bytes ) "
Can ONLY be specified at creation time.

In 8.0 values of CHUNK are in bytes and are rounded to the next
highest multiple of DB_BLOCK_SIZE without erroring.
Eg: If you specify a CHUNK of 3000 with a block size of 2K then
CHUNK is set to 4096 bytes.

"bytes" / DB_BLOCK_SIZE determines the unit of allocation of


blocks to an 'out of line' LOB in the LOB segment.
Eg: if CHUNK is 32K and the LOB is 'disable storage in row'
then even if the LOB is only 10 bytes long 32K will be
allocated in the LOB SEGMENT.

CHUNK does NOT affect in-line LOBS.
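
Eg: a minimal sketch of setting CHUNK at creation time (table and column names
are hypothetical):

  CREATE TABLE demo_chunk ( id NUMBER, txt CLOB )
  LOB (txt) STORE AS ( DISABLE STORAGE IN ROW CHUNK 8192 );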


PCTVERSION
~~~~~~~~~~
"STORE AS ( PCTVERSION n )"
PCTVERSION can be changed after creation using:
ALTER TABLE tabname MODIFY LOB (lobname) ( PCTVERSION n );

PCTVERSION affects the reclamation of old copies of LOB data.


This affects the ability to perform consistent read.

If a session is attempting to use an OLD version of a LOB
and that version gets overwritten (because PCTVERSION is too small)
then the user will typically see the errors:
ORA-01555: snapshot too old:
rollback segment number with name "" too small
ORA-22924: snapshot too old

PCTVERSION can prevent OLD pages being used and force the segment
to extend instead.

Do not expect PCTVERSION to be an exact percentage of space, as there
is an internal fudge factor applied.

CACHE
~~~~~
"STORE AS ( CACHE )" or "STORE AS ( NOCACHE )"
This option can be changed after creation using:
ALTER TABLE tabname MODIFY LOB (lobname) ( CACHE );
or
ALTER TABLE tabname MODIFY LOB (lobname) ( NOCACHE );

With NOCACHE set (the default) reads from and writes to the
LOB SEGMENT occur using direct reads and writes. This means that
the blocks are never cached in the buffer cache and the Oracle
shadow process performs the reads/writes itself.
The reads / writes show up under the wait events "direct path read"
and "direct path write" and multiple blocks can be read/written at
a time (provided the caller is using a large enough buffer size).

When set the CACHE option causes the LOB SEGMENT blocks to
be read / written via the buffer cache . Reads show up as
"db file sequential read" but unlike a table scan the blocks are
placed at the most-recently-used end of the LRU chain.

The CACHE option for LOB columns is different to the CACHE
option for tables, as CACHE_SIZE_THRESHOLD does not limit the
size of LOB read into the buffer cache. This means that extreme
caution is required, otherwise the read of a long LOB can effectively
flush the cache.

In-line LOBS are not affected by the CACHE option as they reside
in the actual table block (which is typically accessed via the buffer
cache any way).

The cache option can affect the amount of REDO generated for
out of line LOBS. With NOCACHE blocks are direct loaded and
so entire block images are written to the REDO stream. If CHUNK
is also set then enough blocks to cover CHUNK are written to REDO.
If CACHE is set then the block changes are written to REDO.
Eg: In the extreme case 'DISABLE STORAGE IN ROW NOCACHE CHUNK 32K'
would write redo for the whole 32K even if the LOB was only
5 characters long. CACHE would write a redo record describing the
5 byte change (taking about 100-200 bytes).

LOGGING
~~~~~~~
"STORE AS ( NOCACHE LOGGING )" or "STORE AS ( NOCACHE NOLOGGING )"
This option can be changed after creation but the LOGGING / NOLOGGING
attribute must be prefixed by the NOCACHE option. The CACHE option
implicitly enables LOGGING.

The default for this option is LOGGING.

If a LOB is set to NOCACHE NOLOGGING then updates to the LOB SEGMENT
are not logged to the redo logs. However, updates to in-line LOBs
are still logged as normal. As NOCACHE operations use direct
block updates, all LOB segment operations are affected.
NOLOGGING of the LOB segment means that if you have to recover the
database then sections of the LOB segment will be marked as corrupt
during recovery.

Space required for updates
~~~~~~~~~~~~~~~~~~~~~~~~~~
If a LOB is out-of-line then updates to pages if the LOB cause new
versions of those pages to be created. Rollback is achieved by reverting
back to the pre-updated page versions. This has implications on the
amount of space required when a LOB is being updated as the LOB SEGMENT
needs enough space to hold both the OLD and NEW pages concurrently in case
your transaction rolls back.
Eg: Consider the following:

    INSERT a large LOB     -- LOB SEGMENT extends to take the new pages
    COMMIT;
    DELETE the above LOB   -- the LOB pages are not yet free, as they
                           -- will be needed in case of rollback
    INSERT a new LOB       -- hence this insert may require more space
                           -- in the LOB SEGMENT
    COMMIT;                -- only after this point could the deleted
                           -- pages be reused

Performance Issues
~~~~~~~~~~~~~~~~~~~
Working with LOBs generally requires more than one round trip to the database.
The application first has to obtain the locator and only then can perform
operations against that locator. This is true for inline or out of line
LOBS.

The buffer size used to read / write the LOB can have a significant
impact on performance, as can the SQL*Net packet sizes.
Eg: With OCILobRead() a buffer size is specified for handling the LOB.
If this is small (say 2K) then there can be a round trip to the database
for each 2K chunk of the LOB. To make the issue worse the server will
only fetch the blocks needed to satisfy the current request so may
perform single block reads against the LOB SEGMENT. If however a larger
chunk size is used (say 32K) then the server can perform multiblock
operations and pass the data back in larger chunks.

There is a LOB buffering subsystem which can be used to help improve
the transfer of LOBs between the client and server processes. See the
documentation for details of this.
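
The same buffer-size principle applies server-side with DBMS_LOB. A minimal
PL/SQL sketch reading the t_lob table created earlier in 32K chunks rather than
tiny buffers (the loop structure is illustrative):

  DECLARE
    l_lob BLOB;
    l_buf RAW(32767);
    l_amt BINARY_INTEGER;
    l_pos INTEGER := 1;
  BEGIN
    SELECT document_blob INTO l_lob FROM t_lob WHERE document_nr = 1;
    LOOP
      l_amt := 32767;
      DBMS_LOB.READ(l_lob, l_amt, l_pos, l_buf);  -- raises NO_DATA_FOUND at end of LOB
      l_pos := l_pos + l_amt;
      -- process l_buf here
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      NULL;  -- end of LOB reached
  END;
  /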

BFILEs
~~~~~~
BFILEs are quite different to internal LOBS as the only real storage
issue is the space required for the inline locator. This is about 20 bytes
PLUS the length of the directory and filename elements of the BFILENAME.

The performance implications of the buffer size are the same as for internal
LOBS.

References
~~~~~~~~~~

[NOTE:162345.1]  LOBS - Storage, Read-consistency and Rollback

Note 4:
=======

Doc ID: Note:159995.1  Content Type: TEXT/X-HTML
Subject: Different Behaviors of Lob and Lobindex Segments in 8.0, 8i and 9i
Creation Date: 05-OCT-2001  Last Revision Date: 27-MAR-2003
Type: BULLETIN  Status: PUBLISHED
PURPOSE
-------
This bulletin lists the different behaviors of a lob index segment regarding
tablespace and storage values when:
-> creating the table with the lob and lob index segments
-> altering the associated lob segment and/or lob index segment.
SCOPE & APPLICATION
-------------------
For all DBAs who manage different versions of Oracle with databases containing
LOB segments, and who need to maintain the associated lob indexes.
Under 8i and 9i
In Oracle8i SQL Reference and Oracle9i SQL Reference, it is clearly stated that:
lob_index_clause
This clause is deprecated as of Oracle8i. Oracle generates an index for each LOB
column.
Oracle names and manages the LOB indexes internally. Although it is still possible
for
you to specify this clause, Oracle Corporation strongly recommends that you no
longer do
so. In any event, do not put the LOB index in a different tablespace from the LOB
data.
1.Lob and lobindex specifications at table creation
If you create a new table in release 8i and 9i and specify a tablespace
and storage values for the LOB index for a non-partitioned table, the
tablespace specification and storage values are ignored.
The LOB index is located in the same tablespace as the LOB segment
with the same storage values, except for the NEXT and MAXEXTENTS values:

- the NEXT value of the lobindex = INITIAL default value of the tablespace (LOB
  segment)
- the MAXEXTENTS value of the lobindex = unlimited value (2Gb)

SQL> CREATE TABLE t_lob
  2 (DOCUMENT_NR NUMBER(16,0) NOT NULL,
3 DOCUMENT_BLOB BLOB NOT NULL
4 )
5 STORAGE
6 (INITIAL 100k
7 NEXT 100K
8 PCTINCREASE 0
9 MAXEXTENTS 100
10 )
11 TABLESPACE system
12 lob (DOCUMENT_BLOB) store as DOCUMENT_LOB
13 (tablespace ts storage
14 (initial 30K next 30K pctincrease 30 maxextents 3)
15 index (tablespace ts_index storage
16 (initial 40K next 40K pctincrease 40 maxextents 4)));

Table created.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS 30720 10240 30 2147483645
DOCUMENT_LOB LOBSEGMENT TS 30720 30720 30 3
All storage modifications are based on this original table t_lob.
2.Lob and lobindex storage modifications
When you modify the storage values for the lob and lob index segments,
the values of the lob index are kept as initially set, except for PCT_INCREASE.
The PCTINCREASE value of the lob segment propagates to the lob index:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (storage (next 60K pctincrease 60 maxextents 6)
4 index (storage (next 70K pctincrease 70 maxextents 7)));
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS 30720 10240 60 2147483645
DOCUMENT_LOB LOBSEGMENT TS 30720 61440 60 6
3.Storage modifications of lob segment only
If you modify the storage values for the lob segment only, you get the same
behaviour:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (storage (next 60K pctincrease 60 maxextents 6));
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS 30720 10240 60 2147483645
DOCUMENT_LOB LOBSEGMENT TS 30720 61440 60 3
4.Storage modifications of lobindex segment only
If you modify the storage values for the lob index segment only, nothing is
altered:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (index (storage (next 70K pctincrease 70 maxextents 7)))
4 ;
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS 30720 10240 30 2147483645
DOCUMENT_LOB LOBSEGMENT TS 30720 30720 30 3
If you attempt to modify the storage values of the lob index directly,
you get an error message:
SQL> alter index SYS_IL0000020297C00002$$ storage (pctincrease 80);
alter index SYS_IL0000020297C00002$$ storage (pctincrease 80)
*
ERROR at line 1:
ORA-22864: cannot ALTER or DROP LOB indexes
SQL> alter index SYS_IL0000020297C00002$$ rebuild storage (pctincrease 60);
alter index SYS_IL0000020297C00002$$ rebuild storage (pctincrease 60)
*
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype LOB
Under 8.0
1.Lob and lobindex specifications at table creation
If you create a new table in release 8.0 and specify a tablespace for the LOB
index for a non-partitioned table, the tablespace specification and storage
values are honored.
The LOB index is located in the defined tablespace with the user-defined
storage values.
SQL> CREATE TABLE t_lob
2 (DOCUMENT_NR NUMBER(16,0) NOT NULL,
3 DOCUMENT_BLOB BLOB NOT NULL
4 )
5 STORAGE
6 (INITIAL 100k
7 NEXT 100K
8 PCTINCREASE 0
9 MAXEXTENTS 100
10 )
11 TABLESPACE system
12 lob (DOCUMENT_BLOB) store as DOCUMENT_LOB
13 (tablespace ts storage
14 (initial 30K next 30K pctincrease 30 maxextents 3)
15 index (tablespace ts_index storage
16 (initial 40K next 40K pctincrease 40 maxextents 4)));
Table created.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS_INDEX 40960 40960 40 4
DOCUMENT_LOB LOBSEGMENT TS 32768 30720 30 3
All storage modifications are based on this original table t_lob.
2.Lob and lobindex storage modifications
When you modify the storage values for the lob and lob index segments,
the values for the lobindex are kept as initially set:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (storage (next 60K pctincrease 60 maxextents 6)
4 index (storage (next 70K pctincrease 70 maxextents 7)));
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;
SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS_INDEX 40960 40960 40 4
DOCUMENT_LOB LOBSEGMENT TS 32768 61440 60 6
3.Storage modifications of lob segment only
If you modify the storage values for the lob segment only, you get the same
behavior:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (storage (next 60K pctincrease 60 maxextents 6));
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;

SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS_INDEX 40960 40960 40 4
DOCUMENT_LOB LOBSEGMENT TS 32768 61440 60 6

Again, the lob segment storage values do not impact the lob index.
4.Storage modifications of lobindex segment only
If you modify the storage values for the lob index segment only, nothing is
altered:
SQL> alter table t_lob
2 modify lob (document_blob)
3 (index (storage (next 70K pctincrease 70 maxextents 7)))
4 ;
Table altered.
SQL> select segment_name, segment_type, tablespace_name,
2 initial_extent, next_extent, pct_increase, max_extents
3 from user_segments;

SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- ---------
T_LOB TABLE SYSTEM 102400 102400 0 100
SYS_IL0000020297C00002$$ LOBINDEX TS_INDEX 40960 40960 40 4
DOCUMENT_LOB LOBSEGMENT TS 32768 30720 30 3

If you attempt to modify the storage values of the lob index directly,
you get an error message:
SQL> alter index SYS_IL0000020297C00002$$ storage (pctincrease 20);
alter index SYS_IL0000020297C00002$$ storage (pctincrease 20)
*
ERROR at line 1:
ORA-22864: cannot ALTER or DROP LOB indexes
Migration from 7 to 9i
The "Oracle9i Database Migration Release 1 (9.0.1)" documentation states:
LOB Index Clause
If you used the LOB index clause to store LOB index data in a tablespace
separate from the tablespace used to store the LOB, the index data
is relocated to reside in the same tablespace as the LOB.
If you used Export/Import to migrate from Oracle7 to Oracle9i, the index
data was relocated automatically during migration. However, the index data
was not relocated if you used the Migration utility or the Oracle Data
Migration Assistant.
RELATED DOCUMENTS
-----------------
<Note:66431.1> LOBS - Storage, Redo and Performance Issues
<Bug:1353339> ALTER TABLE MODIFY DEFAULT ATTRIBUTES LOB DOES NOT UPDATE LOB INDEX
DEFAULT TS
<Bug:1864548> LARGE LOB INDEX SEGMENT SIZE
<Bug:747326> ALTER TABLE MODIFY LOB STORAGE PARAMETER DOES'T WORK
<Bug:1244654> UNABLE TO CHANGE STORAGE CHARACTERISTICS FOR LOB INDEXES

Note 5:
=======

Calculate sizes:

Example
-------
SQL> create table my_lob
2 (idx number null, a_lob clob null, b_lob blob null)
3 storage (initial 20k maxextents 121 pctincrease 0 )
4 lob (a_lob, b_lob) store as
5 ( storage ( initial 100k next 100K maxextents 999 pctincrease 0));
Table created.
SQL> select object_name,object_type,object_id from user_objects order by 2;
OBJECT_NAME OBJECT_TYPE OBJECT_ID
---------------------------------------- ------------------ ----------
SYS_LOB0000004017C00002$$ LOB 4018
SYS_LOB0000004017C00003$$ LOB 4020
MY_LOB TABLE 4017
SQL> select bytes, s.segment_name,s.segment_type
2 from dba_segments s
3 where s.segment_name='MY_LOB';
BYTES SEGMENT_NAME SEGMENT_TYPE
---------- ------------------------------ ------------------
65536 MY_LOB TABLE
SQL> select sum(bytes), s.segment_name, s.segment_type
2 from dba_lobs l, dba_segments s
3 where s.segment_type = 'LOBSEGMENT'
4 and l.table_name = 'MY_LOB'
5 and s.segment_name = l.segment_name
6 group by s.segment_name,s.segment_type;
SUM(BYTES) SEGMENT_NAME SEGMENT_TYPE
---------- ------------------------------ ------------------
131072 SYS_LOB0000004017C00002$$ LOBSEGMENT
131072 SYS_LOB0000004017C00003$$ LOBSEGMENT
Therefore the total size for the table MY_LOB is:
65536 (for the table) + 131072 (for CLOB segment) + 131072 (for BLOB segment)
=> 327680 bytes
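
The same total can also be obtained in a single query (a sketch: it sums the
table segment plus the LOB segments that dba_lobs associates with the table):

SELECT SUM(s.bytes) AS total_bytes
FROM dba_segments s
WHERE s.segment_name = 'MY_LOB'
OR s.segment_name IN (SELECT l.segment_name
                      FROM dba_lobs l
                      WHERE l.table_name = 'MY_LOB');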

Note 6:
=======

Doc ID: Note:268476.1


Subject: LOB Performance Guideline
Type: WHITE PAPER
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 09-APR-2004
Last Revision Date: 22-JUN-2004

LOB Performance Guidelines

An Oracle White Paper

April 2004

LOB Performance Guidelines


Executive Overview
LOB Overview
Important Storage Parameters
  CHUNK (Definition, Points to Note, Recommendation)
  In-line and Out-of-Line storage: ENABLE STORAGE IN ROW and
    DISABLE STORAGE IN ROW (Definition, Points to Note, Recommendation)
  CACHE, NOCACHE (Definition, Points to Note, Recommendation)
  Consistent Reads on LOBs: RETENTION and PCTVERSION
    (Definition, Points to Note, Recommendation)
  LOGGING, NOLOGGING (Definition, Points to Note, Recommendation)
Performance Guideline - LOB Loading
  Points to Note
    Use array operations for LOB inserts
    Scalability problem with LOB disable storage in row option
    Row Chaining problem with the use of OCILobWrite API
    High number of consistent read blocks created and examined
    CPU time and Elapsed time - not reported accurately
    Reads/Writes are done one chunk at a time in synchronous way
    High CPU system time
    Buffer cache sizing problem
    Multi-byte character set conversion
    HWM enqueue contention
    RAC environment issues
    Other LOB performance related issues
APPENDIX A - LONG API access to LOB datatype
APPENDIX B - Migration from in-line to out-of-line (and out-of-line to
  in-line) storage
APPENDIX C - How LOB data is stored
  In-line LOB - LOB size less than 3964 bytes
  In-line LOB - LOB size = 3965 bytes (1 byte greater than 3964)
  In-line LOB - LOB size greater than 12 chunk addresses
  Out-of-line LOBs - All LOB sizes

LOB Performance Guidelines

Executive Overview

This document gives a brief overview of Oracle's LOB data structure,
emphasizing various storage parameter options, and describes scenarios where
those storage parameters are best used. The purpose of the latter is to help
readers select the appropriate LOB storage options. This paper assumes that
most customers load LOB data once and retrieve it many times (less than 10%
of DML is update and delete), so the performance guidelines provided here are
for LOB loading.

LOBs were designed to efficiently store and retrieve large amounts of data. Small
LOBs (< 1MB) perform better
than LONGs for inserts, and have comparable performance on selects. Large LOBs
perform better than LONGs in general.

Oracle recommends the use of LOBs to store unstructured or semi-structured
data, and has provided a LONG API to allow ease of migration from LONGs to
LOBs. Oracle plans to de-support LONGs in the future.

LOB Overview

Whenever a table containing a LOB column is created, two segments are created to
hold the specified LOB column.
These segments are of type LOBSEGMENT and LOBINDEX.
The LOBINDEX segment is used to access LOB chunks/pages that are stored in the
LOBSEGMENT segment.

CREATE TABLE foo (pkey NUMBER, bar BLOB);

SELECT segment_name, segment_type FROM user_segments;

SEGMENT_NAME               SEGMENT_TYPE
FOO                        TABLE
SYS_IL0000009792C00002$$   LOBINDEX
SYS_LOB0000009792C00002$$  LOBSEGMENT  (also referred to as LOB chunks/pages)

Here 9792 is the object_id of the parent table FOO (if a table has more than
one LOB column, LOB segment names are generated differently; use the
dba|user_lobs view to get the parent table association).

The LOBSEGMENT and the LOBINDEX segments are stored in the same tablespace as
the table containing the LOB, unless otherwise specified.[1]
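
The dictionary view user_lobs (or dba_lobs) ties the generated segment names
back to the owning table and column, e.g.:

SELECT table_name, column_name, segment_name, index_name, chunk, in_row
FROM user_lobs
WHERE table_name = 'FOO';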

Important Storage Parameters

This section defines the important storage parameters of a LOB column (or a
LOB attribute). For each definition we describe the effects of the parameter,
and give recommendations on how to get better performance and avoid errors.

CHUNK

Definition

CHUNK is the smallest unit of LOBSEGMENT allocation. It is a multiple of
DB_BLOCK_SIZE.

Points to Note

- For example, if the value of CHUNK is 8K and an inserted LOB is only 1K in
  size, then 1 chunk is allocated and 7K are wasted in that chunk. The CHUNK
  option does NOT affect in-line LOBs (see the definition in the next
  section).

- Choose an appropriate chunk size for best performance and also to avoid
  space wastage. The maximum chunk size is 32K.

- The CHUNK parameter cannot be altered.

Recommendation

Choose a chunk size for optimal performance and minimum space wastage. For
LOBs that are less than 32K, a chunk size that is 60% (or more) of the LOB
size is a good starting point. For LOBs larger than 32K, choose a chunk size
equal to the frequent update size.
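
A sketch of this recommendation (all names and sizes are illustrative): for
LOBs averaging ~10K on an 8K block size database, an 8K chunk is a reasonable
starting point:

CREATE TABLE doc_store
( doc_id   NUMBER,
  doc_body BLOB
)
LOB (doc_body) STORE AS
( TABLESPACE lob_data
  CHUNK 8K );          -- CHUNK cannot be altered later, so choose carefully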

In-line and Out-of-Line storage: ENABLE STORAGE IN ROW and DISABLE STORAGE IN ROW

Definition

LOB storage is said to be in-line when the LOB data is stored with the other
column data in the row. A LOB can only be stored in-line if its size is less
than ~4000 bytes. For in-line LOB data, space is allocated in the table
segment (the LOBINDEX and LOBSEGMENT segments are empty).

LOB storage is said to be out-of-line when the LOB data is stored in CHUNK
sized blocks in the LOBSEGMENT segment, separate from the other columns' data.

ENABLE STORAGE IN ROW allows LOB data to be stored in the table segment
provided it is less than ~4000 bytes.

DISABLE STORAGE IN ROW prevents LOB data from being stored in-line, regardless
of the size of the LOB. Instead only a 20-byte LOB locator is stored with the
other column data in the table segment.

Points to Note

- In-line LOBs are subject to normal chaining and row migration rules within
  Oracle. If you store a 3900 byte LOB in a row with a 2K block size, then
  the row will be chained across two or more blocks. Both REDO and UNDO are
  written for in-line LOBs as they are part of the normal row data. The
  CHUNK option does not affect in-line LOBs.

- With out-of-line storage, UNDO is written only for the LOB locator and
  LOBINDEX changes. No UNDO is generated for chunks/pages in the LOBSEGMENT.
  Consistent Read is achieved by using page versions (see the RETENTION or
  PCTVERSION options).

- DML operations on out-of-line LOBs can generate high amounts of redo
  information, because redo is generated for the entire chunk. For example,
  in the extreme case, DISABLE STORAGE IN ROW CHUNK 32K would write redo for
  the whole 32K even if the LOB changes were only 5 bytes.

- When in-line LOB data is updated, and if the new LOB size is greater than
  3964 bytes, then it is migrated and stored out-of-line. If this migrated
  LOB is updated again and its size becomes less than 3964 bytes, it is not
  moved back in-line (except when we use the LONG API for the update).

- The ENABLE|DISABLE STORAGE IN ROW parameters cannot be altered.

Recommendation

Use ENABLE STORAGE IN ROW, except in cases where the LOB data is not
retrieved as much as the other columns' data. In this case, if the LOB data
is stored out-of-line, the biggest gain is achieved while performing full
table scans, as the operation does not retrieve the LOB's data.
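
A minimal sketch of both options (table and column names are illustrative):

CREATE TABLE t_inline (id NUMBER, txt CLOB)
LOB (txt) STORE AS (ENABLE STORAGE IN ROW);
-- LOB data under ~4000 bytes stays in the row; larger LOBs move out

CREATE TABLE t_outline (id NUMBER, txt CLOB)
LOB (txt) STORE AS (DISABLE STORAGE IN ROW);
-- only the 20-byte locator is stored in the row, regardless of LOB size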

CACHE, NOCACHE

Definition
The CACHE storage parameter causes LOB data blocks to be read/written via the
buffer cache.

With the NOCACHE storage parameter, LOB data is read/written using direct
reads/writes. This means that the LOB data blocks are never in the buffer
cache and the Oracle server process performs the reads/writes.

Points to Note

- With the CACHE option, LOB data reads show up as wait event 'db file
  sequential read'; writes are performed by the DBWR process. With the
  NOCACHE option, LOB data reads/writes show up as wait events 'direct path
  read (lob)'/'direct path write (lob)'. Corresponding statistics are
  'physical reads direct (lob)' and 'physical writes direct (lob)'.

- In-line LOBs are not affected by the CACHE option as they reside with the
  other column data, which is typically accessed via the buffer cache.

- The CACHE option gives better read/write performance than the NOCACHE
  option.

- The CACHE option for LOB columns is different from the CACHE option for
  tables. This means that caution is required, otherwise the read of a large
  LOB can effectively flush the buffer cache.

- The CACHE|NOCACHE option can be altered.

Recommendation

Enable caching, except for cases where caching LOBs would severely impact
performance for other online users,
by forcing these users to perform disk reads rather than getting cache hits.
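
Since CACHE|NOCACHE can be altered, it can be switched around a batch window,
e.g. (illustrative names):

ALTER TABLE doc_store MODIFY LOB (doc_body) (NOCACHE);
-- ... large scans / loads that should not flush the buffer cache ...
ALTER TABLE doc_store MODIFY LOB (doc_body) (CACHE);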

Consistent Reads on LOBs: RETENTION and PCTVERSION

Consistent Read (CR) on LOBs uses a different mechanism than that used for
other data blocks in Oracle. Older versions of the LOB are retained in the
LOB segment and CR is used on the LOB index to access these older versions
(for in-line LOBs, which are stored in the table segment, the regular UNDO
mechanism is used). There are two ways to control how long older versions are
maintained.

Definition

- RETENTION - time-based: this specifies how long older versions are to be
  retained.

- PCTVERSION - space-based: this specifies what percentage of the LOB segment
  is to be used to hold older versions.

Points to Note

- RETENTION is a keyword in the LOB column definition. No value can be
  specified for RETENTION; the RETENTION value is implicit. If a LOB is
  created with database compatibility set to 9.2.0.0 or higher, automatic
  undo management in use (UNDO_MANAGEMENT=AUTO), and PCTVERSION not
  explicitly specified, time-based retention is used. The LOB RETENTION
  value is always equal to the value of the UNDO_RETENTION database instance
  parameter.

- You cannot specify both PCTVERSION and RETENTION.

- PCTVERSION is applicable only to LOB chunks/pages allocated in LOBSEGMENTs.
  Other LOB related data in the table column and the LOBINDEX segment use the
  regular undo mechanism.

- PCTVERSION=0: the space allocated for older versions of LOB data in
  LOBSEGMENTs can be reused by other transactions and can cause 'snapshot
  too old' errors.

- PCTVERSION=100: the space allocated by older versions of LOB data can
  never be reused by other transactions. LOB data storage space is never
  reclaimed and it always increases.

- RETENTION and PCTVERSION can be altered.

Recommendation

Time-based retention using the RETENTION keyword is preferred.

A high value for RETENTION or PCTVERSION may be needed to avoid 'snapshot too
old' errors in environments
with high concurrent read/write LOB access.
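
Both can be set or changed with ALTER TABLE, e.g. (illustrative names;
RETENTION assumes automatic undo management and compatibility >= 9.2.0):

ALTER TABLE doc_store MODIFY LOB (doc_body) (PCTVERSION 20);
ALTER TABLE doc_store MODIFY LOB (doc_body) (RETENTION);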

LOGGING, NOLOGGING

Definition

LOGGING: enables logging of LOB data changes to the redo logs.

NOLOGGING: changes to LOB data (stored in LOBSEGMENTs) are not logged to the
redo logs; however, in-line LOB changes are still logged as normal.

Points to Note

- The CACHE option implicitly enables LOGGING.

- If NOLOGGING was set, and if you have to recover the database, then
  sections of the LOBSEGMENT will be marked as corrupt during recovery
  (LOBINDEX changes are logged to the redo logs and are recovered, but the
  corresponding LOBSEGMENT changes are not logged for recovery).

- LOGGING|NOLOGGING can be altered. The NOCACHE option is required to turn
  off LOGGING, e.g. (NOCACHE NOLOGGING).

Recommendation

Use NOLOGGING only when doing bulk loads or migrating from LONG to LOB.
Backup is recommended after bulk operations.
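
A typical bulk-load flow following this recommendation (a sketch, with
illustrative names):

ALTER TABLE doc_store MODIFY LOB (doc_body) (NOCACHE NOLOGGING);
-- ... perform the bulk load or LONG-to-LOB migration ...
ALTER TABLE doc_store MODIFY LOB (doc_body) (CACHE);
-- CACHE implicitly re-enables LOGGING; back up the affected tablespaces,
-- since the loaded LOB data is not in the redo stream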

Performance Guideline - LOB Loading

In the rest of the document, you will notice the LOB API and LONG API methods
being referenced many times. The difference between these APIs is as follows:

LOB API: the LOB data is accessed by first selecting the LOB locator.
LONG API: the LOB data is accessed without using the LOB locator.

Points to Note
Use array operations for LOB inserts
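
A minimal sketch of such an array insert (illustrative data; foo is the table
from the LOB Overview above). The RAW buffers are inserted directly into the
BLOB column, i.e. LONG API style access as shown in Appendix A, and FORALL
performs one bulk execution instead of 100 single-row inserts:

DECLARE
  TYPE num_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  TYPE raw_tab IS TABLE OF RAW(2000) INDEX BY BINARY_INTEGER;
  v_keys num_tab;
  v_data raw_tab;
BEGIN
  FOR i IN 1..100 LOOP
    v_keys(i) := i;
    v_data(i) := utl_raw.cast_to_raw(rpad('FF', 2000, 'FF'));
  END LOOP;
  -- one bulk execution for 100 rows
  FORALL i IN 1..100
    INSERT INTO foo (pkey, bar) VALUES (v_keys(i), v_data(i));
  COMMIT;
END;
/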
Scalability problem with LOB disable storage in row option

BUG 3180333 - LOB LOADING USING SQLLDR DOESN'T SCALE

Problem scenario: 2 (or more) concurrent sqlldr processes trying to load LOB
data (LOB column defined with DISABLE STORAGE IN ROW). Loading will run
almost serially. The serialization point is getting a CR copy of the LOBINDEX
block.

Workaround: use ENABLE STORAGE IN ROW even for LOBs whose size is greater
than 3964 bytes. With ENABLE STORAGE IN ROW, we store the first 12 chunk
addresses in the table row, and if the inserted LOB data size can be
addressed within these first 12 chunk addresses, then the LOBINDEX is empty.
Generating a CR version of a table block is more efficient and, in some
cases, not required. This code path provides much better scalability. Please
note that if the LOB data is larger than 12 chunk addresses, then we may see
CR contention with the ENABLE STORAGE IN ROW option as well.

Row Chaining problem with the use of OCILobWrite API

TAR 2760194.995 (UK) - LOADING SMALL (AVG LEN 1120) CLOB DATA INTO TABLE PRODUCES
MUCH CHAINING, WHY?

Problem scenario: in 10gR1 (and older releases), SQL*Loader uses OCILobWrite API
for LOB loading.
This leads to a row chaining problem, as described below:

CREATE TABLE foo (pkey NUMBER NOT NULL, bar BLOB);

Load 3 rows with LOB data size as 3700, 3000 and 3400 respectively.

SQL*Loader loads the LOB columns, first by inserting empty_blob, and second, by
writing the LOB data using the LOB locator.
In the first step, the average row length is pkey length + empty_blob length= 4 +
40 bytes = ~44 bytes.
Assuming that DB_BLOCK_SIZE=8192, these 3 rows can be inserted into one data
block.

In the second step, loading LOB data, the 1st row, 3700 bytes of LOB, and the 2nd
row, 3000 bytes of LOB, can be inserted
into the same block. However, for the 3rd row of LOB data, there is no space left
in that block, so the row must be chained.

Workaround: the first workaround could be to increase the value of PCTFREE.
It may help solve this problem, but it unnecessarily wastes space. The second
workaround is to write a loader program using the LONG API method (please
note that an enhancement request against the sqlldr component has been filed
for this problem, and there is a plan to fix it in a future release).

High number of consistent read blocks created and examined

BUG 3297800 - SQLLDR MAY NEED TO USE LONG API INTERFACE FOR LOBS LESS THAN 2GB

Problem scenario: 2 (or more) concurrent sqlldr processes loading LOB data in
conventional mode. Using the LOB API method for loading the LOB data in a
single user environment may also cause a high number of CR blocks to be
created.

As mentioned earlier, loading the LOB data is performed in 2 steps. In the
first step, sqlldr inserts empty_blob for LOB columns. Then, with this LOB
locator, the LOB data is written using an OCILobWrite call. In a multi-user
loading environment, before OCILobWrite is invoked, if other loading
processes change the data block, it may be required to examine the block and,
if required, a CR version of the block is created.

Workaround: None, other than writing a loader program using the LONG API
method.

CPU time and Elapsed time - not reported accurately

BUG 3504487 - DBMS_LOB/OCILob* CALL RESOURCE USAGE IS NOT REPORTED, AS THEY ARE
NOT PART OF A CURSOR

Problem scenario: the work done using LOB API calls is not part of the cursor, so
reporting resource usage while
collecting statistics for the LOB workload, such as the CPU time or the elapsed
time, may not be accurate.

Example to illustrate this situation:

(We have already a table created as: CREATE TABLE foo (pkey NUMBER, bar BLOB);)

Declare
lob_loc blob;
buffer raw(32767);
lob_amt binary_integer := 16384;
begin
buffer := utl_raw.cast_to_raw(rpad('FF', 32767, 'FF'));
for j in 1..10000 loop
select bar into lob_loc from foo where pkey = j for update;
dbms_lob.write(lob_loc, lob_amt, 1, buffer );
commit;
end loop;
dbms_output.put_line ('Write test finished ');
end;
/

After executing the above PL/SQL, query V$SQL to measure the CPU time and
elapsed time resource usage.

select sql_text, cpu_time/1000000, elapsed_time/1000000
from v$sql
where sql_text like '%foo%' or sql_text like '%dbms_lob%';

SQL_TEXT
--------------------------------------------------------------------------------
CPU_TIME/1000000 ELAPSED_TIME/1000000
---------------- --------------------

declare lob_loc blob; buffer raw(32767);
lob_amt binary_integer := 16384 ;
begin buffer := utl_raw.cast_to_raw(rpad('FF', 32767,
'FF'));
for j in 1..10000 loop
select
bar into lob_loc from foo where pkey = j for update;
dbms_lob.write(lob_loc, lob_amt, 1, buffer );
commit; end loop; dbms_output.put_line ('Write test
finished '); end;
19.54 19.28

SELECT bar from foo where pkey = :b1 for update
5.00 4.81

As you can see, the PL/SQL block took about 19.54 seconds of CPU time and
19.28 seconds of elapsed time respectively. Out of the 19.54 seconds, the
SELECT statement contributed 5.00 seconds, so the remaining 14 seconds
(approximately) were spent in dbms_lob.write. This is not reported, because
the work done by dbms_lob.write is not part of a cursor. Similarly, OCILOB
API calls are not part of a cursor either.

Workaround: None
Reads/Writes are done one chunk at a time in synchronous way

BUG 3437770 - LOB DIRECT PATH READ/WRITES ARE LIMITED BY CHUNK SIZE
Problem scenario: The Oracle server process does NOCACHE LOB reads/writes using a
direct path mechanism.
The limitation here is that reads/writes are done one chunk at a time in a
synchronous way. Consider the example below:

Assuming CHUNK size=8K, DB_BLOCK_SIZE=2k, LOB data = 64K, 8 writes are done (each
doing 4 blocks of write at a time)
to load the entire LOB data, waiting for each write to complete before issuing
another write.

Workaround: use as many loader processes as possible to maximize disk throughput.

High CPU system time

BUG 3437770 - LOB DIRECT PATH READ/WRITES ARE LIMITED BY CHUNK SIZE

This is probably due to the above limitation (reads/writes are done one chunk
at a time in a synchronous way).

Buffer cache sizing problem

Problem scenario: loading LOB data with the CACHE option will most likely
fill up even a large buffer cache. Under this condition, a degradation in the
load rate can be seen if the database writer doesn't keep up with the
foreground free buffer requests.

Workaround: follow the general instance tuning guidelines

- use asynchronous I/O (if not possible, use multiple db writer processes)

- stripe datafiles across many spindles

- use the NOCACHE option

The CACHE option will also force other online users to perform physical disk
reads. This can be avoided by using multiple block sizes.

For example, keep online user objects in a 4K (or 8K) block size tablespace
and cached LOB data in an 8K (or 16K) block size tablespace. Allocate the
required amount of buffer cache for each block size
(e.g. db_4k_cache_size=500M, db_8k_cache_size=2000M).
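
A sketch of the multiple block size setup (names, sizes and paths are
illustrative):

ALTER SYSTEM SET db_16k_cache_size = 512M;

CREATE TABLESPACE lob_16k
DATAFILE '/u01/oradata/db/lob16k01.dbf' SIZE 1000M
BLOCKSIZE 16K;
-- store the cached LOB segments here, keeping the online users' objects
-- in the standard block size tablespaces with their own cache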

Multi-byte character set conversion

BUG 3324897 - LOBS LESS THAN 3964 BYTES ARE STORED OUT-OF-LINE WHILE LOADING USING
SQLLDR

Problem scenario: when dealing with a multi-byte character set, additional
bytes are required for CLOB data. This may cause client-side CLOB data of
~4000 bytes to be stored out-of-line in the database.

Workaround: None

HWM enqueue contention

BUG 3537749 - HW ENQUEUE CONTENTION WHEN LOADING LOB DATA

Problem scenario: given the large size of LOB data (compared to a relational
table row size), blocks under the HWM are filled rapidly (under high
concurrent load conditions) and can cause HW enqueue contention.

Workaround: ASSM with larger extent size may help.
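
A sketch of such a tablespace (path and sizes are illustrative):

CREATE TABLESPACE lob_assm
DATAFILE '/u01/oradata/db/lob_assm01.dbf' SIZE 2000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M
SEGMENT SPACE MANAGEMENT AUTO;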

RAC environment issues

BUG 3429986 - CONVENTIONAL LOAD OF LOB FROM 2 RAC NODE DO NOT SCALE DUE TO LOG
FLUSH LATENCIES

Problem scenario: in a RAC environment, when loading LOB data into one
partition, you may notice contention on the 1st level bitmap and LOB header
segment with ASSM. You may notice the same contention on a single instance
(with a large number of CPUs) with a high number of concurrent loaders.

Workaround: loading into separate partitions will avoid this situation. If
this is not possible, use range-hash partitions instead of just range
partitions. FREEPOOLS should help in this situation, but it did not provide
any improvement in our testing.

Other LOB performance related issues

BUG 3234751 - EXCESSIVE USAGE OF TEMP TS WHILE LOADING LOB USING SQLLDR IN
CONVENTIONAL MODE

BUG 3230541 - LOB LOADING USING SQLLDR DIRECT PATH SLOWER THAN CONVENTIONAL

BUG 3189083 - OPEN/CLOSE OF DATAFILE FOR EVERY LOB CHUNK WRITE

APPENDIX A

LONG API access to LOB datatype

Oracle provides transparent access to LOBs from applications that use LONG and
LONG RAW datatypes. If your application
uses DML (INSERT, UPDATE, DELETE) statements from OCI or PL/SQL (PRO*C etc) for
LONG or LONG RAW data, no application
changes are required after the column is converted to a LOB.

For example, you can SELECT a CLOB into a character variable, or a BLOB into a RAW
variable. You can define a CLOB column
as SQLT_CHR or a BLOB column as SQLT_BIN and select the LOB data directly into a
CHARACTER or RAW buffer without selecting
out the locator first.

The following example demonstrates this concept:

create table foo
(
pkey number(10) not null,
bar long raw
);

set serveroutput on

declare
in_buf raw(32767);
out_buf raw(32767);
out_pkey number;
begin
in_buf := utl_raw.cast_to_raw (rpad('FF', 32767, 'FF'));

for j in 1..10 loop
insert into foo values (j, in_buf) ;
commit;
end loop;
dbms_output.put_line ('Write test finished ');

for j in 1..10 loop
select pkey, bar into out_pkey, out_buf from foo where pkey=j ;
end loop;
dbms_output.put_line ('Read test finished ');

end;
/

Now migrate LONG RAW column to BLOB column

alter table foo modify (bar blob);

That works.

alter table foo modify (bar long raw);


ERROR at line 1:
ORA-22859: invalid modification of columns

So that does not work.

There are a few things customers should note when doing the LONG to LOB
migration. This ALTER TABLE migration statement runs serially in 9i (what
about 8i, 10g?). Indexes need to be rebuilt and statistics recollected.

After the LONG to LOB migration, the above PL/SQL block will work without any
modifications.

Advanced LOB features may require the use of the LOB API, described in the
Oracle Documentation.[2]
APPENDIX B

Migration from in-line to out-of-line (and out-of-line to in-line) storage
This section explains one major difference between the LOB API and LONG API
methods.

If a change to the in-line LOB data makes it larger than 3964 bytes, then it
is automatically moved out of the table segment and stored out-of-line. If,
during future operations, the LOB data shrinks to under 3964 bytes, it will
remain out-of-line.

In other words, once a LOB is migrated out, it is always stored out-of-line
irrespective of its size, with the following exception scenario.

Consider a scenario where you used the LONG API to update the LOB datatype:

[..]
begin
in_buf := utl_raw.cast_to_raw (rpad('FF', 3964, 'FF'));
insert into foo values (1, in_buf) ;
commit;
[..]

The above LOB is stored in-line. Now update the LOB to a size of more than 3964 bytes:

[..]
in_buf := utl_raw.cast_to_raw (rpad('FF', 4500, 'FF'));
update foo set bar=in_buf where pkey=1;
commit;
[..]

After the update, the LOB is stored out-of-line. Now update the LOB to a size
smaller than 3964 bytes:

[..]
in_buf := utl_raw.cast_to_raw (rpad('FF', 3000, 'FF'));
update foo set bar=in_buf where pkey=1;
commit;
[..]

The LOB is stored in-line again.

When using the LONG API for an update, the older LOB is deleted (or its space
is reclaimed as per the RETENTION or PCTVERSION setting) and a new LOB is
created, with a new LOB locator. This is different from using the LOB API,
where DML on a LOB is possible only via the LOB locator (the LOB locator
doesn't change).

APPENDIX C

How LOB data is stored

The purpose of this section is to show how the ENABLE STORAGE IN ROW option
differs from the DISABLE STORAGE IN ROW option for LOB data sizes greater
than 3964 bytes. It also highlights when the LOBINDEX is really used (the
following example scenarios assume Solaris OS and Oracle 9.2.0.4 32-bit).

In-line LOB - LOB size less than 3964 bytes

The LOB can be NULL, EMPTY_BLOB, or actual LOB data.

create table foo
(
pkey number(10) not null,
bar BLOB
)
lob (bar) store as (enable storage in row chunk 2k);

declare
inbuf raw(3964);

begin
inbuf := utl_raw.cast_to_raw(rpad('FF', 3964, 'FF'));
insert into foo values (1, NULL);
insert into foo values (2, EMPTY_BLOB() );
insert into foo values (3, inbuf );
commit;
end;
/

note: RPAD('-', 60, '-') ==> '------------------------------------------------------------'

Now Foo table rows are:

Pkey=1
Bar=0 byte (nothing is stored)

Pkey=2
Bar=36 byte (10 byte metadata + 10 byte LobId + 16 byte Inode)

Pkey=3
Bar=4000 byte (36 byte + 3964 byte of LOB data); nothing is stored in the
LOBINDEX or LOBSEGMENT.

LobId - LOB Locator

In-line LOB - LOB size = 3965 bytes (1 byte greater than 3964)

The LOB is defined as in-line, but the actual data is greater than 3964
bytes, so it is moved out. Please note this is different from the LOB being
defined as out-of-line.

[..]
inbuf := utl_raw.cast_to_raw(rpad('FF', 3965, 'FF'));
insert into foo values (4, inbuf );
[..]

Foo table row
Pkey=4
Bar=40 bytes (36 byte + 4 byte for one chunk RDBA). Using this RDBA, we
directly access the LOB data in LOBSEGMENT. Nothing is stored in LOBINDEX.

RDBA - Relative Database Block Address

In-line LOB - LOB size greater than 12 chunk addresses

With the in-line LOB option, we store the first 12 chunk addresses in the
table row. This takes 84 bytes (36+4*12) in the table row. LOBs that are
less than 12 chunks in size will not have entries in the LOBINDEX if ENABLE
STORAGE IN ROW is used.

[..]
inbuf := utl_raw.cast_to_raw(rpad('FF', 32767, 'FF'));
insert into foo values (5, inbuf );
[..]

Here, we are inserting 32767 bytes of LOB data; given our chunk size of 2K,
we need approximately 16 blocks (32767/2048). So we store the first 12 chunk
RDBAs in the table row and the rest in the LOBINDEX.

Foo table row

Pkey=5
Bar=84 bytes (36 byte + 4*12 byte for the first 12 chunk RDBAs). Using these
RDBAs, we directly access 12 LOB chunks in LOBSEGMENT. Then, using the LobId,
we look up the LOBINDEX to get the rest of the LOB chunk RDBAs.

Out-of-line LOBs - All LOB sizes

With the out-of-line LOB option, only the LOB locator is stored in the table
row. Using the LOB locator, we look up the LOBINDEX and find the range of
chunk RDBAs; using these RDBAs we read the LOB data from LOBSEGMENT.

create table foo (pkey number(10) not null, bar BLOB)
lob (bar) store as (disable storage in row chunk 2k);

[..]
inbuf := utl_raw.cast_to_raw(rpad('FF', 20, 'FF'));
insert into foo values (6, inbuf );
[..]

Foo table row

Pkey=6
Bar=20 bytes (10 byte metadata + 10 byte LobId). Please note the Inode and
chunk RDBAs are stored in the LOBINDEX.

LOB Performance Guidelines


April 2004

Author: V. Jegraj (Vinayagam.Djegaradjane)

Acknowledgements: Vishy Karra, Krishna Kunchithapadam, Cecilia Gervasio

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com

Copyright (c) 2004 Oracle Corporation

All rights reserved.

--------------------------------------------------------------------------------

[1] In Oracle8i, users can specify storage parameters for LOB index, but from
Oracle9i Database onwards,
specifying storage parameters for a LOB index is ignored without any error and
the index is stored
in the same tablespace as the LOB segment, with an Oracle generated index
name.

[2] Large Objects (LOBs) in the Oracle9i Application Developer's Guide, the
    DBMS_LOB package in Oracle9i Supplied PL/SQL Packages and Types
    Reference, and LOB and FILE Operations in the Oracle Call Interface
    Programmer's Guide.

--------------------------------------------------------------------------------


Note 7:
=======

Doc ID: Note:1071540.6 Content Type: TEXT/PLAIN


Subject: Converting a Long datatype to Clob in Oracle8i
Creation Date: 27-MAY-1999
Type: BULLETIN Last Revision Date: 24-JUN-2004
Status: PUBLISHED
PURPOSE
This note describes the Oracle 8.1.x function that converts data stored in
LONG and LONG RAW datatypes to CLOB and BLOB datatypes respectively. This
is done using the TO_LOB function.

Converting a long datatype to a Clob:
=====================================

The TO_LOB function is provided in Oracle 8.1.x to convert LONG and LONG RAW
datatypes to CLOB and BLOB datatypes respectively.

Note: The TO_LOB function is not provided in Oracle 8.0.x.

Oracle recommends that LONG datatypes be converted to CLOBs, NCLOBs, or BLOBs.

Note: When a LOB is stored in a table, the data (LOB VALUE) and a pointer to
the data called a LOB LOCATOR, are stored separately. The data may be stored
along with the locator in the table itself or in a separate table. The LOB
clause in the create table command can be used to specify whether an attempt
should be made to store data in the main table or a separate one. The LOB
clause may also be used to specify a separate tablespace and storage clause
for both the LOB table and its associated index.

Example:

SQL> create table long_data (c1 number, c2 long);

Table created.

SQL> desc long_data
Name                            Null?    Type
------------------------------- -------- ----
C1                                       NUMBER
C2                                       LONG

SQL> insert into long_data values
2 (1, 'This is some long data to be migrated to a CLOB');

1 row created.

Note: The TO_LOB function may be used in CREATE TABLE AS SELECT or
INSERT...SELECT statements:

Example:

SQL> create table test_lobs
2 (c1 number, c2 clob);

Table created.

SQL> desc test_lobs
Name                            Null?    Type
------------------------------- -------- ----
C1                                       NUMBER
C2                                       CLOB

SQL> insert into test_lobs
2 select c1, to_lob(c2) from long_data;

1 row created.

SQL> select c2 from test_lobs;

C2
-----------------------------------------------
This is some long data to be migrated to a CLOB

References:
===========

Oracle8i SQL Reference Volume 1
[NOTE:66046.1] Oracle8i: LOBs

30.2 How to access LOB data:
============================

30.2.1 SQL DML:
---------------

Using SQL DML for Basic Operations on LOBs

SQL DML provides basic operations -- INSERT, UPDATE, SELECT, DELETE -- that
let you make changes to the entire values of internal LOBs within the Oracle
ORDBMS. To work with parts of internal LOBs, you will need to use one of the
interfaces that have been developed to handle more complex requirements.

Oracle8 supports read-only operations on external LOBs. So if you need to
update/write to external LOBs, you will have to develop client side
applications suited to your needs.

Suppose you have the following table:

create table multimedia_tab
(
clip_id number,
story clob,
flsub nclob,
photo bfile,
frame blob,
sound blob,
voiced_ref voiced_type,
inseg_ntab inseg_type,
music bfile,
map_obj map_typ
);
create table multimedia_tab
(
clip_id number,
story clob,
flsub nclob,
photo bfile,
frame blob,
sound blob,
music bfile
);

The following INSERT statement populates story with the character string
'JFK interview', sets flsub, frame and sound to an empty value, sets photo to
NULL, and initializes music to point to the file 'JFK_interview' located
under the logical directory 'AUDIO_DIR' (see the CREATE DIRECTORY command in
the Oracle8i Reference). Character strings are inserted using the default
character set for the instance.

INSERT INTO Multimedia_tab
VALUES (101, 'JFK interview', EMPTY_CLOB(), NULL,
        EMPTY_BLOB(), EMPTY_BLOB(), NULL, NULL,
        BFILENAME('AUDIO_DIR', 'JFK_interview'), NULL);

Similarly, the LOB attributes for the Map_typ column in Multimedia_tab can be
initialized to NULL
or set to empty as shown below. Note that you cannot initialize a LOB object
attribute with a literal.

INSERT INTO Multimedia_tab
VALUES (1, EMPTY_CLOB(), EMPTY_CLOB(), NULL, EMPTY_BLOB(),
        EMPTY_BLOB(), NULL, NULL, NULL,
        Map_typ('Moon Mountain', 23, 34, 45, 56, EMPTY_BLOB(), NULL));

SELECTing a LOB
Performing a SELECT on a LOB returns the locator instead of the LOB value. In
the following PL/SQL fragment you select the LOB locator for frame (a BLOB)
and place it in the PL/SQL locator variable Image1 defined in the program
block. When you use PL/SQL DBMS_LOB functions to manipulate the LOB value,
you refer to the LOB using the locator.

DECLARE
Image1 BLOB;
ImageNum INTEGER := 101;
BEGIN
SELECT frame INTO Image1 FROM Multimedia_tab
WHERE clip_id = ImageNum;
DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' ||
DBMS_LOB.GETLENGTH(Image1));
/* more LOB routines */
END;
DECLARE
Image1 BLOB;
ImageNum INTEGER := 101;
BEGIN
SELECT content INTO Image1 FROM binaries2
WHERE id = 1211;
DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' ||
DBMS_LOB.GETLENGTH(Image1));
/* more LOB routines */
END;
/

XXX So you can retrieve all kinds of info with DBMS_LOB

30.2.2 The EMPTY_BLOB and EMPTY_CLOB functions:
-----------------------------------------------

The EMPTY_BLOB function returns an empty locator of type BLOB (binary large
object).
The specification for the EMPTY_BLOB function is:

FUNCTION EMPTY_BLOB RETURN BLOB;

You can call this function without any parentheses or with an empty pair.
Here are some examples:

INSERT INTO family_member (name, photo)
VALUES ('Steven Feuerstein', EMPTY_BLOB());

DECLARE
my_photo BLOB := EMPTY_BLOB;
BEGIN

Use EMPTY_BLOB to initialize a BLOB to "empty." Before you can work with a BLOB,
either to reference it
in SQL DML statements such as INSERTs or to assign it a value in PL/SQL, it must
contain a locator.
It cannot be NULL. The locator might point to an empty BLOB value, but it will be
a valid BLOB locator.

The EMPTY_CLOB function returns an empty locator of type CLOB. The specification
for the EMPTY_CLOB function is:

FUNCTION EMPTY_CLOB RETURN CLOB;

You can call this function without any parentheses or with an empty pair.
Here are some examples:

INSERT INTO diary (entry, text)
VALUES (SYSDATE, EMPTY_CLOB());

DECLARE
the_big_novel CLOB := EMPTY_CLOB;
BEGIN

Use EMPTY_CLOB to initialize a CLOB to "empty". Before you can work with a
CLOB,
either to reference it
in SQL DML statements such as INSERTs or to assign it a value in PL/SQL, it must
contain a locator.
It cannot be NULL. The locator might point to an empty CLOB value, but it will be
a valid CLOB locator.

30.2.3 DBMS_LOB
---------------

Simple example to get the length of a lob:

DECLARE
Image1 BLOB;
ImageNum INTEGER := 101;
BEGIN
SELECT content INTO Image1 FROM binaries2
WHERE id = 1211;
DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' ||
DBMS_LOB.GETLENGTH(Image1));
/* more LOB routines */
END;
/

DBMS_LOB
The DBMS_LOB package provides subprograms to operate on BLOBs, CLOBs,
NCLOBs, BFILEs, and temporary LOBs. You can use DBMS_LOB to access and
manipulate specific parts of a LOB or complete LOBs.

DBMS_LOB can read and modify BLOBs, CLOBs, and NCLOBs; it provides read-only
operations for BFILEs.
The bulk of the LOB operations are provided by this package.

Example:

Load Text Files to CLOB then Write Back Out to Disk - (PL/SQL)

Overview

The following example is part of the Oracle LOB Examples Collection.

This example provides two PL/SQL procedures that demonstrate how to populate
a CLOB column with a text file (an XML file) and then write it back out to
the file system as a different file name.

- Load_CLOB_From_XML_File:

This PL/SQL procedure loads an XML file on disk to a CLOB column using a BFILE
reference variable.
Notice that I use the new PL/SQL procedure DBMS_LOB.LoadCLOBFromFile(),
introduced in Oracle 9.2,
that handles uploading to a multi-byte UNICODE database.
- Write_CLOB_To_XML_File:

This PL/SQL procedure writes the contents of the CLOB column in the database
piecewise
back to the file system.

Let's first take a look at an example XML file:

DatabaseInventoryBig.xml:

<?xml version="1.0" ?>
<!DOCTYPE DatabaseInventory>
<DatabaseInventory>
<DatabaseName>
<GlobalDatabaseName>production.iDevelopment.info</GlobalDatabaseName>
<OracleSID>production</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<DatabaseAttributes Type="Production" Version="9i" />
<Comments>The following database should be considered the most stable for up-to-
date data. The backup strategy includes running the database in Archive Log Mode
and performing nightly backups. All new accounts need to be approved by the DBA
Group before being created.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>development.iDevelopment.info</GlobalDatabaseName>
<OracleSID>development</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<DatabaseAttributes Type="Development" Version="9i" />
<Comments>The following database should contain all hosted applications.
Production data will be exported on a weekly basis to ensure all development
environments have stable and current data.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing1.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing1</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host more than half of the testing for our
hosting environment.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing2.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing2</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
HR department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing3.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing3</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Finance department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing4.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing4</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
HQ department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing5.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing5</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Engineering department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing6.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing6</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
IT department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing7.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing7</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Marketing department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing8.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing8</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Purchasing department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing9.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing9</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Accounts Payable department only.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing10.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing10</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for testing OEM.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing11.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing11</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for testing XMLDB.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing12.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing12</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for tuning.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing13.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing13</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for UAT.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing14.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing14</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for additional monitoring.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing15.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing15</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for testing upgrades.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing16.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing16</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for certification tesing.</Comments>
</DatabaseName>
<DatabaseName>
<GlobalDatabaseName>testing17.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing17</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
DBA department for testing of all ERP application modules.</Comments>
</DatabaseName>
<!-- entries for testing18 through testing59 omitted here: each repeats the
     same pattern (GlobalDatabaseName, OracleSID, DatabaseDomain, the same
     three Administrator entries, and the identical Comments text "DBA
     department for testing of all ERP application modules"), with only the
     number changed -->
<DatabaseName>
<GlobalDatabaseName>testing60.iDevelopment.info</GlobalDatabaseName>
<OracleSID>testing60</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody
Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used by the
Sales Force Automation department.</Comments>
</DatabaseName>
</DatabaseInventory>

After downloading the above XML file, create all Oracle database objects:

DROP TABLE test_clob CASCADE CONSTRAINTS
/

Table dropped.

CREATE TABLE test_clob (
  id          NUMBER(15)
, file_name   VARCHAR2(1000)
, xml_file    CLOB
, timestamp   DATE
)
/

Table created.

CREATE OR REPLACE DIRECTORY EXAMPLE_LOB_DIR AS '/u01/app/oracle/lobs'
/

Directory created.

Now, let's define our two example procedures:

CREATE OR REPLACE PROCEDURE Load_CLOB_From_XML_File
IS

dest_clob CLOB;
src_clob BFILE := BFILENAME('EXAMPLE_LOB_DIR',
'DatabaseInventoryBig.xml');
dst_offset number := 1 ;
src_offset number := 1 ;
lang_ctx number := DBMS_LOB.DEFAULT_LANG_CTX;
warning number;

BEGIN

DBMS_OUTPUT.ENABLE(100000);

-- -----------------------------------------------------------------------
-- THE FOLLOWING BLOCK OF CODE WILL ATTEMPT TO INSERT / WRITE THE CONTENTS
-- OF AN XML FILE TO A CLOB COLUMN. IN THIS CASE, I WILL USE THE NEW
-- DBMS_LOB.LoadCLOBFromFile() API WHICH *DOES* SUPPORT MULTI-BYTE
-- CHARACTER SET DATA. IF YOU ARE NOT USING ORACLE 9iR2 AND/OR DO NOT NEED
-- TO SUPPORT LOADING TO A MULTI-BYTE CHARACTER SET DATABASE, USE THE
-- FOLLOWING FOR LOADING FROM A FILE:
--
-- DBMS_LOB.LoadFromFile(
-- DEST_LOB => dest_clob
-- , SRC_LOB => src_clob
-- , AMOUNT => DBMS_LOB.GETLENGTH(src_clob)
-- );
--
-- -----------------------------------------------------------------------

INSERT INTO test_clob(id, file_name, xml_file, timestamp)
VALUES(1001, 'DatabaseInventoryBig.xml', empty_clob(), sysdate)
RETURNING xml_file INTO dest_clob;

-- -------------------------------------
-- OPENING THE SOURCE BFILE IS MANDATORY
-- -------------------------------------
DBMS_LOB.OPEN(src_clob, DBMS_LOB.LOB_READONLY);

DBMS_LOB.LoadCLOBFromFile(
DEST_LOB => dest_clob
, SRC_BFILE => src_clob
, AMOUNT => DBMS_LOB.GETLENGTH(src_clob)
, DEST_OFFSET => dst_offset
, SRC_OFFSET => src_offset
, BFILE_CSID => DBMS_LOB.DEFAULT_CSID
, LANG_CONTEXT => lang_ctx
, WARNING => warning
);

DBMS_LOB.CLOSE(src_clob);

COMMIT;

DBMS_OUTPUT.PUT_LINE('Loaded XML File using DBMS_LOB.LoadCLOBFromFile: (ID=1001).');

END;
/

SQL> @load_clob_from_xml_file.sql

Procedure created.

CREATE OR REPLACE PROCEDURE Write_CLOB_To_XML_File
IS

clob_loc CLOB;
buffer VARCHAR2(32767);
buffer_size CONSTANT BINARY_INTEGER := 32767;
amount BINARY_INTEGER;
offset NUMBER(38);

file_handle UTL_FILE.FILE_TYPE;
directory_name CONSTANT VARCHAR2(80) := 'EXAMPLE_LOB_DIR';
new_xml_filename CONSTANT VARCHAR2(80) := 'DatabaseInventoryBig_2.xml';
BEGIN

DBMS_OUTPUT.ENABLE(100000);

-- ----------------
-- GET CLOB LOCATOR
-- ----------------
SELECT xml_file INTO clob_loc
FROM test_clob
WHERE id = 1001;

-- --------------------------------
-- OPEN NEW XML FILE IN WRITE MODE
-- --------------------------------
file_handle := UTL_FILE.FOPEN(
location => directory_name,
filename => new_xml_filename,
open_mode => 'w',
max_linesize => buffer_size);

amount := buffer_size;
offset := 1;

-- ----------------------------------------------
-- READ FROM CLOB XML / WRITE OUT NEW XML TO DISK
-- ----------------------------------------------
WHILE amount >= buffer_size
LOOP

DBMS_LOB.READ(
lob_loc => clob_loc,
amount => amount,
offset => offset,
buffer => buffer);

offset := offset + amount;

UTL_FILE.PUT(
file => file_handle,
buffer => buffer);

UTL_FILE.FFLUSH(file => file_handle);

END LOOP;

UTL_FILE.FCLOSE(file => file_handle);

END;
/

SQL> @write_clob_to_xml_file.sql

Procedure created.

Now let's test it:

SQL> set serveroutput on
SQL> exec Load_CLOB_From_XML_File

Loaded XML File using DBMS_LOB.LoadCLOBFromFile: (ID=1001).

PL/SQL procedure successfully completed.

SQL> exec Write_CLOB_To_XML_File

PL/SQL procedure successfully completed.

SQL> SELECT id, DBMS_LOB.GETLENGTH(xml_file) Length FROM test_clob;

ID LENGTH
---------- ----------
1001 41113

SQL> host ls -l DatabaseInventory*


-rw-r--r-- 1 oracle dba 41113 Sep 20 15:02 DatabaseInventoryBig.xml
-rw-r--r-- 1 oracle dba 41113 Sep 20 15:48 DatabaseInventoryBig_2.xml

30.2.4 REMOTE SELECTS, INSERTS, UPDATES:
----------------------------------------

Valid operations on LOB columns in remote tables include:

CREATE TABLE t AS select * from table1@remote_site;
INSERT INTO t select * from table1@remote_site;
UPDATE t set lobcol = (select lobcol from table1@remote_site);
INSERT INTO table1@remote...
UPDATE table1@remote...
DELETE table1@remote...
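
Note (an addition to the list above, not part of the original note): fetching the
LOB locator itself across the link does NOT work -- a plain SELECT of a LOB column
from a remote table raises ORA-22992 ("cannot use LOB locators selected from
remote tables"). For example:

-- Raises ORA-22992 in these releases:
select lobcol from table1@remote_site;

-- Workaround: materialize the remote rows (LOB data included) locally first:
create table t as select * from table1@remote_site;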

30.2.5: Export a BLOB to a file with Java:
------------------------------------------

First we create a Java stored procedure that accepts a file name and a BLOB as
parameters:

CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler" AS
import java.lang.*;
import java.sql.*;
import oracle.sql.*;
import java.io.*;

public class BlobHandler
{
  public static void ExportBlob(String myFile, BLOB myBlob) throws Exception
  {
    // Bind the image object to the database object
    // Open streams for the output file and the blob
    File binaryFile = new File(myFile);
    FileOutputStream outStream = new FileOutputStream(binaryFile);
    InputStream inStream = myBlob.getBinaryStream();

    // Get the optimum buffer size and use this to create the read/write buffer
    int size = myBlob.getBufferSize();
    byte[] buffer = new byte[size];
    int length = -1;

    // Transfer the data
    while ((length = inStream.read(buffer)) != -1)
    {
      outStream.write(buffer, 0, length);
      outStream.flush();
    }

    // Close everything down
    inStream.close();
    outStream.close();
  }
};
/

ALTER java source "BlobHandler" compile;
show errors java source "BlobHandler"

Next we publish the Java call specification so we can access it via PL/SQL:

CREATE OR REPLACE PROCEDURE ExportBlob (p_file IN VARCHAR2,
                                        p_blob IN BLOB)
AS LANGUAGE JAVA
NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
/

Next we grant the Oracle JVM the relevant filesystem permissions:

EXEC Dbms_Java.Grant_Permission( -
'SCHEMA-NAME', -
'java.io.FilePermission', -
'<<ALL FILES>>', -
'read, write, execute, delete');

Finally we can test it:

CREATE TABLE tab1 (col1 BLOB);


INSERT INTO tab1 VALUES(empty_blob());
COMMIT;

DECLARE
v_blob BLOB;
BEGIN
SELECT col1
INTO v_blob
FROM tab1;

ExportBlob('c:\MyBlob',v_blob);
END;
/

30.2.6 Import into a BLOB from a file:
--------------------------------------

Import BLOB Contents

The following article presents a simple method for importing a file into a BLOB
datatype.
First a directory object is created to point to the relevant filesystem directory:

CREATE OR REPLACE DIRECTORY images AS 'C:\';


Next we create a table to hold the BLOB:

CREATE TABLE tab1 (col1 BLOB);

Finally we import the file into a BLOB datatype and insert it into the table:

DECLARE
v_bfile BFILE;
v_blob BLOB;
BEGIN
INSERT INTO tab1 (col1)
VALUES (empty_blob())
RETURN col1 INTO v_blob;

v_bfile := BFILENAME('IMAGES', 'MyImage.gif');
Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);

COMMIT;
END;
/
Hope this helps. Regards Tim...
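
As a quick sanity check after running the block above, the stored length can be
compared against the size of the source file (a minimal check against the tab1
table from the example):

SELECT DBMS_LOB.GETLENGTH(col1) FROM tab1;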

30.2.7 Import into a CLOB from a file:
--------------------------------------

Import CLOB Contents

The following article presents a simple method for importing a file into a CLOB
datatype.
First a directory object is created to point to the relevant filesystem directory:

CREATE OR REPLACE DIRECTORY documents AS 'C:\';


Next we create a table to hold the CLOB:

CREATE TABLE tab1 (col1 CLOB);

Finally we import the file into a CLOB datatype and insert it into the table:
DECLARE
v_bfile BFILE;
v_clob CLOB;
BEGIN
INSERT INTO tab1 (col1)
VALUES (empty_clob())
RETURN col1 INTO v_clob;

v_bfile := BFILENAME('DOCUMENTS', 'Sample.txt');
Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile(v_clob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);

COMMIT;
END;
/
Hope this helps. Regards Tim...

Note 5:
-------

You Asked:

I have a table with a blob column.
Is it possible to specify an extra storage clause for this column?

and we said...

Yes, the following example is cut and pasted from the SQL Reference Manual, the
CREATE TABLE command:

CREATE TABLE lob_tab (col1 BLOB, col2 CLOB) STORAGE (INITIAL 512 NEXT 256)
LOB (col1, col2) STORE AS
(TABLESPACE lob_seg_ts
STORAGE (INITIAL 6144 NEXT 6144)
CHUNK 4
NOCACHE LOGGING
INDEX (TABLESPACE lob_index_ts
STORAGE (INITIAL 256 NEXT 256)
)
);

The table will be stored in the user's default tablespace with (INITIAL 512 NEXT
256). The actual lob data will be in LOB_SEG_TS with (INITIAL 6144 NEXT 6144).
The lob index built on the pages constituting the lob will be stored in yet a
3rd tablespace -- lob_index_ts with (INITIAL 256 NEXT 256).
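
As a quick check after creating such a table, the standard dictionary views show
where each piece landed (a minimal sketch joining USER_LOBS to USER_SEGMENTS for
the LOB_TAB example above):

select l.column_name, l.segment_name, l.index_name, s.tablespace_name
from   user_lobs l, user_segments s
where  l.table_name   = 'LOB_TAB'
and    s.segment_name = l.segment_name;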

Reviews
lob storage recovery May 07, 2004
Reviewer: bob from PA

Tom,
If the LOB tablespace is not backed up, can the table data (a different
tablespace) be recovered in a failure scenario?

I know with TSPITR the process validates that no objects cross tablespaces that
are not included in the set being recovered with the TSPITR check/validate
function. This doesn't mean the tablespace won't be recovered in the auxiliary
db, it just means the automated process won't continue through to export the
objects, and re-import unless you pass the check. (or at least that was what
happened in the test I ran).

I am just curious about what would happen to this table if its lob tablespace
was lost and non-recoverable. Can just the regular data be recovered?

Followup:
well, it's going to be problematic as the lob locators will point to "garbage".
You cannot really TSPITR a table with lobs without doing the *same* to the lob
segments.

You'd have to sort of update the lobs to NULL and pull it manually -- but then I
would ask "why have the lobs in the first place, must not be very important"?

so yes, we'd be able to get the scalar data back (complete recovery would be
best here), update the lob to null and go forward with that.

30.3 Errors in LOB:
===================

30.3.1:
-------

Doc ID: Note:293515.1


Subject: ORA-1578 ORA-26040 in a LOB segment - Script to solve the errors
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 09-DEC-2004
Last Revision Date: 25-FEB-2005

Purpose
============
- The purpose of this article is to provide a script to solve errors
ORA-1578 / ORA-26040 when a lob block is accessed by a sql statement.

- Note that the data inside the corrupted lob blocks is not salvageable.
This procedure will update the lob column with an empty lob to avoid errors
ORA-1578 / ORA-26040.

- After applying this solution dbverify would still produce error DBV-200
  until the block marked as corrupted is reused and reformatted.

Symptoms
===========
- ORA-1578 and ORA-26040 are produced when accessing a lob column in a table:

  ORA-1578 : ORACLE data block corrupted (file # %s, block # %s)
  ORA-26040: Data block was loaded using the NOLOGGING option

- dbverify for the datafile that produces the errors fails with:

DBV-00200: Block, dba <dba number>, already marked corrupted

Example:

dbv file=/oracle/oradata/data.dbf blocksize=8192

DBV-00200: Block, dba 54528484, already marked corrupted


.....

The dba can be used to get the relative file number and block number:

Relative File number:

SQL> select dbms_utility.data_block_address_file(54528484) from dual;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(54528484)
----------------------------------------------
13

Block Number:

SQL> select dbms_utility.data_block_address_block(54528484) from dual;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(54528484)
-----------------------------------------------
2532
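
For the reverse direction, the same package can assemble the dba from the
relative file number and block number (for this example it should return
54528484 again, since 13 * 4194304 + 2532 = 54528484):

SQL> select dbms_utility.make_data_block_address(13, 2532) from dual;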

Cause
==========
- LOB segment has been defined as NOLOGGING
- LOB Blocks were marked as corrupted by Oracle after a datafile restore /
recovery.

Identify the table referencing the lob segment - Example
=========================================================
Error example when accessing the lob column by a sql statement:

ORA-01578 : ORACLE data block corrupted (file #13 block # 2532)
ORA-01110 : datafile 13: '/oracle/oradata/data.dbf'
ORA-26040 : Data block was loaded using the NOLOGGING option.

1. Query dba_extents to find out the lob segment name:

select owner, segment_name, segment_type
from dba_extents
where file_id = 13
and 2532 between block_id and block_id + blocks - 1;

In our example it returned:

owner=SCOTT
segment_name=SYS_LOB0000029815C00006$$
segment_type=LOBSEGMENT

2. Query dba_lobs to identify the table_name and lob column name:

select table_name, column_name
from dba_lobs
where segment_name = 'SYS_LOB0000029815C00006$$'
and owner = 'SCOTT';

In our example it returned:

table_name = EMP
column_name = EMPLOYEE_ID_LOB

Fix
======

1. Identify the table rowids referencing the corrupted lob segment blocks by
running the following plsql script:

rem ********************* Script begins here ********************

create table corrupted_data (corrupted_rowid rowid);

set concat #

declare
error_1578 exception;
pragma exception_init(error_1578,-1578);
n number;
begin
for cursor_lob in (select rowid r, &&lob_column from
&table_owner.&table_with_lob) loop
begin
n:=dbms_lob.instr(cursor_lob.&&lob_column,hextoraw('8899')) ;
exception
when error_1578 then
insert into corrupted_data values (cursor_lob.r);
commit;
end;
end loop;
end;
/
undefine lob_column
rem ********************* Script ends here ********************

When prompted for variable values, following our example:

Enter value for lob_column: EMPLOYEE_ID_LOB
Enter value for table_owner: SCOTT
Enter value for table_with_lob: EMP

2. Update the lob column with empty lob to avoid ORA-1578 and ORA-26040:

SQL> set concat #
SQL> update &table_owner.&table_with_lob
set &lob_column = empty_blob()
where rowid in (select corrupted_rowid from corrupted_data);

If &lob_column is a CLOB datatype, replace empty_blob with empty_clob.

Reference
==============
Note 290161.1 - The Gains and Pains of Nologging Operations

30.3.2:
-------

Displayed below are the messages of the selected thread.

Thread Status: Closed

From: Neil Bullen 26-Mar-02 08:26


Subject: How do you alter NOLOGGING in lob index partition

RDBMS Version: 8.1.7.2.1


Operating System and Version: Compaq Tru64 Unix 5.2
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

How do you alter NOLOGGING in lob index partition

I have discovered that a lob index partition is set to NOLOGGING; how can I alter
this to LOGGING?

The lob is set to CACHE and LOGGING, the index def_logging is set to NONE
and the tablespace is set to LOGGING.

--------------------------------------------------------------------------------

From: Oracle, Rowena Serna 02-Apr-02 03:26


Subject: Re : How do you alter NOLOGGING in lob index partition

You could find the system generated lobindex name and use the "alter index"
command.
Regards,
Rowena Serna
Oracle Corporation

-------------------------------------------------------------------------------

From: Neil Bullen 03-Apr-02 23:42


Subject: Re : How do you alter NOLOGGING in lob index partition

Using alter index on a lob segment index results in error ORA-22864 (cannot ALTER
or DROP LOB indexes). The solution I found was to alter the lob caching setting,
even though dba_lobs already showed the CACHE and LOGGING settings to be 'YES':
by issuing the ALTER TABLE <tablename> MODIFY LOB(<lobname>) (CACHE); command,
all partitions of the associated index were changed to LOGGING. What threw me was
that the CACHE and LOGGING settings in dba_lobs were already set correctly;
resetting these again was the key.

--------------------------------------------------------------------------------

From: Oracle, Rowena Serna 09-Apr-02 02:46


Subject: Re : How do you alter NOLOGGING in lob index partition

Thanks for updating.

Regards,
Rowena Serna
Oracle Corporation

30.3.4 exp/imp errors and LOBS:
-------------------------------

Note 1:
-------

Doc ID: Note:48023.1


Subject: OERR: IMP 64 Definition of LOB was truncated by export
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 07-NOV-1997
Last Revision Date: 26-MAR-2001

Error:  IMP-00064: Definition of LOB was truncated by export
---------------------------------------------------------------------------
Cause:  While producing the dump file, Export was unable to write the entire
        contents of a LOB. Import is therefore unable to reconstruct the
        contents of the LOB. The remainder of the import of the current table
        will be skipped.
Action: Delete the offending row in the exported database and retry the
        export.

Note 2:
-------

An export or import of a table with a Large Object (LOB) column performs more
slowly than an export or import of a table without LOB columns:
-- create two tables: TESTTAB1 with a VARCHAR2 column, and TESTTAB2 with a
-- CLOB column:
connect / as sysdba
create table scott.testtab1 (nr number, txt varchar2(2000));
create table scott.testtab2 (nr number, txt clob);
-- populate both tables with the same 500,000 rows:
declare
x varchar2(50);
begin
for i in 1..500000 loop
x := 'This is a line with the number: ' || i;
insert into scott.testtab1 values(i,x);
insert into scott.testtab2 values(i,x);
commit;
end loop;
end;
/
-- export both tables:
% exp system/manager file=exp_testtab1.dmp tables=scott.testtab1 direct=y
% exp system/manager file=exp_testtab1a.dmp tables=scott.testtab1
% exp system/manager file=exp_testtab2.dmp tables=scott.testtab2

             No CLOB      No CLOB      With CLOB
             DIRECT       CONVENTIONAL column
             ------------ ------------ ------------
8.1.7.4.0    0:13         0:20         7:49
9.2.0.4.0    0:14         0:18         7:37
9.2.0.5.0    0:12         0:15         7:03
10.1.0.2.0   0:16         0:31         7:15

Note 3:
-------

Doc ID: Note:157024.1 Content Type: TEXT/X-HTML


Subject: Insert/Import of Table with Lob Fails IMP-00003 ORA-3237
Creation Date: 24-MAY-2001
Type: PROBLEM Last Revision Date: 21-OCT-2003
Status: PUBLISHED

fact: Oracle Server - Enterprise Edition


fact: Import Utility (IMP)
symptom: Import fails with error
symptom: Insert fails
symptom: Table with LOB column
symptom: Locally managed tablespace
symptom: IMP-00003: ORACLE error %lu encountered
symptom: IMP-00017: following statement failed with ORACLE error %lu:
symptom: ORA-03237: Initial Extent of specified size cannot be allocated
cause: Extent size specified for the tablespace is not large enough.

fix:

For LOBS, ensure that the extent size specification in the tablespace is least
three times the db_block_size.

For example:
If the db_block_size is 8192, then the extent size for the tablespace should be
at least 24576.

Explanation:
Certain objects may require larger extents by virtue of how they are built
internally (Example: an RBS requires at least four blocks and a LOB at least
three).
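
For example, a locally managed tablespace meeting that requirement for an 8K
block size could be created along these lines (a sketch only; the file name and
sizes are illustrative, and 64K comfortably exceeds the 3-block minimum):

CREATE TABLESPACE lob_data
DATAFILE '/u01/oradata/lob_data01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;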

References:
<Bug:1186625>
SQL Reference Guide, Create Tablespace

Note 4:
-------

Doc ID: Note:211721.1 Content Type: TEXT/X-HTML


Subject: Unable to Import Tables with LOB Columns Creation Date: 13-SEP-2002

Type: PROBLEM Last Revision Date: 03-OCT-2003


Status: PUBLISHED

fact: Oracle Server - Enterprise Edition 9+-


fact: Oracle Server - Enterprise Edition 8.1
fact: Oracle Server - Enterprise Edition 8
fact: Import Utility (IMP)
symptom: Import fails
symptom: ORA-01658: unable to create INITIAL extent for segment in
tablespace '%s'
symptom: ORA-01652: unable to extend temp segment by %s in tablespace %s
symptom: Table contains LOB column
symptom: Problem does not occur for tables without LOB columns

cause: No LOB storage specifications were specified on the table creation
for those tables with LOB columns. LOB data is stored both within and outwith
the table depending on how much data the column contains.

A new database was created and the data reimported into a tablespace with
1.7GB default initial extent size. The LOB storage outwith the table defaults
to the initial extent of the tablespace and this storage requirement could not
be fulfilled.

fix:

As a user with dba privileges issue

alter tablespace <tablespace_name> default storage (initial <x>M);

where <tablespace_name> and <x> are replaced with appropriate values.

See also :
Note:1074731.6 ORA-01658 During 'Create Table' Statement

Note 5:
-------

Doc ID: Note:197699.1


Subject: "IMP-00003 ORA-00959 ON IMPORT OF TABLE WITH CLOB DATATYPES"
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 31-MAY-2002
Last Revision Date: 29-AUG-2002

Problem Description
-------------------
You are attempting to import a table that has CLOB datatype and you receive the
following errors:
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace <tablespace_name> does not exist

Solution Description
--------------------
Create the table that has CLOB datatypes before the import, specifying
tablespaces that exist on the target system, and import using IGNORE=Y.


Here is a simple example where you can get this problem and how to resolve it:
I have a user "TEST" whose default tablespace is "USERS".
Step-1: Create tst Tablespace
=================================
SQL> create tablespace tst datafile 'c:\temp\tst1.dbf' size 5m;

Tablespace created.
Step-2: Create table with CLOB datatype by login to "TEST" user
=================================================================
SQL> CREATE TABLE "TEST"."PX2000" ("ID" NUMBER(*,0), "SUBMITDATE" DATE,
"COMMENTS" VARCHAR2(4000),"RECOMMENDEDTIMELONG" CLOB)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1)
TABLESPACE "TST" LOGGING LOB ("RECOMMENDEDTIMELONG")
STORE AS (TABLESPACE "TST" ENABLE STORAGE IN ROW CHUNK 8192
PCTVERSION 10 NOCACHE
STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1)) ;

SQL> select table_name,tablespace_name from user_tables
  2  where table_name='PX2000';

TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
PX2000 TST

SQL> select username,default_tablespace from user_users;


USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
TEST USERS

Step-3: Export the Table
=========================
exp test/test file=px2000.dmp tables=px2000
. . exporting table    PX2000    0 rows exported

Step-4: Drop the "TST" tablespace including contents:
Please note that 'AND datafiles' is a new option in version 9i.
Omit this clause if running version prior to 9i.
============================================================
SQL> drop tablespace tst including contents and datafiles;
Tablespace dropped.
Step-5: Import the table back to test schema
==============================================
imp test/test file=px2000.dmp tables=px2000
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "PX2000" ("ID" NUMBER(*,0), "SUBMITDATE" DATE, "COMMENTS"
VARC" "HAR2(4000), "RECOMMENDEDTIMELONG" CLOB) PCTFREE 10 PCTUSED 40
INITRANS 1 M" "AXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1)

TABLESPACE" " "TST" LOGGING LOB ("RECOMMENDEDTIMELONG")


STORE AS (TABLESPACE "TST" ENAB" "LE STORAGE IN ROW CHUNK 8192
PCTVERSION 10 NOCACHE STORAGE(INITIAL 65536 F" "REELISTS 1 FREELIST GROUPS
1))"
IMP-00003: ORACLE error 959 encountered ORA-00959: tablespace 'TST' does not
exist
Import terminated successfully with warnings.

Step-6: Workaround is to extract the DDL from the dumpfile, change the tablespace
to the target database, create the table manually, and import with the ignore=y option
==================================================================================
% imp test/test file=px2000.dmp full=y show=y log=<logFile>

Step-7: Use the logFile to pre-create the table, then ignore object creation
errors.
====================================================================================
% imp test/test file=px2000.dmp full=y ignore=y

Explanation
-----------
For most DDL (except for partitioned tables and tables with CLOB datatypes),
import will automatically create the objects in the user's default tablespace if
the specified tablespace does not exist. For DDL of tables with CLOB datatypes
and partitioned tables, an IMP-00003 and ORA-00959 will result if the tablespace
does not exist in the target database.
References
----------
[NOTE:1058330.6] "IMP-00003 ORA-00959 ON IMPORT OF PARTITIONED TABLE"
[BUG:1982168] "IMP-3 / ORA-959 importing table with CLOB using IGNORE=Y into
variable width charset DB"
[BUG:2398272] "IMPORT TABLE WITH CLOB DATATYPE FAILS WITH IMP-3 AND ORA-959"
Oracle Utilities Manual

Note 6:
-------

Displayed below are the messages of the selected thread.


Thread Status: Closed
From: Helmut Daiminger 12-Dec-00 21:50
Subject: MOVE table with LOB column to another tablespace

RDBMS Version: 8.1.6.1.2


Operating System and Version: Win2k, SP1
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

MOVE table with LOB column to another tablespace

Hi!

I'm having a problem here: I want to move a table with a LOB column (i.e. LOB
index segment)
to a different tablespace. In the beginning the table and the LOB segment were in
the USERS
tablespace.
I then exported the table using the EXP tool. Then I revoked the user's quota to
the
USERS tablespace and only gave him quota on the default tablespace.

Then I run IMP and import that LOB-table. The table gets recreated in the new
tablespace,
but the creation of the LOB index fails with an error message that I don't have
privileges
to write to the USERS tablespace.

How do I completey move the table and the LOB index segment to a new tablespace?

This is 8.1.6 on Windows 2000 Server.

Thanks,
Helmut

From: Oracle, Ken Robinson 14-Dec-00 21:05


Subject: Re : MOVE table with LOB column to another tablespace

I believe you can do the following:

ALTER TABLE foo MOVE
TABLESPACE new_tbsp STORAGE(new_storage)
LOB (lobcol) STORE AS lobsegment
(TABLESPACE new_tbsp STORAGE (new_storage));

Regards,
Ken Robinson
Oracle Server EE Analyst
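
A caveat worth adding here (my note, not part of the original thread): ALTER
TABLE ... MOVE marks the table's regular indexes UNUSABLE, so each index needs
a rebuild afterwards, for example:

-- foo_idx is an illustrative name; rebuild every index on the moved table
ALTER INDEX foo_idx REBUILD;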

Note 7:
-------

Doc ID: Note:176898.1 Content Type: TEXT/X-HTML


Subject: Import Fails with IMP-00032 and IMP-00008 Creation Date: 15-FEB-2002

Type: PROBLEM Last Revision Date: 24-JUN-2003


Status: PUBLISHED

fact: Oracle Server - Enterprise Edition


fact: Import Utility (IMP)
symptom: IMP-00032: SQL statement exceeded buffer length
symptom: IMP-00008: unrecognized statement in the export file

cause: The insert statement run when importing exceeds the default or
specified buffer size.

For import of tables containing LONG, LOB, BFILE, REF, ROWID, LOGICALROWID
or type columns, rows are inserted individually. The size of the buffer must be
large enough to contain the entire row inserted.

fix:

Increase the buffer size, and make sure that it is big enough to contain the
biggest row in the table(s) imported.
For example: imp system/manager file=test.dmp full=y log=test.log buffer=
10000000

Note 8:
-------

For tables with LOB columns, make sure the tablespace already exists in the
target database before the import is done.
Also, make sure the extent size is large enough.

Note 9:
-------

With imp/exp I hit a problem that on remote database users tablespace is called
'users', while on local it's 'users_data'. Now I have to go to documentation to
figure out if those stupid switches would save the day...

Also with schlobs the elegant
insert into t2 select * from t1@remote_db_link;
doesn't work.

I wonder why export/import is not plain sqlplus statements where I can just
specify the right 'where' clause...

Followup:
Yes, when you deal with multi segment objects (tables with LOBS, partitioned
table, IOTs with overflows for example), using EXP/IMP is complicated if the
target database doesn't have the same tablespace structure. That is because the
CREATE statement contains many tablespaces and IMP will only "rewrite" the first
TABLESPACE in it (it will not put multi-tablespace objects into a single
tablespace; the object creation will fail if the tablespaces needed by that
create do not exist).

I dealt with this issue in my book, in there, I recommend you do an:

imp .... full=y indexfile=temp.sql

In temp.sql, you will have all of the DDL for indexes and tables. Simply delete
all index creates and uncomment any table creates you want. Then, you can
specify the tablespaces for the various components -- precreate the objects and
run imp with ignore=y. The objects will now be populated.
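
As a sketch of that flow (the file, user, and password names below are only
examples):

imp system/manager file=exp.dmp full=y indexfile=temp.sql
-- edit temp.sql: delete the index creates, uncomment the table creates,
-- and adjust the TABLESPACE clauses as needed
sqlplus scott/tiger @temp.sql
imp system/manager file=exp.dmp full=y ignore=y
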

You are incorrect with the "schlobs" comment (both in spelling and in
conclusion).

scott@ORA815.US.ORACLE.COM> create table t ( a int, b blob );

Table created.

scott@ORA815.US.ORACLE.COM> desc t
Name Null? Type
----------------------------------- -------- ------------------------
A NUMBER(38)
B BLOB

scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

no rows selected

scott@ORA815.US.ORACLE.COM> insert into t select x, y from t@ora8i.world;

1 row created.

scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

A DBMS_LOB.GETLENGTH(B)
---------- ---------------------
1 1000011

So, the "elegant insert into select * from" does work.

imp/exp can be plain sqlplus statements -- use indexfile=y (if you get my book,
I use this over and over in various places to get the DDL). In 9i, there is a
simple stored procedure interface as well.

Note 10:
--------

Tom,
Without using export/import (show=y), is there any query to find out in which
tablespace the LOB column is stored?
Thanks in advance.

Followup:
select * from user_segments

you can join user_segments to user_lobs if you like as well.

user_segments will give you tablespace info.


user_lobs will give you the lob segment name.
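
For example, a join along those lines (the table name MYTABLE is only a
placeholder):

SELECT l.column_name, l.segment_name, s.tablespace_name
FROM user_lobs l, user_segments s
WHERE l.segment_name = s.segment_name
AND l.table_name = 'MYTABLE';
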

Note 11:
--------

IMP-00003 ORACLE error number encountered

Cause: Import encountered the referenced Oracle error.

Action: Look up the Oracle message in the ORA message chapters of this manual,
and take appropriate action.

IMP-00020 long column too large for column buffer size (number)

Cause: The column buffer is too small. This usually occurs when importing LONG
data.

Action: Increase the insert buffer size 10,000 bytes at a time (for example).
Use this step-by-step approach because a buffer size that is too large may cause a
similar problem.

IMP-00064 Definition of LOB was truncated by export

Cause: While producing the dump file, Export was unable to write the entire
contents of a LOB.
Import is therefore unable to reconstruct the contents of the LOB. The remainder
of the import
of the current table will be skipped.

Action: Delete the offending row in the exported database and retry the export.

IMP-00070 Lob definitions in dump file are inconsistent with database.

Cause: The number of LOBS per row in the dump file is different than the number of
LOBS per row
in the table being populated.

Action: Modify the table being imported so that it matches the column attribute
layout
of the table that was exported.

Note 12:
--------

we have a 10 Mill rows table with a BLOB column in it


the size of the lob varies from 1K upward to a few megabytes, but most are in
the 2K-3K range.
So currently we have ENABLE STORAGE IN ROW,
and we want to do DISABLE STORAGE IN ROW because
we are starting to do a lot of range scans on the table.

When we export/import the table and during import


have moved all the lobs out of line.. the total space
used during the import bloated 5 times from
a 2GIG tablespace into a 10GIG tablespace??? Why?

The database block size is 8K, running 9.2.0.6 with
automatic segment space management in the tablespace:

CREATE TABLESPACE "BLOB_DATA" LOGGING


DATAFILE 'D:\ORACLE\ORADATA\TESTDB\BLOB_DATA01.ora' SIZE 2048M
REUSE AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M
SEGMENT SPACE MANAGEMENT AUTO
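
No answer is recorded here, but a plausible explanation is that with DISABLE
STORAGE IN ROW every LOB value occupies at least one CHUNK (one 8K block in
this case) in the LOB segment, regardless of its actual size, so 2K-3K LOBs
can easily take three to four times their inline footprint; the 8M uniform
extents add further rounding per segment. The chunk and in-row settings can
be checked with something like (MYTABLE is only a placeholder):

SELECT table_name, column_name, chunk, in_row
FROM dba_lobs
WHERE table_name = 'MYTABLE';
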

Note 13:
--------

To relocate tables using lobs:

Method 1:
=========

1. export data using exp cmd


2. drop all tables
3. create a new LOB tablespace
4. re-create all the tables with the LOB Storage clause, for example

create table FOO (
 col1 NUMBER
,col2 BLOB
)
tablespace DATA_TBLSPCE
LOB ( col2 ) STORE AS col2_blob
(
tablespace BLOB_TBLSPCE disable storage in row
chunk 8192 pctversion 10 cache
storage (initial 64K next 64K
         minextents 1 maxextents unlimited
         pctincrease 0
        )
);

5. import data with ignore=y
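
For example, steps 1 and 5 could look like this (user and file names are only
examples):

$ exp scott/tiger file=foo.dmp tables=FOO
$ imp scott/tiger file=foo.dmp tables=FOO ignore=y

With ignore=y the pre-created table from step 4 is kept and only the rows are
loaded into it.
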

Method 2:
=======

Doc ID: Note:130814.1


Subject: How to move LOB Data to Another Tablespace
Type: HOWTO
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 19-DEC-2000
Last Revision Date: 05-AUG-2003

Purpose
-------

The purpose of this article is to provide the syntax for altering the storage
parameters of a table that contains one or more LOB columns.

Scope & Application


-------------------

This article will be useful for Oracle DBAs, Developers, and Support Analysts.

How to move LOB Data to Another Tablespace


------------------------------------------
If you want to make no other changes to the table containing a lob other than
to rebuild it, use:

ALTER TABLE foo MOVE;

This will rebuild the table segment. It does not affect any of the lob
segments associated with the lob columns which is the desired optimization.

If you want to change one or more of the physical attributes of the table
containing
the lob, however no attributes of the lob columns are to be changed,
use the following syntax:

ALTER TABLE foo MOVE TABLESPACE new_tbsp STORAGE(new_storage);

This will rebuild the table segment. It does not rebuild any of the lob
segments associated with the lob columns which is the desired optimization.

If a table containing a lob needs no changes to the physical attributes of the


table segment, but you want to change one or more lob segments; for example,
you want to move the lob column to a new tablespace as well as the lob's
storage attributes, use the following syntax:

ALTER TABLE foo MOVE LOB(lobcol) STORE AS lobsegment


(TABLESPACE new_tbsp STORAGE (new_storage));

Note that this will also rebuild the table segment (although, in this case, in the
same tablespace and without changing the table segment physical attributes).

If a table containing a lob needs changes to both the table attributes as well
as the lob attributes then use the following syntax:

ALTER TABLE foo MOVE


TABLESPACE new_tbsp STORAGE(new_storage)
LOB (lobcol) STORE AS lobsegment
(TABLESPACE new_tbsp STORAGE (new_storage));

Explanation
-----------
The 'ALTER TABLE foo MODIFY LOB (lobcol) ...' syntax does not allow
for a change of tablespace

ALTER TABLE my_lob


MODIFY LOB (a_lob)
(TABLESPACE new_tbsp);

(TABLESPACE new_tbsp)
*
ORA-22853: invalid LOB storage option specification

You have to use the MOVE keyword instead as shown in the examples.
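
After the move you can verify where the lob data now lives, for example (the
table name FOO is only a placeholder):

SELECT column_name, segment_name, tablespace_name
FROM dba_lobs
WHERE table_name = 'FOO';
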

References
----------

Note 66431.1 LOBS - Storage, Redo and Performance Issues


Bug 747326 ALTER TABLE MODIFY LOB STORAGE PARAMETER DOES'T WORK

Additional Search Words


-----------------------

ora-1735 ora-906 ora-2143 ora-22853 clob nclob blob

Method 3:
=========

MOVE doesn't support LONG datatypes. You can either convert them to LOBs and
then move, or do exp/imp of the table with the LONG column, or create the
table with the LONG in the locally managed tablespace and copy the data from
the old table using a PL/SQL loop, or use CTAS with TO_LOB in the locally
managed tablespace.

SQL> desc t
Name Null? Type
----------------------------------------- -------- ----------------------------
X NUMBER(38)
Y LONG

SQL> alter table t move;


alter table t move
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype

-- You can create the new table in the Locally Managed tablespace

SQL> create table t_lob tablespace users as select x,to_lob(y) y from t;

Table created.

SQL> desc t_lob


Name Null? Type
----------------------------------------- -------- ----------------------------
X NUMBER(38)
Y CLOB

-- Now you can drop the old table and rename the new table

-- Or you can move the LOB table to the locally managed tablespace

SQL> alter table t_lob move;

Table altered.

-- Or you can precreate the new table with LONG in the locally managed tablespace
and do exp/imp

-- export the Long table


SQL> !exp / file=t.dmp tables=t compress=n

Export: Release 9.2.0.3.0 - Production on Tue Mar 2 09:32:30 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...


. . exporting table T 2 rows exported
Export terminated successfully without warnings.

-- just rename the old table for reference purposes


SQL> rename t to tbak;

Table renamed.

-- Create the LONG table in the locally managed tablespace

SQL> create table t(x int,y long) tablespace users;

Table created.

-- now import the data

SQL> !imp / file=t.dmp tables=t ignore=y

Import: Release 9.2.0.3.0 - Production on Tue Mar 2 09:33:43 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production


With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path


import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing OPS$ORACLE's objects into OPS$ORACLE
. . importing table "T" 2 rows imported
Import terminated successfully without warnings.

SQL> desc t
Name Null? Type
----------------------------------------- -------- ----------------------------
X NUMBER(38)
Y LONG

Note 14:
--------

Doc ID: Note:225337.1 Content Type: TEXT/PLAIN
Subject: ORA-22285 ON ACCESSING THE BFILE COLUMN OF A TABLE Creation Date: 08-JAN-2003
Type: PROBLEM Last Revision Date: 17-DEC-2004
Status: PUBLISHED
Fact(s)
~~~~~~~

*The directory alias for the relevant directory exists.

*This condition might be encountered in general or particularly after


successful export/import of 'table with bfile column' from one schema
to another.

*Non-bfile columns of the table could be accessed but not the bfile
column.

Symptom(s)
~~~~~~~~~~

Accessing the bfile column of table gives the following errors:

ORA-22285: non-existent directory or file for .....


ORA-06512: at "SYS.DBMS_LOB", line ...

Diagnosis:
~~~~~~~~~~

-- create the exporting user schema and the table with bfile data--

SQL>connect system/manager

SQL>create user test2 identified by test2 default tablespace users


quota 50 m on users
/
SQL>grant connect, create table, create any directory to test2
/
SQL>conn test2/test2

SQL>create table test_lobs (


c1 number,
c2 clob,
c3 bfile,
c4 blob
)
LOB (c2) STORE AS (ENABLE STORAGE IN ROW)
LOB (c4) STORE AS (DISABLE STORAGE IN ROW)
/

create two files (rec2.txt , rec3.txt) using OS utilities in some


directory say ( /tmp )

--create the directory alias --

SQL>create directory tmp_dir as '/tmp'


/

-- Populate the table--

SQL>insert into test_lobs values (1,null,null,null)


/
SQL>insert into test_lobs values
(2,EMPTY_CLOB(),BFILENAME('TMP_DIR','rec2.txt'),EMPTY_BLOB())
/
SQL>insert into test_lobs values (3,'Some data for record3.',
BFILENAME('TMP_DIR','rec2.txt'),
'48656C6C6F'||UTL_RAW.CAST_TO_RAW('there!'))
/

-- access the table--

SQL>column len_c2 format 9999


SQL>column len_c3 format 9999
SQL>column len_c4 format 9999

SQL>select c1, DBMS_LOB.GETLENGTH(c2) len_c2,


DBMS_LOB.GETLENGTH(c3) len_c3,
DBMS_LOB.GETLENGTH(c4) len_c4 from test_lobs
/

        C1 LEN_C2 LEN_C3 LEN_C4
---------- ------ ------ ------
         1
         2      0    124      0
         3     22    124     11

-- carry out the schema level export--

$ exp system/manager file=exp44.dmp log=logr44.log owner=test2

IMPORTING DATABASE:

create same two files (rec2.txt , rec3.txt) using OS utilities in some


directory say ( /tmp )

--create the directory alias --


SQL>conn system/manager

SQL>create directory tmp_dir as '/tmp'


/

-- create the importing user schema--

SQL>create user test3 identified by test3 default tablespace users


quota 50 m on users
/
SQL>grant connect, create table, create any directory to test3
/

--carry out the successful schema level import--

$ imp system/manager fromuser=test2 touser=test3 file=exp44.dmp log=log44.log

--try to access the imported table as below (same statement as by the


exporting user--

SQL>select c1, DBMS_LOB.GETLENGTH(c2) len_c2, DBMS_LOB.GETLENGTH(c3) len_c3,


DBMS_LOB.GETLENGTH(c4) len_c4 from test_lobs
/

ERROR:
ORA-22285: non-existent directory or file for GETLENGTH operation
ORA-06512: at "SYS.DBMS_LOB", line 547

-- However non bfile columns could be accessed--

Cause
~~~~~

The importing user lacks the read access on the corresponding directory/
directory alias.

Solution(s)
~~~~~~~~~~~

grant read access on the corresponding directory to the user who tries to
access the bfile table as below:

SQL> conn system/manager


Connected.
SQL> grant read on directory tmp_dir to test3; ( please see the example
above )

Once the read permission is granted ,the bfile column of the said table
is accessible since the corresponding directory (/alias) is accessible.

References:
~~~~~~~~~~
[NOTE:66046.1] LOBs, Longs, and other Datatypes

Note 15:
--------

Doc ID: Note:279722.1


Subject: IMPORT OF TABLE WITH LOB GENERATES CORE DUMP
Type: PROBLEM
Status: MODERATED
Content Type: TEXT/X-HTML
Creation Date: 31-JUL-2004
Last Revision Date: 02-AUG-2004

The information in this article applies to:


Oracle Server - Enterprise Edition - Version: 9.2.0.3
This problem can occur on any platform.

Symptoms
IMPORT OF TABLE WITH LOB GENERATES CORE DUMP
Cause
<Bug:3091499>

Importing a table having a clob created with chunksize = 32k

Error Details:
-------------

. importing DBAPIDB1's objects into DBAPIDB1


. . importing table "TE2006"Segmentation fault

Trace from the Core Dump:


------------------------
lmmstrmlr 44
lmmstmrg D4
lmmstmrg D4
lmmstfree 104
lmmfree C0
impmfr 24
impplb 5BC
impins 22B8
do_insert 48C
imptabwrk F4
impdta 41C
impdrv 2D68
main 14
__start 94
Fix
FIX:
---
Apply the patch for Bug:3091499
WORKAROUND:
----------
Before import, create the table with chunksize <= 16K and run import setting
ignore=y
References
<BUG:3091499> - Import Of Table With Lob Generates Core Dump

Note 16: keep LOBs at a manageable size.
-------------------------------------

(1) Look at PCTVERSION:

Since the LOB segments are usually very large, they are treated differently from
other columns. While other columns
can be guaranteed to give consistent reads, these columns are not. This is
because, it is difficult to manage
with LOB data rollback segments due to their size unlike other columns. So they do
not use rollback segments.
Usually only one copy exists, so the queries reading that column may not get
consistent reads while
other queries modify them. In these cases, the other queries will get "ORA-22924
snapshot too old" errors.

To maintain read consistency Oracle creates new LOB page versions every time a lob
changes. PCTVERSION is
the percentage of all used LOB data space that can be occupied by old versions of
LOB data pages. As soon as
old versions of LOB data pages start to occupy more than the PCTVERSION amount of
used LOB space,
Oracle tries to reclaim the old versions and reuse them. In other words,
PCTVERSION is the percent of used
LOB data blocks that is available for versioning old LOB data. The PCTVERSION can
be set to the percentage
of LOB's that are occasionally updated.

Often a table's LOB column gets its data uploaded only once, but is read
multiple times.
Hence it is not necessary to keep older versions of LOB data. It is recommended
that this value be changed to "0".

By default PCTVERSION is set to 10%, so most instances usually have it set
to 10%;
it must be set to 0% explicitly.
system.

Use the following query to find out currently set value for PCTVERSION:

SQL> select PCTVERSION from dba_lobs where TABLE_NAME = 'table_name' and


COLUMN_NAME='column_name';

PCTVERSION
----------
10

PCTVERSION can be changed using the following SQL (it can be run anytime in a
running system):

ALTER TABLE FND_LOBS MODIFY LOB (FILE_DATA) ( PCTVERSION 0 );
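
As an aside: from 9iR2 onwards, when the database uses automatic undo
management, a time-based RETENTION setting can be used instead of PCTVERSION
to control how long old LOB versions are kept. A hedged sketch of the same
change for that case:

ALTER TABLE FND_LOBS MODIFY LOB (FILE_DATA) ( RETENTION );
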

Note 17: difference 9iR1 9iR2 with respect to Locally managed tablespace
------------------------------------------------------------------------

Doc ID: Note:159078.1


Subject: Cannot Create Table with LOB Column in Locally Managed Tablespace
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 26-SEP-2001
Last Revision Date: 04-AUG-2004

fact: Oracle Server - Enterprise Edition 9.0.1


symptom: Creating new OEM repository fails
symptom: Create table SMP_LMV_SEARCH_OBJECT fails
symptom: ORA-03001: unimplemented feature
symptom: Table with LOB column
cause: You try to create a LOB segment in a bitmapped (locally managed)
tablespace.

This is a limitation for bitmapped segments in 9i. This is being documented in


the SQL Reference- the restriction will be lifted in 9i Release 2.

fix:

Create the table in a tablespace that was created with clause


SEGMENT SPACE MANAGEMENT MANUAL

Note 18:
--------

In a trace file you either get

ORA-00600: internal error code, arguments: [kkdoilsn1], [], [], [], [], [], [], []
or
ORA-00600: internal error code, arguments: [15265], [], [], [], [], [], [], []

description:
in a 9.2 database, a table with lob and indexsegments was moved to another
tablespace.

Explanation:

2405258 Dictionary corruption / OERI:15265 from MOVE LOB to existing segment name

This is Bug 2405258, fixed in 9.2.0.2


Corruption
LOB Related (CLOB/BLOB/BFILE)
Dictionary corruption / ORA-600 [15265] from MOVE LOB toexisting segment name.
Eg:
ALTER TABLE mytab MOVE LOB (artist_bio) STORE AS lobsegment (STORAGE(INITIAL 1M
NEXT 1M));
corrupts the dictionary if "lobsegment" already exists.

Bug 2405258 Dictionary corruption / OERI:15265 from MOVE LOB to existing segment
name
This note gives a brief overview of bug 2405258.
Affects:
Product (Component) Oracle Server (RDBMS)
Range of versions believed to be affected Versions >= 8 but < 10G
Versions confirmed as being affected 9.2.0.1
Platforms affected Generic (all / most platforms affected)
Fixed:
This issue is fixed in 9.2.0.2 (Server Patch Set) 10G Production Base Release
Symptoms:
Corruption (Dictionary) <javascript:taghelp('TAGS_CORR_DIC')>
Internal Error may occur (ORA-600) <javascript:taghelp('TAGS_OERI')>
ORA-600 [15265]
Related To:
Datatypes - LOBs (CLOB/BLOB/BFILE)
Description
Dictionary corruption / ORA-600 [15265] from MOVE LOB to
existing segment name.
Eg: ALTER TABLE mytab MOVE LOB (artist_bio)
STORE AS lobsegment (STORAGE(INITIAL 1M NEXT 1M));
corrupts the dictionary if "lobsegment" already exists.

=====================
31. BLOCK CORRUPTION:
=====================

Note 1:
=======

Doc ID: Note:47955.1 Content Type: TEXT/PLAIN
Subject: Block Corruption FAQ Creation Date: 14-NOV-1997
Type: FAQ Last Revision Date: 17-AUG-2004
Status: PUBLISHED
ORACLE SERVER

-------------
BLOCK CORRUPTION
----------------
FREQUENTLY ASKED QUESTIONS
--------------------------
25-JAN-2000

CONTENTS
--------
1. What does the error ORA-01578 mean?
2. How to determine what object is corrupted?
3. What are the recovery options if the object is a table?
4. What are the recovery options if the object is an index?
5. What are the recovery options if the object is a rollback segment?
6. What are the recovery options if the object is a data dictionary object?
7. What methods are available to assist in pro-actively identifying corruption?
8. How can corruption be prevented?
9. What are the common causes of corruption?

QUESTIONS & ANSWERS

1. What does the error ORA-01578 mean?

An Oracle data block is written in an internal binary format which conforms


to a defined structure. The size of the physical data block is determined
by the "init.ora" parameter DB_BLOCK_SIZE set at the time of database
creation. The format of the block is similar regardless of the type of data
contained in the block.

Each formatted block on disk has a wrapper which consists of a block header
and footer. Unformatted blocks should be zero throughout. Whenever a block
is read into the buffer cache, the block wrapper information is checked for
validity. The checks include verifying that the block passed to Oracle by
the operating system is the block requested (data block address) and also
that certain information stored in the block header matches information
stored in the block footer in case of a split (fractured) block.

On a read from disk, if an inconsistency in this information is found, the


block is considered to be corrupt and

ORA-01578: ORACLE data block corrupted (file # %s, block # %s)

is signaled where file# is the file ID of the Oracle


datafile and block# is the block number, in Oracle blocks, within that file.
However, this does not always mean that the block on disk is truely
physically corrupt. That fact needs to be confirmed.

2. How to determine what object is corrupted?

The following query will display the segment name, type, and owner:

SELECT SEGMENT_NAME, SEGMENT_TYPE, OWNER


FROM SYS.DBA_EXTENTS
WHERE FILE_ID = <f>
AND <b> BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

Where <f> is the file number and <b> is the block number reported in the
ORA-01578 error message.

Suppose block 82817 from table 'USERS' is corrupt:

SQL> select extent_id, block_id, blocks from dba_extents where


segment_name='USERS';

EXTENT_ID BLOCK_ID BLOCKS


---------- ---------- ----------
0 82817 8
1 82825 8
2 82833 8
3 82841 8
4 82849 8

SQL> SELECT SEGMENT_NAME, SEGMENT_TYPE, OWNER


2 FROM SYS.DBA_EXTENTS
3 WHERE FILE_ID = 9
4 AND 82817 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

SEGMENT_NAME       SEGMENT_TYPE       OWNER
------------------ ------------------ ------------------
USERS              TABLE              VPOUSERDB

3. What are the recovery options if the object is a table?

The following options exist for resolving non-index block corruption in a


table which is not part of the data dictionary:

o Restore and recover the database from backup (recommended).


o Recover the object from an export.
o Select the data out of the table bypassing the corrupted block(s).

If the table is a Data Dictionary table, you should contact Oracle Support
Services. The recommended recovery option is to restore the database from
backup.

[NOTE:28814.1] contains information on how to handle ORA-1578 errors in Oracle7.

References:
-----------
[NOTE:28814.1] TECH ORA-1578 and Data Block Corruption in Oracle7

4. What are the recovery options if the object is an index?

If the object is an index which is not part of the data dictionary and the
base table does not contain any corrupt blocks, you can simply drop and
recreate the index.

If the index is a Data Dictionary index, you should contact Oracle Support
Services. The recommended recovery option is to restore the database from
backup. There is a possibility you might be able to drop the index and then
recreate it based on the original create SQL found in the administrative
scripts. Oracle Support Services will be able to make the determination as
to whether this is a viable option for you.

5. What are the recovery options if the object is a rollback segment?

If the object is a rollback segment, you should contact Oracle Support


Services. The recommended recovery option is to restore the database
from backup.

6. What are the recovery options if the object is a data dictionary object?

If the object is a Data Dictionary object, you should contact Oracle Support
Services. The recommended recovery option is to restore the database from
backup.

If the object is an index on a Data Dictionary table, you might be able to


drop the index and then recreate it based on the original create SQL found
in the administrative scripts. Oracle Support Services will be able to make
the determination as to whether this is a viable option.

7. What methods are available to assist in pro-actively identifying corruption?

ANALYZE TABLE/INDEX/CLUSTER ... VALIDATE STRUCTURE is a SQL command which


can be executed against a table, index, or cluster which scans every block
and reports a failure upon encountering any potentially corrupt blocks. The
CASCADE option checks all associated indices and verifies the 1 to 1
correspondence between data and index rows. This is the most detailed block
check available, but requires the database to be open.

DB Verify is a utility which can be run against a datafile of a database


that will scan every block in the datafile and generate a report identifying
any potentially corrupt blocks. DB Verify performs basic block checking
steps, however it does not provide the capability to verify the 1 to 1
correspondence between data and index rows. It can be run when the database
is closed.

Export will read the blocks allocated to each table being exported and
report any potential block corruptions encountered.

References:
-----------

[NOTE:35512.1] DBVERIFY - Database file Verification Utility (7.3.2 onwards)

8. How can corruption be prevented?

Unfortunately, there is no way to totally eliminate the risk of corruption.


You can only minimize the risk and plan accordingly.

9. What are the common causes of corruption?

o Bad I/O, H/W, Firmware.


o Operating System I/O or caching problems.
o Memory or paging problems.
o Disk repair utilities.
o Part of a datafile being overwritten.
o Oracle incorrectly attempting to access an unformatted block.
o Oracle or operating system bug.
Note 77587.1
discusses block corruptions in Oracle and how they are related
to the underlying operating system and hardware.

References:
-----------

[NOTE:77587.1] BLOCK CORRUPTIONS ON ORACLE AND UNIX

Note 2:
=======

ORA-00600: internal error code, arguments: [01578] [...] [...] [] [] [].


ORA-01578: Oracle data block corrupted (file ..., block ...).

Having encountered the Oracle data block corruption, we must firstly investigate
which database segment
(name and type) the corrupted block is allocated to. Chances are that the block
belongs either to an index
or to a table segment, since these two type of segments fill the major part of our
databases.
The following query will reveal the segment that holds the corrupted block
identified by
<filenumber> and <blocknumber> (which were given to you in the error message):

SELECT ds.*
FROM dba_segments ds, sys.uet$ e
WHERE ds.header_file=e.segfile#
and ds.header_block=e.segblock#
and e.file#=<filenumber>
and <blocknumber> between e.block# and e.block#+e.length-1;

If the segment turns out to be an index segment, then the problem can be very
quickly solved.
Since all the table data required for recreating the index is still accessible, we
can drop and recreate the index
(since the block will be reformatted when taken FROM the free-space list and reused
for the index).
If the segment turns out to be a table segment a number of options for solving the
problem are available:

- restore and recovery of datafile the block is in


- imp table
- sql

The last option involves using SQL to SELECT as much data as possible FROM the
current
corrupted table segment and save the SELECTed rows into a new table.
SELECTing data that is stored in segment blocks that preceede the corrupted block
can be easily done using a full table scan (via a cursor).
Rows stored in blocks after the corrupted block cause a problem.
A full table scan will never reach these. However these rows can still be
fetched using rowids (single row lookups).
2.1 Table was indexed

Using an optimizer hint we can write a query that SELECTs the rows FROM the table
via an index scan (using rowid's), instead of via a full table scan.
Let's assume our table is named X with columns a, b and c. And table X is indexed
uniquely on columns a and b by index X_I, the query would look like:

SELECT /*+index(X X_I) */ a, b, c


FROM X;

We must now exclude the corrupt block FROM being accessed to avoid the
internal exception ORA-00600[01578]. Since the block number is a substring
of the rowid, this can very easily be achieved:

SELECT /*+index(X X_I) */ a, b, c


FROM X
WHERE rowid not like <corrupt_block_number>||'.%.'||<file_number>;

But it is important to realize that the WHERE-clause gets evaluated right


after the index is accessed and before the table is accessed.
Otherwise we would still get the ORA-00600[01578] exception.
Using the above query as a subquery in an insert statement we can restore
all rows of still valid blocks to a new table.

Since the index holds the actual column values of the indexed columns we could
also use the index to restore all indexed columns of rows that reside in the
corrupt block.
The following query,

SELECT /*+index(X X_I) */ a, b


FROM X
WHERE rowid like <corrupt_block_number>||'.%.'||<file_number>;

retrieves only indexed columns a and b FROM rows inside the corrupt block.
The optimizer will not access the table for this query.
It can retrieve the column values using the index segment only.

Using this technique we are able to restore all indexed column values of the
rows inside the corrupt block, without accessing the corrupt block at all.
Suppose in our example that column c of table X was also indexed by index X_I2.
This enables us to completely restore rows inside the corrupt block.

First restore columns a and b using index X_I:

create table X_a_b(rowkey,a,b) as


SELECT /*+index(X X_I) */ rowid, a, b
FROM X
WHERE rowid like <corrupt_block_number>||'.%.'||<file_number>;

Then restore column c using index X_I2:

create table X_c(rowkey,c) as


SELECT /*+index(X X_I2) */ rowid, c
FROM X
WHERE rowid like <corrupt_block_number>||'.%.'||<file_number>;

And finally join the columns together using the restored rowid:
SELECT x1.a, x1.b, x2.c
FROM X_a_b x1, X_c x2
WHERE x1.rowkey=x2.rowkey;

In summary:
Indexes on the corrupted table segment can be used to restore all columns of all
rows
that are stored outside the corrupted data blocks.
Of rows inside the corrupted data blocks, only the columns that were indexed can
be restored.
We might even be able to use an old version of the table (via Import)
to further restore non-indexed columns of these records.

2.2 Table has no indexes

This situation should rarely occur since every table should have a primary key and
therefore a unique index.
However when no index is present, all rows of corrupted blocks should be
considered lost.
All other rows can be retrieved using rowid's.
Since there is no index we must build a rowid generator ourselves.
The SYS.UET$ table shows us exactly which extents (file#, startblock, endblock)
we need to inspect for possible rows of our table X.
If we make an estimate of the maximum number of rows per block for table X,
we can build a PL/SQL-loop that generates possible rowid's of records inside table
X.
By handling the 'invalid rowid' exception, and skipping the corrupted data block,
we can restore all rows except those inside the corrupted block.

declare
  v_rec           x%rowtype;
  e_invalid_rowid exception;
  pragma exception_init(e_invalid_rowid, -1410);
begin
  for v_uetrec in (SELECT file# file, block# start_block, block# + length# - 1 end_block
                   FROM sys.uet$
                   WHERE segfile# = 6 and segblock# = 64)  -- identifies our segment X
  loop
    for v_blk in v_uetrec.start_block .. v_uetrec.end_block
    loop
      if not (v_uetrec.file = 6 and v_blk = 886)  -- block 886 in file 6 is our corrupted block
      then
        for v_row in 0 .. 200  -- 200 is the maximum number of rows per block for segment X
        loop
          begin
            -- restricted rowid format: block.row.file (hex function: see appendix)
            SELECT a, b, c into v_rec
            FROM x
            WHERE rowid = chartorowid('0000'||hex(v_blk)||'.'||
                                      hex(v_row)||'.'||hex(v_uetrec.file));
            insert into x_saved(a, b, c) values (v_rec.a, v_rec.b, v_rec.c);
            commit;
          exception
            when e_invalid_rowid then null;  -- no row at this slot: skip it
          end;
        end loop; /*row-loop*/
      end if;
    end loop; /*blk-loop*/
  end loop; /*uet-loop*/
end;
/

The above code assumes that block id's never exceed 4 hexadecimal places.
A definition of the hex-function which is used in the above code can be found in
the appendix.

Note 3:
=======

Doc ID: Note:33405.1 Content Type: TEXT/PLAIN
Subject: Extracting Data from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231 Creation Date: 24-JAN-1996
Type: BULLETIN Last Revision Date: 13-SEP-2000
Status: PUBLISHED
This note is an extension to article [NOTE:28814.1] about handling
block corruption errors where the block wrapper of a datablock indicates
that the block is bad. (Typically for ORA-1578 errors).
The details here will not work if only the block internals are
corrupt (eg: for ORA-600 or other errors).

Please read [NOTE:28814.1] before reading this note.
Introduction
~~~~~~~~~~~~
This short article explains how to skip corrupt blocks on an object
either using the Oracle8i SKIP_CORRUPT table flag or the special
Oracle event number 10231 which is available in Oracle releases 7
through 8.1 inclusive.
The information here explains how to use these options.

Before proceeding you should:


a) Be certain that the corrupt block is on a USER table.
(i.e.: not a data dictionary table)
b) Have contacted Oracle Support Services and been advised to
use event 10231 or the SKIP_CORRUPT flag.
c) Have decided how you are to recreate the table.
Eg: Export , and disk space is available etc..
d) You have scheduled down-time to attempt the salvage
operation.
e) Have a backup of the database.
f) Have the SQL to rebuild the problem table, its indexes
constraints, triggers, grants etc...
This SQL should include relevant storage clauses.

What is event 10231 ?


~~~~~~~~~~~~~~~~~~~~~
This event allows Oracle to skip certain types of corrupted blocks
on full table scans ONLY hence allowing export or "create table as
select" type operations to retrieve rows from the table which are not
in the corrupt block. Data in the corrupt block is lost.

The scope of this event is limited for Oracle versions prior to


Oracle 7.2 as it only allows you to skip 'soft corrupt' blocks.
Most ORA 1578 errors are a result of media corruptions and in such
cases event 10231 is useless.

From Oracle 7.2 onwards the event allows you to skip many forms of
media corrupt blocks in addition to soft corrupt blocks and so is
far more useful. It is still *NOT* guaranteed to work.
[NOTE:28814.1] describes alternatives which can be used if this event fails.

What is the SKIP_CORRUPT flag ?


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In Oracle8i the functionality of the 10231 event has been externalised
on a PER-SEGMENT basis such that it is possible to mark a TABLE or
PARTITION to skip over corrupt blocks when possible. The flag is
set or cleared using the DBMS_REPAIR package. DBA_TABLES has a
SKIP_CORRUPT column which indicates if this flag is set for an
object or not.

Setting the event or flag


~~~~~~~~~~~~~~~~~~~~~~~~~
The event can either be set within the session or at database instance
level. If you intend to use a CREATE TABLE AS SELECT then setting
the event in the session may suffice. If you want to EXPORT the table
data then it is best to set the event at instance level, or set the
SKIP_CORRUPT table attribute if on Oracle8i.

Oracle8i
~~~~~~~~
Connect as a DBA user and mark the table as needing to skip
corrupt blocks thus:
execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>','<tablename>');

or for a table partition:


execute
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>','<tablename>'.'<partition>');

Now you should be able to issue a CREATE TABLE AS SELECT operation


against the corrupt table to extract data from all non-corrupt
blocks, or EXPORT the table.
Eg:
CREATE TABLE salvage_emp
AS SELECT * FROM corrupt_emp;

To clear the attribute for a table use:


execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>','<tablename>',
flags=>dbms_repair.noskip_flag);

execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('VPOUSERDB','USERS',
flags=>dbms_repair.noskip_flag);
Setting the event in a Session
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Connect to Oracle as a user with access to the corrupt table and
issue the command:

ALTER SESSION SET EVENTS


'10231 TRACE NAME CONTEXT FOREVER, LEVEL 10';

Now you should be able to issue a CREATE TABLE AS SELECT operation


against the corrupt table to extract data from all non-corrupt
blocks, but an export would still fail as the event is only set
within your current session.
Eg:
CREATE TABLE salvage_emp
AS SELECT * FROM corrupt_emp;

Setting the event at Instance level


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This requires that the event be added to the init$ORACLE_SID.ora file
used to start the instance:

shutdown the database

Edit your init<SID>.ora startup configuration file and ADD


a line that reads:

event="10231 trace name context forever, level 10"

Make sure this appears next to any other EVENT= lines in the
init.ora file.

STARTUP
If the instance fails to start check the syntax
of the event parameter matches the above exactly.
Note the comma as it is important.

SHOW PARAMETER EVENT


To check the event has been set in the correct place.
You should see the initial portion of text for the
line in your init.ora file. If not check which
parameter file is being used to start the database.

Select out the data from the table using a full table scan
operation.
Eg: Use a table level export
or create table as select.

Export Warning: If the table is very large then some versions of export
may not be able to write more than 2Gb of data to the
export file. See [NOTE:62427.1] for general information
on 2Gb limits in various Oracle releases.

Salvaging data from the corrupt block itself


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SKIP_CORRUPT and event 10231 extract data from good blocks but
skip over corrupt blocks. To extract information from the corrupt
block there are three main options:

- Select column data from any good indexes


This is discussed towards the end of the following 2 articles:
Oracle7 - using ROWID range scans [NOTE:34371.1]
Oracle8/8i - using ROWID range scans [NOTE:61685.1]

- See if Oracle Support can extract any data from HEX dumps of the
corrupt block.
- It may be possible to salvage some data using Log Miner

Once you have the data extracted


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once you have the required data extracted either into an export file
or into another table make sure you have a valid database backup before
proceeding. The importance of this cannot be over-emphasised.

Double check you have the SQL to rebuild the object and its indexes
etc..

Double check that you have any diagnostic information if requested by


Oracle support. Once you proceed with dropping the object certain
information is destroyed so it is important to capture it now.

Now you can:

If 10231 was set at instance level:


Remove the 'event' line from the init.ora file

SHUTDOWN and RESTART the database.

SHOW PARAMETER EVENT


Make sure the 10231 event is no longer shown

RENAME or DROP the problem table


If you have space it is advisable to RENAME the
problem table rather than DROP it at this stage.

Recreate the table.


Eg: By importing.
Take special care to get the storage clauses
correct when recreating the table.

Create any indexes, triggers etc.. required


Again take care with storage clauses.

Re-grant any access to the table.

If you RENAMEd the original table you can drop it once


the new table has been tested.

Note 4: Analyze table validate structure:
=========================================

validate structure table:

ANALYZE TABLE CHARLIE.CUSTOMERS VALIDATE STRUCTURE;

validate structure index:

ANALYZE INDEX CHARLIE.PK_CUST VALIDATE STRUCTURE;

If no corrupt blocks are found, the output is simply "table analyzed".
If corrupt blocks are found, the generated trace file must be examined.

Note 5: DBVERIFY Utility:


=========================

From the OS prompt the dbv utility can be run to examine
a datafile.

$ dbv FILE=/u02/oracle/cc1/data01.dbf BLOCKSIZE=8192
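
dbv also accepts START and END block numbers and a LOGFILE parameter, which is
handy for re-checking just a suspect block range (the values below are only
examples):

$ dbv FILE=/u02/oracle/cc1/data01.dbf BLOCKSIZE=8192 START=82817 END=82849 LOGFILE=dbv.log
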

Note 6: DBMS_REPAIR package:


============================

The DBMS_REPAIR package is created by the dbmsrpr.sql script.

Step 1.

Via ANALYZE TABLE you have found that one or more blocks
of a table are corrupt.

Step 2.

First use DBMS_REPAIR.ADMIN_TABLES to create the REPAIR_TABLE.
This table will then contain information about the blocks, such as whether
they are marked as corrupt.

begin
  dbms_repair.admin_tables('REPAIR_TABLE', dbms_repair.repair_table,
                           dbms_repair.create_action, 'USERS');
end;
/

Step 3.

Now use the DBMS_REPAIR.CHECK_OBJECT procedure on the object
to fill the repair_table from step 2 with corruption data.

set serveroutput on
declare
  rpr_count int;
begin
  rpr_count := 0;
  dbms_repair.check_object(schema_name       => 'CHARLIE',
                           object_name       => 'CUSTOMERS',
                           repair_table_name => 'REPAIR_TABLE',
                           corrupt_count     => rpr_count);
  dbms_output.put_line('repair_block_count: '||to_char(rpr_count));
end;
/

Note 7:
=======

Tom,

If I have this information:


select * from V$DATABASE_BLOCK_CORRUPTION;

FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO


---------- ---------- ---------- ------------------ ---------
11 12357 12 197184960 LOGICAL
and
select * from v$backup_corruption;

RECID STAMP SET_STAMP SET_COUNT PIECE# FILE# BLOCK#


BLOCKS CORRUPTION_CHANGE# MAR CO
---------- ---------- ---------- ---------- ---------- ---------- ----------
---------- ------------
1 533835361 533835140 3089 1 11 12357
12 197184960 NO LOGICAL

How can I get more details of what data resides on this blocks? and being
'Logical' can they be recoverd without loosing that data at all?

Any extra details would be appreciated.

Thanks,

Orlando

Followup:
select * from dba_extents
where file_id = 11
and 12357 between block_id and block_id+blocks-1;

if it is something "rebuildable" -- like an index, drop and recreate might be


the path of least resistance, else you would go back to your backups -- to
before this was detected and restore that file/range of blocks (rman can do
block level recovery)

Tom

trace file generated by analyze contained


table scan: segment: file# 55 block# 229385
skipping corrupt block file# 55 block# 251372

This is repeated every day (analyzed each morning)


but daily direct export / import succeeds.

SQL> select segment_type from dba_extents


where file_id=55
and 229385 between block_id and
(block_id +( blocks -1));

SEGMENT_TYPE
----------------------------------------
TABLE

$ dbv file=/u03/oradata/emu/emu_data_large02.dbf \
blocksize=8192 logfile=/dbv.log

DBVERIFY: Release 8.1.7.2.0 - Production on Mon Aug 10 10:10:13 2004

(c) Copyright 2000 Oracle Corporation. All rights reserved.

DBVERIFY - Verification starting : FILE = /u03/oradata/emu/emu_data_large02.dbf


Block Checking: DBA = 230938092, Block Type = KTB-managed data block
Found block already marked corrupted

DBVERIFY - Verification complete

Total Pages Examined : 256000


Total Pages Processed (Data) : 253949
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index): 0
Total Pages Processed (Other): 11
Total Pages Empty : 2040
Total Pages Marked Corrupt : 0
Total Pages Influx : 0

Any thoughts ?

Thanks

Note 6:
-------

Detect And Correct Corruption


Oracle provides a number of methods to detect and repair corruption within
datafiles:

DBVerify
ANALYZE .. VALIDATE STRUCTURE
DB_BLOCK_CHECKING.
DBMS_REPAIR.
Other Repair Methods.

DBVerify
DBVerify is an external utility that allows validation of offline datafiles.
In addition to offline datafiles it can be used to check the validity of backup
datafiles:

C:>dbv file=C:\Oracle\oradata\TSH1\system01.dbf feedback=100 blocksize=4096

ANALYZE .. VALIDATE STRUCTURE


The ANALYZE command can be used to verify each data block in the analyzed object.
If any corruption is detected rows are added to the INVALID_ROWS table:

-- Create the INVALID_ROWS table.


SQL> @C:\Oracle\901\rdbms\admin\UTLVALID.SQL

-- Validate the table structure.


SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE;

-- Validate the table structure along with all it's indexes.


SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;

-- Validate the index structure.


SQL> ANALYZE INDEX scott.pk_emp VALIDATE STRUCTURE;

DB_BLOCK_CHECKING
When the DB_BLOCK_CHECKING parameter is set to TRUE Oracle performs a walk through
of the data
in the block to check it is self-consistent. Unfortunately block checking can add
between 1 and 10% overhead to the server. Oracle recommend setting this parameter
to TRUE
if the overhead is acceptable.

DBMS_REPAIR
Unlike the previous methods dicussed, the DBMS_REPAIR package allows you to detect
and
repair corruption. The process requires two administration tables to hold a list
of
corrupt blocks and index keys pointing to those blocks. These are created as
follows:

BEGIN
Dbms_Repair.Admin_Tables (
table_name => 'REPAIR_TABLE',
table_type => Dbms_Repair.Repair_Table,
action => Dbms_Repair.Create_Action,
tablespace => 'USERS');

Dbms_Repair.Admin_Tables (
table_name => 'ORPHAN_KEY_TABLE',
table_type => Dbms_Repair.Orphan_Table,
action => Dbms_Repair.Create_Action,
tablespace => 'USERS');
END;
/

With the administration tables built we are able to check the table of interest
using the
CHECK_OBJECT procedure:

SET SERVEROUTPUT ON
DECLARE
v_num_corrupt INT;
BEGIN
v_num_corrupt := 0;
Dbms_Repair.Check_Object (
schema_name => 'SCOTT',
object_name => 'DEPT',
repair_table_name => 'REPAIR_TABLE',
corrupt_count => v_num_corrupt);
Dbms_Output.Put_Line('number corrupt: ' || TO_CHAR (v_num_corrupt));
END;
/

Assuming the number of corrupt blocks is greater than 0, the CORRUPTION_DESCRIPTION
and the REPAIR_DESCRIPTION columns of the REPAIR_TABLE can be used to get more
information about the corruption.

At this point the corrupt blocks have been detected, but are not marked as
corrupt.
The FIX_CORRUPT_BLOCKS procedure can be used to mark the blocks as corrupt,
allowing them
to be skipped by DML once the table is in the correct mode:

SET SERVEROUTPUT ON
DECLARE
v_num_fix INT;
BEGIN
v_num_fix := 0;
Dbms_Repair.Fix_Corrupt_Blocks (
schema_name => 'SCOTT',
object_name=> 'DEPT',
object_type => Dbms_Repair.Table_Object,
repair_table_name => 'REPAIR_TABLE',
fix_count=> v_num_fix);
Dbms_Output.Put_Line('num fix: ' || to_char(v_num_fix));
END;
/

Once the corrupt table blocks have been located and marked all indexes must be
checked to see
if any of their key entries point to a corrupt block. This is done using the
DUMP_ORPHAN_KEYS procedure:

SET SERVEROUTPUT ON
DECLARE
v_num_orphans INT;
BEGIN
v_num_orphans := 0;
Dbms_Repair.Dump_Orphan_Keys (
schema_name => 'SCOTT',
object_name => 'PK_DEPT',
object_type => Dbms_Repair.Index_Object,
repair_table_name => 'REPAIR_TABLE',
orphan_table_name=> 'ORPHAN_KEY_TABLE',
key_count => v_num_orphans);
Dbms_Output.Put_Line('orphan key count: ' || to_char(v_num_orphans));
END;
/

If the orphan key count is greater than 0 the index should be rebuilt.

The process of marking the table block as corrupt automatically removes it from
the freelists.
This can prevent freelist access to all blocks following the corrupt block.
To correct this the freelists must be rebuilt using the REBUILD_FREELISTS
procedure:

BEGIN
Dbms_Repair.Rebuild_Freelists (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => Dbms_Repair.Table_Object);
END;
/

The final step in the process is to make sure all DML statements ignore the data
blocks
marked as corrupt. This is done using the SKIP_CORRUPT_BLOCKS procedure:

BEGIN
Dbms_Repair.Skip_Corrupt_Blocks (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => Dbms_Repair.Table_Object,
flags => Dbms_Repair.Skip_Flag);
END;
/

The SKIP_CORRUPT column in the DBA_TABLES view indicates if this action has been
successful.
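
For example:

SELECT owner, table_name, skip_corrupt
FROM dba_tables
WHERE owner = 'SCOTT' AND table_name = 'DEPT';
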

At this point the table can be used again but you will have to take steps to
correct any data
loss associated with the missing blocks.

Other Repair Methods


Other methods to repair corruption include:

Full database recovery.


Individual datafile recovery.
Block media recovery (BMR), available in Oracle9i when using RMAN.
Recreate the table using the CREATE TABLE .. AS SELECT command, taking care to
avoid the
corrupt blocks by retricting the where clause of the query.
Drop the table and restore it from a previous export. This may require some manual
effort
to replace missing data.
Hope this helps. Regards Tim...

Note 7:
-------
If you know the file number and the block number indicating the corruption, you
can salvage
the data in the corrupt table by selecting around the bad blocks.

Set event 10231 in the init.ora file to cause Oracle to skip software- and media-
corrupted blocks when performing full table scans:

Event="10231 trace name context forever, level 10"

Set event 10233 in the init.ora file to cause Oracle to skip software- and media-
corrupted blocks when performing index range scans:

Event="10233 trace name context forever, level 10"

Note 8:
-------

Detecting and reporting data block corruption using the DBMS_REPAIR package:

Note that this event can only be used if the block "wrapper" is marked
corrupt.

Eg: If the block reports ORA-1578.

1. Create DBMS_REPAIR administration tables:

To create the repair tables, run the procedure below.

SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');

Note that table names are prefixed with 'REPAIR_' or 'ORPHAN_'. If the second
variable is 1, it will create REPAIR_ key tables; if it is 2, then it will
create ORPHAN_ key tables.

If the third variable is

1 then the package performs 'create' operations.
2 then the package performs 'delete' operations.
3 then the package performs 'drop' operations.

2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT


procedure:

In the following example we check the table employee (EMP), which belongs to
the schema TEST, for possible corruptions.
Let's assume that we have created our administration table called REPAIR_ADMIN
in schema SYS.

To check the table block corruption use the following procedure:

SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.CHECK_OBJECT ('TEST', 'EMP', NULL,
     1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
SQL> PRINT A;

To check which block is corrupted, check in the REPAIR_ADMIN table.


SQL> SELECT * FROM REPAIR_ADMIN;

3. Fixing corrupt blocks using the DBMS_REPAIR.FIX_CORRUPT_BLOCKS procedure:

SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS ('TEST', 'EMP', NULL,
     1, 'REPAIR_ADMIN', NULL, :A);
SQL> SELECT MARKED_CORRUPT FROM REPAIR_ADMIN;

If you select from the EMP table now, you still get the error ORA-1578.

4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:

SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS ('TEST', 'EMP', 1, 1);

Note the trade-off of running the DBMS_REPAIR tool: you have lost some data.
One main advantage of this tool is that you can retrieve the data past the
corrupted block; however, the data in the corrupted block itself is lost.

5. This procedure is useful in identifying orphan keys in indexes that are


pointing to corrupt rows of the table:

SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS ('TEST', 'IDX_EMP', NULL,
     2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);

If you see any records in the ORPHAN_ADMIN table you have to drop and re-create
the index to avoid any inconsistencies
in your queries.

6. The last thing you need to do while using the DBMS_REPAIR package is to run the

DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in


the data dictionary views.

SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS ('TEST', 'EMP', NULL, 1);

NOTE

Setting events 10210, 10211, 10212, and 10225 can be done by adding the following
line for each event
in the init.ora file:

Event = "event_number trace name errorstack forever, level 10"

- When event 10210 is set, the data blocks are checked for corruption by checking
their integrity.
Data blocks that don't match the format are marked as soft corrupt.

- When event 10211 is set, the index blocks are checked for corruption by checking
their integrity.
Index blocks that don't match the format are marked as soft corrupt.

- When event 10212 is set, the cluster blocks are checked for corruption by
checking their integrity.
Cluster blocks that don't match the format are marked as soft corrupt.
- When event 10225 is set, the fet$ and uset$ dictionary tables are checked for
corruption
by checking their integrity. Blocks that don't match the format are marked as
soft corrupt.

- Set event 10231 in the init.ora file to cause Oracle to skip software- and
media-corrupted blocks
when performing full table scans:

Event="10231 trace name context forever, level 10"

- Set event 10233 in the init.ora file to cause Oracle to skip software- and
media-corrupted blocks
when performing index range scans:

Event="10233 trace name context forever, level 10"

To dump the Oracle block you can use below command from 8.x on words:

SQL> ALTER SYSTEM DUMP DATAFILE 11 block 9;


This command dumps datablock 9 in datafile11, into USER_DUMP_DEST directory.

Dumping Redo Logs file blocks:

SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';

Rollback segments block corruption, it will cause problems (ORA-1578) while


starting up the database.
With support of oracle, can use below under source parameter to startup the
database.

_CORRUPTED_ROLLBACK_SEGMENTS=(RBS_1, RBS_2)

DB_BLOCK_COMPUTE_CHECKSUM

This parameter is normally used to debug corruptions that happen on disk.

The following V$ views contain information about blocks marked logically corrupt:

V$ BACKUP_CORRUPTION, V$COPY_CORRUPTION

When this parameter is set, while reading a block from disk into the cache,
Oracle will compute the checksum
again and compare it with the value that is in the block.

If they differ, it indicates that the block is corrupted on disk. Oracle marks the
block as corrupt and
signals an error. There is an overhead involved in setting this parameter.

DB_BLOCK_CACHE_PROTECT=TRUE

Oracle will catch stray writes made by processes in the buffer cache.

Oracle 9i new RMAN features:

Obtain the datafile numbers and block numbers for the corrupted blocks. Typically,
you obtain this output
from the standard output, the alert.log, trace files, or a media management
interface.
For example, you may see the following in a trace file:

ORA-01578: ORACLE data block corrupted (file # 9, block # 13)


ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'

$rman target =rman/rman@rmanprod


RMAN> run {
2> allocate channel ch1 type disk;
3> blockrecover datafile 9 block 13 datafile 2 block 19;
4> }

Recovering Data blocks Using Selected Backups:

# restore from backupset


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;

# restore from datafile image copy


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;

# restore from backupset with tag "mondayAM"


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM TAG = mondayAM;

# restore using backups made before one week ago


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
UNTIL 'SYSDATE-7';

# restore using backups made before SCN 100


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;

# restore using backups made before log sequence 7024


BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
UNTIL SEQUENCE 7024;
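
The corruption list in V$DATABASE_BLOCK_CORRUPTION can also be repaired in one
go; a sketch, assuming the view was populated by a previous backup or
BACKUP VALIDATE:

RMAN> BACKUP VALIDATE DATABASE;
RMAN> BLOCKRECOVER CORRUPTION LIST;
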

Note 9:
=======

Displayed below are the messages of the selected thread.

Thread Status: Closed

From: nitinpawar@birlasunlife.com 23-Feb-05 11:51


Subject: ORA-01578 on system datafile

RDBMS Version: Oracle9i Enterprise Edition Release 9.2.0.1.0


Operating System and Version: Windows 2000
Error Number (if applicable): ORA-01578
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

ORA-01578 on system datafile

A data block in SYSTEM tablespace datafile is corrupted.


The error has been occurring for the past 7 months. I noticed it recently when I took
over the support.
The database is in archivelog mode. We don't have any old hot backups of the
database files.
Both export and alert log indicate corrupt block to be # 7873, but dbverify
declares block #7875 to be corrupt.
It seems there is no object using the block.

Following is the extract from the alert log.

***
Corrupt block relative dba: 0x00401ec1 (file 1, block 7873)
Fractured block found during buffer read
Data in bad block -
type: 16 format: 2 rdba: 0x00401ec1
last change scn: 0x0000.00007389 seq: 0x1 flg: 0x04
consistency value in tail: 0x23430601
check value in block header: 0x5684, computed block checksum: 0x396b
spare1: 0x0, spare2: 0x0, spare3: 0x0
***
Reread of rdba: 0x00401ec1 (file 1, block 7873) found same corrupted data

From: Oracle, Fahad Abdul Rahman 25-Feb-05 08:18


Subject: Re : ORA-01578 on system datafile

Nitin,
I would suggest relocating the system datafiles to a new location on disk and seeing
if the corruption is removed. If the issue still persists, then I would suggest
logging a TAR with Oracle Support for further research.
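
To check, as the poster wondered, whether any segment actually uses the reported block,
a query against DBA_EXTENTS can be used (file and block numbers taken from the alert
log above):

SELECT owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = 1
AND 7873 BETWEEN block_id AND block_id + blocks - 1;

If this returns no rows, the corrupt block does not currently belong to any segment.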

========================
32. iSQL*Plus and EM 10g:
========================

32.1 iSQL*Plus:
===============

Note 1:
-------

How to start iSQL*Plus:
-----------------------

lsnrctl start
emctl start dbconsole
isqlplusctl start

http://localhost:5561/isqlplus/

Note 2:
-------
Doc ID: Note:281946.1 Content Type: TEXT/X-HTML
Subject: How to Verify that iSQL*Plus 10i is Running and How to Restart the
Processes? Creation Date: 31-AUG-2004
Type: HOWTO Last Revision Date: 06-APR-2005
Status: PUBLISHED
The information in this document applies to:
SQL*Plus - Version: 10.1.0
Information in this document applies to any platform.
Goal
How to verify that iSQL*Plus 10i is running, and how to restart the processes?

Fix
How to Verify that iSQL*Plus is running?
=======================================
UNIX Platform
-------------------
Check whether the iSQL*Plus process is running by entering the following command:

ps -eaf |grep java


The iSQL*Plus process looks something like the following:
oracle 18488 1 0 16:01:30 pts/8 0:36 $ORACLE_HOME/jdk/bin/java -Djava.awt.headless=true
-Doracle.oc4j.localhome=/ora

Windows Platform
--------------------------
Check whether the iSQL*Plus process is running by opening the Windows services
dialog from the Control Panel and checking
the status of the iSQL*Plus service.
The iSQL*Plus service will be called "OracleOracle_Home_NameiSQL*Plus".

How to Start and Stop iSQL*Plus?
================================
UNIX Platform
--------------------
To start iSQL*Plus, enter the command:
$ORACLE_HOME/bin/isqlplusctl start

To stop iSQL*Plus, enter the command:


$ORACLE_HOME/bin/isqlplusctl stop

Windows Platform
--------------------------
Use the Windows service to start and stop iSQL*Plus.
The service is set to start automatically on installation and when the operating
system is started.
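
If you are not sure which port iSQL*Plus listens on: a standard 10g installation
records the assigned ports in a small text file (location per a default install;
verify on your system):

cat $ORACLE_HOME/install/portlist.ini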

Note 3:
-------

Doc ID: Note:281847.1 Content Type: TEXT/X-HTML


Subject: How do I configure or test iSQL*Plus 10i? Creation Date: 30-AUG-2004
Type: HOWTO Last Revision Date: 25-MAR-2005
Status: PUBLISHED
The information in this document applies to:
SQL*Plus - Version: 10.1.0.0 to 10.1.0
Information in this document applies to any platform.
Goal
How do I configure or test iSQL*Plus after the install of Oracle Enterprise
Edition 10i?
Fix
iSQL*Plus 10.x is automatically installed and configured with Enterprise Edition
10i.
At the end of the installation process a file called
$ORACLE_HOME/install/readme.txt has the information needed to configure or test
iSQL*Plus:
readme.txt example:
----------------
The following J2EE Applications have been deployed and are accessible at the URLs
listed below.
Your database configuration files have been installed in $ORACLE_HOME while other
components selected for installation have been installed in $ORACLE_HOME\Db_1. Be
cautious not to accidentally delete these configuration files.
Ultra Search URL:
http://<your host name>:5620/ultrasearch
Ultra Search Administration Tool URL:
http://<your host name>:5620/ultrasearch/admin
iSQL*Plus URL:
http://<your host name>:5560/isqlplus
Enterprise Manager 10g Database Control URL:
http://<your host name>:5500/em
----------------
The URL for your iSQL*Plus server is:

http://<your host name>:port/isqlplus

http://<your host name>:port/isqlplus/dba

The port number is likely to be 5560.

If this URL does not display the iSQL*Plus log in page, check that iSQL*Plus has
been started
For more additional information about iSQL*Plus please check the following
Metalink notes:
Note 281947.1 How to Troubleshoot iSQLPlus 10i when it is not Starting on Unix?
Note 281946.1 How to Verify that iSQLPlus 10i is Running and How to Restart the Processes?
Note 283114.1 How to connect as sysdba/sysoper through iSQL*Plus in Oracle 10g

Note 4:
-------

Doc ID: Note:283114.1 Content Type: TEXT/X-HTML


Subject: How to connect as sysdba/sysoper through iSQL*Plus in Oracle 10g
Creation Date: 16-SEP-2004
Type: HOWTO Last Revision Date: 12-JAN-2005
Status: MODERATED
This document is being delivered to you via Oracle Support's Rapid Visibility
(RaV) process, and therefore has not been subject to an independent technical
review.
The information in this document applies to:
SQL*Plus - Version: 10.0.1
Information in this document applies to any platform.
Goal
Enabling iSQL*Plus DBA Access.
Fix
In order to connect as SYSDBA through iSQL*Plus you will have to use the iSQL*Plus DBA
URL. Given below is a sample DBA URL in iSQL*Plus.

http://Hostname:Port/isqlplus/dba

Enabling iSQL*Plus DBA Access
=============================

To access the iSQL*Plus DBA URL, you must set up the OC4J user manager. You can
set up OC4J to use:

The XML-based provider type, jazn-data.xml

The LDAP-based provider type, Oracle Internet Directory

This document discusses how to set up the iSQL*Plus DBA URL to use the XML-based
provider. For information on how to set up the LDAP-based provider, see the
Oracle9iAS Containers for J2EE documentation.

To set up the iSQL*Plus DBA URL
===============================

1. Create users for the iSQL*Plus DBA URL.

2. Grant the webDba role to users.

3. Test iSQL*Plus DBA Access

The Oracle JAAS Provider, otherwise known as JAZN (Java AuthoriZatioN), is


Oracle's implementation of the Java Authentication and Authorization Service
(JAAS). Oracle's JAAS Provider is referred to as JAZN in the remainder of this
document. See the Oracle9iAS Containers for J2EE documentation for more
information about JAZN, the Oracle JAAS Provider.

Create and Manage Users for the iSQL*Plus DBA URL
=================================================

The actions available to manage users for the iSQL*Plus DBA URL are:

1. Create users

2. List users
3. Grant the webDba role

4. Remove users

5. Revoke the webDba role

6. Change user passwords

You perform these actions from the
$ORACLE_HOME/oc4j/j2ee/isqlplus/application-deployments/isqlplus directory.

$JAVA_HOME is the location of your JDK (1.4 or above). It should be set to
$ORACLE_HOME/jdk, but you may use another JDK.

admin_password is the password for the iSQL*Plus DBA realm administrator user,
admin. The password for the admin user is set to 'welcome' by default. You should
change this password as soon as possible.

A JAZN shell option, and a command line option are given for all steps.

To start the JAZN shell, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -shell
To exit the JAZN shell, enter:

EXIT
Create Users
You can create multiple users who have access to the iSQL*Plus DBA URL. To create
a user from the JAZN shell, enter:

JAZN> adduser "iSQL*Plus DBA" username password


To create a user from the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -adduser "iSQL*Plus DBA" username password
username and password are the username and password used to log into the iSQL*Plus
DBA URL.

To create multiple users, repeat the above command for each user.

List Users
You can confirm that users have been created and added to the iSQL*Plus DBA realm.
To confirm the creation of a user using the JAZN shell, enter:

JAZN> listusers "iSQL*Plus DBA"


To confirm the creation of a user using the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -listusers "iSQL*Plus DBA"
The usernames you created are displayed.

Grant Users the webDba Role


Each user you created above must be granted access to the webDba role. To grant a
user access to the webDba role from the JAZN shell, enter:

JAZN> grantrole webDba "iSQL*Plus DBA" username


To grant a user access to the webDba role from the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -grantrole webDba "iSQL*Plus DBA" username
Remove Users
To remove a user using the JAZN shell, enter:

JAZN> remuser "iSQL*Plus DBA" username


To remove a user using the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -remuser "iSQL*Plus DBA" username
Revoke the webDba Role
To revoke a user's webDba role from the JAZN shell, enter:

JAZN> revokerole webDba "iSQL*Plus DBA" username


To revoke a user's webDba role from the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -revokerole webDba "iSQL*Plus DBA" username
Change User Passwords
To change a user's password from the JAZN shell, enter:

JAZN> setpasswd "iSQL*Plus DBA" username old_password new_password


To change a user's password from the command-line, enter:

$JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider
-jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password
admin_password -setpasswd "iSQL*Plus DBA" username old_password new_password
Test iSQL*Plus DBA Access
Test iSQL*Plus DBA access by entering the iSQL*Plus DBA URL in your web browser:

" target=_blankhttp://machine_name.domain:5560/isqlplus/dba
A dialog is displayed requesting authentication for the iSQL*Plus DBA URL. Log in
as the user you created above. You may need to restart iSQL*Plus for the changes
to take effect.


What is a wire protocol ODBC driver?
====================================
A DBMS is written using an application programming interface (API), which is
specific to that database.
For example, an Oracle 9i database has its own version of the API specification
(called Net9),
which must run on each client application.

Developers write applications compliant to the ODBC specification and use ODBC
drivers to access the database.
The ODBC driver communicates with the vendor's native API. Then, the native API
passes instructions
to another vendor-specific low-level API. Finally the wire protocol API
communicates with the database.

The wire protocol architecture eliminates the need for the database's native API
(for example, Net9),
so the driver communicates directly to the database through the database's own
wire level protocol.
This effectively removes an entire communication layer.

================================
33. ADDM and other 10g features:
================================

=========================
33.1 Flash_recovery_area:
=========================

Note 1:
-------

A flash recovery area is a directory, file system, or Automatic Storage Management disk group
that serves as the default storage area for files related to recovery. Such files include:

- Multiplexed copies of the control file and online redo logs
- Archived redo logs and flashback logs
- RMAN backups
- Files created by RESTORE and RECOVER commands

Recovery components of the database interact with the flash recovery area to
ensure that the database
is completely recoverable using files in the flash recovery area. The database
manages the disk space
in the flash recovery area, and when there is not sufficient disk space to create
new files,
the database creates more room automatically by deleting the minimum set of files
from flash recovery area
that are obsolete, backed up to tertiary storage, or redundant.
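
A quick way to see how full the flash recovery area is, and how much of it Oracle could
reclaim automatically, is a query like the following (10g view):

SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM v$recovery_file_dest;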

Note 2:
-------

Before any Flash Backup and Recovery activity can take place, the Flash Recovery
Area must be set up.
The Flash Recovery Area is a specific area of disk storage that is set aside
exclusively for retention
of backup components such as datafile image copies, archived redo logs, and
control file autobackup copies.
These features include:

Unified Backup Files Storage. All backup components can be stored in one
consolidated spot.
The Flash Recovery Area is managed via Oracle Managed Files (OMF), and it can
utilize disk
resources managed by Oracle Automated Storage Management (ASM). In addition, the
Flash Recovery Area
can be configured for use by multiple database instances if so desired.

Automated Disk-Based Backup and Recovery. Once the Flash Recovery Area is
configured, all backup components
(datafile image copies, archived redo logs, and so on) are managed automatically
by Oracle.

Automatic Deletion of Backup Components. Once backup components have been successfully created,
RMAN can be configured to automatically clean up files that are no longer needed
(thus reducing
risk of insufficient disk space for backups).

Disk Cache for Tape Copies. Finally, if your disaster recovery plan involves
backing up to alternate media,
the Flash Recovery Area can act as a disk cache area for those backup components
that are
eventually copied to tape.

Flashback Logs. The Flash Recovery Area is also used to store and manage flashback
logs, which are used
during Flashback Backup operations to quickly restore a database to a prior
desired state.

Sizing the Flash Recovery Area. Oracle recommends that the Flash Recovery Area
should be sized large enough
to include all files required for backup and recovery. However, if insufficient
disk space is available,
Oracle recommends that it be sized at least large enough to contain any archived
redo logs that have not yet
been backed up to alternate media.

initialization parameters:

DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files that can be
stored in the Flash Recovery Area.
Note that Oracle recommends setting this value first.

DB_RECOVERY_FILE_DEST specifies the physical disk location where the Flash Recovery Area
will be stored. Oracle recommends that this be a separate location from the database's datafiles,
control files, and redo logs. Also, note that if the database is using Oracle's
new
Automatic Storage Management (ASM) feature, then the shared disk area that ASM
manages can be targeted
for the Flashback Recovery Area.

Examples:

-----
-- Listing 2.2: Setting up the Flash Recovery Area - open database
-----

-- Be sure to set DB_RECOVERY_FILE_DEST_SIZE first ...
ALTER SYSTEM SET db_recovery_file_dest_size = '5G' SCOPE=BOTH SID='*';
-- ... and then set DB_RECOVERY_FILE_DEST and DB_FLASHBACK_RETENTION_TARGET
ALTER SYSTEM SET db_recovery_file_dest = 'c:\oracle\fbrdata\zdcdb' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_flashback_retention_target = 2880;

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/toc.htm

Note 3:
-------

Flashback Database Demo

An alternative strategy to the demo presented here is to use Recovery Manager:

RMAN> FLASHBACK DATABASE TO SCN = <system_change_number>;

Dependent objects:
GV_$FLASHBACK_DATABASE_LOG      V_$FLASHBACK_DATABASE_LOG
GV_$FLASHBACK_DATABASE_LOGFILE  V_$FLASHBACK_DATABASE_LOGFILE
GV_$FLASHBACK_DATABASE_STAT     V_$FLASHBACK_DATABASE_STAT

Syntax 1 (SCN):
FLASHBACK [STANDBY] DATABASE [<database_name>] TO [BEFORE] SCN <system_change_number>

Syntax 2 (TIMESTAMP):
FLASHBACK [STANDBY] DATABASE [<database_name>] TO [BEFORE] TIMESTAMP <system_timestamp_value>

Syntax 3 (RESTORE POINT):
FLASHBACK [STANDBY] DATABASE [<database_name>] TO [BEFORE] RESTORE POINT <restore_point_name>

Flashback Syntax Elements

Turn flashback off:
  ALTER DATABASE FLASHBACK OFF;
Turn flashback on:
  ALTER DATABASE FLASHBACK ON;
Set the retention target (in minutes):
  ALTER SYSTEM SET db_flashback_retention_target = <number_of_minutes>;
  e.g. alter system set DB_FLASHBACK_RETENTION_TARGET = 2880;
Start flashback on a tablespace:
  ALTER TABLESPACE <tablespace_name> FLASHBACK ON;
  e.g. alter tablespace example flashback on;
Stop flashback on a tablespace:
  ALTER TABLESPACE <tablespace_name> FLASHBACK OFF;
  e.g. alter tablespace example flashback off;
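
Restore points (10g Release 2) give the same flashback operation a friendlier name than
an SCN or timestamp. A minimal sketch (the restore point name is just an example):

CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- later, after shutdown immediate / startup mount exclusive:
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;

DROP RESTORE POINT before_upgrade;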

Initialization Parameters

-- Setting the location of the flashback recovery area:
db_recovery_file_dest=/oracle/flash_recovery_area
-- Setting the size of the flashback recovery area:
db_recovery_file_dest_size=2147483648
-- Setting the retention time for flashback files (in minutes; 2 days):
db_flashback_retention_target=2880

Demo
conn / as sysdba

SELECT flashback_on, log_mode
FROM gv$database;

set linesize 121
col name format a30
col value format a30

SELECT name, value
FROM gv$parameter
WHERE name LIKE '%flashback%';

shutdown immediate;

startup mount exclusive;

alter database archivelog;

alter database flashback on;

alter database open;

SELECT flashback_on, log_mode
FROM gv$database;

SELECT name, value
FROM gv$parameter
WHERE name LIKE '%flashback%';

-- 2 days
alter system set DB_FLASHBACK_RETENTION_TARGET=2880;

SELECT name, value
FROM gv$parameter
WHERE name LIKE '%flashback%';

SELECT estimated_flashback_size
FROM gv$flashback_database_log;

As SYS
As UWCLASS

SELECT current_scn
FROM gv$database;

SELECT oldest_flashback_scn,
oldest_flashback_time
FROM gv$flashback_database_log;
create table t (
mycol VARCHAR2(20))
ROWDEPENDENCIES;

INSERT INTO t VALUES ('ABC');


INSERT INTO t VALUES ('DEF');

COMMIT;

INSERT INTO t VALUES ('GHI');

COMMIT;

SELECT ora_rowscn, mycol FROM t;


SHUTDOWN immediate;

startup mount exclusive;

FLASHBACK DATABASE TO SCN 19513917;

/*
FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);

FLASHBACK DATABASE TO TIMESTAMP timestamp'2002-11-05 14:00:00';

FLASHBACK DATABASE
TO TIMESTAMP to_timestamp('2002-11-11 16:00:00', 'YYYY-MM-DD HH24:MI:SS');
*/

-- after FLASHBACK DATABASE the database must be opened with RESETLOGS;
-- a plain OPEN will fail:
alter database open;

alter database open resetlogs;

conn uwclass/uwclass

SELECT ora_rowscn, mycol FROM t;


SELECT *
FROM gv$flashback_database_stat;

alter system switch logfile;

shutdown immediate;

startup mount exclusive;

alter database flashback off;

alter database noarchivelog;

alter database open;

SELECT flashback_on, log_mode
FROM gv$database;
host

rman target sys/pwd@orabase

RMAN> crosscheck archivelog all;

RMAN> delete archivelog all;

RMAN> list archivelog all;


-- if out of disk space
ORA-16014: log 2 sequence# 4163 not archived, no available destinations
ORA-00312: online log 2 thread 1: 'c:\oracle\oradata\orabase\redo02.log'

-- what happens
The error ora-16014 is the real clue for this problem. Once the archive
destination becomes full the location also becomes invalid. Normally Oracle does
not do a recheck to see if space has been made available.

-- then
shutdown abort;

-- clean up disk space: then

startup

alter system archive log all to '/oracle/flash_recovery_area/ORABASE/ARCHIVELOG';

==========
33.2 ADDM:
==========

Note 1:
=======

Doc ID: Note:250655.1 Content Type: TEXT/PLAIN


Subject: How to use the Automatic Database Diagnostic Monitor Creation Date:
09-OCT-2003
Type: BULLETIN Last Revision Date: 10-JUN-2004
Status: PUBLISHED
PURPOSE
-------

The purpose of this article is to give an introduction on how to use the
Automatic Database Diagnostic Monitor feature. The ADDM consists of
functionality built into the Oracle kernel to assist in making tuning an
Oracle instance less elaborate.

SCOPE & APPLICATION
-------------------

Audience : Oracle developers and DBAs


Use : Using the Automatic Database Diagnostic Monitor feature
as a first step in the creation of an autotunable
database
Level of detail : medium
Limitation on use: none

USING THE AUTOMATIC DATABASE DIAGNOSTIC MONITOR
-----------------------------------------------
Introduction:
-------------

The Automatic Database Diagnostic Monitor (hereafter called ADDM) is an
integral part of the Oracle RDBMS capable of gathering performance
statistics and advising on changes to solve any existing performance issues
measured.

For this it uses the Automatic Workload Repository (hereafter called AWR),
a repository defined in the database to store database-wide usage statistics
at fixed intervals (60 minutes).

To make use of ADDM, a PL/SQL interface called DBMS_ADVISOR has been
implemented. This PL/SQL interface may be called through the supplied
$ORACLE_HOME/rdbms/admin/addmrpt.sql script, called directly, or used in
combination with the Oracle Enterprise Manager application. Besides this
PL/SQL package, a number of views (with names starting with the DBA_ADVISOR_
prefix) allow retrieval of the results of any actions performed with the
DBMS_ADVISOR API. The preferred way of accessing ADDM is through the
Enterprise Manager interface, as it shows a complete performance overview
including recommendations on how to solve bottlenecks on a single screen.
When accessing ADDM manually, you should consider using the ADDMRPT.SQL
script provided with your Oracle release, as it hides the complexities
involved in accessing the DBMS_ADVISOR package.

To use ADDM for advising on how to tune the instance and SQL, you need to
make sure that the AWR has been populated with at least 2 sets of
performance data. When the STATISTICS_LEVEL is set to TYPICAL or ALL
the database will automatically schedule the AWR
to be populated at 60 minute intervals.

When you wish to create performance snapshots outside of the fixed
intervals, then you can use the DBMS_WORKLOAD_REPOSITORY package for this,
like in:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT('TYPICAL');
END;
/

The snapshots need be created before and after the action you wish to
examine. E.g. when examining a bad performing query, you need to have
performance data snapshots from the timestamps before the query was started
and after the query finished.

You may also change the frequency of the snapshots and the duration for which
they are saved in the AWR. Use the DBMS_WORKLOAD_REPOSITORY package as in the
following example:

execute
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval=>60,retention=>43200);
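
The snapshot settings currently in effect can be verified afterwards (10g dictionary view):

SELECT snap_interval, retention
FROM dba_hist_wr_control;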

Example:
--------

You can use ADDM through the PL/SQL API and query the various advisory views
in SQL*Plus to examine how to solve performance issues.
The example is based on the SCOTT account executing the various tasks. To
allow SCOTT to both generate AWR snapshots and submit ADDM recommendation
jobs, he needs to be granted proper access:
CONNECT / AS SYSDBA
GRANT ADVISOR TO scott;
GRANT SELECT_CATALOG_ROLE TO scott;
GRANT EXECUTE ON dbms_workload_repository TO scott;

Furthermore, the buffer cache size (DB_CACHE_SIZE) has been reduced to 24M.

The example presented makes use of a table called BIGEMP, residing in the
SCOTT schema. The table (containing about 14 million rows) has been created
with:
CONNECT scott/tiger
CREATE TABLE bigemp AS SELECT * FROM emp;
ALTER TABLE bigemp MODIFY (empno NUMBER);
DECLARE
n NUMBER;
BEGIN
FOR n IN 1..18
LOOP
INSERT INTO bigemp SELECT * FROM bigemp;
END LOOP;
COMMIT;
END;
/
UPDATE bigemp SET empno = ROWNUM;
COMMIT;

The next step is to generate a performance data snapshot:


EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');

Execute a query on the BIGEMP table to generate some load:


SELECT * FROM bigemp WHERE deptno = 10;

After this, generate a second performance snapshot:


EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');

The easiest way to get the ADDM report is by executing:


@?/rdbms/admin/addmrpt

Running this script will show which snapshots have been generated, asks for
the snapshot IDs to be used for generating the report, and will generate the
report containing the ADDM findings.

When you do not want to use the script, you need to submit and execute the
ADDM task manually. First, query DBA_HIST_SNAPSHOT to see which snapshots
have been created. These snapshots will be used by ADDM to generate
recommendations:
SELECT * FROM dba_hist_snapshot ORDER BY snap_id;

SNAP_ID DBID INSTANCE_NUMBER
---------- ---------- ---------------
STARTUP_TIME
-----------------------------------------------------------------------
BEGIN_INTERVAL_TIME
-----------------------------------------------------------------------
END_INTERVAL_TIME
-----------------------------------------------------------------------
FLUSH_ELAPSED
-----------------------------------------------------------------------
SNAP_LEVEL ERROR_COUNT
---------- -----------
1 494687018 1
17-NOV-03 09.39.17.000 AM
17-NOV-03 09.39.17.000 AM
17-NOV-03 09.50.21.389 AM
+00000 00:00:06.6
1 0
2 494687018 1
17-NOV-03 09.39.17.000 AM
17-NOV-03 09.50.21.389 AM
17-NOV-03 10.29.35.704 AM
+00000 00:00:02.3
1 0
3 494687018 1
17-NOV-03 09.39.17.000 AM
17-NOV-03 10.29.35.704 AM
17-NOV-03 10.35.46.878 AM
+00000 00:00:02.1
1 0

Mark the 2 snapshot IDs (such as the lowest and highest ones) for use in
generating recommendations.

Next, you need to submit and execute the ADDM task manually, using a script
similar to:
DECLARE
task_name VARCHAR2(30) := 'SCOTT_ADDM';
task_desc VARCHAR2(30) := 'ADDM Feature Test';
task_id NUMBER;
BEGIN
(1) dbms_advisor.create_task('ADDM', task_id, task_name, task_desc,
null);
(2) dbms_advisor.set_task_parameter('SCOTT_ADDM', 'START_SNAPSHOT', 1);
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'END_SNAPSHOT', 3);
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'INSTANCE', 1);
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'DB_ID', 494687018);
(3) dbms_advisor.execute_task('SCOTT_ADDM');
END;
/

Here is the explanation of the steps you need to take to successfully
execute an ADDM job:
1) The first step is to create the task. For this, you need to specify the
name under which the task will be known in the ADDM task system. Along
with the name you can provide a more readable description on what the job
should do. The task type must be 'ADDM' in order to have it executed in
the ADDM environment.
2) After having defined the ADDM task, you must define the boundaries within
which the task needs to be executed. For this you need to set the
starting and ending snapshot IDs, instance ID (especially necessary when
running in a RAC environment), and database ID for the newly created job.
3) Finally, the task must be executed.
When querying DBA_ADVISOR_TASKS you see the just created job:
SELECT * FROM dba_advisor_tasks;

OWNER TASK_ID TASK_NAME
------------------------------ ---------- ------------------------------
DESCRIPTION
------------------------------------------------------------------------
ADVISOR_NAME CREATED LAST_MODI PARENT_TASK_ID
------------------------------ --------- --------- --------------
PARENT_REC_ID READ_
------------- -----
SCOTT 5 SCOTT_ADDM
ADDM Feature Test
ADDM 17-NOV-03 17-NOV-03 0
0 FALSE

When the job has successfully completed, examine the recommendations made by
ADDM by calling the DBMS_ADVISOR.GET_TASK_REPORT() routine, like in:
SET LONG 1000000 PAGESIZE 0 LONGCHUNKSIZE 1000
COLUMN get_clob FORMAT a80
SELECT dbms_advisor.get_task_report('SCOTT_ADDM', 'TEXT', 'TYPICAL')
FROM sys.dual;

The recommendations supplied should be sufficient to investigate the
performance issue, as in:

DETAILED ADDM REPORT FOR TASK 'SCOTT_ADDM' WITH ID 5
----------------------------------------------------

Analysis Period: 17-NOV-2003 from 09:50:21 to 10:35:47
Database ID/Instance: 494687018/1
Snapshot Range: from 1 to 3
Database Time: 4215 seconds
Average Database Load: 1.5 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 65% impact (2734 seconds)


------------------------------------
PL/SQL execution consumed significant database time.

RECOMMENDATION 1: SQL Tuning, 65% benefit (2734 seconds)


ACTION: Tune the PL/SQL block with SQL_ID fjxa1vp3yhtmr. Refer to
the "Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL
User's Guide and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID fjxa1vp3yhtmr
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;

FINDING 2: 35% impact (1456 seconds)


------------------------------------
SQL statements consuming significant database time were found.

RECOMMENDATION 1: SQL Tuning, 35% benefit (1456 seconds)


ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
gt9ahqgd5fmm2.
RELEVANT OBJECT: SQL statement with SQL_ID gt9ahqgd5fmm2 and
PLAN_HASH 547793521
UPDATE bigemp SET empno = ROWNUM

FINDING 3: 20% impact (836 seconds)


-----------------------------------
The throughput of the I/O subsystem was significantly lower than expected.

RECOMMENDATION 1: Host Configuration, 20% benefit (836 seconds)


ACTION: Consider increasing the throughput of the I/O subsystem.
Oracle's recommended solution is to stripe all data file using
the SAME methodology. You might also need to increase the
number of disks for better performance.

RECOMMENDATION 2: Host Configuration, 14% benefit (584 seconds)


ACTION: The performance of file
D:\ORACLE\ORADATA\V1010\UNDOTBS01.DBF was significantly worse
than other files. If striping all files using the SAME
methodology is not possible, consider striping this file over
multiple disks.
RELEVANT OBJECT: database file
"D:\ORACLE\ORADATA\V1010\UNDOTBS01.DBF"

SYMPTOMS THAT LED TO THE FINDING:


Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])

FINDING 4: 11% impact (447 seconds)


-----------------------------------
Undo I/O was a significant portion (33%) of the total database I/O.

NO RECOMMENDATIONS AVAILABLE

SYMPTOMS THAT LED TO THE FINDING:


The throughput of the I/O subsystem was significantly lower than
expected. (20% impact [836 seconds])
Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])

FINDING 5: 9.9% impact (416 seconds)


------------------------------------
Buffer cache writes due to small log files were consuming significant
database time.

RECOMMENDATION 1: DB Configuration, 9.9% benefit (416 seconds)


ACTION: Increase the size of the log files to 796 M to hold at
least 20 minutes of redo information.

SYMPTOMS THAT LED TO THE FINDING:


The throughput of the I/O subsystem was significantly lower than
expected. (20% impact [836 seconds])
Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])

FINDING 6: 9.2% impact (387 seconds)


------------------------------------
Individual database segments responsible for significant user I/O wait
were found.
RECOMMENDATION 1: Segment Tuning, 7.2% benefit (304 seconds)
ACTION: Run "Segment Advisor" on database object "SCOTT.BIGEMP"
with object id 49634.
RELEVANT OBJECT: database object with id 49634
ACTION: Investigate application logic involving I/O on database
object "SCOTT.BIGEMP" with object id 49634.
RELEVANT OBJECT: database object with id 49634

RECOMMENDATION 2: Segment Tuning, 2% benefit (83 seconds)


ACTION: Run "Segment Advisor" on database object
"SYSMAN.MGMT_METRICS_RAW_PK" with object id 47084.
RELEVANT OBJECT: database object with id 47084
ACTION: Investigate application logic involving I/O on database
object "SYSMAN.MGMT_METRICS_RAW_PK" with object id 47084.
RELEVANT OBJECT: database object with id 47084

SYMPTOMS THAT LED TO THE FINDING:


Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])

FINDING 7: 8.7% impact (365 seconds)


------------------------------------
Individual SQL statements responsible for significant physical I/O were
found.

RECOMMENDATION 1: SQL Tuning, 8.7% benefit (365 seconds)


ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
gt9ahqgd5fmm2.
RELEVANT OBJECT: SQL statement with SQL_ID gt9ahqgd5fmm2 and
PLAN_HASH 547793521
UPDATE bigemp SET empno = ROWNUM

RECOMMENDATION 2: SQL Tuning, 0% benefit (0 seconds)


ACTION: Tune the PL/SQL block with SQL_ID fjxa1vp3yhtmr. Refer to
the "Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL
User's Guide and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID fjxa1vp3yhtmr
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;

SYMPTOMS THAT LED TO THE FINDING:


The throughput of the I/O subsystem was significantly lower than
expected. (20% impact [836 seconds])
Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])

FINDING 8: 8.3% impact (348 seconds)


------------------------------------
Wait class "Configuration" was consuming significant database time.

NO RECOMMENDATIONS AVAILABLE

ADDITIONAL INFORMATION: Waits for free buffers were not consuming


significant database time.
Waits for archiver processes were not consuming significant
database time.
Log file switch operations were not consuming significant database
time while waiting for checkpoint completion.
Log buffer space waits were not consuming significant database
time.
High watermark (HW) enqueue waits were not consuming significant
database time.
Space Transaction (ST) enqueue waits were not consuming
significant database time.
ITL enqueue waits were not consuming significant database time.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ADDITIONAL INFORMATION
----------------------

An explanation of the terminology used in this report is available when


you run the report with the 'ALL' level of detail.

The analysis of I/O performance is based on the default assumption that


the average read time for one database block is 5000 micro-seconds.

Wait class "Administrative" was not consuming significant database time.


Wait class "Application" was not consuming significant database time.
Wait class "Cluster" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Wait class "Scheduler" was not consuming significant database time.
Wait class "Other" was not consuming significant database time.

============================= END OF ADDM REPORT ======================

ADDM points out which events cause the performance problems to occur and
suggests directions to follow to fix these bottlenecks. The ADDM
recommendations show amongst others that the query on BIGEMP needs to be
examined; in this case it suggests to run the Segment Advisor to check
whether the data segment is fragmented or not; it also advises to check
the application logic involved in accessing the BIGEMP table. Furthermore,
it shows the system suffers from I/O problems (which is in this example
caused by not using SAME and placing all database files on a single disk
partition).

The findings are sorted descending by impact: the issues causing the
greatest performance problems are listed at the top of the report. Solving
these issues will result in the greatest performance benefits. Also, in the
last
section of the report ADDM indicates the areas that are not representing
a problem for the performance of the instance

In this example the database is rather idle. As such the Enterprise Manager
notification job (which runs frequently) is listed at the top. You need not
worry about this job at all.

Please notice that the output of the last query may differ depending on what
took place on your database at the time the ADDM recommendations were
generated.

RELATED DOCUMENTS
-----------------

Oracle10g Database Performance Guide Release 1 (10.1)


Oracle10g Database Reference Release 1 (10.1)
PL/SQL Packages and Types Reference Release 1 (10.1)

Note 2:
=======

To determine which segments will benefit from segment shrink, you can invoke
Segment Advisor.

alter table hr.employees enable row movement;

After the Segment Advisor has been invoked to give recommendations, the findings
are available in DBA_ADVISOR_FINDINGS and DBA_ADVISOR_RECOMMENDATIONS.

variable task_id number;

declare
  -- DESC is a reserved word, so the description variable is called descr
  name   varchar2(100);
  descr  varchar2(500);
  obj_id number;
begin
  name  := '';   -- CREATE_TASK generates a task name (e.g. TASK_00003) when passed ''
  descr := 'Check HR.EMPLOYEE';
  DBMS_ADVISOR.CREATE_TASK('Segment Advisor', :task_id, name, descr, NULL);
  DBMS_ADVISOR.CREATE_OBJECT(name,'TABLE','HR','EMPLOYEES', NULL,NULL,obj_id);
  DBMS_ADVISOR.SET_TASK_PARAMETER(name,'RECOMMEND_ALL','TRUE');
  DBMS_ADVISOR.EXECUTE_TASK(name);
end;
/

PL/SQL procedure successfully completed.

print task_id

TASK_ID
-------
6

SELECT owner, task_id, task_name, type, message, more_info
FROM DBA_ADVISOR_FINDINGS
WHERE task_id=6;

OWNER TASK_ID TASK_NAME  TYPE        MESSAGE
----- ------- ---------- ----------- -----------------------------------------------
RJB   6       TASK_00003 INFORMATION Perform shrink, estimated savings is 107602 bytes.

In DBA_ADVISOR_ACTIONS, you can even find the exact SQL statement to shrink the
hr.employees segment.

alter table hr.employees shrink space;
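
As a sketch of the common variants of the shrink command (10g syntax):

-- compact the segment but defer moving the high water mark
-- (avoids the short table lock at the end of the operation):
alter table hr.employees shrink space compact;

-- later, complete the operation and move the high water mark:
alter table hr.employees shrink space;

-- also shrink dependent objects such as indexes:
alter table hr.employees shrink space cascade;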

==============================
34. ASM and RAC in Oracle 10g:
==============================

34.1 ASM
========

========
Note 1:
========

Automatic Storage Management (ASM) in Oracle Database 10g

With ASM, Automatic Storage Management, there is a separate lightweight 10g database involved.
This ASM database (+ASM) contains all metadata about the ASM system.
It also acts as the interface between the regular database and the filesystems.

ASM will provide for presentation and implementation of a special filesystem, on which a number
of redundancy/availability and performance features are implemented.

In addition to the normal database background processes like CKPT, DBWR, LGWR,
SMON, and PMON,
an ASM instance uses at least two additional background processes to manage data
storage operations.
The Rebalancer process, RBAL, coordinates the rebalance activity for ASM disk
groups,
and the Actual ReBalance processes, ARBn, handle the actual rebalance of data
extent movements.
There are usually several ARB background processes (ARB0, ARB1, and so forth).

Every database instance that uses ASM for file storage, will also need two new
processes.
The Rebalancer background process (RBAL) handles global opens of all ASM disks in
the ASM Disk Groups,
while the ASM Bridge process (ASMB) connects as a foreground process into the ASM
instance when the
regular database instance starts. ASMB facilitates communication between the ASM
instance and
the regular database, including handling physical file changes like data file
creation and deletion.
ASMB exchanges messages between both servers for statistics update and instance
health validation.
These two processes are automatically started by the database instance when a new
Oracle file type -
for example, a tablespace's datafile -- is created on an ASM disk group. When an
ASM instance mounts
a disk group, it registers the disk group and connect string with Group Services.
The database instance
knows the name of the disk group, and can therefore use it to locate connect
information for
the correct ASM instance.

========
Note 2:
========

Some terminology in RAC:

CRS cluster ready services - Clusterware:

For Oracle10g on Linux and Windows-based platforms, CRS co-exists with but does
not inter-operate
with vendor clusterware. You may use vendor clusterware for all UNIX-based
operating systems
except for Linux. Although many of the Unix platforms have their own clusterware products,
you need to use the CRS software to provide the HA support services. CRS (cluster
ready services)
supports services and workload management and helps to maintain the continuous
availability of the services.
CRS also manages resources such as virtual IP (VIP) address for the node and the
global services daemon.
Note that the "Voting disks" and the "Oracle Cluster Registry", are regarded as
part of the CRS.

OCR:

The Oracle Cluster Registry (OCR) contains cluster and database configuration
information
for Real Application Clusters Cluster Ready Services (CRS), including the list of
nodes
in the cluster database, the CRS application, resource profiles, and the
authorizations for
the Event Manager (EVM). The OCR can reside in a file on a cluster file system or
on a shared raw device.
When you install Real Application Clusters, you specify the location of the OCR.

OCFS:

OCFS is a shared disk cluster filesystem. Version 1, released for Linux, is specifically designed
to alleviate the need for managing raw devices. It can contain all the
oracle datafiles, archive log files and controlfiles. It is however not designed as a
general purpose filesystem.

OCFS2 is the next generation of the Oracle Cluster File System for Linux. It is an
extent based,
POSIX compliant file system. Unlike the previous release (OCFS), OCFS2 is a
general-purpose
file system that can be used for shared Oracle home installations making
management of
Oracle Real Application Cluster (RAC) installations even easier. Among the new
features and benefits are:

Node and architecture local files using Context Dependent Symbolic Links (CDSL)
Network based pluggable DLM
Improved journaling / node recovery using the Linux Kernel "JBD" subsystem
Improved performance of meta-data operations (space allocation, locking, etc).
Improved data caching / locking (for files such as oracle binaries, libraries,
etc)

- OCFS1 does NOT support a shared Oracle Home
- OCFS2 does support a shared Oracle Home

ASM appears to be the intended replacement for Oracle Cluster File System (OCFS)
for Real Application Clusters (RAC). ASM supports RAC natively, so there is no need
for a separate Cluster LVM or a Cluster File System.

So it boils down to:

- You use either OCFS2, raw devices, or ASM (preferably) for your database files.

Storage Option                 Oracle Clusterware   Database   Recovery area
-----------------------------  -------------------  ---------  -------------
Automatic Storage Management   No                   Yes        Yes
Cluster file system (OCFS)     Yes                  Yes        Yes
Shared raw storage             Yes                  Yes        No

========
Note 3:
========

Automatic Storage Management (ASM) simplifies database administration. It eliminates the need
for you, as a DBA, to directly manage potentially thousands of Oracle database files. It does
this by enabling you to create disk groups, which are comprised of disks and the files that
reside on them. You only need to manage a small number of disk groups.

In the SQL statements that you use for creating database structures such as
tablespaces, redo log and
archive log files, and control files, you specify file location in terms of disk
groups.
Automatic Storage Management then creates and manages the associated underlying
files for you.

Automatic Storage Management extends the power of Oracle-managed files. With Oracle-managed files,
files are created and managed automatically for you, but with Automatic Storage Management you get
the additional benefits of features such as mirroring and striping.
The primary component of Automatic Storage Management is the disk group. You
configure Automatic Storage Management
by creating disk groups, which, in your database instance, can then be specified
as the default
location for files created in the database. Oracle provides SQL statements that
create and manage
disk groups, their contents, and their metadata.

A disk group consists of a grouping of disks that are managed together as a unit.
These disks are referred
to as ASM disks. Files written on ASM disks are ASM files, whose names are
automatically generated
by Automatic Storage Management. You can specify user-friendly alias names for ASM
files,
but you must create a hierarchical directory structure for these alias names.

You can affect how Automatic Storage Management places files on disks by
specifying failure groups.
Failure groups define disks that share components, such that if one fails then
other disks sharing
the component might also fail. An example of what you might define as a failure
group would be a set
of SCSI disks sharing the same SCSI controller. Failure groups are used to
determine which ASM disks
to use for storing redundant data. For example, if two-way mirroring is specified
for a file,
then redundant copies of file extents must be stored in separate failure groups.

If you take a look at v$datafile, v$logfile, and v$controlfile in the regular database,
you would see information like the following example:

SQL> select file#, name from v$datafile;

1 +DATA1/rac0/datafile/system.256.1
2 +DATA1/rac0/datafile/undotbs.258.1
3 +DATA1/rac0/datafile/sysaux.257.1
4 +DATA1/rac0/datafile/users.259.1
5 +DATA1/rac0/datafile/example.269.1

SQL> select name from v$controlfile;

+DATA1/rac0/controlfile/current.261.3
+DATA1/rac0/controlfile/current.260.3

-- Initialization Parameters (init.ora or SPFILE) for ASM Instances

The following initialization parameters relate to an ASM instance. Parameters that start with ASM_
cannot be set in database instances.

Name Description
INSTANCE_TYPE Must be set to INSTANCE_TYPE = ASM.
Note: This is the only required parameter. All other parameters
take suitable defaults
for most environments.

DB_UNIQUE_NAME Unique name for this group of ASM instances within the cluster or
on a node.
Default: +ASM (Needs to be modified only if trying to run multiple ASM
instances on the same node)

ASM_POWER_LIMIT The maximum power on an ASM instance for disk rebalancing.
Default: 1. Can range from 1 to 11; 1 is the lowest priority.

See Also: "Tuning Rebalance Operations"

ASM_DISKSTRING Limits the set of disks that Automatic Storage Management considers for discovery.
Default: NULL (This default causes ASM to find all of the disks in a platform-specific
location to which it has read/write access.)
Example: /dev/raw/*

ASM_DISKGROUPS Lists the names of disk groups to be mounted by an ASM instance at startup,
or when the ALTER DISKGROUP ALL MOUNT statement is used.
Default: NULL (If this parameter is not specified, then no disk groups are mounted.)

Note: This parameter is dynamic and if you are using a server parameter file
(SPFILE), then you should
rarely need to manually alter this value. Automatic Storage Management
automatically adds a disk group
to this parameter when a disk group is successfully mounted, and automatically
removes a disk group that
is specifically dismounted. However, when using a traditional text initialization
parameter file,
remember to edit the initialization parameter file to add the name of any disk
group that you want automatically
mounted at instance startup, and remove the name of any disk group that you no
longer want automatically mounted.
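
Putting the parameters above together, a minimal init.ora for an ASM instance could
look like this (a sketch; the disk string and disk group name are examples):

# init+ASM.ora
INSTANCE_TYPE   = ASM
ASM_DISKSTRING  = '/dev/raw/*'
ASM_DISKGROUPS  = DATA1
ASM_POWER_LIMIT = 1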

-- ASM Views:

The ASM configuration can be viewed using the V$ASM_% views, which often contain
different information
depending on whether they are queried from the ASM instance, or a dependent
database instance.

Viewing ASM Instance Information Via SQL Queries

Finally, there are several dynamic and data dictionary views available to view an ASM
configuration from within the ASM instance itself:

ASM Dynamic Views (queried from the ASM instance):

View Name        Description
---------        -----------
V$ASM_ALIAS Shows every alias for every disk group mounted by the ASM
instance

V$ASM_CLIENT Shows which database instance(s) are using any ASM disk groups
that are being mounted by this ASM instance

V$ASM_DISK Lists each disk discovered by the ASM instance, including disks
that are not part of any ASM disk group

V$ASM_DISKGROUP Describes information about ASM disk groups mounted by the ASM
instance

V$ASM_FILE Lists each ASM file in every ASM disk group mounted by the ASM
instance

V$ASM_OPERATION Like its counterpart, V$SESSION_LONGOPS, it shows each
long-running ASM operation in the ASM instance

V$ASM_TEMPLATE Lists each template present in every ASM disk group mounted by
the ASM instance
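
For example, a quick health check on the disk groups mounted by the ASM instance:

SELECT name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;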

-- Managing disk groups

The SQL statements introduced in this section are only available in an ASM
instance.
You must first start the ASM instance.

Creating disk group examples:

Example 1:
----------

Creating a Disk Group: Example

The following examples assume that the ASM_DISKSTRING is set to '/devices/*'.
Assume the following:

ASM disk discovery identifies the following disks in directory /devices.

/devices/diska1
/devices/diska2
/devices/diska3
/devices/diska4
/devices/diskb1
/devices/diskb2
/devices/diskb3
/devices/diskb4

The disks diska1 - diska4 are on a separate SCSI controller from disks diskb1 -
diskb4.

The following SQL*Plus session illustrates starting an ASM instance and creating a
disk group named dgroup1.
% SQLPLUS /NOLOG
SQL> CONNECT / AS SYSDBA

SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
2 FAILGROUP controller1 DISK
3 '/devices/diska1',
4 '/devices/diska2',
5 '/devices/diska3',
6 '/devices/diska4',
7 FAILGROUP controller2 DISK
8 '/devices/diskb1',
9 '/devices/diskb2',
10 '/devices/diskb3',
11 '/devices/diskb4';

In this example, dgroup1 is composed of eight disks that are defined as belonging
to either
failure group controller1 or controller2. Since NORMAL REDUNDANCY level is
specified for the disk group,
then Automatic Storage Management provides redundancy for all files created in
dgroup1 according to the
attributes specified in the disk group templates.

For example, in the system default template shown in the table in "Managing Disk
Group Templates",
normal redundancy for the online redo log files (ONLINELOG template) is two-way
mirroring. This means that
when one copy of a redo log file extent is written to a disk in failure group
controller1, a mirrored copy
of the file extent is written to a disk in failure group controller2. You can see
that to support normal
redundancy level, at least two failure groups must be defined.

Since no NAME clauses are provided for any of the disks being included in the disk
group,
the disks are assigned the names of dgroup1_0001, dgroup1_0002, ..., dgroup1_0008.

Example 2:
----------

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1,
'/devices/diska2' NAME diska2,
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1,
'/devices/diskb2' NAME diskb2;

Example 3:
----------

At some point while using OUI to install the software and create a database, you will
see the following screen:

----------------------------------------------------
|SPECIFY Database File Storage Option |
| |
| o File system |
| Specify Database file location: ######### |
| |
| o Automatic Storage Management (ASM) |
| |
| o Raw Devices |
| |
| Specify Raw Devices mapping file: ########## |
----------------------------------------------------

Suppose that you have on a Linux machine the following raw disk devices:

/dev/raw/raw1 8GB
/dev/raw/raw2 8GB
/dev/raw/raw3 6GB
/dev/raw/raw4 6GB
/dev/raw/raw5 6GB
/dev/raw/raw6 6GB

Then you can choose ASM in the upper screen, and see the following screen, where
you can create the initial diskgroup and assign disks to it:

-----------------------------------------------------
| Configure Automatic Storage Management |
| |
| Disk Group Name: data1 |
| |
| Redundancy |
| o High o Normal o External |
| |
| Add member Disks |
| |-------------------------------- |
| | select Disk Path | |
| |[#] /dev/raw/raw1 | |
| |[#] /dev/raw/raw2 | |
| |[ ] /dev/raw/raw3 | |
| |[ ] /dev/raw/raw4 | |
| -------------------------------- |
| |
-----------------------------------------------------

-- Mounting and Dismounting Disk Groups

Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are
mounted automatically
at ASM instance startup. This makes them available to all database instances
running on the same node
as Automatic Storage Management. The disk groups are dismounted at ASM instance
shutdown.
Automatic Storage Management also automatically mounts a disk group when you
initially create it,
and dismounts a disk group if you drop it.

There may be times that you want to mount or dismount disk groups manually. For
these actions use
the ALTER DISKGROUP ... MOUNT or ALTER DISKGROUP ... DISMOUNT statement. You can
mount or dismount
disk groups by name, or specify ALL.

If you try to dismount a disk group that contains open files, the statement will
fail, unless you also
specify the FORCE clause.

Example

The following statement dismounts all disk groups that are currently mounted to
the ASM instance:

ALTER DISKGROUP ALL DISMOUNT;

The following statement mounts disk group dgroup1:

ALTER DISKGROUP dgroup1 MOUNT;
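
Disks can also be added to or dropped from a mounted disk group; ASM then rebalances
the data across the remaining disks automatically. A sketch (paths and disk names are
examples):

ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska5';

ALTER DISKGROUP dgroup1 DROP DISK dgroup1_0003;

-- raise the rebalance effort for this disk group (1-11):
ALTER DISKGROUP dgroup1 REBALANCE POWER 5;

The progress of a running rebalance can be followed in V$ASM_OPERATION.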

========
Note 4:
========

-- Installing Oracle ASMLib for Linux:

ASMLib is a support library for the Automatic Storage Management feature of Oracle Database 10g.
This document is a set of tips for installing the Linux-specific ASM library and its
associated driver.
This library is provided to enable ASM I/O to Linux disks without the limitations of the
standard Unix I/O API. The steps below are steps that the system administrator must follow.

The ASMLib software is available from the Oracle Technology Network. Go to ASMLib
download page
and follow the link for your platform.
You will see 4-6 packages for your Linux platform.

-The oracleasmlib package provides the actual ASM library.


-The oracleasm-support package provides the utilities used to get the ASM driver
up and running. Both of these packages need to be installed.
-The remaining packages provide the kernel driver for the ASM library. Each package
provides the driver for a different kernel. You must install the appropriate package
for the kernel you are running.
Use the "uname -r" command to determine the version of the kernel. The oracleasm kernel
driver package will have that version string in its name. For example, if you were
running Red Hat Enterprise Linux 4 AS, and the kernel you were using was the
2.6.9-5.0.5.ELsmp kernel, you would choose the oracleasm-2.6.9-5.0.5-ELsmp package.
So, for example, to install these packages on RHEL4 on an Intel x86 machine, you
might use the command:

rpm -Uvh oracleasm-support-2.0.0-1.i386.rpm \
         oracleasmlib-2.0.0-1.i386.rpm \
         oracleasm-2.6.9-5.0.5-ELsmp-2.0.0-1.i686.rpm

Once the command completes, ASMLib is now installed on the system.

-- Configuring ASMLib:

Now that the ASMLib software is installed, a few steps have to be taken by the
system administrator
to make the ASM driver available. The ASM driver needs to be loaded, and the
driver filesystem needs
to be mounted. This is taken care of by the initialization script,
"/etc/init.d/oracleasm".
Run the "/etc/init.d/oracleasm" script with the "configure" option. It will ask
for the user and group
that default to owning the ASM driver access point. If the database was running as
the 'oracle' user
and the 'dba' group, the output would look like this:

[root@ca-test1 /]# /etc/init.d/oracleasm configure


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle


Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]

This should load the oracleasm.o driver module and mount the ASM driver
filesystem.
By selecting enabled = 'y' during the configuration, the system will always load
the module
and mount the filesystem on boot.
The automatic start can be enabled or disabled with the 'enable' and 'disable'
options
to /etc/init.d/oracleasm:

[root@ca-test1 /]# /etc/init.d/oracleasm disable


Writing Oracle ASM library driver configuration [ OK ]
Unmounting ASMlib driver filesystem [ OK ]
Unloading module "oracleasm" [ OK ]

[root@ca-test1 /]# /etc/init.d/oracleasm enable


Writing Oracle ASM library driver configuration [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]

-- Making Disks Available to ASMLib:

The system administrator has one last task. Every disk that ASMLib is going to be
accessing
needs to be made available. This is accomplished by creating an ASM disk. The
/etc/init.d/oracleasm script
is again used for this task:

[root@ca-test1 /]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdg1


Creating Oracle ASM disk "VOL1" [ OK ]

Disk names are ASCII capital letters, numbers, and underscores. They must start
with a letter.
Disks that are no longer used by ASM can be unmarked as well:

[root@ca-test1 /]# /etc/init.d/oracleasm deletedisk VOL1


Deleting Oracle ASM disk "VOL1" [ OK ]

Any operating system disk can be queried to see if it is used by ASM:

[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdg1


Checking if device "/dev/sdg1" is an Oracle ASM disk [ OK ]
[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdh1
Checking if device "/dev/sdh1" is an Oracle ASM disk [FAILED]

Existing disks can be listed and queried:

[root@ca-test1 /]# /etc/init.d/oracleasm listdisks


VOL1
VOL2
VOL3
[root@ca-test1 /]# /etc/init.d/oracleasm querydisk VOL1
Checking for ASM disk "VOL1" [ OK ]

When a disk is added to a RAC setup, the other nodes need to be notified about it.

Run the 'createdisk' command on one node, and then run 'scandisks' on every other
node:

[root@ca-test1 /]# /etc/init.d/oracleasm scandisks


Scanning system for ASM disks [ OK ]

-- Discovery Strings for Linux ASMLib:

ASMLib uses discovery strings to determine what disks ASM is asking for. The
generic Linux ASMLib
uses glob strings. The string must be prefixed with "ORCL:". Disks are specified
by name.
A disk created with the name "VOL1" can be discovered in ASM via the discovery
string "ORCL:VOL1".
Similarly, all disks that start with the string "VOL" can be queried with the
discovery string "ORCL:VOL*".
Disks cannot be discovered with path names in the discovery string. If the prefix
is missing,
the generic Linux ASMLib will ignore the discovery string completely, expecting
that it is intended
for a different ASMLib. The only exception is the empty string (""), which is
considered a full wildcard.
This is precisely equivalent to the discovery string "ORCL:*".
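
For example, to limit discovery on the ASM instance to ASMLib disks whose names
start with VOL, you could set (a sketch; note that ASM_DISKSTRING can only be
changed to a value that still covers all disks currently in use):

SQL> ALTER SYSTEM SET ASM_DISKSTRING='ORCL:VOL*';
SQL> SELECT path FROM v$asm_disk;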

NOTE: Once you mark your disks with Linux ASMLib, Oracle Database 10g R1 (10.1)
OUI will not be able
to discover your disks. It is recommended that you complete a Software Only
install and then use DBCA
to create your database (or use the custom install).

========
Note 5:
========

Automatic Storage Management (ASM) is a new feature introduced in Oracle 10g to
simplify the storage of Oracle datafiles, controlfiles and logfiles.

- Overview of Automatic Storage Management (ASM)
- Initialization Parameters and ASM Instance Creation
- Startup and Shutdown of ASM Instances
- Administering ASM Disk Groups
- Disks
- Templates
- Directories
- Aliases
- Files
- Checking Metadata
- ASM Filenames
- ASM Views
- SQL and ASM
- Migrating to ASM Using RMAN

Overview of Automatic Storage Management (ASM)


Automatic Storage Management (ASM) simplifies administration of Oracle related
files by allowing
the administrator to reference disk groups rather than individual disks and files,
which are managed by ASM.
The ASM functionality is an extension of the Oracle Managed Files (OMF)
functionality that also includes
striping and mirroring to provide balanced and secure storage. The new ASM
functionality can be used in
combination with existing raw and cooked file systems, along with OMF and manually
managed files.

The ASM functionality is controlled by an ASM instance. This is not a full
database instance, just the memory structures, and as such it is very small and
lightweight.
The main components of ASM are disk groups, each of which comprises several
physical disks that are controlled as a single unit. The physical disks are known
as ASM disks, while the files that reside on the disks
are known as ASM files. The locations and names for the files are controlled by
ASM, but user-friendly aliases and directory structures can be defined for ease of
reference.

The level of redundancy and the granularity of the striping can be controlled
using templates.
Default templates are provided for each file type stored by ASM, but additional
templates can be defined as needed.

Failure groups are defined within a disk group to support the required level of
redundancy.
For two-way mirroring you would expect a disk group to contain two failure groups
so individual files
are written to two locations.

In summary ASM provides the following functionality:

Manages groups of disks, called disk groups.
Manages disk redundancy within a disk group.
Provides near-optimal I/O balancing without any manual tuning.
Enables management of database objects without specifying mount points and
filenames.
Supports large files.

Initialization Parameters and ASM Instance Creation

The init.ora / spfile initialization parameters that are of specific interest for
an ASM instance are:

INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default
is RDBMS.
DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults
to +ASM but
must be altered if you intend to run multiple ASM instances.
ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM
instance. The valid values range
from 1 to 11, with 1 being the default. The higher the limit the
more resources are allocated
resulting in faster rebalancing operations. This value is also
used as the default
when the POWER clause is omitted from a rebalance operation.
ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM
instance during instance startup,
or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration
changes are automatically
reflected in this parameter.
ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered
for discovery.
Altering the default value may improve the speed of disk group
mount time and the speed
of adding a disk to a disk group. Changing the parameter to a
value which prevents
the discovery of already mounted disks results in an error. The
default value is NULL
allowing all suitable disks to be considered.
Incorrect usage of these parameters in ASM or RDBMS instances results in
ORA-15021 errors.

To create an ASM instance first create a file called init+ASM.ora in the /tmp
directory
containing the following information.

INSTANCE_TYPE=ASM

Next, using SQL*Plus, connect to the idle instance.

export ORACLE_SID=+ASM

sqlplus / as sysdba

Create an spfile using the contents of the init+ASM.ora file.

SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';

File created.

Finally, start the instance with the NOMOUNT option.

SQL> startup nomount


ASM instance started

Total System Global Area 125829120 bytes


Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
SQL>

The ASM instance is now ready to use for creating and mounting disk groups.
To shutdown the ASM instance issue the following command.

SQL> shutdown
ASM instance shutdown
SQL>

Once an ASM instance is present, disk groups can be used for the following
parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:

DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST
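
For example, on a database instance you might point datafile creation at a disk
group like this (a sketch; disk_group_1 is an example name):

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='+disk_group_1';
SQL> CREATE TABLESPACE asm_ts;   -- datafile is created in +disk_group_1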

Startup and Shutdown of ASM Instances


ASM instances are started and stopped in a similar way to normal database
instances. The options
for the STARTUP command are:
FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
MOUNT - Starts the ASM instance and mounts the disk groups specified by the
ASM_DISKGROUPS parameter.
NOMOUNT - Starts the ASM instance without mounting any disk groups.
OPEN - This is not a valid option for an ASM instance.

The options for the SHUTDOWN command are:

NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions
to exit then shuts down.
IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts
down.
It doesn't wait for sessions to exit.
TRANSACTIONAL - Same as IMMEDIATE.
ABORT - The ASM instance shuts down instantly.
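
For example, to bounce an ASM instance and mount the disk groups listed in
ASM_DISKGROUPS (a sketch):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT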

Administering ASM Disk Groups

Disk groups are created using the CREATE DISKGROUP statement. This statement
allows you to specify
the level of redundancy:

NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.


HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using
hardware mirroring or RAID.

In addition failure groups and preferred names for disks can be defined. If the
NAME clause is omitted
the disks are given a system generated name like "disk_group_1_0001". The FORCE
option can be used
to move a disk from another disk group into this one.

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY


FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1,
'/devices/diska2' NAME diska2,
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1,
'/devices/diskb2' NAME diskb2;

Disk groups can be deleted using the DROP DISKGROUP statement.

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Disks can be added or removed from disk groups using the ALTER DISKGROUP
statement.
Remember that the wildcard "*" can be used to reference disks so long as the
resulting string does not match
a disk already used by an existing disk group.

-- Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';

-- Drop a disk.
ALTER DISKGROUP disk_group_1 DROP DISK diska2;

Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement.
The statement can be used to resize individual disks, all disks in a failure group
or all disks
in the disk group. If the SIZE clause is omitted the disks are resized to the size
of the disk returned by the OS.

-- Resize a specific disk.


ALTER DISKGROUP disk_group_1
RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.


ALTER DISKGROUP disk_group_1
RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.


ALTER DISKGROUP disk_group_1
RESIZE ALL SIZE 100G;

The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops
to be undone. It will not revert drops that have already completed, or disk drops
associated with the dropping of a disk group.

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER
DISKGROUP statement.
If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used.
Rebalancing is only needed
when the speed of the automatic rebalancing is not appropriate.

ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;

Disk groups are mounted at ASM instance startup and unmounted at ASM instance
shutdown.
Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP
statement as seen below.

ALTER DISKGROUP ALL DISMOUNT;


ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;

Templates
Templates are named groups of attributes that can be applied to the files within a
disk group.
The following examples show how templates can be created, altered and dropped.

-- Create a new template.


ALTER DISKGROUP disk_group_1 ADD TEMPLATE my_template ATTRIBUTES (MIRROR FINE);

-- Modify template.
ALTER DISKGROUP disk_group_1 ALTER TEMPLATE my_template ATTRIBUTES (COARSE);

-- Drop template.
ALTER DISKGROUP disk_group_1 DROP TEMPLATE my_template;

Available attributes include:

UNPROTECTED - No mirroring or striping regardless of the redundancy setting.


MIRROR - Two-way mirroring for normal redundancy and three-way mirroring for high
redundancy.
This attribute cannot be set for external redundancy.
COARSE - Specifies lower granularity for striping. This attribute cannot be set for
external redundancy.
FINE - Specifies higher granularity for striping. This attribute cannot be set for
external redundancy.

Directories
A directory hierarchy can be defined using the ALTER DISKGROUP statement to
support ASM file aliasing.
The following examples show how ASM directories can be created, modified and
deleted.

-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO
'+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.


ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;

Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than
the fully qualified ASM filenames.

-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.


ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1.342.3';

-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';

Attempting to drop a system alias results in an error.

Files
Files are not deleted automatically if they are created using aliases, as they are
not Oracle Managed Files (OMF),
or if a recovery is done to a point-in-time before the file was created. For these
circumstances
it is necessary to manually delete the files, as shown below.

-- Drop file using an alias.


ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop file using a numeric form filename.


ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop file using a fully qualified filename.


ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking Metadata
The internal consistency of disk group metadata can be checked in a number of ways
using the CHECK clause
of the ALTER DISKGROUP statement.

-- Check metadata for a specific file.


ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.


ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

-- Check metadata for a specific disk in the disk group.


ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.


ALTER DISKGROUP disk_group_1 CHECK ALL;

ASM Views
The ASM configuration can be viewed using the V$ASM_% views, which often contain
different information
depending on whether they are queried from the ASM instance, or a dependent
database instance.

Viewing ASM Instance Information Via SQL Queries


Finally, there are several dynamic and data dictionary views available to view an
ASM configuration from within
the ASM instance itself:

-- ASM Dynamic Views: FROM ASM Instance Information

View Name Description

V$ASM_ALIAS Shows every alias for every disk group mounted by the ASM
instance

V$ASM_CLIENT Shows which database instance(s) are using any ASM disk groups
that are being mounted by this ASM instance

V$ASM_DISK Lists each disk discovered by the ASM instance, including disks
that are not part of any ASM disk group

V$ASM_DISKGROUP Describes information about ASM disk groups mounted by the ASM
instance

V$ASM_FILE Lists each ASM file in every ASM disk group mounted by the ASM
instance

V$ASM_OPERATION Like its counterpart, V$SESSION_LONGOPS, it shows each
long-running ASM operation in the ASM instance

V$ASM_TEMPLATE Lists each template present in every ASM disk group mounted by
the ASM instance

I was also able to query the following dynamic views against my database instance
to view the related ASM storage
components of that instance:
-- ASM Dynamic Views: FROM Database Instance Information

View Name Description

V$ASM_DISKGROUP Shows one row per each ASM disk group that's mounted by the
local ASM instance

V$ASM_DISK Displays one row per each disk in each ASM disk group that are
in use by the database instance

V$ASM_CLIENT Lists one row per each ASM instance for which the database
instance has any open ASM files
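
For example, a quick check of the capacity and free space of each mounted disk
group (a sketch; these V$ASM_DISKGROUP columns exist in 10g):

SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;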

ASM Filenames
There are several ways to reference ASM files. Some forms are used during creation
and some for
referencing ASM files. The forms for file creation are incomplete, relying on ASM
to create the fully qualified name,
which can be retrieved from the supporting views. The forms of the ASM filenames
are summarised below.

Filename Type Format


Fully Qualified ASM Filename
+dgroup/dbname/file_type/file_type_tag.file.incarnation
Numeric ASM Filename +dgroup.file.incarnation
Alias ASM Filenames +dgroup/directory/filename
Alias ASM Filename with Template +dgroup(template)/alias
Incomplete ASM Filename +dgroup
Incomplete ASM Filename with Template +dgroup(template)
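
As a concrete (hypothetical) illustration, the same datafile might be referenced as:

+dgroup1/mydb/datafile/users.342.3      (fully qualified)
+dgroup1.342.3                          (numeric)
+dgroup1/my_dir/users01.dbf             (alias)
+dgroup1                                (incomplete, at creation time)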

SQL and ASM


ASM filenames can be used in place of conventional filenames for most Oracle file
types, including controlfiles,
datafiles, logfiles etc. For example, the following command creates a new
tablespace with a datafile
in the disk_group_1 disk group.

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

Migrating to ASM Using RMAN
The following method shows how a primary database can be migrated to ASM from a
disk based backup:

Disable change tracking (only available in Enterprise Edition) if it is currently
being used:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

Shutdown the database:

SQL> SHUTDOWN IMMEDIATE

Modify the parameter file of the target database as follows:

Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the
relevant ASM disk groups.
Remove the CONTROL_FILES parameter from the spfile so the control files will be
moved to the DB_CREATE_* destination
and the spfile gets updated automatically. If you are using a pfile the
CONTROL_FILES parameter must be set
to the appropriate ASM files or aliases.

Start the database in nomount mode:

RMAN> STARTUP NOMOUNT

Restore the controlfile into the new location from the old location:

RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';

Mount the database:

RMAN> ALTER DATABASE MOUNT;

Copy the database into the ASM disk group:

RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';

Switch all datafiles to the new ASM location:

RMAN> SWITCH DATABASE TO COPY;

Open the database:

RMAN> ALTER DATABASE OPEN;

Create new redo logs in ASM and delete the old ones.

Enable change tracking if it was being used:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

For more information see:

Using Automatic Storage Management
Migrating a Database into ASM

=======
Note 6:
=======

Good example !!!!

How to Use Oracle10g release 2 ASM on Linux:

[root@danaly etc]# fdisk /dev/cciss/c0d0

The number of cylinders for this disk is set to 8854.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes


255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/cciss/c0d0p1 * 1 33 265041 83 Linux
/dev/cciss/c0d0p2 34 555 4192965 82 Linux swap
/dev/cciss/c0d0p3 556 686 1052257+ 83 Linux
/dev/cciss/c0d0p4 687 8854 65609460 5 Extended
/dev/cciss/c0d0p5 687 1730 8385898+ 83 Linux
/dev/cciss/c0d0p6 1731 2774 8385898+ 83 Linux
/dev/cciss/c0d0p7 2775 3818 8385898+ 83 Linux
/dev/cciss/c0d0p8 3819 4601 6289416 83 Linux

Command (m for help): n


First cylinder (4602-8854, default 4602):
Using default value 4602
Last cylinder or +size or +sizeM or +sizeK (4602-8854, default 8854): +20000M

Command (m for help): n


First cylinder (7035-8854, default 7035):
Using default value 7035
Last cylinder or +size or +sizeM or +sizeK (7035-8854, default 8854): +3000M

Command (m for help): n


First cylinder (7401-8854, default 7401):
Using default value 7401
Last cylinder or +size or +sizeM or +sizeK (7401-8854, default 8854): +3000M

Command (m for help): p

Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes


255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/cciss/c0d0p1 * 1 33 265041 83 Linux
/dev/cciss/c0d0p2 34 555 4192965 82 Linux swap
/dev/cciss/c0d0p3 556 686 1052257+ 83 Linux
/dev/cciss/c0d0p4 687 8854 65609460 5 Extended
/dev/cciss/c0d0p5 687 1730 8385898+ 83 Linux
/dev/cciss/c0d0p6 1731 2774 8385898+ 83 Linux
/dev/cciss/c0d0p7 2775 3818 8385898+ 83 Linux
/dev/cciss/c0d0p8 3819 4601 6289416 83 Linux
/dev/cciss/c0d0p9 4602 7034 19543041 83 Linux
/dev/cciss/c0d0p10 7035 7400 2939863+ 83 Linux
/dev/cciss/c0d0p11 7401 7766 2939863+ 83 Linux

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource
busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@danaly data1]# /etc/init.d/oracleasm createdisk VOL5 /dev/cciss/c0d0p10


Marking disk "/dev/cciss/c0d0p10" as an ASM disk: [ OK ]
[root@danaly data1]# /etc/init.d/oracleasm createdisk VOL6 /dev/cciss/c0d0p11
Marking disk "/dev/cciss/c0d0p11" as an ASM disk: [ OK ]
[root@danaly data1]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
VOL6

(THE FOLLOWING QUERIES ARE ISSUED FROM THE ASM INSTANCE.)

[oracle@danaly ~]$ export ORACLE_SID=+ASM


[oracle@danaly ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 00:28:09 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 83886080 bytes


Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted

SQL> select group_number,disk_number,mode_status from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS


------------ ----------- --------------
0 4 ONLINE
0 5 ONLINE
1 0 ONLINE
1 1 ONLINE
1 2 ONLINE
1 3 ONLINE

6 rows selected.

SQL> select group_number,disk_number,mode_status,name from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS NAME


------------ ----------- -------------- ---------------------------------
0 4 ONLINE
0 5 ONLINE
1 0 ONLINE VOL1
1 1 ONLINE VOL2
1 2 ONLINE VOL3
1 3 ONLINE VOL4

6 rows selected.

SQL> create diskgroup orag2 external redundancy disk 'ORCL:VOL5';

Diskgroup created.

SQL> select group_number,disk_number,mode_status,name from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS NAME


------------ ----------- -------------- -------------------------------------
0 5 ONLINE
1 0 ONLINE VOL1
1 1 ONLINE VOL2
1 2 ONLINE VOL3
1 3 ONLINE VOL4
2 0 ONLINE VOL5

6 rows selected.

(THE FOLLOWING QUERIES ARE ISSUED FROM THE DATABASE INSTANCE.)

[oracle@danaly ~]$ export ORACLE_SID=danaly


[oracle@danaly ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 00:47:04 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 943718400 bytes


Fixed Size 1222744 bytes
Variable Size 281020328 bytes
Database Buffers 654311424 bytes
Redo Buffers 7163904 bytes
Database mounted.
Database opened.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+ORADG/danaly/datafile/system.264.600016955
+ORADG/danaly/datafile/undotbs1.265.600016969
+ORADG/danaly/datafile/sysaux.266.600016977
+ORADG/danaly/datafile/users.268.600016987

SQL> create tablespace eygle datafile '+ORAG2' ;

Tablespace created.

SQL> select name from v$datafile;

NAME
---------------------------------------------------------------------------------
+ORADG/danaly/datafile/system.264.600016955
+ORADG/danaly/datafile/undotbs1.265.600016969
+ORADG/danaly/datafile/sysaux.266.600016977
+ORADG/danaly/datafile/users.268.600016987
+ORAG2/danaly/datafile/eygle.256.600137647

[oracle@danaly log]$ export ORACLE_SID=+ASM


[oracle@danaly log]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 01:36:37 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine
options

SQL> alter diskgroup orag2 add disk 'ORCL:VOL6';

Diskgroup altered.

============
Note 7: OMF
============

Using Oracle-managed files simplifies the administration of an Oracle database.
Oracle-managed files eliminate the need for you, the DBA, to directly manage the
operating system files comprising an Oracle database.
You specify operations in terms of database objects rather than filenames. Oracle
internally uses standard
file system interfaces to create and delete files as needed for the following
database structures:

Tablespaces
Online redo log files
Control files

The following init.ora/spfile initialization parameters allow the database server
to use the Oracle Managed Files feature:

- DB_CREATE_FILE_DEST
Defines the location of the default file system directory where Oracle creates
datafiles
or tempfiles when no file specification is given in the creation operation. Also
used as the default
file system directory for online redo log and control files if
DB_CREATE_ONLINE_LOG_DEST_n is not specified.

- DB_CREATE_ONLINE_LOG_DEST_n
Defines the location of the default file system directory for online redo log
files and
control file creation when no file specification is given in the creation
operation. You can use this
initialization parameter multiple times, where n specifies a multiplexed copy of
the online redo log
or control file. You can specify up to five multiplexed copies.

Example:

DB_CREATE_FILE_DEST = '/u01/oradata/payroll'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata/payroll'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata/payroll'
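
With DB_CREATE_FILE_DEST set as above, file clauses become optional. A minimal
sketch (payroll_data is an example tablespace name):

SQL> CREATE TABLESPACE payroll_data;  -- OMF datafile created in /u01/oradata/payroll
SQL> DROP TABLESPACE payroll_data;    -- the underlying OMF datafile is removed as well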

34.2 RAC 10g:
=============

===========================================
Note 1: High Level Overview Oracle 10g RAC
===========================================

- RAC Architecture Overview

Let's begin with a brief overview of RAC architecture.

A cluster is a set of 2 or more machines (nodes) that share or coordinate
resources to perform the same task.
A RAC database is 2 or more instances running on a set of clustered nodes, with
all instances accessing
a shared set of database files.
Depending on the O/S platform, a RAC database may be deployed on a cluster that
uses vendor clusterware
plus Oracle's own clusterware (Cluster Ready Services), or on a cluster that
solely uses
Oracle's own clusterware.
Thus, every RAC sits on a cluster that is running Cluster Ready Services. srvctl
is the primary tool DBAs use
to configure CRS for their RAC database and processes.

- Cluster Ready Services and the OCR

Cluster Ready Services, or CRS, is a new feature for 10g RAC. Essentially, it is
Oracle's own clusterware.
On most platforms, Oracle supports vendor clusterware; in these cases, CRS
interoperates with the vendor
clusterware, providing high availability support and service and workload
management. On Linux and Windows clusters,
CRS serves as the sole clusterware. In all cases, CRS provides a standard cluster
interface that is consistent
across all platforms.

CRS consists of four processes (crsd, occsd, evmd, and evmlogger) and two disks:
the Oracle Cluster Registry (OCR), and the voting disk.

CRS manages the following resources:

. The ASM instances on each node
. Databases
. The instances on each node
. Oracle Services on each node
. The cluster nodes themselves, including the following processes, or "nodeapps":
. VIP
. GSD
. The listener
. The ONS daemon

CRS stores information about these resources in the OCR. If the information in the
OCR for one of these
resources becomes damaged or inconsistent, then CRS is no longer able to manage
that resource.
Fortunately, the OCR automatically backs itself up regularly and frequently.
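
You can list those automatic backups, and take a manual export, with the ocrconfig
tool (run as root from the Clusterware home; a sketch):

# ocrconfig -showbackup
# ocrconfig -export /tmp/ocr_backup.dmp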

10g RAC (10.2) uses, or depends on:

- Oracle Clusterware (10.2), formerly referred to as CRS "Cluster Ready Services" (10.1).
- Oracle's optional Cluster File System OCFS, or alternatively ASM and RAW.
- Oracle Database extensions

RAC is "scale out" technology: just add commodity nodes to the system.
The key component is "cache fusion". Data are transferred from one node
to another via very fast interconnects.
Essential to 10g RAC is a "Shared Cache" technology.

Automatic Workload Repository (AWR) plays a role also. The Fast Application
Notification (FAN) mechanism
that is part of RAC, publishes events that describe the current service level
being provided
by each instance, to AWR. The load balancing advisory information is then used to
determine
the best instance to serve the new request.

. With RAC, ALL Instances of ALL nodes in a cluster, access a SINGLE database.
. But every instance has its own UNDO tablespace, and REDO logs.

The Oracle Clusterware comprises several background processes that facilitate
cluster operations. The Cluster Synchronization Service (CSS), Event Management
(EVM), and Oracle Cluster components communicate with the other cluster component
layers in the other instances within the same cluster database environment.
For each implementation, questions arise in the following areas:

. Storage
. Computer Systems/Storage-Interconnect
. Database
. Application Server
. Public and Private networks
. Application Control & Display

On the storage level, 10g RAC supports:

- Automatic Storage Management (ASM)
- Oracle Cluster File System (OCFS)
- Network File System (NFS) - limited support (only on certified NAS devices)
- Disk raw partitions
- Third party cluster file systems

For application control and tools, 10g RAC supports:
- OEM Grid Control http://hostname:5500/em
  OEM Database Control http://hostname:1158/em
- "srvctl" is a command line interface to manage the cluster configuration,
  for example, starting and stopping all nodes in one command.
- Cluster Verification Utility (cluvfy) can be used for an installation and sanity
check.

Failure in Client connections:

Depending on the Net configuration, type of connection, type of transaction etc.,
Oracle Net Services provides a feature called "Transparent Application Failover",
or TAF, which can fail over a client session to another backup connection.
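
A minimal tnsnames.ora sketch of a TAF-enabled connect descriptor, assuming
hypothetical virtual hostnames voc1 and voc2 and a service named rac:

RAC =
  (DESCRIPTION =
    (LOAD_BALANCE = yes)
    (ADDRESS = (PROTOCOL = TCP)(HOST = voc1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = voc2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )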

About HA and DR:

- RAC is HA, High Availability, which keeps things up and running at one site.
- Data Guard is DR, Disaster Recovery, and is able to mirror one site to another
remote site.

===========================================================
Note 2: 10g RAC processes, services, daemons and start stop
===========================================================

CRS consists of four processes (crsd, occsd, evmd, and evmlogger) and two disks:
the Oracle Cluster Registry (OCR), and the voting disk.

On most platforms, you may see the following processes:

oprocd the Process Monitor Daemon
crsd   the CRS Daemon
occsd  Oracle Cluster Synchronization Service Daemon
evmd   Event Manager Daemon

To start and stop CRS when the machine starts or shuts down, there are rc scripts
in place on Unix.

You can also, as root, manually start, stop, enable or disable the services with:

/etc/init.d/init.crs start
/etc/init.d/init.crs stop
/etc/init.d/init.crs enable
/etc/init.d/init.crs disable

Or with

# crsctl start crs


# crsctl stop crs
# crsctl enable crs
# crsctl disable crs
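
To verify the state of the CRS daemons (a sketch; 10g output looks similar to):

# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy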

==============================================
Note 3: Installation notes 10g RAC on Windows
==============================================

See the next note for installation on Linux

3.1 Before you install:
-----------------------

Each node in a cluster requires the following:

> One private internet protocol (IP) address for each node to serve as the private
interconnect.
The following must be true for each private IP address:

-It must be separate from the public network
-It must be accessible on the same network interface on each node
-It must have a unique address on each node

The private interconnect is used for inter-node communication by both Oracle
Clusterware and RAC.
If the private address is available from a network name server (DNS), then you
can use that name.
Otherwise, the private IP address must be available in each node's
C:\WINNT\system32\drivers\etc\hosts file.

> One public IP address for each node, to be used as the Virtual IP (VIP) address
for client connections
and for connection failover. The name associated with the VIP must be different
from the default host name.

This VIP must be associated with the same interface name on every node that is
part of your cluster.
In addition, the IP addresses that you use for all of the nodes that are part of a
cluster must be from
the same subnet.

> One public fixed hostname address for each node, typically assigned by the
system administrator
during operating system installation. If you have a DNS, then register both the
fixed IP and the VIP address
with DNS. If you do not have DNS, then you must make sure that the public IP and
VIP addresses for all
nodes are in each node's host file.

For example, with a two node cluster where each node has one public and one
private interface,
you might have the configuration shown in the following table for your network
interfaces,
where the hosts file is %SystemRoot%\system32\drivers\etc\hosts:

Node Interface Name Type IP Address Registered In


rac1 rac1 Public 143.46.43.100 DNS (if available, else the
hosts file)
rac1 rac1-vip Virtual 143.46.43.104 DNS (if available, else the
hosts file)
rac1 rac1-priv Private 10.0.0.1 Hosts file
rac2 rac2 Public 143.46.43.101 DNS (if available, else the
hosts file)
rac2 rac2-vip Virtual 143.46.43.105 DNS (if available, else the
hosts file)
rac2 rac2-priv Private 10.0.0.2 Hosts file

The virtual IP addresses are assigned to the listener process.

To enable VIP failover, the configuration shown in the preceding table defines the
public and VIP addresses
of both nodes on the same subnet, 143.46.43. When a node or interconnect fails,
then the associated VIP
is relocated to the surviving instance, enabling fast notification of the failure
to the clients connecting
through that VIP. If the application and client are configured with transparent
application failover options,
then the client is reconnected to the surviving instance.

To disable Windows Media Sensing for TCP/IP, you must set the value of the
DisableDHCPMediaSense parameter to 1
on each node. Disable Media Sensing by completing the following steps on each node
of your cluster:

Use Registry Editor (Regedt32.exe) to view the following key in the registry:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters

Add the following registry value:

Value Name: DisableDHCPMediaSense
Data Type: REG_DWORD - Boolean
Value: 1

- External shared disks for storing Oracle Clusterware and database files.
The disk configuration options available to you are described in Chapter 3,
"Storage Pre-Installation Tasks".
Review these options before you decide which storage option to use in your RAC
environment. However, note
that when Database Configuration Assistant (DBCA) configures automatic disk
backup, it uses a
database recovery area which must be shared. The database files and recovery files
do not necessarily have
to be located on the same type of storage.

Determine the storage option for your system and configure the shared disk. Oracle
recommends that
you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a
cluster file system.
If you use ASM or a cluster file system, then you can also take advantage of OMF
and other Oracle Database 10g
storage features. If you use RAC on Oracle Database 10g Standard Edition, then you
must use ASM.
If you use ASM, then Oracle recommends that you install ASM in a separate home
from the
Oracle Clusterware home and the Oracle home.

Oracle Database 10g Real Application Clusters installation is a two-phase installation.
In phase one, use Oracle Universal Installer (OUI) to install Oracle Clusterware.
In phase two, install the database software using OUI.

When you install Oracle Clusterware or RAC, OUI copies the Oracle software onto
the node from which
you are running it. If your Oracle home is not on a cluster file system, then OUI
propagates the software
onto the other nodes that you have selected to be part of your OUI installation
session.

- Shared Storage for Database Recovery Area


When you configure a database recovery area in a RAC environment, the database
recovery area must be on
shared storage. When Database Configuration Assistant (DBCA) configures automatic
disk backup, it uses
a database recovery area that must be shared.

If the database files are stored on a cluster file system, then the recovery area
can also be shared through
the cluster file system.

If the database files are stored on an Automatic Storage Management (ASM) disk
group, then the recovery area
can also be shared through ASM.

If the database files are stored on raw devices, then you must use either a
cluster file system or ASM
for the recovery area.

Note:

ASM disk groups are always valid recovery areas, as are cluster file systems.
Recovery area files do not have
to be in the same location where datafiles are stored. For instance, you can store
datafiles on raw devices,
but use ASM for the recovery area.

Data files are not placed on NTFS partitions, because they cannot be shared.
Data files can be placed on Oracle Cluster File System (OCFS), on raw disks using
ASM, or on raw disks.

- Oracle Clusterware
You must provide OUI with the names of the nodes on which you want to install
Oracle Clusterware.
The Oracle Clusterware home can be either shared by all nodes, or private to each
node, depending
on your responses when you run OUI. The home that you select for Oracle
Clusterware must be different
from the RAC-enabled Oracle home.
Versions of cluster manager previous to Oracle Database 10g were sometimes
referred to as "Cluster Manager".
In Oracle Database 10g, this function is performed by a Oracle Clusterware
component known as
Cluster Synchronization Services (CSS). The OracleCSService, OracleCRService, and
OracleEVMService
replace the service known previous to Oracle Database 10g as OracleCMService9i.

3.2 cluvfy or runcluvfy.bat:
----------------------------

Once you have installed Oracle Clusterware, you can use CVU by entering cluvfy
commands on the command line.
To use CVU before you install Oracle Clusterware, you must run the commands using
a command file available
on the Oracle Clusterware installation media. Use the following syntax to run a
CVU command run from the
installation media, where media is the location of the Oracle Clusterware
installation media and options
is a list of one or more CVU command options:

media\clusterware\cluvfy\runcluvfy.bat options

The following code example is of a CVU help command, run from a staged copy of the
Oracle Clusterware
directory downloaded from OTN into a directory called stage on your C: drive:

C:\stage\clusterware\cluvfy> runcluvfy.bat comp nodereach -n node1,node2 -verbose

For a quick test, you can run the following CVU command that you would normally
use after you have completed
the basic hardware and software configuration:

prompt> media\clusterware\cluvfy\runcluvfy.bat stage -post hwos -n node_list

Use the location of your Oracle Clusterware installation media for the media value
and a list of the nodes,
separated by commas, in your cluster for node_list. Expect to see many errors if
you run this command
before you or your system administrator complete the cluster pre-installation
steps.

On Oracle Real Application Clusters systems, each member node of the cluster must
have user equivalency
for the Administrative privileges account that installs the database. This means
that the administrative
privileges user account and password must be the same on all nodes.

- Checking the Hardware and Operating System Setup with CVU


You can use two different CVU commands to check your hardware and operating system
configuration.
The first is a general check of the configuration, and the second specifically
checks for the components required
to install Oracle Clusterware.

The syntax of the more general CVU command is:

cluvfy stage -post hwos -n node_list [-verbose]

where node_list is the names of the nodes in your cluster, separated by commas.
However, because you have
not yet installed Oracle Clusterware, you must execute the CVU command from the
installation media using a command
like the one following. In this example, the command checks the hardware and
operating system of a two-node
cluster with nodes named node1 and node2, using a staged copy of the installation
media in a directory called
stage on the C: drive:

C:\stage\clusterware\cluvfy> runcluvfy.bat stage -post hwos -n node1,node2 -verbose

You can omit the -verbose keyword if you do not wish to see detailed results
listed as CVU performs
each individual test.

The following example is a command, without the -verbose keyword, to check for the
readiness of the cluster
for installing Oracle Clusterware:

C:\stage\clusterware\cluvfy> runcluvfy.bat comp sys -n node1,node2 -p crs

- Checking the Network Setup


Enter a command using the following syntax to verify node connectivity between all
of the nodes
for which your cluster is configured:

cluvfy comp nodecon -n node_list [-verbose]

- Verifying Cluster Privileges


Before running Oracle Universal Installer, from the node where you intend to run
the Installer,
verify that you have administrative privileges on the other nodes. To do this,
enter the following command
for each node that is a part of the cluster:

net use \\node_name\C$

where node_name is the node name. If your installation will access drives in
addition to the C: drive, repeat
this command for every node in the cluster, substituting the drive letter for each
drive you plan to use.

For the installation to be successful, you must use the same user name and
password on each node in a cluster
or use a domain user name. If you use a domain user name, then log on under a
domain with a user name and password
to which you have explicitly granted local administrative privileges on all nodes.

3.3 Shared disk considerations:
-------------------------------

Preliminary Shared Disk Preparation


Complete the following steps to prepare shared disks for storage:
-- Disabling Write Caching
You must disable write caching on all disks that will be used to share data
between nodes in your cluster.
To disable write caching, perform these steps:

Click Start, then click Settings, then Control Panel, then Administrative Tools,
then Computer Management,
then Device Manager, and then Disk drives
Expand the Disk drives and double-click the first drive listed
Under the Disk Properties tab for the selected drive, uncheck the option that
enables the write cache
Double-click each of the other drives listed in the Disk drives hive and disable
the write cache as described
in the previous step

Caution:

Any disks that you use to store files, including database files, that will be
shared between nodes,
must have write caching disabled.

-- Enabling Automounting for Windows 2003


If you are using Windows 2003, then you must enable disk automounting, depending
on the Oracle products
you are installing and on other conditions.

You must enable automounting when using:

Raw partitions for Oracle Real Application Clusters (RAC)


Cluster file system for Oracle Real Application Clusters
Oracle Clusterware
Raw partitions for a single-node database installation
Logical drives for Automatic Storage Management (ASM)

To enable automounting:

Enter the following commands at a command prompt:

c:\> diskpart
DISKPART> automount enable
Automatic mounting of new volumes enabled.

Type exit to end the diskpart session

Repeat steps 1 and 2 for each node in the cluster.

3.4 Reviewing Storage Options for Oracle Clusterware, Database, and Recovery
Files:
----------------------------------------------------------------------------------
-

This section describes supported options for storing Oracle Clusterware files,
Oracle Database software,
and database files.

-- Overview of Oracle Clusterware Storage Options


Note that Oracle Clusterware files include the Oracle Cluster Registry (OCR) and
the Oracle Clusterware voting disk.

There are two ways to store Oracle Clusterware files:

1. Oracle Cluster File System (OCFS): The cluster file system Oracle provides for
the Windows and Linux communities.
If you intend to store Oracle Clusterware files on OCFS, then you must ensure that
OCFS volume sizes
are at least 500 MB each.

2. Raw storage: Raw logical volumes or raw partitions are created and managed by
Microsoft Windows
disk management tools or by tools provided by third party vendors.

Note that you must provide disk space for one mirrored Oracle Cluster Registry
(OCR) file,
and two mirrored voting disk files.

-- Overview of Oracle Database and Recovery File Options

There are three ways to store Oracle Database and recovery files on shared disks:

1. Automatic Storage Management (database files only): Automatic Storage


Management (ASM) is an integrated,
high-performance database file system and disk manager for Oracle files. Because
ASM requires an
Oracle Database instance, it cannot contain Oracle software, but you can use ASM
to manage database
and recovery files.

2. Oracle Cluster File System (OCFS): Note that if you intend to use OCFS for your
database files,
then you should create partitions large enough for the database files when you
create partitions
for Oracle Clusterware

Note:

If you want to have a shared Oracle home directory for all nodes, then you must
use OCFS.

3. Raw storage: Note that you cannot use raw storage to store Oracle database
recovery files.

The storage option that you choose for recovery files can be the same as or
different to the option
you choose for the database files.

Storage Option                Oracle Clusterware  Database  Recovery area
----------------------------  ------------------  --------  -------------
Automatic Storage Management  No                  Yes       Yes
Cluster file system (OCFS)    Yes                 Yes       Yes
Shared raw storage            Yes                 Yes       No

-- Checking for Available Shared Storage with CVU
To check for all shared file systems available across all nodes on the cluster,
use the following CVU command:

cluvfy comp ssa -n node_list

Remember to use the full path name and the runcluvfy.bat command on the
installation media and include
the list of nodes in your cluster, separated by commas, for the node_list. The
following example is for
a system with two nodes, node1 and node2, and the installation media on drive F:

F:\clusterware\cluvfy> runcluvfy.bat comp ssa -n node1,node2

If you want to check the shared accessibility of a specific shared storage type to
specific nodes
in your cluster, then use the following command syntax:

cluvfy comp ssa -n node_list -s storageID_list

In the preceding syntax, the variable node_list is the list of nodes you want to
check, separated by commas,
and the variable storageID_list is the list of storage device IDs for the storage
devices managed by the
file system type that you want to check.

=====================================
Note 4: Installation on Redhat Linux
=====================================

4.2 Prepare your nodes:
-----------------------

4.2.1 Sketch of a 2-node Linux cluster

192.168.2.0
------------------------------------------ public network
| |
| |
------------ -------------
|InstanceA |Private network |InstanceB |
| |Ethernet | |
| |--------------------| |
| |192.168.1.0 | |
| | | |
| |____________ | |
| | ----- -|--- | |
| |--|PWR| |PWR|----| |
| | ----- ----- | |
| | |_______________| |
| | | |
------------ -------------
| SCSI bus or Fible Channel |
------------------ --------------
Interconnect | |
| |
Fig 4.1 -----------
|Shared | - has Single DB on: ASM or OCFS or RAW
|Disk   | - has OCR and Voting disk on: OCFS or RAW (not ASM)
|Storage| - has Recovery area on: ASM or OCFS (not RAW)
-----------

4.2.2 Storage Options

Storage                       Oracle Clusterware  Database  Recovery area
----------------------------  ------------------  --------  -------------
Automatic Storage Management  No                  Yes       Yes
Cluster file system (OCFS)    Yes                 Yes       Yes
Shared raw storage            Yes                 Yes       No

In the following, we will do an example installation on 3 nodes.

4.2.3 Install Redhat on all nodes with all options.

4.2.4 create oracle user and groups dba, oinstall on all nodes.
Make sure they all have the same UID and GUI.

4.2.5 Make sure the user oracle has an appropriate .profile or .bash_profile

4.2.6 Every node needs a private network connection and a public network
connection (at least two network cards).

4.2.7 Linux kernel parameters:

Most out-of-the-box kernel parameters (of RHEL 3, 4, 5) are set correctly for
Oracle, except a few.

You should have the following minimal configuration:

net.ipv4.ip_local_port_range 1024 65000
kernel.sem 250 32000 100 128
kernel.shmmni 4096
kernel.shmall 2097152
kernel.shmmax 2147483648
fs.file-max 65536

You can check the most important parameters using the following command:

# /sbin/sysctl -a | egrep 'sem|shm|file-max|ip_local'


net.ipv4.ip_local_port_range = 1024 65000
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
fs.file-max = 65536

If some value should be changed, you can change the "/etc/sysctl.conf" file and
run the "/sbin/sysctl -p" command
to change the value immediately.
Every time the system boots, the init program runs the /etc/rc.d/rc.sysinit
script. This script contains
a command to execute sysctl using /etc/sysctl.conf to dictate the values passed to
the kernel.
Any values added to /etc/sysctl.conf will take effect each time the system boots.

4.2.8 Make sure ssh and scp are working on all nodes without asking for a
password. Use ssh-keygen to arrange that, for example as sketched below.
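
One possible sequence as the oracle user (a sketch; generate a key on each node
and merge all public keys into authorized_keys on every node):

% ssh-keygen -t rsa                                  # accept defaults
% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
% scp ~/.ssh/authorized_keys oc2:.ssh/
% ssh oc2 date                                       # should not prompt for a password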

4.2.9 Example "/etc/host" on the nodes:

Suppose you have the following 3 hosts, with their associated public and private
names:

public private
oc1 poc1
oc2 poc2
oc3 poc3

Then this could be a valid host file on the nodes:

127.0.0.1 localhost.localdomain localhost

192.168.2.99 rhes30
192.168.2.166 oltp
192.168.2.167 mw

192.168.2.101 oc1 #public1


192.168.1.101 poc1 #private1
192.168.2.176 voc1 #virtual1

192.168.2.102 oc2 #public2


192.168.1.102 poc2 #private2
192.168.2.177 voc2 #virtual2

192.168.2.103 oc3 #public3


192.168.1.103 poc3 #private3
192.168.2.178 voc3 #virtual3

4.2.10 Example disk devices

On all nodes, the shared disk devices should be accessible through the same
device names.

Raw Device Name Physical Device Name Purpose
/dev/raw/raw1 /dev/sda1 ASM Disk 1: +DATA1
/dev/raw/raw2 /dev/sdb1 ASM Disk 1: +DATA1
/dev/raw/raw3 /dev/sdc1 ASM Disk 2: +RECOV1
/dev/raw/raw4 /dev/sdd1 ASM Disk 2: +RECOV1
/dev/raw/raw5 /dev/sde1 OCR Disk (on RAW device)
/dev/raw/raw6 /dev/sdf1 Voting Disk (on RAW device)
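
On RHEL 3/4, these bindings can be made persistent in /etc/sysconfig/rawdevices
(a sketch; device names as in the table above):

# cat >> /etc/sysconfig/rawdevices <<EOF
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sdb1
/dev/raw/raw3 /dev/sdc1
/dev/raw/raw4 /dev/sdd1
/dev/raw/raw5 /dev/sde1
/dev/raw/raw6 /dev/sdf1
EOF
# service rawdevices restart
# chown oracle:oinstall /dev/raw/raw[1-6]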

4.3 CRS installation:
---------------------

4.3.1 First install CRS in its own home directory

First install CRS in its own home directory, e.g. CRS10gHome, apart from the
Oracle home dir.

As Oracle user:

./runInstaller

---------------------------------------------------
| | Screen 1
|Specify File LOcations |
| |
|Source |
|Path: /install/crs10g/Disk1/stage/products.xml |
| |
|Destination |
|Name: CRS10gHome |
|Path: /u01/app/oracle/product/10.1.0/CRS10gHome |
| |
---------------------------------------------------

---------------------------------------------------
| | Screen 2
|Cluster Configuration |
| |
|Cluster Name: lec1 |
| |
| Public Node Name Private Node Name |
| --------------------------------------------- |
| |oc1 | poc1 | |
| |-------------------------------------------- |
| |oc2 | poc2 | |
| |-------------------------------------------- |
| |oc3 | poc3 | |
| |-------------------------------------------- |
---------------------------------------------------

In the next screen, you specify which of your networks is to be used as
the public interface (to connect to the public network) and which will be used
for the private interconnect to support cache fusion and the cluster heartbeat.

---------------------------------------------------
| | Screen 3
|Private Interconnect Enforcement |
| |
| |
| |
| Interface Name Subnet Interface type |
| --------------------------------------------- |
| |eth0 |192.168.2.0 |Public | |
| |-------------------------------------------- |
| |eth1 |192.168.1.0 |Private | |
| |-------------------------------------------- |
| |
---------------------------------------------------

In the next screen, you specify /dev/raw/raw5 as the raw disk for the Oracle
Cluster Registry.

---------------------------------------------------
| | Screen 4
|Oracle Cluster Registry |
| |
|Specify OCR Location: /dev/raw/raw5 |
| |
---------------------------------------------------

In a similar fashion you specify the location of the Voting Disk.

---------------------------------------------------
| | Screen 5
|Voting Disk |
| |
|Specify Voting Disk: /dev/raw/raw6 |
| |
---------------------------------------------------

You now have to execute the /u01/app/oracle/oraInventory/orainstRoot.sh script
on all cluster nodes as the root user.

After this, you can continue with the other window, and see an "Install Summary"
screen.
Now you click "Install" and the installation begins.
Apart from the node you work on, the software will also be copied to the other
nodes as well.

After the installation is complete, you are once again prompted to run a script as
root
on each node of the Cluster.
This is the script "/u01/app/oracle/product/10.1.0/CRS10gHome/root.sh".

-- The olsnodes command.

After finishing the CRS installation, you can verify that the installation
completed successfully
by running on any node the following command:

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# olsnodes -n
oc1 1
oc2 2
oc3 3

4.4 Database software installation:
-----------------------------------

You can install the database software into the same directory in each node.
With OCFS2, you might do one install in a common shared directory for all nodes.

Because CRS is already running, the OUI detects that, and because it is
cluster-aware, it provides you with the option to install a clustered
implementation.

You start the installation by running ./runInstaller as the oracle user on one
node.
For most part, it looks the same as a single-instance installation.

After the file location screen, that is source and destination, you will see this
screen:

---------------------------------------------------
| |
|Specify Hardware Cluster Installation Mode |
| |
| o Cluster installation mode |
| |
| Node name |
| --------------------------------------------- |
| | [] oc1 | |
| | [] oc2 | |
| | [] oc3 | |
| --------------------------------------------- |
| |
| o Local installation (non cluster) |
| |
|-------------------------------------------------|

Most of the time, you will do a "software only" installation, and create the
database later
with the DBCA.

For the first node only, after some time, the Virtual IP Configuration Assistant,
VIPCA, will start.
Here you can configure the Virtual IP addresses you will use for application
failover
and the Enterprise Manager Agent.
Here you will select the Virtual IP's for all nodes.
VIPCA only needs to run once per Cluster.

4.5 Creating the RAC database with DBCA:
----------------------------------------

Launching the DBCA for installing a RAC database is much the same as launching
DBCA for a single instance.
If DBCA detects cluster software installed, it gives you the option to install a
RAC database
or a single instance.

as oracle user:

% dbca &

---------------------------------------------------
| |
|Welcome to the database configuration assistant |
| |
| |
| |
| o Oracle Real Application Cluster database |
| |
| o Oracle single instance database |
| |
|-------------------------------------------------|

After selecting RAC, the next screen gives you the option to select nodes:

---------------------------------------------------
| |
|Select the nodes on which you want to create |
|the cluster database. The local node oc1 will |
|always be used whether or not it is selected. |
| |
| Node name |
| --------------------------------------------- |
| | [] oc1 | |
| | [] oc2 | |
| | [] oc3 | |
| --------------------------------------------- |
| |
| |
|-------------------------------------------------|

In the next screens, you can choose the type of database (oltp, dw etc..), and all
other items, just like a single instance install.
At a certain point, you can choose to use ASM diskgroups, flash recovery area, etc.

===========================================
Note 5. RAC tools and utilities.
===========================================

Example 1: removing and adding a failed node
--------------------------------------------

Suppose, using the above example, that instance rac3 on node oc3 fails, and that
you need to repair the node (e.g. a hard disk crash).

-- Remove the instance:

% srvctl remove instance -d rac -i rac3
Remove instance rac3 for the database rac (y/n)? y

-- Remove the node from the cluster:

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# ./olsnodes -n
oc1 1
oc2 2
oc3 3
# cd ../install
# ./rootdeletenode.sh oc3,3
# cd ../bin
# ./olsnodes -n
oc1 1
oc2 2
#

Suppose that you have repaired host oc3. We now want to add it back into the cluster.
Host oc3 has the OS newly installed, and its /etc/hosts file is just like it is on
the other nodes.

-- Add the node at the clusterware layer:

From oc1 or oc2, go to the $CRS_Home/oui/bin directory, and run

# ./addNode.sh

A graphical screen pops up, and you are able to add oc3 to the cluster.
All CRS files are copied to the new node.

To start the services on the new node, you are then prompted to run
"rootaddnode.sh" on the active node
and "root.sh" on the new node.

# ./rootaddnode.sh

# ssh oc3
# cd /u01/app/oracle/product/10.1.0/CRS10gHome
# ./root.sh

-- Install the Oracle software on the new node:

Example 2: showing all nodes from a node


----------------------------------------

# lsnodes -v

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# ./olsnodes -n
oc1 1
oc2 2
oc3 3
Example 3: using srvctl
-----------------------

The Server Control utility SRVCTL is installed on each node by default.


You can use SRVCTL to start and stop the database and instances, manage
configuration information,
and to move or remove instances and services.

Some SRVCTL operations store configuration information in the OCR.

SRVCTL performs other operations, such as starting and stopping instances, by sending
requests to the Oracle Clusterware process CRSD, which then starts or stops the Oracle
Clusterware resources.

srvctl must be run from the $ORACLE_HOME of the RAC you are administering.
The basic format of a srvctl command is

srvctl <command> <target> [options]

where command is one of

enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|
unsetenv|config

and the target, or object, can be a


-database,
-instance,
-service,
-ASM instance, or the
-nodeapps.

-- Example 1: To view help:

% srvctl -h
% srvctl command -h

-- Example 2: To see the SRVCTL version number, enter

% srvctl -V

-- Example 3. Bring up the MYSID1 instance of the MYSID database.

% srvctl start instance -d MYSID -i MYSID1

-- Example 4. Stop the MYSID database: all its instances and all its services, on
all nodes.

% srvctl stop database -d MYSID

The following command mounts all of the non-running instances, using the default
connection information:

% srvctl start database -d orcl -o mount

-- Example 5. Stop the nodeapps on the myserver node. NB: Instances and services
also stop.
% srvctl stop nodeapps -n myserver

-- Example 6. Add the MYSID3 instance, which runs on the myserver node, to the
MYSID clustered database.

% srvctl add instance -d MYSID -i MYSID3 -n myserver

-- Example 7. Add a new node, the mynewserver node, to a cluster.

% srvctl add nodeapps -n mynewserver -o $ORACLE_HOME -A 149.181.201.1/255.255.255.0/eth1

(The -A flag precedes an address specification.)

-- Example 8. To change the VIP (virtual IP) on a RAC node, use the command

% srvctl modify nodeapps -A new_address

-- Example 9. Status of components

. Find out whether the nodeapps on mynewserver are up.

% srvctl status nodeapps -n mynewserver


VIP is running on node: mynewserver
GSD is running on node: mynewserver
Listener is not running on node: mynewserver
ONS daemon is running on node: mynewserver

. Find out whether the ASM is running:

% srvctl status asm -n docrac1


ASM instance +ASM1 is running on node docrac1.

. Find status of cluster database

% srvctl status database -d EOPP


Instance EOPP1 is running on node dbq0201
Instance EOPP2 is running on node dbq0102

% srvctl config database -d EOPP


dbq0201 EOPP1 /ora/product/10.2.0/db
dbq0102 EOPP2 /ora/product/10.2.0/db

% srvctl config service -d EOPP


opp.et.supp PREF: EOPP1 AVAIL: EOPP2
opp.et.grid PREF: EOPP1 AVAIL: EOPP2

-- Example 10. The following command and output show the expected configuration
for a three-node database called ORCL.

% srvctl config database -d ORCL

server01 ORCL1 /u01/app/oracle/product/10.1.0/db_1
server02 ORCL2 /u01/app/oracle/product/10.1.0/db_1
server03 ORCL3 /u01/app/oracle/product/10.1.0/db_1

-- Example 11. Disable the ASM instance on myserver for maintenance.

% srvctl disable asm -n myserver

-- Example 12. Debugging srvctl

Debugging srvctl in 10g couldn't be easier. Simply set the SRVM_TRACE environment
variable.

% export SRVM_TRACE=true
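
A minimal illustrative session (the database name orcl is just an example; the exact
trace output and its destination can vary by version):

% export SRVM_TRACE=true
% srvctl status database -d orcl      (now prints verbose Java trace output)
% unset SRVM_TRACE                    (turn tracing off again)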

-- Example 13. Question (10g RAC)

Q: How do you add a listener to the nodeapps using the srvctl command, and can it
be added using srvctl at all?

A: Just edit listener.ora on all concerned nodes and add entries (the usual way);
srvctl will automatically make use of it.
For example

% srvctl start database -d SAMPLE

will start database SAMPLE and its associated listener LSNR_SAMPLE.

-- Example 14. Adding services.

% srvctl add database -d ORCL -o /u01/app/oracle/product/10.1.0/db_1
% srvctl add instance -d ORCL -i ORCL1 -n server01
% srvctl add instance -d ORCL -i ORCL2 -n server02
% srvctl add instance -d ORCL -i ORCL3 -n server03

-- Example 15. Administering ASM Instances with SRVCTL in RAC

You can use SRVCTL to add, remove, enable, and disable an ASM instance as
described in the following procedure:

Use the following to add configuration information about an existing ASM instance:
% srvctl add asm -n node_name -i asm_instance_name -o oracle_home

Use the following to remove an ASM instance:

% srvctl remove asm -n node_name [-i asm_instance_name]

-- Example 16. Stop multiple instances.

The following command provides its own connection information to shut down the two
instances orcl3 and orcl4 using the IMMEDIATE option:

% srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate -c "sysback/oracle as sysoper"

-- Example 17. Showing policies.

Clusterware can automatically start your RAC database when the system restarts.
You can use Automatic or Manual "policies" to control whether Clusterware restarts RAC.

To display the current policy:

% srvctl config database -d database_name -a

To change to another policy:

% srvctl modify database -d database_name -y policy_name
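
For example, assuming a database named orcl (in 10gR2 the policy names are AUTOMATIC
and MANUAL):

% srvctl config database -d orcl -a
% srvctl modify database -d orcl -y MANUAL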

-- Example 18.

% srvctl start service -d DITOB

-- More examples

% srvctl remove instance -d rac -i rac3
% srvctl disable instance -d orcl -i orcl2
% srvctl enable instance -d orcl -i orcl2

Example 4: crsctl
-----------------

Use CRSCTL to Control Your Clusterware

Oracle Clusterware enables servers in an Oracle Real Application Clusters database
to coordinate simultaneous workload on the same database files. The crsctl command
provides administrators with many useful capabilities. For example, with crsctl you
can check Clusterware health, disable/enable Oracle Clusterware startup on boot,
find information on the voting disk, check the Clusterware version, and more.

1. Do you want to check the health of the Clusterware?


# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

2. Do you want to reboot a node for maintenance without Clusterware coming up on boot?

## Disable clusterware on machine2 bootup:
# crsctl disable crs
## Stop the database, then stop clusterware processes:
# srvctl stop instance -d db -i db2
# crsctl stop crs
# reboot

## Enable clusterware on machine bootup:
# crsctl enable crs
# crsctl start crs
# srvctl start instance -d db -i db2

3. Do you wonder where your voting disk is?


# crsctl query css votedisk
0. 0 /dev/raw/raw2

4. Do you need to find out what clusterware version is running on a server?


# crsctl query crs softwareversion
CRS software version on node [db2] is [10.2.0.2.0]
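
You can also query the version the cluster is actively running (the output looks
similar to the softwareversion output):

# crsctl query crs activeversion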

5. Adding and Removing Voting Disks

You can dynamically add and remove voting disks after installing Oracle RAC. Do
this using the following
commands where path is the fully qualified path for the additional voting disk.
Run the following command
as the root user to add a voting disk:

# crsctl add css votedisk path

Run the following command as the root user to remove a voting disk:

# crsctl delete css votedisk path
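
A concrete sketch, assuming a spare raw device /dev/raw/raw7 has been prepared (the
device name is hypothetical). Note that on 10.2 you may have to shut down CRS and add
the -force flag for these commands to succeed:

# crsctl add css votedisk /dev/raw/raw7
# crsctl query css votedisk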

Example 5: cluvfy
-----------------

The Cluster Verification Utility (CVU) pre- or post-validates an Oracle Clusterware
environment or configuration.
We found the CVU to be very useful for checking a cluster server environment for RAC.
The CVU can check shared storage, interconnects, server systems, and user permissions.
The Universal Installer runs the verification utility at the end of the Clusterware
install. The utility can also be run from the command line with parameters and options
to validate components.

For example, a script that verifies a cluster using cluvfy is named runcluvfy.sh and
is located in the /clusterware/cluvfy directory in the installation area. This script
unpacks the utility, sets environment variables, and executes the verification command.

This command verifies that the hosts atlanta1, atlanta2 and atlanta3 are ready for
a clustered database
install of release 2.

./runcluvfy.sh stage -pre dbinst -n atlanta1,atlanta2,atlanta3 -r 10gR2 -osdba dba -verbose

The results of the command above check user and group equivalence across machines,
connectivity,
interface settings, system requirements like memory, disk space and kernel
settings and versions,
required Linux package existence and so on. Any problems are reported as errors,
all successful
checks are marked as passed.

Many other aspects of the cluster can be verified with this utility for Release 2
or Release 1.

Some more examples:

-- Checking for Available Shared Storage with CVU


To check for all shared file systems available across all nodes on the cluster,
use the following CVU command:

% cluvfy comp ssa -n node_list

Remember to use the full path name and the runcluvfy.bat command on the
installation media and include
the list of nodes in your cluster, separated by commas, for the node_list. The
following example is for
a system with two nodes, node1 and node2, and the installation media on drive F:

% runcluvfy.bat comp ssa -n node1,node2

If you want to check the shared accessibility of a specific shared storage type to
specific nodes
in your cluster, then use the following command syntax:

% cluvfy comp ssa -n node_list -s storageID_list

In the preceding syntax, the variable node_list is the list of nodes you want to
check, separated by commas,
and the variable storageID_list is the list of storage device IDs for the storage
devices managed by the
file system type that you want to check.

=================================
Note 6: Example tnsnames.ora in RAC
=================================

Example 1:
----------

tnsnames.ora File

TEST =
(DESCRIPTION =
(LOAD_BALANCE = ON)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521)))
(CONNECT_DATA =
(SERVICE_NAME = TEST)))

TEST1 =
(DESCRIPTION =
(ADDRESS_LIST =
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521)))
(CONNECT_DATA =
(SERVICE_NAME = TEST)(INSTANCE_NAME = TEST1)))

TEST2 =
(DESCRIPTION =
(ADDRESS_LIST =
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521)))
(CONNECT_DATA =
(SERVICE_NAME = TEST)(INSTANCE_NAME = TEST2)))

EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)))
(CONNECT_DATA =
(SID=PLSExtProc)(PRESENTATION = RO)))

LISTENERS_TEST =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521)))
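
A quick way to test the load-balanced alias from a client (credentials are just an
example) is to connect a few times and see which instance you land on:

% sqlplus system/manager@TEST
SQL> SELECT instance_name FROM v$instance;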

Example 2:
----------

Connect-Time Failover

From the client's end, when your connection fails at one node or service, you can
then do a lookup from your tnsnames.ora file and go on seeking a connection with the
other available node. Take this example of our 4-node VMware ESX 3.x Oracle Linux
servers:

FOKERAC =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = nick01.wolga.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = nick02.wolga.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = brian01.wolga.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = brian02.wolga.com)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = fokerac)
)
)

Here the first address in the list is tried at the client's end. Should the connection
to nick01.wolga.com fail, then the next address, nick02.wolga.com, will be tried.
This phenomenon is called connect-time failover.
You could very well have a 32-node RAC cluster monitoring the galactic system at
NASA and thus have all
those nodes typed in your tnsnames.ora file. Moreover, these entries do not
necessarily have to be part
of the RAC cluster. So it is possible that you are using Streams, Log Shipping or
Advanced Replication
to maintain your HA (High Availability) model. These technologies facilitate
continued processing of the
database by such a HA (High Availability) model in a non-RAC environment. In a RAC
environment we know
(and expect) the data to be the same across all nodes since there is only one
database.

Example 3:
----------

TAF (Transparent Application Failover)


Transparent Application Failover actually refers to a failover that occurs when a
node or instance
is unavailable due to an outage or other reason that prohibits a connection to be
established on that node.
This can be set to on with the following parameter FAILOVER. Setting it to ON will
activate the TAF.
It is turned on by default unless you set it to OFF to disable it. Now, when you
turn it on you have two types
of connections available by the means of the FAILOVER_MODE parameter. The type can
be session, which is default
or SELECT. When the type is SESSION, if the instance fails, then the user is
automatically connected to the next available node without the user's manual
intervention. The SQL statements need to be carried out again
on the next node. However, when you set the TYPE to SELECT, then if you are
connected and are in the middle
of your query, then your query will be restarted after you have been failed over
to the next available node.
Take this example of our tnsnames.ora file, (go to the section beginning with
CONNECT_DATA):

(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = fokerac.wolga.com)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
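
After a failover you can check which sessions actually failed over; the columns below
exist in gv$session:

SELECT inst_id, username, failover_type, failover_method, failed_over
FROM gv$session
WHERE username IS NOT NULL;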

==============================================
Note 7: Notes about Backup and Restore of RAC
==============================================

7.1 Backing up Voting Disk:
---------------------------

Run the following command to back up a voting disk. Perform this operation on every
voting disk as needed, where 'voting_disk_name' is the name of the active voting disk,
and 'backup_file_name' is the name of the file to which you want to back up the
voting disk contents:

# dd if=voting_disk_name of=backup_file_name

When you use the dd command for making backups of the voting disk, the backup can
be performed while
the Cluster Ready Services (CRS) process is active; you do not need to stop the
crsd.bin process
before taking a backup of the voting disk.
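
A concrete sketch, using the voting disk from the installation example above and a
hypothetical backup location:

# dd if=/dev/raw/raw6 of=/u01/backup/votedisk_raw6.bak bs=4k
# ls -l /u01/backup/votedisk_raw6.bak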

-- Adding and Removing Voting Disks


You can dynamically add and remove voting disks after installing Oracle RAC. Do
this using the following
commands where path is the fully qualified path for the additional voting disk.
Run the following command
as the root user to add a voting disk:

# crsctl add css votedisk path

Run the following command as the root user to remove a voting disk:

# crsctl delete css votedisk path

7.2 Recovering Voting Disk:
---------------------------

Run the following command to recover a voting disk, where 'backup_file_name' is the
name of the voting disk backup file, and 'voting_disk_name' is the name of the active
voting disk:

# dd if=backup_file_name of=voting_disk_name

7.3 Backup and Recovery of the OCR:
-----------------------------------

Oracle Clusterware automatically creates OCR backups every 4 hours. At any one
time, Oracle Clusterware
always retains the latest 3 backup copies of the OCR that are 4 hours old, 1 day
old, and 1 week old.

You cannot customize the backup frequencies or the number of files that Oracle
Clusterware retains.
You can use any backup software to copy the automatically generated backup files
at least once daily
to a different device from where the primary OCR file resides. The default
location for generating backups
on Red Hat Linux systems is "CRS_home/cdata/cluster_name" where cluster_name is
the name of your cluster
and CRS_home is the home directory of your Oracle Clusterware installation.

-- Viewing Available OCR Backups


To find the most recent backup of the OCR, on any node in the cluster, use the
following command:
# ocrconfig -showbackup

-- Backing Up the OCR


Because of the importance of OCR information, Oracle recommends that you use the
ocrconfig tool to make copies
of the automatically created backup files at least once a day.

In addition to using the automatically created OCR backup files, you should also
export the OCR contents
to a file before and after making significant configuration changes, such as
adding or deleting nodes
from your environment, modifying Oracle Clusterware resources, or creating a
database.
Exporting the OCR contents to a file lets you restore the OCR if your
configuration changes cause errors.
For example, if you have unresolvable configuration problems, or if you are unable
to restart your cluster database
after such changes, then you can restore your configuration by importing the saved
OCR content
from the valid configuration.

To export the contents of the OCR to a file, use the following command, where
backup_file_name is the name
of the OCR backup file you want to create:

# ocrconfig -export backup_file_name
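
For example, a date-stamped export to a hypothetical backup directory (do this before
and after changes such as adding or deleting nodes):

# ocrconfig -export /u01/backup/ocr_`date +%Y%m%d`.dmp
# ocrconfig -showbackup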

-- Recovering the OCR


This section describes two methods for recovering the OCR. The first method uses
automatically generated
OCR file copies and the second method uses manually created OCR export files.

In the event of a failure, before you attempt to restore the OCR, ensure that the OCR
is unavailable.
Run the following command to check the status of the OCR:

# ocrcheck

If this command does not display the message 'Device/File integrity check
succeeded' for at least one copy
of the OCR, then both the primary OCR and the OCR mirror have failed. You must
restore the OCR from a backup.

-- Restoring the Oracle Cluster Registry from Automatically Generated OCR Backups

When restoring the OCR from automatically generated backups, you first have to
determine which backup file
you will use for the recovery.

To restore the OCR from an automatically generated backup on a Red Hat Linux
system:

Identify the available OCR backups using the ocrconfig command:

# ocrconfig -showbackup

Note:
You must be logged in as the root user to run the ocrconfig command.

Review the contents of the backup using the following ocrdump command, where
file_name is the name
of the OCR backup file:

$ ocrdump -backupfile file_name

As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC
cluster by executing
the following command:

# crsctl stop crs

Repeat this command on each node in your Oracle RAC cluster.

As the root user, restore the OCR by applying an OCR backup file that you
identified in step 1
using the following command, where file_name is the name of the OCR that you want
to restore.
Make sure that the OCR devices that you specify in the OCR configuration exist,
and that these OCR devices
are valid before running this command.

# ocrconfig -restore file_name

As the root user, restart Oracle Clusterware on all the nodes in your cluster by
restarting each node,
or by running the following command:

# crsctl start crs

Repeat this command on each node in your Oracle RAC cluster.

Use the Cluster Verify Utility (CVU) to verify the OCR integrity. Run the
following command,
where the -n all argument retrieves a list of all the cluster nodes that are
configured as part of your cluster:

$ cluvfy comp ocr -n all [-verbose]

-- Recovering the OCR from an OCR Export File


Using the ocrconfig -export command enables you to restore the OCR using the
-import option if your
configuration changes cause errors.

To restore the previous configuration stored in the OCR from an OCR export file:

Place the OCR export file that you created previously with the ocrconfig -export
command in an accessible
directory on disk.

As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC
cluster by executing
the following command:

# crsctl stop crs


Repeat this command on each node in your Oracle RAC cluster.

As the root user, restore the OCR data by importing the contents of the OCR export
file using the
following command, where file_name is the name of the OCR export file:

# ocrconfig -import file_name

As the root user, restart Oracle Clusterware on all the nodes in your cluster by
restarting each node,
or by running the following command:

# crsctl start crs

Repeat this command on each node in your Oracle RAC cluster.

Use the CVU to verify the OCR integrity. Run the following command, where the -n
all argument retrieves
a list of all the cluster nodes that are configured as part of your cluster:

$ cluvfy comp ocr -n all [-verbose]

7.4 RMAN snapshot controlfile:
------------------------------

RMAN> SHOW SNAPSHOT CONTROLFILE NAME;

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'ORACLE_HOME/dbf/scf/snap_prod.cf';

=================================
Note 8: Noticeable items in 10g RAC
=================================

8.1 SPFILE:
-----------

If an initialization parameter applies to all instances, use the *.<parameter>
notation; otherwise, prefix the parameter with the name of the instance.
For example:

*.OPEN_CURSORS=500
prod1.OPEN_CURSORS=1000
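
Other typical RAC-specific entries in an SPFILE follow the same pattern (the instance
names prod1/prod2 are just examples):

*.cluster_database=TRUE
prod1.instance_number=1
prod2.instance_number=2
prod1.thread=1
prod2.thread=2
prod1.undo_tablespace='UNDOTBS1'
prod2.undo_tablespace='UNDOTBS2'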

8.2 Start and stop of RAC:
--------------------------
8.2.1 Stopping RAC:
-------------------

#### NOTE 1: ####

> Stop Oracle Clusterware or Cluster Ready Services Processes


If you are modifying an Oracle Clusterware or Oracle Cluster Ready Services (CRS)
installation,
then shut down the following Oracle Database 10g services.

Note:

You must perform these steps in the order listed.


Shut down any processes in the Oracle home on each node that might be accessing a
database; for example,
shut down Oracle Enterprise Manager Database Control.

Note:

Before you shut down any processes that are monitored by Enterprise Manager Grid
Control, set a blackout in
Grid Control for the processes that you intend to shut down. This is necessary so
that the availability
records for these processes indicate that the shutdown was planned downtime,
rather than an unplanned system outage.
Shut down all Oracle RAC instances on all nodes. To shut down all Oracle RAC
instances for a database,
enter the following command, where db_name is the name of the database:

$ oracle_home/bin/srvctl stop database -d db_name

Shut down all ASM instances on all nodes. To shut down an ASM instance, enter the
following command,
where node is the name of the node where the ASM instance is running:

$ oracle_home/bin/srvctl stop asm -n node

Stop all node applications on all nodes. To stop node applications running on a
node, enter the following command,
where node is the name of the node where the applications are running

$ oracle_home/bin/srvctl stop nodeapps -n node

Log in as the root user, and shut down the Oracle Clusterware or CRS process by
entering the following command
on all nodes:

# CRS_home/bin/crsctl stop crs

#### END NOTE 1 ####

#### NOTE 2: ####

To stop processes in an existing Oracle Real Application Clusters database, where
you want to shut down the entire database, complete the following steps.

-- Shut Down Oracle Real Application Clusters Databases


Shut down any existing Oracle Database instances on each node, with normal or
immediate priority.
If Automatic Storage Management (ASM) is running, then shut down all databases
that use ASM, and then shut down
the ASM instance on each node of the cluster.

-- Stop All Oracle Processes


Stop all listener and other processes running in the Oracle home directories where
you want to modify
the database software.

Note:

If you shut down ASM instances, then you must first shut down all database
instances that use ASM,
even if these databases run from different Oracle homes.

-- Stop Oracle Clusterware or Cluster Ready Services Processes


If you are modifying an Oracle Clusterware or Oracle Cluster Ready Services (CRS)
installation,
then shut down the following Oracle Database 10g services.

Note:

You must perform these steps in the order listed.


Shut down any processes in the Oracle home on each node that might be accessing a
database; for example, shut down
Oracle Enterprise Manager Database Control.

Note:

Before you shut down any processes that are monitored by Enterprise Manager Grid
Control, set a blackout in
Grid Control for the processes that you intend to shut down. This is necessary so
that the availability records
for these processes indicate that the shutdown was planned downtime, rather than
an unplanned system outage.
Shut down all Oracle RAC instances on all nodes. To shut down all Oracle RAC
instances for a database,
enter the following command, where db_name is the name of the database:

$ oracle_home/bin/srvctl stop database -d db_name

Shut down all ASM instances on all nodes. To shut down an ASM instance, enter the
following command,
where node is the name of the node where the ASM instance is running:

$ oracle_home/bin/srvctl stop asm -n node

Stop all node applications on all nodes. To stop node applications running on a
node, enter the following command,
where node is the name of the node where the applications are running
$ oracle_home/bin/srvctl stop nodeapps -n node

Log in as the root user, and shut down the Oracle Clusterware or CRS process by
entering the following command
on all nodes:

# CRS_home/bin/crsctl stop crs

#### END NOTE 2 ####

Notes about Starting up:
------------------------

crsd  : Cluster Ready Services Daemon (CRSD)
ocssd : Oracle Cluster Synchronization Services Daemon (OCSSD), the CSS.
evmd  : Event Manager Daemon (EVMD).
evmlogger

The CRSD manages the HA functionality by starting, stopping, and failing over the
application resources
and maintaining the profiles and current states in the Oracle Cluster Registry
(OCR) whereas the OCSSD
manages the participating nodes in the cluster by using the voting disk. The OCSSD
also protects against
the data corruption potentially caused by "split brain" syndrome by forcing a
machine to reboot.

>Linux:

# cat /etc/inittab | grep crs
h3:35:respawn:/etc/init.d/init.crsd run > /dev/null 2>&1 </dev/null

# cat /etc/inittab | grep evmd
h1:35:respawn:/etc/init.d/init.evmd run > /dev/null 2>&1 </dev/null

# cat /etc/inittab | grep css
h2:35:respawn:/etc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null

/etc/init.d> ls -al *init*
init.crs
init.crsd
init.cssd
init.evmd

# cat /etc/inittab
..
..
h1:35:respawn:/etc/init.d/init.evmd run > /dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run > /dev/null 2>&1 </dev/null

init.crsd -> calls crsd

Correct order for stopping: reverse order of startup. crsd should be shut down before
cssd and evmd; evmd should be shut down before cssd.

init.crs stop:
init.crsd
init.evmd
init.cssd

init.crs start
init.cssd autostart|manualstart

-------------------------------------------
links:
http://dmx0201.nl.eu.abnamro.com:7900/wi
https://dmp0101.nl.eu.abnamro.com:1159/em
-------------------------------------------

============================
35. ORACLE STREAMS AND CDC:
============================

35.1 Data replication, Heterogeneous Services, Gateway, Streams:
================================================================

To connect Oracle to a non-Oracle database, there are a couple of options:

a) Gateways: http://www.oracle.com/gateways/
This is the most complete option: distributed query, distributed transactions -- 100%
functionality. It lets you treat DB2 as if it were an Oracle instance for all intents
and purposes.

b) Generic connectivity. If you have ODBC on the server (the Oracle server) and can
use that to connect to DB2, you can use generic connectivity. Less functional than a).

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4406709207206

c) Lastly, you can get their type 4 (thin) JDBC (all-Java) drivers and load them into
Oracle. Then you can write a Java stored procedure in Oracle that accesses DB2 over
JDBC.
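
A minimal sketch of option b) on 10g, assuming the hsodbc agent, an ODBC DSN for DB2,
and matching listener and init<sid>.ora (HS_FDS_CONNECT_INFO) entries are already
configured; all names below are hypothetical:

-- tnsnames.ora entry on the Oracle server:
db2odbc =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraserver)(PORT = 1521))
    (CONNECT_DATA = (SID = db2odbc))
    (HS = OK)
  )

-- then, in the Oracle database:
CREATE DATABASE LINK db2link CONNECT TO "db2user" IDENTIFIED BY "db2pass" USING 'db2odbc';
SELECT * FROM some_db2_table@db2link;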

35.2 Information on CDC:
========================
Change Data Capture can capture and publish committed change data in either of the
following modes:

-- Synchronous

Triggers on the source database allow change data to be captured immediately, as each
SQL statement that performs a data manipulation language (DML) operation (INSERT,
UPDATE, or DELETE) is made. In this mode, change data is captured as part of the
transaction modifying the source table. Synchronous Change Data Capture is available
with Oracle Standard Edition and Enterprise Edition.

-- Asynchronous

By taking advantage of the data sent to the redo log files, change data is
captured after a SQL statement
that performs a DML operation is committed. In this mode, change data is not
captured as part of the transaction
that is modifying the source table, and therefore has no effect on that
transaction.
Asynchronous Change Data Capture is available with Oracle Enterprise Edition only.

There are three modes of asynchronous Change Data Capture:
HotLog, Distributed HotLog, and AutoLog.

Asynchronous Change Data Capture is built on, and provides a relational interface
to, Oracle Streams.
See Oracle Streams Concepts and Administration for information on Oracle Streams.

- Change tables
With any CDC mode, change tables are involved.
A given change table contains the change data resulting from DML operations performed
on a given source table. A change table consists of two things: the change data itself,
which is stored in a database table; and the system metadata necessary to maintain the
change table, which includes control columns.

The publisher specifies the source columns that are to be included in the change
table. Typically,
for a change table to contain useful data, the publisher needs to include the
primary key column
in the change table along with any other columns of interest to subscribers. For
example, suppose subscribers
are interested in changes that occur to the UNIT_COST and the UNIT_PRICE columns
in the sh.costs table.
If the publisher does not include the PROD_ID column in the change table,
subscribers will know only that
the unit cost and unit price of some products have changed, but will be unable to
determine for which products
these changes have occurred.

There are optional and required control columns. The required control columns are
always included
in a change table; the optional ones are included if specified by the publisher
when creating
the change table. Control columns are managed by Change Data Capture.
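
For instance, once the change table cdcadmin.cdc_demo_ct from the example later in
this section exists, you could peek at its rows and control columns directly (for
illustration only; subscribers should normally use subscriber views instead):

SELECT operation$, cscn$, commit_timestamp$, employee_id, salary
FROM cdcadmin.cdc_demo_ct
ORDER BY cscn$;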

- Interface
Change Data Capture includes the DBMS_CDC_PUBLISH and DBMS_CDC_SUBSCRIBE packages,
which provide
easy-to-use publish and subscribe interfaces.

- Publish and Subscribe Model


Most Change Data Capture systems have one person who captures and publishes change
data;
this person is the publisher. There can be multiple applications or individuals
that access
the change data; these applications and individuals are the subscribers.
Change Data Capture provides PL/SQL packages to accomplish the publish and
subscribe tasks.

-- TASKS:

=> These are the main tasks performed by the publisher:

. Determines the source databases and tables from which the subscribers are
interested in viewing change data,
and the mode (synchronous or one of the asynchronous modes) in which to capture
the change data.

. Uses the Oracle-supplied package, DBMS_CDC_PUBLISH, to set up the system to


capture change data from
the source tables of interest.

. Allows subscribers to have controlled access to the change data in the change
tables by using the
SQL GRANT and REVOKE statements to grant and revoke the SELECT privilege on
change tables for users
and roles. (Keep in mind, however, that subscribers use views, not change
tables directly, to access change data.)

=> These are the main tasks performed by the subscriber:

The subscribers are consumers of the published change data. A subscriber performs
the following tasks:

> Uses the Oracle supplied package, DBMS_CDC_SUBSCRIBE, to:

. Create subscriptions
A subscription controls access to the change data from one or more source
tables of interest
within a single change set. A subscription contains one or more subscriber
views.

A subscriber view is a view that specifies the change data from a specific
publication in a subscription. The subscriber is restricted to seeing change data
that the publisher has published and has granted the subscriber access to use. See
"Subscribing to Change Data" for more information on choosing a method for
specifying a subscriber view.
. Notify Change Data Capture when ready to receive a set of change data

A subscription window defines the time range of rows in a publication that the
subscriber can currently
see in subscriber views. The oldest row in the window is called the low
boundary; the newest row
in the window is called the high boundary. Each subscription has its own
subscription window that applies
to all of its subscriber views.

. Notify Change Data Capture when finished with a set of change data

> Uses SELECT statements to retrieve change data from the subscriber views.

-- Other items:

MODE                   CHANGE SOURCE              SOURCE DATABASE  ASSOCIATED CHANGE SET
                                                  REPRESENTED
-----------------------------------------------------------------------------------------
Synchronous            Predefined SYNC_SOURCE     Local            Predefined SYNC_SET and
                                                                   publisher-defined

Async HotLog           Predefined HOTLOG_SOURCE   Local            Publisher-defined

Async Distr HotLog     Publisher-defined          Remote           Publisher-defined. Change
                                                                   sets must all be on the
                                                                   same staging database

Async AutoLog online   Publisher-defined          Remote           Publisher-defined. There
                                                                   can be only one change set
                                                                   in an AutoLog online
                                                                   change source

Async AutoLog archive  Publisher-defined          Remote           Publisher-defined
-- Views intended for Publisher or Subscriber:

CHANGE_SOURCES          Describes existing change sources.

CHANGE_PROPAGATIONS     Describes the Oracle Streams propagation associated with a given
                        Distributed HotLog change source on the source database. This view
                        is populated on the source database for 10.2 change sources or on
                        the staging database for 9.2 or 10.1 change sources.

CHANGE_PROPAGATION_SETS Describes the Oracle Streams propagation associated with a given
                        Distributed HotLog change set on the staging database. This view
                        is populated on the source database for 10.2 change sources or on
                        the staging database for 9.2 or 10.1 change sources.

CHANGE_SETS             Describes existing change sets.

CHANGE_TABLES           Describes existing change tables.

DBA_SOURCE_TABLES       Describes all published source tables in the database.

DBA_PUBLISHED_COLUMNS   Describes all published columns of source tables in the database.

DBA_SUBSCRIPTIONS       Describes all subscriptions.

DBA_SUBSCRIBED_TABLES   Describes all source tables to which any subscriber has subscribed.

DBA_SUBSCRIBED_COLUMNS  Describes the columns of source tables to which any subscriber
                        has subscribed.

ALL_SOURCE_TABLES       Describes all published source tables accessible to the current user.

USER_SOURCE_TABLES      Describes all published source tables owned by the current user.

ALL_PUBLISHED_COLUMNS   Describes all published columns of source tables accessible to
                        the current user.

USER_PUBLISHED_COLUMNS  Describes all published columns of source tables owned by the
                        current user.

ALL_SUBSCRIPTIONS       Describes all subscriptions accessible to the current user.

USER_SUBSCRIPTIONS      Describes all the subscriptions owned by the current user.

ALL_SUBSCRIBED_TABLES   Describes the source tables to which any subscription accessible
                        to the current user has subscribed.

USER_SUBSCRIBED_TABLES  Describes the source tables to which the current user has subscribed.

ALL_SUBSCRIBED_COLUMNS  Describes the columns of source tables to which any subscription
                        accessible to the current user has subscribed.

USER_SUBSCRIBED_COLUMNS Describes the columns of source tables to which the current user
                        has subscribed.

-- Adjusting Initialization Parameter Values When Oracle Streams Values Change

Asynchronous Change Data Capture uses an Oracle Streams configuration for each
change set.
This Streams configuration consists of a Streams capture process and a Streams
apply process,
with an accompanying queue and queue table. Each Streams configuration uses
additional processes,
parallel execution servers, and memory. For details about the Streams
architecture, see
Oracle Streams Concepts and Administration.

Oracle Streams capture and apply processes each have a parallelism parameter that
is used to improve performance.
When a publisher first creates a change set, its capture parallelism value and
apply parallelism value are each 1.
If desired, a publisher can increase one or both of these values using Streams
interfaces.

If Oracle Streams capture parallelism and apply parallelism values are increased
after change sets
are created, the DBA (or DBAs in the case of the Distributed HotLog mode) must
adjust
initialization parameter values accordingly. How these adjustments are made vary
slightly, depending on the
mode of Change Data Capture being employed, as described in the following
sections.

-- Adjustments for HotLog and AutoLog Change Data Capture

For HotLog and AutoLog change data capture, adjustments to initialization parameters
are made on the staging database.

Examples below demonstrate how to obtain the current capture parallelism and apply
parallelism values
for change set CHICAGO_DAILY. By default, each parallelism value is 1, so the
amount by which a
given parallelism value has been increased is the returned value minus 1.

Example 1 Obtaining the Oracle Streams Capture Parallelism Value for a Change Set

SELECT cp.value FROM DBA_CAPTURE_PARAMETERS cp, CHANGE_SETS cset
WHERE cset.SET_NAME = 'CHICAGO_DAILY'
AND cset.CAPTURE_NAME = cp.CAPTURE_NAME
AND cp.PARAMETER = 'PARALLELISM';

Example 2 Obtaining the Oracle Streams Apply Parallelism Value for a Change Set

SELECT ap.value FROM DBA_APPLY_PARAMETERS ap, CHANGE_SETS cset
WHERE cset.SET_NAME = 'CHICAGO_DAILY'
AND cset.APPLY_NAME = ap.APPLY_NAME
AND ap.parameter = 'PARALLELISM';

The staging database DBA must adjust the staging database initialization
parameters as described
in the following list to accommodate the parallel execution servers and other
processes and memory
required for Change Data Capture:

PARALLEL_MAX_SERVERS

For each change set for which Oracle Streams capture or apply parallelism values
were increased,
increase the value of this parameter by the increased Streams parallelism value.

For example, if the statement in Example 1 returns a value of 2, and the statement
in Example 2 returns
a value of 3, then the staging database DBA should increase the value of the
PARALLEL_MAX_SERVERS
parameter by (2-1) + (3-1), or 3 for the CHICAGO_DAILY change set. If the Streams
capture or apply parallelism
values have increased for other change sets, increases for those change sets must
also be made.

PROCESSES

For each change set for which Oracle Streams capture or apply parallelism values
were changed, increase
the value of this parameter by the sum of increased Streams parallelism values.
See the previous
list item, PARALLEL_MAX_SERVERS, for an example.

STREAMS_POOL_SIZE

For each change set for which Oracle Streams capture or apply parallelism values
were changed,
increase the value of this parameter by
(10MB * (the increased capture parallelism value)) + (1MB * increased apply
parallelism value).

For example, if the statement in Example 1 returns a value of 2, and the statement
in Example 2 returns
a value of 3, then the staging database DBA should increase the value of the
STREAMS_POOL_SIZE parameter by
(10 MB * (2-1) + 1MB * (3-1)), or 12MB for the CHICAGO_DAILY change set. If the
Oracle Streams capture or
apply parallelism values have increased for other change sets, increases for those
change sets must also be made.

See Oracle Streams Concepts and Administration for more information on Streams
capture parallelism
and apply parallelism values. See Oracle Database Reference for more information
about
database initialization parameters.

Note 3: Oracle 10.2 Sync CDC Example:
=====================================

CDC Mode       : Synchronous CDC
Source table   : hr.cdc_demo
Change table   : cdcadmin.cdc_demo_ct
History table  : hr.salary_history (change data applied by the handler
                 procedure update_salary_history)

conn / as sysdba

-- *NIX only
define _editor=vi

-- validate database parameters


archive log list -- Archive Mode
show parameter aq_tm_processes -- min 3
show parameter compatible -- must be 10.1.0 or above
show parameter global_names -- must be TRUE
show parameter job_queue_processes -- min 2 recommended 4-6
show parameter open_links -- not less than the default 4
show parameter shared_pool_size -- must be 0 or at least 200MB
show parameter streams_pool_size -- min. 480MB (10MB/capture 1MB/apply)
show parameter undo_retention -- min. 3600 (1 hr.) (900)

-- Examples of altering initialization parameters


alter system set aq_tm_processes=3 scope=BOTH;
alter system set compatible='10.2.0.1.0' scope=SPFILE;
alter system set global_names=TRUE scope=BOTH;
alter system set job_queue_processes=6 scope=BOTH;
alter system set open_links=4 scope=SPFILE;
alter system set streams_pool_size=200M scope=BOTH; -- very slow if making smaller
alter system set undo_retention=3600 scope=BOTH;

/*
JOB_QUEUE_PROCESSES (current value) + 2
PARALLEL_MAX_SERVERS (current value) + (5 * (the number of change sets planned))
PROCESSES (current value) + (7 * (the number of change sets planned))
SESSIONS (current value) + (2 * (the number of change sets planned))
*/
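
A worked example of the rules in the comment block above, with one change set planned
and hypothetical current values job_queue_processes=4, parallel_max_servers=5,
processes=150 and sessions=170:

alter system set job_queue_processes=6 scope=BOTH;     -- 4 + 2
alter system set parallel_max_servers=10 scope=BOTH;   -- 5 + (5 * 1)
alter system set processes=157 scope=SPFILE;           -- 150 + (7 * 1)
alter system set sessions=172 scope=SPFILE;            -- 170 + (2 * 1)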

-- Retest parameter after modification

shutdown immediate;

startup mount;

alter database archivelog;

-- important
alter database force logging;

-- one option among several


alter database add supplemental log data;

alter database open;

-- validate archivelogging
archive log list

alter system switch logfile;

archive log list

-- validate force and supplemental logging


SELECT supplemental_log_data_min, supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk, supplemental_log_data_all, force_logging
FROM gv$database;

SELECT force_logging
FROM dba_tablespaces;

desc dba_hist_streams_apply_sum

SELECT apply_name, reader_total_messages_dequeued, reader_lag,
       server_total_messages_applied
FROM dba_hist_streams_apply_sum;

-- examine CDC related data dictionary objects


SELECT table_name
FROM dba_tables
WHERE owner = 'SYS'
AND table_name LIKE 'CDC%$';

desc cdc_system$

SELECT * FROM cdc_system$;

Setup As SYS - Create Streams Administrators


conn / as sysdba

SELECT *
FROM dba_streams_administrator;

CREATE USER cdcadmin
IDENTIFIED BY cdcadmin
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA 0 ON system
QUOTA 10M ON sysaux
QUOTA 20M ON users;

-- system privs
GRANT create session TO cdcadmin;
GRANT create table TO cdcadmin;
GRANT create sequence TO cdcadmin;
GRANT create procedure TO cdcadmin;
GRANT dba TO cdcadmin;

-- role privs
GRANT execute_catalog_role TO cdcadmin;
GRANT select_catalog_role TO cdcadmin;

-- object privileges
GRANT execute ON dbms_cdc_publish TO cdcadmin;
GRANT execute ON dbms_cdc_subscribe TO cdcadmin; -- do also to HR

-- streams specific priv


execute dbms_streams_auth.grant_admin_privilege('CDCADMIN');

SELECT account_status, created
FROM dba_users
WHERE username = 'CDCADMIN';

SELECT *
FROM dba_sys_privs
WHERE grantee = 'CDCADMIN';

SELECT username
FROM dba_users u, streams$_privileged_user s
WHERE u.user_id = s.user#;

SELECT *
FROM dba_streams_administrator;

Prepare Schema Tables for CDC Replication


conn / as sysdba

alter user hr account unlock identified by hr;

connect hr/hr

desc employees

SELECT *
FROM employees;

-- create CDC demo table


CREATE TABLE cdc_demo AS
SELECT * FROM employees;

ALTER TABLE cdc_demo
ADD CONSTRAINT pk_cdc_demo
PRIMARY KEY (employee_id)
USING INDEX
PCTFREE 0;

-- a second way to implement supplemental logging


ALTER TABLE cdc_demo
ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- table to track salary history changes originating in cdc_demo


--
CREATE TABLE salary_history (
employee_id NUMBER(6),
first_name VARCHAR2(20),
last_name VARCHAR2(25),
old_salary NUMBER(8,2),
new_salary NUMBER(8,2),
pct_change NUMBER(4,2),
action_date DATE);

SELECT table_name
FROM user_tables;

Instantiate Source Table


conn cdcadmin/cdcadmin

desc dba_capture_prepared_tables

SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
FROM dba_capture_prepared_tables;

dbms_capture_adm.prepare_table_instantiation(
table_name IN VARCHAR2,
supplemental_logging IN VARCHAR2 DEFAULT 'keys');

Note: This procedure performs the synchronization necessary for instantiating the
table at another database.
This procedure records the lowest SCN of the table for instantiation. SCNs
subsequent to the lowest SCN for an object
can be used for instantiating the object.

exec dbms_capture_adm.prepare_table_instantiation('HR.CDC_DEMO');

SELECT table_name, scn, supplemental_log_data_pk PK, supplemental_log_data_ui UI,
       supplemental_log_data_fk FK, supplemental_log_data_all "ALL"
FROM dba_capture_prepared_tables;

Create Synchronous Change Set


conn cdcadmin/cdcadmin

col object_name format a30

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

dbms_cdc_publish.create_change_set(
change_set_name IN VARCHAR2,
description IN VARCHAR2 DEFAULT NULL,
change_source_name IN VARCHAR2,
stop_on_ddl IN CHAR DEFAULT 'N',
begin_date IN DATE DEFAULT NULL,
end_date IN DATE DEFAULT NULL);

-- this may take a minute or two
exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'Synchronous Demo Set', 'SYNC_SOURCE');

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

conn / as sysdba

desc cdc_change_sets$

set linesize 121


col set_name format a20
col capture_name format a20
col queue_name format a20
col queue_table_name format a20

SELECT set_name, capture_name, queue_name, queue_table_name
FROM cdc_change_sets$;

SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
FROM change_sets;

Create Change Table


conn cdcadmin/cdcadmin

dbms_cdc_publish.create_change_table(
owner IN VARCHAR2,
change_table_name IN VARCHAR2,
change_set_name IN VARCHAR2,
source_schema IN VARCHAR2,
source_table IN VARCHAR2,
column_type_list IN VARCHAR2,
capture_values IN VARCHAR2, -- BOTH, NEW, OLD
rs_id IN CHAR,
row_id IN CHAR,
user_id IN CHAR,
timestamp IN CHAR,
object_id IN CHAR,
source_colmap IN CHAR,
target_colmap IN CHAR,
options_string IN VARCHAR2);

BEGIN
dbms_cdc_publish.create_change_table('CDCADMIN', 'CDC_DEMO_CT', 'CDC_DEMO_SET',
'HR', 'CDC_DEMO',
'EMPLOYEE_ID NUMBER(6), FIRST_NAME VARCHAR2(20), LAST_NAME VARCHAR2(25), SALARY
NUMBER',
'BOTH', 'Y', 'Y', 'Y', 'N', 'N', 'Y', 'Y', ' TABLESPACE USERS pctfree 0 pctused
99');
END;
/

GRANT select ON cdc_demo_ct TO hr;

conn / as sysdba

SELECT set_name, change_source_name, queue_name, queue_table_name
FROM cdc_change_sets$;

desc cdc_change_tables$

SELECT change_set_name, source_schema_name, source_table_name
FROM cdc_change_tables$;

conn cdcadmin/cdcadmin

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

col high_value format a15

SELECT table_name, composite, partition_name, high_value
FROM user_tab_partitions;

Create Subscription
conn hr/hr

dbms_cdc_subscribe.create_subscription(
change_set_name IN VARCHAR2,
description IN VARCHAR2,
subscription_name IN VARCHAR2);

exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_SET', 'Sync Capture Demo Set', 'CDC_DEMO_SUB');

conn / as sysdba

set linesize 121


col description format a30
col subscription_name format a20
col username format a10

SELECT subscription_name, handle, set_name, username, earliest_scn, description
FROM cdc_subscribers$;

Subscribe to and Activate Subscription


conn hr/hr

dbms_cdc_subscribe.subscribe(
subscription_name IN VARCHAR2,
source_schema IN VARCHAR2,
source_table IN VARCHAR2,
column_list IN VARCHAR2,
subscriber_view IN VARCHAR2);

BEGIN
dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'HR', 'CDC_DEMO',
'EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY', 'CDC_DEMO_SUB_VIEW');
END;
/

desc user_subscriptions

SELECT set_name, subscription_name, status FROM user_subscriptions;


SELECT set_name, subscription_name, status FROM dba_subscriptions;

dbms_cdc_subscribe.activate_subscription(
subscription_name IN VARCHAR2);

exec dbms_cdc_subscribe.activate_subscription('CDC_DEMO_SUB');

SELECT set_name, subscription_name, status
FROM user_subscriptions;

Create Procedure To Populate Salary History Table


conn hr/hr

/* Create a stored procedure to populate the new HR.SALARY_HISTORY table.

The procedure extends the subscription window of the CDC_DEMO_SUB subscription
to get the most recent set of source table changes. It uses the subscriber's
CDC_DEMO_SUB_VIEW view to scan the changes and insert them into the SALARY_HISTORY
table. It then purges the subscription window to indicate that it is finished with
that set of changes.
*/
CREATE OR REPLACE PROCEDURE update_salary_history IS
CURSOR cur IS
SELECT *
FROM (
SELECT 'I' opt, cscn$, rsid$, employee_id, first_name, last_name, 0 old_salary,
salary new_salary, commit_timestamp$
FROM cdc_demo_sub_view
WHERE operation$ = 'I '
UNION ALL
SELECT 'D' opt, cscn$, rsid$, employee_id, first_name, last_name, salary
old_salary,
0 new_salary, commit_timestamp$
FROM cdc_demo_sub_view
WHERE operation$ = 'D '
UNION ALL
SELECT 'U' opt, v1.cscn$, v1.rsid$, v1.employee_id, v1.first_name, v1.last_name,
v1.salary old_salary, v2.salary new_salary, v1.commit_timestamp$
FROM cdc_demo_sub_view v1, cdc_demo_sub_view v2
WHERE v1.operation$ = 'UU' and v2.operation$ = 'UN'
AND v1.cscn$ = v2.cscn$
AND v1.rsid$ = v2.rsid$
AND ABS(v1.salary - v2.salary) > 0)
ORDER BY cscn$, rsid$;

percent NUMBER;
BEGIN
--Step 1 Get the change (extend the window).
dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

--Step 2 Process the changes.
FOR rec IN cur LOOP

IF rec.opt = 'I' THEN
INSERT INTO salary_history
(employee_id, first_name, last_name, old_salary, new_salary, pct_change,
action_date)
VALUES
(rec.employee_id, rec.first_name, rec.last_name, 0, rec.new_salary, NULL,
rec.commit_timestamp$);
END IF;

IF rec.opt = 'D' THEN


INSERT INTO salary_history
(employee_id, first_name, last_name, old_salary, new_salary, pct_change,
action_date)
VALUES
(rec.employee_id, rec.first_name, rec.last_name, rec.old_salary, 0, NULL,
rec.commit_timestamp$);
END IF;

IF rec.opt = 'U' THEN


percent := (rec.new_salary - rec.old_salary) / rec.old_salary * 100;
INSERT INTO salary_history
(employee_id, first_name, last_name, old_salary, new_salary, pct_change,
action_date)
VALUES
(rec.employee_id, rec.first_name, rec.last_name, rec.old_salary,
rec.new_salary, percent, rec.commit_timestamp$);
END IF;
END LOOP;
COMMIT;

--Step 3 Purge the window of consumed data


dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB');
END update_salary_history;
/

desc dba_hist_streams_apply_sum

SELECT apply_name, reader_total_messages_dequeued, reader_lag,
       server_total_messages_applied
FROM dba_hist_streams_apply_sum;

DML On Source Table


conn hr/hr

SELECT employee_id, first_name, last_name, salary
FROM cdc_demo
ORDER BY 1 DESC;

SELECT employee_id, first_name, last_name, salary
FROM cdc_demo_sub_view;

SELECT *
FROM salary_history;

UPDATE cdc_demo
SET salary = salary+1
WHERE employee_id = 100;

COMMIT;

SELECT employee_id,first_name,last_name,salary
FROM cdc_demo_sub_view;

exec update_salary_history;

SELECT employee_id,first_name,last_name,salary
FROM cdc_demo_sub_view;

SELECT *
FROM salary_history;

-- Capture Cleanup
conn hr/hr

exec dbms_cdc_subscribe.drop_subscription('CDC_DEMO_SUB');

conn / as sysdba

-- reverse prepare table instantiation


exec dbms_capture_adm.abort_table_instantiation('HR.CDC_DEMO');

-- drop the change table


exec dbms_cdc_publish.drop_change_table('CDCADMIN', 'CDC_DEMO_CT', 'Y');

-- drop the change set
exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET');

conn hr/hr

drop table salary_history purge;

drop table cdc_demo purge;

drop procedure update_salary_history;

conn / as sysdba

drop user cdcadmin;

Note 4: Oracle 10.2 Async HotLog CDC Example:
=============================================

conn / as sysdba

-- *NIX only
define _editor=vi

-- validate database parameters


archive log list -- Archive Mode
show parameter aq_tm_processes -- min 3
show parameter compatible -- must be 10.1.0 or above
show parameter global_names -- must be TRUE
show parameter job_queue_processes -- min 2 recommended 4-6
show parameter open_links -- not less than the default 4
show parameter shared_pool_size -- must be 0 or at least 200MB
show parameter streams_pool_size -- min. 480MB (10MB/capture 1MB/apply)
show parameter undo_retention -- min. 3600 (1 hr.) (900)

-- Examples of altering initialization parameters


alter system set aq_tm_processes=3 scope=BOTH;
alter system set compatible='10.2.0.1.0' scope=SPFILE;
alter system set global_names=TRUE scope=BOTH;
alter system set job_queue_processes=6 scope=BOTH;
alter system set open_links=4 scope=SPFILE;
alter system set streams_pool_size=200M scope=BOTH; -- very slow if making smaller
alter system set undo_retention=3600 scope=BOTH;

/*
JOB_QUEUE_PROCESSES (current value) + 2
PARALLEL_MAX_SERVERS (current value) + (5 * (the number of change sets planned))
PROCESSES (current value) + (7 * (the number of change sets planned))
SESSIONS (current value) + (2 * (the number of change sets planned))
*/

-- Retest parameter after modification

shutdown immediate;

startup mount;
alter database archivelog;

-- important
alter database force logging;

-- one option among several


alter database add supplemental log data;

alter database open;

-- validate archivelogging
archive log list

alter system switch logfile;

archive log list

-- validate force and supplemental logging


SELECT supplemental_log_data_min, supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk, supplemental_log_data_all, force_logging
FROM gv$database;

SELECT force_logging
FROM dba_tablespaces;

-- examine existing queues


desc dba_queues

set linesize 121


col owner format a6
col queue_table format a25
col user_comment format a31

SELECT owner, name, queue_table, queue_type, user_comment
FROM dba_queues
ORDER BY 1,4,2;

-- examine existing streams


desc dba_hist_streams_capture

SELECT capture_name, total_messages_captured, total_messages_enqueued
FROM dba_hist_streams_capture;

desc dba_hist_streams_apply_sum

SELECT apply_name, reader_total_messages_dequeued, reader_lag,
       server_total_messages_applied
FROM dba_hist_streams_apply_sum;

-- examine CDC related data dictionary objects


SELECT table_name
FROM dba_tables
WHERE owner = 'SYS'
AND table_name LIKE 'CDC%$';

desc cdc_system$
SELECT * FROM cdc_system$;

Setup As SYS - Create Streams Administrators


conn / as sysdba

SELECT *
FROM dba_streams_administrator;

CREATE USER cdcadmin
IDENTIFIED BY cdcadmin
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA 0 ON system
QUOTA 10M ON sysaux
QUOTA 20M ON users;

-- system privs
GRANT create session TO cdcadmin;
GRANT create table TO cdcadmin;
GRANT create sequence TO cdcadmin;
GRANT create procedure TO cdcadmin;
GRANT dba TO cdcadmin;

-- role privs
GRANT execute_catalog_role TO cdcadmin;
GRANT select_catalog_role TO cdcadmin;

-- object privileges
GRANT execute ON dbms_cdc_publish TO cdcadmin;
GRANT execute ON dbms_cdc_subscribe TO cdcadmin;
-- required for this demo but not by CDC
GRANT execute ON dbms_lock TO cdcadmin;

-- streams specific priv


execute dbms_streams_auth.grant_admin_privilege('CDCADMIN');

SELECT account_status, created
FROM dba_users
WHERE username = 'CDCADMIN';

SELECT *
FROM dba_sys_privs
WHERE grantee = 'CDCADMIN';

SELECT username
FROM dba_users u, streams$_privileged_user s
WHERE u.user_id = s.user#;

SELECT *
FROM dba_streams_administrator;

Prepare Schema Tables for CDC Replication


conn / as sysdba

alter user hr account unlock identified by hr;


connect hr/hr

desc employees

SELECT *
FROM employees;

-- create CDC demo table


CREATE TABLE cdc_demo AS
SELECT * FROM employees;

-- a second way to implement supplemental logging


ALTER TABLE cdc_demo
ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- table to track salary history changes originating in cdc_demo


CREATE TABLE salary_history (
employee_id NUMBER(6) NOT NULL,
job_id VARCHAR2(10) NOT NULL,
department_id NUMBER(4),
old_salary NUMBER(8,2),
new_salary NUMBER(8,2),
percent_change NUMBER(4,2),
salary_action_date DATE);

SELECT table_name
FROM user_tables;

Instantiate Source Table


conn / as sysdba

desc dba_capture_prepared_tables

SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
FROM dba_capture_prepared_tables;

dbms_capture_adm.prepare_table_instantiation(
table_name IN VARCHAR2,
supplemental_logging IN VARCHAR2 DEFAULT 'keys');

Note: This procedure performs the synchronization necessary for instantiating the
table at another database.
This procedure records the lowest SCN of the table for instantiation. SCNs
subsequent to the lowest SCN for an object
can be used for instantiating the object.

exec dbms_capture_adm.prepare_table_instantiation(table_name => 'HR.CDC_DEMO');

SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
FROM dba_capture_prepared_tables;

Create Asynchronous HotLog Change Set


conn cdcadmin/cdcadmin
col object_name format a30

SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

dbms_cdc_publish.create_change_set(
change_set_name IN VARCHAR2,
description IN VARCHAR2 DEFAULT NULL,
change_source_name IN VARCHAR2,
stop_on_ddl IN CHAR DEFAULT 'N',
begin_date IN DATE DEFAULT NULL,
end_date IN DATE DEFAULT NULL);

-- this may take a while; don't be impatient


exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set', 'HOTLOG_SOURCE', 'Y', NULL, NULL);

-- here is why
SELECT object_name, object_type
FROM user_objects
ORDER BY 2,1;

SELECT table_name, tablespace_name, iot_type
FROM user_tables;

col high_value format a15

SELECT table_name, composite, partition_name, high_value
FROM user_tab_partitions;

conn / as sysdba

desc cdc_change_sets$

set linesize 121


col set_name format a20
col capture_name format a20
col queue_name format a20
col queue_table_name format a20

SELECT set_name, capture_name, queue_name, queue_table_name
FROM cdc_change_sets$;

SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
FROM change_sets;

SELECT process_type, name
FROM streams$_process_params;

Create Change Table


conn cdcadmin/cdcadmin

dbms_cdc_publish.create_change_table(
owner IN VARCHAR2,
change_table_name IN VARCHAR2,
change_set_name IN VARCHAR2,
source_schema IN VARCHAR2,
source_table IN VARCHAR2,
column_type_list IN VARCHAR2,
capture_values IN VARCHAR2, -- BOTH, NEW, OLD
rs_id IN CHAR,
row_id IN CHAR,
user_id IN CHAR,
timestamp IN CHAR,
object_id IN CHAR,
source_colmap IN CHAR,
target_colmap IN CHAR,
options_string IN VARCHAR2);

BEGIN
dbms_cdc_publish.create_change_table('CDCADMIN', 'CDC_DEMO_CT', 'CDC_DEMO_SET',
'HR', 'CDC_DEMO',
'EMPLOYEE_ID NUMBER(6), FIRST_NAME VARCHAR2(20), LAST_NAME VARCHAR2(25), EMAIL
VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20), HIRE_DATE DATE, JOB_ID VARCHAR2(10), SALARY NUMBER,
COMMISSION_PCT NUMBER,
MANAGER_ID NUMBER, DEPARTMENT_ID NUMBER', 'BOTH', 'N', 'N', 'N', 'N', 'N', 'N',
'Y', NULL);
END;
/

exec dbms_cdc_publish.alter_change_table('CDCADMIN', 'CDC_DEMO_CT', rs_id=>'Y');

GRANT select ON cdc_demo_ct TO hr;

conn / as sysdba

SELECT set_name, change_source_name, queue_name, queue_table_name
FROM cdc_change_sets$;

desc cdc_change_tables$

SELECT change_set_name, source_schema_name, source_table_name
FROM cdc_change_tables$;

Enable Capture
conn / as sysdba

SELECT set_name, change_source_name, capture_enabled
FROM cdc_change_sets$;

conn cdcadmin/cdcadmin

dbms_cdc_publish.alter_change_set(
change_set_name IN VARCHAR2,
description IN VARCHAR2 DEFAULT NULL,
remove_description IN CHAR DEFAULT 'N',
enable_capture IN CHAR DEFAULT NULL,
recover_after_error IN CHAR DEFAULT NULL,
remove_ddl IN CHAR DEFAULT NULL,
stop_on_ddl IN CHAR DEFAULT NULL);
exec dbms_cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET', enable_capture=>'Y');

conn / as sysdba

SELECT set_name, change_source_name, capture_enabled
FROM cdc_change_sets$;

Create Subscription
conn hr/hr

dbms_cdc_subscribe.create_subscription(
change_set_name IN VARCHAR2,
description IN VARCHAR2,
subscription_name IN VARCHAR2);

exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_SET', 'cdc_demo subx', 'CDC_DEMO_SUB');

conn / as sysdba

set linesize 121


col description format a30
col subscription_name format a20
col username format a10

SELECT subscription_name, handle, set_name, username, earliest_scn, description
FROM cdc_subscribers$;

Subscribe to and Activate Subscription


conn hr/hr

dbms_cdc_subscribe.subscribe(
subscription_name IN VARCHAR2,
source_schema IN VARCHAR2,
source_table IN VARCHAR2,
column_list IN VARCHAR2,
subscriber_view IN VARCHAR2);

BEGIN
dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'HR', 'CDC_DEMO',
'EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID,
SALARY,
COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID',
'CDC_DEMO_SUB_VIEW');
END;
/

desc user_subscriptions

SELECT set_name, subscription_name, status
FROM user_subscriptions;

dbms_cdc_subscribe.activate_subscription(
subscription_name IN VARCHAR2);
exec dbms_cdc_subscribe.activate_subscription('CDC_DEMO_SUB');

SELECT set_name, subscription_name, status
FROM user_subscriptions;

Create Procedure To Populate Salary History Table


conn hr/hr

/* Create a stored procedure to populate the new HR.SALARY_HISTORY table. The
procedure extends the subscription window of the CDC_DEMO_SUB subscription to get
the most recent set of source table changes. It uses the subscriber's
CDC_DEMO_SUB_VIEW view to scan the changes and insert them into the
SALARY_HISTORY table. It then purges the subscription window to indicate that it
is finished with that set of changes. */

CREATE OR REPLACE PROCEDURE update_salary_history IS

CURSOR cur IS
SELECT *
FROM (
  SELECT 'I' opt, cscn$, rsid$, employee_id, job_id, department_id,
         0 old_salary, salary new_salary, commit_timestamp$
  FROM cdc_demo_sub_view
  WHERE operation$ = 'I '
  UNION ALL
  SELECT 'D' opt, cscn$, rsid$, employee_id, job_id, department_id,
         salary old_salary, 0 new_salary, commit_timestamp$
  FROM cdc_demo_sub_view
  WHERE operation$ = 'D '
  UNION ALL
  SELECT 'U' opt, v1.cscn$, v1.rsid$, v1.employee_id, v1.job_id, v1.department_id,
         v1.salary old_salary, v2.salary new_salary, v1.commit_timestamp$
  FROM cdc_demo_sub_view v1, cdc_demo_sub_view v2
  WHERE v1.operation$ = 'UO' and v2.operation$ = 'UN'
  AND v1.cscn$ = v2.cscn$
  AND v1.rsid$ = v2.rsid$
  AND ABS(v1.salary - v2.salary) > 0)
ORDER BY cscn$, rsid$;

percent NUMBER;
BEGIN
-- Get the next set of changes to the HR.CDC_DEMO source table
dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

-- Process each change


FOR rec IN cur
LOOP
IF rec.opt = 'I' THEN
INSERT INTO salary_history VALUES
(rec.employee_id, rec.job_id, rec.department_id, 0,
rec.new_salary, NULL, rec.commit_timestamp$);
END IF;
IF rec.opt = 'D' THEN
INSERT INTO salary_history VALUES
(rec.employee_id, rec.job_id, rec.department_id, rec.old_salary, 0,
NULL, rec.commit_timestamp$);
END IF;

IF rec.opt = 'U' THEN


percent := (rec.new_salary - rec.old_salary) / rec.old_salary * 100;
INSERT INTO salary_history VALUES
(rec.employee_id, rec.job_id, rec.department_id, rec.old_salary,
rec.new_salary, percent, rec.commit_timestamp$);
END IF;
END LOOP;

-- Indicate subscriber is finished with this set of changes


dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB');
END update_salary_history;
/

Create Procedure To Wait For Changes


/* Create function CDCADMIN.WAIT_FOR_CHANGES to enable this demo to run
predictably. The asynchronous nature of CDC HotLog mode means that there is a
delay before source table changes appear in the CDC change table and the
subscriber view. By default this procedure waits up to 3 minutes for the change
table and 1 additional minute for the subscriber view. This can be adjusted if
it is insufficient. The caller must specify the number of rows expected to be in
the change table, and may optionally specify a different number of seconds to
wait for changes to appear there. */

conn cdcadmin/cdcadmin

CREATE OR REPLACE FUNCTION wait_for_changes (
rowcount NUMBER, -- number of rows to wait for
maxwait_seconds NUMBER := 300) -- maximum time to wait, in seconds
RETURN VARCHAR2 AUTHID CURRENT_USER AS

numrows NUMBER := 0; -- number of rows in change table
slept NUMBER := 0; -- total time slept
sleep_time NUMBER := 3; -- number of seconds to sleep
return_msg VARCHAR2(100); -- informational message
keep_waiting BOOLEAN := TRUE; -- whether to keep waiting
BEGIN
WHILE keep_waiting LOOP
SELECT COUNT(*)
INTO numrows
FROM CDC_DEMO_CT;

-- Got expected number of rows
IF numrows >= rowcount THEN
keep_waiting := FALSE;
return_msg := 'Change table contains at least ' || TO_CHAR(rowcount) || ' rows';
EXIT;
-- Reached maximum number of seconds to wait
ELSIF slept > maxwait_seconds THEN
return_msg := ' - Timed out while waiting for the change table to reach ' ||
              TO_CHAR(rowcount) || ' rows';
EXIT;
END IF;

dbms_lock.sleep(sleep_time);
slept := slept+sleep_time;
END LOOP;
-- additional wait time for changes to become available to subscriber view
dbms_lock.sleep(60);

RETURN return_msg;
END wait_for_changes;
/

Preparation for DML


-- In a separate terminal window
cd $ORACLE_BASE/admin/ORCL/bdump
tail -f alertorcl.log
-- tailing the alert log allows us to watch log miner at work

-- open a SQL*Plus session as SYS


desc gv$streams_capture

set linesize 121


col state format a20

SELECT capture_name, logminer_id, state, total_messages_captured
FROM gv$streams_capture;

-- open a SQL*Plus session as SYS


desc gv$streams_apply_reader

set linesize 121


col state format a20

SELECT apply_name, state, total_messages_dequeued
FROM gv$streams_apply_reader;

DML On Source Table


conn hr/hr

UPDATE cdc_demo SET salary = salary + 500 WHERE job_id = 'SH_CLERK';


UPDATE cdc_demo SET salary = salary + 1000 WHERE job_id = 'ST_CLERK';
UPDATE cdc_demo SET salary = salary + 1500 WHERE job_id = 'PU_CLERK';
COMMIT;

INSERT INTO cdc_demo
(employee_id, first_name, last_name, email, phone_number, hire_date, job_id,
 salary, commission_pct, manager_id, department_id)
VALUES
(207, 'Mary', 'Lee', 'MLEE', '310.234.4590', TO_DATE('10-JAN-2003'), 'SH_CLERK',
 4000, NULL, 121, 50);

INSERT INTO cdc_demo
(employee_id, first_name, last_name, email, phone_number, hire_date, job_id,
 salary, commission_pct, manager_id, department_id)
VALUES
(208, 'Karen', 'Prince', 'KPRINCE', '345.444.6756', TO_DATE('10-NOV-2003'),
 'SH_CLERK', 3000, NULL, 111, 50);

INSERT INTO cdc_demo
(employee_id, first_name, last_name, email, phone_number, hire_date, job_id,
 salary, commission_pct, manager_id, department_id)
VALUES
(209, 'Frank', 'Gate', 'FGATE', '451.445.5678', TO_DATE('13-NOV-2003'), 'IT_PROG',
 8000, NULL, 101, 50);

INSERT INTO cdc_demo
(employee_id, first_name, last_name, email, phone_number, hire_date, job_id,
 salary, commission_pct, manager_id, department_id)
VALUES
(210, 'Paul', 'Jeep', 'PJEEP', '607.345.1112', TO_DATE('28-MAY-2003'), 'IT_PROG',
 8000, NULL, 101, 50);

COMMIT;

Validate Capture
-- Expecting 94 rows to appear in the change table CDCADMIN.CDC_DEMO_CT. This first
-- capture may take a few minutes. Later captures should be substantially faster.

conn cdcadmin/cdcadmin

SELECT wait_for_changes(94, 180) message
FROM dual;

Another Test
conn hr/hr

/* Once the wait_for_changes function has indicated the changes have been
populated, apply the changes to the salary_history table */

exec update_salary_history;

SELECT employee_id, job_id, department_id, old_salary, new_salary, percent_change
FROM salary_history
ORDER BY 1, 4, 5;

delete from cdc_demo where first_name = 'Mary' and last_name = 'Lee';


delete from cdc_demo where first_name = 'Karen' and last_name = 'Prince';
delete from cdc_demo where first_name = 'Frank' and last_name = 'Gate';
delete from cdc_demo where first_name = 'Paul' and last_name = 'Jeep';
COMMIT;

update cdc_demo set salary = salary + 5000 where job_id = 'AD_VP';


update cdc_demo set salary = salary - 1000 where job_id = 'ST_MAN';
update cdc_demo set salary = salary - 500 where job_id = 'FI_ACCOUNT';
COMMIT;

-- Expecting 122 rows to appear in the change table CDCADMIN.CDC_DEMO_CT.
-- (94 rows from the first set of DMLs and 28 from the second set)
conn cdcadmin/cdcadmin

SELECT wait_for_changes(122, 180) message from dual;

conn hr/hr

exec update_salary_history

SELECT employee_id, job_id, department_id, old_salary, new_salary, percent_change
FROM salary_history
ORDER BY 1, 4, 5;

Capture Cleanup
conn hr/hr

exec dbms_cdc_subscribe.drop_subscription('CDC_DEMO_SUB');

conn / as sysdba

-- reverse prepare table instantiation


exec dbms_capture_adm.abort_table_instantiation('HR.CDC_DEMO');

-- drop the change table


exec dbms_cdc_publish.drop_change_table('CDCADMIN', 'CDC_DEMO_CT', 'Y');

-- drop the change set


exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET');

conn cdcadmin/cdcadmin

drop function wait_for_changes;

SELECT COUNT(*)
FROM user_objects;

conn hr/hr

drop table salary_history purge;


drop table cdc_demo purge;
drop procedure update_salary_history;

conn / as sysdba

drop user cdcadmin;

Note 5: Oracle 9.2 CDC Example:
===============================

-- Change table example code
-- by Jon Emmons
-- www.lifeaftercoffee.com
-- NOTE: This code is provided for educational purposes only! Use at your
-- own risk. I have only used this code on Oracle 9.2 Enterprise Edition.
-- Due to the way variables are handled, this should be run one command at
-- a time, but must be run all in the same SQL*Plus session.

-- Connect as a privileged user


conn system

-- Create scott if he doesn't already exist


CREATE user scott IDENTIFIED BY tiger
DEFAULT tablespace users TEMPORARY tablespace temp
quota unlimited ON users;

-- Grant scott appropriate privileges


GRANT connect TO scott;
GRANT execute_catalog_role TO scott;
GRANT select_catalog_role TO scott;
GRANT CREATE TRIGGER TO scott;

-- Connect up as scott
conn scott/tiger

-- Create Table
CREATE TABLE scott.classes
(
class_id NUMBER,
class_title VARCHAR2(30),
class_instructor VARCHAR2(30),
class_term_code VARCHAR2(6),
class_credits NUMBER,
CONSTRAINT PK_classes PRIMARY KEY (class_id )
);

-- Load some data


INSERT INTO classes VALUES
(100, 'Reading', 'Jon', '200510', 3);

INSERT INTO classes VALUES
(101, 'Writing', 'Stacey', '200510', 4);

INSERT INTO classes VALUES
(102, 'Arithmetic', 'Laurianne', '200530', 3);

commit;

-- Confirm current data


SELECT * FROM classes;

-- Set up the change table


exec dbms_logmnr_cdc_publish.create_change_table -
('scott', 'classes_ct', 'SYNC_SET', 'scott', 'classes', -
'class_id NUMBER, -
class_title VARCHAR2(30), -
class_instructor VARCHAR2(30), -
class_term_code VARCHAR2(6), -
class_credits NUMBER', -
'BOTH', 'Y', 'N', 'N', 'Y', 'N', 'Y', 'N', NULL);
-- Subscribe to the change table
variable subhandle NUMBER;

execute dbms_logmnr_cdc_subscribe.get_subscription_handle -
(CHANGE_SET => 'SYNC_SET', -
DESCRIPTION => 'Changes to classes table', -
SUBSCRIPTION_HANDLE => :subhandle);

execute dbms_logmnr_cdc_subscribe.subscribe -
(subscription_handle => :subhandle, -
source_schema => 'scott', -
source_table => 'classes', -
column_list => 'class_id, class_title, class_instructor, class_term_code, class_credits');

execute dbms_logmnr_cdc_subscribe.activate_subscription -
(SUBSCRIPTION_HANDLE => :subhandle);

-- Now modify the table in a few different ways


UPDATE classes SET class_title='Math' WHERE class_id=102;

INSERT INTO classes VALUES
(103, 'Computers', 'Ken', '200510', 1);

INSERT INTO classes VALUES
(104, 'Racketball', 'Matt', '200530', 2);

UPDATE classes SET class_credits=3 WHERE class_id=103;

DELETE FROM classes WHERE class_title='Reading';

commit;

-- Confirm current data


SELECT * FROM classes;

-- Now let's check out the change table


variable viewname varchar2(40)

execute dbms_logmnr_cdc_subscribe.extend_window -
(subscription_handle => :subhandle);

execute dbms_logmnr_cdc_subscribe.prepare_subscriber_view -
(SUBSCRIPTION_HANDLE => :subhandle, -
SOURCE_SCHEMA => 'scott', -
SOURCE_TABLE => 'classes', -
VIEW_NAME => :viewname);

print viewname

-- This little trick will move the bind variable :viewname into the
-- substitution variable named subscribed_view
COLUMN myview new_value subscribed_view noprint
SELECT :viewname myview FROM dual;

-- Examine the actual change data. You could also look at the table in a
-- browser such as TOAD for easier viewing.
SELECT * FROM &subscribed_view;

-- Close the subscriber view


execute dbms_logmnr_cdc_subscribe.drop_subscriber_view -
(SUBSCRIPTION_HANDLE => :subhandle, -
SOURCE_SCHEMA => 'scott', -
SOURCE_TABLE => 'classes');

-- Purge the window


execute dbms_logmnr_cdc_subscribe.purge_window -
(subscription_handle => :subhandle);

-- If done altogether, end the subscription


execute dbms_logmnr_cdc_subscribe.drop_subscription -
(subscription_handle => :subhandle);

-- drop the change table


exec dbms_logmnr_cdc_publish.drop_change_table('scott', 'classes_ct', 'N');

-- Delete the table


DROP TABLE scott.classes;

Note 6:
=======

DBMS_CDC_PUBLISH:

In previous releases, this package was named DBMS_LOGMNR_CDC_PUBLISH. Beginning
with release 10g, the LOGMNR string has been removed from the name, resulting in
the name DBMS_CDC_PUBLISH. Although both variants of the name are still supported,
the variant with the LOGMNR string has been deprecated and may not be supported in
a future release.

The DBMS_CDC_PUBLISH package is used by a publisher to set up an Oracle Change
Data Capture system to capture and publish change data from one or more Oracle
relational source tables.

Change Data Capture captures and publishes only committed data. Oracle Change Data
Capture identifies
new data that has been added to, updated in, or removed from relational tables,
and publishes the change data
in a form that is usable by subscribers.

Typically, a Change Data Capture system has one publisher who captures and
publishes changes for any number
of Oracle relational source tables. The publisher then provides subscribers
(applications or individuals)
with access to the published data.

Note 7:
=======

Oracle Tips by Burleson


Oracle 10g Create the change tables

The dbms_cdc_publish.create_change_table procedure is used by the publisher user
on the staging database to create change tables.

The publisher user creates one or more change tables for each source table to be
published,
specifies which columns should be included, and specifies the combination of
before and after images
of the change data to capture.

To have more control over the physical properties and tablespace properties of the
change tables, the publisher can set the options_string field of the
dbms_cdc_publish.create_change_table procedure. The options_string field can
contain any option available on the CREATE TABLE statement.
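
For example, to place the change table in a dedicated tablespace (cdc_ts is a
hypothetical name here), the publisher could pass something like:

   options_string => 'TABLESPACE cdc_ts PCTFREE 5'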

The following script creates a change table on the staging database that captures
changes made to
a source table in the source database. The example uses the sample table
pl.project_history.

BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
owner => 'cdcproj',
change_table_name => 'PROJ_HIST_CT',
change_set_name => 'PROJECT_DAILY',
source_schema => 'PL',
source_table => 'PROJ_HISTORY',
column_type_list => 'EMPLOYEE_ID NUMBER(6),START_DATE DATE, END_DATE DATE,
PROJ_ID VARCHAR2(10), DEPARTMENT_ID NUMBER(4)',
capture_values => 'both',
rs_id => 'y',
row_id => 'n',
user_id => 'n',
timestamp => 'n',
object_id => 'n',
source_colmap => 'n',
target_colmap => 'y',
options_string => NULL);
END;
/

PL/SQL procedure successfully completed.

This example statement creates a change table named proj_hist_ct, within change
set project_daily.
The column_type_list parameter is used to identify the columns captured by the
change table.
Remember that the source_schema and source_table parameters identify the schema
and source table
that reside in the source database, not the staging database.

Note 8: Example using streams (1)
=================================

http://blogs.ittoolbox.com/oracle/guide/archives/oracle-streams-configuration-change-data-capture-13501

I have been playing with Oracle Streams again lately. My goal is to capture
changes in 10g and send them
to a 9i database.

Below is the short list for setting up Change Data Capture using Oracle Streams.
These steps are mostly
from the docs with a few tweaks I have added. This entry only covers setting up
the local capture and apply.
I'll add the propagation to 9i later this week or next weekend.

First the set up: we will use the HR account's Employee table. We'll capture all
changes to the Employee table
and insert them into an audit table. I'm not necessarily saying this is the way
you should audit your
database but it makes a nice example.

I'll also add a monitoring piece to capture process. I want to be able to see
exactly what is being captured
when it is being captured.

You will need to have sysdba access to follow along with me. Your database must
also be in archivelog mode.
The changes are picked up from the redo log.

So, away we go!

The first step is to create our streams administrator. I will follow the
guidelines from the Oracle docs exactly for this:

- Connect as sysdba:

sqlplus / as sysdba

- Create the streams tablespace (change the name and/or location to suit):

create tablespace streams_tbs datafile 'c:\temp\stream_tbs.dbf' size 25M
reuse autoextend on maxsize unlimited;

- Create our streams administrator:

create user strmadmin identified by strmadmin
default tablespace streams_tbs
quota unlimited on streams_tbs;

I haven't quite figured out why, but we need to grant our administrator DBA privs.
I think this is a bad thing.
There is probably a work around where I could do some direct grants instead but I
haven't had time
to track those down.

grant dba to strmadmin;

We also want to grant streams admin privs to the user.

BEGIN
SYS.DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/

-The next steps we'll run as the HR user.

conn hr/hr

- Grant all access to the employee table to the streams admin:

grant all on hr.employee to strmadmin;

- We also need to create the employee_audit table. Note that I am adding three
columns in this table
that do not exist in the employee table.

CREATE TABLE employee(
employee_id NUMBER(6),
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
job_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
department_id NUMBER(4));

ALTER TABLE employee ADD CONSTRAINT pk_employee_id PRIMARY KEY (employee_id);

INSERT INTO hr.employee
VALUES(206, 'Albert', 'Sel','avds@antapex.org',NULL, '07-JUN-94',
'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

CREATE TABLE employee_audit(
employee_id NUMBER(6),
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
job_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
department_id NUMBER(4),
upd_date DATE,
user_name VARCHAR2(30),
action VARCHAR2(30));

ALTER TABLE employee_audit ADD CONSTRAINT pk_employee_audit_id PRIMARY KEY
(employee_id);

- Grant all access to the audit table to the streams admin user:

grant all on hr.employee_audit to strmadmin;

- We connect as the streams admin user:

conn strmadmin/strmadmin

We can create a logging table. You would NOT want to do this in a high-volume
production system. I am doing this
to illustrate user defined monitoring and show how you can get inside the capture
process.

CREATE TABLE streams_monitor (
date_and_time TIMESTAMP(6) DEFAULT systimestamp,
txt_msg CLOB );

- Here we create the queue.

Unlike AQ, where you have to create a separate table, this step creates the queue
and the underlying ANYDATA table.

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

- This just defines that we want to capture DML and not DDL.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employee',
streams_type => 'capture',
streams_name => 'capture_emp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
inclusion_rule => true);
END;
/

| Possible errors on that statement:


|
| ERROR at line 1:
| ORA-32593: database supplemental logging attributes in flux
| ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 372
| ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 312
| ORA-06512: at line 2
|
| Oracle Error :: ORA-32593
| database supplemental logging attributes in flux|
|
| Cause
| there is another process actively modifying the database wide supplemental
logging attributes.
|
| Action
| Retry the DDL or the LogMiner dictionary build that raised this error.
|
| Restarting the database worked for me.

- Tell the capture process that we want to know who made the change:

BEGIN
DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
capture_name => 'capture_emp',
attribute_name => 'username',
include => true);
END;
/
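
Besides 'username', other extra attributes can be captured the same way (per the
DBMS_CAPTURE_ADM documentation: row_id, serial#, session#, thread#, tx_name), e.g.:

exec dbms_capture_adm.include_extra_attribute('capture_emp', 'tx_name', true);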

- We also need to tell Oracle where to start our capture. Change the
source_database_name to match your database.

DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'hr.employee',
source_database_name => 'test10g',
instantiation_scn => iscn);
END;
/

Note: To get the latest SCN from a database:

SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual;

DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
-----------------------------------------
8854917
And the fun part! This is where we define our capture procedure. I'm taking this
right from the docs
but I'm adding a couple steps.

The following is a user-defined procedure that specifies what to do additionally
when changes occur.

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS
lcr SYS.LCR$_ROW_RECORD;
rc PLS_INTEGER;
command VARCHAR2(30);
old_values SYS.LCR$_ROW_LIST;

BEGIN
-- Access the LCR
rc := in_any.GETOBJECT(lcr);
-- Get the object command type
command := lcr.GET_COMMAND_TYPE();

-- I am inserting the XML equivalent of the LCR into the monitoring table.
insert into streams_monitor (txt_msg) values (command ||
DBMS_STREAMS.CONVERT_LCR_TO_XML(in_any) );

-- Set the command_type in the row LCR to INSERT


lcr.SET_COMMAND_TYPE('INSERT');

-- Set the object_name in the row LCR to EMP_DEL


lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT');

-- Set the new values to the old values for update and delete
IF command IN ('DELETE', 'UPDATE') THEN
-- Get the old values in the row LCR
old_values := lcr.GET_VALUES('old');
-- Set the old values in the row LCR to the new values in the row LCR
lcr.SET_VALUES('new', old_values);
-- Set the old values in the row LCR to NULL
lcr.SET_VALUES('old', NULL);
END IF;

-- Add a SYSDATE for upd_date


lcr.ADD_COLUMN('new', 'UPD_DATE', ANYDATA.ConvertDate(SYSDATE));
-- Add a user column
lcr.ADD_COLUMN('new', 'user_name', lcr.GET_EXTRA_ATTRIBUTE('USERNAME') );
-- Add an action column
lcr.ADD_COLUMN('new', 'ACTION', ANYDATA.ConvertVarChar2(command));

-- Make the changes


lcr.EXECUTE(true);
commit;
END;
/

- Create the DML handlers:


BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employee',
object_type => 'TABLE',
operation_name => 'INSERT',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/

BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employee',
object_type => 'TABLE',
operation_name => 'UPDATE',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/

BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employee',
object_type => 'TABLE',
operation_name => 'DELETE',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/

- Create the apply rule.

This tells streams, yet again, that we in fact do want to capture changes. The
second call tells streams where to put the info. Change the source_database_name
to match your database.

DECLARE
emp_rule_name_dml VARCHAR2(30);
emp_rule_name_ddl VARCHAR2(30);
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employee',
streams_type => 'apply',
streams_name => 'apply_emp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
source_database => 'test10g',
dml_rule_name => emp_rule_name_dml,
ddl_rule_name => emp_rule_name_ddl);
DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
rule_name => emp_rule_name_dml,
destination_queue_name => 'strmadmin.streams_queue');
END;
/

We don't want to stop applying changes when there is an error, so:

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_emp',
parameter => 'disable_on_error',
value => 'n');
END;
/

- Turn on the apply process:

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_emp');
END;
/

- Turn on the capture process:

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_emp');
END;
/
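
- Before making changes it is worth confirming that both processes came up
  (a quick check, not part of the original article):

SELECT capture_name, status FROM dba_capture;  -- expect ENABLED
SELECT apply_name, status FROM dba_apply;      -- expect ENABLED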

- Connect as HR and make some changes to Employees.

sqlplus hr/hr

INSERT INTO hr.employee
VALUES(207, 'JOHN', 'SMITH','JSMITH@MYCOMPANY.COM',NULL, '07-JUN-94',
'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

INSERT INTO hr.employee
VALUES(208, 'Piet', 'Pietersen','JSMITH@MYCOMPANY.COM',NULL, '07-JUN-94',
'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

INSERT INTO hr.employee
VALUES(209, 'Piet', 'Pietersen','JSMITH@MYCOMPANY.COM',NULL, '07-JUN-94',
'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

UPDATE hr.employee
SET salary=5999
WHERE employee_id=206;
COMMIT;

DELETE FROM hr.employee WHERE employee_id=207;
COMMIT;

It takes a few seconds for the data to make it to the logs and then back into the
system to be applied. Run this query until you see data (remembering that it is
not instantaneous):

SELECT employee_id, first_name, last_name, upd_date, action
FROM hr.employee_audit
ORDER BY employee_id;

Then you can log back into the streams admin account:

sqlplus strmadmin/strmadmin

View the XML LCR that we inserted during the capture process:

set long 9999
set pagesize 0
select * from streams_monitor;

That's it! It's really not that much work to capture and apply changes. Of course,
it's a little bit
more work to cross database instances, but it's not that much.
Keep an eye out for a future entry where I do just that.

One of the things that amazes me is how little code is required to accomplish
this. The less code I have to write,
the less code I have to maintain.

Take care,

LewisC

Note 9: Streams example (2)
===========================

The entry builds directly on my last entry, Oracle Streams Configuration: Change
Data Capture.
This entry will show you how to propagate the changes you captured in that entry
to a 9i database.

NOTE #1: I would recommend that you run the commands and make sure the last entry
works for you
before trying the code in this entry. That way you will need to debug as few
moving parts as possible.

NOTE #2: I have run this code windows to windows, windows to linux, linux to
solaris and solaris to solaris.
The only time I had any problem at all was solaris to solaris. If you run into
problems with propagation
running but not sending data, shutdown the source database and restart it. That
worked for me.

NOTE #3: I have run this code 10g to 10g and 10g to 9i. It works without change
between them.

NOTE #4: If you are not sure of the exact name of your database (including
domain), use global_name,
i.e. select * from global_name;

NOTE #5: Streams is not available with XE. Download and install EE. If you have 1
GB or more of RAM on your PC,
you can download EE and use the DBCA to run two database instances. You do not
physically need two machines
to get this to work.

NOTE #6: I promise this is the last note. Merry Christmas and/or Happy Holidays!

Now for the fun part.

As I mentioned above, you need two instances for this. I called my first instance
ORCL (how creative!)
and I called my second instance SECOND. It works for me!

ORCL will be my source instance and SECOND will be my target instance. You should
already have the CDC code
from the last article running in ORCL.

ORCL must be in archivelog mode to run CDC. SECOND does not need archivelog mode.
Having two databases
running on a single PC in archivelog mode can really beat up a poor IDE drive.

You already created your streams admin user in ORCL so now do the same thing in
SECOND. The code below is mostly
the same code that you ran on ORCL. I made a few minor changes in case you are
running both instances on a single PC:

sqlplus / as sysdba
create tablespace streams_second_tbs datafile 'c:\temp\stream_2_tbs.dbf' size 25M
reuse autoextend on maxsize unlimited;

create user strmadmin identified by strmadmin
default tablespace streams_second_tbs
quota unlimited on streams_second_tbs;

grant dba to strmadmin;

Connect as strmadmin. You need to create an AQ table, AQ queue and then start the
queue.
That's what the code below does.

BEGIN
DBMS_AQADM.CREATE_QUEUE_TABLE(
queue_table => 'lrc_emp_t',
queue_payload_type => 'sys.anydata',
multiple_consumers => TRUE,
compatible => '8.1');

DBMS_AQADM.CREATE_QUEUE(
queue_name => 'lrc_emp_q',
queue_table => 'lrc_emp_t');

DBMS_AQADM.START_QUEUE (
queue_name => 'lrc_emp_q');
END;
/
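
A quick sanity check (my addition, not from the original write-up) that the queue
came up enabled:

SELECT name, enqueue_enabled, dequeue_enabled FROM user_queues;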

You also need to create a database link. You have to have one from ORCL to SECOND
but for debugging,
I like a link in both. So, while you're in SECOND, create a link:

CREATE DATABASE LINK orcl.world
CONNECT TO strmadmin
IDENTIFIED BY strmadmin
USING 'orcl.world';

Log into ORCL as strmadmin and run the exact same command there. Most of the setup
for this is exactly
the same between the two instances.

Create your link on this side also.

CREATE DATABASE LINK second.world
CONNECT TO strmadmin
IDENTIFIED BY strmadmin
USING 'second.world';
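
Before moving on it may be worth testing that the links resolve (an optional
check, not in the original write-up):

select * from global_name@second.world;  -- run in ORCL
select * from global_name@orcl.world;    -- run in SECOND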

Ok, now we have running queues in ORCL and SECOND. While you are logged into ORCL,
you will create a propagation
schedule. You DO NOT need to run this in SECOND.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'hr.employees',
streams_name => 'orcl_2_second',
source_queue_name => 'strmadmin.lrc_emp_q',
destination_queue_name => 'strmadmin.lrc_emp_q@second.world',
include_dml => true,
include_ddl => FALSE,
source_database => 'orcl.world');
END;
/

This tells the database to take the data in the local lrc_emp_q and send it to the
named destination queue.

We're almost done with the propagation now. We just need to change the code we
wrote in the last article
in our DML handler. Go back and review that code now.

We are going to modify the EMP_DML_HANDLER so that we get an enqueue block just
above the execute statement:

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS
lcr SYS.LCR$_ROW_RECORD;
rc PLS_INTEGER;
command VARCHAR2(30);
old_values SYS.LCR$_ROW_LIST;
BEGIN
-- Access the LCR
rc := in_any.GETOBJECT(lcr);
-- Get the object command type
command := lcr.GET_COMMAND_TYPE();
-- I am inserting the XML equivalent of the LCR into the monitoring table.
insert into streams_monitor (txt_msg)
values (command ||
DBMS_STREAMS.CONVERT_LCR_TO_XML(in_any) );
-- Set the command_type in the row LCR to INSERT
lcr.SET_COMMAND_TYPE('INSERT');
-- Set the object_name in the row LCR to EMP_DEL
lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT');
-- Set the new values to the old values for update and delete
IF command IN ('DELETE', 'UPDATE') THEN
-- Get the old values in the row LCR
old_values := lcr.GET_VALUES('old');
-- Set the old values in the row LCR to the new values in the row LCR
lcr.SET_VALUES('new', old_values);
-- Set the old values in the row LCR to NULL
lcr.SET_VALUES('old', NULL);
END IF;
-- Add a SYSDATE value for the timestamp column
lcr.ADD_COLUMN('new', 'UPD_DATE', ANYDATA.ConvertDate(SYSDATE));
-- Add a user value for the timestamp column
lcr.ADD_COLUMN('new', 'user_name',
lcr.GET_EXTRA_ATTRIBUTE('USERNAME') );
-- Add an action column
lcr.ADD_COLUMN('new', 'ACTION', ANYDATA.ConvertVarChar2(command));

DECLARE
enqueue_options DBMS_AQ.enqueue_options_t;
message_properties DBMS_AQ.message_properties_t;
message_handle RAW(16);
recipients DBMS_AQ.aq$_recipient_list_t;
BEGIN
recipients(1) := sys.aq$_agent(
'anydata_subscriber',
'strmadmin.lrc_emp_q@second.world',
NULL);
message_properties.recipient_list := recipients;

DBMS_AQ.ENQUEUE(
queue_name => 'strmadmin.lrc_emp_q',
enqueue_options => enqueue_options,
message_properties => message_properties,
payload => anydata.convertObject(lcr),
msgid => message_handle);
EXCEPTION
WHEN OTHERS THEN
insert into streams_monitor (txt_msg)
values ('Anydata: ' || DBMS_UTILITY.FORMAT_ERROR_STACK );
END;

-- Make the changes


lcr.EXECUTE(true);
commit;
END;
/

The declaration section above created some variables required for an enqueue. We
created a subscriber (that's the name of the consumer). We will use that name to
dequeue the record in the SECOND instance.

We then enqueued our LCR as an ANYDATA datatype.

I put the exception handler there in case there are any problems with our enqueue.

That's all it takes. Insert some records into the HR.employees table and commit
them.
Then log into strmadmin@second and select * from the lrc_emp_t table. You should
have as many records
there as you inserted.

There are not a lot of moving parts so there aren't many things that will go
wrong. Propagation is where
I have the most troubles. You can query DBA_PROPAGATION to see if you have any
propagation errors.
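
For example (error_date and error_message are populated when a propagation job fails):

SELECT propagation_name, destination_dblink, error_date, error_message
FROM dba_propagation;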

That's it for moving the data from 10g to 9i. In my next article, I will show you
how to dequeue
the data and put it into the employee_audit table on the SECOND side.

If you have any problems or any questions please post them.

Take care, LewisC


Note 10: CDC 9.2
================

A change table is required for each source table. The publisher uses the procedure
DBMS_LOGMNR_CDC_PUBLISH.CREATE_CHANGE_TABLE to create change tables, as shown in
Listing 1. In this example, the change tables corresponding to PRICE_LIST and
SALES_TRAN are named CDC_PRICE_LIST and CDC_SALES_TRAN respectively.

This procedure creates a change table in a specified schema.

execute DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(OWNER => 'cdc1', \
CHANGE_TABLE_NAME => 'emp_ct', \
CHANGE_SET_NAME => 'SYNC_SET', \
SOURCE_SCHEMA => 'scott', \
SOURCE_TABLE => 'emp', \
COLUMN_TYPE_LIST => 'empno number, ename varchar2(10), job varchar2(9), mgr number, hiredate date, deptno number', \
CAPTURE_VALUES => 'both', \
RS_ID => 'y', \
ROW_ID => 'n', \
USER_ID => 'n', \
TIMESTAMP => 'n', \
OBJECT_ID => 'n',\
SOURCE_COLMAP => 'n', \
TARGET_COLMAP => 'y', \
OPTIONS_STRING => NULL);

This procedure adds columns to, or drops columns from, an existing change table.

EXECUTE DBMS_LOGMNR_CDC_PUBLISH.ALTER_CHANGE_TABLE (OWNER => 'cdc1', \
CHANGE_TABLE_NAME => 'emp_ct', \
OPERATION => 'ADD', \
ADD_COLUMN_LIST => '', \
RS_ID => 'Y', \
ROW_ID => 'N', \
USER_ID => 'N', \
TIMESTAMP => 'N', \
OBJECT_ID => 'N', \
SOURCE_COLMAP => 'N', \
TARGET_COLMAP => 'N');

This procedure allows a publisher to drop a subscriber view in the subscriber's schema.

EXECUTE sys.DBMS_CDC_SUBSCRIBE.DROP_SUBSCRIBER_VIEW( \
SUBSCRIPTION_HANDLE =>:subhandle, \
SOURCE_SCHEMA =>'scott', \
SOURCE_TABLE => 'emp');

Note 11: Asktom thread
======================

You Asked
I am looking for specific example of setting up streams for bi-directional schema
level
replication. What are your thoughts on using Oracle Streams to implement active-
active
configuration of databases for high availability?

Thanks,

Pratap

and we said...
replication is for replication.

replication is definitely nothing I would consider for HA.

For HA there is:

o RAC -- active active servers in a room.
o Data Guard -- active/warm for failover in the event the room disappears.

Replication is a study in complexity. Update anywhere will make your application

o infinitely hard to design
o fairly impossible to test
o more fragile (more moving pieces, more things that can go wrong, which
conflicts with your stated goal of HA)

I would not consider replication for HA in any circumstance. Data Guard is the
feature
you are looking for.

Tom, correct me if I am wrong. My understanding is that for a Data Guard failover,
manual intervention of the DBA is required. But if I have a replicated database
(2 masters - synchronous from primary to replicated database and asynchronous the
other way around - and only the primary being updated in normal circumstances),
the failover would be automatic and does not require the DBA to be on site
immediately.

Thanks
Anandhi

Followup March 8, 2004 - 8am US/Eastern:

the problem is you have to design your entire system from day 1 to be replicated
since when you
"failover" (lose the ability to connect to db1) there will be QUEUED transactions
that have not yet
taken place on db2 (eg: your users will say "hey, I know i did that already and do
it all over
again") when db1 "recovers" it'll push its transactions and db2 will push its
transactions. bamm
-- update conflicts.

So, replication is a tool developers can use to build a replicated database.

dataguard is a tool DBA's can use to set up a highly available environment.

they are not to be confused - you cannot replicate 3rd party applications like
Oracle Apps, people
soft, SAP, etc. You cannot replication most custom developed applications without
major
design/coding efforts.

you can data guard ANYTHING.

and yes, when failover is to take place, you want a human deciding that. failover
is something
that happens in minutes, it is in response to a disaster (hence the name DR). It
is a fire, an
unrecoverable situation. You do not want to failover because a system blue
screened (wait for it
to reboot). You do not want to failover some people but not others (as would
happen with db1, db2
and siteA, siteB if siteA cannot route to db1 but can route to db2 but siteB can
still route to db1
- bummer, now you have transactions taking place on BOTH and unless you designed
the systems to be
"replicatable" you are in a hole world of hurt)

DR is something you want a human to be involved in. They need to pull the
trigger.

Hi Tom,

can you please provide a classification of streams and change data capture.
I guess the main difference is that streams covers event capture, transport
(transformation) and
consumption. CDC only the capture.
But if you consider only event capture, are there technical differences between
streams and change
data capture? What was the main reason to made CDC as a separate product?

thx

Jaromir

http://www.db-nemec.com

Followup March 26, 2004 - 9am US/Eastern:

think of streams like a brick.

think of CDC like a building made of brick.


streams can be used to build CDC. CDC is built on top of streams (async CDC is
anyway, sync CDC is
trigger based).

they are complimentary, not really competing.

Hi Tom,
Is Oracle Advanced Queuing (AQ) renamed as Oracle Streams in 10g?

Thanks

Followup February 13, 2005 - 4pm US/Eastern:

no, AQ is a foundation technology used in the streams implementation (and advanced
replication), but streams is not AQ.

Note 12: ANYDATA:
=================

This datatype could be useful in an application that stores generic
attributes -- attributes you don't KNOW what the datatypes are until you actually
run the code. In the past, we would have stuffed everything into a VARCHAR2 --
dates, numbers, everything. Now, you can put a date in and have it stay as a date
(and the system will enforce it is in fact a valid date and let you perform date
operations on it -- if it were in a varchar2 -- someone could put "hello world"
into your "date" field)

SQL> connect adm/vga88nt
Connected.
SQL> create table t ( x sys.anyData );

Table created.

SQL> insert into t values ( sys.anyData.convertNumber(5) );

1 row created.

SQL>
SQL> insert into t values ( sys.anyData.convertDate(sysdate) );

1 row created.

SQL>
SQL> insert into t values ( sys.anyData.convertVarchar2('hello world') );

1 row created.

SQL> commit;
Commit complete.

SQL> select t.x.gettypeName() typeName from t t;

TYPENAME
--------------------------------------------------------------------------------
SYS.NUMBER
SYS.DATE
SYS.VARCHAR2

SQL> select * from t;

X()
--------------------------------------------------------------------------------
ANYDATA()
ANYDATA()
ANYDATA()

Unfortunately, they don't have a method to display the contents of
ANYDATA in a query (most useful in programs that will fetch the data,
figure out what it is and do something with it -- eg: the application
has some intelligence as to how to handle the data)

Fortunately we can write one tho:

create or replace function getData( p_x in sys.anyData )
return varchar2
as
  l_num number;
  l_date date;
  l_varchar2 varchar2(4000);
begin
  case p_x.gettypeName
    when 'SYS.NUMBER' then
      if ( p_x.getNumber( l_num ) = dbms_types.success )
      then
        l_varchar2 := l_num;
      end if;
    when 'SYS.DATE' then
      if ( p_x.getDate( l_date ) = dbms_types.success )
      then
        l_varchar2 := l_date;
      end if;
    when 'SYS.VARCHAR2' then
      if ( p_x.getVarchar2( l_varchar2 ) = dbms_types.success )
      then
        null;
      end if;
    else
      l_varchar2 := '** unknown **';
  end case;

  return l_varchar2;
end;
/
Function created.

select getData( x ) getdata from t;

GETDATA
--------------------
5
19-MAR-02
hello world

Note 13: Materialized Views
===========================

thread 1:
---------

create materialized view emp_rollback
enable query rewrite
as
select deptno, sum(sal) sal
from emp
group by deptno;

Now, given that all the necessary settings have been done (see the data
warehousing guide for a comprehensive example) your end users can query:
select deptno, sum(sal) from emp where deptno in ( 10, 20) group by deptno;

and the database engine will rewrite the query to go against the precomputed
rollup, not the details -- giving you the answer in a fraction of the time it
would normally take.
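
One way to see the rewrite happen (assuming query_rewrite_enabled=true for the
session and the cost-based optimizer is used) is to check the execution plan:

set autotrace traceonly explain
select deptno, sum(sal) from emp where deptno in ( 10, 20) group by deptno;
-- the plan should reference the materialized view (emp_rollback) instead of emp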

CREATE MATERIALIZED VIEW LOG ON sales WITH SEQUENCE, ROWID
(prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sum_sales
PARALLEL
BUILD IMMEDIATE
REFRESH FAST ON COMMIT AS
SELECT s.prod_id, s.time_id, COUNT(*) AS count_grp,
SUM(s.amount_sold) AS sum_dollar_sales,
COUNT(s.amount_sold) AS count_dollar_sales,
SUM(s.quantity_sold) AS sum_quantity_sales,
COUNT(s.quantity_sold) AS count_quantity_sales
FROM sales s
GROUP BY s.prod_id, s.time_id;

This example creates a materialized view that contains aggregates on a single table.
Because the materialized view log has been created with all referenced columns in
the materialized view's
defining query, the materialized view is fast refreshable. If DML is applied
against the sales table,
then the changes will be reflected in the materialized view when the commit is
issued.

CREATE MATERIALIZED VIEW cust_sales_mv
PCTFREE 0 TABLESPACE demo
STORAGE (INITIAL 16k NEXT 16k PCTINCREASE 0)
PARALLEL
BUILD IMMEDIATE
REFRESH COMPLETE
ENABLE QUERY REWRITE AS
SELECT c.cust_last_name, SUM(amount_sold) AS sum_amount_sold
FROM customers c, sales s WHERE s.cust_id = c.cust_id
GROUP BY c.cust_last_name;

thread 2:
---------

Use the CREATE MATERIALIZED VIEW statement to create a materialized view. A
materialized view is a database object that contains the results of a query.
The FROM clause of the query
can name tables,
views, and other materialized views. Collectively these objects are called master
tables
(a replication term) or detail tables (a data warehousing term). This reference
uses "master tables"
for consistency. The databases containing the master tables are called the master
databases.

Note:

The keyword SNAPSHOT is supported in place of MATERIALIZED VIEW for backward
compatibility.

thread 3:
---------

The following statement creates the primary-key materialized view on the table emp
located on a remote database.

SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;

Materialized view created.

Note: When you create a materialized view using the FAST option you will need to
create a view log on the master table(s) as shown below:
SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.

thread 4:
---------

Refreshing Materialized Views


When creating a materialized view, you have the option of specifying whether the
refresh occurs
ON DEMAND or ON COMMIT. In the case of ON COMMIT, the materialized view is changed
every time a
transaction commits, thus ensuring that the materialized view always contains the
latest data.
Alternatively, you can control the time when refresh of the materialized views
occurs by specifying ON DEMAND.
In this case, the materialized view can only be refreshed by calling one of the
procedures in the DBMS_MVIEW package.

DBMS_MVIEW provides three different types of refresh operations.

DBMS_MVIEW.REFRESH

Refresh one or more materialized views.

DBMS_MVIEW.REFRESH_ALL_MVIEWS

Refresh all materialized views.

DBMS_MVIEW.REFRESH_DEPENDENT

Refresh all materialized views that depend on a specified master table or
materialized view, or a list of master tables or materialized views.
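
For example, using the materialized views created earlier in this note (the method
string is applied positionally to the comma-separated list; 'F' = fast, 'C' = complete):

EXECUTE DBMS_MVIEW.REFRESH('MV_EMP_PK', 'F');
EXECUTE DBMS_MVIEW.REFRESH('SUM_SALES,CUST_SALES_MV', 'FC');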

Note 14:
========

/*********************************************************************************
* @author : Chandar

* @version : 1.0
*
* Name of the Application : SetupStreams.sql

* Creation/Modification History :
*
* Chandar 02-Feb-2003 Created
*

* Overview of Script:

* This SQL scripts sets up the streams for bi-directional replication between two
* databases. Replication is set up for the table named tabone in strmuser schema
* created by the script in both the databases.
* Ensure that you have created a streams administrator before executing this
script.
* The script StreamsAdminConfig.sql can be used to create a streams administrator
* and configure it.
* After running this script you can use AddTable.sql script to add another active
* table to streams environment.

**********************************************************************************
*/

SET VERIFY OFF
SET ECHO OFF
SPOOL streams_setup.log

--define variables to store global names of two databases

variable site1 varchar2(128);
variable site2 varchar2(128);
variable scn number;

-----------------------------------------------------------------------------------
-- get TNSNAME, SYS password and streams admin user details for both the databases
-----------------------------------------------------------------------------------
PROMPT
-- TNSNAME for database 1
ACCEPT db1 PROMPT 'Enter TNS Name of first database :'

PROMPT
-- SYS password for database 1
ACCEPT syspwddb1 PROMPT 'Enter password for sys user of first database :'

PROMPT
-- Streams administrator username for database 1
ACCEPT strm_adm_db1 PROMPT 'Enter username for streams admin of first database :'

PROMPT
-- Streams administrator password for database 1
ACCEPT strm_adm_pwd_db1 PROMPT 'Enter password for streams admin on first database :'

PROMPT
-- TNSNAME for database 2

ACCEPT db2 PROMPT 'Enter TNS Name of second database :'

PROMPT
-- SYS password for database 2
ACCEPT syspwddb2 PROMPT 'Enter password for sys user of second database :'

PROMPT
-- Streams administrator username for database 2
ACCEPT strm_adm_db2 PROMPT 'Enter username for streams admin of second database :'

PROMPT

-- Streams administrator password for database 2


ACCEPT strm_adm_pwd_db2 PROMPT 'Enter password for streams admin on second database :'

PROMPT
PROMPT Connecting as SYS user to database 1

CONN sys/&syspwddb1@&db1 AS SYSDBA;

-- Store global name in site1 variable

EXECUTE SELECT global_name INTO :site1 FROM global_name;

PROMPT Granting execute privileges on dbms_lock and dbms_pipe to streams admin

GRANT EXECUTE ON DBMS_LOCK TO &strm_adm_db1;

GRANT EXECUTE ON DBMS_PIPE to &strm_adm_db1;

-- create a user named strmuser and grant necessary privileges

PROMPT Creating user named strmuser

GRANT CONNECT, RESOURCE TO strmuser IDENTIFIED BY strmuser;

PROMPT Connecting as strmuser to database1

CONN strmuser/strmuser@&db1

-- create a sample table named tabone for which the replication will be set up

PROMPT
PROMPT Creating table tabone

CREATE TABLE tabone (id NUMBER(5) PRIMARY KEY, name VARCHAR2(50));

-- grant all permissions on tabone to the streams administrator

PROMPT Adding supplemental logging for table tabone

ALTER TABLE tabone ADD SUPPLEMENTAL LOG GROUP tabone_log_group ( id,name) ALWAYS;

PROMPT Granting permissions on table tabone to the streams administrator

GRANT ALL ON strmuser.tabone TO &strm_adm_db1;

------------------------------------
-- Repeat above steps for database 2
------------------------------------
PROMPT Connecting as SYS user to database2

CONN sys/&syspwddb2@&db2 AS SYSDBA;

-- Store global name in site2 variable

EXECUTE SELECT global_name INTO :site2 FROM global_name;

PROMPT Granting execute privileges on dbms_lock and dbms_pipe to streams admin

GRANT EXECUTE ON DBMS_LOCK TO &strm_adm_db2;

GRANT EXECUTE ON DBMS_PIPE to &strm_adm_db2;

-- create a user named strmuser and grant necessary privileges

PROMPT Creating user named strmuser

GRANT CONNECT, RESOURCE TO strmuser IDENTIFIED BY strmuser;

PROMPT Connecting as strmuser

CONN strmuser/strmuser@&db2

-- create a sample table named tabone for which the replication will be set up

PROMPT
PROMPT Creating table tabone

CREATE TABLE tabone (id NUMBER(5) PRIMARY KEY, name VARCHAR2(50));

PROMPT Adding supplemental logging for table tabone

ALTER TABLE tabone ADD SUPPLEMENTAL LOG GROUP tabone_log_group ( id,name) ALWAYS;

-- grant all permissions on tabone to the streams administrator

PROMPT Granting all permissions on tabone to streams administrator

GRANT ALL ON strmuser.tabone TO &strm_adm_db2;

----------------------------------------------------------------------------------
-- Set up replication for table tabone from database 1 to database 2 using streams
----------------------------------------------------------------------------------

-- connect as streams admin to database 1

PROMPT Connecting as streams administrator to database 1

conn &strm_adm_db1/&strm_adm_pwd_db1@&db1

-- create and set up streams queue at database 1

PROMPT
PROMPT Creating streams queue

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmuser_queue_table',
queue_name => 'strmuser_queue',
queue_user => 'strmuser');
END;
/

-- Add table propagation rules for table tabone to propagate captured changes
-- from database 1 to database 2

PROMPT Adding propagation rules for table tabone

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'strmuser.tabone',
streams_name => 'db1_to_db2_prop',
source_queue_name => '&strm_adm_db1..strmuser_queue',
destination_queue_name => '&strm_adm_db2..strmuser_queue@'||:site2,
include_dml => true,
include_ddl => true,
source_database => :site1);
END;
/

-- create a capture process and add table rules for table tabone to capture the
-- changes made to tabone in database 1

PROMPT Creating capture process at database 1 and adding table rules for table tabone.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'strmuser.tabone',
streams_type => 'capture',
streams_name => 'capture_db1',
queue_name => '&strm_adm_db1..strmuser_queue',
include_dml => true,
include_ddl => true);
END;
/

-- create a database link to database 2 connecting as streams administrator

PROMPT Creating database link to database 2

DECLARE
sql_command VARCHAR2(200);
BEGIN
sql_command :='CREATE DATABASE LINK ' ||:site2|| ' CONNECT TO '||
'&strm_adm_db2 IDENTIFIED BY &strm_adm_pwd_db2 USING ''&db2''';
EXECUTE IMMEDIATE sql_command;
END;
/

-- get the current SCN of database 1

PROMPT Getting current SCN of database 1

EXECUTE :scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

-- connect to database 2 as streams administrator

PROMPT Connecting as streams administrator to database 2

conn &strm_adm_db2/&strm_adm_pwd_db2@&db2

-- Set table instantiation SCN for table tabone at database 2 to current
-- SCN of database 1
-- We need not use import/export for instantiation because table tabone
-- does not contain any data

PROMPT
PROMPT Setting instantiation SCN for table tabone at database 2

BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'strmuser.tabone',
source_database_name => :site1,
instantiation_scn => :scn);
END;
/

-- create and set up streams queue at database 2

PROMPT Setting up streams queue at database 2

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmuser_queue_table',
queue_name => 'strmuser_queue',
queue_user => 'strmuser');
END;
/

-- create an apply process and add table rules for table tabone to apply
-- any changes propagated from database 1

PROMPT Creating Apply process at database 2 and adding table rules for table tabone

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'strmuser.tabone',
streams_type => 'apply',
streams_name => 'apply_db2',
queue_name => '&strm_adm_db2..strmuser_queue',
include_dml => true,
include_ddl => true,
source_database => :site1);
END;
/

-- start the apply process at database 2

PROMPT Starting the apply process

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_db2');
END;
/

-- connect to database 1 as streams administrator

PROMPT Connecting as streams administrator to database 1

conn &strm_adm_db1/&strm_adm_pwd_db1@&db1

-- start the capture process

PROMPT
PROMPT Starting the capture process at database 1

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_db1');
END;
/

-- make dml changes to tabone to check if streams is working

PROMPT Inserting row in tabone at database 1

INSERT INTO strmuser.tabone VALUES(11,'chan');

COMMIT;

-- wait for some time so that changes are applied to database 2.

EXECUTE DBMS_LOCK.SLEEP(35);

-----------------------------------------------------------------------------------
-- Set up replication for table tabone from database 2 to database 1 using streams
-----------------------------------------------------------------------------------

-- connect to database 2 as streams administrator

PROMPT Connecting as streams administrator to database 2

conn &strm_adm_db2/&strm_adm_pwd_db2@&db2

-- select table tabone to see if changes from database 1 are applied

PROMPT
PROMPT Selecting rows from tabone at database 2 to see if changes are propagated

select * from strmuser.tabone;

PROMPT
PROMPT Setting up bi-directional replication of table tabone

-- create a database link to database 1 connecting as streams administrator

PROMPT
PROMPT Creating database link from database 2 to database 1

DECLARE
sql_command varchar2(200);
BEGIN
sql_command :='CREATE DATABASE LINK ' ||:site1|| ' CONNECT TO '||
'&strm_adm_db1 IDENTIFIED BY &strm_adm_pwd_db1 USING ''&db1''';
EXECUTE IMMEDIATE sql_command;
END;
/

-- Add table propagation rules for table tabone to propagate captured changes
-- from database 2 to database 1

PROMPT Adding table propagation rules for tabone at database 2

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'strmuser.tabone',
streams_name => 'db2_to_db1_prop',
source_queue_name => '&strm_adm_db2..strmuser_queue',
destination_queue_name => '&strm_adm_db1..strmuser_queue@'||:site1,
include_dml => true,
include_ddl => true,
source_database => :site2);
END;
/

-- create a capture process and add table rules for table tabone to
-- capture the changes made to tabone in database 2
PROMPT Creating capture process at database 2 and adding table rules for table tabone

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'strmuser.tabone',
streams_type => 'capture',
streams_name => 'capture_db2',
queue_name => '&strm_adm_db2..strmuser_queue',
include_dml => true,
include_ddl => true);
END;
/

-- get the current SCN of database 2

PROMPT Getting the current SCN of database 2

EXECUTE :scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

-- connect to database 1 as streams administrator

PROMPT Connecting as streams administrator to database 1

CONN &strm_adm_db1/&strm_adm_pwd_db1@&db1

-- Set table instantiation SCN for table tabone at database 1 to current
-- SCN of database 2

PROMPT
PROMPT Setting instantiation SCN for tabone at database 1

BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'strmuser.tabone',
source_database_name => :site2,
instantiation_scn => :scn);
END;
/

-- create an apply process and add table rules for table tabone to apply
-- any changes propagated from database 2

PROMPT Creating apply process at database 1 and adding table rules for tabone

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'strmuser.tabone',
streams_type => 'apply',
streams_name => 'apply_db1',
queue_name => '&strm_adm_db1..strmuser_queue',
include_dml => true,
include_ddl => true,
source_database => :site2);
END;
/

-- start the apply process

PROMPT Starting the apply process

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_db1');
END;
/

-- connect to database 2 as streams administrator

PROMPT Connecting to database 2 as streams administrator

CONN &strm_adm_db2/&strm_adm_pwd_db2@&db2;

-- start the capture process

PROMPT
PROMPT Starting the capture process at database 2

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_db2');
END;
/

-- perform dml on tabone at database 2 to check if changes are propagated

PROMPT Inserting a row into tabone at database 2

INSERT INTO strmuser.tabone VALUES(12,'kelvin');

COMMIT;

-- wait for some time so that changes are applied to database 1.

EXECUTE DBMS_LOCK.SLEEP(35);

-- connect to database 1 as streams administrator

PROMPT Connecting as streams administrator to database 1

CONN &strm_adm_db1/&strm_adm_pwd_db1@&db1

PROMPT Checking if the changes made at database 2 are applied at database 1

SELECT * FROM strmuser.tabone;

SET ECHO OFF


SPOOL OFF

PROMPT End of Script
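
Once the script has run on both sides, a quick sanity check is to verify that the
capture, propagation and apply processes are all ENABLED (a sketch; run as the
streams administrator, and note that column availability in these dictionary
views varies slightly per release):

SELECT capture_name, status, error_message FROM dba_capture;
SELECT propagation_name, status FROM dba_propagation;
SELECT apply_name, status, error_message FROM dba_apply;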

Note 11:
========

The Data Propagator replication engine uses a change-capture mechanism and a log
to replicate data between a source system
and a target system. A capture process running on the source system captures
changes as they occur in the source tables
and stores them temporarily in the change data tables. The database administrator
of the source database must ensure
that the Data Propagator capture process is active on the source system. The apply
process reads the change data tables
and applies the changes to the target tables.

You can create a Data Propagator subscription with the DB2 Everyplace XML
Scripting tool. In DB2 Everyplace Version 8.2, you cannot create
or configure Data Propagator subscriptions by using the DB2 Everyplace Sync
Server Administration Console. You can use the DB2 Everyplace
Sync Server Administration Console only to view and assign Data Propagator
subscriptions to subscription sets.
All the Data Propagator subscriptions use the Data Propagator replication engine.

Each Data Propagator replication environment consists of a source system and a
mirror system. The source system contains
the source database, the tables that you want to replicate, and the capture
process that is used to capture the data changes.
The mirror system contains the mirror database and tables. DB2 Everyplace starts
the apply process on the mirror system.

When capturing changes to the change data tables, the capture process that is
running on both the source system and the mirror system
will consume processor resources and input/output resources. As a result of this
additional load on the source system,
replication competes with the source applications for system resources.
Additionally, with the Data Propagator engine,
the number of moves that the changed data has to make between the tables in the
mirror system is higher than with the JDBC engine.
As a result, the mirror database requires a substantially larger logging space
than the JDBC replication engine.
Capacity planners should balance the needs of the replication tasks and source
application to determine the size of the source system accordingly.

-How the Data Propagator replication engine handles data changes to the source system
When the source application changes a table in the source system, the Data Propagator
replication engine first captures the changes, synchronizes them to the mirror system,
and then applies them to the target system (the mobile device).
-How the Data Propagator replication engine handles data changes to the client system
When the client application on the mobile device changes a table in the client system,
the Data Propagator replication engine first synchronizes the changes to the mirror
system, captures them, and then applies them to the source system.

Note 12: commitscn
==================

The COMMIT SCN - an undocumented feature May 1999

--------------------------------------------------------------------------------

Try the following experiment:

create table t (n1 number);

insert into t values (userenv('commitscn'));
select n1 from t;
N1
--------
43526438
rem Wait a few seconds if there are other people working
rem on your system, or start a second session execute a
rem couple of small (but real) transactions and commits then
commit;
select n1 from t;
N1
--------
43526441

Obviously your values for N1 will not match the values above, but you should see
that somehow the data you inserted into
your table was not the value that was finally committed, so what's going on?

The userenv('commitscn') function has to be one of the most quirky little
undocumented features of Oracle. You can only use it in a very restricted
fashion, but if you follow the rules the value that hits the database is the
current value of the SCN (System Commit Number), but when you commit your
transaction the number changes to the latest value of the SCN, which is always
just one less than the commit SCN used by your transaction.

Why on earth, you say, would Oracle produce such a weird function - and how on
earth do they stop it from costing a fortune in processing time?

To answer the first question, think replication. Back to the days of 7.0.9, when a
client asked me to build a system
which used asynchronous replication between London and New York; eventually I
persuaded him this was not a good idea,
especially on early release software when the cost to the business of an error
would be around $250,000 per shot;
nevertheless I did have to demonstrate that in principle it was possible. The
biggest problem, though, was guaranteeing
that transactions were applied at the remote site in exactly the same order that
they had been committed at the
local site; and this is precisely where Oracle uses userenv('commitscn').

Each time a commit hits the database, the SCN is incremented, so each transaction
is 'owned' by an SCN and no two transactions
can belong to a single SCN - ultimately the SCN generator is the single-thread
through which all the database must pass
and be serialised. Although there is a small arithmetical quirk that the value of
the userenv('commitscn') is changed to one
less than the actual SCN used to commit the transaction, nevertheless each
transaction gets a unique, correctly ordered value
for the function. If you have two transactions, the one with the lower value of
userenv('commitscn') is guaranteed to be the one that committed first.

So how does Oracle ensure that the cost of using this function is not prohibitive? Well, you need to examine Oracle errors
Well you need to examine Oracle errors
1735 and 1721 in the $ORACLE_HOME/rdbms/admin/mesg/oraus.msg file.

ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction
ORA-01735: USERENV('COMMITSCN') not allowed here

You may only use userenv('commitscn') to update exactly one column of one row in a
transaction, or insert exactly
one value for one row in a transaction, and (just to add that final touch of
peculiarity) the column type has to be
an unconstrained number type otherwise the subsequent change does not take place.
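
A two-line illustration of that restriction (my own example; the second insert in
the same transaction should fail):

insert into t values (userenv('commitscn'));
insert into t values (userenv('commitscn'));
-- ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction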

--------------------------------------------------------------------------------

Build Your Own Replication:

Given this strange function, here's the basis of what you have to do to write your
own replication code:

create table control_table(sequence_id number, commit_id number);

begin transaction
insert into control_table (sequence_id,commit_id)
select meaningless_sequence.nextval, null
from dual;
-- save the value of meaningless_sequence
-- left as a language-specific exercise
update control_table
set commit_id = userenv('commitscn')
where sequence_id = {saved value of meaningless_sequence};
-- now do all the rest of the work, and include the saved
-- meaningless_sequence.currval in every row of every table
commit;
end transaction

If you now transport the changed data to the remote site, using the commit_id to
send the transactions
in the correct order, and the sequence_id to find the correct items of data, most
of your problems are over.
(Although you still have some messy details which are again left as an exercise.)
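
For completeness, a minimal PL/SQL sketch of the same skeleton (assuming the
sequence meaningless_sequence exists; the RETURNING clause stands in for the
language-specific save of the sequence value):

DECLARE
  v_seq NUMBER;
BEGIN
  INSERT INTO control_table (sequence_id, commit_id)
  VALUES (meaningless_sequence.NEXTVAL, NULL)
  RETURNING sequence_id INTO v_seq;        -- save the sequence value
  UPDATE control_table
  SET commit_id = userenv('commitscn')     -- exactly once per transaction
  WHERE sequence_id = v_seq;
  -- now do all the rest of the work, tagging every row with v_seq
  COMMIT;
END;
/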

Note 13:
========

Oracle stream not working as Logminer is down


Posted: Dec 17, 2007 11:58 PM

Hi,

Oracle streams capture process is not capturing any updates made on table for
which capture & apply
process are configured.

Capture process & apply process are running fine, showing enabled as status & no
error. But no new records are captured in 'streams_queue_table' when I update a
record in the table which is configured for capturing changes.

This setup was working till I got an 'ORA-01341: LogMiner out-of-memory' error in
the alert.log file.
I guess logminer is not capturing the updates from redo log.

Current Alert log is showing following lines for logminer init process
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 10M

But same log was like this before


LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
>>LOGMINER: session# = 1, reader process P002 started with pid=18 OS id=5812
>>LOGMINER: session# = 1, builder process P003 started with pid=36 OS id=3304
>>LOGMINER: session# = 1, preparer process P004 started with pid=37 OS id=1496

We can clearly see reader, builder & preparer process are not starting after I got
Out of memory exception
in log miner.

To allocate more space to LogMiner, I tried to set up a tablespace for LogMiner,
and got two exceptions which contradicted each other:
SQL> exec DBMS_LOGMNR.END_LOGMNR();
BEGIN DBMS_LOGMNR.END_LOGMNR(); END;

*
ERROR at line 1:
>>ORA-01307: no LogMiner session is currently active
ORA-06512: at "SYS.DBMS_LOGMNR", line 76
ORA-06512: at line 1

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts');
BEGIN DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts'); END;

*
ERROR at line 1:
>>ORA-01356: active logminer sessions found
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 232
ORA-06512: at line 1

When I tried stopping LogMiner the exception was 'no LogMiner session is currently
active', but when I tried to set up the tablespace the exception was 'active
logminer sessions found'. I am not sure how to resolve this issue.

Please let me know how to resolve this issue.

Thanks

Re: Oracle stream not working as Logminer is down
Posted: Dec 19, 2007 3:34 AM in response to: sgurusam

The Logminer session associated with a capture process is a special kind of
session which is called a "persistent session". You will not be able to stop it
using DBMS_LOGMNR. This package controls only non-persistent sessions.

To stop the persistent LogMiner session you must stop the capture process.

However, I think your problem is more related to a lack of RAM space instead of
tablespace (i.e., disk) space. Try to increase the size of the SGA allocated to
LogMiner, by setting the capture parameter _SGA_SIZE. I can see you are using the
default of 10M, which may not be enough for your case. Of course, you will have
to increase the values of the init parameters streams_pool_size,
sga_target/sga_max_size accordingly, to avoid other memory problems.

To set the _SGA_SIZE parameter, use the PL/SQL procedure
DBMS_CAPTURE_ADM.SET_PARAMETER.
The example below would set it to 100Megs:

begin
DBMS_CAPTURE_ADM.set_parameter('<name of capture process>','_SGA_SIZE','100');
end;
/

I hope this helps.

Re: Oracle stream not working as Logminer is down
Posted: Jan 21, 2008 5:55 AM in response to: ilidioj

The other way around is to clear the archived logs on your box.
You can use RMAN for doing the same.

Re: Oracle stream not working as Logminer is down
Posted: Jan 21, 2008 5:56 AM in response to: anoopS

The best way is to write a function for clearing up the archivelogs and schedule
it at regular intervals to avoid these kinds of errors.
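
For example, with RMAN (a sketch; the seven-day window is only an illustration,
and logs still needed by Streams should of course not be removed):

RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';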

Note 14:
========

> I have set up asyncronous hotlog change data capture from a 9iR2 mainframe
> oracle database to an AIX 10gR2 database. The mainframe process didn't work
> and put the capture into Abended status.
>
> *** SESSION ID:(21.175) 2006-08-01 17:28:51.777
> error 10010 in STREAMS process
> ORA-10010: Begin Transaction
> ORA-00308: cannot open archived log '//'EDB.RL11.ARCHLOG.T1.S5965.DBF''
> ORA-04101: OS/390 implementation layer error
> ORA-04101, FUNC=LOCATE , RC=8, RS=C5C7002A, ERRORID=1158
> ::OPIRIP: Uncaught error 447. Error stack::ORA-00447: fatal error in
> background
> A-00308: cannot open archived log '//'EDB.RL11.ARCHLOG.T1.S5965.DBF''
> ORA-04101: OS/390 implementation layer error
> ORA-04101, FUNC=LOCATE , RC=8, RS=C5C7002A, ERRORID=1158
>
> This is because I had lower case characters in the log file format in the
> init.ora on the mainframe. The actual log file that was created was a
> completely different name.
>
> I shut down the database and fixed the init.ora. Switched the log file. I
> dropped all the objects that I created for CDC. I recreated the capture and
> altered the start scn of the capture to the current log which I found by
> running: select to_char(max(first_change#)) from v$log;
>
> I created the other objects, but when I run
> dbms_cdc_publish.alter_hotlog_change_source to enable, it immediately changes
> the capture from disabled to abended status, and gives me the same error
> message as above.
>
> How do I get the capture out of abended status, and how do I get it to NOT
> try to find the old archive log file (which isn't there anyways)?
>
Any help would be greatly appreciated!
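
Note: resetting the start SCN of an existing capture process is normally done
along these lines (a sketch; the capture name comes from DBA_CAPTURE and the SCN
from the v$log query above):

exec DBMS_CAPTURE_ADM.ALTER_CAPTURE(capture_name => '<capture name>', start_scn => <scn>);
exec DBMS_CAPTURE_ADM.START_CAPTURE('<capture name>');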

==============================================================================================
Note 15: Async CDC extended TEST:
==============================================================================================
Purpose: Test Async CDC Hotlog and solve errors
1. long running txn detected
2. stop of capture

Date : 26/02/2008
DB : 10.2.0.3
------------------------------------------------------------------------
SOURCE TABLE OWNER: ALBERT
SOURCE TABLE : PERSOON
PUBLISHER : publ_cdc
CDC_SET : CDC_DEMO_SET
SUBSCRIBER : subs_cdc
CHANGE TABLE : CDC_PERSOON
CHANGE_SOURCE : SYNC_SOURCE
------------------------------------------------------------------------

set ORACLE_HOME=C:\ora10g\product\10.2.0\db_1

Init:

-- specific:
TEST10G:
startup mount pfile=c:\oracle\admin\test10g\pfile\init.ora
TEST10G2:
startup mount pfile=c:\oracle\admin\test10g2\pfile\init.ora

-- common:
alter database archivelog;
archive log start;
alter database force logging;
alter database add supplemental log data;
alter database open;

archive log list -- Archive Mode


show parameter aq_tm_processes -- min 3
show parameter compatible -- must be 10.1.0 or above
show parameter global_names -- must be TRUE
show parameter job_queue_processes -- min 2 recommended 4-6
show parameter open_links -- not less than the default 4
show parameter shared_pool_size -- must be 0 or at least 200MB
show parameter streams_pool_size -- min. 480MB (10MB/capture 1MB/apply)
show parameter undo_retention -- min. 3600 (1 hr.) (900)

-- Examples of altering initialization parameters


alter system set aq_tm_processes=3 scope=BOTH;
alter system set compatible='10.2.0.1.0' scope=SPFILE;
alter system set global_names=TRUE scope=BOTH;
alter system set job_queue_processes=6 scope=BOTH;
alter system set open_links=4 scope=SPFILE;
alter system set streams_pool_size=200M scope=BOTH; -- very slow if making smaller
alter system set undo_retention=3600 scope=BOTH;

/*
JOB_QUEUE_PROCESSES (current value) + 2
PARALLEL_MAX_SERVERS (current value) + (5 * (the number of change sets planned))
PROCESSES (current value) + (7 * (the number of change sets planned))
SESSIONS (current value) + (2 * (the number of change sets planned))
*/
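
As an illustration of those formulas for a single planned change set (the
starting values below are hypothetical; substitute your own current settings):

alter system set job_queue_processes=6 scope=BOTH;    -- was 4: current + 2
alter system set parallel_max_servers=25 scope=BOTH;  -- was 20: current + (5 * 1)
alter system set processes=157 scope=SPFILE;          -- was 150: current + (7 * 1)
alter system set sessions=172 scope=SPFILE;           -- was 170: current + (2 * 1)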

------------------------------------------------------------------------

Admin Queries:
connect / as sysdba

select * FROM DBA_SOURCE_TABLES;

SELECT
SET_NAME,CHANGE_SOURCE_NAME,BEGIN_SCN,END_SCN,CAPTURE_ENABLED,PURGING,QUEUE_NAME
FROM CHANGE_SETS;

SELECT OWNER, QUEUE_TABLE, TYPE, OBJECT_TYPE, RECIPIENTS
FROM DBA_QUEUE_TABLES;

SELECT SET_NAME,STATUS,EARLIEST_SCN,LATEST_SCN,to_char(LAST_PURGED, 'DD-MM-YYYY;HH24:MI'),
to_char(LAST_EXTENDED, 'DD-MM-YYYY;HH24:MI'),SUBSCRIPTION_NAME
FROM DBA_SUBSCRIPTIONS;

SELECT PROPAGATION_SOURCE_NAME, PROPAGATION_NAME, STAGING_DATABASE, DESTINATION_QUEUE
FROM CHANGE_PROPAGATIONS;

SELECT tablespace_name, force_logging
FROM dba_tablespaces;

SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui,
supplemental_log_data_fk, supplemental_log_data_all, force_logging
FROM gv$database;

SELECT owner, name, QUEUE_TABLE, ENQUEUE_ENABLED, DEQUEUE_ENABLED
FROM dba_queues;

SELECT capture_name, total_messages_captured, total_messages_enqueued, elapsed_enqueue_time
FROM dba_hist_streams_capture;

SELECT apply_name, reader_total_messages_dequeued, reader_lag, server_total_messages_applied
FROM dba_hist_streams_apply_sum;

SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui,
supplemental_log_data_fk, supplemental_log_data_all
FROM dba_capture_prepared_tables;

SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual;

SELECT EQ_NAME,EQ_TYPE,TOTAL_WAIT#,FAILED_REQ#,CUM_WAIT_TIME,REQ_DESCRIPTION
FROM V_$ENQUEUE_STATISTICS WHERE CUM_WAIT_TIME>0 ;

SELECT set_name,capture_name,queue_name,queue_table_name,capture_enabled
FROM cdc_change_sets$;

SELECT set_name,capture_name,capture_enabled
FROM cdc_change_sets$;

SELECT set_name, CAPTURE_ENABLED, BEGIN_SCN, END_SCN,LOWEST_SCN,CAPTURE_ERROR
FROM cdc_change_sets$;

SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
FROM change_sets;

SELECT subscription_name, handle, set_name, username, earliest_scn, description
FROM cdc_subscribers$;

SELECT username
FROM dba_users u, streams$_privileged_user s
WHERE u.user_id = s.user#;

SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN, cap.REQUIRED_CHECKPOINT_SCN
FROM DBA_CAPTURE cap, CHANGE_SETS cset
WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND cap.CAPTURE_NAME = cset.CAPTURE_NAME;

SELECT r.SOURCE_DATABASE,r.SEQUENCE#,r.NAME,r.DICTIONARY_BEGIN,r.DICTIONARY_END
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE c.CAPTURE_NAME = 'CDC$C_CHANGE_SET_ALBERT' AND r.CONSUMER_NAME =
c.CAPTURE_NAME;

SELECT CONSUMER_NAME,PURGEABLE,THREAD#, FIRST_SCN,NEXT_SCN, SEQUENCE#
FROM DBA_REGISTERED_ARCHIVED_LOG;

------------------------------------------------------------------------

>>>>>>>>>>> connect / as sysdba

Initial:

-- TS

CREATE TABLESPACE TS_CDC DATAFILE 'C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO
LOGGING
FORCE LOGGING;

- USERS:

create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON ts_cdc
;
create user publ_cdc identified by publ_cdc
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC
;

create user subs_cdc identified by subs_cdc
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC
;

-- GRANTS:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
GRANT connect TO albert;
GRANT resource TO albert;

GRANT create session TO publ_cdc;
GRANT create table TO publ_cdc;
GRANT create sequence TO publ_cdc;
GRANT create procedure TO publ_cdc;
GRANT connect TO publ_cdc;
GRANT resource TO publ_cdc;
GRANT dba TO publ_cdc;

GRANT create session TO subs_cdc;
GRANT create table TO subs_cdc;
GRANT create sequence TO subs_cdc;
GRANT create procedure TO subs_cdc;
GRANT connect TO subs_cdc;
GRANT resource TO subs_cdc;
GRANT dba TO subs_cdc;

GRANT execute_catalog_role TO publ_cdc;
GRANT select_catalog_role TO publ_cdc;

GRANT execute_catalog_role TO subs_cdc;
GRANT select_catalog_role TO subs_cdc;

-- object privileges
GRANT execute ON dbms_cdc_publish TO publ_cdc;
GRANT execute ON dbms_cdc_subscribe TO publ_cdc;
GRANT execute ON dbms_lock TO publ_cdc;

GRANT execute ON dbms_cdc_publish TO subs_cdc;
GRANT execute ON dbms_cdc_subscribe TO subs_cdc;
GRANT execute ON dbms_lock TO subs_cdc;

execute dbms_streams_auth.grant_admin_privilege('publ_cdc');
SQL> SELECT *
2 FROM dba_streams_administrator;

USERNAME LOC ACC


------------------------------ --- ---
publ_cdc YES YES

SQL> desc dba_streams_administrator;

SQL> SELECT username
2 FROM dba_users u, streams$_privileged_user s
3 WHERE u.user_id = s.user#;

USERNAME
------------------------------
publ_cdc

------------------------------------------------------------------------

CDC:
====

-- CREATE CHANGE_SET

>>>>>>>>>>> connect albert/albert

create table persoon
(
userid number,
name varchar(30),
lastname varchar(30),
constraint pk_userid primary key (userid)
);

GRANT SELECT ON PERSOON TO publ_cdc;
GRANT SELECT ON PERSOON TO subs_cdc;

ALTER TABLE persoon
ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

>>>>>>>>>>> connect / as sysdba

SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual;

DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
-----------------------------------------
608789

exec dbms_capture_adm.prepare_table_instantiation(table_name => 'ALBERT.PERSOON');

SQL> SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui,
2 supplemental_log_data_fk, supplemental_log_data_all
3 FROM dba_capture_prepared_tables;

TABLE_NAME SCN SUPPLEME SUPPLEME SUPPLEME SUPPLEME


------------------------------ ---------- -------- -------- -------- --------
PERSOON 608809 IMPLICIT IMPLICIT IMPLICIT EXPLICIT

SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual;

DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
-----------------------------------------
608879

>>>>>>>>>>> connect publ_cdc/publ_cdc (done as publisher!!)

exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set', 'HOTLOG_SOURCE', 'Y', NULL, NULL);

Note the 'HOTLOG_SOURCE' !!

SQL> exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set', 'HOTLOG_SOURCE', 'Y', NULL, NULL);

PL/SQL procedure successfully completed.

Note:

if you need to drop a change set, use:
DBMS_CDC_PUBLISH.DROP_CHANGE_SET(change_set_name IN VARCHAR2);
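
For the set created in this test that would be:

exec DBMS_CDC_PUBLISH.DROP_CHANGE_SET('CDC_DEMO_SET');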

>>>>>>>>>>> conn / as sysdba

SQL> SELECT set_name, capture_name, queue_name, queue_table_name
2 FROM cdc_change_sets$;

SET_NAME CAPTURE_NAME QUEUE_NAME


QUEUE_TABLE_NAME
------------------------------ ------------------------------
------------------------------ -------
SYNC_SET
CDC_DEMO_SET CDC$C_CDC_DEMO_SET CDC$Q_CDC_DEMO_SET
CDC$T_CDC_DEMO_SET

SQL>
SQL> SELECT set_name, CAPTURE_ENABLED, BEGIN_SCN, END_SCN,LOWEST_SCN,CAPTURE_ERROR
2 FROM cdc_change_sets$;

SET_NAME C BEGIN_SCN END_SCN LOWEST_SCN C


------------------------------ - ---------- ---------- ---------- -
SYNC_SET Y 0 N
CDC_DEMO_SET N 0 N

SQL> SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
2 FROM change_sets;

SET_NAME CHANGE_SOURCE_NAME C S PUBLISHER


------------------------------ ------------------------------ - -
------------------------------
SYNC_SET SYNC_SOURCE Y N
CDC_DEMO_SET HOTLOG_SOURCE N Y SYS

SQL>
SQL> SELECT subscription_name, handle, set_name, username, earliest_scn, description
2 FROM cdc_subscribers$;

no rows selected

-- CREATE CHANGE TABLE:

>>>>>>>>>>> conn publ_cdc/publ_cdc

BEGIN
dbms_cdc_publish.create_change_table('publ_cdc', 'CDC_PERSOON', 'CDC_DEMO_SET',
'ALBERT', 'PERSOON', 'userid number, name varchar(30), lastname varchar(30)',
'BOTH', 'Y', 'Y', 'Y', 'Y', 'N', 'N', 'Y', 'TABLESPACE TS_CDC');
END;
/

The publisher can use this procedure for asynchronous and synchronous Change Data
Capture.
However, the default values for the following parameters are the only supported
values
for synchronous change sets: begin_date, end_date, and stop_on_ddl.

SQL> BEGIN
2 dbms_cdc_publish.create_change_table('publ_cdc', 'CDC_PERSOON',
'CDC_DEMO_SET',
3 'ALBERT', 'PERSOON', 'userid number, name varchar(30), lastname varchar(30)',

4 'BOTH', 'Y', 'Y', 'Y', 'Y', 'N', 'N', 'Y', 'TABLESPACE TS_CDC');
5 END;
6 /

PL/SQL procedure successfully completed.
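
For reference, the same call in named notation (a sketch based on the 10g
DBMS_CDC_PUBLISH parameter names; verify against your release):

BEGIN
  dbms_cdc_publish.create_change_table(
    owner             => 'publ_cdc',
    change_table_name => 'CDC_PERSOON',
    change_set_name   => 'CDC_DEMO_SET',
    source_schema     => 'ALBERT',
    source_table      => 'PERSOON',
    column_type_list  => 'userid number, name varchar(30), lastname varchar(30)',
    capture_values    => 'BOTH',
    rs_id             => 'Y',   -- add row sequence column
    row_id            => 'Y',   -- add rowid column
    user_id           => 'Y',   -- add username column
    timestamp         => 'Y',   -- add commit timestamp column
    object_id         => 'N',
    source_colmap     => 'N',
    target_colmap     => 'Y',
    options_string    => 'TABLESPACE TS_CDC');
END;
/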

GRANT select ON CDC_PERSOON TO subs_cdc;

Note: To drop a change table use:

-- drop the change table
DBMS_CDC_PUBLISH.DROP_CHANGE_TABLE(
owner IN VARCHAR2,
change_table_name IN VARCHAR2,
force_flag IN CHAR);

exec dbms_cdc_publish.drop_change_table('publ_cdc','CDC_PERSOON','Y');

>>>>>>>>>>> connect / as sysdba

SQL> SELECT change_set_name, source_schema_name, source_table_name
2 FROM cdc_change_tables$;

CHANGE_SET_NAME SOURCE_SCHEMA_NAME SOURCE_TABLE_NAME


------------------------------ ------------------------------
------------------------------
CDC_DEMO_SET ALBERT PERSOON

SQL> SELECT set_name,capture_name,capture_enabled
2 FROM cdc_change_sets$;

SET_NAME CAPTURE_NAME C
------------------------------ ------------------------------ -
SYNC_SET Y
CDC_DEMO_SET CDC$C_CDC_DEMO_SET N

>>>>>>>>>>> connect publ_cdc/publ_cdc

exec dbms_cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET', enable_capture=> 'Y');

SQL> exec dbms_cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET', enable_capture=> 'Y');

PL/SQL procedure successfully completed.

>>>>>>>>>>> connect / as sysdba

SQL> SELECT set_name,capture_name,capture_enabled
2 FROM cdc_change_sets$;

SET_NAME CAPTURE_NAME C
------------------------------ ------------------------------ -
SYNC_SET Y
CDC_DEMO_SET CDC$C_CDC_DEMO_SET Y

SQL> SELECT owner, name, QUEUE_TABLE, ENQUEUE_ENABLED, DEQUEUE_ENABLED
2 FROM dba_queues;

OWNER NAME QUEUE_TABLE


ENQUEUE DEQUEUE
------------------------------ ------------------------------
------------------------------ -------
SYS CDC$Q_CDC_DEMO_SET CDC$T_CDC_DEMO_SET
YES YES
SYS AQ$_CDC$T_CDC_DEMO_SET_E CDC$T_CDC_DEMO_SET
NO NO
........
........

SQL> SELECT OWNER, QUEUE_TABLE, TYPE, OBJECT_TYPE, RECIPIENTS
2 FROM DBA_QUEUE_TABLES;

OWNER QUEUE_TABLE TYPE OBJECT_TYPE

------------------------------ ------------------------------ -------


-------------------------
..........
..........
SYS CDC$T_CDC_DEMO_SET OBJECT SYS.ANYDATA

..........

SQL> SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
2 FROM change_sets;

SET_NAME CHANGE_SOURCE_NAME C S PUBLISHER


------------------------------ ------------------------------ - -
------------------------------
SYNC_SET SYNC_SOURCE Y N
CDC_DEMO_SET HOTLOG_SOURCE Y Y SYS

SQL>
SQL> SELECT subscription_name, handle, set_name, username, earliest_scn,
description
2 FROM cdc_subscribers$;

no rows selected

>>>>>>>>>>> connect subs_cdc/subs_cdc

exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_SET', 'cdc_demo subx', 'CDC_DEMO_SUB');

SQL> exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_SET', 'cdc_demo subx', 'CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

>>>>>>>>>>> connect / as sysdba

SQL> SELECT subscription_name, handle, set_name, username, earliest_scn, description
2 FROM cdc_subscribers$;

SUBSCRIPTION_NAME HANDLE SET_NAME USERNAME


EARLIEST_SCN DESCRIP
------------------------------ ---------- ------------------------------
---------------------------
CDC_DEMO_SUB 1 CDC_DEMO_SET subs_cdc
1 cdc_dem

Note:
If you want to drop a subscription, use:

DBMS_CDC_SUBSCRIBE.DROP_SUBSCRIPTION(subscription_name IN VARCHAR2);
DBMS_CDC_SUBSCRIBE.DROP_SUBSCRIPTION('SUBSCRIPTION_ALBERT');

>>>>>>>>>> connect subs_cdc/subs_cdc

BEGIN
dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'ALBERT', 'PERSOON',
'userid, name, lastname', 'CDC_DEMO_SUB_VIEW');
END;
/

SQL> BEGIN
2 dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'ALBERT', 'PERSOON',
3 'userid, name, lastname', 'CDC_DEMO_SUB_VIEW');
4 END;
5 /

PL/SQL procedure successfully completed.

SQL> SELECT set_name, subscription_name, status
2 FROM user_subscriptions;

SET_NAME SUBSCRIPTION_NAME S
------------------------------ ------------------------------ -
CDC_DEMO_SET CDC_DEMO_SUB N

exec dbms_cdc_subscribe.activate_subscription('CDC_DEMO_SUB');

SQL> exec dbms_cdc_subscribe.activate_subscription('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> SELECT set_name, subscription_name, status
2 FROM user_subscriptions;

SET_NAME SUBSCRIPTION_NAME S
------------------------------ ------------------------------ -
CDC_DEMO_SET CDC_DEMO_SUB A

>>>>>>>>>>> connect albert/albert

SQL> insert into persoon
2 values
3 (1,'piet','pietersen');
1 row created.

SQL> commit;

Commit complete.

SQL> insert into persoon
2 values
3 (2,'jan','janssen');

1 row created.

SQL> commit;

Commit complete.

>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc

exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT

SQL> select * from CDC_DEMO_SUB_VIEW;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ ROW_ID$


RSID$ TARGET_COLMAP$
-- ---------- --------- ---------- ---------- ---------- ------------------
---------- -------------
I 627180 27-FEB-08 2 44 323 AAAM1CAAGAAAAAQAAA
1 FE7F000000000000000000000000000
I 627232 27-FEB-08 10 7 326 AAAM1CAAGAAAAAQAAB
2 FE7F000000000000000000000000000

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAQAAA 1 piet
I 27-FEB-08 AAAM1CAAGAAAAAQAAB 2 jan

>>>>>>>>>>>>>> connect albert/albert


insert into persoon
values
(3,'kees','pot');

>>>>>>>>>>>>>> connect subs_cdc/subs_cdc

SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT
I 628175 27-FEB-08 9 16 351 10001
AAAM1CAAGAAAAAQAAC ALBERT

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAQAAA 1 piet
I 27-FEB-08 AAAM1CAAGAAAAAQAAB 2 jan

exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAQAAA 1 piet
I 27-FEB-08 AAAM1CAAGAAAAAQAAB 2 jan
I 27-FEB-08 AAAM1CAAGAAAAAQAAC 3 kees

exec dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB');

SQL> exec dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT
I 628175 27-FEB-08 9 16 351 10001
AAAM1CAAGAAAAAQAAC ALBERT

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

no rows selected

>>>>>>>>>>>>>> connect albert/albert


Connected.

SQL> insert into persoon
2 values
3 (4,'joop','joopsen');

>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc


Connected.
SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from
CDC_DEMO_SUB_VIEW;

no rows selected

SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT
I 628175 27-FEB-08 9 16 351 10001
AAAM1CAAGAAAAAQAAC ALBERT
I 628841 27-FEB-08 5 3 350 20001
AAAM1CAAGAAAAAPAAA ALBERT

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop

>>>>>>>>>>>>>> connect albert/albert


SQL> insert into persoon
2 values
3 (5,'gerrit','gerritsen');

1 row created.

SQL> commit;

>>>>>>>>>>>>>> connect subs_cdc/subs_cdc

SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 636854 27-FEB-08 2 7 333 30001
AAAM1CAAGAAAAAOAAA ALBERT
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT
I 628175 27-FEB-08 9 16 351 10001
AAAM1CAAGAAAAAQAAC ALBERT
I 628841 27-FEB-08 5 3 350 20001
AAAM1CAAGAAAAAPAAA ALBERT

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAOAAA 5 gerrit
I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop

>>>>>>>>>>>>>>>> connect albert/albert@test10g


Connected.
SQL> insert into persoon
2 values
3 (6,'marie','bruinsma');
1 row created.

SQL> commit;

Commit complete.

>>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc


Connected.
SQL> select * from publ_cdc.CDC_PERSOON;

OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$


USERNAME$
-- ---------- --------- ---------- ---------- ---------- ----------
------------------ -------------
I 636854 27-FEB-08 2 7 333 30001
AAAM1CAAGAAAAAOAAA ALBERT
I 643057 27-FEB-08 9 13 364 40001
AAAM1CAAGAAAAAPAAB ALBERT
I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT
I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT
I 628175 27-FEB-08 9 16 351 10001
AAAM1CAAGAAAAAQAAC ALBERT
I 628841 27-FEB-08 5 3 350 20001
AAAM1CAAGAAAAAPAAA ALBERT

6 rows selected.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAOAAA 5 gerrit
I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 27-FEB-08 AAAM1CAAGAAAAAOAAA 5 gerrit
I 27-FEB-08 AAAM1CAAGAAAAAPAAB 6 marie
I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop

>>>>>>>>>>> Now about RMAN:

A redo log file used by Change Data Capture must remain available on the staging
database until Change Data Capture
has captured it. However, it is not necessary that the redo log file remain
available until the Change Data Capture
subscriber is done with the change data.

To determine which redo log files are no longer needed by Change Data Capture for
a given change set,
the publisher alters the change set's Streams capture process, which causes
Streams to perform some internal
cleanup and populates the DBA_LOGMNR_PURGED_LOG view. The publisher follows these
steps:

Uses the following query on the staging database to get the three SCN values
needed to determine an appropriate new first_scn value for the change set
(CDC_DEMO_SET in this test):

SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN, cap.REQUIRED_CHECKPOINT_SCN
FROM DBA_CAPTURE cap, CHANGE_SETS cset
WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND
cap.CAPTURE_NAME = cset.CAPTURE_NAME;

SQL> SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN,
2 cap.REQUIRED_CHECKPOINT_SCN
3 FROM DBA_CAPTURE cap, CHANGE_SETS cset
4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND
5 cap.CAPTURE_NAME = cset.CAPTURE_NAME;

CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKPOINT_SCN


------------------------------ ---------- ----------- -----------------------
CDC$C_CDC_DEMO_SET 610086 672502 665072

SQL> SELECT recid, first_change#, sequence#, next_change#
2 FROM V$LOG_HISTORY;

RECID FIRST_CHANGE# SEQUENCE# NEXT_CHANGE#


---------- ------------- ---------- ------------
1 534907 1 555371
2 555371 2 557968
.........
68 648702 68 651777
69 651777 69 653085
70 653085 70 655053
71 655053 71 655234
72 655234 72 656658
73 656658 73 657846
74 657846 74 659879
75 659879 75 662288
76 662288 76 662292
77 662292 77 662297
78 662297 78 662312
79 662312 79 662322
80 662322 80 662329
81 662329 81 662337
--> 82 662337 82 664708
83 664708 83 665724
84 665724 84 670061
85 670061 85 674246

85 rows selected.

SQL> SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40)
2 FROM V$ARCHIVED_LOG;

FIRST_CHANGE# NEXT_CHANGE# SEQUENCE# ARC SUBSTR(NAME,1,40)


------------- ------------ ---------- ---
----------------------------------------------------------
570647 572710 19 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
.........
657846 659879 74 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
659879 662288 75 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662288 662292 76 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662292 662297 77 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662297 662312 78 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662312 662322 79 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662322 662329 80 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662329 662337 81 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
662337 664708 82 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
--> 664708 665724 83 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
665724 670061 84 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR
670061 674246 85 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR

104 rows selected.
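
Given those figures, the publisher can then raise the first SCN of the capture
process and list the logs that have become purgeable (a sketch; 665072 is the
required_checkpoint_scn from the output above, and the capture name comes from
DBA_CAPTURE):

exec DBMS_CAPTURE_ADM.ALTER_CAPTURE(capture_name => 'CDC$C_CDC_DEMO_SET', first_scn => 665072);

SELECT file_name FROM DBA_LOGMNR_PURGED_LOG;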

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> TEST: <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Tuesday 13:15:47 2008 in alert log

C001: long running txn detected, xid: 0x0003.02a.00000160

C001: long txn committed, xid: 0x0003.02a.00000160

Tuesday 16:00:

>>>>>>>>>>> connect albert/albert

SQL> insert into persoon
2 values
3 (8,'appie','sel');

1 row created.

SQL> commit;

>>>>>>>>>>> connect subs_cdc/subs_cdc

SQL> select OPERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON;

OP RSID$ USERID NAME LASTNAME


-- ---------- ---------- ------------------------------
------------------------------
I 30001 5 gerrit gerritsen
I 40001 6 marie bruinsma
I 50001 7 lubbie lubbie
I 50002 8 appie sel <-------- indeed added
I 1 1 piet pietersen
I 2 2 jan janssen
I 10001 3 kees pot
I 20001 4 joop joopsen

8 rows selected.

SQL> connect sys/vga88nt@test10g as sysdba


Connected.
SQL> SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN,
2 cap.REQUIRED_CHECKPOINT_SCN
3 FROM DBA_CAPTURE cap, CHANGE_SETS cset
4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND
5 cap.CAPTURE_NAME = cset.CAPTURE_NAME;

CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKPOINT_SCN


------------------------------ ---------- ----------- -----------------------
CDC$C_CDC_DEMO_SET 610086 683349 683184

Tuesday 20:00h:

SQL> connect sys/vga88nt@test10g as sysdba


Connected.
SQL> SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN,
2 cap.REQUIRED_CHECKPOINT_SCN
3 FROM DBA_CAPTURE cap, CHANGE_SETS cset
4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND
5 cap.CAPTURE_NAME = cset.CAPTURE_NAME;

CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKPOINT_SCN


------------------------------ ---------- ----------- -----------------------
CDC$C_CDC_DEMO_SET 610086 690551 683349

NO LONG RUNNING TRANSACTIONS

Tuesday 21:20h

>>>>>>>>>>>>>>> conn albert/albert

SQL> connect albert/albert@test10g


Connected.
SQL> insert into persoon
2 values
3 (10,'marietje','popje');

1 row created.

Wednesday 08:00h
>>>>>>>>>>>>>> conn subs_cdc/subs_cdc

SQL> select * from albert.persoon;

USERID NAME LASTNAME


---------- ------------------------------ --------------
5 gerrit gerritsen
4 joop joopsen
6 marie bruinsma
7 lubbie lubbie
8 appie sel
1 piet pietersen
2 jan janssen
3 kees pot

8 rows selected.

SQL> select OPERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON;

OP RSID$ USERID NAME LASTNAME


-- ---------- ---------- ------------------------------
------------------------------
I 50001 7 lubbie lubbie
I 50002 8 appie sel

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

no rows selected

>>>>>>>>>>>>>>>>>>>>>>connect sys/vga88nt@test10g as sysdba


Connected.
SQL> select * from DBA_SOURCE_TABLES;

SOURCE_SCHEMA_NAME SOURCE_TABLE_NAME
------------------------------ -------------------------
ALBERT PERSOON

Wednesday 08:30:

>>>>>>>>>>>>>>>>>>>>>> connect albert/albert@test10g


Connected.
SQL> insert into persoon
2 values
3 (9,'nadia','nadia');

1 row created.

SQL> commit;

Commit complete.

>>>>>>>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc@test10g


Connected.
SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from
CDC_DEMO_SUB_VIEW;

no rows selected

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 28-FEB-08 AAAM1CAAGAAAAAPAAC 7 lubbie
I 28-FEB-08 AAAM1CAAGAAAAAPAAD 8 appie
I 28-FEB-08 AAAM1CAAGAAAAAQAAD 9 nadia

SQL> select OPERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON;

OP RSID$ USERID NAME LASTNAME


-- ---------- ---------- ------------------------------
------------------------------
I 50001 7 lubbie lubbie
I 50002 8 appie sel
I 60001 9 nadia nadia

Wednesday 8:45:

>>>>>>>>>>>>>>> connect albert/albert@test10g

SQL> insert into persoon
2 values
3 (10,'lejah','lejah');

1 row created.

No commit done

Wednesday 9:22:12 2008


C001: long running txn detected, xid: 0x0006.025.0000018f
etc..
12:02:31 2008
C001: long running txn detected, xid: 0x0006.025.0000018f

12.15 COMMIT

RMAN FULL BACKUP MADE

12.30

SQL> insert into persoon
2 values
3 (11,'mira','mira');

1 row created.

No COMMIT

13:02:38 2008
C001: long running txn detected, xid: 0x000a.019.000001a2
13:12:38 2008
C001: long running txn detected, xid: 0x000a.019.000001a2
13:22:40 2008
C001: long running txn detected, xid: 0x000a.019.000001a2

COMMIT

13:26:25 2008
C001: long txn committed, xid: 0x000a.019.000001a2

? knllgobjinfo: MISSING Streams multi-version data dictionary

13.29:

SQL> create table persoon2
2 (
3 userid number,
4 name varchar(30),
5 lastname varchar(30),
6 constraint pk_userid2 PRIMARY KEY (userid));

Table created.

SQL> insert into persoon2
2 values
3 (1,'piet','piet');

1 row created.

SQL> commit;

Commit complete.

13.46:

SQL> insert into persoon2 -- COMPLETELY INDEPENDENT OF CDC
2 values
3 (2,'karel','karel');

1 row created.

SQL>

NO COMMIT

16:00h: NO LONG RUNNING TRANSACTION EVER DETECTED.


27 Feb 16.00h

>>>>>>>>>>>> conn albert/albert

SQL> insert into persoon
2 values
3 (12,'xyz','xyz');

1 row created.

SQL> commit;

>>>>>>>>>>>>> conn subs_cdc/subs_cdc

SQL> select * from albert.persoon;

USERID NAME LASTNAME


---------- ------------------------------ ------------------------------
5 gerrit gerritsen
4 joop joopsen
6 marie bruinsma
7 lubbie lubbie
8 appie sel
1 piet pietersen
2 jan janssen
3 kees pot
9 nadia nadia
10 lejah lejah
11 mira mira
12 xyz xyz

12 rows selected.

SQL> select OPERATION$,COMMIT_TIMESTAMP$,USERNAME$,USERID,NAME from publ_cdc.cdc_persoon;

OP COMMIT_TI USERNAME$ USERID NAME


-- --------- ------------------------------ ----------
------------------------------
I 28-FEB-08 ALBERT 7 lubbie
I 28-FEB-08 ALBERT 8 appie
I 28-FEB-08 ALBERT 9 nadia
I 29-FEB-08 ALBERT 10 lejah
I 29-FEB-08 ALBERT 11 mira
I 29-FEB-08 ALBERT 12 xyz

6 rows selected.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 28-FEB-08 AAAM1CAAGAAAAAPAAC 7 lubbie
I 28-FEB-08 AAAM1CAAGAAAAAPAAD 8 appie
I 28-FEB-08 AAAM1CAAGAAAAAQAAD 9 nadia

SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$,COMMIT_TIMESTAMP$,USERNAME$,USERID,NAME from publ_cdc.cdc_persoon;

OP COMMIT_TI USERNAME$ USERID NAME


-- --------- ------------------------------ ----------
------------------------------
I 28-FEB-08 ALBERT 7 lubbie
I 28-FEB-08 ALBERT 8 appie
I 28-FEB-08 ALBERT 9 nadia
I 29-FEB-08 ALBERT 10 lejah
I 29-FEB-08 ALBERT 11 mira
I 29-FEB-08 ALBERT 12 xyz

6 rows selected.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;

OP COMMIT_TI ROW_ID$ USERID NAME


-- --------- ------------------ ---------- ------------------------------
I 28-FEB-08 AAAM1CAAGAAAAAPAAC 7 lubbie
I 28-FEB-08 AAAM1CAAGAAAAAPAAD 8 appie
I 28-FEB-08 AAAM1CAAGAAAAAQAAD 9 nadia
I 29-FEB-08 AAAM1CAAGAAAAAQAAE 10 lejah
I 29-FEB-08 AAAM1CAAGAAAAAQAAF 11 mira
I 29-FEB-08 AAAM1CAAGAAAAAQAAG 12 xyz

6 rows selected.

SQL> exec dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB');

PL/SQL procedure successfully completed.

SQL> select OPERATION$,COMMIT_TIMESTAMP$,USERNAME$,USERID,NAME from publ_cdc.cdc_persoon;

OP COMMIT_TI USERNAME$ USERID NAME


-- --------- ------------------------------ ----------
------------------------------
I 28-FEB-08 ALBERT 7 lubbie
I 28-FEB-08 ALBERT 8 appie
I 28-FEB-08 ALBERT 9 nadia
I 29-FEB-08 ALBERT 10 lejah
I 29-FEB-08 ALBERT 11 mira
I 29-FEB-08 ALBERT 12 xyz

6 rows selected.

SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW;
no rows selected

SQL>

Note about "long running txn":
------------------------------

I have found the following definition of a long running transaction:

- A long-running transaction is a transaction that has not received any LCRs
  for over 10 minutes. Open transactions (i.e., transactions where the commit or
  rollback has not been received) without new LCRs in 10 minutes will spill to
  the apply spill table.

In dba_apply_parameters you can find parameters of the apply process.
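
The listing below was produced with a query along these lines:

SELECT apply_name, parameter, value FROM dba_apply_parameters;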

APPLY_NAME PARAMETER VALUE


------------------------------ ------------------------------ ----------
CDC$A_CHANGE_SET_ALBERT ALLOW_DUPLICATE_ROWS N
CDC$A_CHANGE_SET_ALBERT COMMIT_SERIALIZATION NONE
CDC$A_CHANGE_SET_ALBERT DISABLE_ON_ERROR Y
CDC$A_CHANGE_SET_ALBERT DISABLE_ON_LIMIT Y
CDC$A_CHANGE_SET_ALBERT MAXIMUM_SCN INFINITE
CDC$A_CHANGE_SET_ALBERT PARALLELISM 1
CDC$A_CHANGE_SET_ALBERT STARTUP_SECONDS 0
CDC$A_CHANGE_SET_ALBERT TIME_LIMIT INFINITE
CDC$A_CHANGE_SET_ALBERT TRACE_LEVEL 0
CDC$A_CHANGE_SET_ALBERT TRANSACTION_LIMIT INFINITE
CDC$A_CHANGE_SET_ALBERT TXN_LCR_SPILL_THRESHOLD 10000
CDC$A_CHANGE_SET_ALBERT WRITE_ALERT_LOG Y
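
Any of these can be changed with DBMS_APPLY_ADM.SET_PARAMETER; for example,
raising the spill threshold (a sketch; the value 50000 is only an illustration):

begin
DBMS_APPLY_ADM.SET_PARAMETER('CDC$A_CHANGE_SET_ALBERT','TXN_LCR_SPILL_THRESHOLD','50000');
end;
/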

===============================================================================

TEST CASE: Export CDC objects from DATABASE TEST10G to DATABASE TEST10G2

- Make database properties in TEST10G2 same as in TEST10G (for example, archive
  logging, pools etc..)
- Create same CDC related tablespaces
- Create users in TEST10G2 DB
- GRANT ALL APPROPRIATE PERMISSIONS
- Export from TEST10G
- IMPORT INTO TEST10G2 (see the sketch below)
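
A hedged sketch of the export/import step with the classic exp/imp tools (the
password and file names below are placeholders):

exp system/manager@test10g owner=(albert,publ_cdc,subs_cdc) file=cdc_test.dmp log=cdc_exp.log
imp system/manager@test10g2 fromuser=(albert,publ_cdc,subs_cdc) touser=(albert,publ_cdc,subs_cdc) file=cdc_test.dmp log=cdc_imp.log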

===============================================================================
===============================================================================

PROBLEMS:
=========

1. long running txn detected:
-----------------------------
Not serious.

2. RMAN-08137: WARNING: archive log not deleted as it is still needed:
----------------------------------------------------------------------

WARNING: archive log not deleted as it is still needed

Cause
An archivelog that should have been deleted was not as it was required by Streams
or Data Guard.
The next message identifies the archivelog.

Action
This is an informational message. The archivelog can be deleted after it is no
longer needed.
See the documentation for Data Guard to alter the set of active Data Guard
destinations.
See the documentation for Streams to alter the set of active streams.

Starting backup at 27-FEB-08
channel t1: starting archive log backupset
channel t1: specifying archive log(s) in backup set
input archive log thread=1 sequence=600 recid=570 stamp=647820534
channel t1: starting piece 1 at 27-FEB-08
channel t1: finished piece 1 at 27-FEB-08
piece handle=ipj9q29g_1_1 tag=TAG20080227T233511 comment=API Version 2.0,MMS
Version 5.3.3.0
channel t1: backup set complete, elapsed time: 00:00:04
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log
filename=/dbms/tdbaaccp/accptrid/recovery/archive/arch_1_600_630505403.arch
thread=1 sequence=600
Finished backup at 27-FEB-08

Thu Feb 28 00:00:01 2008
Starting backup at 28-FEB-08
channel t1: starting archive log backupset
channel t1: specifying archive log(s) in backup set
input archive log thread=1 sequence=600 recid=570 stamp=647820534
channel t1: starting piece 1 at 28-FEB-08
channel t1: finished piece 1 at 28-FEB-08
piece handle=isj9q3o5_1_1 tag=TAG20080228T000004 comment=API Version 2.0,MMS
Version 5.3.3.0
channel t1: backup set complete, elapsed time: 00:00:04
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log
filename=/dbms/tdbaaccp/accptrid/recovery/archive/arch_1_600_630505403.arch
thread=1 sequence=600
Finished backup at 28-FEB-08

Thu Feb 28 01:00:01 2008
Starting backup at 28-FEB-08
channel t1: starting archive log backupset
channel t1: specifying archive log(s) in backup set
input archive log thread=1 sequence=600 recid=570 stamp=647820534
channel t1: starting piece 1 at 28-FEB-08
channel t1: finished piece 1 at 28-FEB-08
piece handle=ivj9q78l_1_1 tag=TAG20080228T010004 comment=API Version 2.0,MMS
Version 5.3.3.0
channel t1: backup set complete, elapsed time: 00:00:04
channel t1: deleting archive log(s)
archive log
filename=/dbms/tdbaaccp/accptrid/recovery/archive/arch_1_600_630505403.arch
recid=570 stamp=647820534
Finished backup at 28-FEB-08

Also handled.

3. ORA-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [], [], [], []
-----------------------------------------------------------------------------------------

LOGMINER: End mining logfile:
/dbms/tdbaaccp/accptrid/recovery/redo_logs/redo03.log
Thu Feb 28 09:10:30 2008
LOGMINER: Begin mining logfile:
/dbms/tdbaaccp/accptrid/recovery/redo_logs/redo01.log
Thu Feb 28 09:21:01 2008
Thread 1 advanced to log sequence 608
Current log# 2 seq# 608 mem# 0:
/dbms/tdbaaccp/accptrid/recovery/redo_logs/redo02.log
Thu Feb 28 09:21:01 2008
LOGMINER: End mining logfile:
/dbms/tdbaaccp/accptrid/recovery/redo_logs/redo01.log
Thu Feb 28 09:21:01 2008
LOGMINER: Begin mining logfile:
/dbms/tdbaaccp/accptrid/recovery/redo_logs/redo02.log
Thu Feb 28 09:22:46 2008
Errors in file /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_1491066.trc:
ORA-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [], [], [],
[]
Thu Feb 28 09:22:59 2008
Streams CAPTURE C001 with pid=25, OS id=1491066 stopped
Thu Feb 28 09:22:59 2008
Errors in file /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_1491066.trc:
ORA-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [], [], [],
[]

On AIX:

IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
A63BEB70   0221101308 P S SYSPROC       SOFTWARE PROGRAM ABNORMALLY TERMINATED
SQL> select SEQUENCE#, FIRST_CHANGE#,STATUS,ARCHIVED from v$log;

 SEQUENCE# FIRST_CHANGE# STATUS           ARC
---------- ------------- ---------------- ---
       607       9579856 INACTIVE         YES
       608       9594819 CURRENT          NO
       606       9579542 INACTIVE         YES

SQL> /

 SEQUENCE# FIRST_CHANGE# STATUS           ARC
---------- ------------- ---------------- ---
       607       9579856 INACTIVE         YES
       608       9594819 CURRENT          NO
       606       9579542 INACTIVE         YES

Warning: Errors detected in file
/dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_1499336.trc

> /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_1499336.trc
> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
> With the Partitioning, OLAP and Data Mining options
> ORACLE_HOME = /dbms/tdbaaccp/ora10g/home
> System name: AIX
> Node name: pl003
> Release: 3
> Version: 5
> Machine: 00CB560D4C00
> Instance name: accptrid
> Redo thread mounted by this instance: 1
> Oracle process number: 23
> Unix process pid: 1499336, image: oracle@pl003 (C001)
>
> *** 2008-02-28 10:49:04.501
> *** SERVICE NAME:(SYS$USERS) 2008-02-28 10:49:04.488
> *** SESSION ID:(195.1286) 2008-02-28 10:49:04.488
> KnlcLoop: priorCkptScn currentCkptScn
> 0x0000.00926d84 0x0000.00927073
> knlcLoop: buf_txns_knlcctx:1:: lowest bufLcrScn:0x0000.00926cca
> knlcPrintCharCachedTxn:xid: 0x000b.005.000001ac
> *** 2008-02-28 10:49:04.501
> ksedmp: internal or fatal error
> ORA-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [],
[], [], []
> OPIRIP: Uncaught error 447. Error stack:
> ORA-00447: fatal error in background process
> ORA-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [],
[], [], []

==============================================================================================
END OF Async CDC extended TEST:
==============================================================================================

exec dbms_cdc_subscribe.extend_window('CHANGE_SET_ALBERT');
exec dbms_cdc_subscribe.purge_window('CHANGE_SET_ALBERT');

exec DBMS_CDC_PUBLISH.DROP_CHANGE_SET('CHANGE_SET_ALBERT');

exec dbms_capture_adm.abort_table_instantiation('HR.CDC_DEMO');

-- drop the change set
exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET');
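
A quick check (hedged; CHANGE_SETS is the documented 10g CDC dictionary view)
that the cleanup really removed the change sets:

SELECT set_name, change_source_name FROM change_sets;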

=============
26 X$ TABLES:
=============

Listed below are some of the important subsystems in the Oracle kernel. This table
might help you to read those dreaded trace files and internal messages. For
example, if you see messages like this, you will at least know where they come from:

OPIRIP: Uncaught error 447. Error stack:
KCF: write/open error block=0x3e800 online=1

Kernel Subsystems:
OPI Oracle Program Interface
KK Compilation Layer - Parse SQL, compile PL/SQL
KX Execution Layer - Bind and execute SQL and PL/SQL
K2 Distributed Execution Layer - 2PC handling
NPI Network Program Interface
KZ Security Layer - Validate privs
KQ Query Layer
RPI Recursive Program Interface
KA Access Layer
KD Data Layer
KT Transaction Layer
KC Cache Layer
KS Services Layer
KJ Lock Manager Layer
KG Generic Layer
KV Kernel Variables (eg. x$KVIS and X$KVII)
S or ODS Operating System Dependencies

Where can one get a list of all hidden Oracle parameters?

Oracle initialization or INIT.ORA parameters with an underscore in front are
hidden or unsupported parameters. One can get a list of all hidden parameters
by executing this query:

select *
from SYS.X$KSPPI
where substr(KSPPINM,1,1) = '_';

The following query displays parameter names with their current value:

select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and substr(ksppinm,1,1)='_'
order by a.ksppinm;

Remember: Thou shall not play with undocumented parameters!

Oracle's x$ Tables

x$ tables are the SQL interface to viewing Oracle's memory in the SGA. The names
of the x$ tables can be queried with:

select kqftanam from x$kqfta;


x$activeckpt
x$bh

Information on buffer headers. Contains a record (the buffer header) for each
block in the buffer cache. This select statement lists
how many blocks are Available, Free and Being Used.

select count(*), state from (
  select decode (state,
                 0, 'Free',
                 1, decode (lrba_seq,
                            0, 'Available',
                            'Being Used'),
                 3, 'Being Used',
                 state) state
  from x$bh )
group by state;

The meaning of state:
0 FREE  no valid block image
1 XCUR  a current mode block, exclusive to this instance
2 SCUR  a current mode block, shared with other instances
3 CR    a consistent read (stale) block image
4 READ  buffer is reserved for a block being read from disk
5 MREC  a block in media recovery mode
6 IREC  a block in instance (crash) recovery mode

The meaning of tch: tch is the touch count. A high touch count indicates that the
buffer is used often, so it will probably be at the head of the MRU list.
The meaning of tim: touch time. class represents a value designated for the use
of the block. Other columns: lru_flag; set_ds maps to addr on x$kcbwds; le_addr
can be outer joined on x$le.le_addr. flag is a bit array; bit if set:
0 Block is dirty
4 temporary block
9 or 10 ping
14 stale
16 direct
524288 (=0x80000) Block was read in a full table scan
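
A hedged example of putting x$bh to work: count cached blocks per object,
assuming x$bh.obj maps to dba_objects.data_object_id (which holds for 9i/10g).

SELECT o.object_name, COUNT(*) "Cached Blocks"
FROM   x$bh b, dba_objects o
WHERE  b.obj = o.data_object_id
GROUP BY o.object_name
ORDER BY COUNT(*) DESC;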

x$bufqm
x$class_stat
x$context
x$globalcontext
x$hofp
x$hs_session
The x$kc... tables
x$kcbbhs
x$kcbmmav
x$kcbsc
x$kcbwait
x$kcbwbpd
Buffer pool descriptor, the base table for v$buffer_pool. It shows how the buffer
cache is split between the default, recycle and keep buffer pools.
x$kcbwds
Set descriptor, see also x$kcbwbpd. The column id can be joined with
v$buffer_pool.id. The column bbwait corresponds to the buffer busy waits wait
event. Contains information on working set buffers; addr can be joined with
x$bh.set_ds. set_id will be between lo_setid and hi_setid in v$buffer_pool for
the relevant buffer pool.
x$kccal
x$kccbf
x$kccbi
x$kccbl
x$kccbp
x$kccbs
x$kcccc
x$kcccf
x$kccdc
x$kccdi
x$kccdl
x$kccfc
x$kccfe
x$kccfn
x$kccic
x$kccle
Controlfile logfile entry. Use
select max(lebsz) from x$kccle
to find out the size of a log block. The log block size is the unit for the
following init params: log_checkpoint_interval, _log_io_size, and
max_dump_file_size.
x$kcclh
x$kccor
x$kcccp
Checkpoint Progress:
The column cpodr_bno displays the current redo block number. Multiplied with the
OS block size (usually 512), it returns the number of bytes of redo currently
written to the redo logs; hence this number is reset at each log switch. x$kcccp
can (together with x$kccle) be used to monitor the progress of the writing of
online redo logs, as the following query does:

select
  le.leseq                  "Current log sequence No",
  100*cp.cpodr_bno/le.lesiz "Percent Full",
  cp.cpodr_bno              "Current Block No",
  le.lesiz                  "Size of Log in Blocks"
from
  x$kcccp cp,
  x$kccle le
where
  le.leseq = cp.cpodr_seq
  and bitand(le.leflg,24) = 8;

bitand(le.leflg,24)=8 makes sure we get the current log group. Oracle uses a
variation of this SQL statement to track how much redo is written by different
DML statements.
x$kccrs
x$kccrt
x$kccsl
x$kcctf
x$kccts
x$kcfio
x$kcftio
x$kckce
x$kckty
x$kclcrst
x$kcrfx
x$kcrmf
x$kcrmx
x$kcrralg
x$kcrrarch
x$kcrrdest
x$kcrrdstat
x$kcrrms
x$kcvfh
x$kcvfhmrr
x$kcvfhonl
x$kcvfhtmp
x$kdnssf
The x$kg... tables
KG stands for kernel generic
x$kghlu
This view shows one row per shared pool area. If there's a java pool, an
additional row is displayed.
x$kgicc
x$kgics
x$kglcursor
x$kgldp
x$kgllk
This table lists all held and requested library object locks for all sessions. It
is more complete than v$lock. The column kglnaobj displays the first 80 characters
of the name of the object.

select kglnaobj, kgllkreq
from   x$kgllk x join v$session s on s.saddr = x.kgllkses;

kgllkreq = 0 means the lock is held, while kgllkreq > 0 means that the lock is
requested.
x$kglmem
x$kglna
x$kglna1
x$kglob
Library Cache Object
x$kglsim
x$kglst
x$kgskasp
x$kgskcft
x$kgskcp
x$kgskdopp
x$kgskpft
x$kgskpp
x$kgskquep
x$kjbl
x$kjbr
x$kjdrhv
x$kjdrpcmhv
x$kjdrpcmpf
x$kjicvt
x$kjilkft
x$kjirft
x$kjisft
x$kjitrft
x$kksbv
x$kkscs
x$kkssrd
x$klcie
x$klpt
x$kmcqs
x$kmcvc
x$kmmdi
x$kmmrd
x$kmmsg
x$kmmsi
x$knstacr
x$knstasl
x$knstcap
x$knstmvr
x$knstrpp
x$knstrqu
x$kocst
The x$kq... tables
x$kqfco
This table has an entry for each column of the x$ tables and can be joined with
x$kqfta. The column kqfcosiz indicates the size (in bytes?) of the columns.

select t.kqftanam "Table Name",
       c.kqfconam "Column Name",
       c.kqfcosiz "Column Size"
from   x$kqfta t,
       x$kqfco c
where  t.indx = c.kqfcotab;
x$kqfdt
x$kqfsz
x$kqfta
It seems that all x$ table names can be retrieved with the following query:

select kqftanam from x$kqfta;

This table can be joined with x$kqfco, which contains the columns for the tables:

select t.kqftanam "Table Name",
       c.kqfconam "Column Name"
from   x$kqfta t,
       x$kqfco c
where  t.indx = c.kqfcotab;
x$kqfvi
x$kqfvt
x$kqlfxpl
x$kqlset
x$kqrfp
x$kqrfs
x$kqrst
x$krvslv
x$krvslvs
x$krvxsv
The x$ks... tables
KS stands for kernel services.
x$ksbdd
x$ksbdp
x$ksfhdvnt
x$ksfmcompl
x$ksfmelem
x$ksfmextelem
x$ksfmfile
x$ksfmfileext
x$ksfmiost
x$ksfmlib
x$ksfmsubelem
x$ksfqp
x$ksimsi
x$ksled
x$kslei
x$ksles
x$kslld
x$ksllt
x$ksllw
x$kslwsc
x$ksmfs
x$ksmfsv
A map of the fixed SGA variables.
x$ksmge
x$ksmgop
x$ksmgsc
x$ksmgst
x$ksmgv
x$ksmhp
x$ksmjch
x$ksmjs
x$ksmlru
Memory least recently used. Whenever a select is performed on x$ksmlru, its
content is reset! This table shows which memory allocations in the shared pool
caused the throwing out of the biggest memory chunks since it was last queried.
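
A small hedged example (remember that the select itself resets the contents):

SELECT ksmlrcom "Allocation Comment", ksmlrsiz "Size", ksmlrnum "Objects Flushed"
FROM   x$ksmlru
WHERE  ksmlrsiz > 0
ORDER BY ksmlrsiz;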
x$ksmls
x$ksmmem
This 'table' seems to make it possible to address (that is, read) every byte in
the SGA. Since the size of the SGA equals select sum(value) from v$sga, the
following query must return 0 (at least on a four-byte architecture; not sure
about 8 bytes):

select
  (select sum(value)  from v$sga   ) -
  (select 4*count(*)  from x$ksmmem) "Must be Zero!"
from dual;
x$ksmsd
x$ksmsp
x$ksmsp_nwex
x$ksmspr
x$ksmss
x$ksolsfts
x$ksolsstat
x$ksppcv
x$ksppcv2
Contains the value kspftctxvl for each parameter found in x$ksppi. Determine if
this value is the default value with the column kspftctxdf.
x$ksppi
This table contains a record for all documented and undocumented (starting with an
underscore) parameters. select ksppinm from x$ksppi to show the names of all
parameters. Join indx+1 with x$ksppcv2.kspftctxpn.
x$ksppo
x$ksppsv
x$ksppsv2
x$kspspfile
x$ksqeq
x$ksqrs
x$ksqst
Enqueue management statistics by type.
ksqstwat: the number of waits for the enqueue statistics class.
ksqstwtim: cumulated waiting time. This column is selected when
v$enqueue_stat.cum_wait_time is selected.

The types of classes are:
BL Buffer Cache Management
CF Controlfile Transaction
CI Cross-instance call invocation
CU Bind Enqueue
DF Datafile
DL Direct Loader index creation
DM Database mount
DP ???
DR Distributed Recovery
DX Distributed TX
FB acquired when formatting a range of bitmap blocks for ASSM segments; id1=ts#,
   id2=relative dba
FS File Set
IN Instance number
IR Instance Recovery
IS Instance State
IV Library cache invalidation
JD Something to do with dbms_job
JQ Job queue
KK Redo log kick
LA..LP Library cache lock
MD enqueue for Change Data Capture materialized view log (gotten internally for
   DDL on a snapshot log); id1=object# of the snapshot log
MR Media recovery
NA..NZ Library cache pin
PF Password file
PI Parallel slaves
PR Process startup
PS Parallel slave synchronization
SC System commit number
SM SMON
SQ Sequence number enqueue
SR Synchronized replication
SS Sort segment
ST Space management transaction
SV Sequence number value
SW Suspend writes enqueue, gotten when someone issues alter system suspend|resume
TA Transaction recovery
UL User defined lock
UN User name
US Undo segment, serialization
WL Redo log being written
XA Instance attribute lock
XI Instance registration lock
XR Acquired for alter system quiesce restricted
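
A hedged sketch against x$ksqst itself: the enqueue types with the most waits
(column names as mentioned above; on 10g the same data is exposed through
v$enqueue_stat):

SELECT ksqsttyp "Type", ksqstwat "Waits", ksqstwtim "Wait Time"
FROM   x$ksqst
WHERE  ksqstwat > 0
ORDER BY ksqstwtim DESC;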

x$kstex
x$ksull
x$ksulop
x$ksulv
x$ksumysta
x$ksupr
x$ksuprlat
x$ksurlmt
x$ksusd
Contains a record for all statistics.
x$ksuse
x$ksusecon
x$ksusecst
x$ksusesta
x$ksusgif
x$ksusgsta
x$ksusio
x$ksutm
x$ksuxsinst
x$ktadm
x$targetrba
x$ktcxb
The SGA transaction table.
x$ktfbfe
x$ktfthc
x$ktftme
x$ktprxrs
x$ktprxrt
x$ktrso
x$ktsso
x$ktstfc
x$ktstssd
x$kttvs
Lists saved undo for each tablespace. The column kttvstnm is the name of the
tablespace that has saved undo; the column is null otherwise.
x$kturd
x$ktuxe
Kernel transaction, undo transaction entry
x$kvis
Has (among others) a row containing the db block size:
select kvisval from x$kvis where kvistag = 'kcbbkl'
x$kvit
x$kwddef
x$kwqpd
x$kwqps
x$kxfpdp
x$kxfpns
x$kxfpsst
x$kxfpys
x$kxfqsrow
x$kxsbd
x$kxscc
x$kzrtpd
x$kzspr
x$kzsrt
x$le
Lock element: contains an entry for each PCM lock held for the buffer cache. x$le
can be left outer joined to x$bh on le_addr.
x$le_stat
x$logmnr_callback
x$logmnr_contents
x$logmnr_dictionary
x$logmnr_logfile
x$logmnr_logs
x$logmnr_parameters
x$logmnr_process
x$logmnr_region
x$logmnr_session
x$logmnr_transaction
x$nls_parameters
x$option
x$prmsltyx
x$qesmmiwt
x$qesmmsga
x$quiesce
x$uganco
x$version
x$xsaggr
x$xsawso
x$xssinfo
A perl script to find x$ tables:

#!/usr/bin/perl -w
use strict;

# Scan the oracle binary for strings that look like x$ table names.
open O, ("/appl/oracle/product/9.2.0.2/bin/oracle") or die "cannot open binary: $!";
open F, (">x") or die "cannot open output file: $!";

my $l;
my $p = ' ' x 40;   # overlap buffer, so names split across reads are not missed
my %x;
while (read (O, $l, 10000)) {
    $l = $p . $l;
    foreach ($l =~ /(x\$\w{3,})/g) {
        $x{$_}++;
    }
    $p = substr ($l, -40);
}

foreach (sort keys %x) {
    print F "$_\n";
}
Obviously, it is also possible to extract those names through x$kqfta.

===============
27 OTHER STUFF:
===============

27.1 How to retrieve DDL from sqlplus:
======================================

Use DBMS_METADATA.GET_DDL()

Examples:

SELECT dbms_metadata.get_ddl('TABLE','EMPLOYEE','RM_LIVE') from dual;

SQL> set pagesize 0
SQL> set long 90000
SELECT dbms_metadata.get_ddl('TABLE', table_name, 'RM_LIVE')
FROM DBA_TABLES WHERE OWNER = 'RM_LIVE' and table_name like 'CDC_%';

More on this procedure:

If there is a task in Oracle for which the wheel has been reinvented many times,
it is that of generating database object DDL. There are numerous scripts floating
in different forums doing the same thing. Some of them work great, while others
work only until a specific version. Sometimes the DBAs prefer to create the
scripts themselves. Apart from the testing overhead, these scripts require
substantial insight into the data dictionary. As new versions of the database are
released, the scripts need to be modified to fit the new requirements.

Starting from Oracle 9i Release 1, the DBMS_METADATA package has put an official
end to all such scripting effort. This article provides a tour of the reverse
engineering features of the above package, with a focus on generating the creation
DDL of existing database objects. The article also has a section covering the
issue of finding object dependencies.

Why do we need to reverse engineer object creation DDL

We need it for several reasons:

- Database upgrades from earlier versions when, for various reasons, export-import
  is the only way out. Huge databases would require a precreated structure,
  importing data with several parallel processes into individual tables.
- Moving development objects into production. The cleanest method is to reverse
  engineer the DDL of the existing objects and run it in production.
- For learning the various parameters that an object has been created with. When
  we create an object, we do not specify all the options, letting Oracle pick the
  defaults. We might want to view the defaults that have been picked up, or we
  might want to crosscheck the parameters of the object. For that we needed
  Enterprise Manager, Toad, some other tool, or self-developed queries on the data
  dictionary. Now DBMS_METADATA gets us the clean, complete DDL with all options.

Modes of usage of the Metadata Package

1. A set of functions that can be used with SQL. This is known as the browsing
   interface. The functions in the browsing interface are GET_DDL,
   GET_DEPENDENT_DDL and GET_GRANTED_DDL.
2. A set of functions that can be used in PL/SQL, which is in fact a superset of
   (1). They support filtering, and optionally turning some clauses in the DDL on
   or off. The flexibility provided by the programmer interface is rarely
   required; for general use the browsing interface is sufficient - more so if
   the programmer knows SQL well.

Retrieving DDL information by SQL

As mentioned in the section above, GET_DDL, GET_DEPENDENT_DDL and GET_GRANTED_DDL
are the three functions in this mode. The next few sections discuss them in
detail. The objects on which the examples are tested are given in Table 9.

GET_DDL

The general syntax of GET_DDL is

GET_DDL(object_type, name, schema, version, model, transform)

Version, model and transform take the default values "COMPATIBLE", "ORACLE", and
"DDL" - further discussion of these is not in the scope of this article.

object_type can be any of the object types given in Table 8 below. Table 1 shows a
simple usage of the GET_DDL function to get all the tables of a schema. This
function can only be used to fetch named objects, that is, objects with type N or
S in Table 8. We will see in a later section how the "/" at the end of the DDL can
be turned on by default.

Table 1 (DBMS_METADATA.GET_DDL Usage)

SQL> set head off
SQL> set long 1000
SQL> set pages 0
SQL> show user
USER is "REVRUN"
SQL>
SQL> select DBMS_METADATA.GET_DDL('TABLE','EMPLOYEE')||'/' from dual;

CREATE TABLE "REVRUN"."EMPLOYEE"
( "LASTNAME" VARCHAR2(60) NOT NULL ENABLE,
"FIRSTNAME" VARCHAR2(20) NOT NULL ENABLE,
"MI" VARCHAR2(2),
"SUFFIX" VARCHAR2(10),
"DOB" DATE NOT NULL ENABLE,
"BADGE_NO" NUMBER(6,0),
"EXEMPT" VARCHAR2(1) NOT NULL ENABLE,
"SALARY" NUMBER(9,2),
"HOURLY_RATE" NUMBER(7,2),
PRIMARY KEY ("BADGE_NO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM"
/
GET_DEPENDENT_DDL

The general syntax of GET_DEPENDENT_DDL is

GET_DEPENDENT_DDL(object_type, base_object_name, base_object_schema,
                  version, model, transform, object_count)

Version, model and transform take the default values "COMPATIBLE", "ORACLE" and
"DDL", and are not discussed further. object_count takes the default of 10000 and
can be left like that for most cases.

object_type can be any object of type D in Table 8. base_object_name is the base
object on which the object_type objects are dependent.

The GET_DEPENDENT_DDL function allows the fetching of metadata for dependent
objects with a single call. For some object types, other functions can be used
for the same effect. For example, GET_DDL can be used to fetch an index by its
name, or GET_DEPENDENT_DDL can be used to fetch the same index by specifying the
table on which it is defined. An added reason for using GET_DEPENDENT_DDL in this
case might be that it gives the DDL of all dependent objects of that base object
and the specific object type.

Table 2 shows a simple usage of GET_DEPENDENT_DDL.

Table 2 (GET_DEPENDENT_DDL example)

SQL> column aa format a132
SQL>
SQL> select DBMS_METADATA.GET_DEPENDENT_DDL('TRIGGER','EMPLOYEE') aa from dual;

CREATE OR REPLACE TRIGGER "REVRUN"."HOURLY_TRIGGER"


before update of hourly_rate on
employee

for each row


begin :new.hourly_rate:=:old.hourly_rate;end;
ALTER TRIGGER "REVRUN"."HOURLY_TRIGGER" ENABLE

CREATE OR REPLACE TRIGGER "REVRUN"."SALARY_TRIGGER"


before insert or update of salary on
employee
for each row WHEN (new.salary > 150000) CALL check_sal(:new.salary)
ALTER TRIGGER "REVRUN"."SALARY_TRIGGER" ENABLE
GET_GRANTED_DDL

The general syntax of GET_GRANTED_DDL is

GET_GRANTED_DDL(object_type, grantee, version, model, transform, object_count)

Version, model and transform take the default values "COMPATIBLE", "ORACLE" and
"DDL", and need no further discussion. object_count takes the default of 10000,
and can be left like that for most cases.

grantee is the user to whom the object_type privileges have been granted. The
object types that can work in GET_GRANTED_DDL are the ones with type G in
Table 8. Table 3 shows a simple usage of the GET_GRANTED_DDL function.

Table 3 (GET_GRANTED_DDL Usage)

SQL> set long 99999
SQL> column aa format a132
SQL> select DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','REVRUN_USER') aa from dual;

GRANT UPDATE ("SALARY") ON "REVRUN"."EMPLOYEE" TO "REVRUN_USER"

GRANT UPDATE ("HOURLY_RATE") ON "REVRUN"."EMPLOYEE" TO "REVRUN_USER"

GRANT INSERT ON "REVRUN"."TIMESHEET" TO "REVRUN_USER"

GRANT UPDATE ON "REVRUN"."TIMESHEET" TO "REVRUN_USER"


Table 8 below classifies some common objects as Dependent Object (D), Named
Object (N) or Granted Object (G). Some objects exhibit more than one such
property. For a complete list, refer to the Oracle Documentation; however, the
list below will meet most requirements.

Metadata information retrieval by programmatic interface

The programmatic interface is for fine-grained, detailed control over DDL
generation. The list of procedures available for use in the programmatic
interface is as follows:

OPEN
SET_FILTER
SET_COUNT
GET_QUERY
SET_PARSE_ITEM
ADD_TRANSFORM
SET_TRANSFORM_PARAM
FETCH_xxx
CLOSE
To make use of this interface one must write a PL/SQL block. Considering the fact
that several CLOB columns are involved, this is not simple. However, the next
section shows how to use the SET_TRANSFORM_PARAM function in SQL*Plus in order to
perform most of the jobs done by this interface. If one adds simple SQL skills to
it, the programmatic interface can be bypassed in almost all cases. For details
of the programmatic interface, the reader should refer to the documentation.

Using the SET_TRANSFORM_PARAM function in SQL Session

This function determines how the output of DBMS_METADATA is displayed. The
general syntax is

SET_TRANSFORM_PARAM(transform_handle, name, value)

transform_handle for SQL sessions is DBMS_METADATA.SESSION_TRANSFORM.
name is the name of the transform, and value is essentially TRUE or FALSE.

Table 4 shows how to get the DDL of tables not containing the word LOG in a good
indented form and with SQL Terminator without a storage clause.

Table 4 (SET_TRANSFORM_PARAM Usage)

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);

PL/SQL procedure successfully completed.

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);

PL/SQL procedure successfully completed.

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);

PL/SQL procedure successfully completed.

SQL> select dbms_metadata.get_ddl('TABLE',table_name) from user_tables
  2  where table_name not like '%LOG';

CREATE TABLE "REVRUN"."EMPLOYEE"


( "LASTNAME" VARCHAR2(60) NOT NULL ENABLE,
"FIRSTNAME" VARCHAR2(20) NOT NULL ENABLE,
"MI" VARCHAR2(2),
"SUFFIX" VARCHAR2(10),
"DOB" DATE NOT NULL ENABLE,
"BADGE_NO" NUMBER(6,0),
"EXEMPT" VARCHAR2(1) NOT NULL ENABLE,
"SALARY" NUMBER(9,2),
"HOURLY_RATE" NUMBER(7,2),
PRIMARY KEY ("BADGE_NO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
TABLESPACE "SYSTEM" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
TABLESPACE "SYSTEM" ;

CREATE TABLE "REVRUN"."TIMESHEET"


( "BADGE_NO" NUMBER(6,0),
"WEEK" NUMBER(2,0),
"JOB_ID" NUMBER(5,0),
"HOURS_WORKED" NUMBER(4,2),
FOREIGN KEY ("BADGE_NO")
REFERENCES "REVRUN"."EMPLOYEE" ("BADGE_NO") ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
TABLESPACE "SYSTEM" ;

SQL>
Thus we see how a DDL requirement, even with a filtering condition and a
formatting requirement, was met by the SQL browsing interface together with
SET_TRANSFORM_PARAM on the session transform handle.

Table 5 shows the name and meaning of the SET_TRANSFORM_PARAM "name" parameters.

Table 5 (SET_TRANSFORM_PARAM "name" Parameters)

PRETTY (all objects) - If TRUE, format the output with indentation and line
feeds. Defaults to TRUE.

SQLTERMINATOR (all objects) - If TRUE, append a SQL terminator (; or /) to each
DDL statement. Defaults to FALSE.

DEFAULT (all objects) - Calling SET_TRANSFORM_PARAM with this parameter set to
TRUE has the effect of resetting all parameters for the transform to their
default values. Setting this FALSE has no effect. There is no default.

INHERIT (all objects) - If TRUE, inherits session-level parameters. Defaults to
FALSE. If an application calls ADD_TRANSFORM to add the DDL transform, then by
default the only transform parameters that apply are those explicitly set for
that transform handle. This has no effect if the transform handle is the session
transform handle.

SEGMENT_ATTRIBUTES (TABLE and INDEX) - If TRUE, emit segment attributes (physical
attributes, storage attributes, tablespace, logging). Defaults to TRUE.

STORAGE (TABLE and INDEX) - If TRUE, emit storage clause. (Ignored if
SEGMENT_ATTRIBUTES is FALSE.) Defaults to TRUE.

TABLESPACE (TABLE and INDEX) - If TRUE, emit tablespace. (Ignored if
SEGMENT_ATTRIBUTES is FALSE.) Defaults to TRUE.

CONSTRAINTS (TABLE) - If TRUE, emit all non-referential table constraints.
Defaults to TRUE.

REF_CONSTRAINTS (TABLE) - If TRUE, emit all referential constraints (foreign key
and scoped refs). Defaults to TRUE.

CONSTRAINTS_AS_ALTER (TABLE) - If TRUE, emit table constraints as separate ALTER
TABLE (and, if necessary, CREATE INDEX) statements. If FALSE, specify table
constraints as part of the CREATE TABLE statement. Defaults to FALSE. Requires
that CONSTRAINTS be TRUE.

FORCE (VIEW) - If TRUE, use the FORCE keyword in the CREATE VIEW statement.
Defaults to TRUE.
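
A small example tying this together: after experimenting, the session transform
can be reset to its defaults with the DEFAULT parameter described above.

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT',true);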

DBMS_METADATA Security Model

The object views of the Oracle metadata model implement security as follows:
- Non-privileged users can see the metadata only of their own objects.
- SYS and users with SELECT_CATALOG_ROLE can see all objects.
- Non-privileged users can also retrieve object and system privileges granted to
  them or by them to others. This also includes privileges granted to PUBLIC.
- If callers request objects they are not privileged to retrieve, no exception is
  raised; the object is simply not retrieved.
- If non-privileged users are granted some form of access to an object in someone
  else's schema, they will be able to retrieve the grant specification through
  the Metadata API, but not the object's actual metadata.

Finding objects that are dependent on a given object

This is another type of requirement. While dropping a seemingly unimportant table
or procedure from a schema, one might like to know the objects that are dependent
on it.

The data dictionary views DBA_DEPENDENCIES, USER_DEPENDENCIES and
ALL_DEPENDENCIES are the answer to this requirement. The columns of the
ALL_DEPENDENCIES view are discussed in Table 6. ALL_DEPENDENCIES describes
dependencies between procedures, packages, functions, package bodies, and
triggers accessible to the current user, including dependencies on views created
without any database links. Only tables are left out of this view; for finding
table dependencies we can use ALL_CONSTRAINTS. The ALL_DEPENDENCIES view comes to
the rescue in the very important area of finding dependencies between stored code
objects.

Table 6 (Columns of the ALL_DEPENDENCIES view)

Column               Description
------               -----------
OWNER                Owner of the object
NAME                 Name of the object
TYPE                 Type of the object
REFERENCED_OWNER     Owner of the parent object
REFERENCED_NAME      Name of the parent object
REFERENCED_TYPE      Type of the referenced object
REFERENCED_LINK_NAME Name of the link to the parent object (if remote)
SCHEMAID             ID of the current schema
DEPENDENCY_TYPE      Whether the dependency is a REF dependency (REF) or not (HARD)
Table 7 below shows how to use the above view to get the dependencies. The example
shows a case where we might want to drop the procedure CHECK_SAL, but we would
like to find any objects dependent on it. The query below shows that a TRIGGER
named SALARY_TRIGGER is dependent on it.

Table 7 (Use of the ALL_DEPENDENCIES view)

SQL> select name, type, owner
  2  from all_dependencies
  3  where referenced_owner = 'REVRUN'
  4  and referenced_name = 'CHECK_SAL';

NAME                           TYPE              OWNER
------------------------------ ----------------- ----------------------
SALARY_TRIGGER                 TRIGGER           REVRUN

CONCLUSION

This article is intended to give the minimum-effort answer to elementary and
intermediate level object dependency related issues, and to point the way for
advanced ones. As Oracle keeps upgrading its versions, it is clear that the
DBMS_METADATA interface and the ALL_DEPENDENCIES view will be upgraded along with
them; solutions developed along those lines will persist.

Table 8 (Classifying common database objects as Named, Dependent, Granted and
Schema objects)

CONSTRAINT (Constraints) SND
DB_LINK (Database links) SN
DEFAULT_ROLE (Default roles) G
FUNCTION (Stored functions) SN
INDEX (Indexes) SND
MATERIALIZED_VIEW (Materialized views) SN
MATERIALIZED_VIEW_LOG (Materialized view logs) D
OBJECT_GRANT (Object grants) DG
PACKAGE (Stored packages) SN
PACKAGE_SPEC (Package specifications) SN
PACKAGE_BODY (Package bodies) SN
PROCEDURE (Stored procedures) SN
ROLE (Roles) N
ROLE_GRANT (Role grants) G
SEQUENCE (Sequences) SN
SYNONYM (Synonyms) S
SYSTEM_GRANT (System privilege grants) G
TABLE (Tables) SN
TABLESPACE (Tablespaces) N
TRIGGER (Triggers) SND
TYPE (User-defined types) SN
TYPE_SPEC (Type specifications) SN
TYPE_BODY (Type bodies) SN
USER (Users) N
VIEW (Views) SN
Table 9 (Creation script of the REVRUN Schema)

connect system/manager
drop user revrun cascade;
drop user revrun_user cascade;
drop user revrun_admin cascade;

create user revrun identified by revrun;

GRANT resource, connect, create session
, create table
, create procedure
, create sequence
, create trigger
, create view
, create synonym
, alter session
TO revrun;
create user revrun_user identified by user1;
create user revrun_admin identified by admin1;

grant connect to revrun_user;
grant connect to revrun_admin;

connect revrun/revrun

Rem Creating employee tables...

create table employee
( lastname varchar2(60) not null,
firstname varchar2(20) not null,
mi varchar2(2),
suffix varchar2(10),
DOB date not null,
badge_no number(6) primary key,
exempt varchar(1) not null,
salary number (9,2),
hourly_rate number (7,2)
)
/

create table timesheet
(badge_no number(6) references employee (badge_no),
week number(2),
job_id number(5),
hours_worked number(4,2)
)
/

create table system_log
(action_time DATE,
lastname VARCHAR2(60),
action LONG
)
/

Rem grants...

grant update (salary,hourly_rate) on employee to revrun_user;
grant ALL on employee to revrun_admin with grant option;

grant insert,update on timesheet to revrun_user;
grant ALL on timesheet to revrun_admin with grant option;

Rem indexes...

create index i_employee_name on employee(lastname);
create index i_employee_dob on employee(DOB);

create index i_timesheet_badge on timesheet(badge_no);

Rem triggers
create or replace procedure check_sal( salary in number) as
begin
return; -- Demo code
end;
/

create or replace trigger salary_trigger before insert or update of salary on
employee
for each row when (new.salary > 150000)
call check_sal(:new.salary)
/

create or replace trigger hourly_trigger before update of hourly_rate on
employee
for each row
begin :new.hourly_rate:=:old.hourly_rate;end;

SELECT substr(username, 1, 20), account_status,
       default_tablespace, temporary_tablespace, created
FROM dba_users WHERE created > SYSDATE - 10;

=============
11g Features:
=============

Note 1:
-------

Question: What are the most important new performance features in Oracle
Database 11g?
Answer: The three Result Caches: the SQL Result Cache, the PL/SQL Function Result
Cache and the OCI Client Result Cache.
The SQL Result Cache keeps the result of a frequently executed SQL query
statement in the SGA. The Query Optimizer itself keeps track of which queries
qualify, taking the DML and query frequency into account. Especially queries on
lookup tables benefit enormously from this.
The PL/SQL Function Result Cache does the same, but for PL/SQL functions.
The OCI Client Cache keeps the query result on the client, so that no network
round trip to the database is needed for selected SQL queries.
Furthermore there is SQL Plan Management, a feature that prevents SQL performance
regression by storing execution plans in the database as the basis for future
execution plans. If a different execution plan becomes available in the future,
for instance because an index has been created, such a new plan can only be
accepted if it actually leads to better performance.
So with this, the SQL optimizer has become self-learning.
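
A minimal 11g sketch of the SQL Result Cache (the SALES table and REGION column
are made-up names):

SELECT /*+ RESULT_CACHE */ region, COUNT(*)
FROM   sales
GROUP BY region;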

Question: What are the most important new Backup and Recovery features?
Answer: The Data Recovery Advisor: instead of working out yourself how best to
approach a recovery problem, you now simply ask RMAN for advice, and of course
RMAN can also carry that advice out for you. So 'advise failure' and 'repair
failure' is all an Oracle 11g DBA needs to know. As far as backup performance is
concerned, the RMAN multisection backup is an important improvement, which makes
it possible to back up one file with multiple channels to achieve intra-file
parallelism. A big time saver is the RMAN 'duplicate from active database'
feature, which makes it possible to duplicate a database without needing a
stored backup for it.

Question: What are the most important new security features?
Answer: Tablespace encryption is important when it comes to data protection. It
makes it possible to encrypt the entire contents of a tablespace regardless of
the datatypes used. This protects the data not only inside the database but also
against attacks that bypass the database. Another important improvement is the
new password algorithm and the possibility to store the DBA passwords in an LDAP
server so that they can be managed centrally.
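
A hedged 11g sketch of an encrypted tablespace (requires an open wallet; the
tablespace name and datafile path are assumptions):

CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/enc_data01.dbf' SIZE 100M
  ENCRYPTION USING 'AES128'
  DEFAULT STORAGE (ENCRYPT);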

Question: What are the most important new data storage features?
Answer: The new LOB implementation is simply great, with much better performance
and a built-in encryption capability that offers many advantages to everyone
with LOBs in the database. The integration of NFS into the database, Direct NFS,
is also a feature with many performance advantages and more freedom of choice
regarding the underlying disk system. The Oracle 11g database can now talk
directly to an NFS server without intervention of the NFS layer of the operating
system. Finally, the many new partitioning methods are downright overwhelming,
and just about anything one could wish for is now possible. Perhaps the most
important form of partitioning, range partitioning, has been automated with the
introduction of interval partitioning, and the virtual partition key also offers
many new possibilities.
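
A short 11g sketch of interval partitioning (table and column names are made up):

CREATE TABLE orders
( order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')) );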
