
1. Difference between TRUNCATE and DELETE.
2. What is the difference between “DBMS_SQL.parse” and “Execute Immediate”?
3. What is the difference between a cursor and a ref cursor, and when would you appropriately use each of these? How can I effectively answer this question in short?
4. How to select the second maximum value in a column?

1) Why will "NOT IN" not use the index?
3) What is a materialized view? What parameters have to be set for a materialized view?
4) Without changing the WHERE condition, how will you make the query not use the index?
5) What is the difference between EXPLAIN PLAN and AUTOTRACE ON?
6) Difference between DELETE and TRUNCATE? (What happens internally?)
7) What happens when COMMIT is issued?
8) What is the difference between “DBMS_SQL.parse” and “Execute Immediate”?

Two-phase commit
REF CURSOR
Materialized view
SDLC
Two-phase commit
ANALYZE TABLE
SELECT FOR UPDATE
Exceptions
Varrays
Collections

Difference between TRUNCATE and DELETE.

* TRUNCATE is a DDL command and cannot be rolled back; all of the space is released back to the database. DELETE is a DML command and can be rolled back.
* Both commands can accomplish the same task (removing all data from a table), but TRUNCATE is much faster.
* TRUNCATE implicitly commits; DELETE does not.
* You cannot grant a permission to truncate a table (there is no TRUNCATE object privilege).
* You can delete any subset of rows, but you can only truncate the complete table, or a partition or subpartition of it.
* TRUNCATE makes unusable indexes usable again.
* TRUNCATE cannot maintain foreign keys: it is "cascading delete", not "cascading truncate".
* You cannot flash back a truncate. You can flash back a dropped table, roll back uncommitted deletes, or use flashback to recover pre-commit deleted data, but a truncate is a barrier across which you cannot flash back.
* TRUNCATE deallocates space; DELETE does not -- unless you want the space kept, using the "reuse storage" clause.
* TRUNCATE resets the high water mark, on the table and on its indexes; DELETE does not.
* DML triggers do not fire on a truncate, because it is DDL, not DML.
* You cannot TRUNCATE a table that is referenced by enabled foreign key constraints. You have to disable or remove the constraints, TRUNCATE the table, and reapply the constraints.
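The rollback difference can be seen with a minimal sketch (assumes a scratch table named t; names are illustrative):

```sql
-- Hypothetical scratch table to contrast DELETE and TRUNCATE.
CREATE TABLE t (x NUMBER);
INSERT INTO t VALUES (1);
COMMIT;

DELETE FROM t;            -- DML: generates undo, can be rolled back
ROLLBACK;
SELECT COUNT(*) FROM t;   -- the row is back

TRUNCATE TABLE t;         -- DDL: implicit commit, no row-level undo
ROLLBACK;                 -- has no effect on the truncate
SELECT COUNT(*) FROM t;   -- still empty
```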

Truncate also discards an object's segment and allocates a new one. You can see this by watching the value of user_objects.data_object_id change after each truncate (user_objects.data_object_id is the object id of the segment that contains the object). This is part of the reason why truncate can be so expensive to carry out in a high-transaction-rate system. Although if you have never inserted any rows into the table, then a truncate does not change the data object id (in 10g at least); even a single insert that is then rolled back causes a data object id change on truncate.

I suppose the usual reason people would be truncating in an OLTP system would be to do with temporary table usage, or something like that, where a global temporary table (or a subquery factoring clause) would often do the job just as well.

Truncate will also invalidate any cursors referencing that table. Another issue recently discussed is that truncate does not reset statistics for the table/indexes.

What is the difference between “DBMS_SQL.parse” and “Execute Immediate”?
http://www.orafaq.com/forum/t/59656/0/

In a procedure I used the two methods given below to drop a user, and both work. But what is the difference between 'execute immediate' and the DBMS_SQL package, and which is faster in execution? Please suggest.

vsql varchar2(100) := 'drop user ' || v_user_check;

execute immediate vsql;

OR

cr number := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(cr, vsql, DBMS_SQL.V7);  -- DDL executes at parse time
DBMS_SQL.CLOSE_CURSOR(cr);

-- Which command is better to use in PL/SQL? I am using Oracle 9i Release 1.

Thanks in advance.

http://www.stanford.edu/dept/itss/docs/oracle/10g/appdev.101/b10795/adfns_dy.htm

execute immediate is faster than dbms_sql. This is also documented at:

http://www.lc.leidenuniv.nl/awcourse/oracle/appdev.920/a96590/adg09dyn.htm#26586

and you can try and verify it by creating a testcase as well.

DBMS_SQL predates EXECUTE IMMEDIATE in PL/SQL. DBMS_SQL was all we had in v7. EXECUTE IMMEDIATE is now (since v8.0) the preferred method of dynamic SQL in PL/SQL.

DBMS_SQL is still maintained because of the inability of EXECUTE IMMEDIATE to perform so-called "Method 4 Dynamic SQL", where the name/number of SELECT columns or the name/number of bind variables is dynamic.

Native Dynamic SQL is Easy to Use

Because native dynamic SQL is integrated with SQL, you can use it in
the same way that you use static SQL within PL/SQL code. Native dynamic
SQL code is typically more compact and readable than equivalent code
that uses the DBMS_SQL package.

With the DBMS_SQL package you must call many procedures and functions
in a strict sequence, making even simple operations require a lot of
code. You can avoid this complexity by using native dynamic SQL instead.
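That strict call sequence might look roughly like this for a single bound query (a sketch; the table, column, and bind names are illustrative):

```sql
-- PL/SQL sketch of the DBMS_SQL call sequence: open, parse, bind,
-- define, execute, fetch, close -- every step is a separate call.
DECLARE
  c       INTEGER := DBMS_SQL.OPEN_CURSOR;
  rows_n  INTEGER;
  l_ename VARCHAR2(30);
BEGIN
  DBMS_SQL.PARSE(c, 'SELECT ename FROM emp WHERE job = :j', DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(c, ':j', 'MANAGER');
  DBMS_SQL.DEFINE_COLUMN(c, 1, l_ename, 30);
  rows_n := DBMS_SQL.EXECUTE(c);
  WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
    DBMS_SQL.COLUMN_VALUE(c, 1, l_ename);
    -- process l_ename ...
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/
```

The native dynamic SQL equivalent collapses all of this into a single OPEN ... FOR (or EXECUTE IMMEDIATE) statement.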

Native dynamic SQL in PL/SQL performs comparably to static SQL, because the PL/SQL interpreter has built-in support for it. Programs that use native dynamic SQL are much faster than programs that use the DBMS_SQL package. Typically, native dynamic SQL statements perform 1.5 to 3 times better than equivalent DBMS_SQL calls. (Your performance gains may vary depending on your application.)

Native dynamic SQL bundles the statement preparation, binding, and execution steps into a single operation, which minimizes the data copying and procedure call overhead and improves performance.

The DBMS_SQL package is based on a procedural API and incurs high procedure call and data copy overhead. Each time you bind a variable, the DBMS_SQL package copies the PL/SQL bind variable into its own space for use during execution. Each time you execute a fetch, the data is copied into the space managed by the DBMS_SQL package, and then the fetched data is copied, one column at a time, into the appropriate PL/SQL variables, resulting in substantial overhead.

Native dynamic SQL supports all of the types supported by static SQL in
PL/SQL, including user-defined types such as user-defined objects,
collections, and REFs. The DBMS_SQL package does not support these
user-defined types.

Native Dynamic SQL Supports Fetching Into Records

Native dynamic SQL and static SQL both support fetching into records,
but the DBMS_SQL package does not. With native dynamic SQL, the rows
resulting from a query can be directly fetched into PL/SQL records.

In the following example, the rows from a query are fetched into the
emp_rec record:

DECLARE
  TYPE EmpCurTyp IS REF CURSOR;
  c        EmpCurTyp;
  emp_rec  emp%ROWTYPE;
  stmt_str VARCHAR2(200);
  e_job    emp.job%TYPE;
BEGIN
  stmt_str := 'SELECT * FROM emp WHERE job = :1';

  -- in a multi-row query
  OPEN c FOR stmt_str USING 'MANAGER';
  LOOP
    FETCH c INTO emp_rec;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;

  -- in a single-row query
  EXECUTE IMMEDIATE stmt_str INTO emp_rec USING 'PRESIDENT';
END;
/

Advantages of the DBMS_SQL Package

The DBMS_SQL package provides the following advantages over native dynamic SQL:
DBMS_SQL Supports SQL Statements Larger than 32KB

The DBMS_SQL package supports SQL statements larger than 32KB; native
dynamic SQL does not.

DBMS_SQL is Supported in Client-Side Programs

The DBMS_SQL package is supported in client-side programs, but native dynamic SQL is not. Every call to the DBMS_SQL package from a client-side program translates to a PL/SQL remote procedure call (RPC); these calls occur when you need to bind a variable, define a variable, or execute a statement.

DBMS_SQL Supports DESCRIBE

The DESCRIBE_COLUMNS procedure in the DBMS_SQL package can be used to describe the columns for a cursor opened and parsed through DBMS_SQL. This feature is similar to the DESCRIBE command in SQL*Plus. Native dynamic SQL does not have a DESCRIBE facility.

DBMS_SQL Lets You Reuse SQL Statements

The PARSE procedure in the DBMS_SQL package parses a SQL statement once. After the initial parsing, you can use the statement multiple times with different sets of bind arguments.

Native dynamic SQL prepares a SQL statement each time the statement is
used, which typically involves parsing, optimization, and plan
generation. Although the extra prepare operations incur a small
performance penalty, the slowdown is typically outweighed by the
performance benefits of native dynamic SQL.
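That reuse pattern -- parse once, then re-bind and re-execute -- might be sketched like this (table and bind names are illustrative):

```sql
-- PL/SQL sketch: one parse, several executions with different binds.
DECLARE
  c INTEGER := DBMS_SQL.OPEN_CURSOR;
  n INTEGER;
BEGIN
  DBMS_SQL.PARSE(c, 'UPDATE emp SET sal = sal * :pct WHERE deptno = :d',
                 DBMS_SQL.NATIVE);
  FOR i IN 1 .. 3 LOOP
    DBMS_SQL.BIND_VARIABLE(c, ':pct', 1.1);
    DBMS_SQL.BIND_VARIABLE(c, ':d', i * 10);
    n := DBMS_SQL.EXECUTE(c);   -- no re-parse between executions
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
  ROLLBACK;  -- sketch only: undo the illustrative updates
END;
/
```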

What is the difference between cursor and ref cursor, and when would you appropriately use each of these? Could you please tell me how I can effectively answer this question in short?

Technically, under the covers, at the most "basic level", they are the same.

A "normal" PL/SQL cursor is static in definition.

Ref cursors may be dynamically opened, or opened based on logic.

declare
  type rc is ref cursor;
  cursor c is select * from dual;
  l_cursor rc;
begin
  if ( to_char(sysdate,'dd') = 30 ) then
    open l_cursor for 'select * from emp';
  elsif ( to_char(sysdate,'dd') = 29 ) then
    open l_cursor for select * from dept;
  else
    open l_cursor for select * from dual;
  end if;
  open c;
end;
/

Given that block of code, you see perhaps the most "salient" difference: no matter how many times you run that block, cursor C will always be "select * from dual". The ref cursor can be anything.

Another difference is that a ref cursor can be returned to a client; a plain PL/SQL cursor cannot be returned to a client.

Another difference is that a cursor can be global -- a ref cursor cannot (you cannot define them OUTSIDE of a procedure / function).

Another difference is that a ref cursor can be passed from subroutine to subroutine -- a cursor cannot be.

Another difference is that static SQL (not using a ref cursor) is much more efficient than using ref cursors, so the use of ref cursors should be limited to:
- returning result sets to clients
- cases where there is NO other efficient/effective means of achieving the goal

That is, you want to use static SQL (with implicit cursors, really) first, and use a ref cursor only when you absolutely have to.

Then sit back and say "anything else you wanted to know about them"
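The first use case -- returning a result set to a client -- typically looks like this (a sketch; the function name and query are illustrative, using the SYS_REFCURSOR type available since 9i):

```sql
-- PL/SQL sketch: a function that hands a result set back to the client.
CREATE OR REPLACE FUNCTION get_emps_by_job (p_job IN VARCHAR2)
  RETURN SYS_REFCURSOR
AS
  l_rc SYS_REFCURSOR;
BEGIN
  OPEN l_rc FOR
    SELECT empno, ename, sal FROM emp WHERE job = p_job;
  RETURN l_rc;  -- the client (JDBC, ODP.NET, SQL*Plus ...) fetches from it
END;
/
```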

How to select the second maximum value in a column? Say I have a SALARY column in a table with the values 3000, 2000, 1000, 500; how do I get the value 2000 by querying, i.e. the second maximum in the column?

depends on the version of oracle. I'll assume Oracle8i EE since you did not say

scott@ORA8I.WORLD> select ename, sal, dense_rank() over ( order by sal )
from emp;
ENAME SAL DENSE_RANK()OVER(ORDERBYSAL)
---------- ---------- ----------------------------
KING 5 1
SMITH 800 2
JAMES 950 3
ADAMS 1100 4
WARD 1250 5
MARTIN 1250 5
MILLER 1300 6
TURNER 1500 7
ALLEN 1600 8
CLARK 2450 9
BLAKE 2850 10
JONES 2975 11
SCOTT 3000 12
FORD 3000 12
KING 5000 13

so, to get the 6'th highest salary we simply:

scott@ORA8I.WORLD> select * from ( select ename, sal, dense_rank()
over ( order by sal ) r from emp ) where r = 6;

ENAME SAL R
---------- ---------- ----------
MILLER 1300 6

The dense_rank() function gave me the initiative to find more functions. So here are some valuable links for you guys:

http://www.akadia.com/services/ora_analytic_functions.html
http://www.quest-pipelines.com/newsletter-v3/0402_D.htm

I hope you will explore more, as I am ...

Another Method: 1
select * from emp where sal = (select max(sal) from emp where sal <
(select max(sal) from emp));

Another Method: 2
select * from emp where 1 = ( select count(distinct e.sal) from emp e
where e.sal > emp.sal );

High water mark [HWM]

The high water mark divides a segment into used blocks and free blocks. Blocks below the high water mark (used blocks) have at least once contained data. This data might since have been deleted. Since Oracle knows that blocks beyond the high water mark don't contain data, it only reads blocks up to the high water mark in a full table scan. Oracle keeps track of the high water mark for a segment in the segment header.

Moving the high water mark

In normal DB operations, the high water mark only moves upwards, not downwards. The exception is TRUNCATE. If there is a lot of free space below the high water mark, one might consider using ALTER TABLE ... MOVE statements. See "On shrinking table sizes".

Initial position
The initial position of the high water mark is extent 0, block 0 for tables and extent 0, block 1 for indexes.
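Lowering the HWM by rebuilding the segment might be sketched like this (assumes a table t with heavy delete churn; the index name is illustrative):

```sql
-- Sketch: reclaim space below the HWM after mass deletes.
ALTER TABLE t MOVE;            -- rebuilds the segment, resetting the HWM
-- Moving the table changes rowids, so its indexes must be rebuilt:
ALTER INDEX t_pk REBUILD;
-- From 10g on, an online alternative (requires row movement):
ALTER TABLE t ENABLE ROW MOVEMENT;
ALTER TABLE t SHRINK SPACE;
```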

Difference between DBMS_STATS and the ANALYZE command.

* You can import/export/set statistics directly with dbms_stats.
* It is easier to automate with dbms_stats (it is procedural; analyze is just a command).
* dbms_stats is the stated, preferred method of collecting statistics.
* dbms_stats can analyze external tables; analyze cannot.
* DBMS_STATS gathers statistics only for cost-based optimization; it does not gather other statistics. For example, the table statistics gathered by DBMS_STATS include the number of rows, number of blocks currently containing data, and average row length, but not the number of chained rows, average free space, or number of unused data blocks.
* dbms_stats (in 9i) can gather system stats (new).
* ANALYZE derives global statistics for partitioned tables and indexes instead of gathering them directly. This can lead to inaccuracies for some statistics, such as the number of distinct values. DBMS_STATS won't do that.
* Most importantly, in the future, ANALYZE will not collect statistics needed by the cost-based optimizer.
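A typical dbms_stats call for one table might look like this (a sketch; the owner and table names are illustrative):

```sql
-- Sketch: gather table and index statistics with DBMS_STATS.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    cascade          => TRUE,   -- also gather stats on the table's indexes
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```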

Difference between RANK and DENSE_RANK?

The difference between the two is that RANK() leaves gaps while ranking the records, whereas DENSE_RANK() doesn't leave any gaps. For example, if we have more than one record at a particular position, then RANK() will place all those records at that position and will place the next record after a gap equal to the number of additional tied records. DENSE_RANK() (which will also place all the tied records at that one position) will not leave that gap for the next rank.

Rank:
1
2 <-- 2nd position
2 <-- 3rd position
4
5

The same rank is assigned to equal totals/numbers. The next rank follows the position, skipping as many values as there were ties. Golf usually ranks this way.

Dense Rank:
1
2 <-- 2nd position
2 <-- 3rd position
3
4

The same ranks are assigned to equal totals/numbers/names. The next rank follows in serial order, with no gap.
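Both functions can be put side by side over the same data (a sketch against the standard emp demo table):

```sql
-- Sketch: RANK vs DENSE_RANK over the same ordering.
SELECT ename, sal,
       RANK()       OVER (ORDER BY sal DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY sal DESC) AS drnk
FROM   emp;
-- With two rows tied at the top salary, the third row gets
-- rnk = 3 (gap) but drnk = 2 (no gap).
```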
Index Rebuild
=================
Index fragmentation occurs when rows included in the index are deleted -- AKA index stagnation.

You will need to analyze your indexes individually to find the stagnated ones; once discovered, they can be rebuilt.

To analyze, issue the following command:

analyze index owner.index_name validate structure;

The index information will now be in the table index_stats. Now issue the following query:

select del_lf_rows * 100 / decode(lf_rows, 0, 1, lf_rows)
from index_stats
where name = 'INDEX_NAME';

If 20% or more of the rows have been deleted, then the index should be rebuilt.

The index_stats table can only hold one record of information at a time, therefore you will need to analyze each index individually and then interrogate index_stats; you can also automate this process using PL/SQL.
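The rebuild itself is a single DDL statement (a sketch; the index name is illustrative):

```sql
-- Sketch: rebuild a stagnated index.
alter index owner.index_name rebuild;
-- Or, to avoid blocking DML while it runs (Enterprise Edition):
alter index owner.index_name rebuild online;
```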

Alternatively, you can use the Oracle Enterprise Manager Index Tuning Wizard.

Major considerations when sizing the SGA:
========================================================

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:30011178429375

a) how much do you want to assign to your buffer cache for maximum performance
b) how big is your shared/java pool (a function of how much sql/plsql/java you run in your database; there is no magical number for all to use)
c) do you run in shared server (then the large pool is used and will be large -- that is part of the sga) or in dedicated server -- then you need to leave OS memory for dynamic allocations
d) what else is going on on the machine

if you set the SGA_TARGET to 1000m, the 4 components will be sized to consume 1000m. consider:

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------
sga_target                           big integer 1000M

SQL> show sga

Total System Global Area 1048576000 bytes
Fixed Size                   782424 bytes
Variable Size             259002280 bytes
Database Buffers          788529152 bytes
Redo Buffers                 262144 bytes

SQL> show parameter pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer 0
large_pool_size                      big integer 0
olap_page_pool_size                  big integer 0
shared_pool_reserved_size            big integer 12373196
shared_pool_size                     big integer 0
streams_pool_size                    big integer 0
SQL>

basically, Oracle will set up reasonably sized initial pools (if you know how to peek at the _ parameters, you'll see them:

__java_pool_size      4194304
__large_pool_size     4194304
__shared_pool_size  247463936

) and will put the rest in the buffer cache. Over time, if the pools need more, it'll steal from the buffer cache and increase them.

sga_target has to be less than or equal to sga_max_size. It depends on the OS how the memory is reserved, but basically your 1200/1600 would have you start with an SGA of 1,200 meg that could be grown by you to 1600m (using alter system):

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 1200M
sga_target                           big integer 1008M

SQL> alter system set sga_target = 1100m;

System altered.

SQL> alter system set sga_target = 1300m;
alter system set sga_target = 1300m
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: Specified value of sga_target greater than sga_max_size

Difference between SGA and PGA

1. The PGA is allocated when a process is created and deallocated when the process is terminated. In contrast to the SGA, which is shared by several processes, the PGA is an area used by only one process.

2. The SGA is allocated at instance startup; a PGA is allocated when a server process is started.

3. Sorts happen in the PGA (for example, a hash join builds its hash table in the PGA); most other shared operations happen in the SGA.

The SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle instance starts up. The SGA's size and function are controlled by initialization (INIT.ORA or SPFILE) parameters.

In general, the SGA consists of the following sub-components, as can be verified by querying V$SGAINFO:

SELECT * FROM v$sgainfo;

The common components are:

* Data buffer cache - cache data and index blocks for faster access.
* Shared pool - cache parsed SQL and PL/SQL statements.
* Dictionary Cache - information about data dictionary objects.
* Redo Log Buffer - committed transactions that are not yet written to
the redo log files.
* JAVA pool - caching parsed Java programs.
* Streams pool - cache Oracle Streams objects.
* Large pool - used for backups, UGAs, etc.

SQL> SHOW SGA

Total System Global Area  638670568 bytes
Fixed Size                   456424 bytes
Variable Size             503316480 bytes
Database Buffers          134217728 bytes
Redo Buffers                 679936 bytes

SQL> SELECT * FROM v$sga;

NAME                      VALUE
------------------------- ----------
Fixed Size                   456424
Variable Size             503316480
Database Buffers          134217728
Redo Buffers                 679936

The size of the buffer cache component is controlled by the DB_CACHE_SIZE parameter; the overall SGA size is governed by SGA_TARGET / SGA_MAX_SIZE.

The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area, hash area, session cursor cache, etc.

PGA areas can be sized manually by setting parameters like hash_area_size, sort_area_size, etc. To allow Oracle to auto-tune the PGA areas, set the WORKAREA_SIZE_POLICY parameter to AUTO and PGA_AGGREGATE_TARGET to the amount of memory that can be used for PGA.
This feature was introduced in Oracle 9i.
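Switching to automatic PGA management might be sketched as (the target value is illustrative):

```sql
-- Sketch: enable automatic PGA memory management (9i+).
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 512M;
```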

PGA usage statistics:

select * from v$pgastat;

Determine a good setting for pga_aggregate_target:

select * from v$pga_target_advice order by pga_target_for_estimate;

Show the maximum PGA usage per process:

select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem)
from v$process;

Number of blocks read per I/O in a multiblock read:
=======================================
db_file_multiblock_read_count    integer    16

Telling the optimizer to assume that 90% of index blocks are cached:

optimizer_index_caching          integer    90
