
Performance and Tuning

Oracle initialization parameters used in the compilation of PL/SQL units:

PLSQL_CCFLAGS
PLSQL_CODE_TYPE
PLSQL_DEBUG
PLSQL_OPTIMIZE_LEVEL
PLSQL_WARNINGS
NLS_LENGTH_SEMANTICS

These compiler settings (including the optimization level) are stored with each unit's metadata and can be
viewed in the ALL_PLSQL_OBJECT_SETTINGS view. They can also be changed for an existing unit using the
ALTER command.

Use:

1. Check data types: prefer PLS_INTEGER, BINARY_FLOAT and BINARY_DOUBLE where appropriate.
2. Use BINARY_FLOAT and BINARY_DOUBLE for floating-point arithmetic, because they use native
arithmetic instructions. (Operations on NUMBER and INTEGER variables require calls to library
routines.) These types are less suitable for financial code where accuracy is critical, because they
handle rounding differently than NUMBER.
3. Call functions efficiently.
4. Use regular expressions.
5. Use FORALL, BULK COLLECT INTO and RETURNING BULK COLLECT INTO (see the sketch after this list).
6. Use OUT NOCOPY and IN OUT NOCOPY for collections, big VARCHAR2 values or LOBs, provided the
program does not depend on the OUT parameters keeping their values when an exception is raised.
7. PL/SQL stops evaluating a logical expression as soon as the result can be determined. This is known
as "short-circuit evaluation". When multiple conditions are separated by AND or OR, put the least
expensive one first.
8. Be generous when declaring the size of a VARCHAR2 variable (the maximum is 32767). For declared
sizes of roughly 4000 and above, PL/SQL allocates the memory only as it is needed at run time, so a
large declaration does not waste memory.
9. Use the shared memory pool correctly. When you call a package subprogram for the first time, the
whole package is loaded into the shared pool. DBMS_SHARED_POOL can be used to pin and unpin objects
in memory:
dbms_shared_pool.keep(name IN VARCHAR2, flag IN CHAR DEFAULT 'P');
dbms_shared_pool.unkeep(name IN VARCHAR2, flag IN CHAR DEFAULT 'P');
10. Improve your code to avoid compiler warnings.
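
The following is a minimal sketch of item 5: FORALL for batched DML and BULK COLLECT for batched
fetches. The table bulk_demo and its columns are hypothetical and used only for illustration.

CREATE TABLE bulk_demo (
  id    NUMBER(10),
  descr VARCHAR2(50)
);

SET SERVEROUTPUT ON
DECLARE
  TYPE t_id_tab    IS TABLE OF bulk_demo.id%TYPE;
  TYPE t_descr_tab IS TABLE OF bulk_demo.descr%TYPE;
  l_ids    t_id_tab    := t_id_tab();
  l_descrs t_descr_tab := t_descr_tab();
  l_even   t_id_tab;
BEGIN
  -- Build the data to insert.
  FOR i IN 1 .. 100 LOOP
    l_ids.EXTEND;    l_ids(i)    := i;
    l_descrs.EXTEND; l_descrs(i) := 'Row ' || i;
  END LOOP;

  -- FORALL: one context switch for the whole batch instead of one per row.
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO bulk_demo VALUES (l_ids(i), l_descrs(i));

  -- BULK COLLECT: fetch all matching rows in a single operation.
  SELECT id
  BULK COLLECT INTO l_even
  FROM   bulk_demo
  WHERE  MOD(id, 2) = 0;

  DBMS_OUTPUT.put_line('Fetched ' || l_even.COUNT || ' rows.');
  ROLLBACK;
END;
/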

Avoid:

1. Badly written SQL statements
2. Poor programming practices
3. Inattention to PL/SQL basics
4. Misuse of shared memory

1. Avoid CPU overhead in PL/SQL code.

 Use appropriate indexes (function-based indexes, etc.).
 Use EXPLAIN PLAN to analyze the execution, or the SQL trace/TKPROF utility.
 Avoid full table scans.
 Use DBMS_XPLAN to display the plan.
Note: if a column is passed to a function within a SQL query, the query cannot use a regular index on that
column.

2. Issuing DDL (such as CREATE TABLE) from PL/SQL takes a lot of time; try to avoid it.
3. Minimize data type conversion.
 At run time, PL/SQL converts between different data types automatically.
Example: assigning a PLS_INTEGER variable to a NUMBER variable causes a conversion, because
their internal representations are different. So, whenever possible, choose data types
carefully to minimize implicit conversions.

Note: if a table column is of type NUMBER, convert the value to PLS_INTEGER in the code and keep that
type until the end. This improves performance because PLS_INTEGER uses more efficient hardware
arithmetic and requires less space than INTEGER and NUMBER. PLS_INTEGER and BINARY_INTEGER are
essentially identical. (A small timing sketch follows item 4.)

4. Avoid subtypes like INTEGER, NATURAL, NATURALN, POSITIVE, POSITIVEN and SIGNTYPE in
performance-critical code.
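
A minimal timing sketch for the note above: the same loop with a NUMBER counter and a PLS_INTEGER
counter. The loop count and variable names are arbitrary, and actual timings vary by version and platform.

SET SERVEROUTPUT ON
DECLARE
  l_num   NUMBER      := 0;
  l_pls   PLS_INTEGER := 0;
  l_start NUMBER;
BEGIN
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. 10000000 LOOP
    l_num := l_num + 1;             -- NUMBER arithmetic uses library routines
  END LOOP;
  DBMS_OUTPUT.put_line('NUMBER      : ' || (DBMS_UTILITY.get_time - l_start));

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. 10000000 LOOP
    l_pls := l_pls + 1;             -- PLS_INTEGER uses hardware arithmetic
  END LOOP;
  DBMS_OUTPUT.put_line('PLS_INTEGER : ' || (DBMS_UTILITY.get_time - l_start));
END;
/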

Explain Plan
EXPLAIN PLAN parses a query and records the "plan" that Oracle devises to execute it. By examining
this plan, you can find out if Oracle is picking the right indexes and joining your tables in the most efficient
manner. There are a few different ways to utilize Explain Plan. We will focus on using it
through SQL*Plus since most Oracle programmers have access to SQL*Plus.

Creating a Plan Table

The first thing you will need to do is make sure you have a table called PLAN_TABLE available in your
schema. The following script will create it for you if you don't have it already:

@?/rdbms/admin/utlxplan.sql

Explain Plan Syntax

EXPLAIN PLAN FOR your-sql-statement;

or
EXPLAIN PLAN SET STATEMENT_ID = statement_id FOR your-sql-statement;

Formatting the output

After running EXPLAIN PLAN, Oracle populates the PLAN_TABLE table with data that needs to be formatted
before being presented to the user in a more readable form. Several scripts exist for this; however, one of
the easiest methods available is to cast dbms_xplan.display to a table and select from it (see examples below).

Some Examples

SQL> EXPLAIN PLAN FOR select * from dept where deptno = 40;
Explained.
SQL> set linesize 132
SQL> SELECT * FROM TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2852011669
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("DEPTNO"=40)

14 rows selected.
The DBMS_XPLAN.DISPLAY function can accept three optional parameters: the plan table name, the
statement ID and a format string (for example 'BASIC', 'TYPICAL' or 'ALL'):

EXPLAIN PLAN SET STATEMENT_ID='TSH' FOR
SELECT *
FROM emp e, dept d
WHERE e.deptno = d.deptno
AND e.ename = 'SMITH';
SET LINESIZE 130
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','TSH','BASIC'));

Plan hash value: 3625962092

------------------------------------------------
| Id | Operation | Name |
------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | NESTED LOOPS | |
| 2 | NESTED LOOPS | |
| 3 | TABLE ACCESS FULL | EMP |
| 4 | INDEX UNIQUE SCAN | PK_DEPT |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT |
------------------------------------------------

12 rows selected.
Using SQL*Plus Autotrace
SQL*Plus also offers an AUTOTRACE facility that will display the query plan and execution statistics as each
query executes. Example:

SQL> SET AUTOTRACE ON


SQL> select * from dept where deptno = 40;

DEPTNO DNAME LOC
---------- -------------- -------------
40 OPERATIONS BOSTON

Execution Plan
----------------------------------------------------------
Plan hash value: 2852011669
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DEPTNO"=40)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
443 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Oracle Function-Based Indexes
Traditionally, performing a function on an indexed column in the where clause of a query guaranteed an
index would not be used. Oracle 8i introduced Function-Based Indexes to counter this problem. Rather
than indexing a column, you index the function on that column, storing the product of the function, not the
original column data. When a query is passed to the server that could benefit from that index, the query is
rewritten to allow the index to be used. The following code samples give an example of the use of
Function-Based Indexes.

 Build Test Table
 Build Regular Index
 Build Function-Based Index
 Concatenated Columns

Build Test Table


First we build a test table and populate it with enough data so that use of an index would be
advantageous.

CREATE TABLE user_data (
  id         NUMBER(10)   NOT NULL,
  first_name VARCHAR2(40) NOT NULL,
  last_name  VARCHAR2(40) NOT NULL,
  gender     VARCHAR2(1),
  dob        DATE);

BEGIN
  FOR cur_rec IN 1 .. 2000 LOOP
    IF MOD(cur_rec, 2) = 0 THEN
      INSERT INTO user_data
      VALUES (cur_rec, 'John' || cur_rec, 'Doe', 'M', SYSDATE);
    ELSE
      INSERT INTO user_data
      VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe', 'F', SYSDATE);
    END IF;
    COMMIT;
  END LOOP;
END;
/

EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);
At this point the table is not indexed so we would expect a full table scan for any query.

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------

Build Regular Index


If we now create a regular index on the FIRST_NAME column we see that the index is not used.

CREATE INDEX first_name_idx ON user_data (first_name);

EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

SET AUTOTRACE ON

SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 2489064024

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 20 | 540 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| USER_DATA | 20 | 540 | 5 (0)| 00:00:01 |
-------------------------------------------------------------------------------

Build Function-Based Index


If we now replace the regular index with a function-based index on the FIRST_NAME column we see that
the index is used.

DROP INDEX first_name_idx;

CREATE INDEX first_name_idx ON user_data (UPPER(first_name));


EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);
-- Later releases set these by default.

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;


SET AUTOTRACE ON

SELECT *
FROM user_data
WHERE UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 36 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| USER_DATA | 1 | 36 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | FIRST_NAME_IDX | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

The QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED parameters must be set or the
server will not be able to rewrite the queries, and will therefore not be able to use the new index. Later
releases have them enabled by default.

Concatenated Columns
This method works for concatenated indexes also.

DROP INDEX first_name_idx;


CREATE INDEX first_name_idx ON user_data (gender, UPPER(first_name), dob);

EXEC DBMS_STATS.gather_table_stats(USER, 'user_data', cascade => TRUE);

SET AUTOTRACE ON
SELECT *
FROM user_data
WHERE gender = 'M'
AND UPPER(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
Plan hash value: 1309354431

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 36 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| USER_DATA | 1 | 36 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | FIRST_NAME_IDX | 1 | | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Remember, function-based indexes require more effort to maintain than regular indexes, so having
concatenated indexes in this manner may increase the incidence of index maintenance compared to a
function-based index on a single column.

For example, suppose you create the following function-based index:

CREATE INDEX emp_total_sal_idx
ON employees (12 * salary * commission_pct, salary, commission_pct);

The database can use the preceding index when processing queries such as Example 3-6 (partial
sample output included).

Example 3-6 Query Containing an Arithmetic Expression


SELECT employee_id, last_name, first_name,
12*salary*commission_pct AS "ANNUAL SAL"
FROM employees
WHERE (12 * salary * commission_pct) < 30000
ORDER BY "ANNUAL SAL" DESC;

EMPLOYEE_ID LAST_NAME FIRST_NAME ANNUAL SAL
----------- ------------------------- -------------------- ----------
159 Smith Lindsey 28800
151 Bernstein David 28500
152 Hall Peter 27000
160 Doran Louise 27000
175 Hutton Alyssa 26400
149 Zlotkey Eleni 25200
169 Bloom Harrison 24000

A function-based index is also useful for indexing only specific rows in a table.

For example, the cust_valid column in the sh.customers table has either I or A as a value. To index only
the A rows, you could write a function that returns a null value for any rows other than the A rows. You
could create the index as follows:

CREATE INDEX cust_valid_idx
  ON customers ( CASE cust_valid WHEN 'A' THEN 'A' END );

Optimization with Function-Based Indexes

The optimizer can use an index range scan on a function-based index for queries with expressions
in the WHERE clause. The range scan access path is especially beneficial when the predicate
(WHERE clause) has low selectivity. In Example 3-6 the optimizer can use an index range scan if an
index is built on the expression 12*salary*commission_pct.

A virtual column is useful for speeding access to data derived from expressions. For example, you could
define a virtual column annual_sal as 12*salary*commission_pct and create a function-based index
on annual_sal, as sketched below.
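
A minimal sketch of the virtual column approach. It assumes Oracle 11g or later syntax and an employees
table with salary and commission_pct columns; the column and index names are illustrative only.

-- Add a virtual column derived from the expression, then index it.
ALTER TABLE employees ADD (annual_sal AS (12 * salary * commission_pct));

CREATE INDEX employees_annual_sal_idx ON employees (annual_sal);

-- Queries that reference the virtual column (or the matching expression)
-- may now use the index.
SELECT employee_id, last_name, annual_sal
FROM   employees
WHERE  annual_sal < 30000;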

The optimizer performs expression matching by parsing the expression in a SQL statement and then
comparing the expression trees of the statement and the function-based index. This comparison is case-
insensitive and ignores blank spaces.

NOCOPY Hint to Improve Performance of OUT and IN OUT Parameters in PL/SQL Code
Background
Oracle has two methods of passing OUT and IN OUT parameters in PL/SQL code:

 Pass By Value : The default action is to create a temporary buffer (formal parameter), copy the
data from the parameter variable (actual parameter) to that buffer and work on the temporary
buffer during the lifetime of the procedure. On successful completion of the procedure, the
contents of the temporary buffer are copied back into the parameter variable. In the event of an
exception occurring, the copy back operation does not happen.
 Pass By Reference : Using the NOCOPY hint tells the compiler to use pass by reference, so no
temporary buffer is needed and no copy forward and copy back operations happen. Instead, any
modification to the parameter values are written directly to the parameter variable (actual
parameter).

Under normal circumstances you probably wouldn't notice the difference between the two methods, but
once you start to pass large or complex data types (LOBs, XMLTYPEs, collections etc.) the difference
between the two methods can become quite considerable. The presence of the temporary buffer means
pass by value requires twice the memory for every OUT and IN OUT parameter, which can be a problem
when using large parameters. In addition, the time it takes to copy the data to the temporary buffer and
back to the parameter variable can be quite considerable.
The following tests compare the elapsed time and memory consumption of a single call to test procedures
passing a large collection as OUT and IN OUT parameters.
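
The original test harness is not reproduced in these notes. The following is a simplified, hedged sketch
(elapsed time only) with an assumed collection type and size, comparing an IN OUT parameter passed by
value against one passed by reference with NOCOPY.

SET SERVEROUTPUT ON
DECLARE
  TYPE t_tab IS TABLE OF VARCHAR2(4000);
  l_tab   t_tab := t_tab();
  l_start NUMBER;

  -- Pass by value: the collection is copied in and copied back.
  PROCEDURE by_value (p_tab IN OUT t_tab) IS
  BEGIN
    NULL;
  END;

  -- Pass by reference: NOCOPY avoids the copy in/copy back.
  PROCEDURE by_reference (p_tab IN OUT NOCOPY t_tab) IS
  BEGIN
    NULL;
  END;
BEGIN
  -- Build a reasonably large collection.
  FOR i IN 1 .. 10000 LOOP
    l_tab.EXTEND;
    l_tab(l_tab.LAST) := RPAD('x', 4000, 'x');
  END LOOP;

  l_start := DBMS_UTILITY.get_time;
  by_value(l_tab);
  DBMS_OUTPUT.put_line('Pass by value     : ' || (DBMS_UTILITY.get_time - l_start));

  l_start := DBMS_UTILITY.get_time;
  by_reference(l_tab);
  DBMS_OUTPUT.put_line('Pass by reference : ' || (DBMS_UTILITY.get_time - l_start));
END;
/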

Issues
There are a number of issues associated with using the NOCOPY hint that you should be aware of before
adding it to all your OUT and IN OUT parameters.

 NOCOPY is a hint. There are a number of circumstances where the compiler can ignore the hint,
as described here.
 If you are testing the contents of the parameter as a measure of successful completion of a
procedure, adding NOCOPY may give unexpected results. For example, suppose I pass the
value of NULL and assume if the parameter returns with a NOT NULL value the procedure has
worked. This will work without NOCOPY, since the copy back operation will not happen in the
event of an exception being raised. If I add NOCOPY, all changes are instantly written to the
actual parameter, so exceptions will not prevent a NOT NULL value being returned. This may
seem like a problem, but in my opinion if this affects you it is an indication of bad coding practice
on your part. Failure should be indicated by raising an exception, or at worst using a status flag,
rather than testing for values.
 Parameter Aliasing. If you use a single variable as an actual parameter for
multiple OUT and/or IN OUT parameters in a procedure, using a mix of pass by value and pass
by reference, you may get unexpected results. This is because the final copy back from the pass
by value parameters will wipe out any changes to the pass by reference parameters. This
situation can be compounded further if the actual parameter is a global variable that can be
referenced directly from within the procedure. Although the manual describes possible issues,
once again it is an indication that you are writing terrible code, rather than a limitation of pass by
reference. You can read more about parameter aliasing here; a small sketch follows this list.
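
A minimal, hedged sketch of parameter aliasing; the procedure and variable names are made up. The same
variable is passed both to a by-value OUT parameter and to a NOCOPY IN OUT parameter, and the copy-back
from the by-value parameter on completion overwrites the change made through the reference.

SET SERVEROUTPUT ON
DECLARE
  l_value VARCHAR2(20) := 'START';

  PROCEDURE aliasing_demo (p_copy   OUT VARCHAR2,              -- pass by value
                           p_nocopy IN OUT NOCOPY VARCHAR2) IS -- pass by reference
  BEGIN
    p_nocopy := 'SET VIA NOCOPY';  -- written straight to l_value
    p_copy   := 'SET VIA COPY';    -- written to a temporary buffer
  END;
BEGIN
  aliasing_demo(l_value, l_value);
  -- The copy-back of p_copy happens on successful completion, so its value
  -- is typically what remains in l_value.
  DBMS_OUTPUT.put_line('Final value: ' || l_value);
END;
/
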
Short-Circuit Evaluation in PL/SQL
As soon as the final outcome of a boolean expression can be determined, PL/SQL stops evaluating the
expression. This is known as short-circuit evaluation and it can be used to improve the performance of
some boolean expressions in your PL/SQL.

 Short-Circuit Evaluation of OR
 Short-Circuit Evaluation of AND

Short-Circuit Evaluation of OR

If the left side of an OR expression is TRUE, the whole expression is TRUE. We know this because:

 TRUE OR FALSE = TRUE
 TRUE OR TRUE = TRUE
 TRUE OR NULL = TRUE

So placing the least expensive tests to the left of boolean expressions can potentially improve
performance as the right hand side of the expression may not need to be evaluated.
Imagine we have a function that returns a boolean value. The amount of processing in the function is
significant, making it take a long time to complete. The following function fakes this by calling
the DBMS_LOCK.SLEEP procedure.

CONN / AS SYSDBA
GRANT EXECUTE ON DBMS_LOCK TO test;
CONN test/test
CREATE OR REPLACE FUNCTION slow_function (p_number IN NUMBER)
  RETURN BOOLEAN AS
BEGIN
-- Mimic a slow function.
DBMS_LOCK.sleep(0.5);
RETURN TRUE;
END;
/

SHOW ERRORS

Depending on the boolean expression used, we may be able to avoid calling the function altogether,
giving our code a significant performance improvement.
SET SERVEROUTPUT ON
DECLARE
  l_loops   NUMBER := 10;
  l_start   NUMBER;
  l_boolean BOOLEAN := TRUE;
BEGIN
  -- Time normal OR.
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    IF slow_function(i) OR l_boolean THEN
      -- Do nothing.
      NULL;
    END IF;
  END LOOP;
  DBMS_OUTPUT.put_line('Normal OR : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time short-circuit OR.
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    IF l_boolean OR slow_function(i) THEN
      -- Do nothing.
      NULL;
    END IF;
  END LOOP;
  DBMS_OUTPUT.put_line('Short circuit OR : ' || (DBMS_UTILITY.get_time - l_start));
END;
/
Normal OR : 498
Short circuit OR : 0
PL/SQL procedure successfully completed.

SQL>

As expected, if the call to the slow function is placed on the right-hand side of the expression, it is not
executed, so the code is much quicker.

Short-Circuit Evaluation of AND

If the left side of an AND expression is FALSE, the whole expression is FALSE. We know this because:

 FALSE AND FALSE = FALSE
 FALSE AND TRUE = FALSE
 FALSE AND NULL = FALSE

Once again, placing the least expensive tests to the left of boolean expressions can potentially improve
performance as the right hand side of the expression may not need to be evaluated. We can demonstrate
this using the slow function again.

SET SERVEROUTPUT ON
DECLARE
  l_loops   NUMBER := 10;
  l_start   NUMBER;
  l_boolean BOOLEAN := FALSE;
BEGIN
  -- Time normal AND.
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    IF slow_function(i) AND l_boolean THEN
      -- Do nothing.
      NULL;
    END IF;
  END LOOP;
  DBMS_OUTPUT.put_line('Normal AND : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time short-circuit AND.
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    IF l_boolean AND slow_function(i) THEN
      -- Do nothing.
      NULL;
    END IF;
  END LOOP;
  DBMS_OUTPUT.put_line('Short circuit AND: ' || (DBMS_UTILITY.get_time - l_start));
END;
/

Normal AND : 499
Short circuit AND: 0

PL/SQL procedure successfully completed.

SQL>

As expected, if the call to the slow function is placed on the right-hand side of the expression, it is not
executed, so the code is much quicker.

DML RETURNING INTO Clause

The RETURNING INTO clause allows us to return column values for rows affected by DML statements.
The following test table is used to demonstrate this clause.
DROP TABLE t1;
DROP SEQUENCE t1_seq;
CREATE TABLE t1 (
id NUMBER(10),
description VARCHAR2(50),
CONSTRAINT t1_pk PRIMARY KEY (id)
);

CREATE SEQUENCE t1_seq;


INSERT INTO t1 VALUES (t1_seq.nextval, 'ONE');
INSERT INTO t1 VALUES (t1_seq.nextval, 'TWO');
INSERT INTO t1 VALUES (t1_seq.nextval, 'THREE');

COMMIT;

When we insert data using a sequence to generate our primary key value, we can return the primary key
value as follows.

SET SERVEROUTPUT ON
DECLARE
l_id t1.id%TYPE;
BEGIN
INSERT INTO t1 VALUES (t1_seq.nextval, 'FOUR')
RETURNING id INTO l_id;
COMMIT;
DBMS_OUTPUT.put_line('ID=' || l_id);
END;
/
ID=4
PL/SQL procedure successfully completed.

SQL>

The syntax is also available for update and delete statements.

SET SERVEROUTPUT ON
DECLARE
l_id t1.id%TYPE;
BEGIN
UPDATE t1
SET description = description
WHERE description = 'FOUR'
RETURNING id INTO l_id;

DBMS_OUTPUT.put_line('UPDATE ID=' || l_id);

DELETE FROM t1
WHERE description = 'FOUR'
RETURNING id INTO l_id;

DBMS_OUTPUT.put_line('DELETE ID=' || l_id);

COMMIT;
END;
/
UPDATE ID=4
DELETE ID=4

PL/SQL procedure successfully completed.

SQL>

When DML affects multiple rows we can still use the RETURNING INTO, but now we must return the
values into a collection using the BULK COLLECT clause.

SET SERVEROUTPUT ON
DECLARE
TYPE t_tab IS TABLE OF t1.id%TYPE;
l_tab t_tab;
BEGIN
UPDATE t1
SET description = description
RETURNING id BULK COLLECT INTO l_tab;

FOR i IN l_tab.first .. l_tab.last LOOP
DBMS_OUTPUT.put_line('UPDATE ID=' || l_tab(i));
END LOOP;

COMMIT;
END;
/
UPDATE ID=1
UPDATE ID=2
UPDATE ID=3

PL/SQL procedure successfully completed.

SQL>

We can also use the RETURNING INTO clause in combination with bulk binds.

SET SERVEROUTPUT ON
DECLARE
TYPE t_desc_tab IS TABLE OF t1.description%TYPE;
TYPE t_tab IS TABLE OF t1%ROWTYPE;
l_desc_tab t_desc_tab := t_desc_tab('FIVE', 'SIX', 'SEVEN');
l_tab t_tab;
BEGIN

FORALL i IN l_desc_tab.first .. l_desc_tab.last
INSERT INTO t1 VALUES (t1_seq.nextval, l_desc_tab(i))
RETURNING id, description BULK COLLECT INTO l_tab;

FOR i IN l_tab.first .. l_tab.last LOOP
DBMS_OUTPUT.put_line('INSERT ID=' || l_tab(i).id ||
' DESC=' || l_tab(i).description);
END LOOP;

COMMIT;
END;
/
INSERT ID=5 DESC=FIVE
INSERT ID=6 DESC=SIX
INSERT ID=7 DESC=SEVEN

PL/SQL procedure successfully completed.

SQL>

This functionality is also available from dynamic SQL.

SET SERVEROUTPUT ON
DECLARE
TYPE t_tab IS TABLE OF t1.id%TYPE;
l_tab t_tab;
BEGIN
EXECUTE IMMEDIATE 'UPDATE t1
SET description = description
RETURNING id INTO :l_tab'
RETURNING BULK COLLECT INTO l_tab;

FOR i IN l_tab.first .. l_tab.last LOOP
DBMS_OUTPUT.put_line('UPDATE ID=' || l_tab(i));
END LOOP;

COMMIT;
END;
/
UPDATE ID=1
UPDATE ID=2
UPDATE ID=3

PL/SQL procedure successfully completed.

SQL>
