
Mark Townsend

Director, Product Management


for

Bryn Llewellyn
PL/SQL Product Manager, Oracle Corporation

Less Pain, More Gain


Use the new PL/SQL features in Oracle9i Database to get better programs by writing fewer code lines
paper #30720, OracleWorld, Copenhagen, Tue 25-June-2002

But before I start

OTN homepage > Technologies > PL/SQL


otn.oracle.com/tech/pl_sql

9.2.0 Enhancements
This presentation focuses on enhancements introduced in Oracle9i Database Release 2. We'll refer to this as Version 9.2.0 for brevity. Major enhancements were also introduced in Oracle9i Database Release 1; we'll refer to that as Version 9.0.1. This succeeded the last release of Oracle8i, which we'll refer to as Version 8.1.7.

Recap of 9.0.1 Enhancements


Some of you will be upgrading directly from Version 8.1.7 to Version 9.2.0. You'll get PL/SQL benefits that go way beyond what's discussed today. The new Version 9.0.1 features together with the new Version 9.2.0 features allow you to write significantly faster PL/SQL applications with dramatically fewer lines of code.

Recap of 9.0.1 Enhancements


Native compilation; Cursor expressions; Table functions; Multilevel collections; Exception handling in bulk binding DML operations; Bulk binding in native dynamic SQL; CASE statements and CASE expressions

Recap of 9.0.1 Enhancements


VARCHAR2 <-> NVARCHAR2 (etc) assignment; VARCHAR2 <-> CLOB assignment; SUBSTR and INSTR w/ CLOB; Seamless access to new SQL features (eg MERGE, multitable insert, new time datatypes); OO: schema evolution, inheritance support; Utl_Http, Utl_Raw, Utl_File enhanced; Nineteen new packages

Recap of 9.0.1 Enhancements


Manipulating records faster by up to 5x; Inter-package calls faster by up to 1.5x; Utl_Tcp native implementation (underlies Utl_Http, Utl_Smtp); Common SQL parser

Summary of 9.2.0 Enhancements


Index-by-varchar2 tables, aka associative arrays
RECORD binds in DML and in BULK SELECTs Utl_File enhancements

GUI debugging via JDeveloper Version 9.0.3

Associative arrays
declare
  type word_list is table of varchar2(20) index by varchar2(20);
  the_list word_list;
begin
  the_list ( 'book' ) := 'livre';
  the_list ( 'tree' ) := 'arbre';
end;

Associative arrays
idx varchar2(20);
...
/* loop in lexical sort order */
idx := the_list.First();
while idx is not null loop
  Show ( idx || ' : ' || the_list(idx) );
  idx := the_list.Next(idx);
end loop;

Associative arrays
These three flavors of index-by tables are now supported

index by binary_integer index by pls_integer index by varchar2

the term associative array is now used for all these variants in line with common usage in other 3GLs
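
For reference, here is a minimal sketch (the type and variable names are illustrative only, not from the slides) showing the three declaration flavors side by side:

declare
  /* The three index-by flavors */
  type by_binary is table of number index by binary_integer;
  type by_pls    is table of number index by pls_integer;
  type by_vc2    is table of number index by varchar2(30);
  t_binary by_binary;
  t_pls    by_pls;
  t_vc2    by_vc2;
begin
  t_binary(1)    := 10;
  t_pls(2)       := 20;
  t_vc2('three') := 30;   -- the key is now a string, not a number
end;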

Lookup caching scenario


Requirement to look up a value via a unique non-numeric key is a generic computational problem. Oracle9i Database provides a solution with SQL and a B*-tree index (!) But performance can be improved by using an explicit PL/SQL implementation. True even before the new index-by-varchar2 table.

Lookup caching scenario


Scenarios characterized by very frequent lookup in a relatively small set of values, usually in connection with flattening a relational representation for reporting or for UI presentation. We'll use a simple, neutral scenario (so as not to complicate the following examples with distracting detail).

Lookup caching scenario


select * from translations;

ENGLISH              FRENCH
-------------------- --------------------
computer             ordinateur
tree                 arbre
...
furniture            meubles

Allow lookup from French to English Allow efficient addition of new vocabulary pairs
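
The slides show only the query output, so here is a minimal sketch of the table behind the scenario; the column lengths and constraints are assumptions:

create table translations (
  english varchar2(20) constraint translations_pk primary key,
  french  varchar2(20) not null
);

insert into translations ( english, french ) values ( 'computer',  'ordinateur' );
insert into translations ( english, french ) values ( 'tree',      'arbre' );
insert into translations ( english, french ) values ( 'furniture', 'meubles' );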

The Vocab package


We'll abstract the solution as a package...
package Vocab is
  function Lookup ( p_english in varchar2 ) return varchar2;
  procedure New_Pair ( p_english in varchar2, p_french in varchar2 );
end Vocab;

The Vocab package


Must support...
begin
  Show ( Vocab.Lookup ( 'tree' ) );
  ...
  Vocab.New_Pair ( 'garden', 'jardin' );
  Show ( Vocab.Lookup ( 'garden' ) );
end;

package body Vocab is
  /* Naïve pure SQL approach */
  function Lookup ( p_english in varchar2 ) return varchar2 is
    v_french translations.french%type;
  begin
    select french into v_french from translations where english = p_english;
    return v_french;
  end;
  ...
end Vocab;

package body Vocab is
  /* Naïve pure SQL approach */
  ...
  procedure New_Pair ( p_english in varchar2, p_french in varchar2 ) is
  begin
    insert into translations ( english, french ) values ( p_english, p_french );
  end New_Pair;
end Vocab;

Naïve pure SQL


Concerned only about correctness of behavior and ease of algorithm design Accept the performance we get Each time Lookup is invoked, we make a round trip between PL/SQL and SQL This frequently repeated context switch can become significantly expensive

package body Vocab is
  /* Linear search in index-by-binary_integer table */
  type word_list is table of translations%rowtype index by binary_integer;
  g_english_french word_list;
  ...
begin /* package initialization */
  declare
    idx binary_integer := 0;
  begin
    for j in ( select english, french from translations ) loop
      idx := idx + 1;
      g_english_french(idx).english := j.english;
      g_english_french(idx).french  := j.french;
    end loop;
  end;
end Vocab;

package body Vocab is
  /* Linear search in index-by-binary_integer table */
  ...
  function Lookup ( p_english in varchar2 ) return varchar2 is
  begin
    for j in 1..g_english_french.Last() loop
      if g_english_french(j).english = p_english then
        return g_english_french(j).french;
      end if;
    end loop;
  end Lookup;
  ...
end Vocab;

package body Vocab is
  /* Linear search in index-by-binary_integer table */
  ...
  procedure New_Pair ( p_english in varchar2, p_french in varchar2 ) is
    idx binary_integer;
  begin
    idx := g_english_french.Last() + 1;
    g_english_french(idx).english := p_english;
    g_english_french(idx).french  := p_french;
    insert into translations ( english, french ) values ( p_english, p_french );
  end New_Pair;
  ...
end Vocab;

Linear search in index-by-binary_integer table


Algorithm is still trivial, but does require some explicit coding. The entire table contents are loaded into an index-by-binary_integer table. Well-known disadvantage that on average half the elements are examined before we find a match.

Binary chop search in index-by-binary_integer table


Possible improvement: maintain the elements in lexical sort order Compare the search target to the half-way element to determine which half it's in Repeat this test recursively on the relevant half
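
The slides don't show this variant; as a rough sketch (assuming g_english_french is kept dense, indexed from 1, and sorted by english), the Lookup half might look like this - New_Pair, which must maintain the sort order, is the awkward part discussed next:

function Lookup ( p_english in varchar2 ) return varchar2 is
  lo  binary_integer := 1;
  hi  binary_integer := g_english_french.Last();
  mid binary_integer;
begin
  while lo <= hi loop
    mid := trunc ( ( lo + hi ) / 2 );
    if g_english_french(mid).english = p_english then
      return g_english_french(mid).french;            -- found
    elsif g_english_french(mid).english < p_english then
      lo := mid + 1;                                  -- search the upper half
    else
      hi := mid - 1;                                  -- search the lower half
    end if;
  end loop;
  return null;                                        -- not found
end Lookup;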

Binary chop search in index-by-binary_integer table


Requires more elaborate coding - and testing. Poses a design problem for the New_Pair procedure: either the array must be opened up at the insertion point, copying all later elements to the next slot, or it must be created sparse.

Binary chop search in index-by-binary_integer table


Neither of these approaches is very comfortable The sparse alternative is not complete until we cater for the corner case where a gap fills up

package body Vocab is
  /* Hash-based lookup in index-by-binary_integer table */
  hash binary_integer;
  g_hash_base constant number := 100;
  g_hash_size constant number := 100;
  type word_list is table of translations.french%type index by binary_integer;
  g_english_french word_list;
  ...
begin /* package initialization */
  begin
    for j in ( select english, french from translations ) loop
      hash := Dbms_Utility.Get_Hash_Value ( j.english, g_hash_base, g_hash_size );
      g_english_french(hash) := j.french;
    end loop;
  end;
end Vocab;

package body Vocab is
  /* Hash-based lookup in index-by-binary_integer table */
  ...
  function Lookup ( p_english in varchar2 ) return varchar2 is
  begin
    hash := Dbms_Utility.Get_Hash_Value ( p_english, g_hash_base, g_hash_size );
    return g_english_french(hash);
  end Lookup;
  ...
end Vocab;

package body Vocab is
  /* Hash-based lookup in index-by-binary_integer table */
  ...
  procedure New_Pair ( p_english in varchar2, p_french in varchar2 ) is
  begin
    hash := Dbms_Utility.Get_Hash_Value ( p_english, g_hash_base, g_hash_size );
    g_english_french(hash) := p_french;
    insert into translations ( english, french ) values ( p_english, p_french );
  end New_Pair;
  ...
end Vocab;

Hash-based lookup in index-by-binary_integer table


The algorithm as shown is too naïve for real-world use. No guarantee that two distinct values for the name IN parameter to Get_Hash_Value will always produce distinct hash values. To be robust, a collision avoidance scheme must be implemented.

Hash-based lookup in index-by-binary_integer table


Oracle provides no specific support for collision avoidance in Get_Hash_Value Solving this is a non-trivial design, implementation and testing task Probable that the resulting index-by-binary_integer table will be quite sparse Will discuss this later
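
One possible scheme (not from the slides) is linear probing: store the english key alongside the french value so a hit can be verified, and walk forward to the next slot on a collision. A rough sketch, assuming g_english_french is re-declared as a table of translations%rowtype and that New_Pair probes the same way when it populates the table:

function Lookup ( p_english in varchar2 ) return varchar2 is
  hash binary_integer;
begin
  hash := Dbms_Utility.Get_Hash_Value ( p_english, g_hash_base, g_hash_size );
  while g_english_french.Exists(hash) loop
    if g_english_french(hash).english = p_english then
      return g_english_french(hash).french;   -- verified hit
    end if;
    hash := hash + 1;                          -- collision: probe the next slot
  end loop;
  return null;                                 -- not found
end Lookup;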

Purpose-built B*-tree structure in PL/SQL


Study the relevant computer science textbooks. Implement a B*-tree structure in PL/SQL, horror of wheel re-invention notwithstanding! Very far from trivial - certainly too long and complex for inclusion here. But at 9.2.0, we don't need to entertain that uncomfortable thought: Oracle does this for us behind the scenes!

package body Vocab is
  /* Direct lookup in index-by-varchar2 table */
  type word_list is table of translations.french%type
    index by translations.english%type;
  g_english_french word_list;
  ...
begin /* package initialization */
  for j in ( select english, french from translations ) loop
    g_english_french( j.english ) := j.french;
  end loop;
end Vocab;

package body Vocab is
  /* Direct lookup in index-by-varchar2 table */
  ...
  function Lookup ( p_english in varchar2 ) return varchar2 is
  begin
    return g_english_french( p_english );
  end Lookup;
  ...
end Vocab;

package body Vocab is
  /* Direct lookup in index-by-varchar2 table */
  ...
  procedure New_Pair ( p_english in varchar2, p_french in varchar2 ) is
  begin
    g_english_french( p_english ) := p_french;
    insert into translations ( english, french ) values ( p_english, p_french );
  end New_Pair;
  ...
end Vocab;

Direct lookup in index-by-varchar2 table


Uses precisely the B*-tree organization of the values, but does so implicitly via the new language feature. Can think of the index-by-varchar2 table as the in-memory PL/SQL version of the schema-level index-organized table.

index-by-varchar2 table
Optimized for efficiency of lookup on a non-numeric key. Notion of sparseness is not really applicable. index-by-*_integer table (now *_integer can be either pls_integer or binary_integer) is optimized for compactness of storage on the assumption that the data is dense. Sometimes better to represent a numeric key as an index-by-varchar2 table via a To_Char conversion.
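
A minimal sketch of that To_Char technique for a sparse numeric key (the names are illustrative only):

declare
  type amounts_t is table of number index by varchar2(20);
  v_amounts     amounts_t;
  v_customer_id number := 1000000123;   -- keys are sparse and very large
begin
  v_amounts ( To_Char ( v_customer_id ) ) := 99.95;
  if v_amounts.Exists ( To_Char ( v_customer_id ) ) then
    null;  -- found
  end if;
end;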

Associative arrays: summary


index-by-varchar2 table is a major enhancement. Allows a class of very common programming tasks to be implemented much more performantly than was possible pre Version 9.2.0, with very much less design and testing, in dramatically fewer lines of code.

Associative arrays: summary


index-by-pls_integer table is a minor enhancement that allows a clear coding guideline for all new projects never use BINARY_INTEGER except where it's required to match the type in an existing API

Summary of 9.2.0 Enhancements


Index-by-varchar2 tables, aka associative arrays RECORD binds in DML and in BULK SELECTs
Utl_File enhancements

GUI debugging via JDeveloper Version 9.0.3

RECORD binds
PL/SQL RECORD datatype corresponds to a row in a schema-level table. Natural construct when manipulating table rows programmatically, especially when a row from one table is manipulated programmatically, and then stored (via INSERT or UPDATE) in another table with the same shape.

RECORD binds
Declaration is compact, using mytable%rowtype. Guaranteed to match the corresponding schema-level template. Immune to schema-level changes in definition of the shape of the table. RECORD can be used as subprogram parameter, giving compact and guaranteed correct notation and allowing optimizations in the implementation of parameter passing.
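
For illustration (using the employees table that appears later in this paper), a sketch of a %rowtype record declared, populated, and passed as a subprogram parameter; Audit_Row is a hypothetical procedure:

declare
  v_emprec employees%rowtype;          -- always matches the table's current shape

  procedure Audit_Row ( p_emprec in employees%rowtype ) is
  begin
    null;  -- eg write the row to an audit table of the same shape
  end Audit_Row;
begin
  select * into v_emprec from employees where employee_id = 100;
  Audit_Row ( v_emprec );
end;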

RECORD binds
SQL-PL/SQL interface allows a syntax which does not list the columns of the source/target table explicitly Again allowing for robust code which has a greater degree of schema-independence

RECORD binds
Reduce effort of writing the code Increase the chances of its correctness But... use of RECORDs in the SQL-PL/SQL interface was greatly restricted pre Version 9.2.0 Supported only for single row SELECT Thus... the listed advantages were not yet capable of being realized

RECORD binds
Version 9.2.0 adds support for BULK SELECT in both Static and Native Dynamic SQL, ie full support for all flavors of SELECT. Adds support (with some minor restrictions) for all Static SQL flavors of INSERT, DELETE and UPDATE: INSERT; DELETE ... RETURNING; UPDATE ... RETURNING; UPDATE ... SET ROW

procedure P ( p_date in date ) is
  /* BULK SELECT into a RECORD */
  type emprec_tab_t is table of employees%rowtype index by pls_integer;
  v_emprecs emprec_tab_t;
  cursor cur is select * from employees where hire_date >= p_date;
begin
  open cur;
  fetch cur bulk collect into v_emprecs limit 10;
  close cur;
  ...
end P;

procedure P ( p_date in date ) is
  /* BULK SELECT into a RECORD w/ dynamic SQL */
  ...
  cur sys_refcursor;
begin
  open cur for 'select * from employees where hire_date >= :the_date' using p_date;
  fetch cur bulk collect into v_emprecs limit 10;
  close cur;
  ...
end P;

BULK SELECT with RECORD bind


What did this look like pre Version 9.2.0 ?

declare
  type employee_ids_t    is table of employees.employee_id%type    index by binary_integer;
  type first_names_t     is table of employees.first_name%type     index by binary_integer;
  type last_names_t      is table of employees.last_name%type      index by binary_integer;
  type emails_t          is table of employees.email%type          index by binary_integer;
  type phone_numbers_t   is table of employees.phone_number%type   index by binary_integer;
  type hire_dates_t      is table of employees.hire_date%type      index by binary_integer;
  type job_ids_t         is table of employees.job_id%type         index by binary_integer;
  type salarys_t         is table of employees.salary%type         index by binary_integer;
  type commission_pcts_t is table of employees.commission_pct%type index by binary_integer;
  type manager_ids_t     is table of employees.manager_id%type     index by binary_integer;
  type department_ids_t  is table of employees.department_id%type  index by binary_integer;

  v_employee_ids     employee_ids_t;
  v_first_names      first_names_t;
  v_last_names       last_names_t;
  v_emails           emails_t;
  v_phone_numbers    phone_numbers_t;
  v_hire_dates       hire_dates_t;
  v_job_ids          job_ids_t;
  v_salarys          salarys_t;
  v_commission_pcts  commission_pcts_t;
  v_manager_ids      manager_ids_t;
  v_department_ids   department_ids_t;

  type emprec_tab_t is table of employees%rowtype index by pls_integer;
  v_emprecs emprec_tab_t;
  ...
begin
  ...
end;

declare
  ...
  cursor cur is
    select employee_id, first_name, last_name, email, phone_number,
           hire_date, job_id, salary, commission_pct, manager_id, department_id
    from   employees
    where  hire_date >= '25-JUN-97';
begin
  ...
end;

declare
  ...
begin
  open cur;
  fetch cur bulk collect into
    v_employee_ids, v_first_names, v_last_names, v_emails, v_phone_numbers,
    v_hire_dates, v_job_ids, v_salarys, v_commission_pcts, v_manager_ids,
    v_department_ids
    limit 10;
  close cur;

  for j in 1..v_employee_ids.Last loop
    v_emprecs(j).employee_id    := v_employee_ids(j);
    v_emprecs(j).first_name     := v_first_names(j);
    v_emprecs(j).last_name      := v_last_names(j);
    v_emprecs(j).email          := v_emails(j);
    v_emprecs(j).phone_number   := v_phone_numbers(j);
    v_emprecs(j).hire_date      := v_hire_dates(j);
    v_emprecs(j).job_id         := v_job_ids(j);
    v_emprecs(j).salary         := v_salarys(j);
    v_emprecs(j).commission_pct := v_commission_pcts(j);
    v_emprecs(j).manager_id     := v_manager_ids(j);
    v_emprecs(j).department_id  := v_department_ids(j);
  end loop;
  ...
end;

BULK SELECT with RECORD bind


Pre Version 9.2.0 you needed a scalar index-by table for each select list item and had to list all columns explicitly to be robust Needed explicit loop to assign RECORD values following the SELECT Approaches what is feasible to maintain Feels especially uncomfortable because of the artificial requirement to compromise the natural modeling approach by slicing the desired table of records vertically into N tables of scalars

declare
  /* INSERT RECORD, single row - Dynamic SQL not yet supported */
  v_emprec employees%rowtype := Get_One_Row;
begin
  insert into employees values v_emprec;
end;

declare
  /* BULK INSERT RECORD - Dynamic SQL not yet supported */
  v_emprecs Emp_Util.emprec_tab_t := Emp_Util.Get_Many_Rows;
begin
  forall j in v_emprecs.first..v_emprecs.last
    insert into employees values v_emprecs(j);
end;

declare
  /* BULK INSERT RECORD, showing SAVE EXCEPTIONS syntax */
  bulk_errors exception;
  pragma exception_init ( bulk_errors, -24381 );
  ...
begin
  forall j in v_emprecs.first..v_emprecs.last save exceptions
    insert into employees values v_emprecs(j);
exception
  when bulk_errors then
    for j in 1..sql%bulk_exceptions.count loop
      Show ( sql%bulk_exceptions(j).error_index, sql%bulk_exceptions(j).error_code );
    end loop;
end;

BULK INSERT with RECORD bind


What did this look like pre Version 9.2.0 ?

declare
  ...
  type employee_ids_t    is table of employees.employee_id%type    index by binary_integer;
  type first_names_t     is table of employees.first_name%type     index by binary_integer;
  type last_names_t      is table of employees.last_name%type      index by binary_integer;
  type emails_t          is table of employees.email%type          index by binary_integer;
  type phone_numbers_t   is table of employees.phone_number%type   index by binary_integer;
  type hire_dates_t      is table of employees.hire_date%type      index by binary_integer;
  type job_ids_t         is table of employees.job_id%type         index by binary_integer;
  type salarys_t         is table of employees.salary%type         index by binary_integer;
  type commission_pcts_t is table of employees.commission_pct%type index by binary_integer;
  type manager_ids_t     is table of employees.manager_id%type     index by binary_integer;
  type department_ids_t  is table of employees.department_id%type  index by binary_integer;

  v_employee_ids     employee_ids_t;
  v_first_names      first_names_t;
  v_last_names       last_names_t;
  v_emails           emails_t;
  v_phone_numbers    phone_numbers_t;
  v_hire_dates       hire_dates_t;
  v_job_ids          job_ids_t;
  v_salarys          salarys_t;
  v_commission_pcts  commission_pcts_t;
  v_manager_ids      manager_ids_t;
  v_department_ids   department_ids_t;

  v_emprecs Emp_Util.emprec_tab_t := Emp_Util.Get_Many_Rows;
begin
  ...
end;

declare
  ...
begin
  for j in 1..v_emprecs.Last loop
    v_employee_ids(j)    := v_emprecs(j).employee_id;
    v_first_names(j)     := v_emprecs(j).first_name;
    v_last_names(j)      := v_emprecs(j).last_name;
    v_emails(j)          := v_emprecs(j).email;
    v_phone_numbers(j)   := v_emprecs(j).phone_number;
    v_hire_dates(j)      := v_emprecs(j).hire_date;
    v_job_ids(j)         := v_emprecs(j).job_id;
    v_salarys(j)         := v_emprecs(j).salary;
    v_commission_pcts(j) := v_emprecs(j).commission_pct;
    v_manager_ids(j)     := v_emprecs(j).manager_id;
    v_department_ids(j)  := v_emprecs(j).department_id;
  end loop;

  forall j in v_emprecs.first..v_emprecs.last save exceptions
    insert into employees_2
      ( employee_id, first_name, last_name, email, phone_number,
        hire_date, job_id, salary, commission_pct, manager_id, department_id )
    values
      ( v_employee_ids(j), v_first_names(j), v_last_names(j), v_emails(j),
        v_phone_numbers(j), v_hire_dates(j), v_job_ids(j), v_salarys(j),
        v_commission_pcts(j), v_manager_ids(j), v_department_ids(j) );
exception
  when bulk_errors then
    ...
end;

BULK INSERT with RECORD bind


Pre Version 9.2.0 you needed a scalar index-by table for each target column and had to list all columns explicitly to be robust. Needed explicit loop to assign the scalar tables before the INSERT. Again, approaches what is feasible to maintain. Again, feels especially uncomfortable because of the artificial requirement to compromise the natural modeling approach by slicing the desired table of records vertically into N tables of scalars.

declare
  v_emprec employees%rowtype;
begin
  delete from employees
  where  employee_id = 100
  returning employee_id, first_name, last_name, email, phone_number,
            hire_date, job_id, salary, commission_pct, manager_id, department_id
  into   v_emprec;
  ...
end;

declare
  v_emprecs Emp_Util.emprec_tab_t;
begin
  update employees
  set    salary = salary * 1.1
  where  hire_date <= '25-JUN-97'
  returning employee_id, first_name, last_name, email, phone_number,
            hire_date, job_id, salary, commission_pct, manager_id, department_id
  bulk collect into v_emprecs;
  ...
end;

BULK UPDATE and DELETE with RECORD bind


What did this look like pre Version 9.2.0 ?

declare
  type employee_ids_t    is table of employees.employee_id%type    index by binary_integer;
  type first_names_t     is table of employees.first_name%type     index by binary_integer;
  type last_names_t      is table of employees.last_name%type      index by binary_integer;
  type emails_t          is table of employees.email%type          index by binary_integer;
  type phone_numbers_t   is table of employees.phone_number%type   index by binary_integer;
  type hire_dates_t      is table of employees.hire_date%type      index by binary_integer;
  type job_ids_t         is table of employees.job_id%type         index by binary_integer;
  type salarys_t         is table of employees.salary%type         index by binary_integer;
  type commission_pcts_t is table of employees.commission_pct%type index by binary_integer;
  type manager_ids_t     is table of employees.manager_id%type     index by binary_integer;
  type department_ids_t  is table of employees.department_id%type  index by binary_integer;

  v_employee_ids     employee_ids_t;
  v_first_names      first_names_t;
  v_last_names       last_names_t;
  v_emails           emails_t;
  v_phone_numbers    phone_numbers_t;
  v_hire_dates       hire_dates_t;
  v_job_ids          job_ids_t;
  v_salarys          salarys_t;
  v_commission_pcts  commission_pcts_t;
  v_manager_ids      manager_ids_t;
  v_department_ids   department_ids_t;

  v_emprecs Emp_Util.emprec_tab_t;
begin
  ...
end;

declare
  ...
begin
  update employees
  set    salary = salary * 1.1
  where  hire_date <= '25-JUN-97'
  returning employee_id, first_name, last_name, email, phone_number,
            hire_date, job_id, salary, commission_pct, manager_id, department_id
  bulk collect into
            v_employee_ids, v_first_names, v_last_names, v_emails,
            v_phone_numbers, v_hire_dates, v_job_ids, v_salarys,
            v_commission_pcts, v_manager_ids, v_department_ids;

  for j in 1..v_employee_ids.Last loop
    v_emprecs(j).employee_id    := v_employee_ids(j);
    v_emprecs(j).first_name     := v_first_names(j);
    v_emprecs(j).last_name      := v_last_names(j);
    v_emprecs(j).email          := v_emails(j);
    v_emprecs(j).phone_number   := v_phone_numbers(j);
    v_emprecs(j).hire_date      := v_hire_dates(j);
    v_emprecs(j).job_id         := v_job_ids(j);
    v_emprecs(j).salary         := v_salarys(j);
    v_emprecs(j).commission_pct := v_commission_pcts(j);
    v_emprecs(j).manager_id     := v_manager_ids(j);
    v_emprecs(j).department_id  := v_department_ids(j);
  end loop;
  ...
end;

BULK UPDATE and DELETE with RECORD bind


Even at Version 9.2.0, we don't yet have a RETURNING * syntax. But pre Version 9.2.0 you needed a scalar index-by table for each target column. Again, approaches what is feasible to maintain. Again, feels especially uncomfortable because of the artificial requirement to compromise the natural modeling approach by slicing the desired table of records vertically into N tables of scalars.

UPDATE ... SET ROW = with RECORD bind


This syntax is useful when a row from one table is manipulated programmatically, and then stored in, for example, an auditing table with the same shape, where an earlier version of the row already exists. First, the single row syntax.

declare
  v_emprec employees%rowtype := Emp_Util.Get_One_Row;
begin
  v_emprec.salary := v_emprec.salary * 1.2;
  update employees_2
  set    row = v_emprec
  where  employee_id = v_emprec.employee_id;
end;

UPDATE ... SET ROW = with RECORD bind


What did this look like pre Version 9.2.0 ?

declare
  v_emprec employees%rowtype := Emp_Util.Get_One_Row;
begin
  v_emprec.salary := v_emprec.salary * 1.2;
  update employees
  set    first_name     = v_emprec.first_name,
         last_name      = v_emprec.last_name,
         email          = v_emprec.email,
         phone_number   = v_emprec.phone_number,
         hire_date      = v_emprec.hire_date,
         job_id         = v_emprec.job_id,
         salary         = v_emprec.salary,
         commission_pct = v_emprec.commission_pct,
         manager_id     = v_emprec.manager_id,
         department_id  = v_emprec.department_id
  where  employee_id    = v_emprec.employee_id;
end;

UPDATE ... SET ROW = with RECORD bind


Next, the BULK syntax. Note: you can't reference fields of the BULK in-bind table of RECORDs in the WHERE clause.

declare
  v_emprecs Emp_Util.emprec_tab_t := Emp_Util.Get_Many_Rows;
  type employee_id_tab_t is table of employees.employee_id%type index by pls_integer;
  v_employee_ids employee_id_tab_t;
begin
  /* workaround for PLS-00436 */
  for j in v_emprecs.first..v_emprecs.last loop
    v_employee_ids(j) := v_emprecs(j).employee_id;
  end loop;

  forall j in v_emprecs.first..v_emprecs.last
    update employees
    set    row = v_emprecs(j)
    where  employee_id = v_employee_ids(j);
end;

UPDATE ... SET ROW = with RECORD bind


What did the BULK syntax look like pre Version 9.2.0 ?

declare
  v_emprecs Emp_Util.emprec_tab_t := Emp_Util.Get_Many_Rows;

  type employee_ids_t    is table of employees.employee_id%type    index by binary_integer;
  type first_names_t     is table of employees.first_name%type     index by binary_integer;
  type last_names_t      is table of employees.last_name%type      index by binary_integer;
  type emails_t          is table of employees.email%type          index by binary_integer;
  type phone_numbers_t   is table of employees.phone_number%type   index by binary_integer;
  type hire_dates_t      is table of employees.hire_date%type      index by binary_integer;
  type job_ids_t         is table of employees.job_id%type         index by binary_integer;
  type salarys_t         is table of employees.salary%type         index by binary_integer;
  type commission_pcts_t is table of employees.commission_pct%type index by binary_integer;
  type manager_ids_t     is table of employees.manager_id%type     index by binary_integer;
  type department_ids_t  is table of employees.department_id%type  index by binary_integer;

  v_employee_ids     employee_ids_t;
  v_first_names      first_names_t;
  v_last_names       last_names_t;
  v_emails           emails_t;
  v_phone_numbers    phone_numbers_t;
  v_hire_dates       hire_dates_t;
  v_job_ids          job_ids_t;
  v_salarys          salarys_t;
  v_commission_pcts  commission_pcts_t;
  v_manager_ids      manager_ids_t;
  v_department_ids   department_ids_t;
begin
  ...
end;

declare
  ...
begin
  for j in 1..v_emprecs.Last loop
    v_employee_ids(j)    := v_emprecs(j).employee_id;
    v_first_names(j)     := v_emprecs(j).first_name;
    v_last_names(j)      := v_emprecs(j).last_name;
    v_emails(j)          := v_emprecs(j).email;
    v_phone_numbers(j)   := v_emprecs(j).phone_number;
    v_hire_dates(j)      := v_emprecs(j).hire_date;
    v_job_ids(j)         := v_emprecs(j).job_id;
    v_salarys(j)         := v_emprecs(j).salary * 1.2;
    v_commission_pcts(j) := v_emprecs(j).commission_pct;
    v_manager_ids(j)     := v_emprecs(j).manager_id;
    v_department_ids(j)  := v_emprecs(j).department_id;
  end loop;

  forall j in v_emprecs.first..v_emprecs.last
    update employees
    set    first_name     = v_first_names(j),
           last_name      = v_last_names(j),
           email          = v_emails(j),
           phone_number   = v_phone_numbers(j),
           hire_date      = v_hire_dates(j),
           job_id         = v_job_ids(j),
           salary         = v_salarys(j),
           commission_pct = v_commission_pcts(j),
           manager_id     = v_manager_ids(j),
           department_id  = v_department_ids(j)
    where  employee_id    = v_employee_ids(j);
end;

UPDATE ... SET ROW = with RECORD bind


Pre Version 9.2.0 you needed a scalar index-by table for each target column Again, approaches what is feasible to maintain Again, feels especially uncomfortable because of the artificial requirement to compromise the natural modeling approach by slicing the desired table of records vertically into N tables of scalars

RECORD binds: summary


You can now take advantage of the power of the PL/SQL RECORD datatype in all SELECT constructs and in all DML constructs in Static SQL. The volume of code you had to write pre Version 9.2.0 was enormously greater - verging on unmaintainable - and forced you to compromise the natural modeling approach. Pre Version 9.2.0 you had to copy from one representation to another. Now that you no longer need to do this, your program runs faster.

Summary of 9.2.0 Enhancements


Index-by-varchar2 tables, aka associative arrays
RECORD binds in DML and in BULK SELECTs

Utl_File enhancements GUI debugging via JDeveloper Version 9.0.3

Utl_File enhancements
You can now use the DIRECTORY schema object (as for BFILEs)
Line length limit for Utl_File.Get_Line and Utl_File.Put_Line has been increased to 32K
New APIs to manipulate files at the operating system level
New APIs for handling RAW data
Performance is improved via transparent internal reimplementation

Utl_File using DIRECTORY


Pre Version 9.2.0, the way to denote the director(ies) for files was via the UTL_FILE_DIR initialization parameter. Disadvantages: the instance had to be bounced to make changes to the list of directories, and all users could access files on all directories. Version 9.2.0 allows the same mechanism to be used with Utl_File as is used for BFILEs. The UTL_FILE_DIR initialization parameter is slated for deprecation.
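
A sketch of the DIRECTORY-based usage; the directory object, path, and file name below are hypothetical, and the setup statements need the appropriate privileges:

-- one-time setup (privileged user)
-- create directory my_dir as '/u01/app/data';
-- grant read, write on directory my_dir to scott;

declare
  f  Utl_File.File_Type;
  ln varchar2(32767);   -- the new 32K line limit
begin
  f := Utl_File.Fopen ( 'MY_DIR', 'words.txt', 'r', 32767 );
  loop
    Utl_File.Get_Line ( f, ln );
    null;  -- process ln ...
  end loop;
exception
  when no_data_found then
    Utl_File.Fclose ( f );   -- end of file reached
end;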

Utl_File o/s file management


procedure Fgetattr
procedure Fcopy
procedure Fremove
procedure Frename
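
A sketch of how these might be used together (the directory object and file names are hypothetical):

declare
  v_exists  boolean;
  v_length  number;
  v_blksize binary_integer;
begin
  Utl_File.Fcopy    ( 'MY_DIR', 'words.txt', 'MY_DIR', 'words.bak' );
  Utl_File.Fgetattr ( 'MY_DIR', 'words.bak', v_exists, v_length, v_blksize );
  if v_exists then
    Utl_File.Frename ( 'MY_DIR', 'words.bak', 'MY_DIR', 'words.old', overwrite => true );
    Utl_File.Fremove ( 'MY_DIR', 'words.old' );
  end if;
end;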

Utl_File handling RAW data


procedure Fseek    -- go to offset
function  Fgetpos  -- report offset
procedure Get_Raw
procedure Put_Raw

Utl_File - summary
Functionality now provided natively for a number of common file i/o tasks: dramatically reduces the amount of code you need to write and delivers better performance
Performance improvement further enhanced by some transparent internal changes
Robustness improved by removing some uncomfortable limits
Security improved by adopting the BFILE model

Summary of 9.2.0 Enhancements


Index-by-varchar2 tables, aka associative arrays
RECORD binds in DML and in BULK SELECTs

Utl_File enhancements

GUI debugging via JDeveloper Version 9.0.3

GUI PL/SQL debugging via JDeveloper Version 9.0.3


JDeveloper Version 9.0.3 will very soon be available for download from OTN Its PL/SQL IDE subcomponent is extended to provide support for graphical debugging of PL/SQL stored subprograms This includes support for all the new features discussed in this presentation

Connection model
JDeveloper (when executing a PL/SQL subprogram) connects as a classical Oracle client via Oracle Net. Or, any Oracle client connects. The client (implicitly or explicitly) requests debugging. The shadow process connects back to JDeveloper (or any equivalent 3rd party tool) via the Java Debug Wire Protocol, aka JDWP.

Connection model
Dbms_Debug_Jdwp.Connect_Tcp ( :the_node, :the_port, ... )

Usually called transparently by the debugging tool or by some client or middle-tier infrastructure

Connection model
Java Debug Wire Protocol, aka JDWP: industry standard invented to support Java debugging; completely suitable for PL/SQL too; allows 3rd party tools vendors to implement PL/SQL debugging. At the same protocol level as, say, HTTP or Oracle Net. Typically implemented on top of TCP/IP.

Connection model
JDWP also allows JDeveloper and other 3rd party tools to debug database stored Java When debugging an application implemented in a mix of Java and PL/SQL execution point moves seamlessly from one environment to the other integrity of the call stack is maintained across the language boundary

Connection model
JDeveloper incorporates a JDWP listener Can be started on any port Can spawn any number of debugging sessions running concurrently - cf tnslsnr The Oracle shadow processes which are the debugging clients could be running on different machines, communicating with each other via say Oracle AQ

Connection model
[Diagram: an Oracle client connects via Oracle Net (tns listener) to its shadow process on the server; the shadow process connects back over JDWP to the debugging JDWP listener inside JDeveloper.]

Pre-conditions for debugging


User must have the debug connect session and debug any procedure system privileges
must have execute privilege on the subprograms of interest
source code must not be wrapped
must have been compiled with debug information, eg
alter [ package | ... ] P compile debug;

GUI-initiated debugging mode


A debugging exercise always involves opening the source code read from the database in the source window This is how the current execution point is displayed and how you set breakpoints

[Screenshot: source window showing the current execution point and a breakpoint]

GUI-initiated debugging mode


In many scenarios you're happy to kick off debugging directly from JDeveloper. You choose your top-level subprogram and set its actual parameter values via the same GUI used just to execute subprograms for testing. The magic to control the JDWP listener is implicit. In the default configuration JDeveloper chooses the first available port automatically. To debug code which is invoked when a trigger fires, write a small procedure to invoke the SQL.

Remote debugging mode


What if the server-side PL/SQL which you want to debug is invoked from a client? Pro*C or Java application in a classical two-tier model; mod_plsql component of Oracle9iAS as the middle tier in a three-tier model. The parameters that define the debugging case of interest are set by application logic. Calling your subprogram with these values by hand is too tortuous.

Remote debugging mode


You could change your application code to call the Dbms_Debug_Jdwp APIs But this invasive approach is uncomfortable In the two-tier case, use the environment variable ora_debug_jdwp, eg
set ora_debug_jdwp=host=lap99.acme.com;port=2125

Read by the OCI layer, which then calls Dbms_Debug_Jdwp.Connect_Tcp immediately on connection

Remote debugging mode


JDeveloper (or an equivalent third party tool) must be running on the indicated node Its JDWP listener must have been started manually to listen on the indicated port Do this by choosing remote debugging rather than the default GUI-initiated debugging mode via a UI that allows selection of the port

Remote debugging mode


What if the client is mod_plsql ? Want to turn on debugging for the server-side PL/SQL that supports a particular browser (pseudo) session And not to turn it on for other concurrent browser sessions The debugging user sets a cookie via the browser UI specifying JDWP host and port Causes mod_plsql to call Connect_Tcp before its normal calls and Disconnect after these

Remote debugging mode


If you use a jdbc:thin Java client, or some middle-tier infrastructure other than Oracle9iAS's mod_plsql, then you'll need to make the calls to the Dbms_Debug_Jdwp API yourself in your production code.

Debugger features
GUI-initiated and remote debugging modes Can support one or many concurrent debugging clients and thus debug interacting processes Current execution point displayed as highlighted line in the source extracted dynamically from the database where the code is executing Breakpoints can be set and unset before and during the debugging session. Displayed as highlighted line in source window Start debugging session with Step Into, Step Over or Run to First Breakpoint

Debugger features
When paused, can continue with Resume (aka Run to Next Breakpoint), Step Into or Step Over Or Abort the current debugging session Can attach condition to a breakpoint to determine if execution stops there or not When paused, can view the values of variables visible in the current subprogram, and the values for all variables that are currently alive Intuitive display for collection objects and objects of user-defined abstract datatypes

Debugger features
Call stack display Can click anywhere in the stack to view the values of variables that belong to the selected subprogram Intuitive display for recursive calls When paused, can modify the values of variables normal caveats regarding what you can then deduce about your program's subsequent behavior!

Debugger scenario
An index-by-pls_integer table of RECORDs of employees%rowtype is populated by BULK SELECT using Native Dynamic SQL. The index-by table is inserted into a second database table with the same shape with BULK INSERT using Static SQL with the save exceptions construct. The target table has a trigger which raises an exception on certain conditions in the data. An exception handler traverses the index-by table of exception codes.
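
The save exceptions handler itself was shown earlier; as a sketch of the other half of the scenario, here is a row trigger on the hypothetical target table employees_2 that rejects certain data (the condition is invented for illustration):

create or replace trigger employees_2_check
before insert on employees_2
for each row
begin
  if :new.salary > 20000 then
    Raise_Application_Error ( -20001, 'salary out of range' );
  end if;
end;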

Debugger scenario
The contents of the target table are selected using the same subprogram as at the start But now the Native Dynamic SQL uses a different table name This confirms the successful insert of those rows for which an exception was not raised

Debugger scenario
The Data Window shows the state of the variables of the function that does the BULK SELECT when paused as in the earlier screenshot The second element of the emprecs index-by table is expanded to show the values of each field in the RECORD

Debugger scenario
Here's the Call Stack window when the execution point is in the trigger

Debugger scenario
Here's the Code Window when the execution point is in the trigger

Debugger scenario
Here's the Data Window when the execution point is in the trigger

PL/SQL Debugging at 9.0.1 and earlier


JDeveloper Version 9.0.3 also implements PL/SQL debugging via the earlier Dbms_Debug API to allow it to be used against database versions earlier than Version 9.2.0

PL/SQL debugging: summary


Value of a GUI debugger is self-evident! Consider just index-by tables of RECORDs No PL/SQL syntax to display all fields in each row of such a structure Effort required to program a loop to do this via Dbms_Output is time-consuming and potentially error-prone JDeveloper provides ready-made intuitive mechanisms for displaying the values of arbitrarily complex structures

Oracle9i in Action
170 Systems, Inc. has been an Oracle Partner for eleven years and participated in the Beta Program for the Oracle9i Database, with particular interest in PL/SQL Native Compilation. They have now certified their 170 MarkView Document Management and Imaging System against Oracle9i Version 9.2.0.

170 MarkView Document Management and Imaging System


Provides Content Management, Document Management, Imaging and Workflow solutions Tightly integrated with the Oracle9i Database, Oracle9i Application Server and the Oracle E-Business Suite Enables businesses to capture and manage all of their information online in a single, unified system

170 MarkView
Large-scale multi-user, multi-access system Customers include

British Telecommunications E*TRADE Group the Apollo Group the University of Pennsylvania

Very large numbers of documents, images, concurrent users, and high transaction rates Performance and scalability especially important

170 Systems, Inc


Planning to take advantage of many of the new Version 9.2.0 PL/SQL features, eg Associative Arrays to improve performance in an application that performs complex stack manipulations. Values are taken from the stack using random access based on an identifying character name. Previously, this was simulated using hashing routines written in PL/SQL. Therefore a performance improvement is expected.

170 Systems, Inc


Tested the JDeveloper PL/SQL debugging environment extensively They like it Plan to adopt it as their PL/SQL debugging environment of choice for their developers

Summary of 9.2.0 Enhancements


Index-by-varchar2 tables, aka associative arrays
RECORD binds in DML and in BULK SELECTs Utl_File enhancements

GUI debugging via JDeveloper Version 9.0.3

Summary of 9.2.0 Enhancements


All the enhancements discussed in this presentation dramatically reduce the effort of program design, implementation and testing to build applications with matching requirements. And Index-by-varchar2 tables, RECORD binds and the Utl_File enhancements also deliver more performant programs. You should upgrade!

QUESTIONS & ANSWERS
