Contains:
- Generating Statistics
- Reviewing Dedicated Temporary Tables
- Selecting Tablespaces
- Maintaining Indexes
- Using Cursor Sharing
- Selecting Batch Servers
- Capturing Traces
- Sample Script to Gather Statistics on Tables
Table of Contents

Chapter 1 - Introduction
    Structure of this Red Paper
    Related Materials
Chapter 2 - Generating Statistics
    Manually Gathering Table Statistics
        Sample DBMS_STATS Command
    Generating Statistics for Data Dictionary Tables
        Generating System Statistics
    Statistics at Runtime for Temporary Tables
        Example
        Turning Off %UpdateStats
        Modify %UpdateStats to Use Analyze Command
        Turning Off Dynamic Sampling
    Histograms
        What Are Histograms?
        Candidate Columns for Histograms within PeopleSoft Applications
        Creating Histograms
        Viewing Histograms
Chapter 3 - Reviewing Dedicated Temporary Tables
    What Are Dedicated Temp Tables?
        Performance Tips for Dedicated Temp Tables
    Application Engine Performance with Dedicated Temporary Tables
        How Do Temp Tables Work in AE?
        Test Case Explaining Temp Table Behavior
        Drawbacks of Using Base Temp Table
        Recommendations
    Creating PeopleSoft Temporary Tables as Oracle Global Temp Tables
        What Are Global Temporary Tables?
        Can GTTs Be Used in Place of Dedicated Temp Tables?
Chapter 4 - Selecting Tablespaces
    Locally Managed Tablespaces
        Advantages of Locally Managed Tablespaces
        Space Management
        Locally Managed - AUTO ALLOCATE
        Locally Managed - UNIFORM EXTENT
    Temporary Tablespaces
        Tempfile Based
    UNDO Management
        Automatic Undo Management
9/4/2008
    Table/Index Partitioning
        What Is Partitioning?
Chapter 5 - Maintaining Indexes
    Index Tips
    Rebuilding of Indexes
    Function-Based Indexes
Chapter 6 - Using Cursor Sharing
    Use of Bind Variables
        AE - Reuse Flag
    SQR/COBOL - CURSOR_SHARING
        Example
        Pros and Cons of CURSOR_SHARING
Chapter 7 - Selecting Batch Servers
    Scenario 1: Process Scheduler and Database Server on Different Boxes
    Scenario 2: Process Scheduler and Database Server on One Box
    What Is the Recommended Scenario?
Chapter 8 - Capturing Traces
    Online Performance Issues
        Settings for Single User Online Session Trace
        Application Server Settings for Tracing an Online Process
    AE Performance Issues
        Configuration Settings for Tracing an AE Process on the AE
        Configuration Settings for Tracing an AE Process on the Database
    COBOL Performance Issues
    SQR Reports Performance Issues
    ORACLE Performance Issues
        Generating Explain Plan for SQL Using sqltexplain.sql
    Automatic Workload Repository
        Generating an HTML or Text AWR Report
        AWR Report Analysis
Appendix A - Validation and Feedback
    Customer Validation
    Field Validation
Appendix B - Sample Script to Gather Statistics on Tables
Appendix C - References
Appendix D - Revision History
    Authors
    Revision History
Chapter 1 - Introduction
This Red Paper is a practical guide for technical users, installers, system administrators, and programmers who implement, maintain, or develop applications for a PeopleSoft system. In it, we discuss guidelines on how to diagnose a PeopleSoft Online Transaction environment, including PeopleSoft Internet Architecture and Portal configuration. Configuration of batch processes is not covered in this document. Much of the information contained in this document originated within PeopleSoft Development and is therefore based on "real-life" problems encountered in the field. Although not every conceivable problem is covered in this document, the issues that appear are those that have proven to be the most common or troublesome.
RELATED MATERIALS
We assume that our readers are experienced IT professionals with a good understanding of PeopleSoft's Internet Architecture. To take full advantage of the information covered in this document, we recommend that you have a basic understanding of system administration, basic Internet architecture, relational database concepts/SQL, and how to use PeopleSoft applications.

This document is not intended to replace the documentation delivered with the CRM PeopleBooks. We recommend that before you read this document, you read the PIA-related information in the PeopleTools PeopleBooks to ensure that you have a well-rounded understanding of our PIA technology.

Note: Much of the information in this document eventually gets incorporated into subsequent versions of the PeopleBooks.

Many of the fundamental concepts related to PIA are discussed in the following PeopleSoft PeopleBooks:
- PeopleSoft Internet Architecture Administration (PeopleTools | Administration Tools | PeopleSoft Internet Architecture Administration)
- Application Designer (Development Tools | Application Designer)
- Application Messaging (Integration Tools | Application Messaging)
- PeopleCode (Development Tools | PeopleCode Reference)
Chapter 2 - Generating Statistics

When using Oracle's Cost Based Optimizer (CBO), query performance depends greatly on appropriate table and index statistics, so maintenance of these statistics is critical to optimal database and query performance. At database creation, the Oracle 10g Scheduler contains a default nightly job that attempts to maintain these vital statistics. You can determine whether this job exists by querying the DBA_SCHEDULER_JOBS view. A sample SQL statement is listed here:
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
Because this job does not and cannot take into account the requirements of PeopleSoft, we recommend that you disable it after its initial execution. To disable the GATHER_STATS_JOB, run the following SQL*Plus command:
BEGIN
  DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;
/
The reason for this recommendation is that PeopleSoft Application Engine (AE) programs use many standard Oracle tables and PeopleSoft temporary tables. The default GATHER_STATS_JOB picks up the temporary tables and regenerates statistics on them, which is not desirable because many of these tables contain no data at that time. We recommend that you update the statistics of only nontemporary PeopleSoft tables and indexes, and that you regenerate statistics only for tables and indexes that the database considers stale or missing. A sample script that provides this functionality is listed in the Appendix. By generating statistics on only the nontemporary tables and indexes, the temporary object statistics are not overwritten with improper values. Regenerating statistics on only stale or missing objects also reduces the overall time needed to regenerate statistics on a nightly, weekly, or monthly basis.

We recommend that you do not gather statistics on objects during peak operational hours; the resulting cursor invalidation can cause severe performance degradation. It is advisable to gather statistics periodically for objects whose statistics become stale over time because of changing data volumes or changes in column values. You should gather new statistics after a schema object's data or structure is modified in ways that make the previous statistics inaccurate. For example, after loading a significant number of rows into a table, collect new statistics on the number of rows. You should also gather new statistics on the average row length after you update data in a table.

Use the DBMS_STATS package to update statistics. It is not possible to recommend a single command-line script for how or how often to update statistics; these details depend on factors such as data distribution, business rules, and the maintenance window of each organization. You must work with your DBA to come up with an appropriate strategy.
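As a rough sketch of the stale-or-missing approach (the schema owner SYSADM is an assumption; the full version that also excludes temporary tables is the Appendix script), DBMS_STATS can be restricted with the GATHER STALE and GATHER EMPTY options:

```sql
-- Hedged sketch only: refresh statistics for stale objects (roughly 10%+ of
-- rows changed since the last gather) and for objects with no statistics.
-- Staleness tracking requires table monitoring / STATISTICS_LEVEL=TYPICAL.
BEGIN
   DBMS_STATS.GATHER_SCHEMA_STATS(
      ownname => 'SYSADM',        -- assumed PeopleSoft schema owner
      options => 'GATHER STALE',  -- only objects the database flags as stale
      cascade => TRUE);           -- include the associated indexes
   DBMS_STATS.GATHER_SCHEMA_STATS(
      ownname => 'SYSADM',
      options => 'GATHER EMPTY',  -- only objects with missing statistics
      cascade => TRUE);
END;
/
```

Unlike the Appendix script, this simple form does not skip PeopleSoft temporary tables; you would still need to filter those out separately.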
The DBMS_STATS package provides the ability to generate statistics in parallel by specifying the degree of parallelism, which significantly reduces the time needed to refresh object statistics.

Note: The use of the procedures within DBMS_STATS is recommended; Oracle will no longer support ANALYZE for statistics gathering in future releases.
Note: Specifying NULL for ESTIMATE_PERCENT provides the same functionality as ANALYZE's COMPUTE; using a value of 100 is not the same as COMPUTE. The default value for ESTIMATE_PERCENT is DBMS_STATS.AUTO_SAMPLE_SIZE, which is the recommended value only if the data composition is unknown. AUTO_SAMPLE_SIZE can sometimes perform slowly for large tables. Values between 5 and 20 percent tend to provide the best balance between speed and calculation accuracy; run tests to find the appropriate value.

Data distribution is also gathered when using DBMS_STATS. The most basic information about the data distribution is the maximum and minimum value of each column within a table. However, this level of statistics may not be sufficient for the optimizer's needs if the data within a column is skewed. With the METHOD_OPT parameter set to FOR ALL COLUMNS SIZE AUTO (the default), Oracle automatically determines which columns require histograms and the number of buckets (size) of each histogram. The size_clause is defined as:

size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}
- integer: Number of histogram buckets, in the range 1 to 254.
- REPEAT: Collects histograms only on columns that already have histograms.
- AUTO: Determines the columns on which to collect histograms based on data distribution and the workload of the columns.
- SKEWONLY: Determines the columns on which to collect histograms based on the data distribution of the columns.

Note: We strongly recommend that you read Note 237293.1 on Oracle Metalink. It includes a set of notes and examples that help DBAs move from ANALYZE to DBMS_STATS for updating statistics.

With the CASCADE parameter set to TRUE, the associated indexes are also analyzed. The default setting for CASCADE is FALSE.

Note: Specifying DEGREE only helps gather table statistics (partitioned or nonpartitioned) in parallel. Index statistics cannot make use of this flag and are not gathered in parallel.
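Putting these parameters together, a typical call might look like the following sketch (the owner and table names are only illustrative; choose ESTIMATE_PERCENT and DEGREE from your own testing):

```sql
-- Illustrative only: 10% sample, automatic histogram selection,
-- associated indexes analyzed (CASCADE), table gathered in parallel.
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'SYSADM',                     -- assumed schema owner
      tabname          => 'PS_JRNL_LN',                 -- hypothetical table
      estimate_percent => 10,                           -- 5-20% is a good range
      method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- let Oracle pick histograms
      cascade          => TRUE,                         -- also gather index stats
      degree           => 4);                           -- parallel degree (table only)
END;
/
```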
Note: A user must have DBA privileges or the GATHER_SYSTEM_STATISTICS role to update dictionary or system statistics.
Example
Here is the command in the SQL step of an AE program:

%UpdateStats(%Table(INTFC_BI_HTMP))

Starting with PeopleTools 8.48, this meta-SQL issues the following Oracle database command at runtime:
DBMS_STATS.GATHER_TABLE_STATS (ownname=> [DBNAME], tabname=>[TBNAME], estimate_percent=>1, method_opt=> 'FOR ALL COLUMNS SIZE 1',cascade=>TRUE);
Note: To reduce the increased overhead of DBMS_STATS at runtime (when compared to ANALYZE with ESTIMATE), the ESTIMATE_PERCENT parameter was set to 1.

Note: PeopleSoft stores the default syntax for the gather-stats command in the table PSDDLMODEL. Use the supplied script (DDLORA.DMS) to change the default setting or to add a required SAMPLE ROWS/PERCENT for the ESTIMATE clause. For example, let's assume that we want to change the ESTIMATE_PERCENT for the LOW option to 5 percent and for the HIGH option to 80 percent. With this as our goal, complete the following steps:

1. Edit the delivered DDLORA.DMS script. The first occurrence of DBMS_STATS is used for the LOW option of %UpdateStats; the second occurrence is used for the HIGH option.
4,2,0,0,$long
DBMS_STATS.GATHER_TABLE_STATS (ownname=> [DBNAME], tabname=>[TBNAME],
estimate_percent=>5, method_opt=> 'FOR ALL INDEXED COLUMNS SIZE 1',cascade=>TRUE);
//
5,2,0,0,$long
DBMS_STATS.GATHER_TABLE_STATS (ownname=> [DBNAME], tabname=>[TBNAME],
estimate_percent=>80, method_opt=> 'FOR ALL INDEXED COLUMNS SIZE 1',cascade=>TRUE);
//
2. Run the modified DDLORA.DMS through Data Mover. Ensure that the temporary table statistics are handled as shown in the code listed in step 1. If you find any temporary table whose statistics were not updated at run time, plan to update its statistics manually.
Once you confirm that using DBMS_STATS with %UpdateStats causes significant overhead compared to ANALYZE with ESTIMATE, you can revert %UpdateStats to use the ANALYZE command by completing the following steps:
Modifying SQL#4 and SQL#5 DDL Model Defaults for Oracle Platform
1. Navigate to DDL Model Defaults by selecting PeopleTools, Utilities, Administration, DDL Model Defaults.
2. Click the Search button and then select the Oracle platform.
DDL Model Defaults page

4. Copy the Model SQL from the page to a backup text file so that you can revert to using DBMS_STATS in the future if you choose.
5. Replace the Model SQL for the Oracle platform with the following:
psstats.analyze_table(tab_name=>[TBNAME],stats_mode=>'LOW');
DDL Model Defaults page

7. Copy the Model SQL from the page to a backup text file so that you can revert to using DBMS_STATS in the future if you choose.
8. Replace the Model SQL for the Oracle platform with the following:
psstats.analyze_table(tab_name=>[TBNAME],stats_mode=>'HIGH');
DDL Model Defaults page

This change affects all PeopleSoft programs that use %UpdateStats, as well as Data Mover import scripts run with the Set Statistics option ON (ON is the default value).
Note: Because this change affects all programs that use %UpdateStats, we strongly recommend that you perform regression testing before implementing it in production. Do not use ANALYZE to update statistics during the regular database maintenance window; use DBMS_STATS instead.
HISTOGRAMS
What Are Histograms?
Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans for nonuniform data distributions. A histogram partitions the values of a column into bands so that all column values in a band fall within the same range. The CBO uses the data within histograms to get accurate estimates of the distribution of column data. Oracle uses height-balanced or frequency-based histograms, depending on the number of distinct values and the number of bands. See the Oracle documentation for more details.
Note: Bind peeking (BP) was designed to address the last bullet point in the previous list. However, because of the additional issues that BP causes, we do not recommend its use. Columns such as PROCESS_INSTANCE and ORD_STATUS are likely candidates that benefit from histograms.
Creating Histograms
Creating histograms on specific columns is no longer needed as long as the DBMS_STATS procedures are used with a METHOD_OPT parameter containing SIZE AUTO. Based on column usage within runtime SQL WHERE clauses and the amount of data skew within the column, the SIZE AUTO value instructs the DBMS_STATS procedure to generate histograms only for those columns that would benefit from them.

Note: If any other value for METHOD_OPT is provided (for example, SIZE 1 or SIZE REPEAT), the automatic creation of histograms is deactivated.
Viewing Histograms
You can display information about whether a table contains histograms using the following dictionary views:
- USER_HISTOGRAMS
- ALL_HISTOGRAMS
- DBA_HISTOGRAMS

You can display the number of bands within a column's histogram using the following dictionary views:
- USER_TAB_COLUMNS
- ALL_TAB_COLUMNS
- DBA_TAB_COLUMNS
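For example, queries such as the following (the owner and table names are hypothetical) show which columns of a table have histograms and the endpoints of a particular histogram:

```sql
-- Which columns of the table have histograms, and how many buckets each uses
SELECT column_name, histogram, num_buckets
  FROM dba_tab_columns
 WHERE owner = 'SYSADM'              -- assumed schema owner
   AND table_name = 'PS_CUSTOMER';   -- hypothetical table

-- The individual histogram endpoints for one column
SELECT endpoint_number, endpoint_value
  FROM dba_histograms
 WHERE owner = 'SYSADM'
   AND table_name = 'PS_CUSTOMER'
   AND column_name = 'ORD_STATUS'    -- hypothetical skewed column
 ORDER BY endpoint_number;
```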
AE Properties
If you select Continue for the runtime option, the system uses the base temp table if there are no temp table instances available at the time of the run.
In the AE program properties, under Advanced options, if you select the Batch Only option, the program will not be available for online transactions. You don't need to change this setting unless you are advised to do so.
= 1+3+3 = 7
Scenario 2: Batch Only Option Is Selected

Total number of temp table instances created for each temp table associated with the AE program
= Base Temp Table + Number of Temp Tables (AE Program)
= 1 + 3
= 4
When the program runs for the first time, the system uses temp table instance 1. Subsequent parallel streams use the rest of the instances in sequence. In this example, the first three concurrent streams use instances 1, 2, and 3. When the user tries to run the fourth, fifth, and sixth streams, the program does not find an available temp table instance and uses the base temp table.

[Figure: AE processes mapped to temp table instances TAB1TAO1 through TAB4TAO1, and to base tables TAB1TAO through TAB4TAO, by number of concurrent runs]
The number of concurrent runs in this example is six; the number of available temp table instances is only three. Therefore, the first three processes use the temp table instance. The final three use the base temp tables.
Recommendations
We recommend that you:
1. Always set up an adequate number of temp table instances to achieve good performance.
2. Set up a temp table instance even if you plan to run only one process at a time.
3. Set up the required value in the Max Concurrent field for the Process Scheduler server. The Max API Aware value should be greater than or equal to the sum of the Max Concurrent values set for all the process types.
Server Definition page

4. Set up the required number of PSAESRV processes on the Process Scheduler server.
[PSAESRV]
;=========================================================================
; Settings for Application Engine Tuxedo Server
;=========================================================================
;-------------------------------------------------------------------------
; The max instance should reflect the max concurrency set for process type
; defined with a generic process type of Application Engine as defined
; in the Server Definition page in Process Scheduler Manager
Max Instances=12
We recommend that you use dedicated temp tables even when the process runs in a single stream.
Warning: Keep in mind that because GTTs lose their data when a session ends, there is no way to restart a failed program from where it left off.
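For reference, a GTT is declared roughly as follows (the table and columns here are purely illustrative; real PeopleSoft temp tables are built by Application Designer with the record's actual columns). ON COMMIT PRESERVE ROWS keeps rows for the life of the session rather than just the transaction:

```sql
-- Hypothetical sketch of a global temporary table.
-- Each session sees only its own rows; the rows vanish when the
-- session ends, which is why a failed program cannot be restarted.
CREATE GLOBAL TEMPORARY TABLE PS_EXAMPLE_TAO (
   process_instance  NUMBER        NOT NULL,
   emplid            VARCHAR2(11)  NOT NULL
) ON COMMIT PRESERVE ROWS;
```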
Chapter 4 - Selecting Tablespaces

Starting from PeopleTools 8.44, the create scripts that Oracle provides create only locally managed tablespaces (LMTs).
Space Management
Dictionary contention is reduced because space is managed at the datafile level; information about extent allocation and deallocation is no longer stored in the dictionary tables.
- Free extents are recorded in a bitmap, so part of the tablespace is set aside for the bitmap.
- Each bit corresponds to a block or group of blocks.
- The bit value indicates whether the block is free or used.
- Common views are DBA_EXTENTS and DBA_FREE_SPACE.
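For instance, free space per tablespace can be checked with a query along these lines:

```sql
-- Free space per tablespace, in megabytes
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
  FROM dba_free_space
 GROUP BY tablespace_name
 ORDER BY tablespace_name;
```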
CREATE TABLESPACE TS_PERM_LOC_AUTO
  DATAFILE SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Uniform extents provide the best predictability and consistency. A consistent extent size eliminates holes, or tablespace waste, and makes capacity planning easier for the DBA; however, to ensure an optimum extent size, you must do that planning. Creating different categories of tablespace, such as small, medium, and large, with different uniform extent sizes, and placing each table in the tablespace appropriate for its expected size may improve performance.
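As a sketch, the small/medium/large scheme might look like this (tablespace names, datafile sizes, and extent sizes are all illustrative; these statements assume Oracle-managed files via DB_CREATE_FILE_DEST — otherwise supply an explicit datafile path):

```sql
-- Hypothetical tablespace categories with different uniform extent sizes
CREATE TABLESPACE TS_SMALL  DATAFILE SIZE 100M
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE TS_MEDIUM DATAFILE SIZE 500M
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE TS_LARGE  DATAFILE SIZE 2000M
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;
```

Tables would then be assigned to the category matching their expected size, so every extent in a given tablespace is interchangeable and no space is wasted in unusable holes.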
TEMPORARY TABLESPACES
Every database user should be assigned a default temporary tablespace to handle data sorts. You cannot specify nonstandard block sizes for a temporary tablespace. In Oracle 10g, you cannot assign a regular tablespace as the temporary tablespace; Oracle 10g flags as an error any assigned tablespace that is not a true temporary tablespace.
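Assigning the temporary tablespace is a one-line ALTER USER (the user and tablespace names here are illustrative):

```sql
-- Assign a true temporary tablespace as the user's default sort area
ALTER USER SYSADM TEMPORARY TABLESPACE PSTEMP;
```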
Tempfile Based
Oracle introduced a tablespace type, the temporary tablespace, that uses a tempfile instead of a datafile. A tempfile-based tablespace is the preferred choice for any temporary tablespace because it provides better extent and space management than a datafile-based tablespace. In a tempfile-based tablespace, only local management with UNIFORM EXTENT is allowed. The following statement creates a temporary tablespace with a uniform extent size of 500K:

CREATE TEMPORARY TABLESPACE TS_TEMP_LOC_UNI
  TEMPFILE '/temp/ora/ts_temp_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;
Advantages
Here are some of the advantages of temporary tablespaces that use a tempfile:
- Space management (extent allocation and deallocation) is locally managed.
- The sort segment created for each instance is reused. All processes performing sorts reuse existing sort extents of the sort segment, rather than allocating a segment (and potentially many extents) for each sort.
UNDO MANAGEMENT
TABLE/INDEX PARTITIONING
What Is Partitioning?
Partitioning is a data volume management technique that may have performance benefits; however, partitioning is most effective on multiprocessor machines when implemented with increased db_writers and a higher degree of parallelism. Partitioning addresses the key problem of supporting very large tables and indexes by letting you decompose them into smaller, more manageable pieces called partitions. Once you define the partitions, SQL statements can access and manipulate individual partitions rather than entire tables or indexes.
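For illustration, a range-partitioning scheme on a hypothetical ledger-style table might look like this (the table, columns, and partition boundaries are all assumptions, not a delivered PeopleSoft definition):

```sql
-- Hypothetical example: decompose a large table by fiscal year so that
-- queries filtered on FISCAL_YEAR touch only the relevant partition.
CREATE TABLE PS_LEDGER_EXAMPLE (
   business_unit  VARCHAR2(5)  NOT NULL,
   fiscal_year    NUMBER(4)    NOT NULL,
   posted_total   NUMBER
)
PARTITION BY RANGE (fiscal_year) (
   PARTITION ledger_2007 VALUES LESS THAN (2008),
   PARTITION ledger_2008 VALUES LESS THAN (2009),
   PARTITION ledger_max  VALUES LESS THAN (MAXVALUE)
);
```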
Chapter 5 - Maintaining Indexes

PeopleSoft-supplied indexes are generic in nature. Depending on the customer's business needs and data composition, index requirements may vary. The following tips will help DBAs manage indexes more efficiently.
INDEX TIPS
Here are some indexing tips:
1. Review the index recommendation document supplied by the product to see if any of the suggestions apply to your installation.
2. Run an Oracle trace/TKPROF report for a process and check the access paths to determine index usage.
   Note: We recommend that you do not use the Index Skip Scan access method because it can be very slow when accessing large indexes.
3. Consider adding indexes depending on your processing/performance needs.
4. Examine the available indexes and remove any unused indexes to boost the performance of INSERT/UPDATE/DELETE DML. Sometimes an index that is unused by a batch process may still be useful to an online page; the reverse may also be true. Perform a thorough system analysis before deleting any index, because index deletion can severely impact another program's performance.
REBUILDING OF INDEXES
We recommend that you rebuild an index when a SQL execution plan accesses it by a range scan or full index scan and reveals a significant number of logical I/Os (as well as physical I/Os) for a relatively small number of rows returned by the scan. Significant I/O typically happens when a large number of rows have been deleted from the table. Within the index, those rows are logically deleted, but physically they linger until you rebuild the index. To improve runtime performance, Oracle does not coalesce near-empty blocks or rebalance physical index blocks; as a result, deleted blocks remain within the index until you rebuild it. During a range scan or full index scan, these deleted/empty blocks must still be read, causing the performance degradation.

As of this writing, little information about index rebuilding criteria is available pertaining specifically to Oracle 10g. Three Metalink notes provide some insight as to when to rebuild or coalesce an index:
- Note 77574.1, "Guidelines on When to Rebuild a B-Tree Index," dated October 20, 2005: Labeled for Oracle 7.0 to 9i inclusive; may indirectly be usable for Oracle 10.x. This note is referenced by note 122008.1.
- Note 99618.1, "ORACLE8i - Coalescing Indexes," dated October 20, 2005: Labeled for Oracle 8i only, but describes index coalescing and discusses the pros and cons of coalescing indexes.
- Note 122008.1, "Script: Lists All Indexes that Benefit from a Rebuild," dated May 6, 2005: Labeled for Oracle 7.3 to 10.2 inclusive. This note provides a script that may be used to determine whether an index is a candidate for rebuilding. It uses the suggested values from note 77574.1 and provides a good basis for the decision; read note 77574.1 before using the script.

Note: If a very large index is evaluated, a threshold of 5 or more levels may be too shallow.
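One common way to collect the raw numbers behind such a decision is to validate the index structure and inspect INDEX_STATS; the index name below is only an example, and the 20 percent threshold is a commonly cited rule of thumb rather than a hard rule. Note that VALIDATE STRUCTURE locks the underlying table against DML while it runs, so schedule it outside business hours.

```sql
-- Populates INDEX_STATS for the current session only
ANALYZE INDEX PS0CUSTOMER VALIDATE STRUCTURE;

-- A high ratio of deleted leaf rows to leaf rows suggests a rebuild candidate
SELECT name, height, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 1) AS pct_deleted
  FROM index_stats;
```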
FUNCTION-BASED INDEXES
A function-based index is an index on an expression, such as an arithmetic expression or an expression containing a package function. We recommend that you contact customer support before you create a function-based index. Here is a test case. Table PS_CUSTOMER has an index PS0CUSTOMER with NAME1 as the leading column:
SELECT SETID, CUST_ID, NAME1
FROM PS_CUSTOMER
WHERE NAME1 LIKE 'Adventure%';

SETID CUST_ID NAME1
----- ------- ----------------
SHARE 1008    Adventure 54
This query uses the PS0CUSTOMER index and returns quickly, but it finds only rows that match the case of the literal exactly and misses rows stored in a different case. When data is stored in mixed case, as in this example, the only way to get a complete result with a case-insensitive filter is to use the UPPER function.
SELECT SETID, CUST_ID, NAME1
FROM PS_CUSTOMER
WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID NAME1
----- ------- ----------------
SHARE 1008    Adventure 54
This query does not use the PS0CUSTOMER index, so it takes a long time to return a result; however, the data that it returns is correct. In such cases, a function-based index helps.
CREATE INDEX PSFCUSTOMER ON PS_CUSTOMER (UPPER(NAME1));

SELECT SETID, CUST_ID, NAME1
FROM PS_CUSTOMER
WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID NAME1
----- ------- ----------------
SHARE 1008    Adventure 54
This query uses the PSFCUSTOMER index, returns quickly, and provides the correct output. Note: Review the Oracle documentation pertaining to function-based indexes before you try to create them. Starting with PeopleTools 8.48, PeopleTools generates indexes with DESCENDING columns; Oracle treats these as function-based indexes. Here is an example:
CREATE UNIQUE INDEX PS_GL_ACCOUNT_TBL ON PS_GL_ACCOUNT_TBL
  (SETID, ACCOUNT, EFFDT DESC)
  TABLESPACE PSINDEX
  STORAGE (INITIAL 45056 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
  PCTFREE 10 PARALLEL NOLOGGING
When you select the descending column name from DBA_IND_COLUMNS, the index shows a system-generated column name such as SYS_NC00033$. To find the real column name, you must look in COLUMN_EXPRESSION of DBA_IND_EXPRESSIONS.

select a.index_name, a.index_type, b.column_name
from dba_indexes a, dba_ind_columns b
where b.index_name = 'PS_GL_ACCOUNT_TBL'
and a.index_name = b.index_name
order by b.column_position;
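To resolve the system-generated column name to its underlying expression, you can query DBA_IND_EXPRESSIONS directly; a sketch for the index above:

```sql
SELECT column_expression, column_position
FROM   dba_ind_expressions
WHERE  index_name = 'PS_GL_ACCOUNT_TBL'
ORDER  BY column_position;
```

Note that COLUMN_EXPRESSION is a LONG column, so some tools may need SET LONG to display it fully.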
Because of bugs (4939157 and 5092688) that caused wrong results or core dumps from queries using function-based indexes, PeopleSoft had recommended disabling function-based indexes. However, disabling them may cause SQL that relies on these indexes to perform inefficiently. We suggest that you go to the following link to check the availability of patches, apply them to fix the bugs, and then remove the _DISABLE_FUNCTION_BASED_INDEX=TRUE initialization parameter: http://www4.peoplesoft.com/psdb.nsf/0/33440EC2DE7C886788257051005AEB72?OpenDocument
When you run a SQL statement that does not exist in the shared pool, Oracle must fully parse it: allocate memory for the statement from the shared pool, check it syntactically and semantically, and so on. This process is referred to as a hard parse and is costly both in CPU used and in the number of latch operations performed. A hard parse happens whenever the Oracle server parses a query and cannot find an exact match for it in the library cache; excessive hard parsing can cause excessive CPU usage. This problem results from inefficient sharing of SQL statements and can be avoided by using bind variables instead of literals in queries. The number of hard parses can be identified in a PeopleSoft AE trace (128). Below is an example from an AE trace (128); notice the compile count of 252 for the same SQL statement.
SQL Statement        Compile        Execute        Fetch          Total
                     Count  Time    Count  Time    Count  Time    Time
BL6100.10000001.S    252    0.6     252    1.5     0      0.0     2.1
In Oracle trace output, such statements appear as individual statements, each parsed once, so relying on Oracle trace output to identify SQL that is hard parsed because of literals instead of bind variables is somewhat difficult. Oracle introduced the CURSOR_SHARING parameter in Oracle8i. By default, this parameter is set to EXACT, which means that the database looks for an exact match of the SQL statement while parsing. Note: Setting the CURSOR_SHARING value at the instance level is not recommended in a PeopleSoft environment.
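The effect of EXACT matching can be sketched with the PS_CUSTOMER table from the earlier example (the literal values here are hypothetical):

```sql
-- Under CURSOR_SHARING=EXACT these are two distinct statements in the
-- library cache; each one is hard parsed separately:
SELECT CUST_ID FROM PS_CUSTOMER WHERE NAME1 LIKE 'Adventure%';
SELECT CUST_ID FROM PS_CUSTOMER WHERE NAME1 LIKE 'Voyager%';

-- The bind-variable form is hard parsed once and then reused for
-- every value supplied for :1:
SELECT CUST_ID FROM PS_CUSTOMER WHERE NAME1 LIKE :1;
```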
AE - Reuse Flag
PeopleSoft AE programs use bind variables in their SQL statements, but these are PeopleSoft-specific variables: when a statement is passed to the database, the AE program sends it with literal values. The only way to make an AE program send true bind variables is to select the Reuse option for the statement that needs them. If you decide to customize, we recommend that you select the Reuse option for all program steps.
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE PS_PC_RATE_RUN_TAO
      2  INDEX RANGE SCAN (object id 16735)
Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      1  UPDATE OF 'PS_PC_RATE_RUN_TAO'
      2  INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************
You will see 252 different SQL statements in the tkprof output similar to the one presented in the previous example.
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)
Rows     Row Source Operation
-------  ---------------------------------------------------
    252  UPDATE PS_PC_RATE_RUN_TAO
    504  INDEX RANGE SCAN (object id 16735)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
    252  UPDATE OF 'PS_PC_RATE_RUN_TAO'
    504  INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************
SQR/COBOL - CURSOR_SHARING
Most SQR and COBOL programs are written to use bind variables. If you find programs that do not use bind variables and you cannot modify the code, the CURSOR_SHARING option FORCE is useful. With this setting, the database looks for a similar statement, ignoring the literal values passed in: Oracle replaces the literals with system bind variables, treats the variants as a single statement, and parses it once. Setting the value at the session level is more appropriate. If you identify the SQR/COBOL program that is not using bind variables and must force it to use binds at the database level, adding the ALTER SESSION command at the beginning of the program is the better option. If you are not willing to change the application program, implementing the session-level command through a trigger gives you more flexibility. Here is sample trigger code that you can use to implement the session-level option:
CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (    NEW.RUNSTATUS  = 7
      AND OLD.RUNSTATUS != 7
      AND NEW.PRCSTYPE   = 'SQR REPORT'
      AND NEW.PRCSNAME   = 'INS6000' )
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURSOR_SHARING=FORCE';
END;
/
Note: You must grant the ALTER SESSION privilege to MYDB for this trigger to work.
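The grant itself is a one-line statement (assuming MYDB is the owning schema, as in the sample trigger above):

```sql
GRANT ALTER SESSION TO MYDB;
```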
Example
Here is an example SQL statement issued from a SQR/COBOL program:
SELECT . FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE .
NOT EXISTS (SELECT 'X' FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = 'US008'
            AND PZI.INV_ITEM_ID = 'PI000021'
            AND ..)
ORDER BY ..
This example statement uses literal values in the WHERE clause, which causes a hard parse for each run. Every hard parse carries some performance overhead, so minimizing hard parses boosts performance.
This example statement is run for every combination of BUSINESS_UNIT and INV_ITEM_ID. Per the data composition used in this benchmark, there were about 13,035 unique combinations of BUSINESS_UNIT and INV_ITEM_ID and about 19,580 total runs.

Oracle TKPROF Output with CURSOR_SHARING=FORCE
SELECT FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE ..
NOT EXISTS (SELECT :SYS_B_09 FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = :SYS_B_10
            AND PZI.INV_ITEM_ID = :SYS_B_11
            AND ..)
ORDER BY ..
Misses in library cache during parse: 13190
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count    cpu      elapsed   disk     query     current   rows
-------  -------  -------  --------  -------  --------  --------  --------
Parse    27118    5.35     5.06      0        49        1         0
Execute  33788    2.42     2.22      0        5577      235       229
Fetch    54988    2.44     2.57      1        97241     0         47621
-------  -------  -------  --------  -------  --------  --------  --------
total    115894   10.21    9.85      1        102867    236       47850
Misses in library cache during parse: 64
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count    cpu      elapsed   disk     query     current   rows
-------  -------  -------  --------  -------  --------  --------  --------
Parse    356      0.08     0.10      0        0         0         0
Execute  357      0.47     0.48      0        5568      228       228
Fetch    667
-------  -------
total    1380
From these trace statistics, you can see that the number of library cache misses decreased dramatically with the use of bind variables.

Original timing:                  197 sec
Time with CURSOR_SHARING option:  102 sec
%Gain:                            48%
Process Scheduler

The Process Scheduler runs PeopleSoft batch processes. As with the rest of the PeopleSoft architecture, you can set up the Process Scheduler (batch server) to run on the database server or on any other server.
Running the Process Scheduler on a box other than the database server uses a TCP/IP connection to the database. Because a batch process may involve extensive SQL processing, this TCP/IP connection adds considerable overhead and may lengthen processing times. The impact is most evident in processes that do excessive row-by-row processing; in processes where the majority of SQL statements are set based, the TCP/IP overhead is likely reduced. Dedicate a network connection between the batch server and the database to minimize the overhead.
Running the Process Scheduler on the database server eliminates the TCP/IP overhead and improves processing time. Keep in mind, however, that this scenario uses additional memory on the database server. Set the following value in the Process Scheduler configuration file (psprcs.cfg) to use a direct connection instead of TCP/IP:
UseLocalOracleDB=1
This kind of setup is useful for programs that do excessive row-by-row processing.
PeopleTools provides tracing facilities to capture both online and batch program flows. The Oracle DBMS also provides utilities to capture traces detailing SQL execution during a database session. The following basic recommendations can assist you in capturing various traces in order to identify performance issues. Ensure that you reset the values back to zero (that is, stop tracing) after you capture the needed trace files. These recommendations meet typical needs; specific scenarios may require additional settings to capture the needed details. Refer to the PeopleTools documentation for a complete discussion. Note: Running a production environment with any of these settings may cause significant performance degradation due to the overhead introduced by tracing.
There are multiple ways to capture traces of online applications. Using the online runtime settings for a single user is the most efficient method and has the least performance impact on the system. Alternatively, you can set the trace options in the application server's configuration file; however, application server trace settings affect all users of that server and thus have a wider performance impact on the overall system.
Copyright 2008 Oracle, Inc. All rights reserved.
Note: We do not recommend turning on tracing on an application server in a production environment. Having tracing on at the server level may cause significant performance degradation due to the overhead introduced by tracing.
We recommend a value of 31 when tracing with TraceSQL. When tracing PeopleCode (TracePC), the following options should be traced:

   64 - Trace start of programs
  128 - Trace external function calls
  256 - Trace internal function calls
  512 - Show parameter values
+1024 - Show function return value
-----
 1984 - Value to set for TracePC
We recommend a value of 1984 when tracing with TracePC. Here are sample option settings from psappsrv.cfg:
;-------------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - SQL statements
; 2         - SQL statement variables
; 4         - SQL connect, disconnect, commit and rollback
; 8         - Row Fetch (indicates that it occurred, not data)
; 16        - All other API calls except ssb
; 32        - Set Select Buffers (identifies the attributes of columns
;             to be selected)
; 64        - Database API specific calls
; 128       - COBOL statement timings
; 256       - Sybase Bind information
; 512       - Sybase Fetch information
; 4096      - Manager information
; 8192      - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSql=31
TraceSqlMask=12319
;-------------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace entire program
; 2         - List the program
; 4         - Show assignments to variables
; 8         - Show fetched values
; 16        - Show stack
; 64        - Trace start of programs
; 128       - Trace external function calls
; 256       - Trace internal function calls
; 512       - Show parameter values
; 1024      - Show function return value
; 2048      - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=1984
TracePCMask=4095
Note: Performing online tracing at the database level (especially in a production environment) is difficult, mainly because online sessions share database connections: as connections are reused, a single web session may be serviced by many different database connections. If tracing at the application server level is not adequate, contact customer support for more assistance.
AE PERFORMANCE ISSUES
Configuration Settings for Tracing an AE Process on the AE
When tracing is enabled for AE programs, the Process Scheduler creates a subdirectory under the Process Scheduler log/output directory for each AE process. For example, the trace directory created for the FS_BP process might be AE_FS_BP_7233, containing a file named AE_FS_BP_7233.AET. The TraceAE value is calculated the same way as TraceSQL, as previously explained. The recommended TraceSQL value is 31. When tracing AE (TraceAE), the following options should be traced:

    1 - Trace STEP execution sequence to AET file
    2 - Trace Application SQL statements to AET file
    4 - Trace Dedicated Temp Table Allocation to AET file
+ 128 - Timings Report to AET file
-----
  135 - Value to set for TraceAE
We recommend a value of 135 when tracing with TraceAE. Here are sample option settings from psprcs.cfg:
;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace STEP execution sequence to AET file
; 2         - Trace Application SQL statements to AET file
; 4         - Trace Dedicated Temp Table Allocation to AET file
; 8         - not yet allocated
; 16        - not yet allocated
; 32        - not yet allocated
; 64        - not yet allocated
; 128       - Timings Report to AET file
; 256       - Method/BuiltIn detail instead of summary in AET Timings Report
; 512       - not yet allocated
; 1024      - Timings Report to tables
; 2048      - DB optimizer trace to file
; 4096      - DB optimizer trace to tables
TraceAE=135
If tracing PeopleCode steps within an AE program is necessary, the following settings are needed in the Process Scheduler configuration file to capture both SQL and PeopleCode events during a run. PeopleCode tracing is not generally necessary, but it is helpful when debugging procedural issues. Remember to restore the original configuration values after you complete the trace. We recommend a value of 31 for TraceSQL and a value of 1984 for TracePC. Here are sample option settings from psprcs.cfg:
; SQL Tracing Bitfield
; Bit       Type of tracing
; ---       ---------------
; 1         - SQL statements
; 2         - SQL statement variables
; 4         - SQL connect, disconnect, commit and rollback
; 8         - Row Fetch (indicates that it occurred, not data)
; 16        - All other API calls except ssb
; 32        - Set Select Buffers (identifies the attributes of columns
;             to be selected)
; 64        - Database API specific calls
; 128       - COBOL statement timings
; 256       - Sybase Bind information
; 512       - Sybase Fetch information
; 1024      - SQL Informational Trace
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSQL=31
;-------------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace Evaluator instructions (not recommended)
; 2         - List Evaluator program (not recommended)
; 4         - Show assignments to variables
; 8         - Show fetched values
; 16        - Show stack
; 64        - Trace start of programs
; 128       - Trace external function calls
; 256       - Trace internal function calls
; 512       - Show parameter values
; 1024      - Show function return value
; 2048      - Trace each statement in program (recommended)
; Dynamic change allowed for TracePC
TracePC=1984
We recommend a value of 2183 when using this method. Here is a sample option setting from psprcs.cfg:

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace STEP execution sequence to AET file
; 2         - Trace Application SQL statements to AET file
; 4         - Trace Dedicated Temp Table Allocation to AET file
; 8         - not yet allocated
; 16        - not yet allocated
; 32        - not yet allocated
; 64        - not yet allocated
; 128       - Timings Report to AET file
; 256       - Method/BuiltIn detail instead of summary in AET Timings Report
; 512       - not yet allocated
; 1024      - Timings Report to tables
; 2048      - DB optimizer trace to file
; 4096      - DB optimizer trace to tables
TraceAE=2183
Note: This setting does not provide wait events or bind variable information. If you need this information, use the second method.

2. Create a trigger to start SQL tracing on the database side for an AE process with customized trace settings. Generally, a level 12 trace is useful for identifying SQL performance problems because it captures wait and bind information for all SQL. For example, to generate a trace for the AE process PO_PO_CALC, create the following trigger; MYDB is the database name and SET_TRACE_POCALC is the trigger name.
CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_POCALC
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (    NEW.runstatus  = 7
      AND OLD.runstatus != 7
      AND NEW.prcstype   = 'Application Engine'
      AND NEW.prcsname   = 'PO_PO_CALC' )
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET TIMED_STATISTICS = TRUE';
  EXECUTE IMMEDIATE 'ALTER SESSION SET MAX_DUMP_FILE_SIZE = UNLIMITED';
  EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''POCALC''';
  EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12''';
END;
/
Modify the trigger creation command with the proper values for the database name, process name, and tracefile identifier. Note: Drop or disable the trigger once the trace is captured. After the raw database trace is captured, run the tkprof program with the following sort options:
tkprof <trace_input_file> <rpt_output_file> sys=no explain=<user_id>/<password> sort=exeela,fchela,prscpu,execpu,fchcpu

Note: We discourage enabling tracing at the database instance level.
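Once the trace has been captured, the trigger described above can be removed, or temporarily disabled if you expect to trace again later; for example:

```sql
-- Remove the tracing trigger entirely...
DROP TRIGGER MYDB.SET_TRACE_POCALC;

-- ...or keep it but leave it inactive until the next tracing session:
ALTER TRIGGER MYDB.SET_TRACE_POCALC DISABLE;
```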
a) Generate a patch inventory list: Log on to the database server, change to the $ORACLE_HOME/OPatch directory, and issue the following command: opatch lsinventory.
b) Verify that you have applied all required minimum patches:
Log on to Oracle's PeopleSoft Customer Connection (http://www4.peoplesoft.com/psdb.nsf/0/33440EC2DE7C886788257051005AEB72?OpenDocument) and search for "Required Operating System, RDBMS & Third Party Product Patches Required for Installation." Select the PeopleTools release that you are on.
Note: The Customer Connection website also documents initialization parameters that impact performance.

2. Send the PeopleSoft Global Support Center a copy of your init.ora or spfile.ora.

Note: You can use the Remote Diagnostic Agent (RDA) to collect the information listed here. For instructions on how to run RDA, see Note 414970.1 in Oracle Metalink.
A list appears displaying the snapshot IDs and the corresponding times when the snapshots were generated.

4. Enter the beginning and ending snapshot IDs for the AWR report:
Enter value for begin_snap: Enter value for end_snap:
The AWR report is then generated. If you want to diagnose a specific issue, you can create a snapshot just before and again just after executing a questionable program; typically, this is not necessary. Here is an example of how to manually create a snapshot via SQL*Plus:
BEGIN DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT (); END; /
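If you need to find the snapshot IDs outside of the report script's prompt, they can be listed from the DBA_HIST_SNAPSHOT view; a sketch:

```sql
SELECT snap_id,
       begin_interval_time,
       end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```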
Service time is CPU consumption by the database; wait time is the sum of all the wait events in the database. The most important part of the AWR report is the Top 5 Timed Events section; using it, you can quickly identify the main areas on which to focus. Where CPU usage is much more significant than wait time, investigating wait events is less likely to produce significant savings in response time. Therefore, we recommend that you compare the time taken by the top five timed events and direct the tuning effort at the biggest consumers.
Example
Top 5 Timed Events
Event                         Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                               114                    75.2
db file sequential read        4,658    26      12            13.4               User I/O
db file scattered read         8,883    12       5             8.5               User I/O
log file parallel write       10,336     9       1             1.4               System I/O
control file sequential read  28,284     9       0             1.3               System I/O
A glance at this table shows that the system is CPU bound, with 75.2 percent of the processing time spent on CPU. To drill further into the details of the CPU consumption, look at the Time Model Statistics section:

Time Model Statistics
Statistic Name                               Time (s)   % of DB Time
DB CPU                                         114.19          75.20
sql execute elapsed time                        92.92          59.33
parse time elapsed                              11.67           8.71
hard parse elapsed time                          3.40           2.54
failed parse elapsed time                        1.37           1.02
connection management call elapsed time          0.79           0.59
hard parse (sharing criteria) elapsed time       0.16           0.12
PL/SQL execution elapsed time                    0.07           0.05
sequence load elapsed time                       0.01           0.01
repeated bind elapsed time                       0.00           0.00
DB time                                        134.02
background elapsed time                         66.65
background cpu time                             31.67
The Time Model Statistics section reveals that 59 percent of the total DB time is spent in SQL execute elapsed time. The SQL Statistics section reveals problem SQL (such as SQL with high gets, high physical reads, or high parse counts). The SQL Statistics section shows:

SQL Ordered by Elapsed Time: SQL statements that took significant run time during processing.
SQL Ordered by CPU Time: SQL statements that consumed significant CPU time during processing.
SQL Ordered by Gets: SQL that performed a high number of logical reads while retrieving data.
SQL Ordered by Reads: SQL that performed a high number of physical disk reads while retrieving data.
SQL Ordered by Parse Calls: SQL that experienced a high number of reparsing operations.
SQL Ordered by Sharable Memory: SQL statement cursors that consumed a large amount of SGA shared pool memory.
SQL Ordered by Version Count: SQL with a large number of versions in the shared pool.

To get the details of wait events, go to the Wait Events Statistics section:

Wait Events
Event                          Waits    %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
db file sequential read         4,658         0.00                   26             12       1.02
db file scattered read          8,883         0.00                   12              5       1.00
log file parallel write        10,336         0.00                    9              1       1.16
control file sequential read   28,284         0.00                    9              0       3.18
db file parallel write          6,547         0.00                    8              1       0.74
control file parallel write    10,839         0.00                    6              1       1.22
The following are the most common wait events found in an Oracle database.

The most common I/O-related wait event is db file sequential read, which occurs on single-block reads of index data, or of table data accessed through an index. If this wait event is high, tune it as follows:

1. Find the top SQL statements with physical reads in the SQL Ordered by Reads section, and generate the explain plan for each.
a) If index range scans are involved, the system could be visiting more blocks than necessary because the index is unselective. By creating a more selective index, the SQL can access the same table data by visiting fewer index blocks (and doing fewer physical I/Os).
b) If indexes are fragmented, the system could be visiting more blocks because there is less index data per block. In this case, rebuilding the index compacts its contents into fewer blocks.
c) If the index being used has a large clustering factor, more table data blocks must be visited to get the rows in each index block. By rebuilding the table with its rows sorted by the particular index columns, the clustering factor, and hence the number of table data blocks that must be visited for each index block, can be reduced. For example, if the table has columns A, B, C, and D, and the index is on B, D, then export the table data ordered by B, D, and reload the table.
2. If no particular SQL statement has a bad execution plan, then I/Os on particular data files may be serviced slowly due to excessive activity on their disks. In this case, the File I/O Statistics section of the AWR report will identify such hot disks. To improve performance, spread out the I/O by manually moving datafiles to other storage, or use striping, RAID, and similar technologies to perform I/O load balancing automatically.

3. If there is no SQL with suboptimal execution plans, and I/O is spread evenly with similar response times from all disks, then a larger buffer cache may help. Oracle Database 10g introduced the Automatic Shared Memory Management (ASMM) feature, which automatically sizes the database buffer cache (default pool), shared pool, large pool, and Java pool when you set the SGA_TARGET parameter. For more details about ASMM, see Metalink Note 257643.1.

Another common I/O-related wait event is db file scattered read, which occurs when multiblock reads from disk are performed into noncontiguous buffers in the buffer cache. Such reads are issued for up to DB_FILE_MULTIBLOCK_READ_COUNT blocks at a time and typically happen for full table scans and fast full index scans. If this wait event is high, investigate the top SQL statements with physical reads in the SQL Ordered by Reads section to see whether their execution plans contain full table or fast full index scans. In cases where such multiblock scans are necessary, you can tune the size of the multiblock I/Os issued by Oracle by setting the instance parameter DB_FILE_MULTIBLOCK_READ_COUNT so that:
DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT = max_io_size of system
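For example, assuming a standard 8 KB block size and a system whose maximum efficient I/O size is 1 MB, the arithmetic gives 1,048,576 / 8,192 = 128 blocks:

```sql
-- 1 MB max I/O size / 8 KB block size = 128 blocks per multiblock read
ALTER SYSTEM SET DB_FILE_MULTIBLOCK_READ_COUNT = 128;
```

The 8 KB and 1 MB figures are illustrative; check your own block size and your platform's maximum I/O size before setting this parameter.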
Starting with Oracle 10g Release 2, the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT is automatically tuned to a default value when it is not set explicitly. This default corresponds to the maximum I/O size that can be performed efficiently, which is platform-dependent and is 1 MB for most platforms. Because the parameter is expressed in blocks, it is set to the maximum efficient I/O size divided by the standard block size.

Another common I/O-related wait event is control file parallel write, which occurs when Oracle is writing physical blocks to all control files and is waiting for the I/Os to complete. The details of this wait are reported in the Background Wait Events section. If systemwide waits for this event are significant, it indicates either numerous writes to the control files (too many control file copies) or slow write performance. Consider these possible solutions:

- Reduce the number of control file copies to the minimum that still ensures not all copies can be lost at the same time.
- Enable asynchronous I/O, or move the control files to less I/O-saturated disks.

Another common wait event is log file sync, which occurs when a user session issues a COMMIT and waits for LGWR to finish flushing all redo from the log buffer to disk. To understand what is delaying the log file sync, examine related wait events such as LGWR wait for redo copy, log file parallel write, and log file single write, as well as the redo statistics. Some general tuning tips for this wait event:

- Move all the log members to high-speed disks.
- Move log members to disk controllers with low I/O activity.

Starting with Oracle 10gR2, Oracle introduced Asynchronous Commit, which is enabled with the initialization parameter COMMIT_WRITE and changes commit behavior at the SYSTEM or SESSION level.
To read more about this feature, see Metalink Note 336119.1. As a general rule, systems where CPU time is dominant usually need less tuning than systems where wait time is dominant. On the other hand, heavy CPU usage can be caused by poor SQL access paths or badly written SQL, so do not neglect it.
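A session-level sketch of the asynchronous commit setting mentioned above (Oracle 10gR2 or later; weigh the durability trade-off before using it, since a NOWAIT commit can be lost in an instance failure):

```sql
-- Buffer the redo and return from COMMIT without waiting for LGWR
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';
```

COMMIT_WRITE accepts combinations of IMMEDIATE or BATCH with WAIT or NOWAIT; the default behavior is equivalent to IMMEDIATE,WAIT.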
In addition, the proportion of CPU time to wait time always tends to decrease as load on the system increases. A steep increase in wait times is a sign of contention and must be addressed for good scalability. Take snapshots of the database workload throughout the day to detect such performance issues. Here is a list of Metalink reference notes that can be useful for database tuning:

Note 190124.1: The COE Performance Method
Note 30286.1: I/O Tuning with Different RAID Configurations
Note 30712.1: Init.ora Parameter DB_FILE_MULTIBLOCK_READ_COUNT Reference Note
Note 1037322.6: What Is the DB_FILE_MULTIBLOCK_READ_COUNT Parameter?
Note 47324.1: Init.ora Parameter DB_FILE_DIRECT_IO_COUNT Reference Note
Note 45042.1: Archiver Best Practices
Note 62172.1: Understanding and Tuning Buffer Cache and DBWR
Note 147468.1: Checkpoint Tuning and Troubleshooting Guide
Note 76713.1: 8i Parameters that Influence Checkpoints
Note 76374.1: Multiple Buffer Pools
BEGIN /** ** Delete Old Status Info here ** ** If an external table is used to hold the log ** of the executions, truncate it here. **/ /** ** Flush Monitoring info ** ** This forces the database to "flush" the modification data ** so that dbms_stats can tell whether the stats are stale. ** Documentation says this is not needed. Doing it just in case. **/ DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; /** ** Find all tables and indexes that have stale stats **/
DBMS_STATS.GATHER_DATABASE_STATS (
   cascade => TRUE
  ,options => 'LIST AUTO'
  ,objlist => lot_ObjectsNeedingStats
);

FOR i IN NVL( lot_ObjectsNeedingStats.first, 0 ) ..
         NVL( lot_ObjectsNeedingStats.last, 0 )
LOOP
  /** Filter out all the system objects **/
  IF lot_ObjectsNeedingStats( i ).ownname NOT IN
     ( 'SYS', 'SYSTEM', 'SYSMAN', 'CTXSYS', 'DBSNMP' )
  THEN
    /** Non-system object.
    ** Determine whether this object is an "INDEX" or "TABLE" **/
    IF lot_ObjectsNeedingStats( i ).objtype = 'INDEX' THEN
      SELECT table_owner, table_name
        INTO lvc_tableowner, lvc_tablename
        FROM dba_indexes
       WHERE 1 = 1
         AND owner      = lot_ObjectsNeedingStats( i ).ownname
         AND index_name = lot_ObjectsNeedingStats( i ).objname;
    ELSE
      lvc_tableowner := lot_ObjectsNeedingStats( i ).ownname;
      lvc_tablename  := lot_ObjectsNeedingStats( i ).objname;
    END IF; /** Object type check **/

    /** Trim off the default "PS_" from the table name if it exists **/
    IF SUBSTR( lvc_tablename, 1, 3 ) = 'PS_' THEN
      lvc_PSRecordName := SUBSTR( lvc_tablename, 4 );
    ELSE
      lvc_PSRecordName := lvc_tablename;
    END IF; /** Strip "PS_" **/

    /** Set the temp table trap **/
    lb_IsTempTable := FALSE;

    /** rectype = 7 denotes a PeopleSoft temp table **/
    sql_stmt := 'SELECT 1 FROM ' || lvc_tableowner
             || '.PSRECDEFN WHERE recname = :b1 and rectype = 7';

    /** Check to see if the record is a base temp table.
    ** Example table name = PS_TEMP_TAO **/
    BEGIN
      EXECUTE IMMEDIATE sql_stmt INTO lint_tmp USING lvc_PSRecordName;
      -- If we make it here, we found a record
      lb_IsTempTable := TRUE;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        /** Record was not a base temp table.
        ** Check if record is a single digit temp instance.
        ** Example table name = PS_TEMP_TAO1
        **/
        IF SUBSTR( lvc_PSRecordName, LENGTH( lvc_PSRecordName ), 1 ) BETWEEN '1' AND '9' THEN
          /** Remove the last digit **/
          lvc_PSRecordName := SUBSTR( lvc_PSRecordName, 1, LENGTH( lvc_PSRecordName ) - 1 );
          /** Do the check **/
          BEGIN
            EXECUTE IMMEDIATE sql_stmt INTO lint_tmp USING lvc_PSRecordName;
            -- If we make it here, we found a record
            lb_IsTempTable := TRUE;
          EXCEPTION
            WHEN NO_DATA_FOUND THEN
              /** Record was not a single digit temp instance.
              ** Check if record is a double digit temp instance.
              ** Example table name = PS_TEMP_TAO26 **/
              IF SUBSTR( lvc_PSRecordName, LENGTH( lvc_PSRecordName ), 1 ) BETWEEN '1' AND '9' THEN
                /** Remove the 2nd-to-last digit **/
                lvc_PSRecordName := SUBSTR( lvc_PSRecordName, 1, LENGTH( lvc_PSRecordName ) - 1 );
                /** Do the check **/
                BEGIN
                  EXECUTE IMMEDIATE sql_stmt INTO lint_tmp USING lvc_PSRecordName;
                  -- If we make it here, we found a record
                  lb_IsTempTable := TRUE;
                EXCEPTION
                  WHEN NO_DATA_FOUND THEN
                    -- We have not found a temp table!
                    NULL;
                  WHEN OTHERS THEN
                    RAISE;
                END; /** 2 digit dedicated temp table check **/
              END IF; /** Check 2nd last char for digit **/
            WHEN OTHERS THEN
              RAISE;
          END; /** single digit dedicated temp table check **/
        END IF; /** Checking last digit **/
      WHEN OTHERS THEN
        IF SQLCODE = -942 THEN /** table or view does not exist **/
          /** Non-PeopleSoft table/index found.
          ** Treat as though it was not a temp table; generate stats. **/
          NULL;
        ELSE
          /** Unexpected error **/
          RAISE;
        END IF;
    END; /** base temp table check **/

    IF lb_IsTempTable = FALSE THEN
      lts_StartTime := SYSTIMESTAMP;
50
9/4/2008
dbms_output.put_line( 'Generating stats on ' || LOWER( lot_ObjectsNeedingStats( i ).objtype ) || ' ' || lot_ObjectsNeedingStats( i ).ownname || '.' || lot_ObjectsNeedingStats( i ).objname || '.' ); IF lot_ObjectsNeedingStats( i ).objtype = 'INDEX' THEN /* PROCEDURE GATHER_INDEX_STATS Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------OWNNAME VARCHAR2 IN INDNAME VARCHAR2 IN PARTNAME VARCHAR2 IN DEFAULT ESTIMATE_PERCENT NUMBER IN DEFAULT STATTAB VARCHAR2 IN DEFAULT STATID VARCHAR2 IN DEFAULT STATOWN VARCHAR2 IN DEFAULT DEGREE NUMBER IN DEFAULT GRANULARITY VARCHAR2 IN DEFAULT NO_INVALIDATE BOOLEAN IN DEFAULT STATTYPE VARCHAR2 IN DEFAULT FORCE BOOLEAN IN DEFAULT */ EXECUTE IMMEDIATE ' BEGIN DBMS_STATS.GATHER_INDEX_STATS( ownname => ''' || lot_ObjectsNeedingStats( i ).ownname || ''' ,indname => ''' || lot_ObjectsNeedingStats( i ).objname || ''' ,partname => ''' || lot_ObjectsNeedingStats( i ).partname || ''' ,estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE ,degree => DBMS_STATS.AUTO_DEGREE ); END; '; ELSE /* PROCEDURE GATHER_TABLE_STATS Argument Name Type In/Out Default? 
------------------------------ ----------------------- ------ -------OWNNAME VARCHAR2 IN TABNAME VARCHAR2 IN PARTNAME VARCHAR2 IN DEFAULT ESTIMATE_PERCENT NUMBER IN DEFAULT BLOCK_SAMPLE BOOLEAN IN DEFAULT METHOD_OPT VARCHAR2 IN DEFAULT DEGREE NUMBER IN DEFAULT GRANULARITY VARCHAR2 IN DEFAULT CASCADE BOOLEAN IN DEFAULT STATTAB VARCHAR2 IN DEFAULT STATID VARCHAR2 IN DEFAULT STATOWN VARCHAR2 IN DEFAULT NO_INVALIDATE BOOLEAN IN DEFAULT STATTYPE VARCHAR2 IN DEFAULT FORCE BOOLEAN IN DEFAULT */ EXECUTE IMMEDIATE ' BEGIN DBMS_STATS.GATHER_TABLE_STATS( ownname => ''' || lot_ObjectsNeedingStats( i ).ownname || ''' ,tabname => ''' || lot_ObjectsNeedingStats( i ).objname || ''' ,partname => ''' || lot_ObjectsNeedingStats( i ).partname || ''' ,estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE ,block_sample => FALSE ,method_opt => ''FOR ALL COLUMNS SIZE AUTO'' ,degree => DBMS_STATS.AUTO_DEGREE ,cascade => FALSE ); END; '; END IF; lts_FinishTime := SYSTIMESTAMP;
51
9/4/2008
/** ** Create status string here. ** ** If so inclined. **/ ELSE dbms_output.put_line( 'NOT Generating stats on temp ' || LOWER( lot_ObjectsNeedingStats( i ).objtype ) || ' ' || lot_ObjectsNeedingStats( i ).ownname || '.' || lot_ObjectsNeedingStats( i ).objname || '.' ); END IF; /** Gen Stats **/ /** ** Insert status info ** ** If so inclined ** COMMIT; **/ END IF; /** System object filter **/ END LOOP; /** Object stale stats loop **/ END;
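The listing above is a fragment of the sample script: it references variables declared earlier in the enclosing anonymous block. For reference, a minimal skeleton of those declarations is sketched below. The variable names are taken from the fragment itself; the data types and sizes shown are assumptions, not the original script's declarations.

DECLARE
   -- Populated by GATHER_DATABASE_STATS with options => 'LIST AUTO'
   lot_ObjectsNeedingStats   DBMS_STATS.ObjectTab;
   lvc_tableowner            VARCHAR2(30);     -- owner of the table (or of an index's base table)
   lvc_tablename             VARCHAR2(30);
   lvc_PSRecordName          VARCHAR2(30);     -- table name with any 'PS_' prefix stripped
   lb_IsTempTable            BOOLEAN;          -- the "temp table trap"
   lint_tmp                  INTEGER;          -- target of the PSRECDEFN existence probe
   sql_stmt                  VARCHAR2(4000);   -- dynamic SQL text
   lts_StartTime             TIMESTAMP;
   lts_FinishTime            TIMESTAMP;
BEGIN
   NULL;  -- body shown above goes here
END;
/

DBMS_STATS.ObjectTab is the collection type that GATHER_DATABASE_STATS fills when called with the 'LIST AUTO' option; each element carries the ownname, objtype, objname, and partname fields the fragment reads.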
CUSTOMER VALIDATION
PeopleSoft is working with its customers to get feedback on and validation of this document. Lessons learned from these customer experiences will be posted here.
FIELD VALIDATION
PeopleSoft Consulting has provided feedback and validation on this document. Additional lessons learned from field experience will be posted here.
Appendix C References
Implementation Guide, Customer Connection, http://www.peoplesoft.com/
Solution ID 201049233: E-ORACLE: 10g Master Performance Solution for Oracle 10g, Customer Connection, http://www.peoplesoft.com/
Oracle Magazine, http://www.oracle.com/oramag/
Oracle MetaLink, http://metalink.oracle.com
Revision History
July 2008: Created document.
PeopleSoft Enterprise Performance on Oracle 10g Database
July 2008
Author: Jayagopal Theranikal
Contributing Authors: Rama Tiruveedhula, Lawrence Schapker, Michelle Lam, Glenn Low

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright 2008, Oracle. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor is it subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle, JD Edwards, and PeopleSoft are registered trademarks of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.