
SQL Server 2008

with Siebel CRM Applications


SQL Server Technical Article

Writers: Microsoft Corporation


Technical Reviewers: Anu Chawla, Wanda He, George Heynen, Peter Samson

Published: April 2009


Updated: N/A
Applies To: SQL Server 2008

Summary: Microsoft SQL Server 2008 offers best-of-breed performance for Siebel.
This paper describes the capabilities of SQL Server 2008, how to maximize database
performance for Siebel, and how to resolve common issues encountered by customers.
Copyright
The information contained in this document represents the current view of Microsoft Corporation on the issues
discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it
should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the
accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS,
IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under
copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or
for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights
covering subject matter in this document. Except as expressly provided in any written license agreement
from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks,
copyrights, or other intellectual property.

© 2009 Microsoft Corporation. All rights reserved.

Microsoft, Visual Basic, Visual Studio, Win32, Windows, and Windows Server are either registered trademarks
or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective
owners.
Contents
Executive Summary
Siebel Architecture
SQL Server 2008 Features
  Scale and Performance
    Data Compression
    Enhanced Lock Escalation
    Predictable Query Performance
    Plan Freezing
    Resource Governor
  Enhanced High Availability
    Enhanced Database Mirroring
    Geographically-dispersed Cluster Services
    Mirrored Backup Set
    Backup Compression
  Powerful Monitoring and Tools
    Performance Data Collection
    Policy-based Management
  Enhanced Security and Compliance
    Transparent Data Encryption
    Extensible Key Management
    Auditing
Configuration Management
  Configuration Parameter – Max Degree of Parallelism
  Configure Memory
  Configure Data and Log Files
  Configure TempDB
  Configure Network Packet Size
Ongoing Maintenance and Monitoring
  Defragmenting Tables and Updating Statistics
  Significant events necessitating Maintenance
    After initial creation of the Siebel Database
    After a Repository migration
    After mass data changes
    After a Siebel product upgrade
    Ongoing in the Siebel Development environment
  Performance Monitor
Performance Tuning
  Identifying and Tuning resource-intensive SQL statements
  SQL Server Profiler
  Data Management Views (DMVs)
  Database Engine Tuning Advisor
Common Questions
  Case-insensitive Search
  Change Database Collation
  Dropping Indexes
  Reporting against a mirror Database
  Partitioned Tables
  EIM Performance
    No mixed workload, EIM involving low-volume Tables
    No mixed workload, EIM involving moderate/high-volume Tables
    Mixed workload, EIM involving low-volume Tables
    Mixed workload, EIM involving moderate/high-volume Tables
Appendix 1 – REINDEX Script
Appendix 2 – Maintenance Plans
Appendix 3 – Traced SQL statement
Appendix 4 – Identify and resolve a suboptimal Execution Plan
Message and Audience
This paper describes the capabilities of SQL Server 2008 and the advantages of
using SQL Server 2008 with Siebel CRM Applications, provides guidance for
maximizing performance, discusses common questions, and offers solutions to some
common problems.

To fully comprehend the concepts covered in this paper, the reader should have
a general understanding of databases, SQL Server, and Siebel. We also assume the
reader is, or will be, using the referenced databases on their preferred hardware
platform.

Executive Summary
Microsoft SQL Server 2008 offers many improvements and new capabilities for Siebel.
Several SQL Server 2008 features offer immediate benefit and require minor projects to
implement:
 Monitoring, Troubleshooting, and Tuning. The Database Administrator can
implement a comprehensive database-monitoring solution using Performance
Data Collection and the Management Data Warehouse.
 Backup Compression. The on-disk footprint for backups of the Siebel database
can be immediately reduced by as much as 75%. Moreover, the elapsed time
for backups may be reduced by as much as 43%.
 Data Compression. The on-disk footprint for the Siebel database may be
reduced by as much as 45%. Moreover, runtime performance of Siebel may
improve by as much as 25%, because less data is read from the I/O
subsystem.

Take advantage of SQL Server 2008’s industry-leading performance and scalability for
real-world database workloads with the lowest cost of operation, as verified by
Microsoft partners and industry-standard Transaction Processing Performance Council’s
TPC benchmarks.

Oracle has certified SQL Server 2008 for Siebel 7.8, Siebel 8.0, and Siebel 8.1.
Siebel Architecture
The following diagram illustrates a sample Siebel deployment (diagram from Oracle’s
Siebel Performance Tuning Guide v8.0).

The minimum set of components will always include:


 Siebel Database. SQL Server 2008 stores the Siebel Repository, reference data
such as the List of Values (LOV’s), and the transactional data (e.g. Accounts,
Contacts, Activities, Opportunities, Service Requests, and so on).
 Siebel File System. Storage of attachments such as documents or
presentations.
 Siebel Application Server(s). Application Server to host client sessions and
Siebel asynchronous components.
 Siebel Gateway Name Server. Holds configuration data and component
availability.
 Web Server(s). Facilitates communication between the user’s web browser and
the Siebel enterprise.

The actual topology, components, and number of servers for a Siebel deployment will
be influenced by any of the following considerations:
 Desired business functionality, and Siebel components in use.
 Business expectations for Siebel availability.
 Planned workload, including concurrent users and asynchronous processes such
as Workflow or integration (EAI).

The processor, memory, and storage capacity for SQL Server 2008 must be aligned
with these considerations.
SQL Server 2008 Features

Scale and Performance


Siebel CRM is a complex workload that demands a sophisticated database engine.
SQL Server 2008’s high-performance engine provides Siebel Administrators and
Database Administrators with industry-leading performance and scalability.

The TPC-E benchmark, introduced in February 2007 to measure OLTP performance, is
broadly representative of customer workloads. Unlike its predecessor TPC-C, TPC-E
uses a complex but realistic database schema and requires mainstream capabilities
such as referential integrity and RAID-protected storage. A SQL Server 2008 TPC-E
benchmark by NEC beat the previous record by 70%. The benchmark for the SAP
application demonstrated an increase in throughput by nearly a factor of three (3), and
continues to demonstrate that SQL Server 2008 offers a low Total Cost of Ownership
(TCO) as the database engine for SAP, Siebel, and other enterprise applications.

An HP, Microsoft, and Oracle benchmark demonstrated that SQL Server 2008 could scale
to 12,000 concurrent users with almost 1.5 million daily transactions.

For more information please see:

 SQL Server 2008 TPC Benchmarks

 HP, Microsoft, and Oracle benchmark for SQL Server 2008

Data Compression
SQL Server 2008 allows the Database Administrator to selectively compress tables
using two levels of compression (ROW or PAGE). Data compression is transparent to
Siebel, and may be selectively enabled or disabled during off-peak hours or
maintenance windows using the ALTER INDEX command with the DATA_COMPRESSION
parameter. The Database Administrator might choose to initially compress historic or
mostly read-only Tables, and then compress all Tables.

SQL Server 2008 has the ability to compress data for selected tables, indexes, or
partitions. Data compression provides relief for the common problem of the I/O
bottleneck for many Siebel implementations.

SQL Server 2008 supports two types of data compression:


 ROW compression compresses individual columns of a table.
 PAGE compression is a superset of ROW compression, in that it takes the results
of ROW compression and then further compresses repeating data patterns on
the same data page.
PAGE compression is generally NOT recommended for a Table in a Siebel OLTP
database due to the high CPU/processor cost for data-manipulation language
(DML) operations. The Database Administrator should only consider PAGE
compression for a Table with historical or static data.

Microsoft tested ROW compression on a 185 GB Siebel database in a lab environment.
The on-disk footprint for the Siebel database was reduced by 45%. Moreover, runtime
performance of Siebel was improved by as much as 25% since compressed data is
being retrieved from the I/O subsystem. Note that the reduction in the on-disk
footprint and improvement in performance will vary between Siebel implementations,
and is highly dependent on the data and the characteristics of the workload. Use the
Stored Procedure sp_estimate_data_compression_savings to estimate the
reduction in the on-disk footprint.
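As a sketch of how that estimate might be obtained, the following call runs the stored procedure against a single table; the table name S_CONTACT and the database name siebeldb are purely illustrative, so substitute names from your own Siebel schema:

```sql
-- Estimate the space ROW compression would save on one table.
-- S_CONTACT and siebeldb are illustrative names.
USE siebeldb;
GO
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'S_CONTACT',
    @index_id         = NULL,   -- NULL = evaluate all indexes on the table
    @partition_number = NULL,   -- NULL = evaluate all partitions
    @data_compression = 'ROW';
GO
```

The result set compares size_with_current_compression_setting(KB) against size_with_requested_compression_setting(KB), which gives the expected reduction in the on-disk footprint before any table is actually rebuilt.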

The Database Administrator can create and retain maintenance scripts and Maintenance
Plans that assume all Tables are (or will be) compressed in the Siebel database. If the
Database Administrator prefers to selectively implement ROW compression then
consider that Tables, Indexes, or Partitions should be selected for compression when
the subsequent reduction in I/O will be greater than the subsequent increase in
processor consumption. For simplicity of management, if there is a desire to
implement ROW compression then it may be best to do so for all Tables.
Alternatively, the Database Administrator must carefully track which Table has which
type of compression (if any) enabled for it.

Siebel does not support the option DATA_COMPRESSION in its
data-definition language (DDL) statements, so the Database Administrator must
implement compression for the Siebel database using the ALTER TABLE or ALTER INDEX
statement with a REBUILD clause. The option DATA_COMPRESSION may be set to NONE,
ROW, or PAGE. Please see the example in Appendix 1 of the document.

Most Tables may be changed to NONE, ROW, or PAGE compression and remain in an
online mode. However, partitions of a Partitioned Table can only change their
compression setting in an offline mode.
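A minimal sketch of the REBUILD approach described above is shown below; the table name S_CONTACT and index name S_CONTACT_U1 are illustrative stand-ins for real Siebel objects, and ONLINE = ON assumes an edition that supports online index rebuilds:

```sql
-- Enable ROW compression on a table (its heap or clustered index).
-- Object names are illustrative.
ALTER TABLE dbo.S_CONTACT REBUILD
    WITH (DATA_COMPRESSION = ROW);

-- Compress a specific nonclustered index while keeping it online.
ALTER INDEX S_CONTACT_U1 ON dbo.S_CONTACT REBUILD
    WITH (DATA_COMPRESSION = ROW, ONLINE = ON);
```

Run such rebuilds during off-peak hours or maintenance windows, as recommended earlier in this section.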

Please be aware that if the definition of an Index changes (e.g. a custom Index or a
Siebel product upgrade) in the Siebel Repository and that Index was selectively created
using compression, then Siebel’s Database Utility (formerly called DDLSYNC) will drop
and recreate the Index without compression.

Enhanced Lock Escalation


SQL Server 2008 improves on Lock Escalation by allowing the Database Administrator
to easily disable Lock Escalation at the Table level. This may mitigate the risk of
blocking during significant Enterprise Integration Manager (EIM) activity (please see the
section on Common Questions in the document).
A lock-escalation may occur in SQL Server when:

 the logical unit of work (LUW) contains more than 5,000 records

 the LUW processes (INSERT, UPDATE, or DELETE) more than 20% of the records
in the Table.

Lock Escalation on Partitioned Tables is isolated to the individual Partition, and will not
escalate to the entire Table. This is an improvement with SQL Server 2008.
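The table-level control mentioned above can be sketched as follows; the table name is a hypothetical EIM interface table used only for illustration:

```sql
-- Disable lock escalation for one table ahead of heavy EIM activity.
-- EIM_CONTACT is an illustrative table name.
ALTER TABLE dbo.EIM_CONTACT SET (LOCK_ESCALATION = DISABLE);

-- After the EIM run completes, restore the default behavior.
ALTER TABLE dbo.EIM_CONTACT SET (LOCK_ESCALATION = TABLE);
```

Disabling escalation trades fewer blocking incidents for higher lock-memory consumption, so it is best applied selectively and reverted when the batch workload finishes.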

Predictable Query Performance


SQL Server creates an Execution Plan for each distinct SQL statement. Here it is
important to clarify what constitutes a distinct SQL statement, since the use of different
passed parameters with the same SQL statement would not result in two concurrently-
cached Execution Plans.

The Execution Plan includes the sequence each Table will be accessed, the amount of
parallel processing, the Index used to access each Table, any interim result sets, the
type of Joins used between Tables and interim result sets, and so on. SQL Server
evaluates dynamic factors such as statistics for each Table and Index, and any
parameters in the SQL statement.

The cached Execution Plan then resides in memory until an event occurs that may flush
it from the cache. For example:
 Time to live (TTL), and level of reuse. SQL Server will drop an Execution Plan
that has been cached for a period of time. The exact amount of time is
influenced by the level of reuse of that cached Execution Plan, memory pressure
on SQL Server, and so on.
 Revised statistics generated for a Table. Manually-generated statistics or
automatically-generated statistics for a Table will cause SQL Server to drop all
cached Execution Plans involving that Table.
 Data Definition Language (DDL) statements exist in the logical unit of work.
Note that this does not occur in normal Siebel operations, but it is called out in
the document for the sake of completeness. One example is the creation of
temporary Tables (local or global) in a Stored Procedure.
 Stop SQL Server. All cached Execution Plans are dropped.

A suboptimal Execution Plan in cache may be best summarized as a poorly-chosen
Execution Plan by the Optimizer, or an Execution Plan that was optimized for
non-typical parameters passed from Siebel. The Database Administrator can mitigate
the risk of a suboptimal Execution Plan by maintaining representative statistics for
all Tables in the Siebel database. This task is discussed later in the document.
A more difficult situation occurs when non-typical parameters are passed from Siebel or
the passed parameters represent a skewed distribution of data in a Table, and SQL
Server then creates or recreates an Execution Plan based on this combination of
dynamic factors. For example:
 A Siebel implementation has been configured to have closed Service Requests
remain assigned to the same user. A Siebel user executes a query in the All
SR’s view for Service requests owned by this User, and does not filter out the
closed Service Requests. SQL Server might create or recreate an Execution Plan
that does not begin with the Index on the Owner column, since the distribution
of the data is poor.
 A Siebel implementation is using the Multi-Organization feature for a worldwide
business, most of the data in the Siebel database is for one or two
Organizations, users are assigned to their Organization, and users typically use
the All View to query for data within their Organization. SQL Server might
create or recreate an Execution Plan that does not begin with the generally-
selective Multi-Org Table or BU_ID Column.

Plan Freezing
SQL Server 2005 introduced Plan Guides and the USE PLAN hint. SQL Server 2008
introduces Plan Freezing for Plan Guides. Plan Freezing builds on the Plan Guides
framework by introducing an easier method to create Plan Guides. With Plan Freezing,
it is now possible to create a Plan Guide based on data already available in the SQL
Server plan cache (using its corresponding plan handle).

The Stored Procedure sys.sp_create_plan_guide_from_handle allows the Database
Administrator to create a Plan Guide based on a Plan in the Plan Cache. Plan Guides
offer full DML (SELECT, INSERT, UPDATE, and DELETE) support with SQL Server 2008.

-- Create a plan guide for the query by specifying the query plan
-- in the plan cache.
DECLARE @plan_handle varbinary(64);
DECLARE @offset int;
SELECT @plan_handle = qs.plan_handle, @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle,
    qs.statement_start_offset, qs.statement_end_offset) AS qp
WHERE st.text LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';

EXECUTE sp_create_plan_guide_from_handle
    @name = N'Guide1',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;
GO

-- Verify that the plan guide is created.
SELECT * FROM sys.plan_guides
WHERE scope_batch LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';
GO
The utility DBCC FREEPROCCACHE has also been enhanced in SQL Server 2008. DBCC
FREEPROCCACHE now allows the Database Administrator to selectively remove a single
plan from cache.
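A hedged sketch of evicting a single plan is shown below; the search pattern S_SRV_REQ is an illustrative Siebel table name used only to locate a statement of interest:

```sql
-- Find the plan handle for one cached statement, then evict only that plan.
-- The LIKE pattern is illustrative; narrow it to the statement you care about.
DECLARE @plan_handle varbinary(64);
SELECT TOP (1) @plan_handle = qs.plan_handle
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%S_SRV_REQ%';

DBCC FREEPROCCACHE (@plan_handle);
```

This is far less disruptive than running DBCC FREEPROCCACHE with no arguments, which flushes the entire plan cache and forces every statement to be recompiled.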

For further information please see:


 SQL Server 2008 Books Online - Execution Plan Cache and Reuse
 SQL Server 2008 Books Online - Understanding Plan Guides
 SQL Server 2008 Books Online - Using SQL Profiler to create and test Plan
Guides
 SQL Server 2008 Books Online - sp_create_plan_guide
 SQL Server 2008 Books Online - sp_create_plan_guide_from_handle

Resource Governor
Control resource utilization to prioritize key workloads with Resource Governor. Ensure
that mission-critical database workloads are not adversely affected by other database
activity. Resource Governor allows the Siebel Administrator to effectively manage
system resources between batch workloads (e.g. EAI or EIM) and online/user workload
(e.g. Call Center), and adjust workload resource allowance for different hours of the day
according to business requirements.

Resource Governor allows the Database Administrator to control and set limits on
resource consumption for an incoming request. The limits are specified for processor or
memory consumption. Resource Governor may be beneficial for any of the following
scenarios:
 Unpredictable workload execution.
 Prioritization of work.

A typical scenario might be to prioritize workload between online users and
asynchronous tasks such as EAI or EIM.

A generic approach to implementing Resource Governor would include:


1. Create one or more Workload Groups. Here you may also specify consumption
limits for each Workload Group. For example, set maximum CPU time to 30
seconds.
2. Create a classification function to assign logins or applications to a Workload
Group.
3. Register the classification function with Resource Governor.
4. To restrict a Workload Group to a percentage of all server resources, create a
Resource Pool, set the consumption limits, and assign the Workload Group to the
Resource Pool. For example, set maximum concurrent consumption of all
processor/CPU resources to 50% of the entire server.
5. Apply the new configuration by reconfiguring Resource Governor.
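The steps above can be sketched in T-SQL as follows; the pool, group, function, and login names (BatchPool, BatchGroup, SIEBEL_BATCH) are all hypothetical and would be replaced with names appropriate to your Siebel enterprise:

```sql
-- 1. Resource pool capping batch work (EIM/EAI) at 50% of server CPU.
CREATE RESOURCE POOL BatchPool WITH (MAX_CPU_PERCENT = 50);

-- 2. Workload group for batch sessions, with a 30-second CPU-time limit,
--    assigned to the pool.
CREATE WORKLOAD GROUP BatchGroup
    WITH (REQUEST_MAX_CPU_TIME_SEC = 30)
    USING BatchPool;
GO

-- 3. Classifier function (created in master) routing an illustrative
--    batch login to the batch group; everything else stays in 'default'.
CREATE FUNCTION dbo.fn_siebel_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'SIEBEL_BATCH'
        RETURN N'BatchGroup';
    RETURN N'default';
END;
GO

-- 4. Register the classifier and apply the configuration.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_siebel_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The final ALTER RESOURCE GOVERNOR RECONFIGURE statement is what activates the classifier for new sessions; existing sessions keep their original group.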

For further information please see:


 SQL Server 2008 Books Online - Managing workloads with Resource Governor

Enhanced High Availability


Enterprises running Siebel business applications need 24x7 availability. Siebel
supports SQL Server 2008’s new capabilities to enable highly available environments.
SQL Server customers are currently running applications with multi-terabyte databases
and more than 99.998% availability.

For more information please see:

 SQL CAT - 6 failover clustering benefits with SQL 2008

Enhanced Database Mirroring


SQL Server 2008 builds on SQL Server 2005 by providing a more reliable platform that
has enhanced database mirroring, including automatic page repair, improved
performance, and enhanced supportability.

For more information please see:

 SQL CAT - Database Mirroring Log Compression with SQL 2008

Geographically-dispersed Cluster Services


Failover clustering enhancements in Microsoft SQL Server 2008 and Microsoft Windows
Server® 2008 provide server-level redundancy and remove the single point of failure in
a typical failover cluster by using a certified Microsoft Geographically Dispersed Cluster
Services configuration with SAN replication and a VLAN.

Mirrored Backup Set


Perform a concurrent backup of a database to multiple backup devices to increase
protection in the event of backup media failure. Create checksums on backup media to
verify subsequent restore operations.
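A minimal sketch of a mirrored backup set follows; the database name siebeldb and the disk paths are illustrative assumptions:

```sql
-- Write the backup to two devices simultaneously; either copy can later
-- be used for a restore. MIRROR TO requires WITH FORMAT, and CHECKSUM
-- validates pages as they are written. Paths are illustrative.
BACKUP DATABASE siebeldb
    TO     DISK = N'D:\Backup\siebeldb_a.bak'
    MIRROR TO DISK = N'E:\Backup\siebeldb_b.bak'
    WITH FORMAT, CHECKSUM;
```

Placing the two devices on separate storage paths is what delivers the media-failure protection described above.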
Backup Compression
Backup compression is independent of any Siebel-provided software or utilities, and
therefore is easy to implement by the Database Administrator. The only impact is a
slight increase in processor consumption while SQL Server creates the compressed
backup.

Tests of backup compression showed as much as a 75% savings on disk space (for the
backups). These results will vary between Siebel implementations, but backup
compression may result in substantial savings. The backup operation may also
complete as much as 43% faster, which is important for Siebel implementations with
small outage windows or 24x7 operations.

Implementation is easy and relatively quick. Backup compression defaults to the
server setting but can be enabled by using the COMPRESSION parameter
(NO_COMPRESSION for no compression) or by selecting the option in SQL Server
Management Studio. Note that you cannot blend compressed and uncompressed
backups on the same backup media. For example:
1. Open SQL Server Management Studio.
2. Connect to SQL Server.
3. Right-click on the siebeldb database and select Tasks -> Back Up.
4. Click on the Options page, and set backup compression to Compress Backup.
5. Set other desired options.
6. Click on OK to create the compressed backup.
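The same result can be achieved in T-SQL; the backup path below is an illustrative assumption:

```sql
-- Compressed full backup of the Siebel database. The path is illustrative.
BACKUP DATABASE siebeldb
    TO DISK = N'D:\Backup\siebeldb_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Alternatively, change the server-wide default so that all backups
-- compress unless a backup explicitly specifies NO_COMPRESSION.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```

Scripted backups such as this are what a Database Administrator would typically place in a SQL Server Agent job for unattended 24x7 operations.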

For further information please see:


 SQL Server 2008 Books Online
 SQL CAT - Tuning the Performance of Backup Compression in SQL 2008
 UNISYS - SQL Server 2008 Data and Backup Compression

Powerful Monitoring and Tools


SQL Server 2008 provides Siebel administrators with enhanced capabilities to ensure
24x7 operations and the best performance, with automated and enhanced monitoring and
tuning tools.

Performance Data Collection


Performance Data Collection is a new feature in SQL Server 2008, and is likely to be of
immediate and immense value to both the Database Administrator and Siebel
Administrator. Performance Data Collection allows the ongoing collection, storage, and
reporting of performance data. You can also use system Stored Procedures and the
Performance Studio API to build additional performance-management utilities.
Performance Data Collection is easy to implement, offers low overhead against SQL
Server, and includes several analytical reports.

Data Collectors for each SQL Server instance submit their data to the Management Data
Warehouse (MDW). The Database Administrator may choose to implement a MDW on
each SQL Server instance, or a central MDW.

Implementation of Performance Data Collection does not require significant effort by
the Database Administrator. To enable a new Collection and the Management Data
Warehouse:
1. Open SQL Server Management Studio
2. Expand the Management folder, right-click on Data Collection, and select
Configure Management Data Warehouse.
3. Select Create or upgrade a Management Data Warehouse to create a new
database for the Management Data Warehouse.
4. Select the desired SQL Server instance, and click on the New button to create
the database. Specify the desired database name. MDW may be an
appropriate name.
5. Map users and logins to the roles for the Management Data Warehouse.
6. Allow the wizard to complete the needed tasks.
7. Right-click on Data Collection again and select Configure Management Data
Warehouse.
8. Select Set up data collection.
9. Allow the wizard to complete the needed tasks.
10. Right-click on Data Collection and select Refresh. A new folder of System Data
Collection Sets is now available. Moreover, reports are now available for the
Collection Sets.

The System Data Collection Sets include predefined properties such as collection mode
and frequency, upload frequency, and retention period. You may change these values
using SQL Server Management Studio (right-click on the Collection Set and select
Properties). The three System Data Collection Sets include:
 Disk Usage.
 Query Statistics. This is of immense value to Siebel Administrators and
Database Administrators since it is now easier to identify resource-intensive SQL
statements. Right-click on Data Collection and select Reports -> Query
Statistics History. SQL statements can be ranked (sorted) by CPU, Duration,
Total I/O, Physical Reads, and Logical Writes.
 Server Activity. The report Server Activity History presents a dashboard of
metrics for the Database Administrator, including % CPU, Memory Usage, Disk
I/O Usage, Network Usage, SQL Server Waits, and SQL Server Activity.
Once data collection is running, the Database Administrator can easily obtain
symptoms and trends on the health of the Siebel database.

Policy-based Management
Policy-based Management is a framework for managing one or more instances of SQL
Server 2008. Database Administrators can use this framework to ensure the overall
system configuration is in compliance with best practices, monitor and prevent changes
to the system that violate best practices, and reduce total cost of ownership by
simplifying administration tasks.

For example, the Database Administrator can implement a Policy that periodically
checks many items in the configuration, including the parameter Max Degree of
Parallelism.

Enhanced Security and Compliance


SQL Server 2008 security enhancements provide Siebel Administrators with strong
authentication and access control, powerful encryption and key management
capabilities, and enhanced auditing.

Transparent Data Encryption


Enable encryption of an entire database (all data files and log files) without the need for
application changes. Benefits of this include the ability to search encrypted data using
both range and fuzzy searches, prevent access to secure data from unauthorized users,
and data encryption without any changes to existing applications.

For example, the Database Administrator can encrypt the entire Siebel database with
no changes needed on the Siebel Server(s), Siebel Tools, Mobile Web Clients, and so
on. Note that encryption may be enabled for the entire database, but not for selective
tables or columns.

The database pages are decrypted once read into SQL Server memory, and encrypted
any time a page is written to disk. Moreover, backups are fully encrypted once
encryption is enabled on the Siebel database.

There is a slight cost to initially enable encryption on an existing Siebel database. The
enabling of encryption may be done while Siebel is online, but this decision is at the
discretion of the Database Administrator. There is also a slight cost for the ongoing
encryption and decryption events during normal operations of Siebel, but this cost
should be negligible.

The Database Administrator should investigate additional considerations for Log
Shipping, Database Mirroring, Replication, and so on before enabling encryption of the
Siebel database.
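The end-to-end enabling sequence can be sketched as follows; the certificate name, password placeholder, and database name are illustrative assumptions:

```sql
-- Enable Transparent Data Encryption on the Siebel database.
-- Names and the password placeholder are illustrative.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE SiebelTdeCert WITH SUBJECT = 'Siebel TDE certificate';
GO

USE siebeldb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE SiebelTdeCert;

ALTER DATABASE siebeldb SET ENCRYPTION ON;
GO

-- Back up the certificate and its private key immediately; without them,
-- neither the database nor its backups can be restored on another server.
```

The final comment is the critical operational point: the certificate protecting the database encryption key must be backed up and stored securely, or encrypted backups become unrestorable.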
Extensible Key Management
SQL Server 2008 provides a comprehensive solution for encryption and key
management. SQL Server 2008 supports third-party key management and HSM
products.

Auditing
Create and manage auditing via DDL while simplifying compliance by providing more
comprehensive data auditing. This enables organizations to answer common questions
such as “Who accessed our customer data?”.

The Siebel Administrator may choose to augment Siebel’s audit capabilities with SQL
Server’s audit features. For example, the Audit Trail feature in Siebel 7.8 does not
audit read (SELECT) events.
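A sketch of auditing read access with SQL Server Audit follows; the audit names, file path, and audited table (S_CONTACT) are illustrative, and database audit specifications assume an edition that supports them:

```sql
-- Server-level audit object writing events to a file target.
-- Names and the path are illustrative.
USE master;
CREATE SERVER AUDIT SiebelAudit
    TO FILE (FILEPATH = N'D:\Audit\');
ALTER SERVER AUDIT SiebelAudit WITH (STATE = ON);
GO

-- Database-level specification capturing SELECT access to one table,
-- which covers the read events Siebel's own Audit Trail does not record.
USE siebeldb;
CREATE DATABASE AUDIT SPECIFICATION SiebelReadAudit
    FOR SERVER AUDIT SiebelAudit
    ADD (SELECT ON dbo.S_CONTACT BY public)
    WITH (STATE = ON);
```

Captured events can then be read back with the fn_get_audit_file function or viewed in SQL Server Management Studio's Log File Viewer.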
Configuration Management
Microsoft wants you to maximize your success with Siebel and SQL Server 2008.
Please consider these changes for your Siebel implementation(s) to mitigate the risk of
a performance problem due to a default or poorly-configured SQL Server environment.

Configuration Parameter – Max Degree of Parallelism
The SQL Server parameter Max Degree of Parallelism (MAXDOP) controls the
amount of parallel processing SQL Server can introduce for each SQL statement when
creating the Execution Plan for that SQL statement. The parameter is set to a default
value of zero (0) when installing SQL Server.

For example:

 A value of zero (0) means SQL Server may create an Execution Plan (for a SQL
statement) that concurrently utilizes up to the total number of processors
available to SQL Server.

 A value of one (1) means SQL Server may create an Execution Plan (for a SQL
statement) that concurrently utilizes one and only one processor.

 A value of two (2) means SQL Server may create an Execution Plan (for a SQL
statement) that concurrently utilizes up to two (2) processors available to SQL
Server.

Change this parameter to a value of one (1) in all of your Siebel environments.
This guidance is also documented in Siebel Bookshelf. Oracle conducts all Siebel-
related functional and performance tests with this parameter set to one (1). Use of a
different value may cause unpredictable results.

Use the command sp_configure to review the value for the parameter.

Use SQL Server Management Studio and a sequence of commands (below) to change
the parameter to one (1):

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Open a new Query window.

4. Paste this sequence of commands into the Query window.

use siebeldb
go
sp_configure 'max degree of parallelism', 1
go
reconfigure with override
go
5. Click on Execute or press F5.
Setting MAXDOP to one (1) does not prevent the Database Administrator from using a
different value for many operations. For example:
 The default value for MAXDOP can be overridden at the query level or batch
level using the MAXDOP hint. The hint may be specified in the query or
indirectly via a Plan Guide for the query.
 MAXDOP can be specified in commands such as ALTER INDEX REBUILD, ALTER
TABLE, CREATE INDEX, DBCC CHECKTABLE, and DBCC CHECKDB.
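The overrides above can be sketched as follows. The table and index names are illustrative, and the MAXDOP values are examples only:

```sql
-- Query-level override of the instance-wide MAXDOP = 1 setting
-- (table name illustrative):
SELECT COUNT(*)
FROM dbo.S_CONTACT
OPTION (MAXDOP 4);

-- Maintenance operations may safely use more parallelism; for example,
-- an off-peak index rebuild (table name illustrative):
ALTER INDEX ALL ON dbo.S_CONTACT
    REBUILD WITH (MAXDOP = 4);
```

This allows normal Siebel workload to run serially, as Oracle recommends, while maintenance jobs still exploit multiple processors.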

For further information from Microsoft's Siebel Resource Center, please see:
 Tech Note 32 - SQL Server Parallelism and Siebel

For further information from Oracle/Siebel, please see the relevant version of Siebel
Bookshelf, and specifically the section Configuring the RDBMS in the Installation
Guide.

Configure Memory
SQL Server 2008 offers excellent management of the allocated memory resources and
requires minimal configuration by the Database Administrator.

The two primary parameters to review or change are Minimum Server Memory and
Maximum server memory. The minimum amount of memory should be set to a
value such that SQL Server is still guaranteed some amount of memory in case of
memory pressure across the operating system. The maximum amount of memory
should be capped such that there is still memory available to the operating system and
other services running on the same server.

The maximum amount of usable memory depends on the available hardware,
and on both the version and edition of SQL Server 2008. Please see the link below
on Memory Architecture for a matrix summarizing the options by version of SQL
Server 2008.

For SQL Server 2008, check to ensure the values are defined as expected:

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Right-click on the server name and select Properties.

4. In the upper left, click on the Memory page.

5. Adjust the Minimum server memory value (in MB). Set to a value that
guarantees some amount of memory, and is less than Maximum server
memory.

6. Adjust the Maximum server memory value (in MB). Ensure that this
maximum still leaves memory for the operating system, and possibly other
services or applications running on the same operating system. For 32-bit
systems, see TechNote 21 (link is below). For 64-bit systems, a general rule of
thumb is to leave 2 GB for every 16 GB on the machine.

7. Click OK to apply the changes.
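The same settings can alternatively be applied with sp_configure. The values below are examples only, assuming a 32 GB machine that leaves 4 GB to the operating system per the rule of thumb above:

```sql
-- Illustrative memory configuration via sp_configure (values in MB).
-- min/max server memory are advanced options.
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'min server memory (MB)', 4096;
GO
sp_configure 'max server memory (MB)', 28672;
GO
RECONFIGURE WITH OVERRIDE;
GO
```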

For further information from Microsoft's Siebel Resource Center and additional
resources, please see:
 TechNote 21 - Memory Configuration for SQL Server 2000
 SQL Server 2008 Books Online - Memory Architecture

Configure Data and Log Files


For each Data and Log file for the Siebel database, check to ensure the Initial Size
(allocation) and Autogrowth values are defined as expected:

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Right-click on the siebeldb database (your database name may vary) and select
Properties.

4. In the upper left, click on the Files page.

5. Adjust the Initial Size values of the Data and Log Files as needed for each
Siebel environment (Development, Test, etc.). It is recommended that the
Initial Size of the Data File be at least 2048 MB and the Initial Size of the Log
File be at least 100 MB for all Siebel environments.

6. Adjust the Autogrowth values of the Data and Log Files. It is recommended
that the files be set to grow in MB (and not as a percentage of the Initial Size).
It is further recommended that the Data file be set to grow in at least 10 MB
increments. The optimal value for Autogrowth will be influenced by the speed
and isolation of the I/O subsystem hosting the Data file(s) – attempting
to auto-grow the Data file by a large amount (e.g. 100 MB) on a slow or busy
I/O subsystem may temporarily cause slowness in SQL Server.

7. In the upper left, click on the Options page.

8. Ensure that Auto Shrink is False. This prevents SQL Server from attempting to
automatically shrink the Data and Log files, especially during significant activity
in the database.

9. Click OK to apply the changes.
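The same file settings can be applied with ALTER DATABASE. The logical file names and sizes below are illustrative assumptions; they will vary per installation:

```sql
-- Illustrative file sizing for the Siebel database
-- (logical file names and sizes are examples only).
ALTER DATABASE siebeldb
    MODIFY FILE (NAME = siebeldb_data, SIZE = 2048MB, FILEGROWTH = 100MB);
ALTER DATABASE siebeldb
    MODIFY FILE (NAME = siebeldb_log, SIZE = 100MB, FILEGROWTH = 50MB);

-- Prevent SQL Server from automatically shrinking the files (step 8 above):
ALTER DATABASE siebeldb SET AUTO_SHRINK OFF;
```

Note that MODIFY FILE can only grow a file to a size larger than its current size.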

The values may be changed for an existing Siebel database. However, it is
recommended that these changes be made during no or minimal activity (off-peak) in
the Siebel database.

The Autogrowth feature should be thought of as an emergency capability to
ensure the Siebel database does not run out of space. The Database
Administrator should regularly monitor space utilization, and as needed change the
Initial Size value for each Data file during off-peak hours or maintenance periods.
For further information from Microsoft's Siebel Resource Center, please see:
 TechNote 9 - SQL Server Disk Layout for Siebel
 TechNote 28 - Common Q&A for deploying SQL Server in a SAN environment

Configure TempDB
Ensure the data file(s) for TempDB reside on a fast I/O subsystem (i.e., treat TempDB
just like your Data files). Ideally, ensure that the data file(s) for TempDB are on an
isolated I/O subsystem (not used by any other SQL Server data files, not used by the
operating system for its files, and so on).

Verify or adjust the configuration of TempDB:

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Expand the Databases folder and then the System databases folder.
Right-click on TempDB and select Properties.

4. Click on the Files page.

5. Consider adding more Data files. The rule of thumb is to have one Data
file for every two processors/CPUs. For example, utilize four Data files if
there are eight processors on the hardware.

6. Increase the Initial Size of each Data file. The recommended minimum
is 500 MB. Increase to the size of the largest Table if that Table is more
than 500 MB. Note that the REINDEX script in Appendix 1 specifies that
sort activity (when rebuilding Indexes) should occur in TempDB.

7. Adjust the Autogrowth value of each Data file. It is recommended that
the file be set to grow in MB (and not as a percentage of the Initial
Size). It is further recommended that the Data file be set to grow in at
least 10 MB increments. The optimal value for Autogrowth will be
influenced by the speed and isolation of the I/O subsystem hosting the
TempDB Data file(s) – attempting to auto-grow the Data file by a large
amount (e.g. 100 MB) on a slow or busy I/O subsystem may temporarily
cause slowness in SQL Server.

8. Click OK to apply the changes.
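Steps 5 through 7 can be sketched in T-SQL for an eight-processor server (four equally-sized data files). The file path and logical names other than the default tempdev are illustrative assumptions:

```sql
-- Illustrative TempDB layout for an eight-processor server:
-- four equally-sized data files on a fast, isolated I/O subsystem.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 500MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 500MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
              SIZE = 500MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
              SIZE = 500MB, FILEGROWTH = 50MB);
```

Keeping the files the same size encourages SQL Server to spread allocation activity evenly across them.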

The Autogrowth feature should be thought of as an emergency capability to
ensure the TempDB database does not run out of space. The Database
Administrator should regularly monitor space utilization, and as needed change the
Initial Size value for each Data file during off-peak hours or maintenance periods.

For further information please see:
 SQL Server 2008 Books Online - Optimizing TempDB Performance
 TechNote 9 - SQL Server Disk Layout for Siebel
Configure Network Packet Size
The following information on Network Packet Size is specific to SQL Server 2005 and
SQL Server 2008, and should NOT be implemented for SQL Server 2000.

The SQL Server parameter Network Packet Size controls the network packet size on
SQL Server. The array-fetch data buffer of the Siebel data connector is set to 8192
bytes on every Siebel Server and is used by the Components on the Siebel Server, so
the SQL Server parameter may be set to the same value to minimize the number of
network round trips (and therefore improve performance).

There are two caveats to this guidance:

 Developers using Siebel Tools may experience better performance in the Siebel
Development environment if the Network Packet Size is left at the default
value of 4096 on the SQL Server instance hosting the Development Siebel
database. Siebel Tools does not use the Siebel data connector to communicate
with the Siebel database, and instead connects directly using an ODBC driver.

 For non-Development environments, the Repository import process (when
migrating a Repository) may be slower if Network Packet Size is increased to
a value of 8192. However, the performance benefits for normal operations
(users, integration, etc.) in the Siebel environment will outweigh the infrequent
imports of Repositories.

Use the command sp_configure to review the value for the parameter.

Use SQL Server Management Studio and a sequence of commands (below) to change
the parameter to 8192:

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Open a new Query window.

4. Paste this sequence of commands into the Query window.

use siebeldb
go
sp_configure 'network packet size (B)', 8192
go
reconfigure with override
go
5. Click on Execute or press F5.
Ongoing Maintenance and Monitoring
Microsoft wants you to maximize your success with Siebel and SQL Server 2008.
Please consider these maintenance activities for your Siebel implementation(s) to
mitigate the risk of a performance problem.

Defragmenting Tables and Updating Statistics


SQL Server utilizes a Cost-based Optimizer (CBO) for creating an optimal execution
plan for each distinct SQL statement. The Optimizer is dependent on statistics that are
representative of both the volume and distribution of data in each Table. For example:

 Total number of rows in the Table

 Number of distinct values in a Column, and the number of rows matching each
distinct value in that Column

SQL Server has the ability to automatically create and update statistics. SQL Server
will choose when to automatically update the statistics for a Table – the decision is
based on the number or percentage of rows inserted, updated, or deleted in a Table.

The parameters Auto Create Statistics and Auto Update Statistics are enabled by
default. Check to ensure the parameters are defined as expected:

1. Open SQL Server Management Studio.

2. Connect to SQL Server.

3. Right-click on the siebeldb database (your database name may vary) and select
Properties.

4. In the upper left, click on the Options page.

5. Ensure that Auto Create Statistics is True, and Auto Update Statistics is
True.

6. Ensure that Auto Update Statistics Asynchronously is False. This was a new
feature in SQL Server 2005 and is available in SQL Server 2008, but additional
testing is still pending from Oracle relative to its use with Siebel. Expect future
guidance on if and when to enable this feature.

7. Click OK to apply the changes.

The ability to auto-update statistics is a strength of SQL Server, but this feature is
frequently misunderstood by Database Administrators. The creation (or recreation) of
statistics involves a full pass of a Table. The updating of statistics involves a sampling
of a subset of the rows in a Table.
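The difference between sampled and full-scan statistics can be sketched as follows; the table name is illustrative:

```sql
-- A full-scan statistics update reads every row rather than a sample
-- (table name illustrative):
UPDATE STATISTICS dbo.S_CONTACT WITH FULLSCAN;

-- Rebuilding the indexes on a table also recreates the index statistics
-- with the equivalent of a full scan:
ALTER INDEX ALL ON dbo.S_CONTACT REBUILD;
```

For quickly-growing or large tables, a scheduled full-scan update of this kind guards against the drift described below.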

Do not assume that the auto-update statistics feature eliminates the need for
periodic database maintenance. As a Table grows, the default sampling rate may
be insufficient for auto-update statistics. A long-term reliance on auto-update
(sampling) statistics for quickly-growing, active, or large Tables may ultimately result in
non-representative statistics, which will provide inaccurate information to the Optimizer
and thus result in suboptimal Execution Plans. Suboptimal Execution Plans will rapidly
degrade the performance of SQL Server and Siebel.

Even though a Siebel schema contains thousands of Tables, the time to defragment and
recreate statistics for most Tables is insignificant since the majority of Tables have
fewer than 10,000 rows. A Siebel Development environment with one Repository and low
data volumes would probably complete in five to 15 minutes. This does not imply that
the total time to defragment and recreate statistics for the entire Siebel database will
complete in less than 15 minutes. The best method to estimate the total elapsed time
is to run the process against a backup copy of the Production database. Factors
influencing the total elapsed time include:

 Number of Columns populated for any Table, and size of the data populated in
these Columns. For example, one Siebel implementation may capture minimal
data for each Contact whereas a separate Siebel implementation may capture
extensive data for each Contact.

 Columns populated where the Column participates in one or more Indexes. A
frequently-populated Column that is also indexed will result in more work for
SQL Server during the processes to defragment the Table and Indexes, and then
to recreate statistics.

 Total records for any Table. A Table with 10 million rows will naturally take
longer to process than the same Table (and Indexes) with one (1) million rows.

 Total Indexes for any Table. A Table with a large number of Indexes will take
longer to process than a similar Table with few Indexes.

 Hardware resources, disk configuration, and SQL Server configuration.

If Table-level compression is already implemented, please ensure that the
Database Administrator remembers to set the DATA_COMPRESSION option (to ROW or
PAGE, as appropriate) when defragmenting that Table and its Indexes using the
ALTER INDEX… REBUILD command. Failure to do this will cause SQL Server to
uncompress the Table. Note that this concern does not apply to the ALTER INDEX…
REORGANIZE command or the DBCC INDEXDEFRAG command.
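A compression-preserving rebuild can be sketched as follows; the table name and compression type are illustrative assumptions:

```sql
-- Rebuild a table's indexes while preserving page compression
-- (table name and compression type are examples only).
-- SORT_IN_TEMPDB keeps the rebuild sort activity in TempDB,
-- as the REINDEX script in Appendix 1 does.
ALTER INDEX ALL ON dbo.S_CONTACT
    REBUILD WITH (DATA_COMPRESSION = PAGE, SORT_IN_TEMPDB = ON);
```

Omitting the DATA_COMPRESSION option from a rebuild is what causes the table to revert to an uncompressed state.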

For simplicity of management, if there is a desire to implement compression
then it may be best to do so for all Tables. The Database Administrator can create
and retain maintenance scripts and Maintenance Plans that assume all Tables are (or
will be) compressed in the Siebel database. Please see the comments on Data
Compression earlier in the document.

Please see the REINDEX script in Appendix 1 of the document, and the discussion of
Maintenance Plans in Appendix 2 of the document.

Significant Events Necessitating Maintenance


Please consider the following suggestions.
After initial creation of the Siebel Database
After initial creation of a new Siebel environment (Table creation, loading the
default Repository or a customized Repository, and then running the Siebel
Database Utility to synchronize the Repository with the physical schema),
defragment and recreate statistics for all Tables. This also allows the Database
Administrator to set a default FILLFACTOR for every Table.

After a Repository migration


After a Repository migration or the deletion of an old Repository, at a minimum
defragment and recreate statistics for the Repository Tables. Time permitting,
defragment and recreate statistics for the most active Tables or for all Tables.

After mass data changes


When importing, updating, or deleting high volumes of data (using EAI, EIM, etc.),
defragment and recreate statistics for the affected Tables. Please see additional
discussion on Common EIM Problems later in this document.

After a Siebel product upgrade


After a Siebel product upgrade (e.g. Siebel 7.8 to Siebel 8.1), defragment and
recreate statistics for all Tables. The Siebel upgrade processes and scripts tend to
introduce a significant amount of changes and fragmentation in the Tables.

Ongoing in the Siebel Development environment


In the Siebel Development environment, defragment and recreate statistics for all
Tables at least once per week. This will maximize the performance of the Get,
Check-in, and Check-out events for Siebel Tools.

Performance Monitor
Performance Monitor (PerfMon) allows the Database Administrator to monitor multiple
performance counters for the operating system, SQL Server, and so on. PerfMon may
be run from the server or from most Windows-based computers.
1. Select Start -> Run, type perfmon, and click OK.
2. Multiple Counters for the local machine are selected by default. You may
monitor or remove these Counters.
3. Click on the Properties icon, change the sampling rate from one (1) second to
perhaps 15 or 30 seconds, and click OK.
4. Press CTRL+H to enable/toggle the highlight feature. This will highlight the
Counter observations in the graph as you highlight each monitored Counter.
Press CTRL+H again if you wish to toggle off the highlight feature.
5. Click on the Add Counters icon and add these recommended Counters.
 Processor - % Processor Time (_Total)
 SQLServer:Buffer Manager – Buffer Cache hit ratio
 SQLServer:Buffer Manager – Page life expectancy
 SQLServer:Databases – Active Transactions (siebeldb)
 SQLServer:Locks – Lock Waits/sec (_Total)
 SQLServer:SQL Statistics – Batch Requests/sec
 SQLServer:SQL Statistics – Re-Compilations/sec. A high recompilation rate
may be indicative of non-representative statistics on one or more Tables.
 SQLServer:Wait Statistics – Log write waits (Average wait time)
 System – Processor Queue Length
6. Select File -> Save As and save the monitoring template for future use. For
example, save to your Desktop or save to the default folder of Performance
Monitor.

Note that each saved template is specific to the server since the specified Counters
include the name of the server.
Performance Tuning

Identifying and Tuning Resource-Intensive SQL Statements
A generic approach to SQL tuning involves multiple steps and various tools. Many of
these steps and tools have been discussed in the document, or are discussed below.

Configuration Management and Ongoing Maintenance


1. Ensure that each SQL Server instance is configured as expected. The
Configuration Management section of this document discusses these activities:
o Configure Max Degree of Parallelism (MAXDOP) to one (1).
o Configure memory.
o Configure the size of the Data and Log files.
o Configure the size of TempDB.
o Configure Network Packet Size.
2. Ensure that representative statistics exist for every Table and Index, and that
the Tables and Indexes are regularly defragmented by the Database
Administrator. The Ongoing Maintenance and Monitoring section of this
document provides a good overview.

Capture/Identification
 Start with broad-brush tools such as Performance Data Collection or the DMVs.
These tools are minimally invasive, and Performance Data Collection captures
data 24x7.
 Look at resource consumption on the server using tools such as Performance
Monitor.
 Look for recurring, resource-intensive SQL statements.
 Prioritize the recurring, resource-intensive SQL statements by the estimated
impact on the SQL Server database. A severe SQL statement that is only
executed once per day may be a lower priority than a moderate SQL statement
that is executed thousands of times per day.
 If possible, identify the source of the SQL statement. The source may be from
the Siebel UI (user-induced), a customization (configuration or scripting),
integration (EAI), and so on.
 If possible, reproduce the SQL statement and capture it using SQL Server
Profiler. The RPC:Starting event will include the actual SQL statement and is
invaluable when evaluating Execution Plans. See the example in Appendix 3 of
the document.

Evaluation
 If possible, do all evaluation work against a Production-like database that
includes these considerations:
o Configuration parameters of SQL Server
o Identical schema (Tables, Columns, and Indexes)
o Same or similar volumes and distributions of data in the Tables and
Indexes
o Same or similar defragmentation levels and statistics on the Tables and
Indexes
 Copy the SQL stream from the RPC:Starting event into SQL Server
Management Studio. Enable the Actual Execution Plan since it may be different
from the Estimated Execution Plan. Execute the SQL stream.
 Review the Actual Execution Plan. Here it is useful to have the assistance of an
experienced Database Administrator since SQL tuning is a learned skill.
o Are the joins, constant values, and passed parameters allowing the
Optimizer to obtain a good, selective first hit?
o Could Index changes support a better first hit? Use caution with Index
changes since they could have an adverse effect on other SQL statements.
o Do Indexes support optimal joins after the first hit? This is generally not
a factor for Siebel-provided Tables and Indexes, but it may occur if
joining on an extension (X_) Column or a new (CX) Table.
 The Database Engine Tuning Advisor (DTA) may be used to recommend Index or
Partition changes to the database.

Implementation
 In addition to any configuration or script changes, remember that any Index
changes should be done using Siebel Tools in the Development environment.
This will ensure that schema changes are consistently propagated to all Siebel
environments.

Please see Appendix 4 in the document, which provides a sample of one approach to
identify and resolve a suboptimal Execution Plan.

For further information please see:
 SQL Server Performance Methodology with Oracle Applications - Presentation
 SQL Server Performance Methodology with Oracle Applications - Poster

SQL Server Profiler


SQL Server Profiler is typically installed with SQL Server Management Studio. If you
will be frequently tracing, it is recommended that you create and save a Trace
Template. You may follow these steps, or download a template from Microsoft's Siebel
Resource Center.

1. Open SQL Server Profiler.


2. Select File -> New Trace.

3. Connect to SQL Server.

4. In the Trace Properties dialog, click on the Events Selection tab at the top.

5. Check the Show all events box to see all events.

6. In the Cursors events, enable all of the events.

7. In the Performance events, enable the Auto Stats event if you want to be
aware of when an auto-update statistics event occurs in SQL Server.

8. In the TSQL events, enable all of the events. This will include the
SQL:StmtRecompile event, which shows when a cached Execution Plan
must be recompiled by SQL Server. Note that it is important to include the Column
EventSubClass in the trace if you want to see the reason for the recompile
event.

9. Enable or disable other events as desired.

10. Click on Run to start a trace. It is then important that you validate that the
trace is capturing the needed events and columns/data.

11. If necessary, stop the trace, select File -> Properties, click on the Events
Selection Tab, and then adjust what events and columns are captured by the
trace. Start the trace again by clicking on Run.

12. Once you are satisfied with the trace properties, stop the trace and select File ->
Save As -> Trace Template. Save the trace template in the default folder with
an appropriate name (e.g. Siebel). When you start SQL Server Profiler at a later
date, you can select the trace template from the list of available templates.

Filters can be applied later to this generic trace template.

Be aware of two events and their relevance to SQL statements executed by Siebel:

 RPC:Starting – This is the actual SQL statement executed by Siebel. It includes
the parameter values and the SQL statement. It is critical to use this entire
set of information together if attempting to evaluate the estimated
Execution Plan or the actual Execution Plan. Otherwise, you may not see
the same Execution Plan.

 RPC:Completed – This is an altered representation of the SQL statement
executed by Siebel, in that some parameter values have been changed to
represent what was used during the actual execution (i.e., it may produce invalid
results if attempting to evaluate the Execution Plan). This event shows the total
work for the SQL statement, such as CPU, Reads, Writes, and Duration.

Please see an example of a traced SQL statement in Appendix 3 of the document.

Use caution when running SQL Server Profiler in a production environment. If the
Database Administrator needs to run a trace for an extended period of time then
consider creating a server-side trace (using sp_trace_create). This will tend to have
less impact on SQL Server.
Dynamic Management Views (DMVs)
SQL Server 2005 introduced Dynamic Management Views (DMVs) for the Database
Administrator. DMVs continue in SQL Server 2008, and are further enhanced by the
performance-collection capabilities of the new Management Data Warehouse. DMVs
provide a simple and familiar interface (in the form of relational Tables) for gathering
critical system information from SQL Server.

DMVs are documented in SQL Server Books Online, and may be browsed in SQL Server
Management Studio by looking for all Views prefixed with the naming convention
sys.dm_.

Be aware that several DMVs only provide data since the last time the
procedure cache was cleared, or since the last restart of SQL Server (e.g. Index-
usage statistics in the DMV sys.dm_db_index_usage_stats). It is recommended that you
combine the data from the DMVs with the data in the Management Data Warehouse
when evaluating performance of the Siebel database.

Some examples of DMVs:


 Index Usage Statistics (sys.dm_db_index_usage_stats). Aggregated totals for
all Indexes in the current Database. Includes the number of times an Index has
been involved in a seek, scan, or a bookmark lookup.
 Missing Indexes (sys.dm_db_missing_index_details). When generating an
Execution Plan the Optimizer will store information about perceived/potential
Indexes in this DMV.
 Query Execution Plan (sys.dm_exec_query_plan). Returns the Execution Plan
(showplan) for a specified query in XML format. In SQL Server 2008 the
Database Administrator can click on the XML output to automatically bring up
the Execution Plan in a graphical format.
 Query Execution Statistics (sys.dm_exec_query_stats). Provides a direct view
into SQL Server’s Plan cache by allowing the Database Administrator to see what
queries have been cached, duration, number of times executed, average CPU
per execution, etc. Can be a useful resource for identifying queries that may
benefit from a Plan Guide.
 System Requests (sys.dm_exec_requests). Displays information regarding each
request occurring in SQL Server. This DMV and the Sessions DMV
(sys.dm_exec_sessions) provide a selectable version of sp_who2 and with
more Columns.
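As an illustration of the query-execution-statistics DMV above, the following query lists the cached statements with the highest total CPU time since the plan cache was last cleared:

```sql
-- Top 20 cached statements by total CPU time since the plan cache
-- was last cleared (subject to the caveat above).
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

The same pattern (substituting total_logical_reads or total_elapsed_time in the ORDER BY) identifies the most read-intensive or slowest statements.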
Database Engine Tuning Advisor
Database Engine Tuning Advisor (DTA) may be used to evaluate existing Indexes and
Partitions in a database, and get recommendations for changes to Indexes and
Partitions.

To use:
1. Open SQL Server Management Studio.
2. Navigate to Tools -> Database Engine Tuning Advisor.
3. Select options in both the General and Tuning Options tabs.
Common Questions

Case-insensitive Search
A case-insensitive search allows the user or application to not worry about
inconsistencies in the data, and be guaranteed that the user or application will be
presented with all matching rows regardless of lower-case or upper-case data in the
database and predicates/filters in the SQL statement.

There are multiple methods to implement case-insensitive search. The Siebel
Administrator and Database Administrator should first clarify several questions:

 What is the functional requirement? Global case-insensitivity across the
application? Case-insensitivity specific to one module or one area of
functionality? Temporary solution due to a data issue?

 What amount of inconvenience will occur with the solution? Integration
problems with a boundary system? Incompatibility with the current Siebel
product, a future Siebel patch, or a future Siebel upgrade?

 What amount of performance degradation might occur with the solution?

Here are some common solutions:

 Use a case-insensitive collation (code page) for the Siebel database in SQL Server.

o Benefits: Global across the entire Siebel application.

o Concerns: May see a slight performance degradation (2%?) across SQL
Server. Must still use a binary collation in the Development environment,
since Siebel Tools will fail if attempting to compile a Siebel Repository File
(SRF) from a non-binary collation. Cannot subsequently use Siebel Tools to
compile from any non-Development environments.

 Use a case-insensitive search in the user query or pre-defined query (PDQ).

o Benefits: Simple and supported. Isolated to limited functionality or a
module.

o Concerns: Must remember to use the tilde (~) character to invoke the case-
insensitive search. Resulting SQL statement involves multiple OR predicates
and LIKE pattern matching, and may result in poor performance.

 Convert critical, existing search Columns to upper case (e.g. Account Name, Contact
Last Name).

o Benefits: Supported.

o Concerns: Data only available in upper case (appropriate for user-facing
correspondence?). Initial configuration in Siebel Tools to always convert the
data to upper case. Possible integration challenge with boundary
applications if integration key involves that Column.

 Use a mirrored Column (always saved in upper case) and an Index.

o Benefits: Supported.

o Concerns: Additional disk storage and overhead. Scripting to keep the
mirrored Column in sync.

 Create a case-insensitive or accent-insensitive Index (CIAI).

o Benefits: Supported in Siebel 8.0. Create a computed Column (available in
SQL Server 2005 and SQL Server 2008) and an Index on the computed
Column, and Siebel can transparently utilize the computed Column during a
search.

o Concerns: New in Siebel 8.0, and not available in earlier versions of Siebel.
May be limited to particular Columns.
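The underlying SQL Server mechanism for a CIAI column can be sketched as follows. In practice Siebel Tools generates these objects; the table, column, and index names below are illustrative assumptions:

```sql
-- Illustrative CIAI mechanism: a computed upper-case column and an
-- index on it (names are examples only; Siebel Tools normally
-- generates the real objects).
ALTER TABLE dbo.S_CONTACT
    ADD LAST_NAME_CI AS UPPER(LAST_NAME);

CREATE INDEX S_CONTACT_CI1 ON dbo.S_CONTACT (LAST_NAME_CI);

-- A predicate such as WHERE UPPER(LAST_NAME) = 'SMITH'
-- can then be satisfied by the index rather than a table scan.
```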

Change Database Collation


A collation in SQL Server provides properties for your data such as sorting rules, case
sensitivity, and accent sensitivity. For example, a typical collation for a North American
Siebel implementation is Latin1_General_CI_AI. This collation supports Western
European characters (including US English) and is analogous to the 1252 code page, is
case insensitive, and is accent insensitive.

A collation is selected when installing a new server/instance of SQL Server. Databases
subsequently created on the server then use the same collation by default (unless
explicitly changed). Although technically feasible, it is not recommended to have
a different collation between the system databases and the Siebel database in
the Production environment.

A common situation is how to migrate a Siebel environment to a case-insensitive
collation. Here is a high-level set of tasks:
1. Identify the desired collation. Confirm with Oracle that this is a supported
collation, or attempt to understand the risks of using an unsupported collation.
2. Stop the Siebel Server(s), Gateway Server, and Web Server(s).
3. Perform a full backup of the Siebel database and transaction log.
4. If there are other user databases to be retained, perform a full backup of the
databases and transaction logs.
5. Copy the backup files to a secure location.
6. Uninstall SQL Server.
7. Install SQL Server and specify the desired collation.
8. Apply any needed patches to SQL Server.
9. Configure SQL Server (parameters, memory, etc.).
10. Restore the Siebel database.
11. Rename the Siebel database (e.g. siebeldb_old).
12. Create a new Siebel database (e.g. siebeldb). Ensure that this database is
created with the desired, new collation.
13. Configure the new Siebel database (Autogrowth, etc.).
14. Use a tool to import the objects from siebeldb_old to siebeldb. The Siebel
database is primarily composed of Tables and Indexes, but the Database
Administrator should be aware that there may be a few Views, Functions, and
Stored Procedures (depends on the version of Siebel). The purpose of this step
is to move the data from the old collation to the new collation. Note that this
step may involve a significant amount of manual effort and elapsed
time. The reader might review the functionality of SQL Server Integration
Services 2008 (SSIS 2008), SQL Server BCP, a third-party tool such as
Embarcadero DBArtisan, and so on.
15. Defragment and collect updated statistics for all Tables and Indexes in the new
Siebel database.
16. Configure security on the server and the new Siebel database.
17. Test extensively. Test the UI, components on the Siebel Server, any integration,
etc.
18. Later, drop the old Siebel database (siebeldb_old) once comfortable that the
migration process has been successful and the Siebel environment is functioning
as expected.
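Steps 11 and 12 might be sketched as follows; the database names are examples, and Latin1_General_CI_AI is shown only as one possible target collation:

```sql
-- Step 11: rename the restored Siebel database.
ALTER DATABASE siebeldb MODIFY NAME = siebeldb_old;

-- Step 12: create the new Siebel database with the desired collation.
CREATE DATABASE siebeldb
    COLLATE Latin1_General_CI_AI;
```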

For further information please see:


 SQL Server 2008 Books Online - Collation and International Terminology

Dropping Indexes
Database Administrators are sometimes tempted to drop some of the numerous
Indexes provided by Siebel, since the complete physical schema (Tables and Indexes)
are created regardless of the functionality being used in Siebel. For example, in Siebel
7.8 the Activity (S_EVT_ACT) Table has over 15 Indexes defined in the default
Repository.

Use caution before dropping Siebel-provided Indexes. The Index might appear to not
be used based on data provided by a Dynamic Management View (DMV), or the Index
might appear to have no value in the schema (e.g. all values for the Column(s) of the
Index are NULL). However, the Index might be used to optimize infrequent events
such as a Merge operation from a Siebel user.

Use of tools such as the Management Data Warehouse (combined with DMVs) may
provide this level of detail. The following query lists the Indexes that have
had no recorded usage since SQL Server last started:

use siebeldb
go
select object_name(i.object_id) as 'Object'
     , i.name as 'Index'
from sys.indexes as i with (nolock)
inner join sys.objects as o with (nolock)
        on o.object_id = i.object_id
where i.index_id NOT IN
      ( select s.index_id
        from sys.dm_db_index_usage_stats as s with (nolock)
        where s.object_id = i.object_id
          and s.index_id = i.index_id
          and s.database_id = db_id('siebeldb')
      )
  and o.type = 'U'
order by object_name(i.object_id) asc
go

It is recommended that any such Indexes first be inactivated in the Repository
using Siebel Tools, and that the Siebel Database Utility then be used to
synchronize the Repository and the physical schema. This will ultimately ensure
a consistent schema across the Siebel environments.

Reporting against a mirror Database


Prior to SQL Server 2005 some Siebel customers would implement a near-real-time
reporting database using SQL Server replication. The problem with this solution was
that the Siebel customer would need to define the primary-key constraint on every
Table such that replication could function. This was not a supported configuration for
Siebel.

In SQL Server 2005 and SQL Server 2008, Database mirroring is primarily used for
increasing database availability, but it may also be used as a solution for near-real-time
reporting when it is desired to not have the reporting workload running against the
Siebel database. The intent is to have two database partners, have Siebel function
against the principal role, and have reporting function against the mirror role.

The mirror may be operating in a high-safety mode (SAFETY = FULL) or a
high-performance mode (SAFETY = OFF). The latter mode is typically used since
it runs asynchronously between the two partners. Therefore, it may be best to
plan for a near-real-time reporting solution, since there may be a slight delay
in propagating transactions to the mirror role.
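For an existing mirroring session, high-performance mode can be selected on the principal as follows (the database name is an assumption):

```sql
-- SAFETY OFF = high-performance (asynchronous) mirroring.
-- Run against the principal server of an established mirroring session.
ALTER DATABASE siebeldb SET PARTNER SAFETY OFF;
```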

In addition to the mirror, the Database Administrator must also create a
Database Snapshot on top of the mirror, such that reporting is done against
the Database Snapshot.

A Database Snapshot is a point-in-time copy of the database. On initial
creation, the size of the Database Snapshot is zero (0) bytes, and the creation
process has no performance impact. The size of the Database Snapshot will
increase as the primary database is modified, since the Database Snapshot needs
to save a copy of each data page before modification.

A preferred method to accomplish near-real-time reporting using Database
Snapshots is summarized in the SQL CAT document (link below):
1. Create two Database Snapshots that are five (5) minutes apart.
2. Use a Synonym to abstract the name of the Database Snapshots. Create the
Synonym in the database that will point to the snapshot objects.
3. Toggle read/refresh between the two Database Snapshots using the Synonym.
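One refresh cycle can be sketched as follows. The snapshot name, file path, logical file name, and reporting synonym are all assumptions for illustration:

```sql
-- Create a Database Snapshot of the mirror database
-- (the logical data-file name siebeldb_data is assumed).
CREATE DATABASE siebeldb_snap_a
    ON ( NAME = siebeldb_data,
         FILENAME = 'E:\Snapshots\siebeldb_snap_a.ss' )
    AS SNAPSHOT OF siebeldb;

-- Repoint the reporting Synonym at the fresh snapshot. Reports reference
-- the Synonym, never a snapshot name directly.
IF OBJECT_ID('dbo.rpt_S_EVT_ACT', 'SN') IS NOT NULL
    DROP SYNONYM dbo.rpt_S_EVT_ACT;
CREATE SYNONYM dbo.rpt_S_EVT_ACT
    FOR siebeldb_snap_a.dbo.S_EVT_ACT;
```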

For further information please see:


 SQL Server 2008 Books Online - Database Mirroring and Snapshots
 SQL CAT - Database Snapshots and Synonyms
 SQL CAT - Database Snapshot performance considerations under I/O-intensive
workloads

Partitioned Tables
Partitioned Tables were introduced in SQL Server 2005 and continue to be a supported
feature in SQL Server 2008. Partitioned Tables are typically used for Tables with a
large volume of data, such that the Table is easier to manage and maintain for the
Database Administrator. For example, an Activity Table (S_EVT_ACT) with 100 million
records takes more time to defragment and update statistics.

There are no fixed criteria for identifying a Table as a candidate for
partitioning; it is a judgment call for the Database Administrator.
Considerations:
 How large is the Table? Large may be defined by both the average record width
and the total number of records.
 Is much of the data in the Table static or historical? For example, a large
volume of data in the Activity (S_EVT_ACT) Table may not change after the
Activity is marked as Done.
 Are the benefits worth the additional effort?
 Are the business processes and transactions well understood such that the Table
can be partitioned to align with these business considerations?

The biggest impediment to the use of Partitioned Tables with Siebel is that
Siebel is not Partition-aware in Siebel Tools and Siebel Database Utility
(DDLSYNC). This means that schema changes cannot be applied, synchronized, or
confirmed using Siebel Tools or the Siebel Database Utility.
It is feasible to partition a Siebel Table so long as the Database Administrator and
Siebel Administrator understand that:
 Partitioned Tables may or may not be implemented in the Siebel Development
environment. If not implemented in Development, Developers or the Siebel
Administrator will still be able to extend the schema or create/adjust custom
Indexes on demand for all Tables. If implemented in Development, only the
Database Administrator may implement any DDL changes for the partitioned
Table. However, the advantage to the latter approach is consistency across the
Siebel environments.
 Manual coordination and workload will increase. The Siebel Database Utility
(DDLSYNC) will fail when attempting to apply a schema change against a
Partitioned Table. Consequently, schema changes will need to be manually
tracked and applied to Siebel environments (excluding Development).
 The clustered Index on the Table is likely not aligned with how the Database
Administrator would like to partition the Table. This will require a redefinition of
the clustered Index for the Table that is not in sync with the Siebel Repository.
 If planning to upgrade the Siebel product (e.g. 7.8 to 8.1) in the future and the
approach is to use Siebel’s upgrade processes/scripts, the Database
Administrator will likely need to remove the partitioning from the Table and
recreate Indexes such that the schema is completely in sync with the Siebel
Repository.

The choice of which Column to use to partition the Table is at the Database
Administrator's discretion - a Siebel-provided Column could be used, or an
extension Column representing a hash value could be added. The Database
Administrator must consider the following:
 Cardinality. Does the selected Column provide the needed values to support the
envisioned and reasonably-balanced partitions?
 Change. Does the value in the selected partitioning Column ever change?
Ideally the value should not change after the initial INSERT, since a change to
the value could cause SQL Server to move the record from one partition to
another partition.
 Growth. How quickly will the Table grow? How does this align with the selected
Column?
 Access. How do the online users and asynchronous processes typically access or
update the data?
 Maintenance. How does the Database Administrator want to add or prune
partitions over time?
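A partitioning definition along these lines might be sketched as follows. The partitioning Column (CREATED), boundary values, and filegroup choice are assumptions, and, as noted above, such a change is made outside of Siebel Tools and leaves the physical schema out of sync with the Repository:

```sql
-- Hypothetical example: range-partition S_EVT_ACT by year of CREATED.
CREATE PARTITION FUNCTION pf_act_by_year (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01', '2009-01-01');

-- Map all partitions to one filegroup for simplicity.
CREATE PARTITION SCHEME ps_act_by_year
    AS PARTITION pf_act_by_year ALL TO ([PRIMARY]);

-- The clustered Index would then be rebuilt on ps_act_by_year (CREATED),
-- a manual step performed by the Database Administrator.
```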

For DDL changes, note that the Siebel Repository has a last-updated timestamp on
every Table. The data in the Siebel Repository Tables may be of benefit for tracking
and confirming schema changes.
 Tables. S_TABLE
 Columns. S_COLUMN, joined to S_TABLE
 Indexes. S_INDEX, joined to S_INDEX_COLUMN and S_TABLE
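For example, a query along these lines lists the most recently changed Tables (the Repository name is an assumption; confirm the active Repository in your environment):

```sql
-- List Siebel Tables by last-updated timestamp in the active Repository.
select t.NAME, t.LAST_UPD
from dbo.S_TABLE as t
inner join dbo.S_REPOSITORY as r
        on t.REPOSITORY_ID = r.ROW_ID
where r.NAME = 'Siebel Repository'   -- assumed Repository name
order by t.LAST_UPD desc;
```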
EIM Performance
Enterprise Integration Manager (EIM) is a Siebel-provided tool to perform bulk-data
operations.

EIM commits its changes to SQL Server per batch. Thus, rows in the EIM Table with the
same value for IF_ROW_BATCH_NUM will be processed and committed at the same
time to the base Tables. This is an important consideration when planning for
concurrent EIM tasks, or planning for EIM tasks concurrent with other workload (e.g.
online users).

A lock-escalation may occur in SQL Server when:

 the logical unit of work (LUW) contains more than 5,000 records

 the LUW processes (INSERT, UPDATE, or DELETE) more than 20% of the
records in the Table.

These are two important rules of thumb relative to EIM, and both should influence how
the Database Administrator and Siebel Administrator plan and implement EIM in one of
four generic scenarios.

The Database Administrator may choose to temporarily disable Lock Escalation for one
or more base Tables before significant EIM activity. For example:

use siebeldb
go
alter table dbo.S_ORG_EXT set (LOCK_ESCALATION = disable)
alter table dbo.S_PARTY set (LOCK_ESCALATION = disable)
alter table dbo.S_ORG_BU set (LOCK_ESCALATION = disable)
alter table dbo.S_ACCNT_POSTN set (LOCK_ESCALATION = disable)
go

The Database Administrator must remember to enable Lock Escalation after
the EIM activity is complete. To enable Lock Escalation again:

use siebeldb
go
alter table dbo.S_ORG_EXT set (LOCK_ESCALATION = auto)
alter table dbo.S_PARTY set (LOCK_ESCALATION = auto)
alter table dbo.S_ORG_BU set (LOCK_ESCALATION = auto)
alter table dbo.S_ACCNT_POSTN set (LOCK_ESCALATION = auto)
go
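The current setting can be confirmed from the catalog views:

```sql
-- Confirm the Lock Escalation setting for the affected base Tables.
select name, lock_escalation_desc
from sys.tables
where name in ('S_ORG_EXT', 'S_PARTY', 'S_ORG_BU', 'S_ACCNT_POSTN');
```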

For further information please see:

 Oracle's Technical Note 409: Siebel Enterprise Integration Manager Best
Practices.
No mixed workload, EIM involving low-volume Tables
This is perhaps the least-challenging but most misunderstood scenario. It is assumed
that EIM jobs will be running while online users are not accessing the Siebel application.

 Plan for a moderate EIM batch size of perhaps 10,000 to 20,000 rows per batch
(directly controls the size of the LUW, or the COMMIT frequency to the base Tables).

 Prepare a focused version of the script in Appendix 1. When invoked, the focused
script should defragment and recreate statistics for the EIM Table, and all of the
affected base Tables (e.g. EIM_ACCOUNT, S_ORG_EXT, S_ORG_BU, S_PARTY, etc.).
Don’t forget the intersection Tables!

 Plan to NOT run concurrent EIM tasks while data volumes are low, i.e. until
there are at least 50,000 records in each base Table.

 As the data in the base Tables quickly increases from no or minimal records,
SQL Server will automatically generate auto-updated statistics for the base
Tables.

 Once all EIM tasks are complete, run the focused script to defragment and
recreate statistics for the EIM Table and the base Tables.

No mixed workload, EIM involving moderate/high-volume Tables

This is a challenging scenario, in that one must avoid contention between
concurrent EIM tasks. It is assumed that EIM jobs will be running while online
users are not accessing the Siebel application.

 Plan for a moderate EIM batch size of perhaps 10,000 rows per batch (directly
controls the size of the logical unit of work, or the COMMIT frequency to the base
Tables).

 Prepare a focused version of the script in Appendix 1. When invoked, the focused
script should defragment and recreate statistics for the EIM Table, and all of the
affected base Tables (e.g. EIM_ACCOUNT, S_ORG_EXT, S_ORG_BU, S_PARTY, etc.).
Don’t forget the intersection Tables! For all of these Tables, consider increasing the
free space in the Table by reducing the FILLFACTOR parameter in the focused script
(e.g. set FILLFACTOR = 75, or 25% free for heavy INSERT operations).

 Plan to initially NOT run concurrent EIM tasks that involve operations against
the same Siebel base Tables, until there are more than 50,000 rows in the base
Tables.

 For situations involving the need to load significant volumes of data (e.g. 10 million
Accounts), consider temporarily dropping some of the unneeded, non-clustered
Indexes on the Siebel base Tables being loaded by EIM. Use caution though, since
dropping a needed non-clustered Index may induce a separate performance
problem.
 As the data in the base Tables quickly increases from no or minimal records,
SQL Server will automatically generate auto-updated statistics for the base
Tables. Expect to conclude all EIM tasks, and then run the focused script to
defragment and recreate statistics for the EIM Table and the base Tables.

 Once complete, decrease the free space in the Table by increasing the
FILLFACTOR parameter in the focused script (e.g. set FILLFACTOR = 85, or a
nominal 15% free), and then run the focused script to again defragment and
recreate statistics for the EIM Table and the base Tables. Then use the Siebel
Database Utility to synchronize the Repository and the physical schema, and
hence recreate any dropped, non-clustered Indexes.

Mixed workload, EIM involving low-volume Tables


This is perhaps the most challenging scenario. It is assumed that EIM jobs will be
running while online users are accessing the Siebel application.

 Consider a small EIM batch size of perhaps 1,000 to 3,000 rows per batch (directly
controls the size of the logical unit of work, or the COMMIT frequency to the base
Tables). The smaller batch size is technically less efficient, but it is assumed there
is an additional goal to avoid contention with online users.

 Once complete and when able to take Siebel offline (or when workload is very low),
defragment and recreate statistics for the base Tables.

Mixed workload, EIM involving moderate/high-volume Tables

This is another challenging scenario. It is assumed that EIM jobs will be
running while online users are accessing the Siebel application.

 Consider a smaller EIM batch size of perhaps 3,000 to 5,000 rows per batch
(directly controls the size of the logical unit of work, or the COMMIT frequency to
the base Tables). The smaller batch size is technically less efficient, but it is
assumed there is an additional goal to avoid contention with online users.

 Once complete and when able to take Siebel offline (or when workload is very low),
defragment and recreate statistics for the base Tables.
Appendix 1 – REINDEX Script
This generic script may be used for multiple scenarios by the database administrator.
Copy and paste the script into a query window in SQL Server Management Studio, and
then save for later use.

Notes:

 The database name is assumed to be siebeldb.

 By default, the script will defragment and update statistics for all Tables in the
database.

 FILLFACTOR = 85. This equates to leaving 15% (100 – 85 = 15) free space
after defragmenting the Tables and Indexes. 15% is a very reasonable amount
of free space. It is not recommended to use a value of zero (0) or 100 for
FILLFACTOR since both values leave no free space. Similarly, it is not
recommended to use a value less than 75 (25% or more free space) for normal
operations. Consider using this sequence of commands to change the server-wide
default FILLFACTOR to 85 (15% free) for Indexes subsequently created without an
explicit FILLFACTOR. Note that fill factor is an advanced option, and that a
restart of SQL Server is required for the new value to take effect:

sp_configure 'show advanced options', 1
go
reconfigure
go
sp_configure 'fill factor (%)', 85
go
reconfigure
go
 The script output will include the starting time (of the REBUILD operation) and
the approximate row count for each Table.

 Data compression may be set to NONE or ROW. PAGE compression is NOT
recommended for Siebel OLTP databases. For simplicity of management, the
Database Administrator might consider compressing all Tables instead of a
single Table. WARNING: an existing, compressed Table will be uncompressed if
the ALTER INDEX command is run with DATA_COMPRESSION = NONE.

 Max Degree of Parallelism (MAXDOP) may be temporarily increased within each
ALTER INDEX command. MAXDOP cannot exceed the number of processors available
to SQL Server, and a conservative approach might be to never set MAXDOP higher
than (N - 1), where N is the number of processors. This will leave one
processor free for SQL Server internals.

 For very large Tables (e.g. 50+ million rows), it may be faster to exclude those
Tables from this script and take an alternate approach:

o Drop all non-clustered Indexes on the Table.

o Perform the ALTER operation on the Table and its clustered Index.
Remember to set an appropriate value for FILLFACTOR.

o Recreate all non-clustered Indexes on the Table. Remember to set an
appropriate value for FILLFACTOR.

 The script is not suitable for Partitioned Tables. Partitioned Tables should be
excluded by adding a simple predicate to the script.

and so.name not in('Table1', 'Table2')


 The script may be used with SQL Server 2000 or SQL Server 2005 by changing
the command to DBREINDEX. See the example in the comments.

-- Rebuild/Defragment all Tables and Indexes in a given database
-- Written by Peter Samson (atc.clears@gmail.com)
-- March 14, 2009

-- FILLFACTOR set to a reasonable default of 85,
-- which essentially gives 15% free space after rebuild

-- DATA_COMPRESSION may be set to NONE or ROW.
-- PAGE compression is not recommended for Siebel OLTP databases.

-- Max Degree of Parallelism (MAXDOP) should be selected by the DBA.
-- Should never be more than (N - 1), where N is the number of processors
-- available to SQL Server.

SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON

use siebeldb -- CHANGE TO YOUR DATABASE NAME
go

set nocount on

print convert(varchar(24), getdate(), 121) + ' Start'

declare @exec_cmd varchar(8000)

-- SQL 2008 format
-- alter index all on dbo.mytable rebuild with
--   (fillfactor = 85, sort_in_tempdb = on, statistics_norecompute = off
--   , online = off, allow_row_locks = on, allow_page_locks = on
--   , data_compression = NONE | ROW, maxdop = N )

-- SQL 2000/2005 format
-- dbcc dbreindex('dbo.mytable', ' ', 85)

DECLARE alter_table CURSOR FOR
select 'alter index all on '
     + su.name
     + '.'
     + so.name
     + ' rebuild with (fillfactor = 85, sort_in_tempdb = on, statistics_norecompute = off, '
     + ' online = off, allow_row_locks = on, allow_page_locks = on, '
     + ' data_compression = none, maxdop = 1 ) '
     + ' -- Estimated Row Count: '
     + convert(varchar(15), max(si.rowcnt))
from sysobjects as so with (nolock)
inner join sysindexes as si with (nolock)
        on so.id = si.id
inner join sysusers as su with (nolock)
        on so.uid = su.uid
where so.xtype = 'U'
group by su.name
       , so.name
order by max(si.rowcnt) asc
       , 1 asc

OPEN alter_table

FETCH NEXT FROM alter_table
INTO @exec_cmd

WHILE @@FETCH_STATUS = 0
BEGIN
    print convert(varchar(24), getdate(), 121) + ' ' + @exec_cmd
    EXECUTE (@exec_cmd)

    FETCH NEXT FROM alter_table
    INTO @exec_cmd
END

CLOSE alter_table
DEALLOCATE alter_table

print convert(varchar(24), getdate(), 121) + ' Complete'
go
Appendix 2 – Maintenance Plans
Simple maintenance will optimize the health of your SQL Server environments. The
Maintenance Plan Wizard may be used to define maintenance plans for the Siebel
database in each environment. Moreover, SQL Server Management Studio provides an
extensive design surface such that you can design an enhanced workflow for
maintenance-plan tasks.

Suggested maintenance tasks include:

 Every day, perform a full backup of the Siebel database and transaction
log. Do this in all Siebel environments. Remember that Development likely
contains your master Siebel Repository.

 Every 10-30 minutes, perform a transaction log backup for the Siebel
database. The combination of a full backup and the transaction logs will likely
allow the Database Administrator to perform point-in-time recovery if required.

 Every day, perform a full database backup of all system databases. Do
this after the full backup of the Siebel database.

 Periodically run a database-integrity check (CHECKDB) on the Siebel
database. The consistency check is ideally run before the full database
backup, and will detect physical corruption in the database. The consistency
check introduces a heavy workload on SQL Server and may not be practical to
run frequently on a large Siebel database (e.g. greater than one terabyte).

 Periodically defragment and update statistics on all Siebel Tables and
Indexes. Consider using a default of 15% free (equates to a FILLFACTOR of
85). When updating statistics, perform a full scan. Please see the discussion
earlier in the document.

You will likely have three to four maintenance plans for your Siebel database, and each
maintenance plan will have a different set of tasks:

 Every 10-30 minutes – Siebel. Backup of Siebel transaction Log.

 Daily – Siebel. Full backups of Siebel database and transaction log.

 Daily – System. Full backups of system databases.

 Weekly – Siebel. Check Database Integrity, Rebuild Indexes. Full backups.
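The core tasks can be sketched in Transact-SQL; the backup paths are assumptions, and a Maintenance Plan would normally generate equivalent commands:

```sql
-- Daily: full backup of the Siebel database, followed by its log.
BACKUP DATABASE siebeldb
    TO DISK = 'E:\Backups\siebeldb_full.bak' WITH INIT, CHECKSUM;
BACKUP LOG siebeldb
    TO DISK = 'E:\Backups\siebeldb_log.trn' WITH CHECKSUM;

-- Weekly: integrity check, ideally before the full backup.
DBCC CHECKDB ('siebeldb') WITH NO_INFOMSGS;
```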


Appendix 3 – Traced SQL statement
This is an example of a Siebel-issued SQL statement that was captured with SQL
Profiler in the RPC:Starting event. The SQL statement was traced from the All
Contacts view for Siebel 7.8. The numerous Columns in the SQL statement have been
removed/minimized for simplicity.

The entire set of captured data in the RPC:Starting event should be used when
attempting to recreate the Execution Plan for the SQL statement. The initial
values for the internal parameters @p1 through @p7 are just as critical as the passed
parameters (Last Name like Smith*, First Name like John*, etc.). Note that the values
for @p1 through @p7 will be different in the RPC:Completed event.

declare @p1 int


set @p1=-1
declare @p2 int
set @p2=0
declare @p5 int
set @p5=28688
declare @p6 int
set @p6=8193
declare @p7 int
set @p7=1
exec sp_cursorprepexec @p1 output,@p2 output,N'@P1 varchar(15),@P2
varchar(15),@P3 varchar(50),@P4 varchar(30)',N'
SELECT
T1.CONFLICT_ID,
CONVERT (VARCHAR (10),T1.LAST_UPD, 101) + '' '' + CONVERT (VARCHAR (10),T1.LAST_UPD, 8),
CONVERT (VARCHAR (10),T1.CREATED, 101) + '' '' + CONVERT (VARCHAR (10),T1.CREATED, 8),
T2.FST_NAME,
T2.HOME_PH_NUM,
T2.JOB_TITLE,
T2.LAST_NAME
FROM
dbo.S_PARTY T1
INNER JOIN dbo.S_CONTACT T2 ON T1.ROW_ID = T2.PAR_ROW_ID
INNER JOIN dbo.S_POSTN_CON T3 ON T2.PR_POSTN_ID = T3.POSTN_ID AND T2.ROW_ID =
T3.CON_ID
INNER JOIN dbo.S_PARTY T4 ON T3.POSTN_ID = T4.ROW_ID
INNER JOIN dbo.S_CONTACT_BU T5 ON T5.BU_ID = @P1 AND T2.ROW_ID = T5.CONTACT_ID
INNER JOIN dbo.S_PARTY T6 ON T5.BU_ID = T6.ROW_ID

LEFT OUTER JOIN dbo.S_ORG_EXT T7 ON T2.PR_DEPT_OU_ID = T7.PAR_ROW_ID


LEFT OUTER JOIN dbo.S_POSTN T8 ON T7.PR_POSTN_ID = T8.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_POSTN T9 ON T2.PR_POSTN_ID = T9.PAR_ROW_ID

LEFT OUTER JOIN dbo.S_POSTN_CON T10 ON T1.ROW_ID = T10.CON_ID AND T10.POSTN_ID = @P2
LEFT OUTER JOIN dbo.S_USER T11 ON T8.PR_EMP_ID = T11.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_USER T12 ON T9.PR_EMP_ID = T12.PAR_ROW_ID

LEFT OUTER JOIN dbo.S_ADDR_PER T13 ON T2.PR_PER_ADDR_ID = T13.ROW_ID


LEFT OUTER JOIN dbo.S_MED_SPEC T14 ON T2.MED_SPEC_ID = T14.ROW_ID
LEFT OUTER JOIN dbo.S_PRI_LST T15 ON T2.CURR_PRI_LST_ID = T15.ROW_ID

LEFT OUTER JOIN dbo.S_CONTACT_LOYX T16 ON T1.ROW_ID = T16.PAR_ROW_ID


LEFT OUTER JOIN dbo.S_EMP_PER T17 ON T1.ROW_ID = T17.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_CONTACT_FNX T18 ON T1.ROW_ID = T18.PAR_ROW_ID

LEFT OUTER JOIN dbo.S_CONTACT_T T19 ON T1.ROW_ID = T19.PAR_ROW_ID


LEFT OUTER JOIN dbo.S_CONTACT_X T20 ON T1.ROW_ID = T20.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_CONTACT_SS T21 ON T1.ROW_ID = T21.PAR_ROW_ID

LEFT OUTER JOIN dbo.S_POSTN T22 ON T3.POSTN_ID = T22.PAR_ROW_ID


LEFT OUTER JOIN dbo.S_USER T23 ON T22.PR_EMP_ID = T23.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_PARTY T24 ON T2.PR_DEPT_OU_ID = T24.ROW_ID

LEFT OUTER JOIN dbo.S_ORG_EXT T25 ON T2.PR_DEPT_OU_ID = T25.PAR_ROW_ID


LEFT OUTER JOIN dbo.S_ORG_EXT_FNX T26 ON T2.PR_DEPT_OU_ID = T26.PAR_ROW_ID
LEFT OUTER JOIN dbo.S_CON_ADDR T27 ON T2.PR_PER_ADDR_ID = T27.ADDR_PER_ID AND
T2.ROW_ID = T27.CONTACT_ID
LEFT OUTER JOIN dbo.S_ADDR_PER T28 ON T2.PR_PER_ADDR_ID = T28.ROW_ID
LEFT OUTER JOIN dbo.S_PARTY T29 ON T2.PR_SYNC_USER_ID = T29.ROW_ID
LEFT OUTER JOIN dbo.S_USER T30 ON T2.PR_SYNC_USER_ID = T30.PAR_ROW_ID
WHERE
(T2.PRIV_FLG = ''N'' AND T1.PARTY_TYPE_CD != ''Suspect'') AND
(T5.CON_LAST_NAME LIKE @P3 AND T2.CON_CD LIKE @P4)
ORDER BY
T5.BU_ID, T5.CON_LAST_NAME, T5.CON_FST_NAME
OPTION (FAST 40)
',@p5 output,@p6 output,@p7 output,'0-R9NH','0-5220','Smith%','John%'
select @p1, @p2, @p5, @p6, @p7
Appendix 4 - Identify and resolve a suboptimal
Execution Plan
This is an example of one approach using the Performance Tuning tools to resolve a
performance issue.

As discussed earlier in the document, it is possible for a suboptimal query
plan to develop under certain conditions. Here is a troubleshooting scenario
for handling a SQL Server CPU spike caused by a single query with a suboptimal
Execution Plan. The sample approach below shows how to isolate and resolve the
issue using SQL Server 2008 server-side and client-side utilities, and includes
examples using both Management Studio and Transact-SQL. Note that Transact-SQL
has more options to resolve the query-plan issue.

First, isolate the problem query in Management Studio (using the Performance Data
Collection):
 In SQL Server Management Studio, open Management \ Data Collection \ Server
Activity
 After the report opens, find the CPU spike under “% CPU Utilization” then click
on it
 Management Studio will now generate a second report displaying your top 10
worst queries ordered by CPU time
 Click on an individual query to drill down on the specific query. For example,
how many times it has been run, total CPU used, etc.

Isolating and resolving the problem query in Transact SQL:


 It is also possible to generate a similar report to the Performance Data Collection
above using the Query Execution Statistics DMV. Run this query with results to
the data grid.

select top 10
       e.plan_handle
     , e.sql_handle
     , ((total_elapsed_time / execution_count) / 1000000) as avg_elapsed_seconds
     , (total_worker_time / execution_count) as avg_worker_time
     , s.text as query_text
     , q.query_plan
     , e.*
from sys.dm_exec_query_stats e
outer apply sys.dm_exec_sql_text (e.plan_handle) s
outer apply sys.dm_exec_query_plan (e.plan_handle) q
where last_execution_time > [start time of cpu spike]
order by total_worker_time desc

 Once you have found a query of interest, confirm a suboptimal Execution Plan by
clicking on the “query_plan” hyperlink to automatically bring up the graphical
execution plan. When you are sure you have found the right query, keep your
results accessible. You will need it later, including the SQL Handle.
 Next, to solve your short-term issues, remove the problem query plan from the
procedure cache using this command.

DBCC FREEPROCCACHE ([sql_handle*])

If all goes well, your CPU issue should be resolved. To verify, run this query
and compare the results with the earlier version (for example, comparing
avg_elapsed_seconds, avg_worker_time, etc.). You can also confirm the new
Execution Plan by once again clicking on the query_plan hyperlink and viewing
the graphical Execution Plan.

select
       e.plan_handle
     , e.sql_handle
     , ((total_elapsed_time / execution_count) / 1000000) as avg_elapsed_seconds
     , (total_worker_time / execution_count) as avg_worker_time
     , s.text as query_text
     , q.query_plan
     , e.*
from sys.dm_exec_query_stats e
outer apply sys.dm_exec_sql_text (e.plan_handle) s
outer apply sys.dm_exec_query_plan (e.plan_handle) q
where sql_handle = [sql_handle*]


Once you have confirmed that the Optimizer has found an optimal Execution Plan,
"lock down" the Execution Plan using the new Plan Freeze feature. This should
fix the query once and for all.

sp_create_plan_guide_from_handle @name = N'Problem Query',
    @plan_handle = [plan_handle*]
