
Product: OpenText Content Server

Version: 10.5
Task/Topic: Deployment, Administration, Performance
Audience: Administrators
Platform: SQL Server 2012, 2014
Document ID: 500227
Updated: September 28, 2016

Best Practices
Microsoft® SQL Server for OpenText™
Content Server 10.5
John Postma, Director, Common Engineering

Rose Liang, Performance Engineering Analyst

Feng Xiao, Senior Performance Engineering Analyst

Amarendra Kondra, Performance Architect

Scott Tindal, Senior Performance Engineering Analyst

OpenText Performance Engineering


Contents
Audience ......................................................................................................................3 
Disclaimer ....................................................................................................................3 
Executive Summary .................................................................................................... 4 
Monitoring and Benchmarking .................................................................................. 5 
Benchmark ............................................................................................................. 5 
SQL Server Performance Monitoring Tools ............................................................ 7 
Performance Dashboard .................................................................................. 7 
Management Data Warehouse ........................................................................ 8 
SQL Server Setup Best Practices.............................................................................. 9 
Maximum Degree of Parallelism (MaxDOP) ........................................................ 10 
tempdb Configuration ........................................................................................... 11 
Instant Database File Initialization........................................................................ 12 
Lock Pages in Memory ......................................................................................... 12 
Min and Max Memory ........................................................................................... 13 
Antivirus Software................................................................................................. 14 
Storage Best Practices ......................................................................................... 14 
Locking ......................................................................................................................15 
Transaction Isolation ............................................................................................ 15 
Lock Escalation .................................................................................................... 16 
SQL Server Configuration Settings......................................................................... 17 
Cost Threshold for Parallelism ............................................................................. 17 
Optimize for Ad hoc Workloads ............................................................................ 18 
Allocate Full Extent ............................................................................................... 18 
AlwaysOn Availability Groups ............................................................................... 19 
Content Server Database Settings .......................................................................... 20 
Compatibility Level ............................................................................................... 20 
Clustered Indexes................................................................................................. 21 
Table and Index Fragmentation, Fill factor ........................................................... 22 
Statistics ............................................................................................................... 23 
Collation ................................................................................................................ 24 
Data Compression ................................................................................................ 25 
Database Data, Log File Size, and AutoGrowth ................................................... 27 
Recovery Model.................................................................................................... 28 
Identifying Worst-Performing SQL .......................................................................... 29 
Content Server Connect Logs .............................................................................. 29 
SQL Server DMVs ................................................................................................ 29 
Appendices ................................................................................................................ 30 
Appendix A – References ..................................................................................... 30 
Appendix B – Dynamic Management Views (DMVs) ........................................... 32 
About OpenText ........................................................................................................ 44 

Audience
This document is intended for a technical audience that is planning an
implementation of OpenText™ products. OpenText recommends consulting with
OpenText Professional Services who can assist with the specific details of
individual implementation architectures.

Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText Corporation for more information.
All tests described in this document were run on equipment located in the
OpenText Performance Laboratory and were performed by the OpenText
Performance Engineering Group. Note that using a configuration similar to that
described in this document, or any other certified configuration, does not
guarantee the results documented herein. There may be parameters or variables
that were not contemplated during these performance tests that could affect
results in other test environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to ensure
that there are no configuration or custom development bottlenecks present that
hinder overall performance.

NOTE: This document is specific to OpenText Content Server 10.5. If your
environment has OpenText Content Server 16.x deployed, refer to:
Best Practices – SQL Server for OpenText Content Server 16.x

Executive Summary
This white paper is intended to explore aspects of Microsoft® SQL Server which may
be of value when configuring and scaling OpenText Content Server™ 10.5. It is
relevant to SQL Server 2014 and 2012 in particular, and is based on customer
experiences, performance lab tests with a typical document management workload,
and technical advisements from Microsoft.
Most common performance issues can be solved by ensuring that the hardware used
to deploy SQL Server has sufficient CPU, RAM and fast I/O devices, properly
balanced.
Topics here explore non-default options available when simple expansion of
resources is ineffective, and discuss some best practices for administration of
Content Server’s database. It concentrates on non-default options, because in
general, as a recommended starting point, Content Server on SQL Server
installations uses Microsoft’s default deployment options. Usage profiles vary widely,
so any actions taken based on topics discussed in this paper must be verified in your
own environment prior to production deployment, and a rollback plan must be
available should adverse effects be detected.
These recommendations are not intended to replace the services of an experienced
and trained SQL Server database administrator (DBA), and do not cover standard
operational procedures for SQL Server database maintenance, but rather offer
advice specific to Content Server on the SQL Server platform.
This document opens with a brief section on how to monitor Content Server SQL
Server database performance, and then makes recommendations on specific tuning
parameters.

Monitoring and Benchmarking
To conduct a comprehensive health and performance check of OpenText Content
Server on SQL Server, you should collect a number of metrics for a pre-defined
“monitored period”. This monitored period should represent a reasonably typical
usage period for the site, and include the site’s busiest hours. Performing a
benchmark establishes a baseline of expected response times and resource usage
for typical and peak loads in your Content Server environment. You can use the
baseline to identify areas for potential improvement and for comparisons to future
periods as the site grows, and you apply hardware, software, or configuration
changes.

Benchmark
Collect the following as the basis for a benchmark and further analysis of worst-
performing aspects:
• Collect operating-system and resource-level operating statistics, including CPU,
RAM, I/O, and network utilization on the database server. How you collect these
statistics depends on the hardware and operating system that you use, and the
monitoring tools that are available to you. Performance Monitor (perfmon) is a
tool that is natively available on Windows servers. If you use perfmon, include
the counters in the following table as a minimum. Consider also using the PAL
tool with SQL Server 2012 threshold file to generate a perfmon template
containing relevant SQL Server performance counters and to analyze captured
perfmon log files:

Memory: Pages/sec, Pages Input/sec, Available MBytes.
In general, available memory should not drop below 5% of physical
memory. Depending on disk speed, Pages/sec should remain below 200.

Physical Disk: Track the following counters per disk or per partition:
% Idle Time, Avg. Disk Read Queue Length, Avg. Disk Write Queue
Length, Avg. Disk sec/Read, Avg. Disk sec/Write, Disk Reads/sec,
Disk Writes/sec, Disk Write Bytes/sec, and Disk Read Bytes/sec.
In general, % Idle Time should not drop below 20%. Disk queue
lengths should not exceed twice the number of disks in the array.
Disk latencies vary based on the type of storage. General guidelines:
Reads: Excellent < 8 msec, Good < 12 msec, Fair < 20 msec, Poor > 20 msec.
Non-cached writes: Excellent < 8 msec, Good < 12 msec, Fair < 20 msec,
Poor > 20 msec.
Cached writes: Excellent < 1 msec, Good < 2 msec, Fair < 4 msec,
Poor > 4 msec.
Also review virtual file latency data from the
sys.dm_io_virtual_file_stats Dynamic Management View (DMV), which
shows I/O requests and latency per data/log file.

Processor: % Processor Time (total and per processor), % Privileged
Time, Processor Queue Length.
In general, % Processor Time for all processors should not exceed 80%
and should be evenly distributed across all processors. % Privileged
Time should remain below 30% of total processor time. Try to keep
Processor Queue Length below 4 per CPU. For standard servers with
long quantums: Excellent <= 4 per CPU, Good < 8 per CPU, Fair < 12
per CPU.

Network: Bytes Received/sec, Bytes Sent/sec.
In general, these metrics should remain below 60% bandwidth
utilization.

SQL Server Counters: The SQL Server Buffer cache hit ratio should
be > 90%. In OLTP applications, this ratio should exceed 95%. Use the
PAL tool SQL Server 2012 template for additional counters and related
thresholds.
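The per-file latency review mentioned above can be done with a query along the following lines against sys.dm_io_virtual_file_stats (a sketch; column aliases are illustrative):

```sql
-- Average read/write latency (ms) per database file, from cumulative I/O stats
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```

Note that these statistics are cumulative since instance startup, so compare snapshots taken before and after the monitored period for a truer picture.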

• Note any Windows event log errors present after or during the monitored period.

• Generate Content Server summary timing and connect logs.

Generate summary timing logs while you collect the operating-system statistics
noted above. In addition, generate at least one day of Content Server connect
logs during the larger period covered by the summary timings, during as typical a
period of activity as possible.

Note that connect logging requires substantial space. Depending on the activity
level of the site, your connect log files may be 5 to 10 GB, so adequate disk
space should be planned. Content Server logs can be redirected to a different file
system if necessary. There is also an expected performance degradation of 10%
to 25% while connect logging is on. If the system is clustered, you should enable
connect logging on all front-end nodes.

• Collect SQL Server profiling events to trace files for periods of three to four hours
during core usage hours that fall within the monitored period. Use the Tuning
template to restrict captured events to Stored Procedures –
RPC:Completed and TSQL – SQL:BatchCompleted. Ensure data columns
include Duration (data needs to be grouped by duration), Event Class, TextData,
CPU, Writes, Reads, and SPID. Don’t collect system events, and filter to only the
Content Server database ID. If the site is very active, you may also want to filter
duration > 2000 msec to limit the size of the trace logs and reduce overhead. You
can use SQL Server Extended Events (new in version 2008, and with new GUI
tool in 2012) to monitor activity. They are intended to replace the SQL Profiler,
provide more event classes, and cause less overhead on the server. For more
information, see an overview of SQL Server Extended Events and a guide to
converting existing SQL Profiler traces to the new extended events format on the
Microsoft Developer Network.

• Obtain the results of a Content Server Level 5 database verification report (run
from the Content Server Administration page, Maintain Database section). To
speed up the queries involved in this verification, ensure there is an index
present on DVersData.ProviderID. Note that for a large site this may take
days to run. If there is a period of lower activity during the night or weekends, that
would be an ideal time to run this verification.
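The supporting index mentioned above can be created with a statement like this sketch (the index name is a suggestion, not a Content Server requirement):

```sql
-- Speeds up the Level 5 database verification queries
CREATE INDEX IX_DVersData_ProviderID ON DVersData (ProviderID);
```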

• Gather feedback from Content Server business users that summarizes any
current performance issues or operational failures that might be database-
related.

SQL Server Performance Monitoring Tools

Performance Dashboard
The performance dashboard offers real-time views of system activity and wait states,
lets you drill down on specific slow or blocking queries, and provides historical
information on waits, I/O stats, and expensive queries. It also shows active traces
and reports on missing indexes. (Note, however, that this is based on single SQL
statements, not on overall database load, so you must consider it from a wider
perspective.)

Download the SQL Server 2012 Performance Dashboard reports installer. It works
with SQL Server 2012 and 2014.

Management Data Warehouse
In SQL Server 2008 and later, you can use the Management Data Warehouse to
collect performance data on system resources and query performance, and to report
historical data. Disk usage, query activity, and server activity are tracked by default;
user-defined collections are also supported. A set of graphical reports show data from
the collections and allows you to drill down to specific time periods. For more
information see the Microsoft Technet article, SQL Server 2008 Management Data
Warehouse.

Also, in SQL 2008 and later, the Management Studio has been enhanced with an
Activity monitor for real-time performance monitoring.

SQL Server Setup Best Practices
OpenText recommends that you install and configure SQL Server following
Microsoft’s recommendations for best performance. This section covers many SQL
Server settings, and refers to Microsoft documentation where applicable.
In addition to configuring the settings described in this section, OpenText
recommends that you install the latest SQL Server Service Pack that is supported by
your Content Server Update level. (Check the release notes for your Content Server
version.)

Maximum Degree of Parallelism (MaxDOP)
Description Controls the maximum number of processors that are used for the
execution of a query in a parallel plan
(http://support.microsoft.com/kb/2806535).
Parallelism is often beneficial for longer-running queries or for
queries that have complicated execution plans. However, OLTP-
centric application performance can suffer, especially on higher-end
servers, when the time that it takes SQL Server to coordinate a
parallel plan outweighs the advantages of using one.

Default 0 (unlimited)

Recommendation Consider modifying the default value when SQL Server experiences
excessive CXPACKET wait types.
For non-NUMA servers, set MaxDOP no higher than the number of
physical cores, to a maximum of 8.
For NUMA servers, set MaxDOP to the number of physical cores per
NUMA node, to a maximum of 8.
Note: Non-uniform memory access (NUMA) is a processor
architecture that divides system memory into sections that are
associated with sets of processors (called NUMA nodes). It is
meant to alleviate the memory-access bottlenecks that are
associated with SMP designs. A side effect of this approach is that
each node can access its local memory more quickly than it can
access memory on remote nodes, so you can improve performance
by ensuring that threads run on the same NUMA node.
Also see the Cost Threshold for Parallelism section for related
settings that restrict when parallelism is used, to allow best
performance with Content Server.
Note: Any value that you consider using should be thoroughly
tested against the specific application activity or pattern of queries
before you implement that value on a production server.

Notes Several factors can limit the number of processors that SQL Server
will utilize, including:
• licensing limits related to the SQL Server edition
• custom processor affinity settings and limits defined in a
Resource Governor pool.
These factors may require you to adjust the recommended MaxDOP
setting. See related reference items in Appendix A – References for
background information.
See Appendix B – Dynamic Management Views (DMVs) for
examples of monitoring SQL Server wait types.

Permissions To change this setting, you must have the alter settings
server-level permission.
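If testing supports a change, MaxDOP can be set with sp_configure; the value 8 below is only a placeholder for the value appropriate to your core count and NUMA layout:

```sql
-- 'show advanced options' must be on to change 'max degree of parallelism'
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;  -- placeholder value; test first
RECONFIGURE;
```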

tempdb Configuration
Description The tempdb is a global resource that stores user objects (such as
temp tables), internal objects (such as work tables, work files,
intermediate results for large sorts and index builds). When
snapshot isolation is used, the tempdb stores the before images of
blocks that are being modified, to allow for row versioning and
consistent committed read access.

Default Single data file.

Recommendation The tempdb has a large impact on Content Server performance.
Follow these guidelines for best results:
• Create one data file per physical core, up to a maximum of
eight. If tempdb allocation contention continues, add four
files at a time (up to the total number of logical
processors).
• Make each data file the same size. Each one should be
large enough to accommodate a typical workload. (As a
general rule, set it to one-and-a-half times the size of the
largest single table in any database used by the instance.)
Allow autogrowth to accommodate usage spikes.
• Place these files on your fastest available storage.
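Under these guidelines, a four-core server might be configured along these lines (file names, paths, and sizes are placeholders to be replaced with values sized for your workload):

```sql
-- Resize the primary tempdb file and add three more equally sized files
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
```

A restart is not required, but existing tempdb activity only spreads across the new files as new allocations occur.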

Notes As with MaxDOP, be mindful of factors that can limit the number of
processors SQL Server will utilize, and set the number of tempdb
data files appropriately.

Monitoring Monitor latch waits related to pages in tempdb. PAGELATCH_XX
wait types can indicate tempdb contention. Appendix B – Dynamic
Management Views (DMVs) has a sample query that you can use
to monitor waits. The following script can help you identify active
tasks that are blocked on tempdb PAGELATCH_XX waits:

SELECT session_id, wait_type, wait_duration_ms,
       blocking_session_id, resource_description,
       ResourceType = CASE
           WHEN (CAST(RIGHT(resource_description,
                 LEN(resource_description) - CHARINDEX(':',
                 resource_description, 3)) AS INT) - 1) % 8088 = 0
               THEN 'Is PFS Page'
           WHEN (CAST(RIGHT(resource_description,
                 LEN(resource_description) - CHARINDEX(':',
                 resource_description, 3)) AS INT) - 2) % 511232 = 0
               THEN 'Is GAM Page'
           WHEN (CAST(RIGHT(resource_description,
                 LEN(resource_description) - CHARINDEX(':',
                 resource_description, 3)) AS INT) - 3) % 511232 = 0
               THEN 'Is SGAM Page'
           ELSE 'Is Not PFS, GAM, or SGAM page'
       END
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'     -- PAGELATCH and PAGEIOLATCH waits
  AND resource_description LIKE '2:%';  -- database_id 2 is tempdb

Monitor tempdb space usage and growth, and adjust its size as
needed.

Permissions Adding or modifying tempdb data files requires ALTER permission
on the tempdb database.

Instant Database File Initialization


Description Allows faster creation and autogrowth of database and log files by
not filling reclaimed disk space with zeroes before use. For more
information see the MSDN article Database Instant File
Initialization.

Default Not enabled.

Recommendation Enable this feature by assigning the SE_MANAGE_VOLUME_NAME


user right to the SQL Server service account. (It appears as
Perform volume maintenance tasks in the Local Security
Policy tool User Rights Assignment list.)

Notes Microsoft states that, because deleted disk data is overwritten only
when data is written to files, an unauthorized principal who gains
access to data files or backups may be able to access deleted
content. Ensure that access to these files is secured, or disable this
setting when potential security concerns outweigh the performance
benefit.
If the database has Transparent Data Encryption enabled, it
cannot use instant initialization.

Permissions To set this user right for the SQL Server service, you must have
administrative rights on the Windows server.

Lock Pages in Memory


Description Memory for the buffer pool is allocated in a way that makes it non-
pageable, avoiding delays that can occur when information has to
be loaded from the page file. For more information, see the
Microsoft Support article How to enable the "locked pages" feature
in SQL Server 2012.

Default Not set.

Recommendation Enable this feature by assigning the Lock pages in memory


user right to the SQL Server service account.
If you enable this setting, be sure to also set max memory
appropriately to leave sufficient memory for the operating system
and other background services.
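One way to confirm that locked pages are actually in use is a quick check of sys.dm_os_process_memory (a sketch; the DMV is available in SQL Server 2008 and later):

```sql
-- A non-zero value indicates the buffer pool is using locked pages
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```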

Notes If SQL Server is running in a virtual environment, be mindful that, if
max memory is set too high, host memory can become overcommitted,
reducing the memory actually available to the instance and leading to
memory pressure and potential problems.

Permissions To set this user right for the SQL Server service, you must have
administrative rights on the Windows server.

Min and Max Memory


Description The min server memory and max server memory settings
configure the amount of memory that is managed by the SQL
Server Memory Manager. SQL Server will not release memory
below the min memory setting, and will not allocate more than max
memory while it runs. See the MSDN article, Server Memory Server
Configuration Options.

Default The default setting for min server memory is 0, and the default
setting for max server memory is 2,147,483,647 MB. SQL
Server dynamically determines how much memory it will use, based
on current activity and available memory.

Recommendation On a server dedicated to a single SQL Server instance, leaving


SQL Server to dynamically manage its memory usage can provide
the best results over time, but min and max memory should be set
when:
• The lock pages in memory setting is enabled. You
should set max memory to a value that leaves sufficient
memory for the parts of SQL Server that are not included
in the max server memory setting (thread stacks,
extended SPs, and so on), the operating system, and other
services.
• More than one SQL Server instance, or other services, are
hosted on the server. Setting max memory for each
instance will ensure balanced memory use.
• Memory pressure could cause SQL Server memory use to
drop to a level that affects performance. Set minimum
memory to a value that maintains stable performance.
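Both values are set with sp_configure; the numbers below are placeholders to be replaced with values sized for your server:

```sql
-- Placeholder values: leave headroom for the OS and other services
EXEC sp_configure 'min server memory (MB)', 8192;
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```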

Notes Monitor available memory, and adjust as needed.


SQL Server Enterprise Edition allows max memory to be set up to
the OS maximum, but other editions do place limits on the
maximum memory used per instance. As of August 2015, SQL
Server 2012 Standard Edition has a limit of 64GB, and SQL Server
2014 Standard Edition a limit of 128GB. Consult SQL Server
documentation for up to date information on memory limits for your
version and edition of SQL Server.

Permissions To change this setting, you must have the alter settings
server-level permission.

Antivirus Software
Description Antivirus software scans files and monitors activity to prevent,
detect, and remove malicious software. Guidelines for antivirus
software configuration are provided in the Microsoft support article,
How to choose antivirus software to run on computers that are
running SQL Server.

Default Depends on vendor.

Recommendation Exclude all database data and log files from scanning (including
tempdb). Exclude the SQL Server engine process from active
monitoring.

Notes Follow the Microsoft support article for SQL Server version-specific
details.

Storage Best Practices


Description SQL Server is an I/O-intensive application. Proper configuration of
I/O subsystems is critical to achieve optimal performance.
With the wide variety of storage types available, it is difficult to make
specific recommendations. Microsoft provides some guidelines in a
Storage Top 10 Best Practices guide. Suggestions for testing
storage capacity and configuring for best performance are provided
in this TechNet article.
For the purposes of characterizing expected I/O patterns, Content
Server is primarily an OLTP-type application.
This section covers a few specific topics related to storage, but it is
not meant to be a comprehensive guide for storage planning.

Default Windows NTFS default cluster size is 4 KB.

Recommendation Microsoft recommends a cluster size of 64 KB for partitions that


house SQL Server data, log, and tempdb files.
Transaction log and tempdb data files have the most impact on
query performance, so Microsoft recommends placing them on
RAID 10 storage. (This provides the best performance compared to
other RAID levels that provide data protection.)
If you use a SAN, increase the host bus adapter (HBA) queue depth
as needed to support the amount of IOPS generated by SQL
Server.
Use a tool such as SQLIO (mentioned in the Storage Top 10 Best
Practices guide) to benchmark and understand the I/O
characteristics of available storage, and to aid in planning the
location of the Content Server database, transaction log, and
tempdb files.

Notes Also, see the sections on tempdb, Database Data, Log File Size,
and AutoGrowth for other recommendations related to data and log
files.

Locking

Transaction Isolation
Description When snapshot isolation is enabled, all statements see a snapshot
of data as it existed at the start of the transaction. This reduces
blocking contention and improves concurrency since readers do not
block writers and vice versa, and also reduces the potential for
deadlocks. See the MSDN article, Snapshot Isolation in SQL
Server.

Default In Content Server 10.5 and later, ALLOW_SNAPSHOT_ISOLATION


and READ_COMMITTED_SNAPSHOT are both automatically set to ON
for the Content Server database.

Recommendation For earlier versions of Content Server, OpenText recommends that


you enable these settings, using the following commands in SQL
Management Studio:
ALTER DATABASE <Content_Server_DB> SET
ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE <Content_Server_DB> SET
READ_COMMITTED_SNAPSHOT ON
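You can verify the current settings for the Content Server database with a query like this (the database name is a placeholder):

```sql
-- Expect ON / 1 for a Content Server 10.5 database
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'Content_Server_DB';  -- placeholder name
```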

Permissions To change this setting, you must have alter permission on the
database.

Lock Escalation
Description Some bulk operations, such as copying or moving a large subtree,
or changing permissions on a tree, can cause SQL Server resource
thresholds to be exceeded. Lock escalation is triggered when one of
the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000
locks on a single non-partitioned table or index.
• A single Transact-SQL statement acquires at least 5,000
locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to
AUTO.
• The number of locks in an instance of the Database
Engine exceeds memory or configuration thresholds. (The
thresholds vary depending on memory usage and the
Locks server setting).
Although escalating to a coarser-grained table lock can free
resources, it also affects concurrency: other sessions accessing the
same tables and indexes can be put into a wait state, degrading
performance.

Default Locks setting is 0, which means that lock escalation occurs when
the memory used by lock objects is 24% of the memory used by the
database engine.
All objects have a default lock escalation value of table, which
means that, when lock escalation is triggered, it is done at the table
level.

Recommendation Use the lock escalation DMV example in Appendix B – Dynamic


Management Views (DMVs) to monitor for lock escalation attempts
and successes.
For objects experiencing frequent lock escalations, consider using
the SET LOCK_ESCALATION clause (available in SQL Server 2008
and later) in the ALTER TABLE statement to change the escalation
algorithm from the default TABLE to either AUTO or DISABLE.
• AUTO means that escalation can happen at the partition
level of a partitioned table, and thus not affect concurrency
on other partitions. (Note that this introduces the potential
of deadlocks, when transactions locking different partitions
each want to expand an exclusive lock to the other
partitions).
• DISABLE does not guarantee that no escalation will occur,
but it puts the thresholds much higher, so that only a stress
on memory resources will trigger escalation.
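For a table identified this way, the escalation mode can be changed as in the following sketch (the table name is a placeholder; AUTO only changes behavior on partitioned tables):

```sql
-- Escalate per partition instead of per table (partitioned tables only)
ALTER TABLE SomeTable SET (LOCK_ESCALATION = AUTO);

-- Or raise the thresholds so that only memory pressure triggers escalation
ALTER TABLE SomeTable SET (LOCK_ESCALATION = DISABLE);
```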

Notes For a description of the Lock Escalation process in SQL Server, see
the Microsoft Technet article, Lock Escalation (Database Engine).

Permissions To change the lock_escalation setting, you must have alter


permission on the table.

SQL Server Configuration Settings
Global server settings that affect all databases on an instance.

Cost Threshold for Parallelism


Description The threshold at which SQL Server will create and run a parallel plan for a query. If the
estimated cost for a serial plan is higher than this value, SQL Server uses a parallel
plan. This setting is ignored and a serial plan is always used if:
• the server has only one processor
• affinity settings limit SQL Server to one processor
• MaxDOP is set to 1

Default 5

Recommendation Content Server mainly issues small OLTP-type queries where the overhead of
parallelism outweighs the benefit, but it does issue a small number of longer queries
that may run faster with parallelism. OpenText recommends that you increase the cost
threshold setting in combination with configuring the Maximum Degree of Parallelism
(MaxDOP) setting as recommended in this white paper. This reduces the overhead for
smaller queries, while still allowing longer queries to benefit from parallelism.
The optimal value depends on a variety of factors including hardware capability and
load level. Load tests in the OpenText performance lab achieved improved results with
a cost threshold of 50, and that may be a reasonable setting to start with. Monitor the
following and adjust the cost threshold as needed:
• CXPACKET wait type: when a parallel plan is used for a query, there is some overhead coordinating the threads, which is tracked under the CXPACKET wait. It is normal to have some CXPACKET waits when parallel plans are used, but if it is one of the highest wait types, further changes to this setting may be warranted. See Appendix B – Dynamic Management Views (DMVs) for examples of querying DMVs for wait information and for queries that use parallelism.
• THREADPOOL wait type: if many queries are using a parallel plan, there can be periods when SQL Server uses all of its available worker threads. Time spent by a query waiting for an available worker thread is tracked under the THREADPOOL wait type. If this is one of the highest wait types, it may be an indication that too many queries are using parallel plans, and that the cost threshold for parallelism should be increased, or maximum worker threads increased (only consider increasing maximum worker threads on systems that are not experiencing CPU pressure). However, there can be other causes for an increase in this wait type (blocked queries or long-running queries), so it should be considered only in combination with a more comprehensive view of query performance and locking.
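A starting configuration along these lines can be applied with sp_configure. The value 50 below is the starting point observed in the OpenText performance lab, not a universal recommendation; tune it based on the wait-type monitoring described above:

```sql
-- 'cost threshold for parallelism' is an advanced option, so expose
-- advanced options first, then set the threshold.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```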

Permissions To change this setting, you must have the alter settings server-level permission.

Optimize for Ad hoc Workloads
Description Available in SQL Server 2008 and later, this ad hoc caching
mechanism can reduce stress on memory-bound systems. It
caches a stub of the query plan, and stores the full plan only if a
query is issued more than once. This prevents the cache from being
dominated by plans that are not reused, freeing space for more
frequently accessed plans.
Turning this on does not affect plans already in the cache, only new
plans created after enabling the setting.

Default Off

Recommendation When there is memory pressure, and the plan cache contains a
significant number of single-use plans, enable this setting.
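Enabling it is a one-time sp_configure change (it is an advanced option); a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```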

Monitoring Check the portion of the plan cache used by single-use queries:
SELECT objtype AS [CacheType],
       count_big(*) AS [Total Plans],
       sum(cast(size_in_bytes AS decimal(18,2)))/1024/1024 AS [Total MBs],
       avg(usecounts) AS [Avg Use Count],
       sum(cast((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END)
           AS decimal(18,2)))/1024/1024 AS [Total MBs - USE Count 1],
       sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
           AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC

Allocate Full Extent


Description Trace flag 1118 enables SQL Server to allocate a full extent to each
database object, rather than one page at a time, which can reduce
contention on SGAM pages. See the Microsoft Support article,
Recommendations to reduce allocation contention in SQL Server
tempdb database.

Default Not enabled

Recommendation Consider enabling this flag if latch waits on pages in tempdb cause
long delays that are not resolved by the recommendations in the
tempdb section.
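The flag can be turned on for the running instance with DBCC, though this does not survive a restart; a sketch:

```sql
-- Enable trace flag 1118 globally for the running instance:
DBCC TRACEON (1118, -1);

-- List trace flags currently active:
DBCC TRACESTATUS (-1);
```

For a persistent setting, add -T1118 as a SQL Server startup parameter in SQL Server Configuration Manager.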

AlwaysOn Availability Groups
Description First introduced in SQL Server 2012, AlwaysOn Availability
Groups are a high-availability and disaster recovery solution that
supports a failover environment for a set of user databases. For
more information, see the MSDN article, AlwaysOn Availability
Groups (SQL Server).

Default Disabled

Recommendation As of August 2015, Content Server does not support AlwaysOn Availability Groups, although support is planned for a future Content Server update. Check with customer support for up-to-date details.

Content Server Database Settings
These settings are specific to the Content Server database.

Compatibility Level
Description The database compatibility level sets certain database behaviors to
be compatible with the specified version of SQL Server.

Default The compatibility level for newly created databases is the same as
the model database which, by default, is the same as the installed
version of SQL Server.
When upgrading the database engine, compatibility level for user
databases is not altered, unless it is lower than the minimum
supported. Restoring a database backup to a newer version also
does not change its compatibility level.

Recommendation Using the latest compatibility mode allows the Content Server
database to benefit from all performance improvements in the
installed SQL Server version.
OpenText recommends that you set this equal to the version of SQL
Server that is installed.
When you change the compatibility level of the Content Server
database, be sure to update statistics on the database after making
the change.

NOTE: With SQL Server 2014, as per this technical alert, you must use trace flag 9481 if the Content Server database compatibility level is set to SQL 2014 (120), or leave the compatibility level set to SQL 2012 (110).
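Checking and changing the level takes one statement each; a sketch, where ContentServerDB is a placeholder for the actual database name:

```sql
-- Check the current compatibility level:
SELECT name, compatibility_level
FROM sys.databases WHERE name = 'ContentServerDB';

-- Set the level to SQL Server 2012 (110), then update statistics
-- as recommended above:
ALTER DATABASE ContentServerDB SET COMPATIBILITY_LEVEL = 110;
USE ContentServerDB;
EXEC sp_updatestats;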

Clustered Indexes
Description Clustered indexes store data rows for the index columns in sorted
order. In general, the primary key or the most frequently used index
on each table is a good candidate for a clustered index. This is
especially important for key highly-active core tables. Only one
clustered index can be defined per table.

Default In Content Server 10.5 and later, many tables in the Content Server
database have a clustered index.

Recommendation OpenText does not recommend making schema changes such as adding clustered indexes to tables in the Content Server database. Additional clustered indexes may be added in future releases of Content Server.

Notes One benefit of clustered indexes is to avoid potential blocking by the ghost record cleanup process when there is a high volume of deletes (such as with Records Management Disposition). Without clustered indexes, the cleanup process may require a table lock to scan for ghost records, blocking other operations.
There are other situations where the ghost record cleanup process
might fall behind the rate of row deletions. You can monitor this
using the sys.dm_db_index_physical_stats DMV, and
looking at the columns Ghost_Record_Count (ghost records
ready for cleanup) and Version_Ghost_Record_Count (ghost
records retained by an outstanding snapshot isolation transaction).
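A monitoring query along those lines might look like the following sketch (MyTable is a placeholder; note that the ghost-record columns are populated only in SAMPLED or DETAILED scan mode, not LIMITED):

```sql
-- Ghost record counts per index for one table:
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       ghost_record_count,          -- ghost records ready for cleanup
       version_ghost_record_count   -- retained by snapshot isolation
FROM sys.dm_db_index_physical_stats(
       DB_ID(), OBJECT_ID('MyTable'), NULL, NULL, 'DETAILED');
```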

Table and Index Fragmentation, Fill factor
Description As data is modified, index and table pages can become fragmented,
leading to reduced performance. You can mitigate this by regularly
reorganizing or rebuilding indexes that have fragmentation levels above a certain threshold.
Fragmentation can be avoided, or reduced, by setting a fill factor for
indexes. This leaves space for the index to grow without needing
page splits that cause fragmentation. This is a tradeoff, because
setting a fill factor leaves empty space in each page, consuming
extra storage space and memory.
For more information, see the Microsoft Developer Network topic
Reorganize and Rebuild Indexes.

Default Server index fill factor default is 0 (meaning fill leaf-level pages to
capacity).

Recommendation Index Fragmentation


When index fragmentation is between 5% and 30%, reorganize,
and when it is greater than 30%, rebuild. See the related section in
Appendix B – Dynamic Management Views (DMVs) for a sample
query that lists fragmentation levels for tables and indexes, and a
query that automatically generates reorganize or rebuild commands
for indexes that are within the above fragmentation levels.
Fill Factor
Index fill factor can be set based on an analysis of how often each
table is updated, leaving static tables at the default value of 0, and
setting a value ranging from 95 for infrequently updated tables to 70
for frequently updated tables. This approach is outlined in the SQL
Server Pro blog post, What is the Best Value for the Fill Factor?
Index, Fill Factor and Performance, Part 2.
The Index Usage section of Appendix B – Dynamic Management
Views (DMVs) shows a sample query that lists counts of index
operations on each index. (This DMV aggregates data since the last
SQL Server restart, so you should run it after a period of
representative usage.)
Frequently updated tables will still eventually encounter
fragmentation, requiring an index rebuild, which also resets the fill
factor. (Note however that an index reorganize does not reset the
fill factor.)

Notes By default, a table lock is held for the duration of an index rebuild
(but not a reorganization), preventing user access to the table.
Specifying ONLINE=ON in the command avoids the table lock (other
than for a brief period at the start), allowing user access to the table
during the rebuild. However, this feature is available only in
Enterprise editions of SQL Server. Also take note of the potential
data corruption issue when running online index rebuilds with
parallelism that is described in the Microsoft Support article, FIX:
Data corruption occurs in clustered index when you run online index
rebuild in SQL Server 2012 or SQL Server 2014.
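The maintenance commands themselves are single ALTER INDEX statements; a sketch with placeholder names, applying the fragmentation thresholds described above:

```sql
-- Rebuild with a fill factor of 90 while keeping the table available
-- (ONLINE = ON requires an Enterprise edition of SQL Server):
ALTER INDEX IX_MyIndex ON MyTable
REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);

-- Reorganize instead when fragmentation is moderate (5% to 30%);
-- note that reorganizing does not reset the fill factor:
ALTER INDEX IX_MyIndex ON MyTable REORGANIZE;
```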

Monitoring Track the perfmon SQLServer:AccessMethods:Page
Splits/Sec counter to observe the rate of page splits, and to help
evaluate the effectiveness of fill-factor settings that are used. (Note
that this includes both mid-page splits that cause fragmentation,
and end-page splits for an increasing index.)

Permissions To rebuild or reorganize an index, you must have alter permission on the table.

Statistics
Description The query optimizer uses statistics to aid in creating high-quality
query plans that improve performance. Statistics contain information
about the distribution of values in one or more columns of a table or
view, and are used to estimate the number of rows in a query result.
An overview of SQL Server statistics is covered in the MSDN
article, Statistics.
Three database settings control whether SQL Server creates
additional statistics, and when and how it updates statistics:
AUTO_CREATE_STATISTICS: The query optimizer creates
statistics on individual columns in query predicates as necessary.
AUTO_UPDATE_STATISTICS: The query optimizer determines
when statistics might be out of date (based on modification reaching
a threshold) and updates them when they are used by a query.
AUTO_UPDATE_STATISTICS_ASYNC: When set off, queries being
compiled will wait for statistics to update if they are out of date.
When this setting is on, queries compile with existing statistics even
if the statistics are out of date, which could lead to a suboptimal
plan.

Default The first two settings above are on by default, and the third is off. All
can be changed in the model database. When the Content Server
database is created, it will inherit the settings from the model
database.

Recommendation Maintaining up-to-date statistics is a key factor in allowing SQL Server to generate plans with optimal performance.
OpenText recommends using the default values for these three settings, to allow the query optimizer to automatically create and update statistics as needed.
For large installations where some tables may grow to a size where the default threshold for automatically updating statistics (20% change) is not sensitive enough, consider enabling trace flag 2371, which reduces the threshold as table size increases. For more information, see the Microsoft Support article, Controlling Autostat (AUTO_UPDATE_STATISTICS) behavior in SQL Server.
If your site has a large amount of update activity, or if you want to update statistics on a more predictable schedule, you can update statistics during a regular maintenance window using an UPDATE STATISTICS statement or the sp_updatestats stored procedure. (Note, however, that the sp_updatestats stored procedure updates statistics on all tables that have one or more rows modified, so it is normally preferable to use an UPDATE STATISTICS statement to update statistics on specific tables, as needed.)
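Both approaches reduce to short statements; a sketch (MyTable is a placeholder):

```sql
-- Update statistics on one table with a full scan during a
-- maintenance window:
UPDATE STATISTICS MyTable WITH FULLSCAN;

-- Enable trace flag 2371 for the running instance (add -T2371 as a
-- startup parameter to persist it across restarts):
DBCC TRACEON (2371, -1);
```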

Monitoring Use the sys.dm_db_stats_properties DMV to track the number of rows updated in each table, to aid in determining when trace flag 2371 may be needed, and which statistics may need to be updated during maintenance windows.

Notes Updating statistics causes queries to recompile, which can be time-consuming, so any strategy to manually update statistics must balance the benefit of generating improved query plans with the need to avoid overly frequent query recompilation.

Permissions To update statistics, you must have alter permission on the table
or view.

Collation
Description The collation for a database defines the language and character set
used to store data, sets rules for sorting and comparing characters,
and determines case-sensitivity, accent-sensitivity, and kana-
sensitivity.

Default When SQL Server is installed, it derives its default server-level collation from the Windows system locale. (See the Microsoft Developer Network article, Collation Settings in Setup.)
The default collation for a new database is the same as the SQL
Server default setting. It is inherited from the model database,
where it cannot be changed.
Databases restored from a backup from another server retain their
original collation.

Recommendation To avoid potential issues, provide best performance, and ensure compatibility with other Content Suite applications, OpenText recommends the following:
• For new SQL Server installations, select a collation that is case-sensitive and accent-sensitive for compatibility with other suite products.
• Ensure that the Content Server database has the same collation as the server (and hence the same as system databases like tempdb).
• Ensure that the collation for all objects in the Content Server database is the same as the database collation, with the exception of the WebNodesMeta_XX tables, which may derive a different collation from settings on the Configure Multilingual Metadata administration page.
• Contact customer support for assistance if you have an existing deployment that has a database collation that is different from the server's, or that has tables or columns (other than WebNodesMeta_XX tables) with a collation that is different from the database collation.

Notes The following script identifies table columns with a collation that is different from the database:
DECLARE @DatabaseCollation VARCHAR(100)
SELECT @DatabaseCollation = collation_name
FROM sys.databases WHERE database_id = DB_ID()
SELECT @DatabaseCollation 'Default database collation'
SELECT t.Name 'Table Name', c.name 'Col Name', ty.name 'Type Name',
       c.max_length, c.collation_name, c.is_nullable
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.types ty ON c.system_type_id = ty.system_type_id
WHERE t.is_ms_shipped = 0 AND c.collation_name <> @DatabaseCollation

Data Compression
Description SQL Server 2008 and later offers data compression at the row and
page level (but only in the Enterprise Edition). Compression
reduces I/O and the amount of storage and memory used by SQL
Server, but adds a small amount of overhead in the form of
additional CPU usage.

Default Not compressed.

Recommendation When storage space, available memory, or disk I/O are under
pressure, and the database server is not CPU-bound, consider
using compression on selected tables and indexes.
Microsoft recommends compressing large objects that have either a
low ratio of update operations, or a high ratio of scan operations.
You can use the sp_estimate_data_compression_savings
stored procedure to estimate the space that row or page
compression could save in each table and index, as outlined in Data
Compression: Strategy, Capacity Planning and Best Practices.
You can automate the process using a script. (An example of this
type of approach and a sample script, which was used for internal
testing, is covered in this SQL Server Pro article.) The script
analyzes the usage of Content Server tables and indexes that have
more than 100 pages and selects candidates for compression. It
estimates the savings from row or page compression, and
generates a command to implement the recommended
compression. The script relies on usage data from the DMVs, so it
should be run after a period of representative usage.
Overall impact from compression on performance, storage, memory, and CPU will depend on many factors related to the environment and product usage. Testing in the OpenText performance lab has demonstrated the following:
Performance: For load tests involving a mix of document-
management operations, with a small set of indexes compressed
based on only high-read-ratio indexes, there was minimal
performance impact, but when a larger set of tables and indexes
was compressed, performance was less consistent, and degraded
by up to 20%. For high-volume ingestion of documents with
metadata, there was no impact on ingestion throughput.
CPU: CPU usage increased by up to 8% in relative terms.
MDF File Storage: Reduced by up to 40% depending on what was
compressed. Specific large tables like LLAttrData were reduced
by as much as 82%.
I/O: Read I/O on MDF files reduced by up to 30%; write I/O by up to
18%.
Memory Usage: SQL Buffer memory usage reduced by up to 25%.
As with any configuration change, test the performance impact of
any compression changes on a test system prior to deploying on
production systems.
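The estimate-then-apply flow can be sketched as follows (dbo.MyTable is a placeholder; run the estimate first and apply compression only where the projected savings justify the CPU cost):

```sql
-- Estimate PAGE compression savings for one table:
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'MyTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression if the estimate looks worthwhile:
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```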

Notes It can take longer to rebuild indexes when they are compressed.

Database Data, Log File Size, and AutoGrowth
Description The initial size of the data and log files, and the amount by which
they grow as data is added to the database. Autogrowth of log files
can cause delays, and frequent growth of data or log files can
cause them to become fragmented, which may lead to performance
issues.

Default SQL Server 2014:


Data File: Initial size: 4 MB; Autogrowth: By 1 MB, unlimited
Log File: Initial size: 2 MB; Autogrowth: By 10 percent, unlimited

Recommendation Optimal data and log file sizes depend on the specific environment. In general, it is preferable to size the data and log files to accommodate expected growth so that you avoid frequent autogrowth events.
Leave autogrowth enabled to accommodate unexpected growth. A
general rule is to set autogrow increments to about one-eighth the
size of the file, as outlined in the Microsoft Support article,
Considerations for the "autogrow" and "autoshrink" settings in SQL
Server.
Leave the autoshrink parameter set to False for the Content
Server database.
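Pre-sizing files and setting fixed-size growth increments can be done with ALTER DATABASE; a sketch, where the database and logical file names and the sizes are placeholders (the logical names come from sys.master_files):

```sql
-- Pre-size the data file and use a fixed growth increment:
ALTER DATABASE ContentServerDB
MODIFY FILE (NAME = 'ContentServerDB', SIZE = 8GB, FILEGROWTH = 1GB);

-- Do the same for the log file:
ALTER DATABASE ContentServerDB
MODIFY FILE (NAME = 'ContentServerDB_log', SIZE = 2GB, FILEGROWTH = 256MB);
```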

Monitoring Monitor autogrowth events to understand when the database grows.


The database_file_size_change extended event can be used
to track growth events, or use the perfmon SQL Server
Databases > Data File Size counter.

Notes The following script identifies a database's data file size, and the amount of used space and free space in it:
SELECT DBName, Name, [FileName], Size as 'Size(MB)',
       UsedSpace as 'UsedSpace(MB)',
       (Size - UsedSpace) as 'AvailableFreeSpace(MB)'
FROM
( SELECT db_name(s.database_id) as DBName,
         s.name AS [Name],
         s.physical_name AS [FileName],
         (s.size * CONVERT(float,8))/1024 AS [Size],
         (CAST(CASE s.type WHEN 2 THEN 0 ELSE
              CAST(FILEPROPERTY(s.name, 'SpaceUsed') AS float) *
              CONVERT(float,8) END AS float))/1024 AS [UsedSpace],
         s.file_id AS [ID]
  FROM sys.filegroups AS g
  INNER JOIN sys.master_files AS s
      ON ((s.type = 2 or s.type = 0)
          and s.database_id = db_id()
          and (s.drop_lsn IS NULL))
         AND (s.data_space_id = g.data_space_id)
) DBFileSizeInfo

Recovery Model
Description The recovery model controls how SQL Server maintains the transaction log for each database.

Default The Simple model is the default. It does not support backups of the transaction logs.

Recommendation OpenText recommends that the Content Server database be configured to use the Full recovery model. This requires the DBA to schedule transaction log backups to prevent the log file from growing too large. Bulk-Logged is not recommended, because it is not compatible with many of the operations that Content Server can perform.

Notes The Simple recovery model is used in test systems or in databases that are largely static. Transaction log space is reclaimed once a transaction has completed.
The Full recovery model is used for critical data. Transaction logs must be backed up on a regular basis, as scheduled by the DBA.
The Bulk-Logged recovery model is used to avoid fully logging irregular bulk insert, create index, and select into statements. With that exception, it works the same as Full.
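Switching the model and starting the log backup chain takes a few statements; a sketch with placeholder database and backup path names:

```sql
-- Switch to the Full recovery model:
ALTER DATABASE ContentServerDB SET RECOVERY FULL;

-- A full backup after the switch starts the log chain; log backups
-- can then run on the DBA's schedule:
BACKUP DATABASE ContentServerDB TO DISK = 'D:\Backup\ContentServerDB.bak';
BACKUP LOG ContentServerDB TO DISK = 'D:\Backup\ContentServerDB.trn';
```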

Identifying Worst-Performing SQL
There are several ways to identify poorly-performing SQL.

Content Server Connect Logs


Content Server connect logging (SQL logging) generates logs that include every
Content Server transaction and the SQL statements that they issue. It provides
timings in microseconds for each statement. Note that connect logs can use
substantial space if left enabled for significant periods of time, and can add up to
25% overhead.
If you have the Content Server Performance Analyzer, you can use it to open connect
logs and see the percentage mixture and relative importance of your site’s usage
profile. By clicking the Raw Data tab, you can sort the transaction by overall
execution time and SQL time. This is quite useful, because you can see which
Content Server transactions are taking the most SQL time, not just the individual
statements that they issue. To see the individual statements that make up a Content
Server transaction, right-click the transaction and then click Show SQL. Performance
Analyzer displays every SQL statement that the transaction issued, how long each
one took to execute, and how many rows it affected.
If you don’t have Performance Analyzer, you can use other methods, such as Perl
scripts, to pull out the worst SQL and aggregate timing data related to SQL queries.

SQL Server DMVs


The SQL Server sys.dm_exec_query_stats DMV collects performance data
about cached query plans. Appendix B – Dynamic Management Views (DMVs) includes
a sample query that returns the top 500 queries ordered by total elapsed time, along
with various metrics about physical and logical reads and writes. Modify the query as
needed to alter the number of queries returned, and to order the results by the
desired column. By default, this data is aggregated over the period since SQL Server
was last restarted.
As described in the section on SQL Server Performance Monitoring Tools, the Management Data Warehouse can store this and other performance-related data to allow drilling down on specific time periods.

Appendices

Appendix A – References
Compute Capacity Limits by Edition of SQL Server:
https://msdn.microsoft.com/en-us/library/ms143760(v=sql.120).aspx
SQL Server Resource Governor:
https://msdn.microsoft.com/en-us/library/bb933866(v=sql.120).aspx
Data Compression: Strategy, Capacity Planning and Best Practices:
https://msdn.microsoft.com/en-us/library/dd894051(v=SQL.100).aspx
SQL Server Index Design Guide:
https://technet.microsoft.com/en-us/library/jj835095(v=sql.110).aspx
MaxDOP Recommendations:
https://support.microsoft.com/en-us/kb/2806535
Instant Database File Initialization:
https://msdn.microsoft.com/en-us/library/ms175935(v=sql.120).aspx
Lock Pages in Memory:
https://support.microsoft.com/en-us/kb/2659143
Antivirus software:
https://support.microsoft.com/en-us/kb/309422
SQL Server Memory Configuration Options:
https://msdn.microsoft.com/en-us/library/ms178067(v=sql.120).aspx
Disk Partition Alignment Best Practices for SQL Server:
https://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
Optimizing tempdb Performance:
https://msdn.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
SQL Server 2014 Extended Events:
https://msdn.microsoft.com/en-us/library/bb630282(v=sql.120).aspx
Convert SQL Trace script to Extended Event Session:
https://msdn.microsoft.com/en-us/library/ff878114(v=sql.120).aspx
Reorganize and Rebuild Indexes:
https://msdn.microsoft.com/en-us/library/ms189858(v=sql.120).aspx
Index Fill Factor:
http://sqlmag.com/blog/what-best-value-fill-factor-index-fill-factor-and-performance-
part-2

SQL Server Statistics:
https://msdn.microsoft.com/en-us/library/ms190397(v=sql.120).aspx
Collation and Unicode Support:
https://msdn.microsoft.com/en-us/library/ms143726(v=sql.120).aspx
SQL Server Lock Escalation:
https://technet.microsoft.com/en-us/library/ms184286(v=sql.105).aspx

Appendix B – Dynamic Management Views (DMVs)
SQL Server Dynamic Management Views (DMVs) provide information used to
monitor the health of the server, diagnose problems, and tune performance. Server-
scoped DMVs retrieve server-wide information and require VIEW SERVER STATE
permission to access. Database-scoped DMVs retrieve database information and
require VIEW DATABASE STATE permission.
This appendix provides a description of some DMVs that may be helpful for
monitoring SQL Server performance, along with samples for querying those DMVs.
All procedures and sample code in this appendix are delivered as is and are for
educational purposes only. They are presented as a guide to supplement official
OpenText product documentation.

Waits (sys.dm_os_wait_stats)
Description Shows aggregate time spent on different wait categories.

Sample SELECT wait_type, wait_time_ms, waiting_tasks_count,
       max_wait_time_ms, signal_wait_time_ms,
       wait_time_ms/waiting_tasks_count AS AvgWaitTimems
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms desc;

Notes Consider excluding wait types that don’t impact user query
performance, such as described in the SQL Skills blog post, Wait
statistics, or please tell me where it hurts.

Cached Query Plans (sys.dm_exec_cached_plans)


Description Contains one row per query plan in the cache, showing the amount
of memory used and the re-use count.

Sample Show total plan count and memory usage, highlighting single-use plans:
SELECT objtype AS [CacheType],
       count_big(*) AS [Total Plans],
       sum(cast(size_in_bytes AS decimal(18,2)))/1024/1024 AS [Total MBs],
       avg(usecounts) AS [Avg Use Count],
       sum(cast((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END)
           AS decimal(18,2)))/1024/1024 AS [Total MBs - USE Count 1],
       sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
           AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC

Queries using Parallelism
Description Search the plan cache for existing parallel plans and see the cost
associations to these plans.

Sample SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET QUOTED_IDENTIFIER ON;
WITH XMLNAMESPACES
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT query_plan AS CompleteQueryPlan,
       n.value('(@StatementText)[1]', 'VARCHAR(4000)')
           AS StatementText,
       n.value('(@StatementOptmLevel)[1]', 'VARCHAR(25)')
           AS StatementOptimizationLevel,
       n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)')
           AS StatementSubTreeCost,
       n.query('.') AS ParallelSubTreeXML,
       ecp.usecounts, ecp.size_in_bytes
FROM sys.dm_exec_cached_plans AS ecp
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS eqp
CROSS APPLY query_plan.nodes(
    '/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS qn(n)
WHERE n.query('.').exist('//RelOp[@PhysicalOp="Parallelism"]') = 1

Notes This DMV query shows data about parallel cached query plans, including their cost and number of times executed. It can be helpful in identifying a new cost threshold for parallelism setting that will strike a balance between letting longer queries use parallelism while avoiding the overhead for shorter queries. However, note that the cost threshold for parallelism is compared to the serial plan cost of a query when determining whether to use a parallel plan; the above DMV query shows the cost of the generated parallel plan, which is typically different from (smaller than) the serial plan cost. Consider the parallel plan costs as just a general guideline toward setting the cost threshold for parallelism.

Performance of cached query plans (sys.dm_exec_query_stats)
Description Shows aggregate performance statistics for cached query plans.

Sample SELECT TOP 500 -- change as needed for top X


-- the following four columns are NULL for ad hoc and prepared
batches
DB_Name(qp.dbid) as dbname , qp.dbid , qp.objectid , qp.number
, qt.text
, SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset END
- qs.statement_start_offset)/2) + 1) as statement_text
, qs.creation_time , qs.last_execution_time , qs.execution_count
, qs.total_worker_time / qs.execution_count as avg_worker_time
, qs.total_physical_reads / qs.execution_count as
avg_physical_reads
, qs.total_logical_reads / qs.execution_count as avg_logical_reads
, qs.total_logical_writes / qs.execution_count as
avg_logical_writes
, qs.total_elapsed_time / qs.execution_count as avg_elapsed_time
, qs.total_clr_time / qs.execution_count as avg_clr_time
, qs.total_worker_time , qs.last_worker_time , qs.min_worker_time ,
qs.max_worker_time , qs.total_physical_reads ,
qs.last_physical_reads , qs.min_physical_reads ,
qs.max_physical_reads , qs.total_logical_reads ,
qs.last_logical_reads , qs.min_logical_reads , qs.max_logical_reads ,
qs.total_logical_writes , qs.last_logical_writes ,
qs.min_logical_writes , qs.max_logical_writes , qs.total_elapsed_time
, qs.last_elapsed_time , qs.min_elapsed_time , qs.max_elapsed_time ,
qs.total_clr_time , qs.last_clr_time , qs.min_clr_time ,
qs.max_clr_time
, qs.plan_generation_num -- , qp.encrypted
,qs.total_rows, qs.total_rows / qs.execution_count as average_rows,
qs.last_rows, qs.min_rows, qs.max_rows
, qp.query_plan --the query plan can be *very* useful; enable if
desired
FROM sys.dm_exec_query_stats as qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) as qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
--WHERE...
--ORDER BY qs.execution_count DESC --Frequency
--ORDER BY qs.total_worker_time DESC --CPU
ORDER BY qs.total_elapsed_time DESC --Duration
--ORDER BY qs.total_logical_reads DESC --Reads
--ORDER BY qs.total_logical_writes DESC --Writes
--ORDER BY qs.total_physical_reads DESC --PhysicalReads
--ORDER BY avg_worker_time DESC --AvgCPU
--ORDER BY avg_elapsed_time DESC --AvgDurn

--ORDER BY avg_logical_reads DESC --AvgReads
--ORDER BY avg_logical_writes DESC --AvgWrites
--ORDER BY avg_physical_reads DESC --AvgPhysicalReads

--sample WHERE clauses


--WHERE last_execution_time > '20070507 15:00'
--WHERE execution_count = 1
-- WHERE SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
-- ((CASE statement_end_offset
-- WHEN -1 THEN DATALENGTH(qt.text)
-- ELSE qs.statement_end_offset END
-- - qs.statement_start_offset)/2) + 1)
-- LIKE '%MyText%' 

Virtual File Latency (sys.dm_io_virtual_file_stats)
Description For each data and log file, shows aggregate data about number, average size, and latency of
reads and writes.

Sample SELECT
-- @CaptureID,
GETDATE(), CASE
WHEN [num_of_reads] = 0 THEN 0
ELSE ([io_stall_read_ms]/[num_of_reads])
END [ReadLatency],
CASE WHEN [io_stall_write_ms] = 0 THEN 0
ELSE ([io_stall_write_ms]/[num_of_writes])
END [WriteLatency],
CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0) THEN 0
ELSE ([io_stall]/([num_of_reads] + [num_of_writes]))
END [Latency],
--avg bytes per IOP
CASE WHEN [num_of_reads] = 0 THEN 0
ELSE ([num_of_bytes_read]/[num_of_reads])
END [AvgBPerRead],
CASE WHEN [io_stall_write_ms] = 0 THEN 0
ELSE ([num_of_bytes_written]/[num_of_writes])
END [AvgBPerWrite],
CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0) THEN 0
ELSE (([num_of_bytes_read] +
[num_of_bytes_written])/([num_of_reads] + [num_of_writes]))
END [AvgBPerTransfer], LEFT([mf].[physical_name],2) [Drive],
DB_NAME([vfs].[database_id]) [DB],
[vfs].[database_id],[vfs].[file_id],
[vfs].[sample_ms], [vfs].[num_of_reads],
[vfs].[num_of_bytes_read],
[vfs].[io_stall_read_ms], [vfs].[num_of_writes],
[vfs].[num_of_bytes_written],
[vfs].[io_stall_write_ms], [vfs].[io_stall],
[vfs].[size_on_disk_bytes]/1024/1024. [size_on_disk_MB],
[vfs].[file_handle], [mf].[physical_name]
FROM [sys].[dm_io_virtual_file_stats](NULL,NULL) AS vfs
JOIN [sys].[master_files] [mf] ON [vfs].[database_id] =
[mf].[database_id]
AND [vfs].[file_id] = [mf].[file_id]
ORDER BY [Latency] DESC; 

Index Usage (sys.dm_db_index_usage_stats)
Description Returns counts of different types of operations on indexes.

Sample SELECT DB_NAME([ddius].[database_id]) AS [database name],
       OBJECT_NAME([ddius].[object_id]) AS [Table name],
       [i].[name] AS [index name],
       ddius.*
FROM [sys].[dm_db_index_usage_stats] AS ddius
INNER JOIN [sys].[indexes] AS i
    ON [ddius].[index_id] = [i].[index_id]
   AND [ddius].[object_id] = [i].[object_id]
WHERE [ddius].[database_id] = DB_ID()
ORDER BY [Table name]

Table and index size and fragmentation
(sys.dm_db_index_physical_stats)
Description The first sample below returns size and fragmentation information
for each table and index. The second sample generates ALTER INDEX
commands for indexes with more than 1000 pages and fragmentation
greater than 5%.

Sample SELECT dbschemas.[name] AS 'Schema',
    dbtables.[name] AS 'Table',
    dbindexes.[name] AS 'Index',
    dbindexes.index_id AS IdxID,
    CAST(indexstats.avg_fragmentation_in_percent AS decimal(6,2)) AS FragPercent,
    indexstats.page_count,
    REPLACE(index_type_desc, ' index', '') AS IndexType,
    fragment_count AS Fragments,
    index_depth AS IdxDepth,
    CAST(avg_fragment_size_in_pages AS decimal(10,2)) AS AvgFragSize
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables
    ON dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas
    ON dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes
    ON dbindexes.[object_id] = indexstats.[object_id]
    AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
    AND CAST(indexstats.avg_fragmentation_in_percent AS decimal(6,2)) > 0
ORDER BY indexstats.avg_fragmentation_in_percent DESC;
 
 
SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
-- Conditionally select tables and indexes from the
-- sys.dm_db_index_physical_stats function and convert
-- object and index IDs to names.
SELECT object_id AS objectid,
    index_id AS indexid,
    partition_number AS partitionnum,
    avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 5.0
    AND index_id > 0
    AND page_count > 1000;
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
    FETCH NEXT FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag;
    IF @@FETCH_STATUS < 0 BREAK;
    SELECT @objectname = QUOTENAME(o.name),
           @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;
    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;
    SELECT @partitioncount = count(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;
    -- 30 is an arbitrary decision point at which to switch
    -- between reorganizing and rebuilding.
    IF @frag < 5.0
        SET @command = '';
    IF @frag < 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON '
            + @schemaname + N'.' + @objectname + N' REORGANIZE';
    IF @frag >= 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON '
            + @schemaname + N'.' + @objectname + N' REBUILD';
    IF @partitioncount > 1
        SET @command = @command + N' PARTITION='
            + CAST(@partitionnum AS nvarchar(10));
    -- EXEC (@command);
    IF LEN(@command) > 0
        PRINT @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
DROP TABLE #work_to_do;
--GO
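The heart of the script above is its threshold rule: reorganize below 30% fragmentation, rebuild at 30% or above (indexes under 5% fragmentation or with 1000 pages or fewer are filtered out up front). A minimal Python sketch of that decision, using an illustrative table and index name rather than anything from a real schema:

```python
# Hedged sketch of the script's reorganize-vs-rebuild rule.
# The 30% cutoff matches the arbitrary decision point in the T-SQL sample.

def maintenance_command(schema, table, index, frag_percent):
    """Return the ALTER INDEX statement the script would emit."""
    action = "REORGANIZE" if frag_percent < 30.0 else "REBUILD"
    return f"ALTER INDEX [{index}] ON [{schema}].[{table}] {action}"

# Illustrative names only; substitute your own objects.
print(maintenance_command("dbo", "MyTable", "MyTable_Index1", 12.5))
print(maintenance_command("dbo", "MyTable", "MyTable_Index1", 45.0))
```

REORGANIZE is online and cheaper but less thorough; REBUILD recreates the index, which is why the script reserves it for heavier fragmentation.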

Lock Escalations (sys.dm_db_index_operational_stats)
Description This DMV returns a variety of low-level information about table and
index access. This sample shows lock escalation attempts and
successes for each object in a database.

Sample use <database>;

SELECT db_name(dios.database_id) AS database_name,
    object_name(dios.object_id, dios.database_id) AS object_name,
    i.name AS index_name,
    dios.partition_number,
    dios.index_lock_promotion_attempt_count,
    dios.index_lock_promotion_count,
    (CAST(dios.index_lock_promotion_count AS real)
        / dios.index_lock_promotion_attempt_count) AS percent_success
FROM sys.dm_db_index_operational_stats(db_id(), null, null, null) dios
INNER JOIN sys.indexes i
    ON dios.object_id = i.object_id
    AND dios.index_id = i.index_id
WHERE dios.index_lock_promotion_count > 0
ORDER BY index_lock_promotion_count DESC;

SQL Server and Database information queries
Description The following queries return information about SQL Server and database
configuration that can be helpful when investigating issues, or as part of
a benchmark exercise to document the state of the system.

Sample Show SQL Server full version:
SELECT @@VERSION;

Show database snapshot isolation, recovery model, collation:
SELECT name, snapshot_isolation_state_desc,
    CASE is_read_committed_snapshot_on
        WHEN 0 THEN 'OFF' WHEN 1 THEN 'ON'
    END AS is_read_committed_snapshot_on,
    recovery_model, recovery_model_desc, collation_name
FROM sys.databases;

Show tempdb configuration:
SELECT name AS FileName,
    size*1.0/128 AS FileSizeinMB,
    CASE max_size
        WHEN 0 THEN 'Autogrowth is off.'
        WHEN -1 THEN 'Autogrowth is on.'
        ELSE 'Log file will grow to a maximum size of 2 TB.'
    END AS AutogrowthStatus,
    growth AS 'GrowthValue',
    'GrowthIncrement' = CASE
        WHEN growth = 0 THEN 'Size is fixed and will not grow.'
        WHEN growth > 0 AND is_percent_growth = 0
            THEN 'Growth value is in 8-KB pages.'
        ELSE 'Growth value is a percentage.'
    END
FROM tempdb.sys.database_files;

Database table row count, data and index size:
SELECT name = object_schema_name(object_id) + '.' + object_name(object_id),
    row_count,
    data_size = 8*sum(CASE
        WHEN index_id < 2 THEN in_row_data_page_count
            + lob_used_page_count + row_overflow_used_page_count
        ELSE lob_used_page_count + row_overflow_used_page_count
    END),
    index_size = 8*(sum(used_page_count) - sum(CASE
        WHEN index_id < 2 THEN in_row_data_page_count
            + lob_used_page_count + row_overflow_used_page_count
        ELSE lob_used_page_count + row_overflow_used_page_count
    END))
FROM sys.dm_db_partition_stats
WHERE object_schema_name(object_id) != 'sys'
GROUP BY object_id, row_count
ORDER BY data_size DESC, index_size DESC;
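The unit conversions in the queries above all rest on the 8 KB page size: dividing a page count by 128 yields MB (as in the tempdb query's size*1.0/128), and multiplying by 8 yields KB (as in the 8*sum(...) sizes from sys.dm_db_partition_stats). A quick Python check of that arithmetic, with made-up page counts for illustration:

```python
# SQL Server stores data in fixed 8 KB pages.
PAGE_KB = 8

def pages_to_mb(pages):
    # 8 KB per page / 1024 KB per MB == dividing the page count by 128.
    return pages * PAGE_KB / 1024

def pages_to_kb(pages):
    # The partition-stats query reports sizes as 8 * page counts, i.e. KB.
    return pages * PAGE_KB

print(pages_to_mb(1280))  # a 1280-page file is 10.0 MB
print(pages_to_kb(100))   # 100 pages is 800 KB
```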

For additional guidance and help, please join the community of
experts:

Content Server Discussion Forum

Performance Engineering Forum

About OpenText
OpenText is the world’s largest independent provider of Enterprise Content
Management (ECM) software. The Company's solutions manage information for all
types of business, compliance and industry requirements in the world's largest
companies, government agencies and professional service firms. OpenText supports
approximately 46,000 customers and millions of users in 114 countries and 12
languages. For more information about OpenText, visit www.opentext.com.

www.opentext.com
NORTH AMERICA +800 499 6544 • UNITED STATES +1 847 267 9330 • GERMANY +49 89 4629 0
UNITED KINGDOM +44 118 984 8000 • AUSTRALIA +61 2 9026 3400

Copyright © 2015 OpenText SA and/or OpenText ULC. All Rights Reserved. OpenText is a trademark or registered trademark of OpenText SA and/or OpenText ULC. The list of trademarks
is not exhaustive. Other trademarks, registered trademarks, product names, company names, brands and service names mentioned herein are the property of OpenText SA or other respective
owners.
