
Counters for SQL Server Performance Monitoring

Each counter below is listed under its Performance Monitor object, with a description and, where available, a threshold value.

Object: Processor

Counter: % Processor Time
Description: If this value remains greater than 80%, without correspondingly high values for the disk and network counters, the processor may be the bottleneck.
Threshold: 85%

Counter: % Privileged Time
Description: This is the CPU time spent performing kernel-level operations, such as disk I/O. If this counter is consistently above 80-90% and corresponds with high disk performance counters, you may have a disk bottleneck rather than a CPU bottleneck.
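These two counters are normally read in Performance Monitor, but a quick OS-level cross-check can be scripted. The sketch below uses the third-party psutil package (an assumption; any counter-reading tool works) and applies the 80% guideline from the description above:

```python
import psutil

# Sample CPU usage over a 5-second window.
# cpu_times_percent() splits the sample into user and system (kernel) time,
# which roughly correspond to % Processor Time and % Privileged Time.
cpu = psutil.cpu_times_percent(interval=5)
total_busy = 100.0 - cpu.idle

if total_busy > 80:
    print(f"CPU busy {total_busy:.1f}% - possible processor bottleneck")
if cpu.system > 80:
    print(f"Kernel time {cpu.system:.1f}% - check the disk counters; this may be "
          "an I/O bottleneck rather than a CPU one")
```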

Object: SQLServer:Access Methods

Counter: Full Scans/sec
Description: This counter is for the entire server, not just a single database. One thing you will notice with this counter is that there often appears to be a pattern of scans occurring periodically; in many cases these are table scans SQL Server performs on a regular basis for internal use.
Threshold: 100

Counter: Page Splits/sec
Description: If you find that the number of page splits is high, consider lowering the fill factor of your indexes so that index pages are rebuilt with more free space; the number of page splits should be as low as possible. How low somewhat depends on your system's I/O subsystem, but if you are having disk I/O performance problems on a regular basis and this counter is regularly over 100, you might want to experiment with the fill factor to see whether it helps.
Threshold: 100

Counter: FreeSpace Scans/sec

Counter: Forwarded Records/sec
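The per-second Access Methods counters are also exposed through sys.dm_os_performance_counters, where they are stored as cumulative totals, so two samples are needed to derive a rate. A rough sketch assuming the pyodbc package and an illustrative connection string:

```python
import time
import pyodbc

# Connection string is illustrative; adjust the driver, server, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

SQL = """
SELECT RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name IN ('Full Scans/sec', 'Page Splits/sec')
"""

def sample():
    # Counter names are fixed-width nchar, hence the RTRIM above.
    return {name: value for name, value in conn.cursor().execute(SQL)}

# Per-second counters in this DMV are cumulative, so take two samples
# and divide the difference by the elapsed time.
first = sample()
time.sleep(10)
second = sample()

for name in ('Full Scans/sec', 'Page Splits/sec'):
    rate = (second[name] - first[name]) / 10.0
    flag = "  <-- above the 100/sec guideline" if rate > 100 else ""
    print(f"{name}: {rate:.1f}{flag}")
```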

Object: SQLServer:Buffer Manager

Counter: Buffer Cache Hit Ratio
Description: It should be around 99%; for OLTP workloads it should be between 90 and 96%.

Counter: Cache Size
Description: Assuming the server is devoted to SQL Server, this number should come close to the total amount of RAM in the server, less the RAM used by the operating system, SQL Server itself, and any utilities you have running on the server.

Object: SQLServer:Cache Manager

Counter: Cache Hit Ratio
Description: If the value for this counter is consistently less than 80 percent, you should allocate more memory to SQL Server.
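Buffer Cache Hit Ratio can be read from sys.dm_os_performance_counters, where the ratio counter has to be divided by its companion "base" counter to get a percentage. A minimal sketch, assuming pyodbc and an illustrative connection string:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

# Ratio-type counters are stored as two rows: the raw value and its base divisor.
row = conn.cursor().execute("""
    SELECT
        MAX(CASE WHEN counter_name = 'Buffer cache hit ratio'      THEN cntr_value END) AS ratio,
        MAX(CASE WHEN counter_name = 'Buffer cache hit ratio base' THEN cntr_value END) AS base
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Buffer Manager%'
""").fetchone()

if row.base:
    hit_ratio = 100.0 * row.ratio / row.base
    print(f"Buffer cache hit ratio: {hit_ratio:.2f}%")
    if hit_ratio < 90:
        print("Below the 90% figure mentioned above - consider giving SQL Server more memory")
```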

Object: System

Counter: % Total Processor Time
Description: If this counter exceeds 80% for continuous periods (over 10 minutes or so), you may have a CPU bottleneck. The counter value is an average across all CPUs.

Counter: Processor Queue Length
Description: If the Processor Queue Length exceeds 2 per CPU for continuous periods (over 10 minutes or so), you probably have a CPU bottleneck.
Threshold: 2 per CPU

Counter: Context Switches/sec
Description: Should not exceed 8,000 per second. If it consistently does, consider switching to fibers (lightweight pooling).
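Context switch totals can also be sampled outside Performance Monitor. The sketch below uses psutil (an assumption) and applies the 8,000/sec guideline from the table:

```python
import time
import psutil

# cpu_stats() reports cumulative context switches since boot,
# so sample twice and compute the per-second rate.
before = psutil.cpu_stats().ctx_switches
time.sleep(10)
after = psutil.cpu_stats().ctx_switches

rate = (after - before) / 10.0
print(f"Context switches/sec: {rate:,.0f}")
if rate > 8000:
    print("Above the 8,000/sec guideline - investigate before considering fiber mode")
```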

Object: Objects

Counter: Processes
Description: Shows the processes currently using CPU time.

Object: PhysicalDisk

Counter: % Disk Time
Description: Shows how busy all the disk drives are. As a rule of thumb, the % Disk Time counter should run at less than 55%. If it exceeds 55% for continuous periods (over 10 minutes or so), your SQL Server may be experiencing an I/O bottleneck. If you suspect a physical disk bottleneck, you may also want to monitor the % Disk Read Time and % Disk Write Time counters to help determine whether the bottleneck is caused mostly by reads or by writes.
Threshold: 90%

Counter: Avg. Disk Queue Length
Description: Shows how busy the drives are. If the Avg. Disk Queue Length exceeds 2 for continuous periods (over 10 minutes or so) for each disk drive in an array, you probably have an I/O bottleneck for that array. You will need to calculate this figure yourself, because Performance Monitor does not know how many physical drives are in an array.

Counter: Current Disk Queue Length
Threshold: 2 * number of spindles

Counter: Avg. Disk sec/Transfer
Description: Reflects how much time a disk takes to fulfill requests. A high value might indicate that the disk controller is continually retrying the disk because of failures; these retries increase the average disk transfer time. For most disks, high average disk transfer times correspond to values greater than 0.3 seconds.
Threshold: 0.3 sec

Counter: Avg. Disk Bytes/Transfer
Description: A value greater than 20 KB indicates that the disk drive is generally performing well; low values result when an application accesses a disk inefficiently. For example, applications that access a disk at random raise Avg. Disk sec/Transfer times, because random transfers require increased seek time.
Threshold: > 20 KB
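The transfer-time and bytes-per-transfer figures can be approximated from cumulative OS disk counters. A sketch using psutil (an assumption), applying the 0.3-second and 20 KB guidelines above:

```python
import time
import psutil

# disk_io_counters() values are cumulative, so compute deltas over a window.
start = psutil.disk_io_counters()
time.sleep(30)
end = psutil.disk_io_counters()

transfers = (end.read_count - start.read_count) + (end.write_count - start.write_count)
busy_ms = (end.read_time - start.read_time) + (end.write_time - start.write_time)
bytes_moved = (end.read_bytes - start.read_bytes) + (end.write_bytes - start.write_bytes)

if transfers:
    sec_per_transfer = (busy_ms / 1000.0) / transfers
    kb_per_transfer = bytes_moved / transfers / 1024
    print(f"Avg sec/transfer:   {sec_per_transfer:.4f} s  (investigate if > 0.3 s)")
    print(f"Avg bytes/transfer: {kb_per_transfer:.1f} KB (healthy if > 20 KB)")
```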

Object: SQLServer:SQL Statistics

Counter: Batch Requests/sec
Description: One way to help identify whether you have exceeded the NIC capacity of your SQL Server is to watch this counter, which measures the number of SQL batches per second that SQL Server is being given. Generally speaking, a single 100 Mbps NIC can handle about 3,000 batch requests per second. If your system consistently exceeds this amount, consider adding network cards or moving to a faster network card.
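Batch Requests/sec is another cumulative counter in sys.dm_os_performance_counters, so it is rate-derived the same way. A sketch assuming pyodbc and an illustrative connection string, using the ~3,000/sec figure above:

```python
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

SQL = """
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name = 'Batch Requests/sec'
"""

def batches():
    return conn.cursor().execute(SQL).fetchone()[0]

# Cumulative total: sample twice to get the per-second rate.
first = batches()
time.sleep(10)
rate = (batches() - first) / 10.0

print(f"Batch Requests/sec: {rate:.0f}")
if rate > 3000:
    print("Above ~3,000/sec - roughly the capacity guideline for a single 100 Mbps NIC")
```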

Object: Memory

Counter: Pages/sec
Description: This should be close to zero on a dedicated SQL Server. You will see spikes during backups and restores, but this is normal. If SQL Server is the only application running, it should average 0 over a 24-hour period. If the counter averages over 20 in a 24-hour period, your server most likely needs more RAM; the more RAM a server has, the less paging it has to perform.
Threshold: 20

Counter: Available Bytes
Description: This value should be greater than 5 MB; if it is not, more paging will happen. Tune it by adjusting the server memory options.
Threshold: less than 4 MB

Counter: Page Faults/sec
Description: The Page Faults/sec counter tells us the number of hard page faults, i.e. pages that have to be retrieved from the hard disk because they are not in working memory. It also includes the number of pages written to the hard disk to free space in the working set to support a hard page fault. A high number of Page Faults/sec indicates excessive paging. It may be necessary to take a more in-depth look at individual instances of Process: Page Faults/sec to see whether, for example, the SQL Server process has excessive paging. A low rate of Page Faults/sec (commonly 5-10 per second) is normal, as the operating system continues to do some housekeeping on the working set. If Memory: Page Faults/sec is greater than Memory: Cache Faults/sec, there is too much paging.
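A quick scripted check of available memory, using psutil (an assumption) and the guideline figures quoted above:

```python
import psutil

mem = psutil.virtual_memory()
available_mb = mem.available / (1024 * 1024)

print(f"Available memory: {available_mb:,.0f} MB of {mem.total / (1024 * 1024):,.0f} MB")
if available_mb < 5:
    # 5 MB is the (very dated) floor quoted in the description above.
    print("Available memory critically low - expect heavy paging; review SQL Server memory settings")
```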

Object: Paging File

Counter: Paging File\% Usage
Description: Review this value in conjunction with Available Bytes and Pages/sec to understand paging activity on your computer.
Threshold: 99%

Object: Process

Counter: % Processor Time
Description: This counter measures processor time for an individual process. If the System: % Total Processor Time counter on your multiple-CPU server regularly runs over 80% or so, you may want to start monitoring the System: Context Switches/sec counter as well.

Counter: Page Faults/sec
Description: A low number for Available Bytes indicates that there may not be enough memory available, or that processes, including SQL Server, may not be releasing memory. A high number of Page Faults/sec indicates excessive paging. It may be necessary to take a more in-depth look at individual instances of Process: Page Faults/sec to see whether, for example, the SQL Server process has excessive paging. A low rate of Page Faults/sec (commonly 5-10 per second) is normal, as the operating system continues to do some housekeeping on the working set.
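Per-process figures such as these can also be sampled directly for the SQL Server process. A sketch using psutil (an assumption); the image name sqlservr refers to the SQL Server service executable:

```python
import psutil

# Find SQL Server processes by image name (sqlservr.exe on Windows).
sql_procs = [p for p in psutil.process_iter(['name'])
             if (p.info['name'] or '').lower().startswith('sqlservr')]

for proc in sql_procs:
    cpu = proc.cpu_percent(interval=5)            # % of one CPU over a 5 s sample
    working_set_mb = proc.memory_info().rss / (1024 * 1024)
    print(f"PID {proc.pid}: CPU {cpu:.1f}%, working set {working_set_mb:,.0f} MB")
```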

Object: SQLServer:General Statistics

Counter: User Connections
Description: This counter shows the number of user connections. Use this number as a starting point for further investigation.
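User Connections is a point-in-time value in sys.dm_os_performance_counters, so a single read is enough. A minimal sketch assuming pyodbc and an illustrative connection string:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

row = conn.cursor().execute("""
    SELECT cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%General Statistics%'
      AND counter_name = 'User Connections'
""").fetchone()

print(f"Current user connections: {row[0]}")
```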

Object: Network Interface

Counter: Bytes Total/sec
Description: Compare this counter value with the maximum value supported by the network connection. The counter shows total bytes for the whole server, not only for SQL Server. The connection might be 10 Mbps, 100 Mbps, or even 1 Gbps, so the results must be interpreted in light of which type of connection you have. Ideally, give the server a network connection to its own dedicated switch port for maximum performance.

Counter: % Network Utilization
Description: This counter shows what percentage of the bandwidth is being used on the network connection your server is using. This is not the amount of bandwidth being sent to and from your server, but the total bandwidth being used on the connection the network card is attached to.
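Bytes Total/sec for the server itself can be approximated from cumulative OS network counters; note that, unlike % Network Utilization, this only sees the server's own traffic, not the whole segment. A sketch using psutil, with the link speed as a stated assumption:

```python
import time
import psutil

LINK_SPEED_BPS = 100_000_000   # assumed 100 Mbps link; set this to your NIC's actual speed

start = psutil.net_io_counters()
time.sleep(10)
end = psutil.net_io_counters()

total_bytes = (end.bytes_sent - start.bytes_sent) + (end.bytes_recv - start.bytes_recv)
bytes_per_sec = total_bytes / 10.0
utilization = 100.0 * (bytes_per_sec * 8) / LINK_SPEED_BPS

print(f"Bytes Total/sec: {bytes_per_sec:,.0f}")
print(f"Approx. utilization of the assumed {LINK_SPEED_BPS // 1_000_000} Mbps link: {utilization:.1f}%")
```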
