
Microsoft® Business Solutions–Navision®


Database Resource Kit
MICROSOFT® BUSINESS SOLUTIONS–NAVISION®
DATABASE RESOURCE KIT
DISCLAIMER

This material is for informational purposes only. Microsoft Business Solutions ApS
disclaims all warranties and conditions with regard to use of the material for other
purposes. Microsoft Business Solutions ApS shall not, at any time, be liable for any
special, direct, indirect or consequential damages, whether in an action of contract,
negligence or other action arising out of or in connection with the use or performance
of the material. Nothing herein should be construed as constituting any kind of
warranty.

The example companies, organizations, products, domain names, email addresses,
logos, people and events depicted herein are fictitious. No association with any real
company, organization, product, domain name, e-mail address, logo, person, or event
is intended or should be inferred. The names of actual companies and products
mentioned herein may be the trademarks of their respective owners.

COPYRIGHT NOTICE

Copyright © 2006 Microsoft Business Solutions ApS, Denmark. All rights reserved.

TRADEMARK NOTICE

Microsoft, Great Plains, Navision, FRx, AssistButton, C/AL, C/FRONT, C/ODBC,
C/SIDE, FlowField, FlowFilter, Navision Application Server, Navision Database
Server, Navision Debugger, Navision Financials, Microsoft Business
Solutions–Navision, SIFT, SIFTWARE, SQL Server, SumIndex, SumIndexField,
Windows, Windows 2000, Windows 2000 Server, Windows XP are either registered
trademarks or trademarks of Microsoft Corporation, Great Plains Software, Inc., FRx
Software Corporation, or Microsoft Business Solutions ApS or their affiliates in the
United States and/or other countries. Great Plains Software, Inc., FRx Software
Corporation, and Microsoft Business Solutions ApS are subsidiaries of Microsoft
Corporation.
PREFACE

The Microsoft® Business Solutions–Navision® Database Resource Kit is a toolbox
that is designed to help you fine-tune and troubleshoot your Navision 4.0 installation.

The Database Resource Kit consists of a number of tools and a manual. This manual
contains guidelines and instructions on how to use the tools on both server options –
Microsoft Business Solutions–Navision SQL Server Option and Navision Database
Server.

This manual is divided into two sections; the first deals exclusively with the SQL
Server Option while the second explains how to troubleshoot Navision on both server
platforms.

All of the tools and information contained in the Database Resource Kit have been
previously available in the Microsoft Business Solutions–Navision SQL Server Option
Resource Kit and the Performance Troubleshooting Guide for Microsoft Business
Solutions–Navision.

The tools have been tested on SQL Server 2005 and the manual has been updated to
reflect recent changes that have been implemented in Navision.
TABLE OF CONTENTS

PART 1 SQL SERVER TOOL KIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


Chapter 1 Running Navision on SQL Server . . . . . . . . . . . . . . . . . . . 3
Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Form Design and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Locking and Deadlocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Using PINTABLE for Small Hot Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
SQL Server Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Useful Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

PART 2 PERFORMANCE TROUBLESHOOTING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65


Chapter 2 Performance Problems – An Introduction. . . . . . . . . . . . 67
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
The Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Client Performance Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Common Performance Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Chapter 3 Identifying Performance Problems . . . . . . . . . . . . . . . . . 75


Using the Session Monitor to Locate the Clients that Cause Performance
Problems on Navision Database Server . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Using the Session Monitor to Locate the Clients that Cause Performance
Problems on SQL Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Time Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
The Client Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Chapter 4 Other Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101


Hardware Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
SQL Server Error Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Keys, Queries and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Locking in Navision – A Comparison of Navision Database Server and SQL
Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

Appendix A Object List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


Object List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Part 1
SQL Server Tool Kit
Chapter 1
Running Navision on SQL Server

This chapter describes several considerations that
Microsoft® Business Solutions–Navision® consultants must
bear in mind when they are implementing the Microsoft
Business Solutions–Navision SQL Server Option.

This tool kit was originally designed for use with Navision
3.70/4.0 on SQL Server 2000. It has been updated and
tested for use with SQL Server 2005, even though some of
the articles are based on previous versions of the products.

This chapter is divided into several sections that
correspond to the different phases of implementing the
SQL Server Option for Navision:

· Planning
· Design
· Tools
· Form Design and Performance
· Locking and Deadlocks
· Using PINTABLE for Small Hot Tables
· SQL Server Maintenance
· Useful Links

1.1 PLANNING
One of the first things that a consultant must consider is the system requirements that
the customer's installation must conform with. For a complete list of the system
requirements for running Microsoft Business Solutions–Navision on SQL Server, see
http://www.microsoft.com/sql/prodinfo/overview/default.mspx

Maximum Capacity Specifications for SQL Server


The following table specifies the maximum sizes and numbers of various objects
defined in Microsoft SQL Server databases or referenced in Transact-SQL
statements:

Object | SQL Server 2000 | SQL Server 2005
Batch size | 65,536 * Network Packet Size (1) | 65,536 * Network Packet Size (1)
Bytes per short string column | 8,000 | 8,000
Bytes per text, ntext, or image column | 2 GB-2 | 2 GB-2
Bytes per GROUP BY, ORDER BY | 8,060 | 8,060
Bytes per index | 900 (2) | 900
Bytes per foreign key | 900 | 900
Bytes per primary key | 900 | 900
Bytes per row | 8,060 | 8,060
Bytes in source text of a stored procedure | Lesser of batch size or 250 MB | Lesser of batch size or 250 MB
Clustered indexes per table | 1 | 1
Columns in GROUP BY, ORDER BY | Limited only by number of bytes per GROUP BY, ORDER BY | Limited only by number of bytes per GROUP BY, ORDER BY
Columns or expressions in a GROUP BY WITH CUBE or WITH ROLLUP statement | 10 | 10
Columns per index | 16 | 16
Columns per foreign key | 16 | 16
Columns per primary key | 16 | 16
Columns per base table | 1,024 | 1,024
Columns per SELECT statement | 4,096 | 4,096
Columns per INSERT statement | 1,024 | 1,024
Connections per client | Maximum value of configured connections | Maximum value of configured connections
Database size | 1,048,516 TB (3) | 1,048,516 TB (3)
Databases per instance of SQL Server | 32,767 | 32,767
Filegroups per database | 256 | 32,767
Files per database | 32,767 | 32,767
File size (data) | 32 TB | 16 TB
File size (log) | 32 TB | 2 TB
Foreign key table references per table | 253 | 253
Identifier length (in characters) | 128 | 128
Instances per computer | 16 | 16
Length of a string containing SQL statements (batch size) | 65,536 * Network packet size (1) | 65,536 * Network packet size (1)
Locks per connection | Max. locks per server | Max. locks per server
Locks per instance of SQL Server | 2,147,483,647 (static) or 40% of SQL Server memory (dynamic) | Up to 2,147,483,647
Nested stored procedure levels | 32 | 32
Nested sub-queries | 32 | 32
Nested trigger levels | 32 | 32
Non-clustered indexes per table | 249 | 249
Objects concurrently open in an instance of SQL Server (4) | 2,147,483,647 (or available memory) | 2,147,483,647 (or available memory)
Objects in a database | 2,147,483,647 (4) | 2,147,483,647 (4)
Parameters per stored procedure | 1,024 | 2,100
REFERENCES per table | 253 | 253
Rows per table | Limited by available storage | Limited by available storage
Tables per database | Limited by number of objects in a database (4) | Limited by number of objects in a database
Tables per SELECT statement | 256 | 256
Triggers per table | Limited by number of objects in a database (4) | Limited by number of objects in a database
UNIQUE indexes or constraints per table | 249 non-clustered and 1 clustered | 249 non-clustered and 1 clustered

(1) Network Packet Size is the size of the tabular data stream (TDS) packets used to
communicate between applications and the relational database engine. The default
packet size is 4 KB and is controlled by the network packet size configuration option.
(2) The maximum number of bytes in any key cannot exceed 900 in SQL Server 2000.
You can define a key using variable-length columns whose maximum sizes add up to
more than 900, provided no row is ever inserted with more than 900 bytes of data in
those columns. For more information, see Maximum Size of Index Keys.
(3) The size of a database cannot exceed 2 GB when using the SQL Server 2000
Desktop Engine or the Microsoft Data Engine (MSDE) 1.0.
(4) Database objects include all tables, views, stored procedures, extended stored
procedures, triggers, rules, defaults, and constraints. The sum of the number of all
these objects in a database cannot exceed 2,147,483,647.

Maximum Number of Processors Supported by SQL Server


The following table lists the maximum number of processors supported by each
edition of SQL Server 2005:

SQL Server 2005 Edition | Number of processors supported (32-bit) | Number of processors supported (64-bit)
Enterprise Edition | OS maximum (1) | OS maximum (1)
Developer Edition | 32 | 64
Standard Edition | 4 | 4
Workgroup Edition | 2 | N/A (2)
SQL Server Express Edition | 1 | N/A (2)
Evaluation Edition | OS maximum | OS maximum

(1) SQL Server 2005 Enterprise Edition supports the maximum number of processors
supported by the operating system.
(2) This edition of SQL Server 2005 is not available for the 64-bit platform in this release.

The following table lists the number of processors that the database engine in each
SQL Server 2000 edition can support on symmetric multiprocessing (SMP)
computers:

Operating System | Enterprise Edition | Standard Edition | Personal Edition | Developer Edition | Desktop Engine | SQL Server CE | Enterprise Evaluation Edition
Windows 2003 Server | 32 | 4 | 2 | 32 | 2 | 1 | 32
Windows 2000 DataCenter | 32 | 4 | 2 | 32 | 2 | N/A | 32
Windows 2000 Advanced Server | 8 | 4 | 2 | 8 | 2 | N/A | 8
Windows 2000 Server | 4 | 4 | 2 | 4 | 2 | N/A | 4
Windows 2000 Professional | N/A | N/A | 2 | 2 | 2 | N/A | 2


Maximum Physical Memory Supported by the Editions of SQL Server


The following table specifies the maximum memory support for each edition of
Microsoft SQL Server 2005:

SQL Server 2005 Edition | Maximum memory supported (32-bit) | Maximum memory supported (64-bit)
Enterprise Edition | OS maximum (1) | OS maximum (1)
Developer Edition | OS maximum (1) | 32 TB
Standard Edition | OS maximum (1) | 32 TB
Workgroup Edition | 3 GB | N/A (2)
SQL Server Express Edition | 1 GB | N/A (2)
Evaluation Edition | OS maximum | OS maximum

(1) SQL Server 2005 Enterprise Edition will support the maximum memory supported by
the operating system.
(2) This edition of SQL Server 2005 is not available for the 64-bit platform in this release.

The following table shows the maximum physical memory, or RAM, that the database
engine in each SQL Server 2000 edition can support:

Operating System | Enterprise Edition | Standard Edition | Personal Edition | Developer Edition | Desktop Engine | SQL Server CE | Enterprise Evaluation Edition
Windows 2003 Server | 64 GB | 2 GB | 2 GB | 64 GB | 2 GB | N/A | 64 GB
Windows 2000 DataCenter | 64 GB | 2 GB | 2 GB | 64 GB | 2 GB | N/A | 64 GB
Windows 2000 Advanced Server | 8 GB | 2 GB | 2 GB | 8 GB | 2 GB | N/A | 8 GB
Windows 2000 Server | 4 GB | 2 GB | 2 GB | 4 GB | 2 GB | N/A | 4 GB
Windows 2000 Professional | N/A | N/A | 2 GB | 2 GB | 2 GB | N/A | 2 GB


1.2 DESIGN
The next phase in rolling out a customer's installation is to design the application so
that it meets that customer's needs.

There are many resources that the Navision consultant can access to learn how to
design a Navision application for the SQL Server Option. These include:

· The Application Designer's Guide available on the product CD. This manual
contains all the information you need to create all of the objects that Navision
supports. For the SQL Server Option the topics covered include: data types, SIFT,
numbers & sorting, locking, record level security, performance and the NDBCS
Driver.
· Installation & System Management: Microsoft SQL Server Option available on the
product CD. This manual tells you how to install and implement the SQL Server
Option. Topics covered include: Installation, database creation, backups and
restore, database testing.
· C/SIDE Reference Guide Online Help. An online Help project installed with the
client that explains all of the functions supported by the C/AL programming
language.

Using Find Statements


The Navision application has traditionally used two different Find statements to locate
records in tables. These are FIND('-') and FIND('+').

These functions are used to:

· Find either the first or last record in a table or set of records.


· Check whether or not there are any records in the table.
· Loop through the records in the table or set.

These functions work very well on Navision Database Server because it uses the
ISAM (Indexed Sequential Access Method) disk access method and therefore reads
records individually.

These FIND statements can also be used to return a set of records, and the code can
even select a key in the index that the loop is based on and then modify that key. While
Navision Database Server has no difficulty with these 'ambiguous' statements, it is
exactly this ambiguity that makes it very difficult for Navision to issue precise SQL
statements. This means that SQL Server does not always interpret these functions
correctly and tends to use set-based queries to return small sets of records rather than
just returning the single record that was requested. This is generally the case when
the WHILE FIND('-') method is used instead of the REPEAT UNTIL NEXT method.

This is an extremely inefficient way to use SQL Server. SQL Server prefers more
precise statements that enable it to reuse cursors more frequently.
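
The difference between the two looping patterns can be sketched as follows. This is only an
illustration; the Item table, the Blocked filter and the processing shown are not taken from the
standard application code:

// Pattern that prevents the driver from detecting a loop - FIND('-') is re-issued on every
// pass, and each record must fall outside the filter after it is processed:
Item.SETRANGE(Blocked, TRUE);
WHILE Item.FIND('-') DO BEGIN
  Item.Blocked := FALSE;
  Item.MODIFY;
END;

// Pattern that the driver can optimize - one FIND (or, better, FINDSET as described below)
// followed by NEXT:
Item.SETRANGE(Blocked, TRUE);
IF Item.FIND('-') THEN
  REPEAT
    // process the record
  UNTIL Item.NEXT = 0;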


Navision now contains more unambiguous functions that should be used to perform
these tasks:

· FINDFIRST
· FINDLAST
· FINDSET

These functions perform the same tasks in a much more efficient manner and don't
overload the server with unnecessary cursors. These functions result in fewer server
calls being made, a simpler execution plan and the more efficient reuse of cursors.

FINDFIRST and FINDLAST


The following excerpts from the Navision Client Monitor illustrate how server calls
have been interpreted by SQL Server in earlier versions of Navision:

In the first section, FIND('-') is used without locking (READUNCOMMITTED) for 5
iterations. As you can see, it took four iterations before the Navision SQL driver
NDBCS.dll realized that what we really wanted it to do was find the first record. After it
generates the SELECT TOP 1 statement, the record is cached and no more server
calls are made.

In the second section, FIND('-') is used with locking (UPDLOCK) and the story is the
same. It took four iterations before the Navision SQL driver realized that what we
really wanted it to do was find the first record.


The following excerpts illustrate how the new FINDFIRST function is interpreted more
efficiently by SQL Server:

In the first section, FINDFIRST is used without locking (READUNCOMMITTED) for 5
iterations. As you can see, the Navision SQL driver was able to issue the correct call
immediately. After the record is cached, no more SQL statements were generated and
no more server calls were made.

Similarly, in the second section when FINDFIRST is used with locking (UPDLOCK) the
correct statement is issued.

The effect is the same when you use FINDLAST; a more precise SQL statement is sent
to the server, fewer server calls are made, a simpler 'SQL plan' is generated and
cursors are avoided.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
You should use the FINDFIRST function instead of FIND('-') when you only need
the first record in a table/set.
You should use the FINDLAST function instead of FIND('+') when you only need the
last record in a table/set.
You should only use these functions when you explicitly want to find the first or the last
record in a table/set. Do not use these functions in combination with REPEAT ..
UNTIL.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
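
A minimal C/AL sketch of this recommendation (the filters, values and variable names are only
examples, not taken from the standard application code):

// Only the first record within the filter is needed - no loop follows:
Customer.SETRANGE("Currency Code", 'GBP');
IF Customer.FINDFIRST THEN
  MESSAGE('First customer: %1', Customer."No.");

// Only the last record is needed, for example to read the highest entry number:
CustLedgEntry.SETRANGE("Customer No.", '10000');
IF CustLedgEntry.FINDLAST THEN
  LastEntryNo := CustLedgEntry."Entry No.";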

Looping with FINDSET


The FINDSET function is designed to make searching through sets of records more
efficient. This function retrieves a set of records from the table and loops through
them.

You should only use the FINDSET function when you explicitly want to loop through a
recordset. You should ONLY use this function in combination with REPEAT .. UNTIL.


Furthermore, FINDSET only supports ascending loops. If you want to loop from the
bottom up, you should use FIND('+').

The FINDSET function optimizes loops by:

· Generating more explicit code on SQL Server. This means that the code used in
the Navision SQL Server Option must use this function explicitly to ensure that the
driver generates optimal statements.
· Reusing cursors more efficiently or avoiding them totally.

This function works alongside the database property Record Set.

To set this property, open Navision and click File, Database, Alter and click the
Advanced tab:

This property allows you to specify how many records are cached when Navision
performs a FINDSET operation for a read-only loop (with the ForUpdate parameter set
to FALSE). The default setting is 500 records.

If a FINDSET statement reads more than the number of records specified here,
additional SQL statements will be sent to the server after the records have been read
with NEXT and this will degrade performance. Increasing this value also increases the
amount of memory that the client uses to support each FINDSET statement.

The syntax of the FINDSET function is:

[Ok :=] FINDSET([ForUpdate][, UpdateKey])

The general rules for using FINDSET are:

· FINDSET(FALSE,FALSE)- read-only. This uses no server cursors and the record set
is read with a single server call.
· FINDSET(TRUE,FALSE)- is used to locate and update non-key fields. This uses a
cursor with a fetch buffer similar to FIND('-').
· FINDSET(TRUE,TRUE)- is used to locate and update key fields.

If you have to loop and use a NEXT statement, use FINDSET:

IF FINDSET([ForUpdate][, UpdateKey]) THEN
  REPEAT
    // process the record
  UNTIL NEXT = 0;


Part of the reasoning behind this is that SQL Server is set-based and when you use
FINDFIRST, for example, with NEXT, you are constantly changing the filter that is used,
changing the sort order and changing the key value. This means that the cursor
loses track of where it is and must generate a more complicated query every time it
has to find a new record.

When you are writing code you should remember that SQL Server is set-based and
use this to improve your code. For example, if you only need a small number of
records, you could fetch a small set of records and then throw away the ones that you
don't need.

If you want to retrieve some records from SQL Server and then modify all or some of
these records, you should always place a lock on the table before calling FINDSET.

If you do not lock the table, call FINDSET and then try to modify the set, SQL Server
will throw the first set away and create a new cursor before retrieving a new set and
modifying it. This is both an ugly and expensive scenario.

You should:

Locktable
FINDSET
Then modify

This method is both short and direct. Furthermore, by setting LOCKTABLE you tell the
server that you want to perform an update.
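
A minimal sketch of this pattern in C/AL (the table, the filter values and the field that is
modified are only examples):

SalesLine.LOCKTABLE;                           // tell the server that an update will follow
SalesLine.SETRANGE("Document Type", SalesLine."Document Type"::Order);
SalesLine.SETRANGE("Document No.", 'S-ORD-1001');   // example document number
IF SalesLine.FINDSET(TRUE) THEN                // ForUpdate = TRUE; no key fields are modified
  REPEAT
    SalesLine."Qty. to Ship" := 0;
    SalesLine.MODIFY;
  UNTIL SalesLine.NEXT = 0;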

Retrieving large sets

When you use FINDSET without any parameters it functions as a firehose and fetches
around 500 records. If you want to retrieve more than 500 records, you can still use
the FIND('-') function. Alternatively, you can use FINDSET(True).

If you need less than 500 records, use LOCKTABLE and then FINDSET.

In summary, there are three simple rules that you should follow when using FIND
statements:

· If you are writing a NEXT statement, always use FINDSET.


· Always tell the server when you want to update some records. Call LOCKTABLE first
and then use FINDSET to modify the records.
· Your code should match the size of the record set that you want to retrieve.

Understanding Navision Indexes and the SQL Server Option


Understanding how to design indexes so that they work optimally in the SQL Server
Option is one of the keys to building a successful application.

In Navision, indexes are created for several purposes, the most important of which are:

· Data retrieval:
To quickly retrieve a result set based on a filter. For example, you want to view all
the customer ledger entries for a specific customer. There is a table in Navision
called Cust. Ledger Entry (table 21) and the primary index of this table is the
Entry No. field (Integer). There is also a secondary index consisting of the
Customer No., Posting Date and the Currency Code fields that you can use to
quickly retrieve a result set that is filtered by Customer No.

· Sorting:
To display a result set in a specific order. For example, you want to see all the
different document types from the Cust. Ledger Entry table for the same
customer. In this case you could filter on the index that consists of the Document
Type, Customer No., Posting Date and the Currency Code fields.

· SIFT (Sum Index FlowField Technology):
SIFT is used to maintain pre-calculated sums for various columns. For example, if
you want to have daily/monthly/yearly summaries of the Sales and Profit columns in
the Cust. Ledger Entry table, you must have SIFT turned on for an index that
contains the following SumIndexFields: Sales (LCY), Profit (LCY). A small C/AL
sketch of how such a sum is requested follows this list.
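
A small C/AL sketch of how such a SIFT-supported sum is typically requested (the filter
values are only examples):

// CALCSUMS uses the SIFT structures for the current key when they are maintained:
CustLedgEntry.SETCURRENTKEY("Customer No.", "Posting Date", "Currency Code");
CustLedgEntry.SETRANGE("Customer No.", '10000');
CustLedgEntry.SETRANGE("Posting Date", 010106D, 311206D);
CustLedgEntry.CALCSUMS("Sales (LCY)", "Profit (LCY)");
MESSAGE('Sales: %1  Profit: %2', CustLedgEntry."Sales (LCY)", CustLedgEntry."Profit (LCY)");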

How Indexes Work in Navision


All the indexes in Navision are unique. Note that in the earlier example, the index is
Customer No., Posting Date, Currency Code, Entry No. to enforce uniqueness.

A primary index in Navision translates to a unique clustered index on SQL Server and
a secondary index in Navision translates to unique non-clustered index in SQL Server.

Navision Database Server supports SIFT effortlessly. However, in the SQL Server
Option, when a SIFT field is defined on any index an extra table is created on SQL
Server. This table is maintained by triggers that have been placed on the source data
table. Based on the earlier example, Navision creates a new SIFT table on SQL
Server for the source data table. The source data table is called CRONUS
International Ltd_$Cust_ Ledger Entry and the SIFT table is called CRONUS
International Ltd_$21$0 and looks like this:

CRONUS International Ltd_$21$0

[bucket] [int] total/day/month/and so on

[f3] [varchar] Field3 in Navision = "Customer No."

[f4] [datetime] Field4 in Navision = "Posting Date"

[f11] [varchar] Field11 in Navision = "Currency Code"

[s18] [decimal] Sum of Field18 in Navision = "Sales(LCY)"

[s19] [decimal] Sum of Field19 in Navision = "Profit(LCY)"

[s20] [decimal] Sum of Field20 in Navision = "Inv.Discount (LCY)"

The table name is constructed as: [Company Name$Table Number$SIFTindexNo]

· If this table contains another index that maintains SIFT, Navision creates yet
another SIFT table.


· Navision Database Server can only sort a result set in ascending or descending order
of an existing index. To have the results sorted by, for example, Customer No.,
Document Type, Posting Date, Currency Code, you must create an index in Navision
that matches this order.
· Tables that contain transaction details use an Integer as the primary index. In the
Cust. Ledger Entry table, the primary index is Entry No. - a field of data type
Integer.
· Boolean (Yes/No) fields in Navision are tinyint on SQL Server.
· Option fields in Navision are Integer on SQL Server. An example of an option field
in Navision is the Document Type field in the Cust. Ledger Entry table.
· The Navision client reads record by record (ISAM). Navision Database Server is
designed to work this way but SQL Server is not. The SQL Server Option for
Navision has been designed to detect when it is reading in a loop or reading single
records. When it detects that it is reading single records only, it switches to
singleton queries such as SELECT TOP 1 instead of set-based queries with
subsequent fetches.
This mechanism is disturbed if the application modifies a key in the index that the
loop is based on or if the WHILE FIND('-') method is used instead of the REPEAT
UNTIL NEXT method. Furthermore, the SQL Server query optimizer is often
disturbed by this and frequently switches to another non-clustered index scan or
clustered index scan execution plan. Navision Database Server is robust in this
scenario.

· Navision uses SIFT technology to display FlowFields. These are calculated fields
and are not stored permanently.
· An example is the Sales (LCY) field in the Customer table. This field is defined as:
Sum("Cust. Ledger Entry"."Sales (LCY)" WHERE (Customer
No.=FIELD(No.),Global Dimension 1 Code=FIELD(Global Dimension 1
Filter),Global Dimension 2 Code=FIELD(Global Dimension 2 Filter),Posting
Date=FIELD(Date Filter),Currency Code=FIELD(Currency Filter)))
· There are several different types of FlowField - SUM, AVERAGE, EXIST, COUNT,
MIN, MAX, LOOKUP. However, the standard Navision application does not use the
AVERAGE, MIN and MAX methods.
· In Navision, you can define which indexes should be maintained on SQL Server by
setting the MaintainSQLIndex property for each index. If the application code refers
to a specific index, it needs to be defined in the Navision table; however it doesn't
need to be defined on SQL Server. Therefore, if you need two 'similar' yet different
sort orders in the Navision application:
Field1, Field2, Field3, Field4

Field1, Field2, Field4, Field3

You must define both indexes. However, you can disable the maintenance of one of
the indexes on SQL Server. SQL Server will still be able to retrieve the result set
based on the index that is maintained and sort the result set in the requested sort
order.


· In Navision, you can specify which SIFT indexes should be maintained on SQL
Server by setting the MaintainSIFTIndex property for each index. When you decide
to maintain a SIFT index, you can also specify which SIFT levels should be
maintained by using the SIFTLevelsToMaintain property.
· In the Cust. Ledger Entry table, select the secondary index that consists of the
Customer No., Posting Date and the Currency Code fields and open the
Properties window for this key.
· In the Value field of the SIFTLevelsToMaintain property, click the AssistButton to
open the SIFT Levels List window:

In this window, you specify which SIFT buckets you want to maintain.

If Navision cannot return the requested sum by using the SIFT table, it performs the
SUM operation on the source table. This can take a long time if there are many
records in the source table's result set. However, if the result set is small, the response
time is similar. Furthermore, maintaining SIFT indexes is costly because every update
of a source table causes at least one and probably multiple updates of the SIFT
table(s).

· The Navision indexes have not been redesigned for SQL Server. The scope of the
product development when developing SQL Server Option was defined as "one
application on both server platforms".

· Navision Database Server is optimized for low selectivity keys. For example, if
there is an index that consists of the Document Type and Customer No. fields
and the application filters on the Customer No. field only, Navision Database
Server will search through the index branches and retrieve the result set quickly.
However, SQL Server is not optimized to do that; it scans from the beginning to the
end of a range, and in many cases this results in a non-clustered index scan.
· Both server options are inefficient when filtering on datetime fields, therefore the
Navision application defines indexes with datetime keys towards the end of an
index, for example: Document Type,Customer No.,Currency Code,Posting
Date.


Minimizing the Impact on SQL Server


To minimize the impact of the way the Navision application is designed when running
the SQL Server Option:

· Eliminate the maintenance of indexes that are only designed for sorting purposes.
SQL Server will sort the result set without these indexes.
· Redesign the indexes so that their selectivity becomes higher by putting Boolean,
Option and Date fields towards the end of the index.
· Do not maintain SIFT indexes on small tables or temporary tables. Examples of
temporary tables are: Sales Line, Purchase Line, Warehouse Activity Line, and
similar tables.
· If you loop through a table, set a separate looping variable.
· Never use WHILE FIND('-') or WHILE FIND('+') structures.

WHILE FIND('-') or WHILE FIND('+') logic is always used to look for the first or
last record in a set and therefore automatically disables the read ahead mechanism. It
is therefore recommended that you do not use this logic unless you have some very
good reasons for doing so.

Example of bad code:

Customer.SETCURRENTKEY("Currency Code");
Customer.SETRANGE("Currency Code", 'GBP');
WHILE Customer.FIND('-') DO BEGIN
Customer."Currency Code" := 'EUR';
Customer.MODIFY;
END;

For some good examples, see the section "Modifying the Key in an Index" on page 44.
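
Pending those examples, a minimal corrected sketch of the loop above could look like this.
FINDSET is called with both parameters set to TRUE because a field in the active key is modified:

Customer.SETCURRENTKEY("Currency Code");
Customer.SETRANGE("Currency Code", 'GBP');
Customer.LOCKTABLE;
IF Customer.FINDSET(TRUE, TRUE) THEN   // ForUpdate and UpdateKey, because "Currency Code" is a key field
  REPEAT
    Customer."Currency Code" := 'EUR';
    Customer.MODIFY;
  UNTIL Customer.NEXT = 0;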

Useful SQL Server Commands


This section contains a description of some useful SQL Server commands.

sp_helpindex
This command displays the indexes for a specified table.

Example 1
sp_helpindex [CRONUS International Ltd_$Cust_ Ledger Entry]


Result:

Some indexes are never or at most rarely used by the application (index $7 is used
only for one report in Navision - Salesperson Commission). You should consider
dropping these and thereby no longer maintaining them on SQL Server.

Example 2

sp_helpindex [CRONUS International Ltd_$Warehouse Activity Line]

Result:

Index_name | Index_description | Index_keys
$1 | nonclustered, unique located on PRIMARY | Source Type, Source Subtype, Source No_, Source Line No_, Source Subline No_, Unit of Measure Code, Action Type, Breakbulk No_, Original Breakbulk, Activity Type, No_, Line No_
$10 | nonclustered, unique located on PRIMARY | Activity Type, No_, Action Type, Bin Code, Line No_
$11 | nonclustered, unique located on PRIMARY | Activity Type, No_, Item No_, Variant Code, Action Type, Bin Code, Line No_
$12 | nonclustered, unique located on PRIMARY | Whse_ Document No_, Whse_ Document Type, Activity Type, Whse_ Document Line No_, Action Type, Unit of Measure Code, Original Breakbulk, Breakbulk No_, Lot No_, Serial No_, No_, Line No_
$13 | nonclustered, unique located on PRIMARY | Item No_, Bin Code, Location Code, Action Type, Variant Code, Unit of Measure Code, Breakbulk No_, Activity Type, Lot No_, Serial No_, No_, Line No_
$14 | nonclustered, unique located on PRIMARY | Item No_, Location Code, Activity Type, Bin Type Code, Unit of Measure Code, Variant Code, Breakbulk No_, Action Type, Lot No_, Serial No_, No_, Line No_
$15 | nonclustered, unique located on PRIMARY | Bin Code, Location Code, Action Type, Breakbulk No_, Activity Type, No_, Line No_
$16 | nonclustered, unique located on PRIMARY | Location Code, Activity Type, No_, Line No_
$17 | nonclustered, unique located on PRIMARY | Source No_, Source Line No_, Source Subline No_, Serial No_, Lot No_, Activity Type, No_, Line No_
$2 | nonclustered, unique located on PRIMARY | Activity Type, No_, Sorting Sequence No_, Line No_
$3 | nonclustered, unique located on PRIMARY | Activity Type, No_, Shelf No_, Line No_
$4 | nonclustered, unique located on PRIMARY | Activity Type, No_, Location Code, Source Document, Source No_, Action Type, Zone Code, Line No_
$5 | nonclustered, unique located on PRIMARY | Activity Type, No_, Due Date, Action Type, Bin Code, Line No_
$6 | nonclustered, unique located on PRIMARY | Activity Type, No_, Bin Code, Breakbulk No_, Action Type, Line No_
$7 | nonclustered, unique located on PRIMARY | Activity Type, No_, Bin Ranking, Breakbulk No_, Action Type, Line No_
$8 | nonclustered, unique located on PRIMARY | Activity Type, No_, Destination Type, Destination No_, Action Type, Bin Code, Line No_
$9 | nonclustered, unique located on PRIMARY | Activity Type, No_, Whse_ Document Type, Whse_ Document No_, Whse_ Document Line No_, Line No_
CRONUS International Ltd_$Warehouse Activity Line$0 | clustered, unique, primary key located on PRIMARY | Activity Type, No_, Line No_

This table is heavily over-indexed with long composite indexes. This table could be
very 'hot' in a busy warehousing environment. You should consider dropping the
maintenance on SQL Server of indexes with the same beginning or composition; the
result set that SQL Server then has to sort itself should be small enough to be sorted
quickly.


In this example, indexes $10, $11, $2, $3, $4, $5, $6, $7, $8 and $9 start with the same
fields [Activity Type],[No_], which are the same as the beginning of the clustered
index. If you always filter on these two keys, there is a high probability that the SQL
Server query optimizer will estimate that the cost of using the non-clustered index in
this scenario is too high (because using the clustered index will mean fetching the
data as well), and may decide to do a clustered index seek, or use any of the similar
indexes.

Additionally, if you reduce the number of indexes, SQL Server will not have to keep
these in memory, thereby increasing page life expectancy and improving
performance. Maintaining fewer indexes can mean better performance.

dbcc show_statistics
Example 1

dbcc show_statistics ([CRONUS International Ltd_$Cust_ Ledger Entry],[$3])

Result 1

The All Density column shows the selectivity of the key (how distinct the values are).
In this case, the first two keys have very poor selectivity; if you filter a result set on
these two keys only, SQL Server will have to fetch 1/3 of the table. In such cases, and
sometimes even if you filter further on this composite index, for example on the
[Customer No_] key, the SQL Server query optimizer might switch to a clustered
index scan and not use the 'best' index, or switch to another index and do a
nonclustered index scan or seek (remember that seek is good, scan is bad).

In this case, you could consider using the SQLIndex key property to drop (no longer
maintain) this index, and/or create a new index in Navision for example, Customer
No.,Document Type,… and maintain this one while disabling the maintenance of the
original index. This way, your Navision code will work and also give the optimum
performance.

The SQLIndex property allows you to define the actual fields that are used in the
corresponding index on SQL Server.

Example 2

dbcc show_statistics ([CRONUS International Ltd_$Cust_ Ledger Entry],[$1])


Result 2 (only the 'interesting' part is shown)

All density | Average Length | Columns
0.02 | 5.7974682 | Customer No_
0.01 | 13.797468 | Customer No_, Posting Date
0.01 | 19.063292 | Customer No_, Posting Date, Currency Code
1.2658228E-2 | 23.063292 | Customer No_, Posting Date, Currency Code, Entry No_

The All Density column shows the selectivity of the key (how distinct the values are).
In this case, the first key has good selectivity; if you filter a result set on that key, SQL
Server will have to fetch only 2 percent of the table. However, placing a field of the date
data type (Posting Date) towards the beginning of the index (as the second key) is not
very wise, because that field is too granular (it has too many distinct values). This means
that no matter what key you define next in the index and try to filter on, SQL Server will
not use the index efficiently for that part and will seek or scan the index because almost
the entire subset contains possible data.

Consider putting the date field at the end of the index. To avoid runtime errors in
Navision, keep the index in the table design but turn off the SIFT maintenance, and
design a new index which might give better performance.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
These examples are based on an earlier version of the Cust. Ledger Entry table.
The design of this table has been modified since then but the examples still illustrate
the usefulness of this SQL Server command.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Optimizing Navision Indexes and SIFT Tables


This section contains suggestions for a methodology and some tools that you can use
when optimizing Navision indexes on SQL Server.

Minimize the Number of Indexes


Don't maintain indexes that are only used for sorting purposes. SQL Server will sort
the result set. The main focus should be on quickly retrieving the result set. For
example, if you have several indexes that start with the same combination of keys
(index fields), you should maintain only one of them on SQL Server. Hopefully you can
identify the one that is most frequently used in the greatest number of situations. In
Navision, you should then use the MaintainSQLIndex property of the other indexes to
specify that they should no longer be maintained on SQL Server.

Indexes on 'Hot' Tables


Avoid having too many indexes on 'hot' tables because each record update means an
index update producing more disk I/Os. For example, if the Item Ledger Entry table
is growing by 1000 records per day and has 20 indexes, it can easily produce in
excess of 20000 disk I/Os per day. However, if you reduce the number of indexes to
for example 5, it greatly reduces the number of disk I/Os. Experience has shown that it
is always possible to reduce the number of indexes to between 5 and 7 and even less
on 'hot' tables.

Redesign Indexes for Better Selectivity


Redesign indexes so that their selectivity becomes higher. Remember:

· Do not place Boolean and option fields at the beginning of an index.


· Always put date fields towards the end of the index.

Furthermore, indexes like Posting Date,Customer No.,… must be avoided because
a filter on such an index typically spans almost the entire table, and the chances that
the query optimizer would pick this index are quite small.

Indexes like Document Type,Customer No.,… have very low selectivity on the first
key. You could create a new index Customer No.,Document Type, … and maintain it
on SQL Server while you turn off the maintenance of the original index on SQL Server.
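
As an illustration only (the field lists are shortened, exactly as above), the two keys could then
be set up like this:

Existing key:  Document Type,Customer No.,…     MaintainSQLIndex = No   (kept so that C/AL code referring to it still compiles)
New key:       Customer No.,Document Type,…     MaintainSQLIndex = Yes  (the index that is actually created on SQL Server)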

SQLIndex Key Property


Navision contains a key property, called SQLIndex that allows you to create indexes
on SQL Server that are different from the keys defined in Navision.

This property is designed to address selectivity issues by allowing you to optimize the
way that Navision uses SQL Server indexes without having to rewrite your application
or redesign your keys in Navision. This generally means rearranging the columns in
the SQL Server index but you can also define a completely new index if you wish.

If you want to create a unique index, you must include all the columns from the
primary key in the index you define on SQL Server. However non-unique indexes are
also allowed.

This property is designed to help you minimize the impact of the way the Navision
application is designed when running the SQL Server Option.

You can use it to:

· Stop maintaining indexes that are only designed for sorting purposes. After you
have removed or changed these indexes, SQL Server will still sort the result.
· Redesign the indexes so that their selectivity becomes higher by putting Boolean,
Option and Date fields towards the end of the index.
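
For example (a sketch only, using hypothetical key fields), the key order can stay unchanged in
C/SIDE while the SQLIndex property defines a different column order for the index that is
created on SQL Server:

Key (in Navision):   Document Type,Customer No.,Posting Date
SQLIndex property:   Customer No.,Document Type,Posting Date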

Clustered Key Property


In Navision, the clustered index is normally the primary key. The Clustered property
allows you to specify that another key should be the clustered index. You can also
have tables that do not contain any clustered key.

This is a way to help the SQL Server Query Optimizer determine the optimal execution
plan for each query.


To set the SQLIndex and Clustered properties for a table, open the table you want to
redesign, open the Keys window, select the key you want to modify and open the
Properties window for that key:

Needless to say, the SQLIndex and Clustered properties are not a cure-all for every
problem that keys might be causing in your application. They should only be used
after you have given lengthy consideration to the needs of your Navision application.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Before you redesign a table and the indexes that it contains, you must ensure that the
Maintain relationships database property is not checked. To see whether or not the
property is used in your database, click File, Database, Alter to open the Alter
Database window and click the Integration tab. Remember to check the property
again after the table has been redesigned.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

SIFT Maintenance on Small and Intermediate Tables


Do not maintain SIFT indexes on small tables or on intermediate tables. Examples of
intermediate tables are Sales Line, Purchase Line, Warehouse Activity Line, and
similar tables that only contain records for a short period. SQL Server can retrieve
sums directly from the 'source' table when the SIFT indexes are not maintained. If the
data set within the filter is small, the sum is calculated quickly and at the same time all
the updates of the 'source' table are performed more quickly because Navision does
not have to update all the associated SIFT tables.

Minimize Number of SIFT Buckets


Minimize the number of buckets that are maintained for each SIFT index. There is, for
example, no reason to maintain the daily bucket for the Cust. Ledger Entry table if
you only post several invoices and payments a month for the same customer.


Furthermore, if you have several SIFT indexes defined on a table that you design, you
should investigate whether or not some of the buckets are already maintained by
another index. For example if you have two indexes Customer No.,Currency
Code,… and Customer No.,Open,… which both maintain Amount (LCY) sums, you
could disable the bucket that maintains the totals per Customer No. in one of the
indexes.

Do not assume that the following example is 100% correct - there might be different
record sizes, different fill factors, different block and sector/stripe sizes, and so on. It is
merely provided as an illustration.

If a SIFT table contains a lot of buckets, every single update of the 'source' table
produces a large number of bucket updates. Remember that each SIFT table has two
indexes. For example, imagine that you maintain 40 buckets on the G/L Entry table
and 8 indexes - every time you insert a record into the G/L Entry table, the database
engine must update 40 buckets (80 index entry updates), one 'source' record and its 8
indexes (all/some keys). This single insert could produce in excess of 100 I/Os on the
disk subsystem.

Obviously the smaller the records in the table the smaller the problems associated
with the indexes and the SIFT indexes become.


1.3 TOOLS
This section describes some useful tools that you can use with the SQL Server Option
for Navision.

The topics covered include:

· Index size and structure


· Database size and growth
· Selecting and fetching on SQL Server

Indexes
This section describes how to analyze the indexes that have been created in your
Navision database.

Indexes per Table


The following query will list all the indexes in a Navision database by table:

SELECT
OBJECT_NAME(id) AS [Object Name],
name AS [Index Name]
FROM sysindexes
WHERE (name NOT LIKE '_WA_%') AND (id > 255)
ORDER BY [Object Name]

Number of Indexes per Table


The following query will show the number of indexes per table in a Navision database
sorted by the number of indexes:

SELECT
OBJECT_NAME(id) AS [Object Name],
COUNT (id) AS [No. of Indexes]
FROM sysindexes
WHERE (name NOT LIKE '_WA_%') AND (id > 255)
GROUP BY id
ORDER BY [No. of Indexes] DESC, [Object Name]

Number of Index Updates


If your database has been running for several months and contains data that is typical
for your application, the following query will show you the number of index updates
that were required when records have been inserted into the database. The results
are sorted by the number of updates.

SELECT
OBJECT_NAME(id) AS [Object Name],
COUNT (id) AS [No. of Indexes],
rowcnt AS [No. Of Rows],
(rowcnt * COUNT(id)) AS [No. Of Updates]
FROM sysindexes
WHERE (name NOT LIKE '_WA_%') AND (id > 255)
GROUP BY id, rowcnt
ORDER BY [No. Of Updates] DESC

Index Structures
The following query shows the structure of the indexes in your database and sorts the
results by table:

SET NOCOUNT ON

--DROP TABLE ##Table
--DROP TABLE ##IndexHelp

-- Temporary table holding the names of all user tables
CREATE TABLE ##Table (
[table_name] VARCHAR(255)
)

-- Temporary table holding the sp_helpindex output for every table
CREATE TABLE ##IndexHelp (
[table_name] VARCHAR(255) DEFAULT '',
[index_name] VARCHAR(255),
[index_description] VARCHAR(255),
[index_keys] VARCHAR(512)
)

INSERT INTO ##Table
SELECT [name] AS [table_name]
FROM sysobjects
WHERE [xtype] = 'U' and [id] > 255
ORDER BY [name]

DECLARE @Statement CHAR (255)
DECLARE @name sysname
DECLARE Get_Curs CURSOR FOR
SELECT table_name FROM ##Table

OPEN Get_Curs

FETCH NEXT FROM Get_Curs INTO @name

WHILE @@FETCH_STATUS = 0 BEGIN

INSERT INTO ##IndexHelp (index_name, index_description, index_keys)
EXEC('sp_helpindex [' + @name + ']')

UPDATE ##IndexHelp SET table_name = @name WHERE table_name = ''

FETCH NEXT FROM Get_Curs INTO @name

END

CLOSE Get_Curs
DEALLOCATE Get_Curs

SELECT * FROM ##IndexHelp
ORDER BY table_name

Number of SIFT Indexes


The following query shows the number of SIFT indexes per 'source table':

SELECT
SUBSTRING(name,1,CHARINDEX('$',name,CHARINDEX('$',name)+1)) AS
[Source Table],
COUNT(SUBSTRING(name,1,CHARINDEX('$',name,CHARINDEX('$',name)+1)))
AS [No. Of SIFT Indexes]
FROM sysobjects
WHERE OBJECTPROPERTY(id, N'IsUserTable') = 1 AND name LIKE
'%'+'$'+'[0-9]'+'%'
GROUP BY
(SUBSTRING(name,1,CHARINDEX('$',name,CHARINDEX('$',name)+1)))
ORDER BY [No. Of SIFT Indexes] DESC, [Source Table]

Number of SIFT Buckets


The following query shows the number of buckets in the SIFT tables.

However, if your database does not contain any data and/or some SIFT 'source' tables
have not been populated, the query will not show any or just a few buckets.

SET NOCOUNT ON

--DROP TABLE ##SIFTtables

CREATE TABLE ##SIFTtables (


[table_name] VARCHAR(255) DEFAULT '',
[bucks] INT DEFAULT 0
)

DECLARE @Statement CHAR (255)
DECLARE @tname sysname

DECLARE Get_Curs CURSOR FOR
SELECT name
FROM sysobjects
WHERE OBJECTPROPERTY(id, N'IsUserTable') = 1 AND name LIKE
'%'+'$'+'[0-9]'+'%'
ORDER BY name

OPEN Get_Curs

FETCH NEXT FROM Get_Curs INTO @tname

WHILE @@FETCH_STATUS = 0 BEGIN

INSERT INTO ##SIFTtables (bucks)


EXEC('SELECT COUNT(DISTINCT bucket) AS bucks FROM ['+@tname+']')

UPDATE ##SIFTtables SET table_name = @tname WHERE table_name = ''

FETCH NEXT FROM Get_Curs INTO @tname


END

CLOSE Get_Curs
DEALLOCATE Get_Curs

SELECT table_name AS [Table Name], bucks AS [No. Of Buckets] FROM


##SIFTtables

To ensure that this query gives you the right results:

· Create an empty database solely for this purpose and restore your objects into it.
· Import and compile the FillSIFTBuckets.txt object into Navision (it is supplied
as a text file so that you can renumber the object to an unused object number).
· Run the Fill SIFT Buckets codeunit - this code inserts and deletes one record into
every Navision table thereby ensuring that all the SIFT buckets are populated with
zero values (Navision doesn't delete 'empty' buckets).

Key Information Tool


This tool can be used as alternative to the tools that we have just described. It gives
you an overview of the indexes that have been defined for each table including the
number of keys that been enabled, the SumIndexFields that are supported and the
SIFT levels that are enabled.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Key Information tool is a read-only tool and does not update the tables in the
database. The information that the tool displays is not updated dynamically.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Before you can use the Key Information tool you must import the SQL Key
Information.fob file that contains all the objects that make up the tool.

Setting Up the Key Information Tool


After you have imported the SQL Key Information.fob file, you must set up the tool
before you can use it.

To set up the Key Information Tool:

1 Select form 50070, Key Information and run it. The first time you run the form it
loads all the information from the Object and the Table Information virtual tables.
It also displays a form containing a more concise version of the following
instructions.

2 In the Key Information window, click Key Info, Setup SQL Connection to open the
SQL Connection Setup window:

3 In the SQL Connection Setup window, enter the following information:

Field Input

Server Name: The name of the SQL Server. You can find the name in the Database Information window.

NT Authentication: A check mark here tells the system to use the current NT Authentication ID. Otherwise you must enter a User ID and a password.

User ID: The ID of the database login that you are using to connect to the server. This user should be either a System Administrator or DBO. This field is only used if you are not using NT Authentication.

Password: The password of the database login. This field is only used if you are not using NT Authentication.

The SQL Connection Setup window also contains two buttons:

· Test Connection - tests whether or not the information you entered allows you to
connect with the server.
· Edit Password - this button opens a window that allows you to change the
password of the database login.

4 After you have entered this information, close the SQL Connection Setup window.

5 In the Key Information window, click Key Info, Setup and the Key Information
Setup window opens:

This window contains two fields:

· Empty SIFT Record Pct. Threshold - the percentage of empty SIFT records that
are allowed within a SIFT level. The SIFT levels that match or exceed the value
entered in this field are colored in the Key Field List form. This information is also
used to filter out unwanted lines in the Key Information report.
· Empty Key Field Pct. Threshold - the percentage of empty key fields that are
allowed within the SIFT records of a particular key. The fields that match or exceed
the value entered in this field are colored in the Key Field List form. This
information is also used to filter out unwanted lines in the Key Information report.

When you open this window both fields contain the default value - 80%.

6 Close the Key Information Setup window.


The Key Information tool is now ready for use.

Using the Key Information Tool


You can now use the tool to analyze some or all of the tables in your database.

To import information into the tool:

1 In the Object Designer, export the table definitions of the tables that you want to
analyze.

2 In the Key Information window, click Key Info, Re-Load Table List to populate the
Table List window with all the tables in the database.

3 In the Key Information window, click Load Text Objects. Browse to the text file that
you just exported from the Object Designer and click Open to import the definitions
of the tables that you want to analyze.

4 In the Key Information window, click Get SIFT Info to extract the SIFT information
for the relevant tables from the database and update the SIFT columns in the tool.

The General tab of the Key Information window contains the following fields:

Field Meaning

Company Name: The name of the company that you are analyzing. If the database contains more than one company, you must run the tool once for each company in the database.

Table No.: The number and name of the table that is currently displayed.

No. of Records: The number of records in the table. This value is retrieved from the Table Information table and is the number of records that were in the table when you imported the data into the tool. This value is not updated dynamically.

Cost Per Record: The number of updates (write transactions) that must be performed every time you insert or modify a record in this table. This value is calculated by adding the number of keys that are enabled in this table to the number of SIFT levels that are enabled for each key. This value can help you see which tables in the database have the highest performance overhead. You can then consider reducing the cost per record.

Keys - Enabled: The number of keys that are enabled in this table.

Keys - Disabled: The number of keys that are disabled in this table.

Key No.: The number of the key.

Enabled: Whether the key is enabled or not.

Key Fields: A list of the fields that make up the key.

Sum Index Fields: A list of the SumIndexFields in the key.

Maintain SQL Index: Whether or not a SQL index is maintained for this key.

Maintain SIFT Index: Whether or not a SIFT index is maintained for this key.

SIFT Levels Enabled: The number of SIFT levels that are maintained for this key.

SIFT Recs: The total number of SIFT records created for all the SIFT levels that are maintained.

Empty SIFT Recs: The total number of empty records created for all the SIFT levels that are maintained.

Empty SIFT Recs %: The percentage of the total number of SIFT records that are empty.


In the Key Information window, the Properties tab displays the properties of the
selected table:

This information is retrieved from the Objects virtual table and is not updated
dynamically.

To analyze the information gathered by the Key Information tool:

1 In the General tab of the Key Information window, select the key that you want to
analyze and click the Key Fields field.

2 In the Key Fields field, click the AssistButton to open the Key Field List window:

This window displays the SIFT record information for each individual field in the key.
The Blank SIFT Recs for Field % field gives you an indication of whether or not the
field is contributing to the key. If a field is not contributing to a key, it does not help SQL
Server to optimize data retrieval. The key is therefore not as efficient as it could be.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In certain types of fields (Option, Boolean, Integer and so on), a value that is displayed
as blank is not necessarily an empty value. You must analyze the field more carefully to
determine whether or not the field really is blank.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3 In the Key Information window, select the key that you want to analyze and click
the Sum Index Fields field.

4 In the Sum Index Fields field, click the AssistButton to open the Key Information
Field List window:

This window lists the SumIndexFields associated with this key.

5 In the Key Information window, select the key that you want to analyze and click
the SIFT Levels Enabled field.

6 In the SIFT Levels Enabled field, click the AssistButton to open the SIFT Level
List window:

This window lists all the SIFT levels that are defined for the current key. It also lists the
number of records per SIFT level and the number of blank records per SIFT level.
These values are then used to calculate the % of 0 SIFT Recs.

You can also open this window by clicking the AssistButton in the SIFT Recs field and
in the Empty SIFT Recs field in the Key Information window.

The Key Information tool also contains two reports that summarize all the information
gathered by the tool:

· Key Information - contains all the information gathered by the Key Information
tool, except the list of fields that have been defined for the keys.
· Key Field Information - contains the list of fields that have been defined for the
keys.

Both of these reports can be filtered so that they only show the records that meet or
exceed the values that you specified in the Key Information Setup window.

The Navision Database Sizing Tool


If your application has been running for several months, you might have a good idea
of how fast the database grows - how many records are added per day/month per
table. If you do not have an idea, you can use the Navision Database Sizing Tool
(NDST) to estimate the growth rate of your database.

The Navision Database Sizing Tool is designed to help you evaluate your hard disk
requirements by estimating the growth rate of your database. The tool can be used to
estimate the growth rate of new implementations as well as existing installations.

Scope of NDST
The NDST can be used with both database options - Navision and SQL.

The NDST only gives you estimates. The results it generates can be used as
guidelines for how much your database will grow. The exact growth rate of the
database depends on a number of customer specific factors. No two customers will
have the exact same record sizes, or create the exact same number of new records
per order.

Database growth depends on factors such as:

Application areas used A few examples: Customers using Warehouse


Management create additional records such as Picks, Putaways, Warehouse entries
and so on. If the customer uses Analysis Views, then this will increase database
growth; synchronizing contacts with customers will add to the space used, and so on.

Working practice The more fields you enter in a record, the more space it takes up.
A customer who enters every possible piece of information (including comments) in all
the records will use more space than a customer with the same number of records,
who only enters the most basic information.

Record size This is based on the size of the table compared to the number of
records in that table. The size of a table includes indexes (keys). So record sizes will
be affected by the number and size of the indexes.

The record sizes that NDST uses for calculations are only estimates and are based on
a very simple setup. The real numbers may well be different.

Contents of NDST
NDST contains:

· DBSizingWorksheet.xls
· NDST.fob
· SpaceUsed.sql
Each of these components is described in the following sections.

How to Use NDST


The NDST can be used before implementing a new system, or it can be used on an
existing system to forecast future database growth.

Estimating Database Size for a New System


To estimate the appropriate database size for a new system, you only need to use the
DBSizingWorksheet.xls spreadsheet.

1 Open DBSizingWorksheet.xls:

2 In the grey column, enter your estimate of the number of records that each table should
have. In this example, the estimate is for 900 customers, 100 vendors and so on, placing
an estimated 3650 orders (purchase and sales) that generate an average of 8 lines each.

The factors you can enter are:

Factor Meaning

Database Version Navision Database Server or SQL Server.


The tool does not allow other values.

Customer The number of customers you expect.

Vendor The number of vendors you expect.

Item The number of items you expect.

Contact The number of contacts you expect.

Other1 .. Other2 These lines are user defined. For example, if you want to
include the size of your Fixed Assets table, you can enter a
line here called Fixed Assets and then enter the number of
Fixed Assets you expect. If you want to enter any user
defined tables, then you also have to edit the underlying
factors as described later.

Number of orders The number of orders you expect. This includes sales as
well as purchase orders.

Number of lines pr order The average number of lines in an order.

Number of dimensions The average number of dimensions in an order.

The spreadsheet calculates the total size of the data based on the numbers you
entered.

It then adds two extra values:

· Initial Size: Even an empty database has a size.


· Overhead: The spreadsheet calculates the database growth based on the amount
of data in the most commonly used tables. To account for other tables and fields, for
example, comments, G/L Accounts, Budget Entries, To-dos, and so on, it adds an
estimated overhead. This is set to 10%.

Changing the Underlying Factors


NDST uses estimated record sizes to calculate database growth. You can change
these estimates. You might want to change them if you have found that the real record
size for some table differs from the one used in NDST, or if you have customized some
tables.

The DBSizingWorksheet.xls consists of two worksheets: Sizing and RecSizes.


Both worksheets are protected, except for the grey cells. Before you can modify
anything outside of the grey cells, you must un-protect the worksheet:

To un-protect the worksheet:

In Excel, click Tools, Protection, Unprotect Sheet.

The RecSizes sheet contains the estimated average size of a record in all the tables
that are used to estimate database growth.

At the top are the base tables, and lower down are the dynamic tables.

Because record size often differs depending on the database option, the average size
is specified for both Navision Database Server, and SQL Server. Average Size (Bytes)
is the average size of one record.

For dynamic tables, you also have the columns Records pr order and Records pr line.
These specify how many records are created when you post a sales or purchase order.
For instance, we estimate that posting one order creates three G/L Entries and three
Item Ledger Entries per line. These values can depend on the settings in Navision. For
instance, if you have activated Automatic Cost Posting, then one order will create at
least two additional G/L Entries, and "Discount Posting" in Sales & Receivables Setup
will also affect the number of entries created per order. Finally, customizations can, of
course, also affect the number of records created.

Warning
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
You can change the values on this sheet, but do not move the existing rows and
columns. The formulas on the sizing sheet refer to many of these cells. If you move
them, the calculations will probably be wrong.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

If you specify any user-defined tables (Other 1, Other 6), then you must also specify
the record size on the RecSizes sheet.

Finally, on the right hand side of the Sizing sheet (in the blue square), you can modify
the Initial Size and the Overhead if you wish.

Details
The lower half of the sizing sheet contains calculated values for a number of tables.

These details show how much each of the tables are expected to grow, based on the
values you entered.

The detailed values are calculated, and should not be overwritten manually. They are
calculated based on the Average Size and Records pr order/line, which are set up in
the RecSizes sheet.

Forecasting Database Growth for an Existing System


The NDST also contains functionality that allows you to forecast the growth rate of an
existing system. However, you can also use it to estimate the size of a new system by
finding out how much a single operation causes the database to grow and then
multiplying that by the number of expected operations.

It works by taking snapshots of the size of the database at different times and
comparing the two snapshots.

The NDST contains two files: NDST.fob and SpaceUsed.sql.

· NDST.fob can be used on both Navision Database Server and on the SQL Server
Option.
· SpaceUsed.sql can only be used to get more detailed information about the SQL
Server Option.

NDST.fob contains 8 objects:

Type No. Name Version List

Table 72000 sp_spaceused file Line NDST

Table 72001 Space Used Version NDST

Form 72000 Space Used Version Lines NDST

Form 72001 Space Used Version List NDST

Form 72005 DB Sizing Card NDST

Codeunit 72000 Snapshot from files (SQL) NDST

Codeunit 72001 Make Spreadsheet NDST

Codeunit 72003 Snapshot from DB Info NDST

Creating Snapshots to Evaluate Database Growth


To create snapshots and estimate database growth:

1 In Navision, open the Object Designer and click File, Import to import NDST.fob.
You can use both Navision Database Server and the SQL Server Option.

2 In the Object Designer, run form 72005, DB Sizing Card:

3 Click Functions, Create Snapshot. This creates a snapshot, and enters a version
number in the Version 1 field.

4 Close the DB Sizing Card window.

5 Create a sales order and post it (or perform any other activity that will create new
records).

6 Open the DB Sizing Card and click Functions, Create Snapshot again. This
creates a new version, and automatically specifies this new version as Version 2.
With Version 1 and Version 2 specified on the card, you can see the number of new
records and the amount of new data (KB) that has been generated between the two
different versions.

7 To get more details, click Functions, Generate Report. This creates an Excel
Spreadsheet that contains details about each table that was involved in the
transactions that you performed.

If you want to repeat the exercise, click Functions, Clear to clear the form and repeat
the process.

The DB Sizing Card


All the functionality in the objects that were imported in the NDST.fob can be run from
the DB Sizing Card. The Functions button gives you access to a number of functions:

Create Snapshot
The function creates a snapshot of the database. Every time you create a new
snapshot, NDST automatically creates a new version. On the DB Sizing Card, you
can select one or two versions, then immediately see the number of rows, and the
amount of data in that version, as well as the difference between the two versions.

By comparing the two versions (snapshots), you can see how much the database
grew in the time that elapsed between the two snapshots. This enables you to forecast
database growth based on such things as:

· Running a batch job


· Posting one order
· Synchronizing contacts and so on
· Growth per month / quarter / and so on

Generate Report
This function creates a report that gives you a more detailed overview than the DB
Sizing Card does. It creates a new Excel workbook which shows you the changes in
each individual table. The workbook contains one line for each table which has been
changed (and displays either the number of records added or the size it has grown
by).

It generates two spreadsheets:

· Changes, which shows you data for version 2 as well as the changes between
version 1 and 2.
· Details, which shows you version 1 and version 2.

Finally, at the top of the Changes sheet, you see a summary.

If you have problems making this Excel integration work, please refer to the
Troubleshooting section.

Clear
This function clears the contents of the form. It does not delete any data; it only blanks
out the form. If you clear the form before creating a snapshot, then the form will
automatically enter the newly created version as Version 1 (or Version 2 if Version 1 is
already specified).

Delete all versions


This function deletes all the versions (all the snapshots) and clears the form.

Import SQL Snapshot


This is another way to create a snapshot. It only applies to the SQL Server Option,
and gives you a bit more information than the Create Snapshot function. This means
that with the SQL Server Option, you can choose between using Create Snapshot and
Import SQL Snapshot. However, you can only compare like with like.

The Import SQL Snapshot function splits the amount of space used into Data and
Indexes. The Create Snapshot function shows Data and Indexes as one field. So if
you compare a version created by "Create Snapshot" with a version created by
"Import SQL Snapshot", the result will not be very precise.

The advantages of the Import SQL Snapshot function compared to the Create
Snapshot function are:

· It displays separate values for the amount of database space used by the data and
the indexes.
· You get individual results for the SIFT tables. For example, Create Snapshot creates
one record for the G/L Entry table. This record contains the accumulated values for
the G/L Entry table and its two SIFT tables 17$0 and 17$1. The Import SQL Snapshot
function creates three individual records - one for each table.

You must run a SQL query from Query Analyzer before you can run the Import SQL
Snapshot function.

To run this query:

1 Open Query Analyzer.

2 Log in, and select your Navision database.

3 Open the script SpaceUsed.sql. Set Execute Mode to Results to File…


(CTRL+SHIFT+F), then execute the query. This batch uses the stored procedure
sp_spaceused to export the details for each table.

4 In Navision open the DB Sizing Card window and on the SQL tab, specify the file
that was created in Query Analyzer.

5 Run the Import SQL Snapshot function.
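
If you want to see the kind of information that the script is based on, you can also call
the sp_spaceused stored procedure yourself for a single table. The following is only a
sketch; the database and table names are examples, and the shipped SpaceUsed.sql script
collects the same details for every table and writes them to a file:

USE [Navision Demo Database]   -- assumed database name
GO
-- Report rows, reserved, data and index space for one Navision table.
EXEC sp_spaceused N'CRONUS International Ltd_$G_L Entry'
GO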

Troubleshooting
The Generate Report function was developed using Microsoft Office 2003 but it has
also been tested with Office 2000. If you have problems making this work:

1 Make sure you have Office installed.

2 Even though Office 2003 is backwards compatible, previous versions may have
problems. If this is the case, you may have to check the design of codeunit 72001,
Make Spreadsheet.

3 Click View, Globals, and check the four Automation variables - they will most likely
say Unknown Automation Server. Even if they say Unknown Automation Server, it
may still work. If it does not work, try to re-set the variables to the installed version
of Office.

For each Automation variable, click the AssistButton in the SubType field to open
the Automation Object List window:

4 Click the AssistButton in the Automation Server field to open the Automation
Server List window and select the latest version of Office in the list.

5 In the Automation Object List window, select the corresponding class for each of
the variables:

· XApp.Application
· XBook_Workbook
· XSheet_Worksheet
· xRange.Range

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
For Workbook and Worksheet, select the classes that are preceded by an underscore.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6 Save, compile, and close the codeunit.

Factors that Affect Database Growth


As mentioned earlier, a large number of customer specific factors can affect the pace
at which a customer's database grows.

This means, that the results generated by the DBSizingWorksheet.xls tool must not
be taken at face value. This section discusses some of the most common factors that
must be considered and provides guidelines on how to deal with them.

As explained in the section "Changing the Underlying Factors" you can change the
factors that the DBSizingWorksheet.xls tool uses to calculate the estimated
database size. For instance, if you activate Automatic Cost Posting in your application,
Navision will create 5 instead of 3 G/L entries for each order you post. You can easily
update DBSizingWorksheet.xls to reflect this.

Use NDST to Make Customer Specific Estimates


There is only one way to find out exactly how much any particular action (for example,
posting an order) affects a database, and that is to run the action and see what
happens. This is one of the things that NDST can help you do - see the section "The
DB Sizing Card."

Example:

1 Set up Navision in the same way as it is set up for the customer, and create a typical
order.

2 Create a snapshot, post the order and then create another snapshot.

3 Run "Generate Report", and you can now see exactly how many entries were
created in each table.

4 Now you can update the DBSizingWorksheet.xls tool with these values.

Typical Factors that Affect Database Growth


Apart from the most obvious factors, such as having the Contact table synchronized
with the Customer table, a number of other typical but less obvious factors will have
an effect on database growth:

Number of Posting Groups


Posting an order creates two G/L entries per combination of Gen. Business Posting
Group and Gen. Product Posting Group, plus one balancing entry. If, for example, you
post an order with two items which belong to separate Gen. Product Posting Groups,
Navision will create 5 G/L entries.

Part Shipping
Every time you part ship an order, Navision creates posted documents, entries,
dimension entries and so on. This means that if you part ship/invoice orders, the
database will grow more per order than it will if you only ship and invoice full orders.

Once you have a good idea of how many records are created when an order is
posted, you can update the DBSizingWorksheet.xls tool to get a more precise
estimate of database growth.

Modifying the Key in an Index


If you modify a key in an index that you use for browsing through the records, the read-
ahead mechanism is switched off. We therefore recommend that you restore the
key values just before the NEXT statement, or use one record variable for browsing
through the set and another one for modifying the records.

Example of bad code:

Customer.SETCURRENTKEY("Currency Code");
Customer.SETRANGE("Currency Code",'GBP');
IF Customer.FIND('-') THEN
REPEAT
Customer."Currency Code" := 'EUR';
Customer.MODIFY;
//The above modifies the key value
UNTIL Customer.NEXT = 0;

Example of good code 1 (restoring key value):

Customer.SETCURRENTKEY("Currency Code");
Customer.SETRANGE("Currency Code",'GBP');
IF Customer.FINDSET(TRUE,TRUE) THEN
REPEAT
Customer."Currency Code" := 'EUR';
Customer.MODIFY;
Customer."Currency Code" := 'GBP';
//The above restores the original value
//of the key, just before NEXT statement
UNTIL Customer.NEXT = 0;

Example of good code 2 (restoring key value)

Customer.SETCURRENTKEY("Currency Code");
Customer.SETRANGE("Currency Code",'GBP');
IF Customer.FINDSET(TRUE,TRUE) THEN
REPEAT
Customer2 := Customer;
Customer."Currency Code" := 'EUR';
Customer.MODIFY;
Customer."Currency Code" := Customer2."Currency Code";
//The above restores the original value
//of the key, just before NEXT statement
UNTIL Customer.NEXT = 0;

Example of good code 3 (separate variables for browsing)

Customer1.SETCURRENTKEY("Currency Code");
Customer1.SETRANGE("Currency Code",'GBP');
IF Customer1.FINDSET(FALSE,FALSE) THEN
REPEAT
Customer2 := Customer1;
Customer2."Currency Code" := 'EUR';
Customer2.MODIFY;
UNTIL Customer1.NEXT = 0;

1.4 FORM DESIGN AND PERFORMANCE


This section describes how to create forms that perform optimally in Navision. You
should pay particular attention to these three areas:

· SIFT
· SourceTablePlacement property
· Find As You Type feature

SIFT
SIFT technology is a great feature that allows you to retrieve and maintain accumulated
sums. However, it has a performance overhead.

The best practices when using SIFT are:

Display on Demand
Do not place (or at least minimize the number of) FlowFields on normal forms, such as
the Customer Card, Item Card, and so on. Use special forms such as Customer
Statistics, Item Statistics, Customer Entry Statistics, Item Entry Statistics,
Customer Sales, Item Turnover, and so on. The basic principle is to display these
FlowFields on demand rather than by default when the user is not even interested in
the information provided.

Never place FlowFields on List Forms


When you place FlowFields on list forms, you delay the retrieval time because each
line must calculate the FlowField value(s). You should also be aware that if you hide a
FlowField on a form, C/SIDE still calculates the FlowField value. You must delete the
field from the form to avoid the overhead.

SourceTablePlacement Property
The SourceTablePlacement property is used to tell the system which record it should
display when a particular form is opened. The default value is 'Saved', which means
that C/SIDE tries to position the cursor on the record that was last displayed. If the
form is based on a big table, this has a large performance overhead (as a rule of
thumb a big table contains more than 1 million records). The best practice is to find
these forms and change the SourceTablePlacement property to 'First' or 'Last'.
Remember that tables grow, so try to estimate how many records they will contain in a
year or two.

Find As You Type Feature


The Find As You Type feature allows you to type over a non-editable field in a list form,
and while you are typing C/SIDE 'jumps through' the underlying table. This feature can
have a massive impact on performance because it is impossible to predict the number
of times that the cursor must reposition itself in the displayed table. We therefore
recommend that this feature be disabled in large installations. You can use the Find
(CTRL+F) or Field/Table Filter (F7 or CTRL+F7) features instead, and train the users to
always change the key (index) to one that includes the field they are searching and/or
filtering on.

To disable the Find As You Type feature:

1 Click File, Database, Alter to open the Alter Database window.

2 Click the Options tab.

3 Make sure that the Allow Find As You Type field is not selected.

To minimize the dissatisfaction that can be caused by disabling the feature after users
have gotten used to using it, we recommend that you always disable it when you
create a new database and only enable it when requested.

1.5 LOCKING AND DEADLOCKS


This section outlines some best practices and techniques that you can use to avoid
unnecessary locks and deadlocks. It must be stressed that application developers should
focus on performance before looking into locking, because improving performance might
in itself minimize locking.

Deadlocks
Deadlock situations arise when two processes have each locked data that the other
process needs, so that neither can continue until the other releases its lock. The
following sequence would generate a deadlock:

· Session A locks table A


· Session B locks table B
· Session A tries to lock table B (but is blocked and must wait for session B to finish)
· Session B tries to lock table A (but is blocked and must wait for session A to finish)
· Result - Deadlock.

SQL Server resolves the deadlock by terminating one of the transactions. However,
detecting the deadlock can take some time and consumes valuable resources.

Preventing Deadlocks
There are two well-known approaches to preventing deadlocks:

· Always lock tables in the same order.


Application developers must agree on the order in which they lock tables. For
example, when processing a sales order we always lock the following tables in this
order: table 39 Purchase Line, table 38 Purchase Header, table 37 Sales Line and
table 36 Sales Header. A hedged C/AL sketch of this principle follows at the end of
this section.

· Lock an agreed "master resource" first.


It might be impossible or just too complicated to agree on the locking order. As a
workaround, developers must agree that they will always lock a master resource
before they process any tables (in any order). An example of this approach can be
found in Codeunit 80, Sales-Post, which locks the last record in the G/L Entry
table before processing anything else:

GLEntry.LOCKTABLE;
IF GLEntry.FINDLAST THEN;

However, it must be stressed that even though this mechanism is used, the first
principle (locking tables in the same order) is also used.
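
The following is a minimal C/AL sketch of the first principle, locking the tables in one
agreed order. The record variables are assumed to be declared elsewhere, and note that
LOCKTABLE only marks a table for locking; the lock itself is taken by the first read of
that table within the transaction:

PurchLine.LOCKTABLE;    // agreed order: table 39, 38, 37, 36
PurchHeader.LOCKTABLE;
SalesLine.LOCKTABLE;
SalesHeader.LOCKTABLE;
// the first read of each table after LOCKTABLE acquires the lock
IF PurchLine.FINDLAST THEN;
IF PurchHeader.FINDLAST THEN;
IF SalesLine.FINDLAST THEN;
IF SalesHeader.FINDLAST THEN;
// ... process the order here ...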

LockTimeout
Navision contains a database property that allows you to specify whether or not a
session will wait to place a lock on a resource that has already been locked by another
session. You can let the session wait indefinitely or you can specify how long the
session will wait before timing out and failing to place a lock. This is the SQL property
LockTimeout that is exposed in Navision.

To set this property in Navision, click File, Database, Alter and click the Advanced
tab:

This property is designed to improve performance by ensuring that the users and
transactions that are waiting to place locks don't have to wait indefinitely.

After a session has waited the specified number of seconds and failed to place a lock
the user receives a message informing them that the lock could not be placed:

The user can then perform another task instead of waiting indefinitely for the lock to be
released.

This is a database property and once set it remains set until you remove the check
mark in this window. However, you can also use the C/SIDE function LOCKTIMEOUT to
temporarily enable or disable this property in the application.
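
For example, a long-running batch routine could temporarily switch the timeout off for
its own session and switch it back on when it has finished. This is only a minimal sketch
that uses the standard C/AL LOCKTIMEOUT function:

LOCKTIMEOUT(FALSE);  // this session now waits indefinitely for locks
// ... run the long batch routine here ...
LOCKTIMEOUT(TRUE);   // restore the timeout specified for the database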

Always Rowlock
The Advanced tab also contains a property called Always rowlock.

By default this property is not selected and SQL Server uses its default locking
behavior. This can improve performance by allowing SQL Server to determine the
best locking granularity.

In Navision 4.00 and earlier, rowlock hints were always issued to SQL Server. With
Navision 4.01 and later, Navision takes advantage of the SQL Server locking behavior
by default.

When you upgrade to Navision 4.01 or later, the new default functionality might
change the behavior of your application and could have an adverse effect on
concurrency. If you find this unacceptable, you should select this property and force
Navision to issue rowlock hints.

Minimizing the Duration of Locks


You should always try to place a lock as late as possible and lock for as short a time
as possible. If you have a lot of records to process, for example when you are importing
1000 orders from a web site, you can give other users a chance to access the system by
creating a loop which would (a minimal C/AL sketch follows after the list):

· import 5 records (use LOCKTABLE commands in the right order, see deadlock
section later)
· write a log
· COMMIT the changes
· SLEEP for 1 minute (just an example)
· SELECTLATESTVERSION
· repeat this sequence

This enables other users to access the database in the slots of time when the system
is not locked by the import routine.
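
The following is a minimal C/AL sketch of such a loop. The OrderBuffer record, the
ImportOrder and InsertLogEntry routines, and the batch size of 5 are hypothetical
placeholders, and variable declarations are omitted:

// Import the orders in small batches so that other users can get
// access to the locked tables between the batches.
REPEAT
  FOR i := 1 TO 5 DO
    IF OrderBuffer.FINDFIRST THEN BEGIN
      ImportOrder(OrderBuffer); // lock the tables in the agreed order
      OrderBuffer.DELETE;
    END;
  InsertLogEntry;               // write a log
  COMMIT;                       // release all locks held by this session
  SLEEP(60000);                 // pause for 1 minute (milliseconds)
  SELECTLATESTVERSION;          // make the next batch read current data
UNTIL NOT OrderBuffer.FINDFIRST;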

Never Allow User Input during a Transaction


Consider the worst-case scenario: after you have started a transaction (and locked the
records), the system displays a form so that you can select some options, answer yes/no,
click an OK button, or perform some other user interaction. However, before you do this,
you leave your desk to fetch a cup of coffee and get distracted by a colleague who needs
some assistance. In the meantime, all the other users are blocked from using the same,
or possibly some other, functionality.

C/SIDE automatically detects this when you use, for example, the Form.RUNMODAL
command after you have locked a resource. You would get the following error
message:

The following C/AL functions can be used only to a limited degree during write
transactions (because one or more tables will be locked).

Form.RunModal() is not allowed in write transactions.

CodeUnit.Run() is allowed in write transactions only if the return value is not used.
For example, OK := CodeUnit.Run() is not allowed.

Report.RunModal() is allowed in write transactions only if RequestForm = FALSE.
For example, Report.RunModal(...,FALSE) is allowed.

DataPort.RunModal() is allowed in write transactions only if RequestForm = FALSE.
For example, DataPort.RunModal(...,FALSE) is allowed.

Use the COMMIT function to save the changes before this call, or structure the code
differently.

Unfortunately, C/SIDE does not detect other commands that can be used for user input,
such as Window.INPUT and IF [NOT] CONFIRM constructs.

Application Setup
It is also worthwhile reviewing some of the application setup.

For example, if you enable the Automatic Cost Posting field in the Inventory Setup
table, every stock transaction automatically generates an entry in the General Ledger.

Another example is the Update on Posting field in the Analysis View table, which
tells the system to generate analysis view entries for the view when you post to the
General Ledger.

It could be a disaster if both of these fields are enabled because when you post a
sales order or a purchase order with items, the system will post (lock) to the Item
Ledger table as well as to the General Ledger table and it will update (lock) the
Analysis View table.

Overnight Processing
You might consider performing some heavy tasks outside of normal working hours
(overnight) by using the Navision CRM Job Scheduling functionality. Batch jobs such
as Adjust Cost - Item Entries and Post Inventory Cost to G/L are typical examples.

1.6 USING PINTABLE FOR SMALL HOT TABLES


Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The functionality described in this section only applies if you are running on SQL
Server 2000. This functionality has been removed from later versions of SQL Server.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Some Navision application tables might be:

· small
· temporary
· accessed all the time (hot)
· over indexed

A typical example could be the Warehouse Activity Line table. In a busy


warehousing application, Picks and Put-Away documents are created/registered
(deleted) regularly in this table. However, the table would typically never contain more
than a few hundred records, and would have many indexes that are similar (use
sp_helpindex to see the standard application indexes).

The following adjustments can improve performance:

· Use the MaintainSIFTIndex property to disable all the SIFT tables that are based on
the source table.
· Use the MaintainSQLIndex property to disable all but a few of the SQL Server
indexes.
· Study the changes and, if reading the tables is still too slow, pin the table to memory
using DBCC PINTABLE.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Pinning the table doesn't mean that it is read to memory automatically. You can ensure
that the table is cached in memory by scheduling a job that reads the entire table
(using SELECT *…) to run every time SQL Server starts.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
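
Such a warm-up read can be as simple as the following T-SQL statement. The table name
is only an example that follows the "<Company>$<Table Name>" naming convention used
for Navision tables on SQL Server:

-- Read the whole table once so that its (pinned) pages are cached.
SELECT * FROM [CRONUS International Ltd_$Warehouse Activity Line]
GO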

For more information about the DBCC PINTABLE command, see SQL Server Books
Online.

The following information is taken from SQL Server Books Online.

DBCC PINTABLE
Marks a table to be pinned, which means SQL Server does not flush the pages for the
table from memory.

Syntax
DBCC PINTABLE ( database_id , table_id )

Arguments

database_id
This is the database identification (ID) number of the table to be pinned. To determine
the database ID, use the DB_ID function.

table_id
This is the object identification number of the table to be pinned. To determine the
table ID, use the OBJECT_ID function.

Remarks
DBCC PINTABLE does not read the table into memory. As the pages from the table are
read into the buffer cache by normal Transact-SQL statements, they are marked as
pinned pages. SQL Server does not flush pinned pages when it needs space to store
a new page. SQL Server still logs updates to the page and, if necessary, writes the
updated page back to disk. SQL Server does, however, keep a copy of the page in the
buffer cache until the table is unpinned with the DBCC UNPINTABLE statement.

DBCC PINTABLE is useful for keeping small, frequently used tables in memory. The
pages for the small table are read into memory once and then all future references to
this data do not require a disk read.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Although DBCC PINTABLE can give performance improvements, it must be used with
care. If a large table is pinned, it can start using a large portion of the buffer cache and
not leave enough cache to service the other tables in the system adequately. If a table
larger than the buffer cache is pinned, it can fill the entire buffer cache. To unpin the
table, a member of the sysadmin fixed server role must shut down SQL Server and
restart SQL Server. Pinning too many tables can cause the same problems as pinning
a table larger than the buffer cache.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Warning
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Pinning tables should be carefully considered. If a pinned table is larger, or grows
larger, than the available data cache, the server may need to be restarted and the
table unpinned.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

DBCC execution completed. If DBCC printed error messages, contact your system
administrator.

Permissions
DBCC PINTABLE permissions default to members of the sysadmin fixed server role
and are not transferable.

Example
This example pins the Authors table in the Pubs database:

DECLARE @db_id int, @tbl_id int


USE pubs
SET @db_id = DB_ID('pubs')
SET @tbl_id = OBJECT_ID('pubs..authors')
DBCC PINTABLE (@db_id, @tbl_id)

1.7 SQL SERVER MAINTENANCE


This section focuses on some of the most important maintenance tasks on SQL
Server. These include updating statistics, defragmenting indexes and optimizing fill
factors. You can use a Database Maintenance Plan in SQL Server to manage this, as
well as other tasks such as backup, disaster recovery, etc.

Updating SQL Server Statistics


When you create a Navision database on SQL Server, statistical information is
created automatically. In order for the SQL Server Option for Navision to function
optimally, you must update these statistics regularly.

You can use two commands to update statistics:

· update statistics [table name] to update the statistics for a single table.
· sp_updatestats to update the statistics for all the tables.

For more information about these functions, see SQL Server Books Online.
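
For example, you can run either of the following in Query Analyzer against the Navision
database; the table name is only an example:

-- Update the statistics for a single table.
UPDATE STATISTICS [CRONUS International Ltd_$G_L Entry]
GO
-- Update the statistics for all user tables in the current database.
EXEC sp_updatestats
GO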

Notes

· Updating the statistics for Navision tables can be a time-consuming task depending
on the number of records that the tables contain. The tables that are updated most
often are the tables whose statistics must be updated most regularly. Therefore, we
recommend that you create a job on SQL Server that performs regular updates of
all the tables when the system is not in use. If performing the update is still too time-
consuming, you can divide it into smaller jobs that update the statistics for some of
the tables. Creating a SQL job allows you to automate the task and generate
reports containing details about the success of the job.
· If you update statistics regularly using SQL jobs, turn off the Auto Update Statistics
and Auto Create Statistics options; otherwise, you may see unpredictable behavior
when these processes start during normal working hours.
To turn off these options in Enterprise Manager (a T-SQL alternative is sketched after
these notes):

Open SQL Server Enterprise Manager and select the Navision database.

Right-click the database and select Properties.

In the Properties window, click the Options tab, and make sure that the Auto
Update Statistics and Auto Create Statistics options are not selected.

· If you use SQL jobs to update statistics regularly, make sure that when you create,
modify or delete a table or index in Navision your SQL jobs are updated
accordingly.
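
If you prefer a script to the Enterprise Manager steps above, the same options can also
be set with T-SQL. The database name below is only an example:

-- Turn off automatic statistics handling for the Navision database.
ALTER DATABASE [Navision Demo Database] SET AUTO_UPDATE_STATISTICS OFF
ALTER DATABASE [Navision Demo Database] SET AUTO_CREATE_STATISTICS OFF
GO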

Index Fragmentation
The following is an excerpt from SQL Server Books Online:

When you create an index in the database, the index information used by queries is
stored in index pages. The sequential index pages are chained together by pointers
from one page to the next. When changes are made to the data that affect the index,
the information in the index can become scattered in the database. Rebuilding an
index reorganizes the storage of the index data (and table data in the case of a
clustered index) to remove fragmentation. This can improve disk performance by
reducing the number of page reads required to obtain the requested data.

There are a number of alternative ways to reduce the fragmentation of an index, the
most common being:

· DBCC INDEXDEFRAG
· DBCC DBREINDEX

Unlike DBCC DBREINDEX (or the index building operation in general), DBCC
INDEXDEFRAG is an online operation. It does not hold locks for a long time and
therefore does not block running queries or updates. A relatively unfragmented index
can be defragmented faster than a new index can be built because the time it takes to
defragment is related to the amount of fragmentation. A very fragmented index might
take considerably longer to defragment than to rebuild. Furthermore, the
defragmentation is always fully logged, regardless of the database recovery model
setting (see ALTER DATABASE). Defragmenting a very fragmented index can
generate a larger log than even a fully logged index creation. The defragmentation,
however, is performed as a series of short transactions and therefore does not require
a large log if log backups are taken frequently or if the recovery model setting is
SIMPLE.

Whichever method is chosen, it is possible to detect the fragmentation of an index by


using the DBCC SHOWCONTIG command. It is recommended that indexes are
maintained more frequently on volatile tables, perhaps via a scheduled job or a
maintenance plan.

In general, if the level of Extent switches is much higher than Extents scanned, or a
low scan density is detected, defragmenting or rebuilding the index may be beneficial.
See DBCC SHOWCONTIG in Books Online for further details, and a sample script that
you can use to detect a fragmented index and rebuild it if the specified threshold is
exceeded.
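
The following hedged sketch shows what these commands look like in Query Analyzer.
The database name, table name and index number are only examples; see DBCC
SHOWCONTIG, DBCC INDEXDEFRAG and DBCC DBREINDEX in Books Online for the full syntax:

-- Report fragmentation for all indexes on one table.
DBCC SHOWCONTIG ('CRONUS International Ltd_$G_L Entry') WITH ALL_INDEXES
GO
-- Defragment one index online (parameters: database, table, index).
DBCC INDEXDEFRAG ('Navision Demo Database', 'CRONUS International Ltd_$G_L Entry', 1)
GO
-- Rebuild all indexes on the table (offline; takes locks).
DBCC DBREINDEX ('CRONUS International Ltd_$G_L Entry')
GO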

Index Defrag Tool


This tool gives you easy access to some SQL Server functionality from within
Navision. It contains a simple user interface that allows you to quickly identify the
tables in your application that contain fragmented indexes or indexes that need to be
rebuilt. It also allows you to generate a report containing details of the tables that you
have analyzed.

This tool calls a SQL statement called DBCC SHOWCONTIG to gather and display
fragmentation information about the data and indexes of a specific table. For more
information about this procedure, see SQL Server Books Online.

Setting Up the Index Defrag Tool


To set up the Index Defrag Tool:

1 In Navision, open the Object Designer and import Index Defrag Tool.fob.

2 Open form 50090, Index Defrag Card:

For more information about any of the fields in this window, see SQL Server Books
Online.

3 Click Defrag, Setup, File Locations to open the SQL IO Setup window:

4 In the SQL Script File Directory field, enter the path to the folder where you want
to store the scripts that are generated by this tool.

5 In the Index Defrag Script Filename field, enter the name that you want to give
the script that you use to run the DBCC SHOWCONTIG SQL statement in your
database. You can edit this file so that it always uses the same database.

6 In the Rebuild Index Script Filename field, enter the name that you want to give
this file. This is the file that you use to store the information about which tables
contain indexes that should be defragmented or rebuilt.

7 Click the Execute tab and, in the Query Analyzer field, click the AssistButton. If you
are running on SQL Server 2000, browse to the folder where the .exe file for SQL
Server Query Analyzer is stored. This could be on another computer.

If Query Analyzer is installed on your local computer, you can just enter isqlw - the
name of the .exe file for Query Analyzer.

If you are running on SQL Server 2005, the Query Analyzer functionality is part of
Microsoft SQL Server Management Studio. In the Query Analyzer field, click the
AssistButton and browse to C:\Program Files\Microsoft SQL
Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWb.exe, which is the path to the
.exe file for SQL Server Management Studio.

Connecting with the Server


The Index Defrag Tool allows you to specify which instance of SQL Server you want to
connect to and the kind of authentication you want to use, and to change the password
for your database login.

To connect with an instance of SQL Server:

1 In the Index Defrag Card window, click DeFrag, Setup, SQL Connection to open
the SQL Connection Setup window:

2 In the Server Name field, enter the name of the SQL Server that you want to
connect with.

3 In the NT Authentication field, enter a check mark if you want to use NT


authentication when connecting to the server. If you want to use a database login to
connect to the server, enter the user ID and password of the database login that
you want to use.

4 If you want to change the password of a database login, click Edit Password.

5 Click Test Connection to see whether or not you can connect with the server.

Running the Index Defrag Tool


Now that you have set up the tool, you can start to run it.

To run the Index Defrag tool:

1 In the Index Defrag Card window, click Process. The tool now runs the DBCC
SHOWCONTIG SQL statement. When it is finished, it populates the Index Defrag
Card with the results:

You can also print a report that summarizes the information contained in this window.
To print this report click Defrag, DBCC Showcontig, Print Results.

The Index Defrag tool also contains functionality that allows you to get a better
overview of the data that has been collected. This makes it much easier for you to
identify the tables that contain fragmented indexes.

Identifying Fragmented Indexes


To identify the fragmented indexes:

1 Open the Index Defrag Card window and click Recommend to open the Index
Defrag Recommendations window.

2 Click Generate and the Index Defrag Recommendations window is populated


with data telling you which indexes the tool recommends should be defragmented
and which should be re-indexed:

If the tool has found any tables that contain fragmented indexes or indexes that need
to be rebuilt, there is a check mark in the appropriate field for that table.

You can add or remove check marks as you see fit.

If you would like to defragment the indexes in every table or rebuild every index in the
database, click Override, DBReIndex - All or IndexDefrag - All depending on what you
want to do.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Rebuilding indexes is a complicated and time consuming process that locks many
resources and should only be carried out when there are no users accessing the
database. Index defragmentation can be performed at any time.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

To defragment the indexes:

1 When you have decided which indexes need defragmenting, click Recommend,
Create Scripts to create the SQL scripts that will defragment and rebuild the
indexes.

2 Click Recommend, Edit/Execute Scripts and the scripts are opened in the Query
Analyzer. You can now edit the scripts if you want to.

3 In Query Analyzer, run the script and the indexes are defragmented.

You can verify that the indexes have been defragmented by repeating the entire
process described in this section.

You can also print a report that lists all the tables and tells you which ones should be
defragmented and which ones should be rebuilt. To generate this report click
Recommend, Print Report.

Maintenance Plan
Many of the topics that we have discussed in this document can be addressed by
creating a SQL Server maintenance plan. A maintenance plan can be created that
addresses such issues as automating backups and rebuilding indexes on a scheduled
basis. You can use a wizard to create a maintenance plan and this is fully documented
in SQL Server Books Online.

Excerpt from SQL Server Books Online:

The Database Maintenance Plan Wizard can be used to help you set up the core
maintenance tasks necessary to ensure that your database performs well, is regularly
backed up in case of system failure, and is checked for inconsistencies. The Database
Maintenance Plan Wizard creates a SQL Server 2000 job that performs these
maintenance tasks automatically at scheduled intervals.

The maintenance tasks that can be scheduled to run automatically are:

· Reorganizing the data on the data and index pages by rebuilding indexes with a
new fill factor. This ensures that database pages contain an equally distributed
amount of data and free space, which allows future growth to be faster. For more
information, see Fill Factor.
· Compressing data files by removing empty database pages.
· Updating index statistics to ensure the query optimizer has up-to-date information
about the distribution of data values in the tables. This allows the query optimizer to
make better judgments about the best way to access data because it has more
information about the data stored in the database. Although index statistics are
automatically updated by SQL Server periodically, this option can force the
statistics to be updated immediately.
· Performing internal consistency checks of the data and data pages within the
database to ensure that a system or software problem has not damaged data.
· Backing up the database and transaction log files. Database and log backups can
be retained for a specified period. This allows you to create a history of backups to
be used in the event that you need to restore the database to a time earlier than the
last database backup.
· Setting up log shipping. Log shipping allows the transaction logs from one database
(the source) to be constantly fed to another database (the destination). Keeping the
destination database in synchronization with the source database allows you to
have a standby server, and also provides a way to offload query processing from
the main computer (source server) to read-only destination servers.

The results generated by the maintenance tasks can be written as a report to a text
file, HTML file, or the sysdbmaintplan_history tables in the msdb database. The
report can also be e-mailed to an operator.

Optimizing Navision Tables


Navision includes a feature that you can use to optimize your database tables.

To optimize a table:

1 Click File, Database, Information, and the Database Information window opens.

2 Click Tables and the Database Information (Tables) window opens.

3 Select the table or tables that you want to optimize and click Optimize. You can
select a single table, several tables or all the tables.

What Does Table Optimization Do?


Navision executes the following statement for each index that is maintained on SQL
Server:

CREATE …. INDEX …. WITH DROP_EXISTING

If any SIFT tables are maintained, the zero entries are removed from the SIFT tables.

What Are the Benefits?


The benefits of table optimization include:

· Better performance as a result of defragmenting the indexes.


· Fewer records in the SIFT tables. There is a simple reason for this - Navision does
not delete SIFT records. If you, for example, maintain some SIFT indexes on
temporary tables such as the Warehouse Activity Line table, the SIFT records
are not deleted from the SIFT tables when you delete (process) the records from
the source table.

How Long Does It Take to Optimize?


It takes the same amount of time as it takes to create the indexes from scratch.

Can the Process Be Scheduled on SQL Server?


You can automate the first part using DBCC DBREINDEX or DBCC INDEXDEFRAG. However,
rebuilding SIFT tables cannot be automated easily and is not without its risks.

You can design custom SQL queries that delete the records in which all the 's' (sum)
columns contain zero values from the SIFT tables; a hedged sketch follows below.
However, those queries must be modified whenever you change the design of the SIFT
indexes in Navision.
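
As a hedged illustration only, such a query might look like the one below. The SIFT table
name follows the "<Company>$<table no>$<key no>" convention, and the 's' column shown
is an assumption; always check the actual table and column names in your own database
and test the query on a copy first:

-- Hypothetical example: remove the zero rows from one SIFT table
-- belonging to the G/L Entry table (17), SIFT index 0.
DELETE FROM [CRONUS International Ltd_$17$0]
WHERE s17 = 0   -- repeat the condition for every 's' column in this SIFT table
GO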

Server Performance Counters to Monitor


The following table lists the server performance counters that you should monitor,
together with the recommended action to take when a counter is not at its best value:

Object: Memory
· Available MBytes (SQL Server, TS Servers). Best value: more than 5 MB. If not met:
add more memory, or reserve less memory for SQL Server.
· Pages/sec (SQL Server, TS Servers). Best value: less than 25. If not met: add more
memory, or reserve less memory for SQL Server.

Object: Physical Disk
· Avg. Disk Read Queue Length (SQL Server disks). Best value: less than 2. If not met:
change the disk system.
· Avg. Disk Write Queue Length (SQL Server disks). Best value: less than 2. If not met:
change the disk system.

Object: Processor
· % Processor Time (SQL Server, TS Servers). Best value: 0 to 80. If not met: add more
CPUs.

Object: System
· Processor Queue Length (SQL Server, TS Servers). Best value: less than 2. If not met:
add more CPUs.
· Context Switches/sec (SQL Server, TS Servers, multi-processor). Best value: less than
8000. If not met: set the affinity mask.

Object: Network Interface
· Output Queue Length (SQL Server, TS Servers). Best value: less than 2. If not met:
increase network capacity.

Object: SQL Server Access Methods
· Full Scans/sec (SQL Server). If the value is high: review the Navision C/AL code.
· Page Splits/sec (SQL Server). Best value: 0. If not met: defragment the SQL Server
indexes and review the Navision C/AL keys.

Object: SQL Server Buffer Manager
· Buffer cache hit ratio (SQL Server). Best value: more than 90. If not met: add more
memory.

Object: SQL Server Databases
· Log Growths (SQL Server). Best value: 0 during peak times. If not met: increase and
fix the size of the transaction log.

Object: SQL Server General Statistics
· User Connections (SQL Server).
· Other counters for this object (SQL Server): if they indicate problems, review the
Navision C/AL code.

Object: SQL Server Locks
· Lock counters (SQL Server): if they indicate problems, review the Navision C/AL code.

1.8 USEFUL LINKS


Here is a list of links to SQL Server resources that you might find useful:

SQL Server Home Page

SQL Server 2005 Product Information

SQL Server 2000 Operations Guide

The contents of this guide are based on the knowledge and best practice guidelines
drawn up by Microsoft Consulting Services (MCS), Microsoft’s internal technical
support group, and the SQL Server Development Team.

How To Use the SQLIOStress Utility to Stress a Disk Subsystem Such As SQL Server

Assessing the New Microsoft SQL Server 2000 Resource Kit

Microsoft SQL Server 2000 Resource Kit CD

Best Practices Analyzer Tool for Microsoft SQL Server 2000 1.0

Microsoft SQL Server Best Practices Analyzer is a database management tool that
lets you verify the implementation of best practices on your servers.

How to View SQL Server 2000 Performance Data

Identifying Common Administrative Issues for Microsoft SQL Server 2000

How to Monitor SQL Server 2000 Blocking

Part 2
Performance Troubleshooting
Chapter 2
Performance Problems – An
Introduction

This chapter introduces you to the basic elements that are


covered in the following chapters of this guide.

This chapter contains the following sections:

· Introduction
· The Test Environment
· Client Performance Indicators
· Common Performance Problems

2.1 INTRODUCTION
This section is designed to help you identify performance problems in a Microsoft
Business Solutions–Navision application. It describes how to troubleshoot on both
server options and explains how to use the debugging tools that exist in Navision to
identify performance problems. It also describes how to use the
troubleshooting tools that come with this guide. Furthermore, it contains a brief
description of some of the Microsoft SQL Server tools that you can use.

The following chapters describe some of the most common performance problems
and their causes. The topics covered include:

· The Client Monitor and the Code Coverage tool


· The new performance trouble shooting tools
· Hardware setup and performance
· Setting up the test environment
· Identifying the clients that cause performance problems
· Profiling a task with the Client Monitor
· Identifying the worst server calls and the keys and filters that cause them
· Identifying the tasks that cause deadlocks on Microsoft Business
Solutions–Navision Database Server
· Using the SQL Error Log to identify the clients involved in deadlocks on SQL Server
· How to identify locking problems
· How to set up locking order rules and check whether or not your application follows
these rules
· Identifying index problems on SQL Server
· How to identify bad C/AL NEXT statements on SQL Server
· How to use Excel pivot tables to get an overview of the data in the Client Monitor.

Performance problems can be caused by bottlenecks in the hardware setup or by


problems within Navision. Performance problems in Navision can be caused by, for
example, the way that keys are designed, by the way that keys are used together with
filters, or by the way that tables or records are locked.

This guide is mainly concerned with identifying performance problems that exist within
Navision, even though poor performance can be caused by inadequate or badly
configured hardware. For more information about hardware considerations, see the
section "Hardware Setup" on page 102.

Database Servers
The two database server options for Navision, Navision Database Server and SQL
Server, behave differently both with regard to performance and locking. However, the
methodology and the tools that you can use to identify these bottlenecks are almost
the same.

SQL Server Diagnostic Utility


SQL Server contains a utility called sqldiag that gathers and stores diagnostic
information. This tool is designed to help Microsoft Product Support Services gather
information. If you have any problem with your SQL Server and need help from
support personnel you will probably be asked to provide this information.

For more information about sqldiag, see "SQL Diagnostic Utility" on page 107.

2.2 THE TEST ENVIRONMENT


You will generally need to set up a separate test environment before you can start
troubleshooting and solving the performance problems that exist in a working
installation.

Setting up a test environment means:

1 Setting up a separate database server.

The test server should be set up on a separate computer and not on the computer
that is used by the production system. Using a separate test environment gives you
complete control over the system and over who has access to it. It also means that
the customer can continue to use the production system.

2 Copying the production system database to the test server.

If you are running on SQL Server, use the backup/restore functions in Enterprise
Manager to make a fast copy of the database to the test server. If you are running
on Navision Database Server, you can use the server-based backup program
HotCopy.

3 Warming up the server, to ensure that you get realistic measurements.

You must warm up the test server regardless of which server option you are using.

Warming Up SQL Server


If you have just started SQL Server or if you have just created the database or
company, you must warm up SQL Server by using the database and the company.
This ensures that the system resembles the actual customer installation and means
that you can generate realistic performance measurements.

You only need to run an initial test to warm up SQL Server. When SQL Server is
warmed up, the execution plans for most queries have already been generated and
are ready for use. Furthermore, the most frequently used data is now available in
memory.

When SQL Server is not warmed up, you will, for example, see that inserting,
modifying or deleting the first record in a table that contains keys that have
SumIndexFields associated with them can take up to several seconds to finish. This
would normally be done much faster in a working installation.
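
A simple way to warm up the server is to run a small codeunit that reads through the
largest tables used by your tests and calculates a few FlowFields. The following is a
minimal sketch, assuming two record variables, GLEntry (G/L Entry) and GLAccount
(G/L Account); the table, key and FlowField names are taken from the standard
application and are only illustrative:

  // Read through the G/L Entry table so that SQL Server generates the
  // execution plans and loads the most frequently used pages into memory.
  GLEntry.RESET;
  GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
  IF GLEntry.FINDSET THEN
    REPEAT
    UNTIL GLEntry.NEXT = 0;

  // Calculate a SumIndexField-based FlowField so that the SIFT data is
  // also read into memory.
  GLAccount.RESET;
  IF GLAccount.FINDSET THEN
    REPEAT
      GLAccount.CALCFIELDS(Balance);
    UNTIL GLAccount.NEXT = 0;

Reading through a very large table can itself take some time, so run the warm-up
before you start taking measurements.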

Warming Up Navision Database Server


If you have just started Navision or if you have just opened the database and company
in Navision, you must warm up Navision Database Server before you can generate
realistic performance measurements. Running an initial test will warm up Navision by
ensuring that all the objects you need are available on the client.


Debugging Tools
To ensure that you have the most recent version of the debugging tools, install the
newest version of the Navision client on the computers that you are going to use to
test the installation.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The earlier version of Navision, Navision Financials, does not have a Client Monitor for
the SQL Server Option. The Client Monitor is an essential element in the
troubleshooting procedures described in this material.

If you develop corrections in Navision that you would like to implement in Navision
Financials, you can use the REMID feature from the Navision Upgrade Toolkit to
change the Navision objects before importing them into Navision Financials.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


2.3 CLIENT PERFORMANCE INDICATORS


When you connect a client to Navision Database Server, you can see status
information about server calls that take two or more seconds in the status bar.

Typical server calls that generate status information are:

· Server calls that modify or delete sets of records.
· Server calls that scan an index or an entire table to find some data.
· Server calls that need to lock a record or a table and are forced to wait until other
transactions are committed and release the locks that they placed.

Therefore, you should keep an eye on the indicator in the status bar when you are
trying to identify problematic tasks because this information might be all you need to
break down a performance problem.

However, if you are using the Microsoft SQL Server Option for Microsoft Business
Solutions–Navision, the client’s user interface has no indicators to tell you how much
time is spent on long running tasks. You therefore need some other procedures to
break down a performance problem on SQL Server. However, when clients are
waiting for locks to be released by other clients, you can see this information by using
the session monitor as described in the section "Using the Session Monitor to Locate
the Clients that Cause Performance Problems on SQL Server" on page 77.


2.4 COMMON PERFORMANCE PROBLEMS


Many things can cause poor performance but some of the most common causes are:

· The way that keys are defined combined with the way that they are used in filters or
queries when you want to read data.
· The way keys are defined with SumIndexFields combined with the way that
summing FlowFields are defined.
· The number of keys that are defined with SumIndexFields when running on SQL
Server.

The procedures described in this material will help you identify the places where these
problems occur.

If you are running on SQL Server, you must also be aware of a very specific
performance problem that applies to forms:

· Setting the SourceTablePlacement property to the default value (Saved) will often
make opening forms that display data from tables that contain many records
(1,000,000 or more), for example G/L entries, very slow. To fix this problem, set the
SourceTablePlacement property to First in these forms.

Performance and Locking


Performance problems that are related to specific tasks should always be tested in the
test environment, when no other users are logged on to the database server. This will
help you determine whether the performance problem is related to the task itself, or if
the problem only occurs when the task is executed in combination with other tasks on
the same server.

Performance problems that are caused by a specific task are described in the section
"The Client Monitor" on page 83. Performance problems that are caused by clients
spending time waiting for other clients to release locks on resources that the client in
question wants to place exclusive locks on are described in the section "Locking" on
page 91.

Deadlocks occur when concurrent transactions try to lock the same resources but
don’t lock them in the same order. This can either be solved by always using the same
locking order or by using a “locking semaphore” that will prevent these transactions
from running concurrently. You can find out which resources are causing deadlocks by
following the procedure described in the section "Locking" on page 91.

Hardware
When you are trying to identify any bottlenecks that exist in an installation, you must
also check the hardware that the installation is running on. You should check the
hardware that is being used by both the server and the client computers. For more
information, see the section "Hardware Setup" on page 102.


Getting Some Assistance


Depending on your expertise in the technical areas that are essential for
troubleshooting performance problems, you can choose to follow all the steps
described in this troubleshooting guide or you can stop at some point and give your
results to other experts.

For example, the data that you get by using the Client Monitor (as described in the
next chapter) can easily be transferred to other experts. They will then be able to
identify performance problems solely on the basis of this data, before looking at the
objects from the database and before having access to the data in the database.

Chapter 3
Identifying Performance Problems

This chapter explains how to identify performance problems


in Navision.

This chapter contains the following sections:

· Using the Session Monitor to Locate the Clients that


Cause Performance Problems on Navision Database
Server
· Using the Session Monitor to Locate the Clients that
Cause Performance Problems on SQL Server
· Time Measurements
· The Client Monitor
· Locking

3.1 USING THE SESSION MONITOR TO LOCATE THE CLIENTS THAT CAUSE
PERFORMANCE PROBLEMS ON NAVISION DATABASE SERVER
You use the Session Monitor (Navision Database Server) to identify the clients that
cause performance problems on Navision Database Server.

You must import some helper objects before you can start to identify the clients that
are causing performance problems:

1 Import the Session Monitor (Navision Server).fob file. The objects imported
include form 150010, Session Monitor (Navision Srv).

2 Run form 150010, Session Monitor (Navision Srv):

The Session Monitor (Navision Server) window displays updated information from
the Session table. The information is refreshed every second by default, but you can
change this setting by clicking Monitor, Setup.

By default, the most active sessions in terms of the amount of records scanned are
shown at the top of the list. The Records Scanned field tells you how many records
the database server has scanned in order to find the records that this session wanted.

The sessions with the largest number of scanned records are the ones that should be
investigated first. Follow the guidelines in the other sections to investigate these
sessions.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In the Session Monitor (Navision Server) window, if the value in the
Found/Scanned Ratio field is high, this indicates that the indexes and queries match.
A value of 30-50% is normal, while 3% is low.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


3.2 USING THE SESSION MONITOR TO LOCATE THE CLIENTS THAT CAUSE
PERFORMANCE PROBLEMS ON SQL SERVER
You use the Session Monitor (SQL Server) to locate the clients that cause
performance problems when you are using the SQL Server Option for Navision.

You must import some helper objects before you can start to identify the clients that
are causing performance problems:

1 Ensure that you have installed the client components for SQL Server from the
Microsoft SQL Server CD.

2 Open the Microsoft SQL Server Management Studio (the Query Analyzer tool on
SQL Server 2000) and click New Query, File, Open. Browse to the folder where you
have stored the session monitor tools and open the Session Monitor (SQL
Server).sql file.

3 In the toolbar, select the Navision database that you want to monitor from the drop
down list of available databases.

4 Click Query, Execute. The Session Monitor (SQL Server).sql script drops the
current Session (SQL) view and creates a new view to replace it.

5 In Navision, open the Object Designer and import the Session Monitor (SQL
Server).fob file.

6 Run form 150014, Session Monitor (SQL Server), and click Monitor, Setup. The
Session Monitor Setup (SQL) window opens.

7 Click the Log tab:

8 Enter a check mark in the Log Session Activity check box. Specify a time interval
in the Log Interval (sec) field. If you are only interested in identifying any blocks
that occur, enter a check mark in the Log only Blocks check box.

You must enter a value in the Log Interval (sec) field, for example 15 seconds.
Navision will now log the current level of activity once every 15 seconds. When
Navision logs the activity, it runs through the active sessions and creates one entry
per session. These entries are logged to the Session Information History table.


If you select the Log only Blocks option, Navision only logs the sessions that are
involved in blocking. This includes the sessions that are blocked and the sessions
that are blocking others.

Finding the correct setting for the Log Interval (sec) option is a matter of achieving
the right balance between how accurately you want Navision to log activity, and
how large a performance overhead you are ready to accept. 15 seconds would
seem to give a reasonable balance.

The Session Monitor creates a log entry every time the specified interval has
elapsed and therefore any blocks that occur within the specified interval are not
logged. If you want to make sure that the Session Monitor catches as many blocks
as possible, you can decrease the interval to, for example, 5 seconds or lower. To
simultaneously decrease the performance overhead, select the Log only Blocks
option so that only those sessions involved in a block are logged.

9 In the Session Monitor Setup (SQL) window, click Functions, Start Logging.

The Log Session Activity codeunit runs as a single instance codeunit. This means
that the only way to stop the codeunit is to close and re-open the current company.
However, you can suspend it by removing the check mark from the Log Session
Activity option. The codeunit will still keep running in the background, but it will not
log anything. To resume logging, you must restart the session and start logging
again.

Viewing the Log


As mentioned earlier, the information gathered by the Session Monitor is inserted into
the Session Information History table. You use the Session Information History
window to view this data.

The Session Information History window shows you all the entries that have been
logged. You can analyze this information in Navision or export it to a .csv file and
analyze it in Excel.

To export the log to Excel:

1 Open the Object Designer and run form 90015, Session Information History:


2 Click Functions, Export and in the Options tab specify a file name as well as the
location where you want to save the log file.

Remember to use the extension .csv, for example, Log.csv.

3 Click OK to export the log file.

4 Locate the file and double click it to open it in Excel.

Excel has a limitation of 65,536 rows. If the log contains more entries than this, you
will not be able to open the file in Excel. You can get around this by applying a filter
and only exporting some of the entries. Alternatively, you can delete some of the
entries and then export the remainder. If you think that you are close to the limit of
65,536 records, click Functions, Count to find out how many entries are within the
current filter.

You can also view the information gathered by the Session Monitor in the Session
Monitor (SQL Server) window:

The Session Monitor (SQL Server) window displays updated information from a
view. This view is similar to the one that lies behind the Session table. The Session
Monitor (SQL Server) window tells you which clients are currently connected to the
server as well as the current load on the server. The information is refreshed every
second by default, but you can change this setting by clicking Monitor, Setup and
changing the number in the Refresh Rate (sec) field.

By default, the most active sessions in terms of physical I/O are listed at the top of the
Session Monitor (SQL Server) window. These are the sessions that should be
investigated first. You can also list the sessions according to their memory usage,
because this is also is a good indicator of activity. SQL Server can also give you
information about the CPU usage, but unfortunately this information is not very reliable
on SQL Server 2000.

To investigate these sessions, follow the guidelines described in the following
sections.


The Session Monitor (SQL Server) window also lists information about the clients
that are waiting for locks held by other clients to be released, as well as the identity of
the clients that placed the locks. If you want to concentrate on this area only, look at
and/or filter the fields starting with Blocked (Blocked, Blocked by Connection ID,
Blocked by User ID, Blocked by Host Name).

Deleting Entries
To delete entries from the Session Information History table:

1 Open the Object Designer and open the Session Information History window.

2 Click Functions, Delete Log Entries.

This deletes all the entries within the current filter. If you have not placed a filter, all the
entries are deleted.


3.3 TIME MEASUREMENTS


When you are working with performance problems, we recommend that you make
accurate measurements of the amount of time it takes to perform the tasks in
question.

You can monitor and measure the time a task takes by using a stopwatch or
preferably by using some helper objects.

The ActivityLog.fob file contains some objects that enable you to measure
precisely the time spent on a particular task. These measurements are stored in a
table, enabling you to review the amount of time spent on the task.

To measure the time spent on a task:

1 Import the ActivityLog.fob file and compile the objects that are imported.

2 Use the objects in the same way as they are used in the codeunit contained in
Sample use of Activity Log.fob.
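
The pattern used in the sample codeunit is roughly the one sketched below. The
ActivityLog variable and its StartActivity, LogOperation and EndActivity functions are
illustrative placeholders only; open codeunit 150000, Sample use of Activity Log, to
see the actual object names and function signatures.

  // Illustrative sketch only - the real function names are defined by the
  // objects in ActivityLog.fob and shown in codeunit 150000.
  ActivityLog.StartActivity('Post 100 sales orders');  // records the start date and time
  FOR i := 1 TO 100 DO BEGIN
    PostNextSalesOrder;                                // the task that is being measured
    ActivityLog.LogOperation;                          // counts one operation
  END;
  ActivityLog.EndActivity;                             // records the end time, status,
                                                       // total time and average time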

To see how this codeunit works:

1 Import the Sample use of Activity Log.fob file and open codeunit 150000,
Sample use of Activity Log in the editor to see how the Activity Log is used to
monitor the task in the codeunit.

2 Run codeunit 150000, Sample use of Activity Log. This creates a new line in the
Activity Log.

3 Run form 150000, Activity Log:

The Activity Log window contains the following information:

· The date and time at which the activity was started.


· The date and time at which the activity finished.
· The status of the activity.
· The total time in milliseconds that the activity took.
· The number of operations that the activity involved.
· The average time that each operation took.


· A check mark indicating which session was yours.


· The connection ID of the session that carried out the activity.

Details of the Activity


The Activity Log functionality allows you to see any changes that an activity made to
the database. However, you must first enable this feature. Codeunit 150000, Sample
use of Activity Log shows you how to enable this feature.

After you have enabled the feature and carried out the activity that you are interested
in monitoring, select the activity and click Activity, Table Size Changes to see the
changes that were made to the database.

The Table Size Changes window appears:

This window contains information about the net number of records that were entered
into or deleted from the tables in the database during the activity. Knowing the number
of changes that an activity involves helps you understand the amount of time used by
a particular task.


3.4 THE CLIENT MONITOR


The Client Monitor is an important tool for troubleshooting performance and locking
problems. You can also use it to identify the worst server calls and to identify index
and filter problems in the SQL Server Option. The Client Monitor and the Code
Coverage tool now work closely together allowing you to easily identify, for example,
the code that generated a particular server call.

Using the Client Monitor to Profile a Task


The Client Monitor displays all the details of the server calls made by the current
client, including the time spent on each server call. This makes it an invaluable tool
when you want to analyze a particular task and study the server calls that the task
makes as well as the code that initiates the server calls.

To profile and analyze a given task in Navision using the Client Monitor, you must
have some Client Monitor helper objects:

1 Import the Client Monitor.fob file, including Form 150020, Client Monitor.

2 Compile all of the objects that are imported. This must be done because some of
the field definitions are different on the two database server options.

3 Click Tools, Debugger, Code Coverage to open the Code Coverage window. Start
the Code Coverage tool and then start the Client Monitor just before you are ready
to perform the task that you want to investigate.

4 Perform the task that you want to test.

5 When you have finished the task, stop the Client Monitor and then stop the Code
Coverage tool.

6 Run form 150020, Client Monitor. The raw Client Monitor data uses many lines to
describe a single server call, which makes it difficult to analyze directly. This form
processes the data and displays it in a new window.

The Client Monitor window displays and formats the data that has been gathered by
the Client Monitor so that it can be more easily analyzed. It carries out a kind of cross
tabulation of the operations and parameters and uses one line per server call.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
When you use these tools, make sure that your test tasks are focused on the area that
you are interested in testing. If you spend time doing other tasks, both the Client
Monitor and the Code Coverage tool will fill up with irrelevant information.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

You can analyze the Client Monitor data within Navision, or you can perform a more
detailed analysis by importing the data into pivot tables in Excel.

If you are analyzing a lengthy task that takes an hour or more to run, you should
consider restricting the scope of the task. You can limit the task by applying filters that
will make the task handle less data, or by stopping the task after several minutes. You
can then use the Client Monitor data from the part of the task that was performed as
the basis for your analysis.

Here is an example of the kind of data that you can see in form 150020, Client
Monitor (taken from SQL Server):

The Client Monitor displays the database function calls that are made by the C/AL
code as follows:

Function call in C/AL:   Function Name (+ Search Method) shown in the Client Monitor:

GET                      FIND/NEXT ('=')
FIND('-')                FIND/NEXT ('-')
NEXT                     FIND/NEXT ('>')
ISEMPTY                  ISEMPTY (as long as no MARKEDONLY filter is used)
CALCSUMS                 CALCSUMS
CALCFIELDS               If the FlowField is of type sum: CALCSUMS
                         If the FlowField is of type lookup: FIND/NEXT ('-')
LOCKTABLE                LOCKTABLE
INSERT                   If the table is not locked already: LOCKTABLE
                         INSERT
MODIFY                   If the table is not locked already: LOCKTABLE
                         Often on SQL Server: FIND/NEXT ('=')
                         MODIFY
DELETE                   If the table is not locked already: LOCKTABLE
                         DELETE
MODIFYALL                If the table is not locked already: LOCKTABLE
                         MODIFYALL (as long as validation code isn't executed)
DELETEALL                If the table is not locked already: LOCKTABLE
                         DELETEALL (as long as validation code isn't executed)


Generally, the FIND/NEXT function in the Client Monitor means:

Function Name:   Search Method:   Means:

FINDFIRST                         Find the first record within the current filter using the
                                  current key.
FINDLAST                          Find the last record within the current filter using the
                                  current key.
FINDSET                           Find the set of records that matches the current key and
                                  the current filter.
FIND/NEXT        -                FIND('-') or find the first record (within the current filter
                                  using the current key).
FIND/NEXT        +                FIND('+') or find the last record (within the current filter
                                  using the current key).
FIND/NEXT        >                NEXT or find the record with key values greater than the
                                  current key values (within the current filter using the
                                  current key).
FIND/NEXT        <                NEXT(-1) or find the record with key values less than the
                                  current key values (within the current filter using the
                                  current key).
FIND/NEXT        =                GET or find one record with key values equal to the current
                                  key values (within the current filter).
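
For example, a small piece of C/AL code like the one below (the table, field and key
names are from the standard application and only illustrative) would appear in the
Client Monitor as indicated in the comments:

  Customer.GET('10000');                       // FIND/NEXT, Search Method '='
  Customer.SETRANGE("Country Code",'GB');
  IF Customer.FINDSET THEN                     // FINDSET
    REPEAT
      // process the customer
    UNTIL Customer.NEXT = 0;                   // FIND/NEXT, Search Method '>'
                                               // (when not served from the cache)

  GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
  GLEntry.SETRANGE("G/L Account No.",'1110');
  GLEntry.CALCSUMS(Amount);                    // CALCSUMS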

In the Client Monitor:

· For each FIND/NEXT, the Search Method field tells you what kind of FIND/NEXT
server call it is (see the previous table).
· For each FIND/NEXT, the Search Result field contains a value (the same as the
Search Method field) if data was found. Otherwise, it is blank.
· For each COMMIT, the Commit field normally contains the value 1, indicating that
the transaction was committed. If the transaction was rolled back and not
committed, the value is 0.
· The Elapsed Time (ms) field shows the amount of time that elapsed between the
start and the end of a server call. This means that it gives you the total time spent
on the client, the network and the server.

Note that the information in the Elapsed Time (ms) field tends to be slightly
inaccurate for fast server calls, because the time is measured in steps of roughly
10 ms. For example, if a server call takes about 1 ms, roughly 90% of the calls will have
0 in the Elapsed Time (ms) field and 10% will have 10, so the average is still 1 ms.

· The Table Size (Current) field tells you the amount of records that were in the
table when you started the Client Monitor. This information can help explain why
server calls to a particular table can take such a long time (that is, when the
Elapsed Time (ms) values are high).
· The SQL Status field tells you whether the data was read from the server or from
the client cache.
· A check mark in the Locking field indicates that the server call has locked data. To
focus on locking issues you can place a filter on this field.


The Client Monitor cannot be used to analyze temporary tables because they are
stored on the client and inserts into temporary tables do not involve server calls.

The Client Monitor gathers and displays all the database function calls that are made
by the C/AL code, as well as the server calls that are made indirectly by, for example,
opening a form. The C/AL code that initiates a database function call can be seen in
the Source Object and Source Text fields (these fields are not shown in the previous
picture).

However, this is not the most informative way of viewing the code. To get a more
detailed overview of the code that made a particular database call, select the line in
question in the Client Monitor and click C/AL Code. The Code Overview window
opens displaying the code that made the database function call. This gives you a
quick way of locating the relevant piece of code.

The focus of the Code Overview window is the line in the code that made the
database call. If C/SIDE made the database call, the Code Overview window will
point to the last C/AL code that was executed before the database call was made.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
When the code contains an IF..THEN..ELSE statement, the breakpoint that is
displayed in the Code Overview window will often be slightly inaccurate.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

You can also specify that the information collected by the Code Coverage tool is
stored permanently in a table. This means that the data is always available and you
will not have to profile the task every time you need to see this data. However,
permanently storing this information makes the Client Monitor and the Code Coverage
tool work more slowly.

To store the code coverage information in a table:

1 Run form 150021, Client Monitor Setup.

2 Enter a check mark in the Save Code Coverage Info. field.


The Most Problematic Server Calls


To identify the most problematic server calls:

1 Profile your task as described in “Using the Client Monitor to Profile a Task”.

2 Run form 150020, Client Monitor.

3 Click View, Sort to sort the data in the Client Monitor window. Sorting by Elapsed
Time (ms) in descending order is one of the more useful ways of viewing the data.
The server calls that took the longest time will then be listed at the top. This will help
you identify the most problematic server calls.

After you have identified the problematic server calls, you can optimize the slow
queries that are caused by filters and keys that don’t match (especially on SQL
Server) by using the appropriate keys in the queries or possibly by changing the
existing keys.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Rearranging the fields in a key (for example, by moving the first field in the key to the
end) and changing the references to the key, both in the code and in the properties,
can solve a performance problem. Furthermore, any FlowFields that calculate sums
over the key are guaranteed to keep working as long as all the original fields are left in
the key. If you remove some of the fields from a key, you can cause some FlowFields
that calculate sums to produce run-time errors.

When you are developing an application, you will not encounter problems like the one
described above, unless you enter some pseudo-realistic amounts of data into the
database.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Carefully read the information about performance in the section "Keys, Queries and
Performance" on page 110.

The Overall Picture


You can use the Client Monitor together with Microsoft Excel to analyze the time spent
by tasks that make many server calls (that is, 100+). You must begin by profiling the
task as described in the section "Using the Client Monitor to Profile a Task" on page
83. The data must then be transferred into Excel.

To transfer the data into Excel:

1 Run form 150020, Client Monitor.

2 Click Export and save as a .txt file.

3 In Excel, import the .txt file that you have just saved.

You now have a spreadsheet containing the basic Client Monitor data.


You use the pivot tables in Excel to generate a sorted list of the tables that take the
most time. The pivot table must also list the functions that are used as well as the
search method and the search result for each table. You must also check the server
calls that generated the sums to see the average amount of elapsed time for each
server call.

You can also create new spreadsheets that summarize different operations on various
tables by using the Pivot Table feature.

To create a pivot table:

1 Click Data, PivotTable, PivotChart Report and click Finish in the wizard that
appears.

You can now choose which breakdown of the Client Monitor elements you would
like to analyze. This example uses the typical elements.

2 In the PivotTable window, select the Table Name button and drag it over to the
range that says "Drop Row Fields Here".

3 Repeat this procedure for Function Name, Search Method and Search Result
placing each field to the right of the previous field.

4 Drag Elapsed Time (ms) over to the range that says "Drop Data Items Here".

You can now see the breakdown of timings per table/function etc. summed up.

To list the most important tables first:

1 Double-click the Table Name field heading.

2 Select Advanced.

3 In the AutoSort options, select Descending. In the Using drop-down list select
Sum of Elapsed Time (ms).

To list the most important functions per table first, repeat this procedure for the
Function Name field.

If there are any totals that you do not want to see, right-click the field that contains the
word Total and click Hide.

For more information about pivot tables, see the online help in Excel.


Here is a snapshot of an analysis of elapsed time:

Analysis at Table and Function Level – Sum of Elapsed Time (ms)

Table Name                 Function Name   Search Method   Search Result      Total

Reservation Entry          CALCSUMS        (blank)         (blank)              380
                           DELETE          (blank)         (blank)             1382
                           FIND/NEXT       -               -                    630
                                           -               (blank)              361
                                           +               +                      0
                                           +               (blank)               30
                                           =               =                    140
                                           =               (blank)                0
                                           >               >                     40
                                           >               (blank)             1851
                           INSERT          (blank)         (blank)               20
                           LOCKTABLE       (blank)         (blank)                0
                           MODIFY          (blank)         (blank)             1673
Reservation Entry Total                                                        6507
Calendar Entry             FIND/NEXT       +               +                    490
                                           <               (blank)             1792
Calendar Entry Total                                                           2282

These pivot tables contain information about the amount of time that was spent
performing certain operations. The table contains a table-by-table breakdown of the
time spent on each of the tables that were involved in the operations as well as the
total time used. The tables are listed in descending order of time spent. The table also
lists the functions that were called on each table and the time it took to perform each
operation.

The first table listed is the Reservation Entry table. You can see that a total of 380
milliseconds were spent calculating sums and 1382 milliseconds were spent
performing deletions. A list of the FIND operations follows, which details the
operations that were performed and the amount of time that each operation took.

A total of 20 milliseconds were spent performing inserts into the Reservation Entry
table. A total of 1673 milliseconds were spent modifying data in the Reservation
Entry table.


If time is spent on modifications (INSERT, MODIFY, DELETE, MODIFYALL, DELETEALL)
and the average time spent on modification server calls is high, you should check the
keys in the table. The number and length of the keys greatly influence the time it takes
to make modifications on both database servers. On SQL Server, if the average time
spent on modification server calls is extremely long, check whether or not there are
SumIndexFields in the keys and whether or not the MaintainSIFTIndex property is set
to Yes for these keys. Setting the MaintainSIFTIndex property to No for these keys
can greatly improve the speed of modification server calls, but there will be some loss
of performance for those tasks that generate sums using these keys.


3.5 LOCKING
You can also use the Client Monitor to see whether or not locking prevents concurrent
tasks from being executed at the same time and to identify where deadlocks occur in a
specific multi-user scenario. Before using the Client Monitor to identify potential
locking problems you must import some Client Monitor helper objects:

1 Import the Client Monitor.fob file, including the following forms:

Form 150020, Client Monitor

Form 150024, Client Monitor (Multi-User)

2 Compile all the objects that are imported. This must be done because some of the
field definitions are different on the two database server options.

To identify locking problems:

1 Before trying to identify any locking problems, you must make sure the clocks are
synchronized on all the client machines. You can set up computers running most
Windows operating systems so that their clocks are automatically synchronized
with the time on a server when they log on by using the following command: “net
time \\computername /SET”.

2 Start the Client Monitor on all the computers that are involved in the multi-user test.

3 Perform the tasks that you want to test.

4 Stop the Client Monitor on all the computers.

5 On each client computer, process the common client monitor data by running form
150020, Client Monitor.

6 Run form 150024, Client Monitor (Multi-User) on one of the client computers:

The Client Monitor (Multi-User) window contains information about the
transactions that might have blocked other clients. The COMMIT in the transactions
that might have blocked other clients is shown together with the server calls made
by the clients that are potentially waiting for the COMMIT to be completed. The other
server calls are listed before the COMMIT.


For example, in the window shown here, you can see that the client with
Connection ID 52 has waited almost 4 seconds. This indicates that there might
have been a block.

7 Check the values in the Elapsed Time (ms) field to see if there are any server calls
that are taking longer than normal.

A high value in the Elapsed Time (ms) field indicates that a server call is waiting
for locks to be released. Use the filtering features in Navision to see all the details
of the locking scenarios. The value in the Locking field is Yes when a server call
locks data. You should put a filter on this field to limit the data.

8 If a deadlock has occurred on SQL Server, the SQL Error field in the Client
Monitor (Multi-User) window will show the error message generated by SQL
Server. To see all these lines, set the filter “SQL Error<>’’. These lines are marked
red and bold.

For more information about locking, see the section "Locking in Navision – A
Comparison of Navision Database Server and SQL Server" on page 112.

· You can “optimize” locking by first checking whether a set of records is empty. Only if
it is not empty do you lock and read through the set; if the set is empty, the table is
not used at all, and this removes all the locking problems that are caused by that
table. Such a solution can, for example, be acceptable for the comment lines on a
sales order during posting (see the sketch after this list).
· You can limit the locking on SQL Server by setting the MaintainSIFTIndex property
on a key with SumIndexFields to No.
· In Navision, you can use the SIFTLevelsToMaintain property to more precisely
control the level of performance and locking on SQL Server.
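
A minimal sketch of the first of these techniques, using the sales comment lines as an
example (the table and field names are from the standard application and only
illustrative):

  // Only lock and read the comment lines if there are any.
  // If the set is empty, the table is never locked during posting.
  SalesCommentLine.SETRANGE("Document Type",SalesHeader."Document Type");
  SalesCommentLine.SETRANGE("No.",SalesHeader."No.");
  IF NOT SalesCommentLine.ISEMPTY THEN BEGIN
    SalesCommentLine.LOCKTABLE;
    IF SalesCommentLine.FINDSET THEN
      REPEAT
        // process the comment line
      UNTIL SalesCommentLine.NEXT = 0;
  END;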

Locating the Tasks That Cause Deadlock Problems on Navision Database Server
Two transactions can only cause a deadlock if they both lock some of the same
tables. However, if both of the transactions are defined so that the first lock they place
is on the same table, no deadlock will occur. In other words, a deadlock occurs when
two or more transactions have a conflicting locking order.
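
For example (the table names are only illustrative), if one transaction locks the Sales
Header table before the Sales Line table while another transaction locks them in the
opposite order, the two transactions can deadlock; making both transactions lock
Sales Header first removes the conflict:

  // Transaction A - locks Sales Header first, then Sales Line.
  SalesHeader.LOCKTABLE;
  SalesHeader.GET(SalesHeader."Document Type"::Order,'1001');
  SalesLine.LOCKTABLE;
  SalesLine.SETRANGE("Document Type",SalesLine."Document Type"::Order);
  SalesLine.SETRANGE("Document No.",'1001');
  IF SalesLine.FINDSET THEN;

  // If Transaction B starts by locking Sales Line and only then locks
  // Sales Header, its locking order conflicts with Transaction A and a
  // deadlock is possible. Rewriting Transaction B so that it also locks
  // Sales Header first removes the potential deadlock.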

When you want to identify potential locking problems, you only need to use one client.
You run the tasks on the client that you think might cause locking problems and gather
all of the relevant data in the Client Monitor and then open a special form to see if
there are any potential deadlocks.

To find potential deadlocks on Navision Database Server:

1 Import the Client Monitor.fob file, if you have not already imported it.

2 Compile all the objects that are imported. This must be done because some of the
field definitions are different on the two database server options.

3 Prepare the tasks that you want to run concurrently without any deadlocks
occurring.


4 Open and start the Code Coverage tool and then open and start the Client Monitor.

5 Perform the tasks.

6 Stop the Client Monitor and then stop the Code Coverage tool.

7 Run form 150030, Potential Deadlocks (Navision):

The Potential Deadlocks (Navision Server) window lists all the potential deadlocks
or locking order conflicts that occurred during the tasks that you performed and is
based on an analysis of the locking order that is used in each write transaction that
was carried out. Each line in the window contains information about two transactions
that represent a potential deadlock. These transactions represent a potential deadlock
because they both lock some of the same tables but lock them in a different order.
Sets of transactions that do not contain a potential deadlock are not displayed.

Each line in the window contains the following information:

Field: Contains:

Transaction No. The number of the first of the two transactions that represent a
potential deadlock.

Transaction Description By default, this field contains the name of the object that initiated
the first transaction.

Transaction No. 2 The number of the second transaction.

Transaction Description 2 By default, this field contains the name of the object that initiated
the second transaction.

First Locked Table ID, The ID and the name of the first table that is common to both
First Locked Table Name transactions and is locked by the first transaction.

First Locked Table ID 2, The ID and the name of the first table that is common to both
First Locked Table Name 2 transactions and is locked by the second transaction.

From this form, you can access more detailed information about the locks that were
placed by each transaction, as well as the code that was used.


· To see the details about one of the transactions, select a line in the Potential
Deadlocks (Navision Server) window and click Transaction, Client Monitor,
Transaction 1 or 2. The details are displayed in the Client Monitor.
· To see the locking operations that were performed by one of the transactions,
select a line in the Potential Deadlocks (Navision Server) window and click
Transaction, Client Monitor (Locking Operations Only), Transaction 1 or 2. The
locking operations are displayed in the Client Monitor.
· To see the locking order used by one of the transactions, select a line in the
Potential Deadlocks (Navision Server) window and click Transaction, Locking
Order, Transaction 1 or 2. The locking order used by the transaction you select is
displayed in the Transaction Locking Order window.
· To see the code that made the first server call in the first transaction, select the
appropriate line in the Potential Deadlocks (Navision Server) window and click
C/AL Code 1. The code is displayed in the Code Overview window.

Locking Order Rules on Navision Database Server


As stated in the previous section, a deadlock occurs when two or more transactions
have a conflicting locking order, and no deadlock can occur if the first lock the
transactions place is on the same table.

From this we can deduce that if you have an agreed set of rules that determine the
locking order that must be used in your application, then no deadlocks will occur. The
problem is that agreeing to a set of rules is one thing; adhering to the rules is another
thing entirely. Remembering the rules is not as easy as it sounds, because there could
be a large number of them.

There is, however, a tool that can help you see whether or not your application follows
the locking rules that you have specified.

This involves determining which rules must apply in your application, entering them
into the system and then checking your application to see if it violates the rules or not.

After you have determined the rules that must govern locking order in your application,
you can enter them into the system:

1 Run form 150029, Locking Order Rules:


2 Enter the rules that you want your application to follow. Each entry represents a rule
and you can enter as many rules as you need.

Each rule specifies that one table must be locked before another table. Needless to
say, your rules must not contain any conflicts. The consistency of the rules is checked
when you test your application to see if it follows the rules. If the rules contain a
conflict you will receive an error message.

After you have entered the rules, you can test whether or not your application follows
the rules.

1 Open and start the Code Coverage tool and then open and start the Client Monitor.

2 Perform the tasks that you want to test.

3 Stop the Client Monitor and then stop the Code Coverage tool.

4 Run form 150027, Transactions (Locking Rules).

5 The Transactions (Locking Rules) window appears listing the transactions that
you performed:

· If any of the transactions violated the rules that you specified earlier, a check mark is
displayed in the Locking Rule Violations field.
· To see the C/AL code that broke the locking rule, select the transaction in question
and click C/AL Code. The Code Coverage window appears displaying the relevant
code.
· To see all the operations and tables that were involved in a particular transaction,
select the transaction and click Transaction, Client Monitor.
· To see only the locking operations and the tables that were locked in a particular
transaction, select the transaction and click Transaction, Client Monitor (Locking
Operations Only).
· To see the order in which tables were locked by a particular transaction, select the
transaction and click Transaction, Locking Order.
· To see the locking rules that were violated by a particular transaction, select the
transaction and click Transaction, Locking Rules Violated.


Identifying Locking Order Rules on SQL Server


The previous section explained how to analyze the record locking order of
transactions and compare it with the locking order rules on Navision Database
Server. The procedure is basically the same on SQL Server. As explained earlier, the
locking order rules must be defined in the Locking Order Rules window.

The most practical way of identifying the locking order rules is to focus on the key
procedures and document the order in which they lock the various tables.

To identify the locking order rules for a procedure:

1 Start the Client Monitor.

2 Perform an isolated task (procedure).

3 Stop the Client Monitor.

4 Run form 150025, Transactions.

5 Run form 150026, Transaction Locking Order.

6 Open form 150029, Locking Order Rules.

7 Use the locking order displayed in the Transaction Locking Order window as
manual input for the Locking Order Rules form.

After you have defined the locking order rules, you can perform the same procedure
again. When you are finished, you can check whether or not the second procedure
violated the locking order rules that you have just specified.

To verify that the locking order rules are correct, you must:

1 Start the Client Monitor.

2 Perform the procedure again.

3 Stop the Client Monitor.

4 Run form 150027, Transactions (Locking Rules).

5 In the Transactions (Locking Rules) window, check the Locking Rule
Violations field.

6 Click Transaction, Locking Rules Violations to check the violation.

7 In the Locking Rules Violations window, you can see if and how your transaction
violated the locking order rules. You can now fix the code, amend the locking order
rules or decide that the probability of the two processes running concurrently is
minimal and ignore the violation.

Needless to say, you can run the Code Coverage tool during the test and this will help
you identify the code that is causing the conflicts. However, the Code Coverage tool is
quite a 'heavy' tool and should not be used for multiple transactions, or run for more
than 10 minutes. It takes a long time to perform complex procedures when the Code
Coverage tool is running and it also takes some time to process the information in the
tool afterwards. On the other hand, if you only use the Client Monitor without the Code
Coverage tool you will normally get enough information to identify the code that is
causing the conflict.

In summary, you can run the Client Monitor on its own in a real-life environment
without experiencing any major decrease in performance and still get the information
needed to resolve incorrect locking order problems.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
You must perform all these steps for both server options because the locking order
could look very different in the two options.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The only way to lock a record on SQL Server is to actually read it, while on Navision
Database Server you can use the LOCKTABLE command to dictate and define the
locking order. The LOCKTABLE command on SQL Server Option does not do anything
except flag internally that locks will be placed later when records are read.
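
This difference is illustrated by the following sketch (the table name is only
illustrative):

  ItemLedgEntry.LOCKTABLE;
  // Navision Database Server: this call is enough to dictate the locking order.
  // SQL Server: nothing is locked yet - the call only flags that the records
  // read below must be locked.

  IF ItemLedgEntry.FINDLAST THEN;
  // SQL Server: the lock is actually placed here, when the record is read.
  // A LOCKTABLE immediately followed by a read is therefore a common way of
  // forcing a particular locking order on SQL Server.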

Locating the Clients That Cause Deadlock Problems on SQL Server


When a deadlock occurs, one transaction continues, while the others are aborted. To
find out which clients are involved in a deadlock on SQL Server and which clients are
not stopped, see the section "SQL Server Error Log" on page 105.

Identifying the Keys That Cause Problems on SQL Server


When you are using the SQL Server Option, it is important that any customizations
that you develop contain keys and filters that are designed to run optimally on SQL
Server. We have therefore developed a tool that helps you test your keys and filters in
a development environment and ensure that they conform to the demands made by
SQL Server.

When you want to see if an application contains any keys that might cause problems,
you only need a demonstration database and not a copy of the customer’s database.
Inserting, modifying and deleting records will take a similar amount of time in both
large and small databases. However, the amount of time that it takes to read will be
very different, especially for tables that become very large in the customer’s database.
This means that an analysis based on the Elapsed Time (ms) field in the Client
Monitor is not enough when you are troubleshooting performance problems in a small
database.

To check whether the keys and filters are designed correctly:

1 Open and start the Code Coverage tool and then open and start the Client Monitor.

2 Perform the task that you want to test.

3 Stop the Client Monitor and then stop the Code Coverage tool.


4 Open form 150022, Client Monitor (Key Usage):

Queries with filters that do not use the keys appropriately are shown in this window.
The key that is being used is split into two fields: Good Filtered Start of Key and Key
Remainder. Those fields that are filtered to a single value, but are not used efficiently
on SQL Server because of the selection and ordering of fields in the key that is used,
are shown in the Key Candidate Fields field.

Remember, SQL Server always wants a single-value field as the filter or as the first
field in the filter. For more information about keys and filters, see the section "Keys,
Queries and Performance" on page 110.
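
As an illustration (the table, field and key names below are from the standard
application and only meant as an example), compare these two ways of reading the
item ledger entries for one item:

  // Potentially problematic on SQL Server: the key that is used starts with
  // Entry Type, but only Item No. and Posting Date are filtered, so there is
  // no single-value filter on the first field of the key.
  ItemLedgEntry.SETCURRENTKEY("Entry Type","Item No.","Posting Date");
  ItemLedgEntry.SETRANGE("Item No.",'70000');
  ItemLedgEntry.SETRANGE("Posting Date",StartDate,EndDate);

  // Better: use (or create) a key that starts with the single-value field.
  ItemLedgEntry.SETCURRENTKEY("Item No.","Posting Date");
  ItemLedgEntry.SETRANGE("Item No.",'70000');
  ItemLedgEntry.SETRANGE("Posting Date",StartDate,EndDate);
  IF ItemLedgEntry.FINDSET THEN;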

The information in the Client Monitor (Key Usage) window is sorted by table name
and only displays the queries with filters that can potentially cause problems. You will
therefore have to use your knowledge of the application that you are developing and
the theory behind the design of keys for SQL Server to decide which of the queries
you can ignore and which you will have to modify.

In general, you should:

· Ignore those queries that use small tables that will not grow very large in the
customer’s database. An example of a small table that you can readily ignore is
table 95, G/L Budget Name.
· Focus on the large tables and the tables that will grow rapidly in the customer’s
database.
· Focus on the Key Candidate Fields and the Good Filtered Start of Key fields.
· As mentioned earlier you should look at the Good Filtered Start of Key field. If
this field is empty, check the Key Candidate Fields field and decide whether the
fields shown here would have made a difference if they had been used efficiently.
This is where your understanding of the application will help you.

You need to decide whether the suggested key will make the query run more
efficiently or not. If the suggested filter is a field that contains many different values,
it will probably help.

If you really understand the theory behind the design of efficient queries, you will
know whether or not it makes sense to change the application.


However, if you are uncertain about the theory you will have to test the suggested
query. This means that you must use a database that contains a realistic amount of
data and then test the existing filter and the suggested new filter to see which one
works more efficiently.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Client Monitor (Key Usage) window also gives you information about how a key
works on Navision Database Server. Some keys will give problems on both servers
and others will only be a problem on SQL Server.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Reading Problems on SQL Server


When you are running on SQL Server and are reading data, some NEXT function calls
can generate separate SQL statements instead of using the data that is stored in
cache, and this can cause performance problems. The Client Monitor also contains a
tool that can help you locate the C/AL code that generated these problematic SQL
NEXT statements.

To locate this C/AL code, you must perform the tasks in question, identify the
problematic NEXT statements and then locate the C/AL code that generated these
statements.

1 Import the Client Monitor.fob file, if you have not already imported it.

2 Compile all the objects that are imported. This must be done because some of the
field definitions are different on the two database server options.

3 Prepare the tasks that you want to perform.

4 Open and start the Code Coverage tool and then open and start the Client Monitor.

5 Perform the tasks that you want to test.

6 Stop the Client Monitor and then stop the Code Coverage tool.

7 Run form 150023, Client Monitor (Cache Usage):


The Client Monitor (Cache Usage) window lists the problematic NEXT statements
that were generated by the tasks that you performed. All of the normal NEXT
statements are ignored.

These NEXT statements are problematic because they generate their own SQL
statement and database call to retrieve data from the server and do not use the data
that is already cached on the client. It is difficult to know with certainty why these NEXT
statements behave like this. It might be:

· because C/SIDE is unable to optimize this function.


· a result of the way that the code is written.

However, these NEXT statements only cause problems if they are run repeatedly as
part of a long-running batch job and generate a large number of extra server calls.

To see the C/AL code that generated the SQL NEXT statements, select the line you
are interested in and click C/AL Code. The code that generated the statement is
displayed in the Code Overview window.

To export the data in this window as a text file, click Export.

Chapter 4
Other Issues

This chapter describes some of the other issues that you


must take into consideration when you are trying to identify
and solve performance problems.

This chapter contains the following sections:

· Hardware Setup
· SQL Server Error Log
· Keys, Queries and Performance
· Locking in Navision – A Comparison of Navision
Database Server and Microsoft SQL Server
· Configuration Parameters

4.1 HARDWARE SETUP


It is also possible that poor performance is caused by the hardware that you are using.
There are three aspects of the hardware that you should consider: the server, the
clients and the network.

Hardware Setup for the Database Server


To identify any hardware bottlenecks that may exist on a server, use the Windows
Performance Monitor. You should check the time usage for both the disks and the
CPU.

If the disks cause the bottleneck, you can lessen the problem and improve
performance by:

· Adding more RAM to decrease disk activity, such as swapping.


· Spreading the SQL Server database and log files across more disks.
· Adding one disk controller per disk.
· Installing faster disks.

If the bottleneck is caused by the CPUs, you can improve performance by installing
faster CPUs.

On SQL Server, if the performance problems are only significant when multiple users
are working simultaneously, adding more CPUs will lessen the problem. If the
bottleneck is not caused by the hard disks or the CPUs, it is outside the server and lies
either in the network or on the client computers.

It is always a good idea to expand the search for bottlenecks beyond the server
hardware and try to identify the reasons behind the unacceptable usage of resources.

You can use the Windows Performance Monitor to monitor the following server
performance counters:

Object Name               Counter Name              Instances               Best Values      Recommendation (Best Values not met)

Memory                    Available Mbytes          SQL Server, TS Servers  5MB              Add more memory; reserve less memory
                                                                                             for SQL Server
                          Pages/sec                 SQL Server, TS Servers  <25              Add more memory; reserve less memory
                                                                                             for SQL Server
Physical Disk             Avg Disk Read Queue       SQL Server Disks        <2               Change disk system
                          length
                          Avg Disk Write Queue      SQL Server Disks        <2               Change disk system
                          length
Processor                 % Processor time          SQL Server, TS Servers  0-80             Add more CPUs
System                    Processor Queue Length    SQL Server, TS Servers  <2               Add more CPUs
                          Context Switches/sec      SQL Server, TS Servers  <8000            Set Affinity Mask (SQL Server,
                                                                                             multi-processor)
Network Interface         Output queue length       SQL Server, TS Servers  <2               Increase network capacity
SQL Server Access         Full Scans/sec            SQL Server                               Review Navision C/AL code
Methods
                          Page Splits/sec           SQL Server              0                Defrag SQL Server indexes; review
                                                                                             Navision C/AL keys
SQL Server Buffer         Buffer Cache Hit Ratio    SQL Server              >90              Add more memory
Manager
SQL Server Databases      Log Growths               SQL Server              0 (during peak   Increase and set the size of the
                                                                            times)           transaction log
SQL Server General        User Connections          SQL Server
Statistics
SQL Server Locks          Lock Requests/sec         SQL Server                               Review Navision C/AL code
                          Lock Wait/sec             SQL Server                               Review Navision C/AL code

Hardware Setup for Clients


To identify any hardware bottlenecks that may exist on a client computer, use the
Windows Performance Monitor. You should check the time usage for both the disks
and the CPU.

If the disks on the client computer cause the bottleneck, you can lessen the problem
and improve performance by:

· Adding more RAM to decrease disk activity such as swapping.


· Installing faster disks.

If the bottleneck is the CPU, you can improve performance by changing to a faster
CPU.


If it isn’t the hard disks or the CPU that are causing the bottleneck, the problem (if it is
a hardware problem) is located outside the client and lies either in the network or on
the server.

Network
If neither the database server nor the clients seem to be the bottleneck in a system,
look at the network. To find out if a critical task runs slowly because the network is
slow, run the same task directly on the database server. The difference in speed tells
you the maximum improvement you can hope to achieve by having an optimal
network. You can also create a form based on the Performance virtual table to see
some measurements for the network you are using.

The good thing about hardware problems is that the solution is always hardware. It will
always be possible to improve performance by adding new hardware. The
improvement may only be slight, but it is still an improvement. Unfortunately, this can
give you a false sense of security. Your installation may have some serious problems,
the symptoms of which you are only alleviating by adding new hardware, while the
real source of the problem remains unaltered.


4.2 SQL SERVER ERROR LOG


To monitor deadlocks on SQL Server, enable trace flags 1204 and 3605 (if you are
running SQL Server 2000 and have not upgraded to Service Pack 3) in the Startup
Parameters of SQL Server, and then restart the server. This will generate diagnostic
messages in the SQL Server Error Log.

Trace Flag 1204 – returns the type of locks that are participating in the deadlock and
the current command affected.

Trace Flag 3605 – sends the trace output to the error log. This is always done when
you are running SQL Server 2000 Service Pack 3 or later. (If you start SQL Server
from the command prompt, the output also appears on the screen.)

You should also implement a SQL Server alert to notify an operator when a deadlock
occurs so that an investigation of the deadlock scenario can start immediately.
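
If you prefer not to restart the server, the same trace flags can usually be enabled for the running instance with DBCC TRACEON, and the alert mentioned above can be created with the SQL Server Agent stored procedures. The following T-SQL is only a sketch: the alert and operator names are examples, the operator must already exist in SQL Server Agent, and error 1205 (the deadlock victim message) must be configured to be written to the log before the alert will fire, which varies between SQL Server versions.

-- Enable the deadlock trace flags globally for the running instance.
-- Unlike startup parameters, these settings are lost when the service is restarted.
DBCC TRACEON (1204, 3605, -1)

-- Create an alert on error 1205 (deadlock victim) and notify an operator by e-mail.
EXEC msdb.dbo.sp_add_alert
    @name = N'Navision deadlock detected',
    @message_id = 1205,
    @include_event_description_in = 1

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Navision deadlock detected',
    @operator_name = N'DBA Team',    -- example operator name
    @notification_method = 1         -- 1 = e-mail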

To enable a trace flag:

1 Open SQL Server Configuration Manager.

2 In the left-hand panel, right-click SQL Server 2005 Services and click Open to see
all the services:


3 In the left-hand panel, right-click SQL Server (MSSQLSERVER) and select


Properties to open the SQL Server (MSSQLSERVER) Properties window:

4 In the SQL Server (MSSQLSERVER) Properties window, click the Advanced


tab and expand the Advanced option if necessary.

5 Click the Startup Parameters property and open the drop down list:

6 Enter the trace flags (for example, -T1204) in the drop down list. You must enter a
semicolon after masterlog.ldf and use a space to separate the trace flags.

If your application experiences some locking problems, you can open the SQL error
log and see the types of locks that were placed and the clients that were involved.

To open the SQL error log:

1 Open the Startup Parameters window as described earlier.


2 Use the scroll bar in the Existing Parameters field to see which of the parameters
is the error log. This also tells you the path to where the error log is stored. This is
normally:

C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG.

3 Locate the error log and open it in Notepad.

SQL Diagnostic Utility


SQL Server contains a utility called sqldiag that gathers and stores diagnostic
information and the contents of the query history trace (if it is running). The output file
includes error logs, output from sp_configure and additional version information. If
the query history trace is running when the sqldiag utility is invoked, the trace file will
contain the last 100 SQL events and exceptions.

sqldiag is intended to expedite and simplify information gathering by Microsoft


Product Support Services. If you have any problem with your SQL Server you will be
asked to provide this information.

Running with the default options:

Run sqldiag.exe. The default path is:

SQL Server 2000:

C:\Program Files\Microsoft SQL Server\MSSQL\Binn\sqldiag.exe

SQL Server 2005:

C:\Program Files\Microsoft SQL Server\90\Tools\Binn
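
For example, on a default SQL Server 2005 installation you could run the utility from a command prompt as follows (adjust the path if your instance is installed elsewhere):

cd "C:\Program Files\Microsoft SQL Server\90\Tools\Binn"
sqldiag.exe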


Collecting the results:

The results are collected in a file called SQLdiag.txt and stored in the MSSQL\LOG
folder. The default path is:

SQL Server 2000:

C:\Program Files\Microsoft SQL Server\MSSQL\LOG\SQLdiag.txt

and

SQL Server 2005:

C:\Program Files\Microsoft SQL Server\90\Tools\Binn\SQLDIAG

For more information:

http://msdn.microsoft.com/library/en-us/coprompt/cp_sqldiag_96k9.asp

Typical Performance Problems


The following table lists some of the problems that have been dealt with by Microsoft
Business Solutions support teams:

Hardware

· Badly configured disks: using RAID5; incorrect stripe size; incorrect cache policy; database files and transaction log files on the same disk; too many disks on the same channel; database files not spread across the disks.
· RAM: 4GB RAM on the machine but only 2GB used by SQL Server - you must use Windows Advanced Server and enable the 3GB switch.
· Parallelism: reducing the Max Degree of Parallelism to 1 increased the performance of heavy batch jobs.

Platform

· Maintenance: no maintenance plan; no archiving plans - tables can grow very large over the years.
· Old version of C/SIDE: we recommend that you keep your C/SIDE up to date - there are potentially big performance improvements, and upgrading C/SIDE is nowhere near as costly as upgrading the application.

Application

· SIFT: large 'empty' SIFT tables - optimize the tables to remove the entries that have zero sums or are based on nonexistent source tables. Too many SIFT indexes on hot tables. Too many SIFT levels maintained on big composite indexes. SIFT indexes on 'temporary' tables such as 36, 37, 38, 39 and so on - the record sets in these tables are typically small and only live in the database for several days. SIFT indexes on the Warehouse Activity Line table can cause deadlocks.
· Indexes: too many indexes on 'hot' tables such as the ledger entry tables.
· Users: a user input screen left hanging after processing has been started, thereby locking the tables involved. Renaming an object during working hours, causing massive blocks because the application needs to update (lock) all the relevant tables.
· Business Logic and Timing: enabling "Automatic Cost Posting" and/or "Expected Cost Posting to G/L" in Inventory Setup - these functions extend the processing time of any stock transaction by posting to the General Ledger and locking the G/L Entry table, so it makes more sense to run this as a batch job outside normal working hours. Enabling "Update on Posting" in Analysis Views - this function extends the processing time of any G/L transaction by updating the view and is also better run as a batch job outside normal working hours. Posting to large journals during work hours. Inventory cost adjustment during work hours. Batch posting stock to G/L during work hours. Running MRP during work hours.
· Calcfields: enabling automatic credit limit warnings (but not using the feature) - the code updates a lot of calcfields. Enabling automatic stock out warnings (but not using the feature) - the code updates a lot of calcfields.
· Processing Lengthy Transactions: one lengthy transaction can, for example, fetch all the sales lines, create and release picks, update associated purchase orders and so on. While this activity is running, almost all the other users are blocked.
· Dimensions: too many dimensions. The more dimensions you have, the more indexes and SIFT tables need to be updated and maintained.
· General: enabling the Find as You Type feature. Running ad hoc queries on large tables.


4.3 KEYS, QUERIES AND PERFORMANCE


When you write a query that searches through a subset of the records in a table, you
should always carefully define the keys both in the table and in the query so that
Navision can quickly identify this subset. For example, the entries for a specific
customer will normally be a small subset of a table containing entries for all the
customers.

If Navision can locate and read the subset efficiently, the time it will take to complete
the query will only depend on the size of the subset. If Navision cannot locate and
read the subset efficiently, performance will deteriorate. In the worst-case scenario,
Navision will read through the entire table and not just the relevant subset. In a table
containing 100,000 records, this could mean taking either a few milliseconds or
several seconds to answer the query.

To maximize performance, you must define the keys in the table so that they facilitate
the queries that you will have to run. These keys must then be specified correctly in
the queries.

For example, you would like to retrieve the entries for a specific customer. To do this,
you apply a filter to the Customer No. field in the Cust. Ledger Entry table. In order
to run the query efficiently on SQL Server, you must have defined a key in the table
that has Customer No. as the first field. You must also specify this key in the query.

The table could have these keys:

Entry No.
Customer No.,Posting Date

The query could look like this:

SETCURRENTKEY("Customer No.");
SETRANGE("Customer No.",'1000');
IF FINDSET(FALSE,FALSE) THEN
REPEAT
UNTIL NEXT = 0;
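
To see why the first field of the key matters, it helps to look at the kind of statement this C/AL ends up sending to SQL Server. The statement below is only a simplified illustration, not the exact SQL that C/SIDE generates; the table and column names follow the usual Navision naming convention in which periods are replaced by underscores.

SELECT *
FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Customer No_" = '1000'
ORDER BY "Customer No_", "Posting Date", "Entry No_"

With an index whose first column is Customer No_, SQL Server can seek directly to the subset of rows for customer 1000; without such an index, the filter can only be satisfied by scanning the whole table.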

You should define keys and queries in the same way when you are using Navision
Database Server. However, Navision Database Server can run the same query
almost as efficiently if Customer No. is not the first field in the key. For example, if you
have defined a key that contains Country/Region Code as the first field and
Customer No. as the second field and if there are only a few different country/region
codes used in the entries, it will only take a little longer to run the query.

The table could have these keys:

Entry No.
Country/Region Code, Customer No.,Posting Date

The query could look like this:

SETCURRENTKEY("Country/Region Code","Customer No.");


SETRANGE("Customer No.",'1000');
IF FINDSET(FALSE,FALSE) THEN

110
4.3 Keys, Queries and Performance

REPEAT
UNTIL NEXT = 0;

But SQL Server will not be able to answer this query efficiently and will read through
the entire table.

In conclusion, SQL Server makes stricter demands than Navision Database Server on
the way that keys are defined in tables and on the way they are used in queries.

This section has been copied from the Application Designer’s Guide.


4.4 LOCKING IN NAVISION – A COMPARISON OF NAVISION DATABASE SERVER AND SQL SERVER

The following information explains the differences and similarities in the way that
locking is carried out in the two database options for Navision, Navision Database
Server and SQL Server.

Important
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The following information only covers the default transaction type UpdateNoLocks for
the SQL Server Option for Navision. For information about the other transaction types,
see the C/SIDE Reference Guide online Help.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Both Server Options


At the beginning of a transaction, the data that you read will not be locked. This means
that reading data will not conflict with transactions that modify the same data. If you
want to ensure that you are reading the latest data from a table, you must lock the
table before you read it.

Locking Single Records


Normally, you do not need to lock a record before you read it, even though you may want to
modify or delete it afterwards. When you try to modify or delete the record, you will get
an error message if another transaction has modified or deleted the record in the
meantime. You receive this error message because C/SIDE checks the timestamp
that it keeps on all of the records in a database and detects that the timestamp on the
copy you have read is different from the timestamp on the modified record in the
database.
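
A minimal C/AL sketch of this behavior, assuming Cust is a record variable for the Customer table and the customer number is only an example:

Cust.GET('10000');                     // read the record without locking it
Cust."Credit Limit (LCY)" := 50000;
Cust.MODIFY;                           // fails with a run-time error if another transaction
                                       // modified or deleted the record after the GET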

Locking Record Sets


Normally, you lock a table before reading a set of records in that table if you want to
read these records again later to modify or delete them. You must lock the table to
ensure that another transaction does not modify these records in the meantime.

If you do not lock the table, you will not receive an error message, even if the records
have been modified by other transactions that were carried out while you were reading
them.

Minimizing Deadlocks
To minimize the number of deadlocks that occur, you must lock records and tables in
the same order for all transactions. You can also try to locate areas where you lock
more records and tables than you actually need, and then diminish the extent of these
locks or remove them completely. This can prevent conflicts on these records and
tables.
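
As a minimal sketch of the "same order" rule, suppose two codeunits both need to work on the G/L Entry and Cust. Ledger Entry tables (the tables, and the GLEntry and CustLedgEntry record variables, are examples only). If every transaction always locks the tables in the same agreed order, the circular waits that cause deadlocks between them cannot occur:

// Agreed locking order used by every transaction: 1) G/L Entry, 2) Cust. Ledger Entry
GLEntry.LOCKTABLE;
CustLedgEntry.LOCKTABLE;
// ... read and modify records in both tables ...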

If this does not prevent deadlocks, you can, as a last resort, lock more records and
tables to prevent transactions from running concurrently.


If you cannot prevent the occurrence of deadlocks by programming, you must run the
deadlocking transactions separately.

Locking in Navision Database Server


Data that is not locked will be read from the same snapshot (version) of the database.

If you call LOCKTABLE or a modifying function (for example, INSERT, MODIFY or
DELETE) on a table, the whole table will be locked.

Locking Record Sets


As mentioned previously, you will normally lock a table before reading a set of records
in that table if you want to read these records again later to modify or delete them.
With Navision Database Server you can choose to lock the table with
LOCKTABLE(TRUE,TRUE) after reading the records for the first time instead of locking
with LOCKTABLE before reading the records for the first time.

When you try to modify or delete the records, you will receive an error message if
another transaction has modified the records in the meantime.

You will also receive an error message if another transaction has inserted a record
into the record set in the meantime. However, if another transaction has deleted a
record from the record set in the meantime, you will not be able to notice this change.
The purpose of locking with LOCKTABLE(TRUE,TRUE) after reading the records for the
first time is to postpone the table lock that Navision Database Server puts on the table.
This improves concurrency.

Locking in SQL Server


When data is read without locking, you will get the latest (possibly uncommitted) data
from the database.

If you call Rec.LOCKTABLE, nothing will happen right away. However, when data is
read from the table after LOCKTABLE has been called, the data will be locked.

If you call INSERT, MODIFY or DELETE, the specified record will be locked
immediately. This means that two transactions, which insert, modify or delete
separate records in the same table will not conflict. Furthermore, locks will also be
placed whenever data is read from the table after the modifying function has been
called.

It is also important to note that even though SQL Server initially puts locks on single
records, it can also choose to escalate a single record lock to a table lock. It will do so
if it determines that the overall performance will be improved by not having to set locks
on individual records. The improvement in performance must outweigh the loss in
concurrency that this excessive locking causes.

If you specify what record to read, for example, by calling Rec.GET, that record will be
locked. This means that two transactions, which read specific, but separate, records in
a table will not cause conflicts.
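
A small C/AL sketch of this, assuming Rec is a record variable and '10000' is an example primary key value:

Rec.LOCKTABLE;              // no lock is placed yet
IF Rec.GET('10000') THEN BEGIN
  // the read after LOCKTABLE locks this single record
  // ... change fields ...
  Rec.MODIFY;
END;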


If you browse a record set (that is, read sequentially through a set of records), for
example, by calling Rec.FIND('-') or Rec.NEXT, the record set (including the
empty space) will be locked as you browse through it. However, the locking
implementation used in SQL Server will also cause the record before and the record
after this record set to be locked. This means that two transactions, which just read
separate sets of records in a table, will cause a conflict if there are no records
between these two record sets. When locks are placed on a record set, other
transactions cannot put locks on any record within the set.

Note that C/SIDE decides how many records to retrieve from the server when you ask
for the first or the next record within a set. C/SIDE then handles subsequent reads
with no additional effort, and fewer calls to the server will give better performance. In
addition to improving performance, this means that you cannot precisely predict when
locks are set when you browse.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Navision tables that have keys defined for SumIndexFields cause additional tables to
be created in SQL Server to support SIFT functionality. One table is created for each
key that contains SumIndexFields. When you modify a Navision table that has keys
defined for SumIndexFields, modifications can be made to these SQL Server tables.
When this happens, there is no guarantee that two transactions can modify different
records in the Navision table without causing conflicts.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Locking Differences in the Code


A typical use of LOCKTABLE(TRUE,TRUE) in Navision Database Server is shown in
the first example below. The equivalent code for the SQL Server Option is shown in
the second example, and the code that works on both servers is shown in the third.

The RECORDLEVELLOCKING property is used to detect whether record level locking is


being used. If this is the case, then you are using the SQL Server Option for Navision.
This is currently the only server that supports record level locking. This section has
been copied from the Application Designer’s Guide.

Navision Database Server:

IF Rec.FIND('-') THEN
  REPEAT
  UNTIL Rec.NEXT = 0;
Rec.LOCKTABLE(TRUE,TRUE);
IF Rec.FIND('-') THEN
  REPEAT
    Rec.MODIFY;
  UNTIL Rec.NEXT = 0;

SQL Server:

Rec.LOCKTABLE;
IF Rec.FINDSET(FALSE,FALSE) THEN
  REPEAT
  UNTIL Rec.NEXT = 0;
IF Rec.FINDSET(TRUE) THEN
  REPEAT
    Rec.MODIFY;
  UNTIL Rec.NEXT = 0;

Both Server Types:

IF Rec.RECORDLEVELLOCKING THEN
  Rec.LOCKTABLE;
IF Rec.FINDSET(FALSE,FALSE) THEN
  REPEAT
  UNTIL Rec.NEXT = 0;
IF NOT Rec.RECORDLEVELLOCKING THEN
  Rec.LOCKTABLE(TRUE,TRUE);
IF Rec.FINDSET(TRUE) THEN
  REPEAT
    Rec.MODIFY;
  UNTIL Rec.NEXT = 0;


4.5 CONFIGURATION PARAMETERS


You can configure a Navision database by creating a configuration parameter table in
SQL Server and entering parameters into it that determine some of the behavior of
Navision when it is using this database.

In the database create a table, owned by dbo:

CREATE TABLE [$ndo$dbconfig] (config VARCHAR(512) NOT NULL)

GRANT SELECT ON [$ndo$dbconfig] TO public

(You can add additional columns to this table, if necessary. The length of the config
column should be large enough to contain the necessary configuration values, as
explained below, but need not be 512.)

There is one record in this table for each parameter that is required.

The following sections describe the parameters that you can enter into this table.

Index Hinting
It is possible to force SQL Server to use a particular index when executing queries for
FINDFIRST, FINDLAST, FINDSET and GET statements. This can be used as a
workaround when SQL Server's Query Optimizer picks the wrong index for a query.

Index hinting can help avoid situations where SQL Server’s Query Optimizer chooses
an index access method that requires many page reads and generates long-running
queries with response times that vary from seconds to several minutes. Selecting an
alternative index can give instant 'correct' query executions with response times of
milliseconds. This problem usually occurs only for particular tables and indexes that
contain certain data spreads and index statistics.

In the rare situations where it is necessary, you can direct Navision to use index
hinting for such problematic queries. When you use index hinting, Navision adds
commands to the SQL queries that are sent to the server. These commands bypass
the normal decision making of SQL Server's Query Optimizer and force the server to
choose a particular index access method.

Note
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This feature should only be used after all the other possibilities have been exhausted,
for example, updating statistics, optimizing indexes or re-organizing column order in
indexes.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The index hint syntax is:

IndexHint=<Yes,No>;Company=<company name>;Table=<table name>;
Key=<keyfield1,keyfield2,...>;Search Method=<search method list>;Index=<index id>


Each parameter keyword can be localized in the "Driver configuration parameters"
section of the .stx file.

The guidelines for interpreting the index hint are:

· If IndexHint=No, the entry is ignored.
· All the keywords must be present or the entry is ignored.
· If a given keyword value cannot be matched, the entry is ignored.
· The values for the company, table, key fields and search method must be
surrounded by double-quotes to delimit names that contain spaces, commas etc.
· The table name corresponds to the name supplied in the Object Designer (not the
Caption name).
· The key must contain all the key fields that match the required key in the Keys
window in the Table Designer.
· The search method contains a list of the search methods used in FIND statements,
which must be one of '-', '+', '=', or '!' (for the C/AL GET function).

The index ID corresponds to a SQL Server index for the table: 0 represents the
primary key; all other IDs follow the number included in the index name for all the
secondary keys. Use the SQL Server command sp_helpindex to get information
about the index ID associated with indexes on a given table. In this example we are
looking for index information about the Item Ledger Entry table:

sp_helpindex 'CRONUS International Ltd_$Item Ledger Entry'
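
The output of sp_helpindex includes the columns index_name, index_description and index_keys. For a Navision table, the secondary keys typically appear with names ending in $1, $2 and so on, and that trailing number is the index ID to use in the hint. The rows below only illustrate the shape of the output (the index_description column is omitted); they are not actual values from a database:

index_name                                        index_keys
CRONUS International Ltd_$Item Ledger Entry$0     Entry No_
CRONUS International Ltd_$Item Ledger Entry$3     Item No_, Variant Code, ...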

When Navision executes a query, it checks whether or not the query is for the
company, table, current key and search method listed in one of the IndexHint entries.
If it is, it will hint the index for the supplied index ID in that entry.

Note that:

· If the company is not supplied, the entry will match all the companies.
· If the search method is not supplied, the entry will match all the search methods.
· If the index ID is not supplied, the index hinted is the one that corresponds to the
supplied key. This is probably the desired behavior in most cases.
· If the company/table/fields are renamed or the table's keys redesigned, the
IndexHint entries must be modified manually.

Here are a few examples that illustrate how to add an index hint to the table by
executing a statement in Query Analyzer:

EXAMPLE 1

INSERT INTO [$ndo$dbconfig] VALUES ('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')


This will hint the use of the $3 index of the CRONUS International Ltd_$Item Ledger Entry
table for FIND('-') and FIND('+') statements when the Item No.,Variant Code key is set as
the current key for the Item Ledger Entry table in the CRONUS International Ltd. company.

EXAMPLE 2

INSERT INTO [$ndo$dbconfig] VALUES ('IndexHint=No;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')

The index hint entry is disabled.

EXAMPLE 3

INSERT INTO [$ndo$dbconfig] VALUES ('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=')

This will hint the use of the Item No.,Variant Code index of the CRONUS International Ltd_$Item
Ledger Entry table for FIND('-') and FIND('+') statements when the Item No.,Variant
Code key is set as the current key for the Item Ledger Entry table in the CRONUS International
Ltd. company. This is probably the way that the index-hinting feature is most commonly used.

EXAMPLE 4

INSERT INTO [$ndo$dbconfig] VALUES ('IndexHint=Yes;Company=;Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')

This will hint the use of the $3 index of the CRONUS International Ltd_$Item Ledger Entry
table for FIND('-') and FIND('+') statements when the Item No.,Variant Code key is set as
the current key for the Item Ledger Entry table for all the companies (including a non-company
table with this name) in the database.

EXAMPLE 5

INSERT INTO [$ndo$dbconfig] VALUES ('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method=;Index=3')

This will hint the use of the $3 index of the CRONUS International Ltd_$Item Ledger Entry
table for every search method when the Item No.,Variant Code key is set as the current key for the
Item Ledger Entry table in the CRONUS International Ltd. company.

Lock Granularity
When Navision is reading data from tables, it places forced ROWLOCK hints by default.
These ROWLOCK hints prevent SQL Server from automatically determining the
granularity (row, page or table) of the locks that it places. This can lead to a high
locking overhead on the server, even though concurrency is optimal.

To allow SQL Server to determine the granularity of the locks that it places, the
DefaultLockGranularity parameter can be used in the database configuration
table.

The syntax of the DefaultLockGranularity parameter is:

DefaultLockGranularity=<Yes,No>

When the parameter is Yes, SQL Server will choose the granularity of the locks that it
places. When the parameter is No, Navision will override SQL Server and place
ROWLOCKs.
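
Following the same pattern as the index hint entries, the parameter is added as a row in the configuration table. For example (assuming the [$ndo$dbconfig] table has been created as described at the beginning of this section):

INSERT INTO [$ndo$dbconfig] VALUES ('DefaultLockGranularity=Yes')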

This section has been copied from the Application Designer’s Guide.

Appendix A
Object List

This appendix contains a list of all the objects in the fob files
described in this manual.

A.1 OBJECT LIST


The following tables list all the objects contained in the .fob files described in this
document:

Session Monitor (Navision Server).fob

Table 150010 Session Monitor Setup

Table 150011 Session Information

Form 150010 Session Monitor (Navision Server)

Form 150011 Session Monitor Lines

Form 150012 Session Monitor Setup (SQL)

Form 150013 Sessions

Codeunit 150010 Session Monitor Mgt.

Session Monitor (SQL Server).fob

Table 150012 Session Monitor Setup (SQL)

Table 150013 Session Information (SQL)

Table 150014 Session (SQL)

Form 150014 Session Monitor (SQL Server)

Form 150015 Session Monitor Lines (SQL)

Form 150016 Session Monitor Setup (SQL)

Form 150017 Sessions (SQL Server)

Codeunit 150011 Session Monitor Mgt. (SQL Server)

ActivityLog.fob

Table 150000 Activity Log Entry

Table 150001 Table Size Change

Form 150000 Activity Log

Form 150001 Table Size Changes

Sample Use of Activity Log.fob

Codeunit 150000 Sample Use of Activity Log

Client Monitor.fob

Table 150020 Client Monitor

Table 150021 Client Monitor Setup

Table 150022 Code Coverage Line


Table 150023 Transaction

Table 150024 Transaction Lock

Table 150025 Locking Rule

Table 150026 Potential Deadlock

Table 150027 Locking Rule Violation

Form 150020 Client Monitor

Form 150021 Client Monitor Setup

Form 150022 Client Monitor (Key Usage)

Form 150023 Client Monitor (Cache Usage)

Form 150024 Client Monitor (Multi-User)

Form 150025 Transactions

Form 150026 Transaction Locking Order

Form 150027 Transactions (Locking Rules)

Form 150028 Locking Rule Violations

Form 150029 Locking Order Rules

Form 150030 Potential Deadlocks (Navision)

Form 150031 C/AL Code – Code Coverage

Form 150032 C/AL Code – Adv. Code Coverage

Report 150020 Code Coverage

Dataport 150020 Export Client Monitor

Dataport 150021 Export/Import Code Coverage

Dataport 150022 Import/Export Locking Rules

Codeunit 150020 Client Monitor Mgt.

Codeunit 150021 Client Monitor to C/AL Code

Codeunit 150022 Transaction Mgt.

Codeunit 150023 Transaction to C/AL Code

Codeunit 150024 Transaction Lock to C/AL Code

Codeunit 150025 Locking Order Rules Mgt.

Codeunit 150027 Deadlock to C/AL Code

Codeunit 150028 Deadlock to C/AL Code 2

Index Defrag Tool.fob

Table 50090 Index Defrag

Table 50091 SQL IO Setup

Table 50092 Index Defrag Recommendations

Table 50093 SQL Databases


Table 50094 Index Fragmentation

Table 50095 DBCC Statement

Table 50096 SQL Connection Setup - ID

Form 50090 Index Defrag Card

Form 50091 SQL IO Setup

Form 50092 Index Defrag List

Form 50093 SQL Databases List

Form 50094 Index Defrag Recommendations

Form 50095 Index Defrag Record Info

Form 50096 SQL Connection Setup - ID

Form 50097 SQL Connection Edit Pwd - ID

Report 50090 Index Defrag Results

Report 50091 Index Defrag Recommendation

Codeunit 50090 IndexDefrag Management

Codeunit 50091 SQL Script Management

Key Information Tool.fob

Table 50070 Keys

Table 50071 Tables

Table 50072 Key Fields

Table 50073 SIFT Levels

Table 50074 SIFT Records

Table 50075 Key Information Setup

Table 50076 SIFT Key Records

Table 50077 SQL Connection Setup

Form 50070 Key Information

Form 50071 Key List

Form 50072 Table List

Form 50073 SIFT Level List

Form 50074 Key Field List

Form 50075 Key Information Setup

Form 50076 Key Information Field List

Form 50077 SQL Connection Setup

Form 50078 SQL Connection Edit Pwd

Report 50070 Key Information

Report 50071 Key Field Information


Codeunit 50070 Key Information

Codeunit 50071 SQL Information

Codeunit 50072 Key Field Information

NDST.fob

Table 72000 sp_spaceused file line

Table 72001 Space Used Version

Form 72000 Space Used Version Lines

Form 72001 Space Used Version List

Form 72005 DB Sizing Card

Codeunit 72000 Snapshot from files (SQL)

Codeunit 72001 Make Spreadsheet

Codeunit 72003 Snapshot from DB Info

INDEX

A
activity log
  time measurements  81

C
client
  hardware setup  103
Client Monitor
  cache usage  100
  exporting data to Excel  87
  Find/Next function  85
  key usage  98
  locking  91
  next statements  100
  profiling a task  83
Clustered
  key property  21

D
database property
  LockTimeout  48
database server
  hardware setup  102
dbcc show_statistics
  SQL Server command  19
deadlocks
  minimizing  112
  Navision Database Server  92

F
Find/Next function
  Client Monitor  85
FINDSET
  looping with  10

H
hardware setup
  client  103
  database server  102

I
index hinting
  examples  117
  SQL Server  116

K
key property
  Clustered  21
  SQLIndex  21
key usage
  Client Monitor  98
Keys
  Key Information Tool  27
keys and queries
  defining  110

L
locking
  Client Monitor  91
  differences in the code  114
  record sets  112
  record sets on Navision Database Server  113
  single records  112
  SQL Server  113
locking and deadlocks
  on SQL Server  48
locking order rules
  Navision Database Server  94
  SQL Server  96
LockTimeout
  database property  48
looping with FINDSET  10

M
maintenance plan
  SQL Server  61
monitoring performance counters
  SQL Server  63

N
Navision Database Server
  deadlocks  92
  locking order rules  94
  locking record sets  113
  warming up  70
Navision Database Sizing Tool  34–44
  estimating database size for a new system  35
  factors that affect database growth  43
  forecasting database growth  39
  scope of  34
Navision indexes
  optimizing on SQL Server  20
Navision tables
  optimizing on SQL Server  61

P
performance
  keys and queries  110
Performance Monitor
  counters to monitor  102
performance problems
  common causes of  73
  locking  73
profiling a task
  Client Monitor  83
property
  MaintainSIFTIndex  15
  SIFTLevelsToMaintain  15

R
reading problems
  SQL Server  99
record sets
  locking  112

S
Session Monitor
  Navision Database Server  76
  SQL Server  77
Session Monitor (SQL Server)
  logging blocks  77
SIFT
  best practices on SQL Server  46
SIFT buckets
  minimizing the number of  22
  on SQL Server  15
SIFT tables
  optimizing on SQL Server  20
sp_helpindex
  SQL Server command  16
sp_updatestats
  SQL Server command  55
SQL Server
  best practices when using SIFT  46
  diagnostic utility  107
  error log  105
  FIND statements  8
  FINDFIRST  9
  FINDLAST  9
  FINDSET  10
  improving selectivity  21
  index fragmentation  56
  index hinting  116
  index hinting, examples  117
  indexes on hot tables  20
  locking  113
  locking and deadlocks  48
  locking order rules  96
  maintenance plan  61
  max number of processors supported  6
  max physical memory supported  7
  maximum capacity specifications  4
  monitoring errors  105
  monitoring performance counters  63
  optimizing Navision indexes and SIFT tables  20
  optimizing Navision tables  61
  reading problems  99
  SIFT buckets  15
  SIFT indexes, maintaining  15
  sp_helpindex  16
  typical performance problems  108
  understanding Navision indexes  12
  updating statistics  55
  useful SQL Server commands  16
  warming up  70
SQL Server 2000
  using PINTABLE  52
SQL Server commands
  dbcc show_statistics  19
  sp_helpindex  16
  sp_updatestats  55
  update statistics  55
SQL Server Option
  keys that cause problems  97
  performance counters to monitor  102
SQL Server tools
  index structures  25
  indexes per table  22, 24
  number of indexes per table  24
  number of SIFT buckets  26
  number of SIFT indexes  26
  sqldiag  107
SQLIndex
  key property  21

T
test environment
  setting up  70
time measurements
  activity log  81
Tools
  Index Defrag tool  56–60
  Key Information tool  27–34
  Navision Database Sizing tool  34–44

U
update statistics
  SQL Server command  55

W
warming up
  Navision Database Server  70
  SQL Server  70
Windows Performance Monitor
  counters to monitor  102
