
JAMPANI TECHNOLOGIES SURESH CHOWDARY 1

1. HISTORY OF SQL SERVER
2. SOFTWARE & HARDWARE REQUIREMENTS
3. INSTALLATION, CONFIGURATIONS & CHECKS
4. TOOLS
5. DATABASES: DESIGNING & ALTERING A NEW DATABASE
6. ARCHITECTURE OF SQL SERVER
7. SERVER STARTUP PROCESS
8. SECURITIES
9. RECOVERY MODELS & COMPATIBILITY
10. SQL SERVER AGENT
11. BACKUPS & RESTORE
12. DTS INTRODUCTION
13. IMPORT & EXPORT
14. REPLICATION
15. LOG SHIPPING
16. DATABASE MIRRORING
17. CLUSTERING
18. DATABASE MAIL CONFIGURATION
19. PERFORMANCE TUNING
20. UPGRADING

HISTORY OF SQL SERVER

1. SQL Server is an RDBMS (Relational Database Management System).
2. It is a product of Microsoft Corporation.
3. It is user friendly.
4. SQL Server is case insensitive by default.
5. SQL Server is platform dependent; it is compatible only with Windows.

SQL Server was first released in 1989 (in partnership with Sybase); Microsoft has developed it independently since 1994.

9246843869, 040 66750496



Editions and Components of SQL Server 2005:

Installation requirements can vary widely, depending on your application needs. The different
editions of SQL Server 2005 accommodate the unique performance, runtime, and price
requirements of organizations and individuals. Which SQL Server 2005 components you install
will also depend on your organizational or individual needs.
You can use all editions of SQL Server 2005 in production environments except SQL Server
2005 Developer Edition and SQL Server 2005 Evaluation Edition. The following paragraphs
describe the editions of SQL Server 2005.

SQL Server 2005 Enterprise Edition (32-bit and 64-bit)


Enterprise Edition scales to the performance levels required to support the largest
enterprise online transaction processing (OLTP), highly complex data analysis, data
warehousing systems, and Web sites. Enterprise Edition’s comprehensive business
intelligence and analytics capabilities and its high availability features such as failover
clustering allow it to handle the most mission critical enterprise workloads. Enterprise
Edition is the most comprehensive edition of SQL Server and is ideal for the largest
organizations and the most complex requirements.

SQL Server 2005 Evaluation Edition (32-bit and 64-bit)


SQL Server 2005 is also available in a 180-day Evaluation Edition for 32-bit or 64-bit
platforms. SQL Server Evaluation Edition supports the same feature set as SQL Server
2005 Enterprise Edition. You can upgrade SQL Server Evaluation Edition for production
use.

SQL Server 2005 Standard Edition (32-bit and 64-bit)


SQL Server 2005 Standard Edition is the data management and analysis platform for
small- and medium-sized organizations. It includes the essential functionality needed for
e-commerce, data warehousing, and line-of-business solutions. Standard Edition’s
integrated business intelligence and high availability features provide organizations with
the essential capabilities needed to support their operations. SQL Server 2005 Standard
Edition is ideal for the small- to medium-sized organization that needs a complete data
management and analysis platform.

SQL Server 2005 Workgroup Edition (32-bit only)


SQL Server 2005 Workgroup Edition is the data management solution for small
organizations that need a database with no limits on size or number of users. SQL Server
2005 Workgroup Edition can serve as a front-end Web server, or for departmental or
branch office operations. It includes the core database features of the SQL Server
product line, and is easily upgradeable to SQL Server 2005 Standard Edition or SQL

Server 2005 Enterprise Edition. SQL Server 2005 Workgroup Edition is an ideal entry-
level database that is reliable, robust, and easy-to-manage.

SQL Server 2005 Developer Edition (32-bit and 64-bit)


SQL Server 2005 Developer Edition lets developers build any type of application on top
of SQL Server. It includes all of the functionality of SQL Server 2005 Enterprise Edition,
but is licensed for use as a development and test system, not as a production server.
Developer Edition is an ideal choice for independent software vendors (ISVs),
consultants, system integrators, solution providers, and corporate developers who build
and test applications. You can upgrade SQL Server 2005 Developer Edition for production use.

SQL Server 2005 Express Edition (32-bit only)


The SQL Server Express database platform is based on Microsoft SQL Server 2005. It is
also a replacement for Microsoft Desktop Engine (MSDE). Integrated with Microsoft
Visual Studio 2005, SQL Server Express makes it easy to develop data-driven
applications that are rich in capability, secure in storage, and fast to deploy.

SQL Server Express is free and can be redistributed (subject to agreement), and
functions as the client database, as well as a basic server database. SQL Server Express
is an ideal choice for independent software vendors (ISVs), server users, non-
professional developers, Web application developers, Web site hosts, and hobbyists
building client applications. If you need more advanced database features, SQL Server
Express can be seamlessly upgraded to more sophisticated versions of SQL Server.

SQL Server Express also offers additional components that are available as part of
Microsoft SQL Server 2005 Express Edition with Advanced Services (SQL Server
Express). In addition to the features of SQL Server Express, SQL Server Express with
Advanced Services contains the following features:

• SQL Server Management Studio Express (SSMSE), a subset of SQL Server Management Studio.

• Support for full-text catalogs.

• Support for viewing reports via Reporting Services.

SQL Server 2005 Mobile Edition (32-bit only)


SQL Server Mobile is the compact database that extends enterprise data management
capabilities to devices. SQL Server Mobile is capable of replicating data with Microsoft
SQL Server 2005 and Microsoft SQL Server 2000, letting users maintain a mobile data
store that is synchronized with the primary database. SQL Server Mobile is the only SQL
Server edition that provides relational database management capabilities for smart
devices.

Using SQL Server 2005 with an Internet Server

On an Internet server, such as a server running Internet Information Services (IIS), you will
typically install the SQL Server 2005 client tools. Client tools include the client connectivity
components used by an application connecting to an instance of SQL Server.

Using SQL Server 2005 with Client/Server Applications


You can install just the SQL Server 2005 client components on a computer running client/server
applications that connect directly to an instance of SQL Server. A client components installation is
also a good option if you administer an instance of SQL Server on a database server, or if you
plan to develop SQL Server applications.

The client components option installs the following SQL Server features: Command prompt tools,
Reporting Services tools, connectivity components, programming models, management tools,
development tools, Books Online, sample databases, and sample applications.

SQL SERVER 2005 COMPONENTS:

Notification Services Enhancements: Notification Services is a new platform for building highly
scaled applications that send and receive notifications. Notification Services can send timely,
personalized messages to thousands or millions of subscribers using a wide variety of devices.

Reporting Services Enhancements: Reporting Services is a new server-based reporting platform
that supports report authoring, distribution, management, and end-user access.

New Service Broker: Service Broker is a new technology for building database-intensive
applications that are secure, reliable, and scalable. Service Broker provides message queues
the applications use to communicate requests and responses.

Database Engine Enhancements: The Database Engine introduces new programmability
enhancements such as integration with the Microsoft .NET Framework and Transact-SQL
enhancements, new XML functionality, and new data types. It also includes improvements to
the scalability and availability of databases.

Data Access Interfaces Enhancements: SQL Server 2005 introduces improvements in the
programming interfaces used to access data in SQL Server databases. For example, the SQL
Native Client data access technology is new, and the .NET Framework Data Provider for SQL
Server, also referred to as SqlClient, is enhanced.

Analysis Services Enhancements (SSAS): Analysis Services introduces new management tools,
an integrated development environment, and integration with the .NET Framework. Many new
features extend the data mining and analysis capabilities of Analysis Services.

Integration Services Enhancements: Integration Services introduces a new extensible
architecture and a new designer that separates job flow from data flow and provides a rich set
of control flow semantics. Integration Services also provides improvements to package
management and deployment, along with many new packaged tasks and transformations.

Replication Enhancements: Replication offers improvements in manageability, availability,
programmability, mobility, scalability, and performance.

Tools and Utilities Enhancements: SQL Server 2005 introduces an integrated suite of
management and development tools that improve the ease of use, manageability, and
operations support for large-scale SQL Server systems.

Protocols of SQLSERVER 2005:

When an application communicates with the SQL Server Database Engine, the application
programming interfaces (APIs) exposed by the protocol layer format the communication using a
Microsoft-defined format called a tabular data stream (TDS) packet. Net-Libraries on both the
server and client computers encapsulate the TDS packet inside a standard communication
protocol, such as TCP/IP or Named Pipes. The following protocols are available:

• Shared Memory: The simplest protocol to use, with no configurable settings. Clients using the
Shared Memory protocol can connect only to a SQL Server instance running on the same
computer, so this protocol is not useful for most database activity.
• Named Pipes: A protocol developed for local area networks (LANs). A portion of memory is
used by one process to pass information to another process, so that the output of one is the
input of the other.
• TCP/IP: The most widely used protocol over the Internet. TCP/IP can communicate across
interconnected networks of computers with diverse hardware architectures and operating
systems. It includes standards for routing network traffic and offers advanced security features.
• Virtual Interface Adapter (VIA): A specialized protocol that works with VIA hardware.

SQL Server 2005 also introduces a new concept for defining SQL Server connections: the
connection is represented on the server end by a TDS endpoint. During setup, SQL Server
creates an endpoint for each of the four Net-Library protocols supported by SQL Server, and if
the protocol is enabled, all users have access to it. For disabled protocols, the endpoint still exists
but cannot be used. An additional endpoint is created for the dedicated administrator connection
(DAC), which can be used only by members of the sysadmin fixed server role.
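The TDS framing described above can be illustrated with a short sketch. Per the [MS-TDS] specification, every TDS packet begins with an 8-byte header: packet type, status, big-endian length (which includes the header itself), SPID, packet ID, and window. The helper names below are our own; only the field layout comes from the spec.

```python
import struct

# 8-byte TDS packet header per [MS-TDS]: Type, Status, Length (big-endian,
# includes the header itself), SPID, PacketID, Window.
TDS_HEADER = struct.Struct(">BBHHBB")

def frame_tds(packet_type: int, payload: bytes, spid: int = 0, packet_id: int = 1) -> bytes:
    """Wrap a payload in a TDS header (status 0x01 = end of message)."""
    length = TDS_HEADER.size + len(payload)
    return TDS_HEADER.pack(packet_type, 0x01, length, spid, packet_id, 0) + payload

def parse_tds_header(packet: bytes) -> dict:
    """Split a TDS packet into its header fields and payload."""
    ptype, status, length, spid, pkt_id, window = TDS_HEADER.unpack(packet[:8])
    return {"type": ptype, "status": status, "length": length,
            "spid": spid, "packet_id": pkt_id, "payload": packet[8:length]}

# Example: frame a SQL batch (TDS type 0x01); batch text is UTF-16LE on the wire.
packet = frame_tds(0x01, "SELECT 1".encode("utf-16-le"))
header = parse_tds_header(packet)
```

The Net-Libraries then hand this framed packet to whichever transport (Shared Memory, Named Pipes, TCP/IP, or VIA) the endpoint uses; the TDS layer itself is transport-agnostic.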

SOFTWARE REQUIREMENTS

OS & SQL SERVER 2005 EDITIONS COMPATIBILITY:

Operating System Requirements (32-Bit):

Columns: Ent = Enterprise Edition (1); Dev = Developer Edition; Std = Standard Edition;
Wkg = Workgroup Edition; Exp = Express Edition and Express with Advanced Services;
Eval = Evaluation Edition.

Operating system                                   Ent(1)    Dev       Std       Wkg       Exp       Eval
Windows 2000 (no service pack)                     No        No        No        No        No        No
Windows 2000 Professional SP4 (2)(4)               No        Yes       Yes       Yes       Yes       Yes
Windows 2000 Server SP4 (2)                        Yes       Yes       Yes       Yes       Yes       Yes
Windows 2000 Advanced Server SP4 (2)               Yes       Yes       Yes       Yes       Yes       Yes
Windows 2000 Datacenter Edition SP4 (2)            Yes       Yes       Yes       Yes       Yes       Yes
Windows XP Embedded                                No        No        No        No        No        No
Windows XP Home Edition SP2                        No        Yes       No        No        Yes       No
Windows XP Professional Edition SP2 (4)            No        Yes       Yes       Yes       Yes       Yes
Windows XP Media Edition SP2                       No        Yes       Yes       Yes       Yes       Yes
Windows XP Tablet Edition SP2                      No        Yes       Yes       Yes       Yes       Yes
Windows 2003 Server SP1                            Yes       Yes       Yes       Yes       Yes       Yes
Windows 2003 Enterprise Edition SP1                Yes       Yes       Yes       Yes       Yes       Yes
Windows 2003 Datacenter Edition SP1                Yes       Yes       Yes       Yes       Yes       Yes
Windows 2003 Web Edition SP1                       No        No        No        No        Yes       No
Windows Small Business Server 2003 Standard SP1    Yes       Yes       Yes       Yes       Yes       Yes
Windows Small Business Server 2003 Premium SP1     Yes       Yes       Yes       Yes       Yes       Yes
Windows 2003 64-Bit Itanium Datacenter SP1         No        No        No        No        No        No
Windows 2003 64-Bit Itanium Enterprise SP1         No        No        No        No        No        No
Windows 2003 64-Bit X64 Standard SP1               WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)
Windows 2003 64-Bit X64 Datacenter SP1             WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)
Windows 2003 64-Bit X64 Enterprise SP1             WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)  WOW64(3)
Windows XP x64 Professional 2003                   No        WOW64     WOW64     WOW64     WOW64     No
(1) SQL Server 2005 Evaluation Edition supports the same feature set as SQL Server 2005
Enterprise Edition, but SQL Server 2005 Enterprise Edition is not supported on all of the
operating systems that support Evaluation Edition.
(2) You can download Windows 2000 SP4 from the Microsoft Web site. Service packs for
Windows 2000 Datacenter Edition must be obtained through the original equipment
manufacturer (OEM).
(3) These editions of SQL Server 2005 can be installed to the Windows on Windows (WOW64)
32-bit subsystem of a 64-bit server.
(4) You can install Microsoft SQL Server Books Online, client tools, and some legacy tools for
SQL Server 2005 Enterprise Edition on Windows 2000 Professional SP4 and Windows XP SP2.
Client tools include SQL Server Management Studio, Business Intelligence Development
Studio, and the SQL Server 2005 software development kit. Legacy tools include the Data
Transformation Services Runtime and SQL-DMO.

Reporting Services, which is installed as part of SQL Server Express with Advanced Services,
will not install on operating systems that do not include Internet Information Services (IIS).

The following limitations or issues affect installations on supported operating systems:

• Native Web Service (SOAP/HTTP) support is only available for instances of SQL Server
2005 running on Windows 2003.

• Individual topics in Microsoft SQL Server 2005 Integration Services (SSIS) programming,
Analysis Management Objects (AMO), and ADOMD.NET documentation may indicate support
for earlier versions of Windows, such as Windows 98, Windows ME, or Windows NT 4.0.

However, for this release, these three programming interfaces are only supported on Windows
XP, Windows 2000, and Windows 2003.
SQL Server 2005 failover clusters require Microsoft Cluster Server (MSCS) on at least one
node of your server cluster. MSCS is only supported if it is installed on a hardware
configuration that has been tested for compatibility with the MSCS software.

Supported Clients (32-Bit):


SQL Server 2005 32-bit client components can be installed on Windows 2000 Professional SP4
or later.

Note: This release supports Tabular Data Stream (TDS) 4.2 client connectivity through legacy
MDAC/DB-Library, not by using new SQL Server 2005 features.

Operating System Requirements (64-Bit):

IA64 editions = Enterprise (1), Developer, Standard, and Evaluation Editions built for IA64 (2);
X64 editions = the same editions built for X64 (1)(3).

Operating system                              IA64 editions   X64 editions   Express w/ Adv. Services
Windows 2003 64-Bit Itanium Datacenter SP1    Yes (4)         No             No
Windows 2003 64-Bit Itanium Enterprise SP1    Yes (4)         No             No
Windows 2003 64-Bit X64 Standard SP1          No              Yes (4)        WOW64 (4)
Windows 2003 64-Bit X64 Datacenter SP1        No              Yes (4)        WOW64 (4)
Windows 2003 64-Bit X64 Enterprise SP1        No              Yes (4)        WOW64 (4)
Windows XP x64 Professional 2003              No              Yes (4)        WOW64
(1) SQL Server 2005 Evaluation Edition supports the same feature set as SQL Server 2005
Enterprise Edition, but Enterprise Edition is not supported on all of the operating systems that
support Evaluation Edition.
(2) IA64 = Intel Itanium architecture.
(3) X64 = AMD architecture / Intel Extended Systems architecture.
(4) Tools run native/WOW64. For more information on WOW64, see Extended System Support.

Extended System Support


SQL Server 2005 64-bit versions include support for extended systems, also known as Windows
on Windows (WOW64). WOW64 is a feature of 64-bit editions of Microsoft Windows that allows
32-bit applications to execute natively in 32-bit mode. Applications function in 32-bit mode even
though the underlying operating system is running on the 64-bit platform.

Supported Clients (64-Bit)


SQL Server 2005 64-bit client components can be installed on Windows 2003 (64-bit).

• WINDOWS INSTALLER IS REQUIRED FOR SQL SERVER 2005 INSTALLATION

• IIS (INTERNET INFORMATION SERVICES) IS REQUIRED FOR REPORTING SERVICES

HARDWARE REQUIREMENTS

SQL Server 2005 (64-bit): Enterprise Edition (4), Developer Edition, and Standard Edition

Processor type (1):
    IA64: Itanium processor or higher
    X64: AMD Opteron, AMD Athlon 64, Intel Xeon with EM64T support, or Intel Pentium IV
    with EM64T support
Processor speed (2):
    IA64: minimum 1 GHz; recommended 1 GHz or more
    X64: minimum 1 GHz; recommended 1 GHz or more
Memory (RAM) (3):
    IA64: minimum 512 MB; recommended 1 GB or more; maximum 32 TB
    X64: minimum 512 MB; recommended 1 GB or more; maximum OS maximum
(1) System Configuration Checker (SCC) will block Setup if the processor type requirement is
not met.
(2) SCC will warn the user but will not block Setup if the minimum or recommended processor
speed check is not met.
(3) SCC will warn the user but will not block Setup if the minimum or recommended RAM check
is not met. Memory requirements are for this release only, and do not reflect additional memory
requirements of the operating system. SCC verifies the memory available when Setup starts.
(4) SQL Server 2005 Evaluation Edition supports the same feature set as SQL Server 2005
Enterprise Edition.

SQL Server 2005 (32-bit): Processor type (1), Processor speed (2), Memory (RAM) (3)

Enterprise (4), Developer, and Standard Editions:
    Processor: Pentium III-compatible processor or higher
    Speed: minimum 600 MHz; recommended 1 GHz or higher
    Memory: minimum 512 MB; recommended 1 GB or more; maximum operating system maximum
Workgroup Edition:
    Processor: Pentium III-compatible processor or higher
    Speed: minimum 600 MHz; recommended 1 GHz or higher
    Memory: minimum 512 MB; recommended 1 GB or more; maximum operating system maximum
Express Edition:
    Processor: Pentium III-compatible processor or higher
    Speed: minimum 500 MHz; recommended 1 GHz or higher
    Memory: minimum 192 MB; recommended 512 MB or more; maximum operating system maximum
Express Edition with Advanced Services:
    Processor: Pentium III-compatible processor or higher
    Speed: minimum 600 MHz; recommended 1 GHz or higher
    Memory: minimum 512 MB; recommended 1 GB or more; maximum operating system maximum
(1) System Configuration Checker (SCC) will block Setup if the requirement for processor type
is not met.
(2) SCC will warn the user but will not block Setup if the minimum or recommended processor
speed check is not met. No warning will appear on multiprocessor machines.
(3) SCC will warn the user but will not block Setup if the minimum or recommended RAM check
is not met. Memory requirements are for this release only, and do not reflect additional memory
requirements of the operating system. SCC verifies the memory available when Setup starts.
(4) SQL Server Evaluation Edition supports the same feature set as SQL Server 2005
Enterprise Edition.

Hard Disk Space Requirements (32-Bit and 64-Bit)


During installation of SQL Server 2005, Windows Installer creates temporary files on the system
drive. Before you run Setup to install or upgrade to SQL Server 2005, verify that you have 1.6 GB
of available disk space on the system drive for these files. This requirement applies even if you
install SQL Server components to a non-default drive.

Actual hard disk space requirements depend on your system configuration and the applications
and features you choose to install. The following table provides disk space requirements for SQL
Server 2005 components.

Feature                                                              Disk space requirement
Database Engine and data files, Replication, and Full-Text Search    150 MB
Analysis Services and data files                                     35 MB
Reporting Services and Report Manager                                40 MB
Notification Services engine, client, and rules components           5 MB
Integration Services                                                 9 MB
Client Components                                                    12 MB
Management Tools                                                     70 MB
Development Tools                                                    20 MB
SQL Server Books Online and SQL Server Mobile Books Online           15 MB
Samples and sample databases                                         390 MB
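As a quick sanity check, the per-feature figures above can be totaled for a planned feature set. The numbers below are copied straight from the table; the helper itself is a throwaway sketch, not part of any SQL Server tooling.

```python
# Disk space per feature, in MB, taken from the table above.
DISK_MB = {
    "Database Engine, Replication, Full-Text Search": 150,
    "Analysis Services and data files": 35,
    "Reporting Services and Report Manager": 40,
    "Notification Services components": 5,
    "Integration Services": 9,
    "Client Components": 12,
    "Management Tools": 70,
    "Development Tools": 20,
    "Books Online": 15,
    "Samples and sample databases": 390,
}

def required_mb(features):
    """Sum the installed size of the chosen features."""
    return sum(DISK_MB[f] for f in features)

# Installing everything in the table comes to 746 MB. Note this is separate
# from the 1.6 GB of temporary space Setup needs on the system drive.
total = required_mb(DISK_MB)
```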

Install SQL Server 2005 (Setup)

The Microsoft SQL Server 2005 Installation Wizard is Windows Installer-based, and provides a
single feature tree for installation of all SQL Server 2005 components:

• Database Engine

• Analysis Services

• Reporting Services

• Notification Services

• Integration Services

• Replication

• Management Tools

• Connectivity Components

• Sample databases, samples, and SQL Server 2005 documentation

Process Screenshots:

1. You need to create a service account, so create a user account named SqlServer, and make
it a member of the Administrators local group. You can perform this task using one of these tools:
   • On a Windows member server or on Windows XP, use Computer Management.
   • On a Windows domain controller, use Active Directory Users and Computers.

2. Insert the SQL Server CD, and wait for the auto menu to open.
3. Under Install, choose Server Components, Tools, Books Online, and Samples.
4. You will then be asked to read and agree with the end user license agreement (EULA);
check the box to agree, and click Next.

5. If your machine does not have all the prerequisite software installed, the setup will install
them for you at this time. Click Install if you are asked to do so. When complete, click Next.

6. Next you will see a screen telling you that the setup is inspecting your system's configuration
again, and then the welcome screen appears. Click Next to continue.
7. Another, more in-depth system configuration screen appears, letting you know whether
any configuration settings will prevent SQL Server from being installed. You need to
repair errors (marked with a red icon) before you can continue. You can optionally repair
warnings (marked with a yellow icon), which will not prevent SQL Server from installing.
Once you have made any needed changes, click Next.

8. After a few configuration setting screens, you will be asked for your product key. Enter
it, and click Next.
9. On the next screen, you need to select the components you want to install. Check the
boxes next to the SQL Server Database Services option and the Workstation Components,
Books Online and Development Tools option.

10. Click the Advanced button to view the advanced options for the setup.
11. Expand Documentation, then Samples, then Sample Databases. Click the button next to
Sample Databases, select Entire Feature Will Be Installed on Local Hard Drive, and then click
Next.

12. On the Instance Name screen, choose Default Instance, and click Next.

13. On the next screen, enter the account information for the service account you
created in step 1. You will be using the same account for each service. When
finished, click Next.


14. On the Authentication Mode screen, select Mixed Mode, enter a password for the
sa account, and click Next.

15. Select the Latin1_General collation designator on the next screen, and click Next.


16. On the following screen, you can select to send error and feature usage information
directly to Microsoft. You will not be enabling this function here. So, leave the defaults,
and click Next.


17. On the Ready to Install screen, review your settings, and then click Install.
18. The setup progress appears during the install process. When the setup is finished (which
may take several minutes), click Next.
19. The final screen gives you an installation report, letting you know whether any errors
occurred and reminding you of any post-installation steps to take. Click Finish to complete
your install.
20. Reboot your system if requested to do so.

CHECKS AFTER INSTALLATION

1. Services Check:

You have completed this task when you have a running instance of SQL Server 2005 installed
on your system. To verify this, select Start > All Programs > Microsoft SQL Server 2005 >
Configuration Tools > SQL Server Configuration Manager. Select SQL Server 2005 Services,
and check the icons. If the icon next to the SQL Server (MSSQLSERVER) service is green,
your installation is a success.

SHORTCUT: Start Menu > Run > type SQLServerManager.msc

Check SQL Server Configuration Manager to see whether your services are running after
installation.

2. Tools Check: Verify that all SQL Server 2005 tools are working properly.
3. System Databases Check: Check the status of all system databases.
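On Windows, the service state behind the green icon can also be read from the command line with `sc query MSSQLSERVER`. The sketch below parses that command's text output; the sample string mirrors the usual `sc query` layout, but treat the exact format as an assumption and compare against your own machine's output.

```python
import re

def service_state(sc_query_output: str) -> str:
    """Extract the textual state (e.g. RUNNING, STOPPED) from `sc query` output."""
    match = re.search(r"STATE\s*:\s*\d+\s+(\w+)", sc_query_output)
    if not match:
        raise ValueError("no STATE line found")
    return match.group(1)

# Sample text in the shape `sc query MSSQLSERVER` typically prints.
sample = """
SERVICE_NAME: MSSQLSERVER
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
        WIN32_EXIT_CODE    : 0  (0x0)
"""
```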

NOTE:


1. With SQL Server 2005, you can install up to 50 instances on a single machine.

2. With SQL Server 2000, you can install up to 16 instances on a single machine.

INSTANCE: A fresh copy of a SQL Server installation. There are two types of instances:

1. Default Instance: The name is assigned by the host (the instance takes the machine name).

2. Named Instance: We have to assign a name explicitly.
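The naming difference matters when clients connect: a default instance is addressed by the host name alone, while a named instance is addressed as HOST\INSTANCE. A small illustration (the function name is ours, not a SQL Server API):

```python
def server_address(host, instance=None):
    """Build the server name a client would put in a connection string.

    Default instance: just the machine name.
    Named instance: machine name, backslash, the explicitly assigned name."""
    return host if instance is None else f"{host}\\{instance}"

default = server_address("DBSERVER01")            # default instance
named = server_address("DBSERVER01", "SQL2005")   # named instance
```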

SQLSERVER SERVICES:

SQL Server Agent Service:

SQL Server Agent is a Microsoft Windows service that executes scheduled administrative
tasks, which are called jobs. SQL Server Agent uses SQL Server to store job information.
Jobs contain one or more job steps. Each step contains its own task, for example, backing up
a database. SQL Server Agent can run a job on a schedule, in response to a specific event, or on
demand. For example, if you want to back up all the company servers every weekday after hours,
you can automate this task. Schedule the backup to run after 22:00 Monday through Friday; if the
backup encounters a problem, SQL Server Agent can record the event and notify you.
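The "after 22:00, Monday through Friday" example above can be expressed as a small scheduling predicate. This is only an illustration of the window logic; real Agent schedules are stored in msdb and created through Management Studio or Transact-SQL, not client code.

```python
from datetime import datetime

def in_backup_window(when):
    """True if `when` falls in the example window: weekdays at or after 22:00.

    datetime.weekday() numbers Monday as 0 and Friday as 4."""
    return when.weekday() <= 4 and when.hour >= 22

# Monday 22:30 is inside the window; Saturday night and Monday 21:59 are not.
monday_night = in_backup_window(datetime(2005, 11, 7, 22, 30))
saturday_night = in_backup_window(datetime(2005, 11, 5, 23, 0))
```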

NOTE:
By default, the SQL Server Agent service is disabled when SQL Server 2005 is installed unless
the user explicitly chooses to auto start the service.
To automate administration, follow these steps:
1. Establish which administrative tasks or server events occur regularly and whether these
tasks or events can be administered programmatically. A task is a good candidate for
automation if it involves a predictable sequence of steps and occurs at a specific time or in
response to a specific event.
2. Define a set of jobs, schedules, alerts, and operators by using SQL Server Management
Studio, Transact-SQL scripts, or SQL Management Objects (SMO).
3. Run the SQL Server Agent jobs you have defined.

NOTE:
For the default instance of SQL Server, the SQL Server Agent service is named
SQLSERVERAGENT. For named instances, the SQL Server Agent service is named
SQLAgent$instancename.

SQL Server Browser Service:

The SQL Server Browser program runs as a Windows service. SQL Server Browser listens for
incoming requests for Microsoft SQL Server resources and provides information about SQL
Server instances installed on the computer. SQL Server Browser contributes to the following
actions:
• Browsing a list of available servers
• Connecting to the correct server instance
• Connecting to dedicated administrator connection (DAC) endpoints
For each instance of the Database Engine and SSAS, the SQL Server Browser service
(sqlbrowser) provides the instance name and the version number. SQL Server Browser is
installed with Microsoft SQL Server 2005, and provides this service for previous versions of SQL
Server that are running on that computer, starting with Microsoft SQL Server 7.0.
SQL Server Browser can be configured during setup or by using the Surface Area Configuration
tool. It can be managed using SQL Server Configuration Manager. By default, the SQL Server
Browser service starts automatically:

• When upgrading an installation.
• When installing side-by-side with an instance of SQL Server 2000.
• When installing on a cluster.
• When installing a named instance of SQL Server 2005 Enterprise Edition, Standard
Edition, or Workgroup Edition.
• When installing a named instance of Analysis Services.
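Under the hood, the Browser service answers discovery requests on UDP port 1434 using the SQL Server Resolution Protocol ([MS-SQLR]); its reply is a flat string of semicolon-separated key/value pairs, with each instance's block terminated by ";;". The parser below handles that reply shape; the sample reply is fabricated for illustration, not captured from a live server.

```python
def parse_ssrp_reply(reply):
    """Turn an SSRP response string into a list of per-instance dicts.

    The reply interleaves keys and values, e.g.
    'ServerName;HOST;InstanceName;SQL2005;Version;9.00.1399;tcp;1433;;'
    so even-positioned tokens are keys and odd-positioned tokens are values."""
    instances = []
    for chunk in reply.split(";;"):
        tokens = chunk.strip(";").split(";")
        if len(tokens) < 2:
            continue  # skip empty trailing chunks
        instances.append(dict(zip(tokens[0::2], tokens[1::2])))
    return instances

# A fabricated two-instance reply: the default instance plus a named one.
reply = ("ServerName;DBSERVER01;InstanceName;MSSQLSERVER;Version;9.00.1399;tcp;1433;;"
         "ServerName;DBSERVER01;InstanceName;SQL2005;Version;9.00.1399;tcp;1434;;")
found = parse_ssrp_reply(reply)
```

This is exactly the information ("the instance name and the version number") the section above says the Browser provides to clients browsing for servers.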

SQL Server Notification Services:

Microsoft SQL Server Notification Services is a platform for developing and deploying
applications that generate and send notifications to subscribers. The notifications generated are
personalized, timely messages that can be sent to a wide range of devices, and that reflect the
preferences of the subscriber.
Subscribers create subscriptions to notification applications. A subscription is an expressed
interest in a specific type of event. For example, subscriptions might express the following
preferences: "Notify me when my stock price reaches $70.00," or "Notify me when the strategy
document my team is writing is updated."
A notification can be generated and sent to the subscriber as soon as a triggering event occurs. A
notification can also be generated and sent on a predetermined schedule specified by the
subscriber.
Notifications can be sent to a wide range of devices. For example, a notification can be sent to a
subscriber's cellular phone, personal digital assistant (PDA), Microsoft Windows Messenger, or e-
mail account. Because these devices often accompany the subscriber, notifications are ideal for
sending important information.
Notification applications are valuable for many reasons, including the following:
• Notification applications enable you to send critical information to customers, partners,
and employees. The notifications can contain links to a Web site to retrieve more
information or to acknowledge receipt of the information.
• Notification applications enhance and strengthen your relationships with customers by
providing more customized and timely services to them.
• Notification applications help increase your revenue by making it easier for customers
to initiate business transactions with you.
• Notification applications help make your employees more productive by providing them
with the information they need, whenever and wherever they need it.
• Notification applications allow you to communicate with mobile subscribers over a wide
variety of devices.

With Notification Services you can build and deploy applications quickly, and scale the
applications to support millions of subscribers if desired. Notification Services consists of:
• A simple yet powerful Notification Services programming framework that enables you to
quickly create and deploy notification applications. You can develop applications using
XML or Notification Services Management Objects (NMO).
• A reliable, high-performance, scalable engine that runs notification applications. The
Notification Services engine is built on the Microsoft .NET Framework and SQL Server
2005.

SQL Server Integration Services:

Microsoft SQL Server 2005 Integration Services (SSIS) is a platform for building high
performance data integration solutions, including extraction, transformation, and load (ETL)
packages for data warehousing.
Integration Services includes graphical tools and wizards for building and debugging packages;
tasks for performing workflow functions such as FTP operations, for executing SQL statements,
or for sending e-mail messages; data sources and destinations for extracting and loading data;
transformations for cleaning, aggregating, merging, and copying data; a management service, the
Integration Services service, for administering Integration Services; and application programming
interfaces (APIs) for programming the Integration Services object model.

Integration Services replaces Data Transformation Services (DTS), which was first introduced as
a component of SQL Server 7.0.

Full Text Search Service:

SQL Server Full Text Search service is a specialized indexing and querying service for
unstructured text stored in SQL Server databases. The full text search index can be created on
any column with character based text data. It allows for words to be searched for in the text
columns. While it can be performed with the SQL LIKE operator, using SQL Server Full Text
Search service can be more efficient. Full Text Search (FTS) allows for inexact matching of the
source string, indicated by a Rank value which can range from 0 to 1000 - a higher rank means a
more accurate match. It also allows linguistic matching ("inflectional search"), i.e., linguistic
variants of a word (such as a verb in a different tense) will also be a match for a given word (but
with a lower rank than an exact match). Proximity searches are also supported, i.e., if the words
searched for do not occur in the sequence they are specified in the query but are near each
other, they are also considered a match. T-SQL exposes special operators that can be used to
access the FTS capabilities.
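The operators mentioned above can be sketched as follows. This is an illustrative example only: the Documents table, its Content column, and the index/key names are hypothetical, and it assumes a full-text catalog and index have already been created on the table.

```sql
-- Hypothetical table with a full-text index on the Content column.

-- Inflectional search: matches "write", "wrote", "writing", etc.
SELECT DocID, Title
FROM Documents
WHERE CONTAINS(Content, 'FORMSOF(INFLECTIONAL, "write")');

-- Proximity search: matches rows where the two words occur near each other.
SELECT DocID, Title
FROM Documents
WHERE CONTAINS(Content, 'backup NEAR restore');

-- CONTAINSTABLE exposes the RANK value (0 to 1000) described above.
SELECT d.DocID, d.Title, k.RANK
FROM Documents AS d
JOIN CONTAINSTABLE(Documents, Content, 'database') AS k
    ON d.DocID = k.[KEY]
ORDER BY k.RANK DESC;
```

CONTAINSTABLE returns a KEY column (the unique key of the matched row) and a RANK column, which is how the accuracy value computed by the FTS query processor reaches the client.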

The Full Text Search engine is divided into two processes - the Filter Daemon process
(msftefd.exe) and the Search process (msftesql.exe). These processes interact with the SQL
Server. The Search process includes the indexer (that creates the full text indexes) and the full
text query processor. The indexer scans through text columns in the database. It can also index
through binary columns, and use iFilters to extract meaningful text from the binary blob (for
example, when a Microsoft Word document is stored as an unstructured binary file in a
database). The iFilters are hosted by the Filter Daemon process. Once the text is extracted, the

Filter Daemon process breaks it up into a sequence of words and hands it over to the indexer.
The indexer filters out noise words, that is, words such as "a" and "and" that occur frequently and are not useful for search. With the remaining words, an inverted index is created, associating each word
with the columns they were found in. SQL Server itself includes a Gatherer component that
monitors changes to tables and invokes the indexer in case of updates.

When a full text query is received by the SQL Server query processor, it is handed over to the
FTS query processor in the Search process. The FTS query processor breaks up the query into
the constituent words, filters out the noise words, and uses an inbuilt thesaurus to find out the
linguistic variants for each word. The words are then queried against the inverted index and a
rank of their accurateness is computed. The results are returned to the client via the SQL Server
process.

SQL Server Analysis Services:

SQL Server Analysis Services adds OLAP and data mining capabilities for SQL Server
databases. The OLAP engine supports MOLAP, ROLAP and HOLAP storage modes for data.
Analysis Services supports the XML for Analysis standard as the underlying communication
protocol. The cube data can be accessed using MDX queries.[17] Data mining specific functionality
is exposed via the DMX query language. Analysis Services includes various algorithms - Decision
trees, clustering algorithm, Naive Bayes algorithm, time series analysis, sequence clustering
algorithm, linear and logistic regression analysis, and neural networks - for use in data mining.

SQL Reporting Services:


SQL Server 2005 Reporting Services is a server-based reporting platform that you can use to
create and manage tabular, matrix, graphical, and free-form reports that contain data from
relational and multidimensional data sources. The reports that you create can be viewed and
managed over a World Wide Web-based connection. Reporting Services includes the following
core components:
• A complete set of tools that you can use to create, manage, and view reports.
• A Report Server component that hosts and processes reports in a variety of formats.
Output formats include HTML, PDF, TIFF, Excel, CSV, and more.
• An API that allows developers to integrate or extend data and report processing in
custom applications, or create custom tools to build and manage reports.
The reports that you build can be based on relational or multidimensional data from SQL Server,
Analysis Services, Oracle, or any Microsoft .NET data provider such as ODBC or OLE DB. You
can create tabular, matrix, and free-form reports. You can also create ad hoc reports that use
predefined models and data sources.
Visually and functionally, the reports that you build in Reporting Services surpass traditional
reporting by including interactive and Web-based features. Some examples of these features
include drill-down reports that enable navigation through layers of data, parameterized reports
that support content filtering at run time, free-form reports that support content in vertical, nested,
and side-by-side layouts, links to Web-based content or resources, and secure, centralized
access to reports over remote or local Web connections.
Although Reporting Services integrates with other Microsoft technologies out-of-the-box,
developers and third-party vendors can build components to support additional report output
formats, delivery formats, authentication models, and data source types. The development and
run-time architecture was purposely created in a modular design to support third-party extension
and integration opportunities.

STARTUP EVENTS OF SQLSERVER 2005

1. Initially a Server process id is created (E.g.: Server process ID 4060).

2. The authentication mode is verified.


3. SQL Server messages are logged in the error log.

4. The time and the server process ID with which the instance was last started are recorded.

5. Registry Start up parameters are verified.

-d C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf


-e C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG;
-l C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf
6. The number of CPUs is checked.

7. Database mirroring is enabled.

8. The master database is started and a checkpoint is created in master (database ID 1).

9. SQL Trace ID 1 is started by login 'sa', and the mssqlsystemresource database is started.

10. The server name is checked, the model database is started, and tempdb is cleared.

11. The server listens on port 1433, and the local connection providers are ready to accept
connections on [\\.\pipe\SQLLocal\MSSQLSERVER] and [\\.\pipe\sql\query].

12. Dedicated administrator connection (DAC) support is established, listening locally on port
1434.

13. Starting up database 'tempdb'.

14. Service Broker manager has started. SQL Server is now ready for client connections.
15. The msdb database is started, followed by the user databases.

16. CHECKDB for database ‘msdb’ and User databases finished without errors.

17. Recovery is complete.

18. 'xpstar90.dll' version '2005.90.3042' is used to execute the extended stored procedure
'xp_enumerrorlogs'.
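The startup sequence above can be observed directly in the SQL Server error log. The following sketch uses sp_readerrorlog and xp_enumerrorlogs, which exist in SQL Server 2005 but are undocumented, so their behavior may vary between builds:

```sql
-- Read the current error log (argument 0 = current log,
-- 1..n = archived logs), which contains the startup messages listed above.
EXEC sp_readerrorlog 0;

-- List the available error log files, as referenced in step 18.
EXEC xp_enumerrorlogs;
```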

ARCHITECTURE

1. QUERY PROCESSING:


The protocol layer receives the request and translates it into a form that the relational engine can
work with, and it also takes the final results of any queries, status messages, or error messages
and translates them into a form the client can understand before sending them back to the client.
The relational engine layer accepts SQL batches and determines what to do with them. For
Transact-SQL queries and programming constructs, it parses, compiles, and optimizes the
request and oversees the process of executing the batch. As the batch is executed, if data is
needed, a request for that data is passed to the storage engine. The storage engine manages all
data access, both through transaction-based commands and bulk operations such as backup,
bulk insert, and certain DBCC (Database Consistency Checker) commands. The SQLOS layer
handles activities that are normally considered to be operating system responsibilities, such as
thread management (scheduling), synchronization primitives, deadlock detection, and memory
management, including the buffer pool.

Protocols

When an application communicates with the SQL Server Database Engine, the application
programming interfaces (APIs) exposed by the protocol layer format the communication using a
Microsoft-defined format called a tabular data stream (TDS) packet. There are Net-Libraries on

both the server and client computers that encapsulate the TDS packet inside a standard
communication protocol, such as TCP/IP or Named Pipes. The following protocols are available:

• Shared Memory The simplest protocol to use, with no configurable settings. Clients using the
Shared Memory protocol can connect only to a SQL Server instance running on the same
computer, so this protocol is not useful for most database activity.
• Named Pipes A protocol developed for local area networks (LANs). A portion of memory is
used by one process to pass information to another process, so that the output of one is the
input of the other.
• TCP/IP The most widely used protocol over the Internet. TCP/IP can communicate across
interconnected networks of computers with diverse hardware architectures and operating
systems. It includes standards for routing network traffic and offers advanced security features.
• Virtual Interface Adapter (VIA) A protocol that works with VIA hardware. This is a specialized
protocol.

SQL Server 2005 also introduces a new concept for defining SQL Server connections: the
connection is represented on the server end by a TDS endpoint. During setup, SQL Server
creates an endpoint for each of the four Net-Library protocols supported by SQL Server, and if
the protocol is enabled, all users have access to it. For disabled protocols, the endpoint still exists
but cannot be used. An additional endpoint is created for the dedicated administrator connection
(DAC), which can be used only by members of the sysadmin fixed server role.
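The endpoints created during setup, and the protocol a given session is actually using, can be inspected from catalog and dynamic management views. This is a sketch; column sets shown are those available in SQL Server 2005:

```sql
-- The TDS endpoints created during setup, one per Net-Library protocol,
-- plus the dedicated admin connection (DAC).
SELECT name, protocol_desc, type_desc, state_desc
FROM sys.endpoints;

-- Which protocol (Shared memory, Named pipe, TCP) the current session uses.
SELECT session_id, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```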

The Relational Engine

The relational engine is also called the query processor. It includes the components of SQL
Server that determine exactly what your query needs to do and the best way to do it. By far the
most complex component of the query processor, and maybe even of the entire SQL Server
product, is the query optimizer, which determines the best execution plan for the queries in the
batch.

The Command Parser


The command parser handles Transact-SQL language events sent to SQL Server. It checks for
proper syntax and translates Transact-SQL commands into an internal format that can be
operated on. This internal format is known as a query tree. If the parser doesn't recognize the
syntax, a syntax error is immediately raised that identifies where the error occurred. However,
non-syntax error messages cannot be explicit about the exact source line that caused the error.
Because only the command parser can access the source of the statement, the statement is no
longer available in source format when the command is actually executed.

The Query Optimizer


The query optimizer takes the query tree from the command parser and prepares it for execution.
Statements that can't be optimized, such as flow-of-control and DDL commands, are compiled
into an internal form. The statements that are optimizable are marked as such and then passed to
the optimizer. The optimizer is mainly concerned with the DML statement SELECT, INSERT,
UPDATE, and DELETE, which can be processed in more than one way, and it is the optimizer's
job to determine which of the many possible ways is the best.
The query optimization and compilation result in an execution plan.
The first step in producing such a plan is to normalize each query, which potentially breaks down
a single query into multiple, fine-grained queries. After the optimizer normalizes a query, it
optimizes it, which means it determines a plan for executing that query. Query optimization is cost
based; the optimizer chooses the plan that it determines would cost the least based on internal
metrics that include estimated memory requirements, CPU utilization, and number of required
I/Os. The optimizer considers the type of statement requested, checks the amount of data in the
various tables affected, looks at the indexes available for each table, and then looks at a

sampling of the data values kept for each index or column referenced in the query. The sampling
of the data values is called distribution statistics.
The optimizer also uses pruning heuristics to ensure that optimizing a query doesn't take longer
than it would take to simply choose a plan and execute it. With a complex query, it could take
much longer to estimate the cost of every conceivable plan than it would to accept a good plan,
even if not the best one, and execute it.
After normalization and optimization are completed, the normalized tree produced by those
processes is compiled into the execution plan, which is actually a data structure. This execution
plan might be considerably more complex than is immediately apparent. In addition to the actual
commands, the execution plan includes all the steps necessary to ensure that constraints are
checked. Steps for calling a trigger are slightly different from those for verifying constraints.
The step that carries out the actual INSERT statement might be just a small part of the total
execution plan necessary to ensure that all actions and constraints associated with adding a row
are carried out.
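The plan the optimizer produces, and the distribution statistics it samples, can both be examined from Transact-SQL. The object and index names below are illustrative:

```sql
-- Display the execution plan chosen by the optimizer as XML,
-- without actually executing the statement.
SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.objects WHERE type = 'U';
GO
SET SHOWPLAN_XML OFF;
GO

-- Distribution statistics kept for an index, which the optimizer
-- consults during costing (hypothetical table and index names).
DBCC SHOW_STATISTICS ('dbo.MyTable', 'PK_MyTable');
```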

The SQL Manager


The SQL manager is responsible for everything related to managing stored procedures and their
plans. It determines when a stored procedure needs recompilation, and it manages the caching of
procedure plans so that other processes can reuse them.
The SQL manager also handles autoparameterization of queries. SQL Server can save and
reuse plans in other ways, but in some situations using a saved plan might not be a good idea.
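Plan caching and reuse by the SQL manager can be observed through a dynamic management view. A minimal sketch, using DMVs available in SQL Server 2005:

```sql
-- Cached plans and how often each has been reused (usecounts),
-- with the statement text recovered from the plan handle.
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```

A high usecounts value for an "Adhoc" or "Prepared" object often indicates autoparameterization at work.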

The Database Manager


The database manager handles access to the metadata needed for query compilation and
optimization, making it clear that none of these separate modules can be run completely
separately from the others. The metadata is stored as data and is managed by the storage
engine, but metadata elements such as the datatypes of columns and the available indexes on a
table must be available during the query compilation and optimization phase, before actual query
execution starts.

The Query Executor


The query executor runs the execution plan that the optimizer produced, acting as a dispatcher
for all the commands in the execution plan. This module steps through each command of the
execution plan until the batch is complete. Most of the commands require interaction with the
storage engine to modify or retrieve data and to manage transactions and locking.

The Storage Engine

The SQL Server storage engine has traditionally been considered to include all the components
involved with the actual processing of data in your database. SQL Server 2005 separates out
some of these components into a module called the SQLOS. In fact, the SQL Server storage
engine team at Microsoft actually encompasses three areas: access methods, transaction
management, and the SQLOS.

2. STORAGE ARCHITECTURE (PHYSICAL DATABASE ARCHITECTURE):

The physical database architecture contains

• Pages and Extents


• Physical Database File and File groups
• Space Allocation and Reuse
• Table and Index Architecture

• Transaction Log Architecture

Pages and Extents


The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a
data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0
to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes
whole data pages.

Extents are a collection of eight physically contiguous pages and are used to efficiently manage
the pages. All pages are stored in extents.

Pages
In SQL Server, the page size is 8 KB. This means SQL Server databases have 128 pages per
megabyte. Each page begins with a 96-byte header that is used to store system information
about the page. This information includes the page number, page type, the amount of free space
on the page, and the allocation unit ID of the object that owns the page.

The following table shows the page types used in the data files of a SQL Server database.
Data: Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max),
varbinary(max), and xml data, when text in row is set to ON.

Index: Index entries.

Text/Image: Large object data types (text, ntext, image, nvarchar(max), varchar(max),
varbinary(max), and xml data), and variable length columns when the data row exceeds 8 KB
(varchar, nvarchar, varbinary, and sql_variant).

Global Allocation Map (GAM) and Shared Global Allocation Map (SGAM): Information about
whether extents are allocated.

Page Free Space (PFS): Information about page allocation and free space available on pages.

Index Allocation Map (IAM): Information about extents used by a table or index per allocation
unit.

Bulk Changed Map: Information about extents modified by bulk operations since the last
BACKUP LOG statement per allocation unit.

Differential Changed Map: Information about extents that have changed since the last
BACKUP DATABASE statement per allocation unit.

Note: Log files do not contain pages; they contain a series of log records.

Data rows are put on the page serially, starting immediately after the header. A row offset table
starts at the end of the page, and each row offset table contains one entry for each row on the
page. Each entry records how far the first byte of the row is from the start of the page. The entries
in the row offset table are in reverse sequence from the sequence of the rows on the page.
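The 96-byte page header and the row offset table can be examined with DBCC PAGE, an undocumented but widely used command; the database name and page number below are illustrative:

```sql
-- Trace flag 3604 redirects DBCC output to the client connection.
DBCC TRACEON (3604);

-- Arguments: database name, file id, page number, print option.
-- Print option 1 dumps the page header, each row, and the
-- OFFSET TABLE at the end of the page.
DBCC PAGE ('MyDB', 1, 1, 1);
```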


Large Row Support


Rows cannot span pages in SQL Server 2005; however, portions of a row may be moved off the
row's page so that the row can actually be very large. The maximum amount of data and
overhead contained in a single row on a page is 8,060 bytes. However, this does
not include the data stored in the Text/Image page type. In SQL Server 2005, this restriction is
relaxed for tables that contain varchar, nvarchar, varbinary, or sql_variant columns. When the
total row size of all fixed and variable columns in a table exceeds the 8,060 byte limitation, SQL
Server dynamically moves one or more variable length columns to pages in the
ROW_OVERFLOW_DATA allocation unit, starting with the column with the largest width. This is
done whenever an insert or update operation increases the total size of the row beyond the 8060
byte limit. When a column is moved to a page in the ROW_OVERFLOW_DATA allocation unit, a
24-byte pointer on the original page in the IN_ROW_DATA allocation unit is maintained. If a
subsequent operation reduces the row size, SQL Server dynamically moves the columns back to
the original data page.
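Tables whose rows have overflowed can be found through the allocation-unit catalog views. A sketch using the SQL Server 2005 catalog views (the join through hobt_id is the documented mapping for in-row and row-overflow allocation units):

```sql
-- Tables with pages in the ROW_OVERFLOW_DATA allocation unit,
-- i.e., tables where variable-length columns have been pushed off-row.
SELECT o.name AS table_name, au.type_desc, au.total_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
JOIN sys.objects AS o ON p.object_id = o.object_id
WHERE au.type_desc = 'ROW_OVERFLOW_DATA';
```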

Extents
Extents are the basic unit in which space is managed. An extent is eight physically contiguous
pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.
To make its space allocation efficient, SQL Server does not allocate whole extents to tables with
small amounts of data. SQL Server has two types of extents:

• Uniform extents are owned by a single object; all eight pages in the extent can only be
used by the owning object.
• Mixed extents are shared by up to eight objects. Each of the eight pages in the extent
can be owned by a different object.

A new table or index is generally allocated pages from mixed extents. When the table or index
grows to the point that it has eight pages, it then switches to use uniform extents for subsequent
allocations. If you create an index on an existing table that has enough rows to generate eight
pages in the index, all allocations to the index are in uniform extents.

Physical Database Files and File groups
SQL Server 2005 maps a database over a set of operating-system files. Data and log information
are never mixed in the same file, and individual files are used only by one database. File groups
are named collections of files and are used to help with data placement and administrative tasks
such as backup and restore operations.

Database Files
SQL Server 2005 databases have three types of files:
• Primary data files
The primary data file is the starting point of the database and points to the other files in
the database. Every database has one primary data file. The recommended file name
extension for primary data files is .mdf.
• Secondary data files
Secondary data files make up all the data files, other than the primary data file. Some
databases may not have any secondary data files, while others have several secondary
data files. The recommended file name extension for secondary data files is .ndf.
• Log files
Log files hold all the log information that is used to recover the database. There must be
at least one log file for each database, although there can be more than one. The
recommended file name extension for log files is .ldf.

SQL Server 2005 does not enforce the .mdf, .ndf, and .ldf file name extensions, but these
extensions help you identify the different kinds of files and their use.
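The three file types, with their logical and physical names, can be listed from the catalog views. A minimal sketch against the SQL Server 2005 catalog:

```sql
-- Files of the current database: type_desc distinguishes
-- ROWS (data) files from LOG files.
SELECT name AS logical_name, physical_name, type_desc, size
FROM sys.database_files;

-- Files of every database on the instance, as recorded in master.
SELECT DB_NAME(database_id) AS database_name, name, physical_name
FROM sys.master_files;
```

The size column is expressed in 8-KB pages, consistent with the page architecture described earlier.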

In SQL Server 2005, the locations of all the files in a database are recorded in the primary file of
the database and in the master database. The Database Engine uses the file location information
from the master database most of the time. However, the database engine uses the file location
information from the primary file to initialize the file location entries in the master database in the
following situations:

• When attaching a database using the CREATE DATABASE statement with either the
FOR ATTACH or FOR ATTACH_REBUILD_LOG options.
• When upgrading from SQL Server version 2000 or version 7.0 to SQL Server 2005.
• When restoring the master database.

Logical and Physical File Names


SQL Server 2005 files have two names:

logical_file_name
The logical_file_name is the name used to refer to the physical file in all Transact-SQL
statements. The logical file name must comply with the rules for SQL Server identifiers and must
be unique among logical file names in the database.

os_file_name
The os_file_name is the name of the physical file including the directory path. It must follow the
rules for the operating system file names.

The following illustration shows examples of the logical file names and the physical file names of
a database created on a default instance of SQL Server 2005:


SQL Server data and log files can be put on either FAT or NTFS file systems. NTFS is
recommended for its security features. Read/write data file groups and log files cannot
be placed on an NTFS compressed file system. Only read-only databases and read-only
secondary file groups can be put on an NTFS compressed file system.

When multiple instances of SQL Server are run on a single computer, each instance receives a
different default directory to hold the files for the databases created in the instance.

Data File Pages


Pages in a SQL Server 2005 data file are numbered sequentially, starting with zero (0) for the first
page in the file. Each file in a database has a unique file ID number. To uniquely identify a page
in a database, both the file ID and the page number are required. The following example shows
the page numbers in a database that has a 4-MB primary data file and a 1-MB secondary data
file.


The first page in each file is a file header page that contains information about the attributes of
the file. Several of the other pages at the start of the file also contain system information, such as
allocation maps. One of the system pages stored in both the primary data file and the first log file
is a database boot page that contains information about the attributes of the database.

File Size
SQL Server 2005 files can grow automatically from their originally specified size. When you
define a file, you can specify a specific growth increment. Every time the file is filled, it increases
its size by the growth increment. If there are multiple files in a file group, they will not auto grow
until all the files are full. Growth then occurs in a round-robin fashion.

Each file can also have a maximum size specified. If a maximum size is not specified, the file can
continue to grow until it has used all available space on the disk. This feature is especially useful
when SQL Server is used as a database embedded in an application where the user does not
have convenient access to a system administrator. The user can let the files auto grow as
required to reduce the administrative burden of monitoring free space in the database and
manually allocating additional space.
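The growth increment and maximum size of an existing file can be changed with ALTER DATABASE. The database and logical file names below are illustrative:

```sql
-- Change the autogrow settings of an existing data file.
ALTER DATABASE MyDB
MODIFY FILE
( NAME = 'MyDB_Primary',   -- logical file name
  MAXSIZE = 20MB,          -- cap on automatic growth
  FILEGROWTH = 2MB );      -- growth increment per autogrow event
```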

Database File groups


Database objects and files can be grouped together in file groups for allocation and
administration purposes. There are two types of file groups:

Primary
The primary file group contains the primary data file and any other files not specifically assigned
to another file group. All pages for the system tables are allocated in the primary file group.

User-defined
User-defined file groups are any file groups that are specified by using the FILEGROUP keyword
in a CREATE DATABASE or ALTER DATABASE statement.

Log files are never part of a file group. Log space is managed separately from data space.
No file can be a member of more than one file group. Tables, indexes, and large object data can
be associated with a specified file group. In this case, all their pages will be allocated in that file
group, or the tables and indexes can be partitioned. The data of partitioned tables and indexes is
divided into units each of which can be placed in a separate file group in a database. For more
information about partitioned tables and indexes, see Partitioned Tables and Indexes.

One file group in each database is designated the default file group. When a table or index is
created without specifying a file group, it is assumed all pages will be allocated from the default
file group. Only one file group at a time can be the default file group. Members of the db_owner

fixed database role can switch the default file group from one file group to another. If no default
file group is specified, the primary file group is the default file group.

File and Filegroup Example


The following example creates a database on an instance of SQL Server. The database has a
primary data file, a user-defined filegroup, and a log file. The primary data file is in the primary
filegroup and the user-defined filegroup has two secondary data files. An ALTER DATABASE
statement makes the user-defined filegroup the default. A table is then created specifying the
user-defined filegroup.

USE master;
GO
-- Create the database with the default data
-- filegroup and a log file. Specify the
-- growth increment and the max size for the
-- primary data file.
CREATE DATABASE MyDB
ON PRIMARY
( NAME='MyDB_Primary',
FILENAME=
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_Prm.mdf',
SIZE=4MB,
MAXSIZE=10MB,
FILEGROWTH=1MB),
FILEGROUP MyDB_FG1
( NAME = 'MyDB_FG1_Dat1',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_FG1_1.ndf',
SIZE = 1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB),
( NAME = 'MyDB_FG1_Dat2',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB_FG1_2.ndf',
SIZE = 1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB)
LOG ON
( NAME='MyDB_log',
FILENAME =
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\data\MyDB.ldf',
SIZE=1MB,
MAXSIZE=10MB,
FILEGROWTH=1MB);
GO
ALTER DATABASE MyDB
MODIFY FILEGROUP MyDB_FG1 DEFAULT;
GO

-- Create a table in the user-defined filegroup.


USE MyDB;
CREATE TABLE MyTable
( cola int PRIMARY KEY,
colb char(8) )
ON MyDB_FG1;
GO

The following illustration summarizes the results of the previous example.
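The placement of MyTable in the user-defined filegroup can be confirmed with a catalog query. This sketch assumes the database and table created in the example above, and limits the join to the heap or clustered index (index_id 0 or 1), which holds the table's data pages:

```sql
USE MyDB;
-- Which filegroup holds MyTable's data pages.
SELECT o.name AS table_name, f.name AS filegroup_name
FROM sys.objects AS o
JOIN sys.indexes AS i ON o.object_id = i.object_id
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
WHERE o.name = 'MyTable'
  AND i.index_id IN (0, 1);
```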

Space Allocation and Reuse


SQL Server 2005 is effective at quickly allocating pages to objects and reusing space that is
made available by deleted rows. These operations are internal to the system and use data
structures that are not visible to users. However, these processes and structures are still
occasionally referenced in SQL Server messages.

This section is an overview of the space allocation algorithms and the data structures. It also
provides users and administrators with the knowledge they require to understand the references
to the terms in the messages generated by SQL Server.

Managing Extent Allocations and Free Space


The SQL Server 2005 data structures that manage extent allocations and track free space have a
relatively simple structure. This has the following benefits:

• The free space information is densely packed, so relatively few pages contain this
information.
This increases speed by reducing the number of disk reads that are required to retrieve
allocation information. This also increases the chance that the allocation pages will
remain in memory and not require more reads.
• Most of the allocation information is not chained together. This simplifies the maintenance
of the allocation information.

Each page allocation or deallocation can be performed quickly. This decreases the contention
between concurrent tasks having to allocate or deallocate pages.

SQL Server uses two types of allocation maps to record the allocation of extents:

• Global Allocation Map (GAM)


GAM pages record what extents have been allocated. Each GAM covers 64,000 extents,
or almost 4 GB of data. The GAM has one bit for each extent in the interval it covers. If
the bit is 1, the extent is free; if the bit is 0, the extent is allocated.
• Shared Global Allocation Map (SGAM)
SGAM pages record which extents are currently being used as mixed extents and also
have at least one unused page. Each SGAM covers 64,000 extents, or almost 4 GB of
data. The SGAM has one bit for each extent in the interval it covers. If the bit is 1, the
extent is being used as a mixed extent and has a free page. If the bit is 0, the extent is
not used as a mixed extent, or it is a mixed extent and all its pages are being used.

Each extent has the following bit patterns set in the GAM and SGAM, based on its current use.

Current use of extent                  GAM bit setting  SGAM bit setting

Free, not being used                          1                0
Uniform extent, or full mixed extent          0                0
Mixed extent with free pages                  0                1

This results in simple extent management algorithms. To allocate a uniform extent, the Database
Engine searches the GAM for a 1 bit and sets it to 0. To find a mixed extent with free pages, the
Database Engine searches the SGAM for a 1 bit. To allocate a mixed extent, the Database
Engine searches the GAM for a 1 bit, sets it to 0, and then also sets the corresponding bit in the
SGAM to 1. To deallocate an extent, the Database Engine makes sure that the GAM bit is set to
1 and the SGAM bit is set to 0. The algorithms that are actually used internally by the Database
Engine are more sophisticated than what is described in this topic, because the Database Engine
distributes data evenly in a database. However, even the real algorithms are simplified by not
having to manage chains of extent allocation information.
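The GAM/SGAM bit logic described above can be sketched as a small in-memory model. The class, bitmap sizes, and search order here are illustrative stand-ins, not SQL Server internals:

```python
# Illustrative model of GAM/SGAM extent allocation.
# GAM bit 1 = extent free; SGAM bit 1 = mixed extent with at least one free page.

class AllocationMaps:
    def __init__(self, extents):
        self.gam = [1] * extents   # 1 = free, 0 = allocated
        self.sgam = [0] * extents  # 1 = mixed extent with free pages

    def allocate_uniform(self):
        # Search the GAM for a 1 bit and set it to 0.
        for i, bit in enumerate(self.gam):
            if bit == 1:
                self.gam[i] = 0
                return i
        raise RuntimeError("no free extents")

    def allocate_mixed(self):
        # Prefer an existing mixed extent with free pages (SGAM bit = 1).
        for i, bit in enumerate(self.sgam):
            if bit == 1:
                return i
        # Otherwise take a free extent and mark it as mixed in the SGAM.
        i = self.allocate_uniform()
        self.sgam[i] = 1
        return i

    def deallocate(self, i):
        # GAM bit back to 1 (free), SGAM bit to 0.
        self.gam[i] = 1
        self.sgam[i] = 0

maps = AllocationMaps(8)
u = maps.allocate_uniform()   # uniform extent: GAM 0, SGAM 0
m = maps.allocate_mixed()     # mixed extent:   GAM 0, SGAM 1
```

Note that deallocation simply restores the GAM bit to 1 and clears the SGAM bit, matching the bit-pattern table above.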

Tracking Free Space


Page Free Space (PFS) pages record the allocation status of each page, whether an individual
page has been allocated, and the amount of free space on each page. The PFS has one byte for
each page, recording whether the page is allocated, and if so, whether it is empty, 1 to 50 percent
full, 51 to 80 percent full, 81 to 95 percent full, or 96 to 100 percent full.
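The PFS fullness buckets described above can be illustrated with a small helper. The string labels are ours for readability; SQL Server actually encodes this in bits of the PFS byte:

```python
def pfs_fullness_category(percent_full):
    """Map a page's fill percentage to the PFS bucket described in the text."""
    if percent_full == 0:
        return "empty"
    if percent_full <= 50:
        return "1-50% full"
    if percent_full <= 80:
        return "51-80% full"
    if percent_full <= 95:
        return "81-95% full"
    return "96-100% full"
```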

After an extent has been allocated to an object, the Database Engine uses the PFS pages to
record which pages in the extent are allocated or free. This information is used when the
Database Engine has to allocate a new page. The amount of free space in a page is only
maintained for heap and Text/Image pages. It is used when the Database Engine has to find a
page with free space available to hold a newly inserted row. Indexes do not require that the page
free space be tracked, because the point at which to insert a new row is set by the index key
values.

A PFS page is the first page after the file header page in a data file (page number 1). It is
followed by a GAM page (page number 2), and then an SGAM page (page number 3). There is
another PFS page approximately 8,000 pages after the first PFS page, another GAM page
64,000 extents after the first GAM page on page 2, and another SGAM page 64,000 extents after
the first SGAM page on page 3. The following illustration shows the sequence of pages used by
the Database Engine to allocate and manage extents.
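Using the approximate intervals from the text, the role of a given page number in a data file can be sketched as follows. The exact internal intervals differ slightly from these round numbers, so treat this as an illustration only:

```python
# Approximate allocation-page layout from the text: PFS at page 1 repeating
# about every 8,000 pages; GAM/SGAM at pages 2 and 3 repeating every
# 64,000 extents (64,000 * 8 pages).

PFS_INTERVAL = 8000             # "approximately 8,000 pages"
GAM_INTERVAL_PAGES = 64000 * 8  # 64,000 extents of 8 pages each

def page_role(page_id):
    if page_id == 0:
        return "file header"
    if page_id % PFS_INTERVAL == 1:
        return "PFS"
    if page_id % GAM_INTERVAL_PAGES == 2:
        return "GAM"
    if page_id % GAM_INTERVAL_PAGES == 3:
        return "SGAM"
    return "data/index/other"
```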

Managing Space Used by Objects


An Index Allocation Map (IAM) page maps the extents in a 4-GB part of a database file used by
an allocation unit. An allocation unit is one of three types:


• IN_ROW_DATA
Holds a partition of a heap or index.
• LOB_DATA
Holds large object (LOB) data types, such as xml, varbinary(max), and varchar(max).
• ROW_OVERFLOW_DATA
Holds variable length data stored in varchar, nvarchar, varbinary, or sql_variant columns
that exceed the 8,060 byte row size limit.

Each partition of a heap or index contains at least an IN_ROW_DATA allocation unit. It may also
contain a LOB_DATA or ROW_OVERFLOW_DATA allocation unit, depending on the heap or
index schema. For more information about allocation units, see Table and Index Organization.

An IAM page covers a 4-GB range in a file, the same range covered by a GAM or SGAM page. If
the allocation unit contains extents from more than one file, or more than one 4-GB range of a
file, there will be multiple IAM pages linked in an IAM chain. Therefore, each allocation unit has at
least one IAM page for each file on which it has extents. There may also be more than one IAM
page on a file, if the range of the extents on the file allocated to the allocation unit exceeds the
range that a single IAM page can record.

IAM pages are allocated as required for each allocation unit and are located randomly in the file.
The system view, sys.system_internals_allocation_units, points to the first IAM page for an
allocation unit. All the IAM pages for that allocation unit are linked in a chain.

The sys.system_internals_allocation_units system view is for internal use only and is subject
to change. Compatibility is not guaranteed.

An IAM page has a header that indicates the starting extent of the range of extents mapped by
the IAM page. The IAM page also has a large bitmap in which each bit represents one extent.
The first bit in the map represents the first extent in the range, the second bit represents the
second extent, and so on. If a bit is 0, the extent it represents is not allocated to the allocation unit
owning the IAM. If the bit is 1, the extent it represents is allocated to the allocation unit owning the
IAM page.

When the Database Engine has to insert a new row and no space is available in the current page,
it uses the IAM and PFS pages to find a page to allocate, or, for a heap or a Text/Image page, a
page with sufficient space to hold the row. The Database Engine uses the IAM pages to find the
extents allocated to the allocation unit. For each extent, the Database Engine searches the PFS
pages to see if there is a page that can be used. Each IAM and PFS page covers lots of data
pages, so there are few IAM and PFS pages in a database. This means that the IAM and PFS
pages are generally in memory in the SQL Server buffer pool, so they can be searched quickly.
For indexes, the insertion point of a new row is set by the index key. In this case, the search
process previously described does not occur.

The Database Engine allocates a new extent to an allocation unit only when it cannot quickly find
a page in an existing extent with sufficient space to hold the row being inserted. The Database
Engine allocates extents from those available in the filegroup using a proportional allocation
algorithm. If a filegroup has two files and one has twice the free space of the other, two pages
will be allocated from the file with the larger free space for every one page allocated from the
other file. This means that every file in a filegroup should have a similar percentage of space
used.
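The proportional allocation described above can be sketched as a simple weighting over each file's free space. The file names and the rounding are illustrative; SQL Server's real proportional-fill algorithm works incrementally, not in one batch:

```python
# Sketch of proportional allocation: each file receives new pages in
# proportion to its share of the filegroup's total free space.

def proportional_allocation(free_space, pages_to_allocate):
    """free_space: dict of file name -> free pages. Returns an allocation plan."""
    total = sum(free_space.values())
    return {name: round(pages_to_allocate * free / total)
            for name, free in free_space.items()}

# File1 has twice the free space of File2, so it gets two pages for every
# one page allocated from File2.
plan = proportional_allocation({"File1": 200, "File2": 100}, 9)
```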

Tracking Modified Extents


SQL Server 2005 uses two internal data structures to track extents modified by bulk copy
operations and extents modified since the last full backup. These data structures greatly speed
up differential backups. They also speed up the logging of bulk copy operations when a database
is using the bulk-logged recovery model. Like the Global Allocation Map (GAM) and Shared
Global Allocation Map (SGAM) pages, these structures are bitmaps in which each bit represents
a single extent.

• Differential Changed Map (DCM):


This tracks the extents that have changed since the last BACKUP DATABASE statement.
If the bit for an extent is 1, the extent has been modified since the last BACKUP
DATABASE statement. If the bit is 0, the extent has not been modified.

Differential backups read just the DCM pages to determine which extents have been modified.
This greatly reduces the number of pages that a differential backup must scan. The length of time
that a differential backup runs is proportional to the number of extents modified since the last
BACKUP DATABASE statement and not the overall size of the database.
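A minimal model of the DCM behavior described above — one bit per extent, read by differential backups and reset by a full backup — might look like this. The class is illustrative, not the on-disk structure:

```python
# Illustrative model of the Differential Changed Map: a bit is set when an
# extent is modified and cleared only by a full BACKUP DATABASE.

class DifferentialChangedMap:
    def __init__(self, extents):
        self.bits = [0] * extents

    def modify_extent(self, i):
        self.bits[i] = 1

    def differential_backup(self):
        # A differential backup reads only the extents whose DCM bit is 1,
        # instead of scanning the whole database.
        return [i for i, bit in enumerate(self.bits) if bit == 1]

    def full_backup(self):
        # BACKUP DATABASE resets the map; later changes start a new interval.
        self.bits = [0] * len(self.bits)

dcm = DifferentialChangedMap(10)
dcm.modify_extent(2)
dcm.modify_extent(7)
```

This mirrors why a differential backup's run time tracks the number of modified extents rather than total database size.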

• Bulk Changed Map (BCM)


This tracks the extents that have been modified by bulk logged operations since the last
BACKUP LOG statement. If the bit for an extent is 1, the extent has been modified by a
bulk logged operation after the last BACKUP LOG statement. If the bit is 0, the extent has
not been modified by bulk logged operations.

Although BCM pages appear in all databases, they are only relevant when the database is using
the bulk-logged recovery model. In this recovery model, when a BACKUP LOG is performed, the
backup process scans the BCMs for extents that have been modified. It then includes those
extents in the log backup.

This lets the bulk logged operations be recovered if the database is restored from a database
backup and a sequence of transaction log backups. BCM pages are not relevant in a database
that is using the simple recovery model, because no bulk logged operations are logged. They are
not relevant in a database that is using the full recovery model, because that recovery model
treats bulk logged operations as fully logged operations.

The interval between DCM pages and BCM pages is the same as the interval between GAM and
SGAM pages: 64,000 extents. The DCM and BCM pages are located after the GAM and SGAM
pages in a physical file.


TABLES AND INDEXES:

Table and Index Organization


The following illustration shows the organization of a table. A table is contained in one or more
partitions and each partition contains data rows in either a heap or a clustered index structure.
The pages of the heap or clustered index are managed in one or more allocation units, depending
on the column types in the data rows.

Partitions
In SQL Server 2005, table and index pages are contained in one or more partitions. A partition is
a user-defined unit of data organization. By default, a table or index has only one partition that
contains all the table or index pages. The partition resides in a single filegroup. A table or index
with a single partition is equivalent to the organizational structure of tables and indexes in earlier
versions of SQL Server.

When a table or index uses multiple partitions, the data is partitioned horizontally so that groups
of rows are mapped into individual partitions, based on a specified column. The partitions can be
put on one or more filegroups in the database. The table or index is treated as a single logical
entity when queries or updates are performed on the data.

To view the partitions used by a table or index, use the sys.partitions (Transact-SQL) catalog
view.

Clustered Tables, Heaps, and Indexes


SQL Server 2005 tables use one of two methods to organize their data pages within a partition:

• Clustered tables are tables that have a clustered index.


The data rows are stored in order based on the clustered index key. The clustered index
is implemented as a B-tree index structure that supports fast retrieval of the rows, based
on their clustered index key values. The pages in each level of the index, including the
data pages in the leaf level, are linked in a doubly-linked list. However, navigation from
one level to another is performed by using key values. For more information, see
Clustered Index Structures.
• Heaps are tables that have no clustered index.
The data rows are not stored in any particular order, and there is no particular order to
the sequence of the data pages. The data pages are not linked in a linked list.

Indexed views have the same storage structure as clustered tables.


When a heap or a clustered table has multiple partitions, each partition has a heap or B-tree
structure that contains the group of rows for that specific partition. For example, if a clustered
table has four partitions, there are four B-trees; one in each partition.

Nonclustered Indexes
Nonclustered indexes have a B-tree index structure similar to the one in clustered indexes. The
difference is that nonclustered indexes do not affect the order of the data rows. The leaf level
contains index rows. Each index row contains the nonclustered key value, a row locator and any
included, or nonkey, columns. The locator points to the data row that has the key value.

XML Indexes
One primary and several secondary XML indexes can be created on each xml column in the
table. An XML index is a shredded and persisted representation of the XML binary large objects
(BLOBs) in the xml data type column. XML indexes are stored as internal tables. To view
information about xml indexes, use the sys.xml_indexes or sys.internal_tables catalog views.

Allocation Units
An allocation unit is a collection of pages within a heap or B-tree used to manage data based on
their page type. The following table lists the types of allocation units used to manage data in
tables and indexes.

Allocation unit type Is used to manage


IN_ROW_DATA Data or index rows that contain all data, except large object (LOB)
data.
Pages are of type Data or Index.
LOB_DATA Large object data stored in one or more of these data types: text,
ntext, image, xml, varchar(max), nvarchar(max), varbinary(max), or
CLR user-defined types (CLR UDT).
Pages are of type Text/Image.
ROW_OVERFLOW_DATA Variable length data stored in varchar, nvarchar, varbinary, or
sql_variant columns that exceed the 8,060 byte row size limit.
Pages are of type Data.

A heap or B-tree can have only one allocation unit of each type in a specific partition. To view the
table or index allocation unit information, use the sys.allocation_units catalog view.

IN_ROW_DATA Allocation Unit


For every partition used by a table (heap or clustered table), index, or indexed view, there is one
IN_ROW_DATA allocation unit that is made up of a collection of data pages. This allocation unit
also contains additional collections of pages to implement each nonclustered and XML index
defined for the table or view. The page collections in each partition of a table, index, or indexed
view are anchored by page pointers in the sys.system_internals_allocation_units system view.

The sys.system_internals_allocation_units system view is for internal use only and is subject
to change. Compatibility is not guaranteed.

Each table, index, and indexed view partition has a row in sys.system_internals_allocation_units
uniquely identified by a container ID (container_id). The container ID has a one-to-one mapping
to the partition_id in the sys.partitions catalog view that maintains the relationship between the
table, index, or the indexed view data stored in a partition and the allocation units used to
manage the data within the partition.

The allocation of pages to a table, index, or an indexed view partition is managed by a chain of
IAM pages. The column first_iam_page in sys.system_internals_allocation_units points to the first
IAM page in the chain of IAM pages managing the space allocated to the table, index, or the
indexed view in the IN_ROW_DATA allocation unit.

sys.partitions returns a row for each partition in a table or index.

• A heap has a row in sys.partitions with index_id = 0.


The first_iam_page column in sys.system_internals_allocation_units points to the
IAM chain for the collection of heap data pages in the specified partition. The server uses
the IAM pages to find the pages in the data page collection, because they are not linked.
• A clustered index on a table or a view has a row in sys.partitions with index_id = 1.
The root_page column in sys.system_internals_allocation_units points to the top of
the clustered index B-tree in the specified partition. The server uses the index B-tree to
find the data pages in the partition.
• Each nonclustered index created for a table or a view has a row in sys.partitions with
index_id > 1.
The root_page column in sys.system_internals_allocation_units points to the top of
the nonclustered index B-tree in the specified partition.
• Each table that has at least one LOB column also has a row in sys.partitions with
index_id > 250.
The first_iam_page column points to the chain of IAM pages that manage the pages in
the LOB_DATA allocation unit.

ROW_OVERFLOW_DATA Allocation Unit


For every partition used by a table (heap or clustered table), index, or indexed view, there is one
ROW_OVERFLOW_DATA allocation unit. This allocation unit contains zero (0) pages until a data
row with variable length columns (varchar, nvarchar, varbinary, or sql_variant) in the
IN_ROW_DATA allocation unit exceeds the 8 KB row size limit. When the size limitation is
reached, SQL Server moves the column with the largest width from that row to a page in the
ROW_OVERFLOW_DATA allocation unit. A 24-byte pointer to this off-row data is maintained on
the original page.
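The row-overflow behavior described above can be sketched by treating a row as a map of column names to byte widths. The loop and names are illustrative; SQL Server's actual column-selection rules are more involved:

```python
# Sketch of ROW_OVERFLOW_DATA: when a row exceeds the in-row limit, the
# widest variable-length column moves off-row and a 24-byte pointer remains.

ROW_LIMIT = 8060     # in-row byte limit from the text
POINTER_SIZE = 24    # off-row pointer kept on the original page

def push_row_overflow(columns):
    """columns: dict of column name -> byte width. Returns (in_row, off_row)."""
    in_row = dict(columns)
    off_row = {}
    while sum(in_row.values()) > ROW_LIMIT:
        widest = max(in_row, key=in_row.get)
        if in_row[widest] <= POINTER_SIZE:
            break  # nothing left worth moving off-row
        off_row[widest] = in_row[widest]
        in_row[widest] = POINTER_SIZE  # pointer replaces the column in-row
    return in_row, off_row

# 4000 + 5000 + 100 = 9100 bytes > 8060, so the widest column moves off-row.
in_row, off_row = push_row_overflow({"a": 4000, "b": 5000, "c": 100})
```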

Text/Image pages in the ROW_OVERFLOW_DATA allocation unit are managed in the same way
pages in the LOB_DATA allocation unit are managed. That is, the Text/Image pages are
managed by a chain of IAM pages.

LOB_DATA Allocation Unit


When a table or index has one or more LOB data types, one LOB_DATA allocation unit per
partition is allocated to manage the storage of that data. The LOB data types include text, ntext,
image, xml, varchar(max), nvarchar(max), varbinary(max), and CLR user-defined types.

Partition and Allocation Unit Example


The following example returns partition and allocation unit data for two tables: DatabaseLog, a
heap with LOB data and no nonclustered indexes, and Currency, a clustered table without LOB
data and one nonclustered index. Both tables have a single partition.

USE AdventureWorks;
GO
SELECT o.name AS table_name, p.index_id, i.name AS index_name, au.type_desc AS
allocation_type, au.data_pages, partition_number
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.partition_id
JOIN sys.objects AS o ON p.object_id = o.object_id
JOIN sys.indexes AS i ON p.index_id = i.index_id AND i.object_id = p.object_id
WHERE o.name = N'DatabaseLog' OR o.name = N'Currency'
ORDER BY o.name, p.index_id;

Here is the result set. Notice that the DatabaseLog table uses all three allocation unit types,
because it contains both data and Text/Image page types. The Currency table does not have
LOB data, but does have the allocation unit required to manage data pages. If the Currency table
is later modified to include a LOB data type column, a LOB_DATA allocation unit is created to
manage that data.

table_name index_id index_name allocation_type data_pages partition_number


----------- -------- ----------------------- --------------- ----------- ------------
Currency 1 PK_Currency_CurrencyCode IN_ROW_DATA 1 1
Currency 3 AK_Currency_Name IN_ROW_DATA 1 1
DatabaseLog 0 NULL IN_ROW_DATA 160 1
DatabaseLog 0 NULL ROW_OVERFLOW_DATA 0 1
DatabaseLog 0 NULL LOB_DATA 49 1
(5 row(s) affected)

Heap Structures
A heap is a table without a clustered index. Heaps have one row in sys.partitions, with index_id =
0 for each partition used by the heap. By default, a heap has a single partition. When a heap has
multiple partitions, each partition has a heap structure that contains the data for that specific
partition. For example, if a heap has four partitions, there are four heap structures; one in each
partition.

Depending on the data types in the heap, each heap structure will have one or more allocation
units to store and manage the data for a specific partition. At a minimum, each heap will have one
IN_ROW_DATA allocation unit per partition. The heap will also have one LOB_DATA allocation
unit per partition, if it contains large object (LOB) columns. It will also have one
ROW_OVERFLOW_DATA allocation unit per partition, if it contains variable length columns that
exceed the 8,060 byte row size limit.

The column first_iam_page in the sys.system_internals_allocation_units system view points to


the first IAM page in the chain of IAM pages that manage the space allocated to the heap in a
specific partition. SQL Server 2005 uses the IAM pages to move through the heap. The data
pages and the rows within them are not in any specific order and are not linked. The only logical
connection between data pages is the information recorded in the IAM pages.

The sys.system_internals_allocation_units system view is for internal use only and is subject
to change. Compatibility is not guaranteed.

Table scans or serial reads of a heap can be performed by scanning the IAM pages to find the
extents that are holding pages for the heap. Because the IAM represents extents in the same
order that they exist in the data files, this means that serial heap scans progress sequentially
through each file. Using the IAM pages to set the scan sequence also means that rows from the
heap are not typically returned in the order in which they were inserted.

The following illustration shows how the SQL Server Database Engine uses IAM pages to retrieve
data rows in a single partition heap.


Clustered Index Structures


In SQL Server, indexes are organized as B-trees. Each page in an index B-tree is called an index
node. The top node of the B-tree is called the root node. The bottom level of nodes in the index is
called the leaf nodes. Any index levels between the root and the leaf nodes are collectively known
as intermediate levels. In a clustered index, the leaf nodes contain the data pages of the
underlying table. The root and intermediate-level nodes contain index pages holding index rows.
Each index row contains a key value and a pointer to either an intermediate-level page in the
B-tree or a data row in the leaf level of the index. The pages in each level of the index are linked
in a doubly-linked list.
Clustered indexes have one row in sys.partitions, with index_id = 1 for each partition used by the
index. By default, a clustered index has a single partition. When a clustered index has multiple
partitions, each partition has a B-tree structure that contains the data for that specific partition.
For example, if a clustered index has four partitions, there are four B-tree structures; one in each
partition.
Depending on the data types in the clustered index, each clustered index structure will have one
or more allocation units in which to store and manage the data for a specific partition. At a
minimum, each clustered index will have one IN_ROW_DATA allocation unit per partition. The
clustered index will also have one LOB_DATA allocation unit per partition if it contains large
object (LOB) columns. It will also have one ROW_OVERFLOW_DATA allocation unit per partition
if it contains variable length columns that exceed the 8,060 byte row size limit. For more
information about allocation units, see Table and Index Organization.

The pages in the data chain and the rows in them are ordered on the value of the clustered index
key. All inserts are made at the point where the key value in the inserted row fits in the ordering
sequence among existing rows. The page collections for the B-tree are anchored by page
pointers in the sys.system_internals_allocation_units system view.

The sys.system_internals_allocation_units system view is for internal use only and is subject
to change. Compatibility is not guaranteed.

For a clustered index, the root_page column in sys.system_internals_allocation_units points to


the top of the clustered index for a specific partition. SQL Server moves down the index to find
the row corresponding to a clustered index key. To find a range of keys, SQL Server moves
through the index to find the starting key value in the range and then scans through the data
pages using the previous or next pointers. To find the first page in the chain of data pages, SQL
Server follows the leftmost pointers from the root node of the index.
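The key lookup and range scan described above can be sketched with a linked chain of leaf pages. These structures are simplified stand-ins for the B-tree (the list of first keys plays the role of the root and intermediate levels), not the real page format:

```python
import bisect

class LeafPage:
    def __init__(self, keys):
        self.keys = keys   # rows stored in clustered-key order
        self.next = None   # next-page pointer along the leaf level

def build_leaf_chain(pages):
    leaves = [LeafPage(p) for p in pages]
    for a, b in zip(leaves, leaves[1:]):
        a.next = b
    return leaves

def range_scan(leaves, low, high):
    # "Descend" to the leaf holding the starting key value, then follow the
    # next-page pointers through the doubly linked leaf level.
    first_keys = [p.keys[0] for p in leaves]
    page = leaves[max(0, bisect.bisect_right(first_keys, low) - 1)]
    out = []
    while page is not None and page.keys[0] <= high:
        out.extend(k for k in page.keys if low <= k <= high)
        page = page.next
    return out

leaves = build_leaf_chain([[1, 3, 5], [7, 9, 11], [13, 15]])
rows = range_scan(leaves, 4, 10)
```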

This illustration shows the structure of a clustered index in a single partition.


Nonclustered Index Structures


Nonclustered indexes have the same B-tree structure as clustered indexes, except for the
following significant differences:
• The data rows of the underlying table are not sorted and stored in order based on their
nonclustered keys.
• The leaf layer of a nonclustered index is made up of index pages instead of data pages.

Nonclustered indexes can be defined on a table or view with a clustered index or a heap. Each
index row in the nonclustered index contains the nonclustered key value and a row locator. This
locator points to the data row in the clustered index or heap having the key value.

The row locators in nonclustered index rows are either a pointer to a row or are a clustered index
key for a row, as described in the following:

• If the table is a heap, which means it does not have a clustered index, the row locator is a
pointer to the row. The pointer is built from the file identifier (ID), page number, and
number of the row on the page. The whole pointer is known as a Row ID (RID).
• If the table has a clustered index, or the index is on an indexed view, the row locator is
the clustered index key for the row. If the clustered index is not a unique index, SQL
Server 2005 makes any duplicate keys unique by adding an internally generated value
called a uniqueifier. This four-byte value is not visible to users. It is only added when
required to make the clustered key unique for use in nonclustered indexes. SQL Server
retrieves the data row by searching the clustered index using the clustered index key
stored in the leaf row of the nonclustered index.
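The uniqueifier behavior above can be modeled as a per-key counter. Here 0 stands for "no uniqueifier needed"; the real value is an internal four-byte field that SQL Server adds only to duplicate key values:

```python
# Sketch of uniqueifiers: duplicate clustered-key values get an internally
# generated counter so every row locator is unique.

def add_uniqueifiers(keys):
    """Return (key, uniqueifier) pairs; 0 marks the first occurrence."""
    seen = {}
    out = []
    for k in keys:
        n = seen.get(k, 0)
        out.append((k, n))
        seen[k] = n + 1
    return out

locators = add_uniqueifiers(["Smith", "Jones", "Smith", "Smith"])
```

Because each (key, uniqueifier) pair is unique, a nonclustered index can use it as an unambiguous row locator.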


Nonclustered indexes have one row in sys.partitions with index_id >1 for each partition used by
the index. By default, a nonclustered index has a single partition. When a nonclustered index has
multiple partitions, each partition has a B-tree structure that contains the index rows for that
specific partition. For example, if a nonclustered index has four partitions, there are four B-tree
structures, with one in each partition.

Depending on the data types in the nonclustered index, each nonclustered index structure will
have one or more allocation units in which to store and manage the data for a specific partition. At
a minimum, each nonclustered index will have one IN_ROW_DATA allocation unit per partition
that stores the index B-tree pages. The nonclustered index will also have one LOB_DATA
allocation unit per partition if it contains large object (LOB) columns. Additionally, it will have one
ROW_OVERFLOW_DATA allocation unit per partition if it contains variable length columns that
exceed the 8,060 byte row size limit. For more information about allocation units, see Table and
Index Organization. The page collections for the B-tree are anchored by root_page pointers in the
sys.system_internals_allocation_units system view.

The sys.system_internals_allocation_units system view is for internal use only and is subject
to change. Compatibility is not guaranteed.

Included Column Indexes


In SQL Server 2005, the functionality of nonclustered indexes can be extended by adding
included columns, called nonkey columns, to the leaf level of the index. While the key columns
are stored at all levels of the nonclustered index, nonkey columns are stored only at the leaf level.

Transaction Log Architecture:

Every SQL Server 2005 database has a transaction log that records all the transactions and
database modifications made by each transaction. The transaction log is a critical component of
any database. This section contains the architectural information required to understand how the
transaction log is used to guarantee the data integrity of the database and how it is used for data
recovery.

Transaction Log Fundamentals


Every SQL Server 2005 database has a transaction log that records all transactions and the
database modifications made by each transaction. The transaction log is a critical component of
the database and, in the case of a system failure, can be the only source of recent data. It should
never be deleted or moved unless the consequences of doing that are fully understood.

Operations Supported by the Transaction Log


The transaction log supports the following operations:

• Recovery of individual transactions


If an application issues a ROLLBACK statement, or if the Database Engine detects an
error such as the loss of communication with a client, the log records are used to roll
back the modifications made by an incomplete transaction.
• Recovery of all incomplete transactions when SQL Server is started
If a server that is running SQL Server fails, the databases may be left in a state where
some modifications were never written from the buffer cache to the data files, and there
may be some modifications from incomplete transactions in the data files. When an
instance of SQL Server is started, it runs a recovery of each database. Every modification
recorded in the log that may not have been written to the data files is rolled forward.
Every incomplete transaction found in the transaction log is then rolled back to make sure
the integrity of the database is preserved.
• Rolling a restored database, file, filegroup, or page forward to the point of failure
After a hardware loss or disk failure affecting the database files, you can restore the
database to the point of failure. You first restore the last full backup and the last full
differential backup, and then restore the subsequent sequence of the transaction log
backups to the point of failure. As you restore each log backup, the Database Engine
reapplies all the modifications recorded in the log to roll forward all the transactions.
When the last log backup is restored, the Database Engine then uses the log information
to roll back all transactions that were not complete at that point.
• Supporting transactional replication
The Log Reader Agent monitors the transaction log of each database configured for
transactional replication and copies the transactions marked for replication from the
transaction log into the distribution database. For more information, see How
Transactional Replication Works.
• Supporting standby server solutions
The standby-server solutions, database mirroring and log shipping, rely heavily on the
transaction log. In a log shipping scenario, the primary server sends the active
transaction log of the primary database to one or more destinations. Each secondary
server restores the log to its local secondary database. For more information, see
Understanding Log Shipping.

In a database mirroring scenario, every update to a database, the principal database, is
immediately reproduced in a separate, full copy of the database, the mirror database. The
principal server instance sends each log record immediately to the mirror server instance, which
applies the incoming log records to the mirror database, continually rolling it forward. For more
information, see Overview of Database Mirroring.

Transaction Log Characteristics

Following are the characteristics of the SQL Server Database Engine transaction log:

• The transaction log is implemented as a separate file or set of files in the database. The
log cache is managed separately from the buffer cache for data pages, which results in
simple, fast, and robust code within the database engine.
• The format of log records and pages is not constrained to follow the format of data pages.
• The transaction log can be implemented in several files. The files can be defined to
expand automatically by setting the FILEGROWTH value for the log. This reduces the
potential of running out of space in the transaction log, while at the same time reducing
administrative overhead. For more information, see ALTER DATABASE (Transact-SQL).
• The mechanism to reuse the space within the log files is quick and has minimal effect on
transaction throughput.
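As a sketch of the FILEGROWTH setting mentioned above, the following sets an automatic growth increment and a cap on a log file (the database name "Sales" and logical file name "Sales_log" are hypothetical):

```sql
-- Hypothetical database "Sales" whose log file has the logical name "Sales_log".
-- Let the log file grow automatically in 64 MB steps, up to a 2 GB cap.
ALTER DATABASE Sales
MODIFY FILE
(
    NAME = Sales_log,
    FILEGROWTH = 64MB,
    MAXSIZE = 2GB
);
```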

Transaction Log Logical Architecture


The SQL Server 2005 transaction log operates logically as if the transaction log is a string of log
records. Each log record is identified by a log sequence number (LSN). Each new log record is
written to the logical end of the log with an LSN that is higher than the LSN of the record before it.

Log records are stored in a serial sequence as they are created. Each log record contains the ID
of the transaction that it belongs to. For each transaction, all log records associated with the
transaction are individually linked in a chain using backward pointers that speed the rollback of
the transaction.

Log records for data modifications record either the logical operation performed or they record the
before and after images of the modified data. The before image is a copy of the data before the
operation is performed; the after image is a copy of the data after the operation has been
performed.
The steps to recover an operation depend on the type of log record:

• Logical operation logged


• To roll the logical operation forward, the operation is performed again.
• To roll the logical operation back, the reverse logical operation is performed.

• Before and after image logged


• To roll the operation forward, the after image is applied.
• To roll the operation back, the before image is applied.

Many types of operations are recorded in the transaction log. These operations include:

• The start and end of each transaction.


• Every data modification (insert, update, or delete). This includes changes to system
tables made by system stored procedures or data definition language (DDL) statements.
• Every extent and page allocation or deallocation.
• Creating or dropping a table or index.

Rollback operations are also logged. Each transaction reserves space on the transaction log to
make sure that enough log space exists to support a rollback that is caused by either an explicit
rollback statement or if an error is encountered. The amount of space reserved depends on the
operations performed in the transaction, but generally is equal to the amount of space used to log
each operation. This reserved space is freed when the transaction is completed.

The section of the log file from the first log record that must be present for a successful database-
wide rollback to the last-written log record is called the active part of the log, or the active log.
This is the section of the log required to do a full recovery of the database. No part of the active
log can ever be truncated.


Transaction Log Physical Architecture


The transaction log in a database maps over one or more physical files. Conceptually, the log file
is a string of log records. Physically, the sequence of log records is stored efficiently in the set of
physical files that implement the transaction log.

The SQL Server Database Engine divides each physical log file internally into a number of virtual
log files. Virtual log files have no fixed size, and there is no fixed number of virtual log files for a
physical log file. The database engine chooses the size of the virtual log files dynamically while it
is creating or extending log files. The Database Engine tries to maintain a small number of virtual
files. The size of the virtual files after a log file has been extended is the sum of the size of the
existing log and the size of the new file increment. The size or number of virtual log files cannot
be configured or set by administrators.

The only time virtual log files affect system performance is if the log files are defined by small size
and growth_increment values. If these log files grow to a large size because of many small
increments, they will have lots of virtual log files. This can slow down database startup and also
log backup and restore operations. We recommend that you assign log files a size value close to
the final size required, and also have a relatively large growth_increment value.

The transaction log is a wrap-around file. For example, consider a database with one physical log
file divided into four virtual log files. When the database is created, the logical log file begins at
the start of the physical log file. New log records are added at the end of the logical log and
expand toward the end of the physical log. Log records in the virtual logs that appear in front of
the minimum recovery log sequence number (MinLSN) are deleted, as truncation operations
occur. The transaction log in the example database would look similar to the one in the following
illustration.

When the end of the logical log reaches the end of the physical log file, the new log records wrap
around to the start of the physical log file.

This cycle repeats endlessly, as long as the end of the logical log never reaches the beginning of
the logical log. If the old log records are truncated frequently enough to always leave sufficient
room for all the new log records created through the next checkpoint, the log never fills. However,
if the end of the logical log does reach the start of the logical log, one of two things occurs:

• If the FILEGROWTH setting is enabled for the log and space is available on the disk, the
file is extended by the amount specified in growth_increment and the new log records are
added to the extension. For more information about the FILEGROWTH setting, see
ALTER DATABASE (Transact-SQL).
• If the FILEGROWTH setting is not enabled, or the disk that is holding the log file has less
free space than the amount specified in growth_increment, an 9002 error is generated.

If the log contains multiple physical log files, the logical log will move through all the physical log
files before it wraps back to the start of the first physical log file.

Write-Ahead Transaction Log


SQL Server 2005 uses a write-ahead log (WAL). A write-ahead log guarantees that no data
modifications are written to disk before the associated log record is written to disk. This maintains
the ACID properties for a transaction.

To understand how the write-ahead log works, it is important for you to know how modified data is
written to disk. SQL Server maintains a buffer cache into which it reads data pages when data
must be retrieved. Data modifications are not made directly to disk, but are made to the copy of
the page in the buffer cache. The modification is not written to disk until a checkpoint occurs in
the database, or the modification must be written to disk so the buffer can be used to hold a new
page. Writing a modified data page from the buffer cache to disk is called flushing the page. A
page modified in the cache, but not yet written to disk is called a dirty page.

At the time a modification is made to a page in the buffer, a log record is built in the log cache that
records the modification. This log record must be written to disk before the associated dirty page
is flushed from the buffer cache to disk. If the dirty page is flushed before the log record is written,
the dirty page will create a modification on the disk that cannot be rolled back if the server fails
before the log record is written to disk. SQL Server has logic that prevents a dirty page from being
flushed before the associated log record is written. Log records are written to disk when the
transactions are committed.

Checkpoints and the Active Portion of the Log


Checkpoints flush dirty data pages from the buffer cache of the current database to disk. This
minimizes the active portion of the log that must be processed during a full recovery of a
database. During a full recovery, two types of actions are performed:

• The log records of modifications not flushed to disk before the system stopped are rolled
forward.
• All modifications associated with incomplete transactions, such as transactions for which
there is no COMMIT or ROLLBACK log record, are rolled back.

Checkpoint Operation
A checkpoint performs the following processes in the current database:

• Writes a record to the log file marking the start of the checkpoint.
• Stores information recorded for the checkpoint in a chain of checkpoint log records.

One piece of information recorded in the checkpoint records is the LSN of the first log record that
must be present for a successful database-wide rollback. This LSN is called the Minimum
Recovery LSN (MinLSN) and is the minimum of the:

• LSN of the start of the checkpoint


• LSN of the start of the oldest active transaction
• LSN of the start of the oldest replication transaction that has not yet been delivered to the
distribution database

Another piece of information recorded in the checkpoint records is a list of all the active
transactions that have modified the database.

• Marks for reuse the space that precedes the MinLSN, if the database uses the simple
recovery model.
• Writes all dirty log and data pages to disk.
• Writes a record marking the end of the checkpoint to the log file.
• Writes the LSN of the start of this chain to the database boot page.

Activities That Cause a Checkpoint


Checkpoints occur in the following situations:

• A CHECKPOINT statement is explicitly executed. A checkpoint occurs in the current
database for the connection.
• A minimally logged operation is performed in the database; for example, a bulk-copy
operation is performed on a database that is using the Bulk-Logged recovery model.
• Database files have been added or removed by using ALTER DATABASE.
• The database recovery model is changed to simple. A checkpoint occurs as part of the
log truncation process that happens during this operation.
• An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the
SQL Server (MSSQLSERVER) service. Either will checkpoint each database in the
instance of SQL Server.
• An instance of SQL Server periodically generates automatic checkpoints in each
database to reduce the time that the instance would take to recover the database.
• A database backup is taken.
• An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is
ON and the last user connection to the database is closed, or a database option change
is made that requires a restart of the database.

Automatic Checkpoints
The SQL Server Database Engine generates automatic checkpoints. The interval between
automatic checkpoints is based on the amount of log space used and the time elapsed since the
last checkpoint. The time interval between automatic checkpoints can be highly variable and long,
if few modifications are made in the database. Automatic checkpoints can also occur frequently if
lots of data is modified.

The interval between automatic checkpoints is calculated for all the databases on a server
instance from the recovery interval server configuration option. This option specifies the
maximum time the Database Engine should use to recover a database during a system restart.
The Database Engine estimates how many log records it can process in the recovery interval
during a recovery operation.
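The recovery interval option described above is set per instance with sp_configure; the 5-minute value below is only an illustration:

```sql
-- "recovery interval" is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Ask the Database Engine to target roughly a 5-minute recovery time.
EXEC sp_configure 'recovery interval', 5;
RECONFIGURE;
```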

The interval between automatic checkpoints also depends on the recovery model:

• If the database is using either the full or bulk-logged recovery model, an automatic
checkpoint is generated whenever the number of log records reaches the number the
database engine estimates it can process during the time specified in the recovery
interval option.
• If the database is using the simple recovery model, an automatic checkpoint is generated
whenever the first of the following occurs:
• The log becomes 70 percent full.
• The number of log records reaches the number the Database Engine estimates it
can process during the time specified in the recovery interval option.

Automatic checkpoints truncate the unused section of the transaction log if the database is using
the simple recovery model. However, if the database is using the full or bulk-logged recovery
models, the log is not truncated by automatic checkpoints.

The CHECKPOINT statement now provides an optional checkpoint_duration argument that
specifies the requested period of time, in seconds, for checkpoints to finish.
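For example, the following asks for a manual checkpoint that the Database Engine should try to complete within 10 seconds:

```sql
-- Issue a manual checkpoint in the current database; the optional
-- checkpoint_duration argument (new in SQL Server 2005) requests
-- that it finish within about 10 seconds.
CHECKPOINT 10;
```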

Active Log
The section of the log file from the MinLSN to the last-written log record is called the active
portion of the log, or the active log. This is the section of the log required to do a full recovery of
the database. No part of the active log can ever be truncated. All log records must be truncated
from the parts of the log before the MinLSN.

The following figure shows a simplified version of the end-of-a-transaction log with two active
transactions. Checkpoint records have been compacted to a single record.

LSN 148 is the last record in the transaction log. At the time that the recorded checkpoint at LSN
147 was processed, Tran 1 had been committed and Tran 2 was the only active transaction. That
makes the first log record for Tran 2 the oldest log record for a transaction active at the time of
the last checkpoint. This makes LSN 142, the Begin transaction record for Tran 2, the MinLSN.

Long-Running Transactions
The active log must include every part of all uncommitted transactions. An application that starts
a transaction and does not commit it or roll it back prevents the Database Engine from advancing
the MinLSN. This can cause two types of problems:

• If the system is shut down after the transaction has performed many uncommitted
modifications, the recovery phase of the subsequent restart can take much longer than
the time specified in the recovery interval option.
• The log might grow very large, because the log cannot be truncated past the MinLSN.
This occurs even if the database is using the simple recovery model where the
transaction log is generally truncated on each automatic checkpoint.
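To find the transaction that is preventing the MinLSN from advancing, DBCC OPENTRAN reports the oldest active transaction in the current database (the database name here is hypothetical):

```sql
USE Sales;   -- hypothetical database
GO
-- Reports the oldest active transaction (and the oldest replicated,
-- undistributed transaction, if any) that is holding back the MinLSN.
DBCC OPENTRAN;
```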

Replication Transactions
The Log Reader Agent monitors the transaction log of each database configured for transactional
replication and copies the transactions marked for replication from the transaction log into the
distribution database. The active log must contain all transactions that are marked for replication,
but that have not yet been delivered to the distribution database. If these transactions are not
replicated in a timely manner, they can prevent the truncation of the log.

Truncating the Transaction Log


If log records were never deleted from the transaction log, the logical log would grow until it filled
all the available space on the disks holding the physical log files. To reduce the size of the logical
log, the transaction log is periodically truncated. In very early versions of SQL Server, truncating
the log meant physically deleting the log records that were no longer needed for recovering or
restoring a database. However, in recent versions, the truncation process just marks for reuse the
space that was used by the old log records. The log records in this space are eventually
overwritten by new log records.

Truncation does not reduce the size of a physical log file. Instead, it reduces the size of the
logical log file and frees disk space for reuse.

The active portion of the transaction log, the active log, can never be truncated. The active
portion of the log is the part of the log that is used to recover the database and must always be
present in the database. The record at the start of the active portion of the log is identified by the
minimum recovery log sequence number (MinLSN). The log records before the MinLSN are only
needed to maintain a sequence of the transaction log backups.

The recovery model selected for a database determines how much of the transaction log in front
of the MinLSN must be retained in the database, as shown in the following:

• In the simple recovery model, a sequence of transaction log backups is not being
maintained. All log records before the MinLSN can be truncated at any time, except while
a BACKUP statement is being processed.
• In the full and bulk-logged recovery models, a sequence of transaction log backups is
being maintained. The part of the logical log in front of the MinLSN cannot be truncated
until the transaction log has been backed up.

Operations That Truncate the Log


Log truncation occurs at these points:

• After a BACKUP LOG statement is completed and it does not specify NO TRUNCATE.
• Every time a checkpoint is processed, if the database is using the simple recovery model.
This includes both explicit checkpoints that result from a CHECKPOINT statement and
implicit checkpoints that are generated by the system. The exception is that the log is not
truncated if the checkpoint occurs when a BACKUP statement is still active.
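As a sketch of the first case, a routine log backup (database name and backup path are hypothetical) truncates the inactive portion of the log on completion, because NO_TRUNCATE is not specified:

```sql
-- Back up the transaction log of a hypothetical database. When the backup
-- completes, the log records before the MinLSN are marked for reuse.
BACKUP LOG Sales
TO DISK = 'D:\Backups\Sales_log.trn';
```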

Log Truncation Example


Transaction logs are divided internally into sections called virtual log files. Virtual log files are the
unit of space that can be reused. When a transaction log is truncated, the log records in front of
the virtual log file containing the MinLSN are overwritten as new log records are generated.

This illustration shows a transaction log that has four virtual logs. The log has not been truncated
after the database was created. The logical log starts at the front of the first virtual log, and the
part of virtual log 4 beyond the end of the logical log has never been used.

This illustration shows how the log appears after truncation. The space before the start of the
virtual log that contains the MinLSN record has been marked for reuse.

Shrinking the Transaction Log

The size of the log file is physically reduced in the following situations:

• A DBCC SHRINKDATABASE statement is executed.
• A DBCC SHRINKFILE statement referencing a log file is executed.
• An autoshrink operation occurs.

Shrinking a log depends on first truncating the log. Log truncation does not reduce the size of a
physical log file. However, it does reduce the size of the logical log and marks as inactive the
virtual logs that do not hold any part of the logical log. A log shrink operation removes enough
inactive virtual logs to reduce the log file to the requested size.

The unit of the size reduction is a virtual log file. For example, if you have a 600-MB log file that
has been divided into six 100-MB virtual logs, the size of the log file can only be reduced in 100-
MB increments. The file size can be reduced to sizes such as 500 MB or 400 MB, but the file
cannot be reduced to sizes such as 433 MB or 525 MB.

The size of the virtual log file is chosen dynamically by the database engine when log files are
created or extended.

Virtual log files that hold part of the logical log cannot be freed. If all the virtual log files in a log file
hold parts of the logical log, the file cannot be shrunk until a truncation marks as inactive one or
more of the virtual logs at the end of the physical log.
When any file is shrunk, the space freed must come from the end of the file. When a transaction
log file is shrunk, enough virtual log files from the end of the log file are freed to reduce the log to
the size requested by the user. The target_size specified by the user is rounded to the next
highest virtual log file boundary. For example, if a user specifies a target_size of 325 MB for our
sample 600-MB file that contains 100-MB virtual log files, the last two virtual log files are removed
and the new file size is 400 MB.

A DBCC SHRINKDATABASE or DBCC SHRINKFILE operation immediately tries to shrink the
physical log file to the requested size:

• If no part of the logical log in the virtual log files extends beyond the target_size mark, the
virtual log files that come after the target_size mark are freed and the successful DBCC
statement is completed with no messages.
• If part of the logical log in the virtual logs does extend beyond the target_size mark, the
SQL Server Database Engine frees as much space as possible and issues an
informational message. The message tells you what actions you have to perform to
remove the logical log from the virtual logs at the end of the file. After you perform this
action, you can then reissue the DBCC statement to free the remaining space.

For example, assume that a 600-MB log file that contains six virtual log files has a logical log that
starts in virtual log 3 and ends in virtual log 4 when you run a DBCC SHRINKFILE statement with
a target_size of 275 MB:

Virtual log files 5 and 6 are freed immediately, because they do not contain part of the logical log.
However, to meet the specified target_size, virtual log file 4 should also be freed, but it cannot
because it holds the end portion of the logical log. After freeing virtual log files 5 and 6, the
Database Engine fills the remaining part of virtual log file 4 with dummy records. This forces the
end of the log file to the end of virtual log file 1. In most systems, all transactions starting in virtual
log file 4 will be committed within seconds. This means that the entire active portion of the log is
moved to virtual log file 1. The log file now looks similar to this:

The DBCC SHRINKFILE statement also issues an informational message that states that it could
not free all the space requested and that you can run a BACKUP LOG statement to free the
remaining space. After the active portion of the log moves to virtual log file 1, a BACKUP LOG
statement will truncate the entire logical log that is in virtual log file 4:

Because virtual log file 4 no longer holds any portion of the logical log, you can now run the same
DBCC SHRINKFILE statement with a target_size of 275 MB. Virtual log file 4 will then be freed
and the size of the physical log file will be reduced to the size you requested.
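The shrink-backup-shrink sequence described above can be sketched as follows (all names and the 275-MB target are hypothetical):

```sql
-- Hypothetical names: database "Sales", log file logical name "Sales_log".
USE Sales;
GO
DBCC SHRINKFILE (Sales_log, 275);        -- first attempt; may not reach 275 MB
BACKUP LOG Sales
    TO DISK = 'D:\Backups\Sales_log.trn'; -- truncates the now-inactive log
DBCC SHRINKFILE (Sales_log, 275);        -- reissue to free the remaining space
GO
```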

SYSTEM DATABASES AND MOVING SYSTEM DATABASES

System Databases:
SQL Server 2005 includes the following system databases.

• master: Records all the system-level information for an instance of SQL Server.

• msdb: Used by SQL Server Agent for scheduling alerts and jobs.

• model: Used as the template for all databases created on the instance of SQL Server.
Modifications made to the model database, such as database size, collation, recovery
model, and other database options, are applied to any databases created afterward.

• Resource: A read-only database that contains system objects that are included with SQL
Server 2005. System objects are physically persisted in the Resource database, but they
logically appear in the sys schema of every database.

• tempdb: A workspace for holding temporary objects or intermediate result sets.

Moving the tempdb database:


You can move tempdb files by using the ALTER DATABASE statement.
1. Determine the logical file names for the tempdb database by using sp_helpfile as follows:
use tempdb
go
sp_helpfile
go
The logical name for each file is contained in the name column. This example uses the default file
names of tempdev and templog.
2. Use the ALTER DATABASE statement, specifying the logical file name as follows:
use master
go
Alter database tempdb modify file (name = tempdev, filename = 'D:\SQL
2005\sqldata\tempdb.mdf')
go
Alter database tempdb modify file (name = templog, filename = 'D:\SQL
2005\sqldata\templog.ldf')
Go

You should receive the following messages that confirm the change:
Message 1
The file "tempdev" has been modified in the system catalog. The new path will be used the next
time the database is started.
Message 2
The file "templog" has been modified in the system catalog. The new path will be used the next
time the database is started.
3. Using sp_helpfile in tempdb will not confirm these changes until you restart SQL Server.
4. Stop and then restart SQL Server.

Moving the master database:


1. Change the path for the master data files and the master log files in SQL Server Enterprise
Manager.
NOTE: If you are using SQL Server 2005, use SQL Server Configuration Manager to change
the path for the master data files and the master log files.
NOTE: You may also change the location of the error log here.
2. Right-click the SQL Server in Enterprise Manager and then click Properties.
3. Click Startup Parameters to see the following entries:

-dD:\MSSQL7\data\master.mdf
-eD:\MSSQL7\log\ErrorLog
-lD:\MSSQL7\data\mastlog.ldf

-d is the fully qualified path for the master database data file.

-e is the fully qualified path for the error log file.

-l is the fully qualified path for the master database log file.

4. Change these values as follows:


a. Remove the current entries for the Master.mdf and Mastlog.ldf files.
b. Add new entries specifying the new location:

5. Stop SQL Server.


6. Copy the Master.mdf and Mastlog.ldf files to the new location (E:\Sqldata).

7. Restart SQL Server.
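Using the E:\Sqldata location from step 6, the rewritten startup parameter entries would look similar to the following (drive letters and folder names depend on your system; the -e entry only changes if you also move the error log):

```
-dE:\Sqldata\master.mdf
-eE:\Sqldata\ErrorLog
-lE:\Sqldata\mastlog.ldf
```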

Moving the MSDB database:

1. Stop, and then restart SQL Server together with the -c option, the -m option, and trace flag 3608.


2. Make sure that the SQL Server Agent service is not currently running.
3. Detach the msdb database as follows:
use master
go
sp_detach_db 'msdb'
go
4. Move the Msdbdata.mdf and Msdblog.ldf files from the current location (D:\Mssql8\Data) to the
new location (E:\Mssql8\Data).
5. Remove -c -m -T3608 from the startup parameters box in Enterprise Manager.

6. Stop and then restart SQL Server.

NOTE: If you try to reattach the msdb database by starting SQL Server together with the -c
option, the -m option, and trace flag 3608, you may receive the following error message:

Server: Msg 615, Level 21, State 1, Line 1


Could not find database table ID 3, name 'model'.

7. Reattach the msdb database as follows:


use master
go
sp_attach_db 'msdb','E:\Mssql8\Data\msdbdata.mdf','E:\Mssql8\Data\msdblog.ldf'
go

NOTE: If you use this procedure together with moving the model database, you detach the
msdb database while you detach the model database. When you do this, you must reattach the
model database first, and then reattach the msdb database. If you reattach the msdb database
first, you receive the following error message when you try to reattach the model database:

Msg 0, Level 11, State 0, Line 0


A severe error occurred on the current command. The results, if any, should be discarded.

In this case, you must detach the msdb database, reattach the model database, and then
reattach the msdb database.

After you move the msdb database, you may receive the following error message:
Error 229: EXECUTE permission denied on object 'ObjectName', database 'master', owner 'dbo'.

This problem occurs because the ownership chain has been broken. The database owners for
the msdb database and for the master database are not the same. In this case, the ownership of
the msdb database had been changed. To work around this problem, run the following Transact-
SQL statements. You can do this by using the Osql.exe command-line utility (SQL Server 7.0 and
SQL Server 2000) or the Sqlcmd.exe command-line utility (SQL Server 2005):

USE MSDB
Go
EXEC sp_changedbowner 'sa'
Go


Moving the model database:

In SQL Server 2005 and in SQL Server 2000, you cannot detach system databases by using the
sp_detach_db stored procedure. When you try to run the sp_detach_db 'model' statement, you
receive the following error message:

Server: Msg 7940, Level 16, State 1, Line 1


System databases master, model, msdb, and tempdb cannot be detached.
To move the model database, you must start SQL Server together with the -c option, the -m
option, and trace flag 3608. Trace flag 3608 prevents SQL Server from recovering any database
except the master database.

NOTE: You will not be able to access any user databases after you do this. You must not
perform any operations, other than the following steps, while you use this trace flag.
To add trace flag 3608 as a SQL Server startup parameter, follow these steps:
If you are using SQL Server 2005, you can use SQL Server Configuration Manager to change the
startup parameters of the SQL Server service. For more information about how to change the
startup parameters, visit the following Microsoft Developer Network (MSDN) Web site:

After you add the -c option, the -m option, and trace flag 3608, follow these steps:
1. Stop and then restart SQL Server.
2. Detach the model database by using the following commands:use master
go
sp_detach_db 'model'
go

3. Move the Model.mdf and Modellog.ldf files from the D:\Mssql7\Data folder to the E:\Sqldata
folder.
4. Reattach the model database by using the following commands:use master
go
sp_attach_db 'model','E:\Sqldata\model.mdf','E:\Sqldata\modellog.ldf'
go

5. Remove -c -m -T3608 from the startup parameters in SQL Server Enterprise Manager or in
SQL Server Configuration Manager.
6. Stop and then restart SQL Server. You can verify the change in file locations by using the
sp_helpfile stored procedure. For example, use the following command: use model
go

sp_helpfile
go

TOOLS
DATABASE DESIGNING

Syntax:

CREATE DATABASE database_name


[ ON
[ PRIMARY ] [ <filespec> [ ,...n ]
[ , <filegroup> [ ,...n ] ]
[ LOG ON { <filespec> [ ,...n ] } ]
]
[ COLLATE collation_name ]
[ WITH <external_access_option> ]
]
[;]
To attach a database
CREATE DATABASE database_name
ON <filespec> [ ,...n ]
FOR { ATTACH [ WITH <service_broker_option> ]
| ATTACH_REBUILD_LOG }
[;]
<filespec> ::=
{
(
NAME = logical_file_name ,
FILENAME = 'os_file_name'
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
) [ ,...n ]
}

<filegroup> ::=
{
FILEGROUP filegroup_name [ DEFAULT ]
<filespec> [ ,...n ]
}

<external_access_option> ::=
{
DB_CHAINING { ON | OFF }
| TRUSTWORTHY { ON | OFF }
}
<service_broker_option> ::=
{
ENABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
}

Create a database snapshot

CREATE DATABASE database_snapshot_name
ON
(
NAME = logical_file_name,
FILENAME = 'os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name
[;]
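As a sketch of the snapshot syntax above, assuming a source database named Sales whose data file has the logical name Sales_Data (the snapshot name and file path are illustrative):

```sql
-- Hypothetical example: create a snapshot of a database named Sales.
-- The logical name Sales_Data and the path are assumptions for illustration.
CREATE DATABASE Sales_Snapshot
ON
( NAME = Sales_Data,
  FILENAME = 'D:\Snapshots\Sales_Data.ss' )
AS SNAPSHOT OF Sales;
```

The snapshot file is a sparse file; queries against Sales_Snapshot see a read-only, point-in-time view of the Sales database as it existed when the snapshot was created.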

Example:

CREATE DATABASE sa
ON PRIMARY

(NAME=KANCHAN,
FILENAME='D:\MSSQL\aditya\KANCHAN.mdf',
SIZE=100,
MAXSIZE=200,
FILEGROWTH=25%),

(NAME=BHARATI,
FILENAME='D:\MSSQL\aditya\BHARATI.NDF',
SIZE=100,
MAXSIZE=200,
FILEGROWTH=25%),

FILEGROUP MANOHAR
(NAME=BHARATI2,
FILENAME='D:\MSSQL\aditya\BHARATI2.NDF',
SIZE=100,
MAXSIZE=200,
FILEGROWTH=25%),

(NAME=BHARATI3,
FILENAME='D:\MSSQL\aditya\BHARATI3.NDF',
SIZE=100,
MAXSIZE=200,
FILEGROWTH=25%)

Guidelines for creation of database:

First, you must decide where to put the data and log files. Here are some guidelines to use:
✦ Data and log files should be on separate physical drives so that, in case of a disaster, you have
a better chance of recovering all data.
✦ Transaction logs are best placed on a RAID-1 array because this has the fastest sequential
write speed.
✦ Data files are best placed on a RAID-5 array because they have faster read speed than
other RAID arrays.
✦ If you have access to a RAID-10 array, you can place data and log files on it because it has all
the advantages of RAID-1 and RAID-5.

Next, you must decide how big your files should be. Data files are broken down into 8KB pages
and 64KB extents (eight contiguous pages). To figure out how big your database will need to be,
you must figure out how big your tables will be. You can do that using these steps:

1. Calculate the space used by a single row of the table.
a. To do this, add the storage requirements for each datatype in the table.
b. Add the null bitmap using this formula: null_bitmap = 2 + ((number of columns
+ 7) /8).
c. Calculate the space required for variable length columns using this formula:
variable_datasize = 2 + (num_variable_columns X 2) + max_varchar_size.
d. Calculate the total row size using this formula: Row_Size = Fixed_Data_Size +
Variable_Data_Size + Null_Bitmap + Row_Header.
The row header is always 4 bytes.
2. Calculate the number of rows that will fit on one page. Each page is 8,192 bytes with a
header, so each page holds 8,096 bytes of data. Therefore, calculate the number of rows
using this formula: Rows_Per_Page = 8096 / (Row_Size + 2).
3. Estimate the number of rows the table will hold. No formula exists to calculate this; you just
need to have a good understanding of your data and user community.
4. Calculate the total number of pages that will be required to hold these rows. Use this
formula:
Total Number of Pages = Number of Rows in Table / Number of Rows Per Page.
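The four steps above can be worked through in T-SQL. This is a sketch for a hypothetical table with two fixed-length columns (an int and a datetime), one varchar(50) column (three columns in total), and an estimated one million rows:

```sql
-- Hypothetical worked example of the sizing formulas above.
DECLARE @Fixed_Data_Size int, @Null_Bitmap int, @Variable_Data_Size int,
        @Row_Size int, @Rows_Per_Page int, @Estimated_Rows int;
SET @Fixed_Data_Size    = 4 + 8;               -- int (4 bytes) + datetime (8 bytes)
SET @Null_Bitmap        = 2 + ((3 + 7) / 8);   -- 3 columns total
SET @Variable_Data_Size = 2 + (1 * 2) + 50;    -- one varchar(50) column
SET @Row_Size = @Fixed_Data_Size + @Variable_Data_Size + @Null_Bitmap + 4;
SET @Rows_Per_Page = 8096 / (@Row_Size + 2);   -- integer division rounds down
SET @Estimated_Rows = 1000000;
SELECT @Row_Size      AS Row_Size,             -- 73 bytes
       @Rows_Per_Page AS Rows_Per_Page,        -- 107 rows
       (@Estimated_Rows + @Rows_Per_Page - 1) / @Rows_Per_Page AS Total_Pages;
```

Multiplying Total_Pages by 8 KB gives a rough initial data-file size for this one table; repeat the calculation for each table and index.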

Creation of database using GUI:

Once you have decided where to put your files and how big they should be, follow these
steps to create a database named Sales (you will be creating the files on a single drive for
simplicity):
1. Start SQL Server Management Studio by selecting Start > Programs > Microsoft SQL
Server 2005 > Management Studio.
2. Connect to your default instance of SQL Server.
3. Expand your Databases folder.
4. Right-click either the Databases folder in the console tree or the white space in the right pane,
and choose New Database from the context menu.
5. You should now see the General tab of the Database properties sheet. Enter the database
name Sales, and leave the owner as <default>.
6. In the data files grid, in the Logical Name column, change the name of the primary data file to
Sales_Data. Use the default location for the file, and make sure the initial size is 3.
7. Click the ellipsis button (the one with three periods) in the Autogrowth column for the
Sales_Data file. In the dialog box that opens, check the Restricted File Growth radio
button, and restrict the filegrowth to 20MB.

8. To add a secondary data file, click the Add button, and change the logical name of the new file
to Sales_Data2. Here too use the default location for the file, and make sure the initial size is 3.
9. Restrict the filegrowth to a maximum of 20MB for Sales_Data2 by clicking the ellipsis button in
the Autogrowth column.
10. Leave all of the defaults for the Sales_Log file.


11. Click OK when you are finished. You should now have a new Sales database.
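The same Sales database can be created with T-SQL. This sketch mirrors the GUI steps; the file paths are illustrative and should be replaced with your server's data directory:

```sql
-- Hypothetical T-SQL equivalent of the GUI steps above (paths are assumptions).
CREATE DATABASE Sales
ON PRIMARY
( NAME = Sales_Data,
  FILENAME = 'D:\MSSQL\Data\Sales_Data.mdf',
  SIZE = 3MB, MAXSIZE = 20MB, FILEGROWTH = 1MB ),
( NAME = Sales_Data2,
  FILENAME = 'D:\MSSQL\Data\Sales_Data2.ndf',
  SIZE = 3MB, MAXSIZE = 20MB, FILEGROWTH = 1MB )
LOG ON
( NAME = Sales_Log,
  FILENAME = 'D:\MSSQL\Data\Sales_Log.ldf',
  SIZE = 1MB, FILEGROWTH = 10% );
```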

SECURITY
Server-Level Security:

A user may be initially identified to SQL Server via one of three methods:
✦ Windows user login
✦ Membership in a Windows user group
✦ SQL Server–specific login (if the server uses mixed-mode security)
At the server level, the user is known by his or her LoginID, which is either his or her SQL Server
login, or his or her Windows domain and user name.
Once the user is known to the server and identified, the user has whatever server-level
administrative rights have been granted via fixed server roles. If the user belongs to the sysadmin
role, he or she has full access to every server function, database, and object in the server.
A user can be granted access to a database, and his or her network login ID can be mapped to a
database-specific user ID in the process. If the user doesn’t have access to a database, he or she
can gain access as the guest user with some configuration changes within the database server.

Database-Level Security:

At the database level, the user may be granted certain administrative-level permissions by
belonging to fixed database roles.
The user still can’t access the data. He or she must be granted permission to the database
objects (e.g., tables, stored procedures, views, functions). User-defined roles are custom roles
that serve as groups. The role may be granted permission to a database object, and users may

be assigned to a database user-defined role. All users are automatically members of the public
standard database role.
Object permissions are assigned by means of grant, revoke, and deny. A deny permission
overrides a grant permission, which overrides a revoke permission. A user may have multiple
permission paths to an object (individually, through a standard database role, and through the
public role). If any of these paths is denied, the user is blocked from accessing the object.
Otherwise, if any of the paths is granted permission, then the user can access the object.
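As a sketch of the precedence rules just described (the table dbo.Orders, the role SalesRole, and the user Joe are hypothetical names):

```sql
-- Hypothetical example of the grant/deny/revoke precedence described above.
GRANT SELECT ON dbo.Orders TO SalesRole;   -- members of SalesRole can read
DENY  SELECT ON dbo.Orders TO Joe;         -- Joe is blocked even if he is in SalesRole
REVOKE SELECT ON dbo.Orders FROM Joe;      -- removes Joe's deny entry; the
                                           -- role's grant now applies again
```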
Object permission is very detailed and a specific permission exists for every action that can be
performed (select, insert, update, run, and so on) for every object. Certain database
fixed roles also affect object access, such as the ability to read or write to the database.
It’s very possible for a user to be recognized by SQL Server and not have access to any
database. It’s also possible for a user to be defined within a database but not recognized by the
server. Moving a database and its permissions to another server, but not moving the logins, will
cause such orphaned users.

Object Ownership:

The final aspect of this overview of SQL Server’s security model involves object ownership.
Every object is owned by a schema. The default schema is dbo—not to be confused with the dbo
role.
In previous versions of SQL Server, objects were owned by users (or, more precisely, every
owner was also a schema). SQL Server 2005 instead follows the ANSI SQL model of
database-schema-objects, which has several advantages.
Ownership becomes critical when permission is being granted to a user to run a stored procedure
when the user doesn’t have permission to the underlying tables. If the ownership chain from the
tables to the stored procedure is consistent, then the user can access the stored procedure and
the stored procedure can access the tables as its owner. However, if the ownership chain is
broken, meaning there’s a different owner somewhere between the stored procedure and the
table, then the user must have rights to the stored procedure, the underlying tables, and every
other object in between.
Most security management can be performed in Management Studio. With code, security is
managed by means of the grant, revoke, and deny Data Control Language (DCL) commands,
and several system stored procedures.

Windows Security:

Because SQL Server exists within a Windows environment, one aspect of the security strategy
must be securing the Windows server.
SQL Server databases frequently support websites, so Internet Information Server (IIS) security
and firewalls must be considered within the security plan.
Windows security is an entire topic in itself, and therefore outside the scope of this book. If, as a
DBA, you are not well supported by qualified network staff, then you should make the effort to
become proficient in Windows Server technologies, especially security.

SQL Server Login:

Don’t confuse user access to SQL Server with SQL Server’s Windows accounts. The two logins
are completely different.
SQL Server users don’t need access to the database directories or data files on a Windows level
because the SQL Server process, not the user, will perform the actual file access.
However, the SQL Server process needs permission to access the files, so it needs a Windows
account. Two types are available:
✦ Local admin account: SQL Server can use the local admin account of the operating system
for permission to the machine. This option is adequate for single-server installations
but fails to provide the network security required for distributed processing.
✦ Domain user account (recommended): SQL Server can use a Windows user account
created specifically for it. The SQL Server user account can be granted administrator

rights for the server and can access the network through the server to talk to other
servers.
The SQL Server accounts were initially configured when the server was installed.

Server Security:

SQL Server uses a two-phase security-authentication scheme. The user is first authenticated to
the server. Once the user is “in” the server, access can be granted to the individual databases.
SQL Server stores all login information within the master database.
SQL Server Authentication Mode
When SQL Server was installed, one of the decisions made was which of the following
authentication methods to use:
✦ Windows authentication mode: Windows authentication only
✦ Mixed mode: Both Windows authentication and SQL Server user authentication

This option can be changed after installation in Management Studio, in the Security page of the
SQL Server Properties dialog box, as shown in Figure

From code, the authentication mode can be checked by means of the xp_loginconfig system
stored procedure, as follows:
EXEC xp_loginconfig 'login mode'
Result:
name config_value
---------------------------- ----------------------------
login mode Mixed

Notice that the system stored procedure to report the authentication mode is an extended
stored procedure. That’s because the authentication mode is stored in the registry in the following
entry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\
Microsoft SQL Server\<instance_name>\MSSQLServer\LoginMode
A LoginMode value of 1 indicates Windows authentication only; 2 indicates mixed mode.
The only ways to set the authentication mode are to use either Management Studio or
RegEdit.

Windows Authentication:

Windows authentication is superior to mixed mode because the user does not need to learn yet
another password and because it leverages the security design of the network.
Using Windows authentication means that users must exist as Windows users to be recognized
by SQL Server. The Windows SID (security identifier) is passed from Windows to SQL
Server.
Windows authentication is very robust in that it will authenticate not only Windows users, but also
users within Windows user groups.
When a Windows group is accepted as a SQL Server login, any Windows user who is a member
of the group can be authenticated by SQL Server. Access, roles, and permissions can be
assigned for the Windows group, and they will apply to any Windows user in the group.
If the Windows users are already organized into groups by function and security level,
using those groups as SQL Server users provides consistency and reduces administrative
overhead.
SQL Server also knows the actual Windows username, so the application can gather audit
information at the user level as well as at the group level.

Denying a Windows Login:

Using the paradigm of grant, revoke, and deny, a user may be blocked for access using
sp_denylogin. This can prevent a user or group from accessing SQL Server even if he or she
could otherwise gain entry from another method.
For example, suppose the Accounting group is granted normal login access, while the Probation
group is denied access. Joe is a member of both the Accounting group and the Probation group.
The Probation group’s denied access blocks Joe from the SQL Server even though he is granted
access as a member of the Accounting group, because deny overrides grant.
To deny a Windows user or group, use the sp_denylogin system stored procedure. If the
user or group being denied access doesn't exist in SQL Server, then sp_denylogin adds and then
denies him, her, or it:
EXEC sp_denylogin 'XPS\Joe'
To restore the login after denying access, you must first grant access with the sp_grantlogin
system stored procedure.
You can only revoke a login using T-SQL. The feature isn’t supported in Management Studio.
Setting the Default Database
The default database is set in the Login Properties form in the General page. The default
database can be set from code by means of the sp_defaultdb system stored procedure:
EXEC sp_defaultdb 'Sam', 'OBXKites'

SQL Server Logins:

The optional SQL Server logins are useful when Windows authentication is inappropriate or
unavailable. It’s provided for backward compatibility and for legacy applications that are hard-
coded to a SQL Server login.
Implementing SQL Server logins (mixed mode) will automatically create an sa user, who will
be a member of the sysadmin fixed server role and have all rights to the server. An sa user
without a password is very common and the first attack every hacker tries when detecting a SQL
Server. Therefore, the Best Practice is disabling the sa user and assigning different users, or
roles, to the sysadmin fixed server role instead.
To manage SQL Server users in Management Studio use the same Login–New dialog used when
adding Windows users, but select SQL Server Authentication.
In T-SQL code, use the sp_addlogin system stored procedure. Because this requires setting up a
user, rather than just selecting one that already exists, it's more complex than
sp_grantlogin. Only the login name is required:
sp_addlogin 'login', 'password', 'defaultdatabase',
'defaultlanguage', 'sid', 'encryption_option'
For example, the following code adds Sam as a SQL Server user and sets his default database to
the OBX Kite Store sample database:

EXEC sp_addlogin 'Sam', 'myoldpassword', 'OBXKites'
The encryption option (skip_encryption) directs SQL Server to store the password without any
encryption in the sysxlogins system table. SQL Server expects the password to be encrypted, so
the password won’t work. Avoid this option.
The server user ID, or SID, is a binary value (stored as varbinary(85)) that SQL Server uses to
identify the user. If the user is being set up on two servers as the same user, then the SID will
need to be specified for the second server. Query the sys.server_principals catalog view to find
the user's SID:
SELECT Name, SID
FROM sys.server_principals
WHERE Name = 'Sam'
Result:
Name SID
--------- --------------------------------------------
Sam 0x1EFDC478DEB52045B52D241B33B2CD7E
Updating a Password
The password can be modified by means of the sp_password system stored procedure:
EXEC sp_password 'myoldpassword', 'mynewpassword', 'Joe'
If the password is empty, use the keyword NULL instead of empty quotes ('').

Removing a Login
To remove a SQL Server login, use the sp_droplogin system stored procedure:
EXEC sp_droplogin 'Joe'
Removing a login will also remove all the login security settings.
Setting the Default Database
The default database is set in the Login Properties form in the General page, just as it is for
Windows users. The default database can be set from code by means of the sp_defaultdb system
stored procedure:
EXEC sp_defaultdb 'Sam', 'OBXKites'

Fixed Database Roles:

SQL Server includes several standard, or fixed, database roles. Like the fixed server roles, these
primarily organize administrative tasks. A user may belong to multiple roles. The fixed database
roles include the following:

✦ db_accessadmin: can authorize a user to access the database, but not to manage database-
level security.
✦ db_backupoperator: can perform backups, checkpoints, and dbcc commands, but not
restores (only server sysadmins can perform restores).
✦ db_datareader: can read all the data in the database. This role is the equivalent of a grant
on all objects, and it can be overridden by a deny permission.
✦ db_datawriter: can write to all the data in the database. This role is the equivalent of a grant
on all objects, and it can be overridden by a deny permission.
✦ db_ddladmin: can issue DDL commands (create, alter, and drop).
✦ db_denydatareader: blocks reading from any table in the database. This deny overrides any
object-level grant.
✦ db_denydatawriter: blocks modifying data in any table in the database. This deny overrides
any object-level grant.
✦ db_owner: is a special role that has all permissions in the database. This role includes all the
capabilities of the other roles. It is different from the dbo user role. This is not the database-level
equivalent of the server sysadmin role; an object-level deny will override membership in this role.
✦ db_securityadmin: can manage database-level security: roles and permissions.

Assigning Fixed Database Roles with Management Studio:


The fixed database roles can be assigned with Management Studio with either of the following
two procedures:
✦ Adding the role to the user in the user’s Database User Properties form (see Figure 40-7),
either as the user is being created or after the user exists.
✦ Adding the user to the role in the Database Role Properties dialog. Select Roles under the
database’s Security node, and use the context menu to open the Properties form (see Figure 40-
8).

Assigning Fixed Database Roles with T-SQL:

From code, you can add a user to a fixed database role with the sp_addrolemember system
stored procedure (sp_addrole, by contrast, creates a new role).
To examine the assigned roles in T-SQL, query the sys.database_role_members catalog view
joined with sys.database_principals.
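A sketch, assuming a database user named Sam already exists in the current database:

```sql
-- Hypothetical example: add user Sam to the db_datareader fixed role,
-- then list all role memberships in the current database.
EXEC sp_addrolemember 'db_datareader', 'Sam';

SELECT r.name AS RoleName, m.name AS MemberName
FROM sys.database_role_members AS drm
JOIN sys.database_principals  AS r ON drm.role_principal_id   = r.principal_id
JOIN sys.database_principals  AS m ON drm.member_principal_id = m.principal_id;
```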

Server Roles:

SQL Server includes only fixed, predefined server roles. Primarily, these roles grant permission to
perform certain server-related administrative tasks. A user may belong to multiple roles.
The following roles are best used to delegate certain server administrative tasks:

✦ Bulkadmin: can perform bulk insert operations.
✦ Dbcreator: can create, alter, drop, and restore databases.
✦ Diskadmin: can create, alter, and drop disk files.
✦ Processadmin: can kill a running SQL Server process.
✦ Securityadmin: can manage the logins for the server.
✦ Serveradmin: can configure the server-wide settings, including setting up full-text searches
and shutting down the server.

✦ Setupadmin: can configure linked servers, extended stored procedures, and the startup stored
procedure.
✦ Sysadmin: can perform any activity in the SQL Server installation, regardless of any other
permission setting. The sysadmin role even overrides denied permissions on an object.

SQL Server automatically creates a login, 'BUILTIN\Administrators', which includes all Windows
users in the Windows Administrators group, and assigns that group to the SQL Server sysadmin role.
The BUILTIN\Administrators login can be deleted or modified if desired. If the SQL Server is
configured for mixed-mode security, it also creates an sa user and assigns that user to the SQL
Server sysadmin role. The sa user is there for backward compatibility. Disable or rename the sa
user, or at least assign it a password, but don't use it as a developer and DBA sign-on. In addition,
delete the BUILTIN\Administrators login. Instead, use Windows authentication and assign the
DBAs and database developers to the sysadmin role.
A user must reconnect for the full capabilities of the sysadmin role to take effect. The server roles
are set in Management Studio in the Server Roles page of the Login Properties dialog (see
Figure 40-5).

In code, a user is assigned to a server role by means of a system stored procedure:


sp_addsrvrolemember
[ @loginame = ] 'login',
[ @rolename = ] 'role'
For example, the following code adds the login XPS\Lauren to the sysadmin role:
EXEC sp_addsrvrolemember 'XPS\Lauren', 'sysadmin'
The counterpart of sp_addsrvrolemember, sp_dropsrvrolemember, removes a login from a server
fixed role:
EXEC sp_dropsrvrolemember 'XPS\Lauren', 'sysadmin'
To view the assigned roles using code, query the sys.server_principals catalog view to select the
members, joined with sys.server_role_members, and joined again to sys.server_principals
to select the roles.
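That query can be sketched as:

```sql
-- List each fixed server role and its member logins.
SELECT r.name AS RoleName, m.name AS LoginName
FROM sys.server_role_members AS srm
JOIN sys.server_principals  AS r ON srm.role_principal_id   = r.principal_id
JOIN sys.server_principals  AS m ON srm.member_principal_id = m.principal_id;
```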

Orphaned Windows Users:

When a Windows user is added to SQL Server and then removed from the Windows domain, the
user still exists in SQL Server but is considered orphaned. Being an orphaned user means even

though the user has access to the SQL Server, he or she may not necessarily have access to the
network and thus no access to the SQL Server box itself.
The sp_validatelogins system stored procedure will locate all orphaned users and return
their Windows NT security identifiers and login names. For the following code example, Joe was
granted access to SQL Server and then removed from Windows:
EXEC sp_validatelogins
Result (formatted):
SID NT Login
----------------------------------------------- ----------
0x010500000000000515000000FCE31531A931... XPS\Joe
This is not a security hole. Without a Windows login with a matching SID, the user can’t log into
SQL Server.
To resolve the orphaned user:
1. Remove the user from any database access using sp_revokedbaccess.
2. Revoke the user’s server access using sp_revokelogin.
3. Add the user as a new login.
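The three steps might look like this for the orphaned user XPS\Joe (the database name is illustrative; step 3 assumes the Windows account has been recreated):

```sql
-- Hypothetical example: resolve an orphaned Windows user.
USE OBXKites;
EXEC sp_revokedbaccess 'XPS\Joe';   -- 1. remove database access
EXEC sp_revokelogin    'XPS\Joe';   -- 2. revoke server access
EXEC sp_grantlogin     'XPS\Joe';   -- 3. re-add the recreated Windows login
```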
Security Delegation:

In an enterprise network with multiple servers and IIS, logins can become a
problem because a user may be logging into one server that is accessing another server. This
problem arises because each server must have a trust relationship with the others. For internal
company servers, this may not be a problem, but when one of those servers sits in a DMZ on the
Internet, you may not want to establish that trust, as it presents a security hole.
Security delegation is a Windows feature (available since Windows 2000) that uses Kerberos to
pass security information among trusted servers.
For example, a user can access IIS, which can access a SQL Server, and the SQL Server will
see the user as the username even though the connection came from IIS.
A few conditions must be met in order for Kerberos to work:
✦ All servers must be running Windows 2000 or later, running Active Directory in the
same domain or within the same trust tree.
✦ The “Account is sensitive and cannot be delegated” option must not be selected for the
user account.
✦ The “Account is trusted for delegation” option must be selected for the SQL Server
service account.
✦ The “Computer is trusted for delegation” option must be selected for the server
running SQL Server.
✦ SQL Server must have a Service Principal Name (SPN), created by setspn.exe, available in
the Windows 2000 Resource Kit.

Security delegation is difficult to set up and may require the assistance of your network domain
administrator. However, the ability to recognize users going through IIS is a powerful security
feature.

Surface Area Reduction:


SQL Server 2005 installation minimizes the "attack surface" because by default, optional features
are not installed. During installation the administrator can choose to install:
• Database Engine
• Analysis Services Engine
• Reporting Services
• Integration Services
• Notification Services
• Documentation and Samples

It is a good practice to review which product features you actually need and install only those
features. Later, install additional features only as needed. SQL Server 2005 includes sample
databases for OLTP, data warehousing, and Analysis Services. Install sample databases on test
servers only; they are not installed by default when you install the corresponding engine feature.
SQL Server 2005 includes sample code covering every feature of the product. These samples
are not installed by default and should be installed only on a development server, not on a
production server. Each item of sample code has undergone a review to ensure that the code
follows best practices for security. Each sample uses Microsoft Windows® security principals and
illustrates the principle of least privilege.
SQL Server has always been a feature-rich database and the number of new features in
SQL Server 2005 can be overwhelming. One way to make a system more secure is to limit the
number of optional features that are installed and enabled by default. It is easier to enable
features when they are needed than it is to enable everything by default and then turn off features
that you do not need. This is the installation policy of SQL Server 2005, known as "off by default,
enable when needed." One way to ensure that security policies are followed is to make secure
settings the default and make them easy to use.
SQL Server 2005 provides a "one-stop" utility that can be used to enable optional features on a
per-service and per-instance basis as needed. Although there are other utilities (such as Services
in Control Panel), server configuration commands (such as sp_configure), and APIs such as
WMI (Windows Management Instrumentation) that you can use, the SQL Server Surface Area
Configuration tool combines this functionality into a single utility program. This program can be
used either from the command line or via a graphic user interface.
SQL Server Surface Area Configuration divides configuration into two subsets: services and
connections, and features. Use the Surface Area Configuration for Services and Connections tool
to view the installed components of SQL Server and the client network interfaces for each engine
component. The startup type for each service (Automatic, Manual, or Disabled) and the client
network interfaces that are available can be configured on a per-instance basis. Use the Surface
Area Configuration for Features tool to view and configure instance-level features.
The features enabled for configuration are:
• CLR Integration
• Remote use of a dedicated administrator connection
• OLE Automation system procedures
• System procedures for Database Mail and SQL Mail
• Ad hoc remote queries (the OPENROWSET and OPENDATASOURCE functions)
• SQL Server Web Assistant
• xp_cmdshell availability
The features enabled for viewing are:
• HTTP endpoints
• Service Broker endpoint
The SQL Server Surface Area Configuration command-line interface, sac.exe, permits you to
import and export settings. This enables you to standardize the configuration of a group of
SQL Server 2005 instances. You can import and export settings on a per-instance basis and also
on a per-service basis by using command-line parameters. For a list of command-line
parameters, use the -? command-line option. You must have sysadmin privilege to use this
utility. The following code is an example of exporting all settings from the default instance of SQL
Server on server1 and importing them into server2:

sac out server1.out -S server1 -U admin -I MSSQLSERVER
sac in server1.out -S server2

When you upgrade an instance of SQL Server to SQL Server 2005 by performing an in-place
upgrade, the configuration options of the instance are unchanged. Use SQL Server Surface Area
Configuration to review feature usage and turn off features that are not needed. You can turn off
the features in SQL Server Surface Area Configuration or by using the system stored procedure,
sp_configure. Here is an example of using sp_configure to disallow the execution of
xp_cmdshell on a SQL Server instance:

-- Allow advanced options to be changed.


EXEC sp_configure 'show advanced options', 1
GO
-- Update the currently configured value for advanced options.
RECONFIGURE
GO
-- Disable the feature.
EXEC sp_configure 'xp_cmdshell', 0
GO
-- Update the currently configured value for this feature.
RECONFIGURE
GO

In SQL Server 2005, SQL Server Browser functionality has been factored into its own service and
is no longer part of the core database engine. Additional functions are also factored into separate
services. Services that are not a part of the core database engine and can be enabled or disabled
separately include:
• SQL Server Active Directory Helper
• SQL Server Agent
• SQL Server FullText Search
• SQL Server Browser
• SQL Server VSS Writer
The SQL Server Browser service needs to be running only to connect to named SQL Server
instances that use TCP/IP dynamic port assignments. It is not necessary to connect to default
instances of SQL Server 2005 and named instances that use static TCP/IP ports. For a more
secure configuration, always use static TCP/IP port assignments and disable the SQL Server
Browser service. The VSS Writer allows backup and restore using the Volume Shadow Copy
framework. This service is disabled by default. If you do not use Volume Shadow Copy, disable
this service. If you are running SQL Server outside of an Active Directory® directory service,
disable the Active Directory Helper.
Best practices for surface area reduction
• Install only those components that you will immediately use. Additional components can
always be installed as needed.
• Enable only the optional features that you will immediately use.

• Review optional feature usage before doing an in-place upgrade and disable unneeded
features either before or after the upgrade.
• Develop a policy with respect to permitted network connectivity choices. Use SQL Server
Surface Area Configuration to standardize this policy.
• Develop a policy for the usage of optional features. Use SQL Server Surface Area
Configuration to standardize optional feature enabling. Document any exceptions to the policy
on a per-instance basis.
• Turn off unneeded services by setting the service to either Manual startup or Disabled.

Service Account Selection and Management


SQL Server 2005 executes as a set of Windows services. Each service can be configured to use
its own service account. This facility is exposed at installation. SQL Server provides a special
tool, SQL Server Configuration Manager, to manage these accounts. In addition, these accounts
can be set programmatically through the SQL Server WMI Provider for Configuration. When you
select a Windows account to be a SQL Server service account, you have a choice of:
• Domain user that is not a Windows administrator
• Local user that is not a Windows administrator
• Network Service account
• Local System account
• Local user that is a Windows administrator
• Domain user that is a Windows administrator
When choosing service accounts, consider the principle of least privilege. The service account
should have exactly the privileges that it needs to do its job and no more privileges. You also
need to consider account isolation; the service accounts should not only be different from one
another, they should not be used by any other service on the same server. Only the first two
account types in the list above have both of these properties. Making the SQL Server service
account an administrator, at either a server level or a domain level, bestows too many unneeded
privileges and should never be done. The Local System account is not only an account with too
many privileges, but it is a shared account and might be used by other services on the same
server. Any other service that uses this account has the same set of privileges as the
SQL Server service that uses the account. Although Network Service has network access and is
not a Windows superuser account, it is a shareable account. This account is usable as a
SQL Server service account only if you can ensure that no other services that use this account
are installed on the server.
Using a local user or domain user that is not a Windows administrator is the best choice. If the
server that is running SQL Server is part of a domain and must access domain resources such as
file shares or uses linked server connections to other computers running SQL Server, a domain
account is the best choice. If the server is not part of a domain (for example, a server running in
the perimeter network (also known as the DMZ) in a Web application) or does not need to access
domain resources, a local user that is not a Windows administrator is preferred.
Creating the user account that will be used as a SQL Server service account is easier in
SQL Server 2005 than in previous versions. When SQL Server 2005 is installed, a Windows
group is created for each SQL Server service, and the service account is placed in the
appropriate group. To create a user that will serve as a SQL Server service account, simply
create an "ordinary" account that is either a member of the Users group (non-domain user) or
Domain Users group (domain user). During installation, the user is automatically placed in the
SQL Server service group and the group is granted exactly the privileges that are needed.

If the service account needs additional privileges, the privilege should be granted to the
appropriate Windows group, rather than granted directly to the service user account. This is
consistent with the way access control lists are best managed in Windows in general. For
example, the ability to use the SQL Server Instant File Initialization feature requires that the
Perform Volume Maintenance Tasks user right be set in the Group Policy Administration tool.
This privilege should be granted to
SQLServer2005MSSQLUser$MachineName$MSSQLSERVER group for the default instance of
SQL Server on server "MachineName."
SQL Server service accounts should be changed only by using SQL Server Configuration
Manager, or by using the equivalent functionality in the WMI APIs. Using Configuration Manager
ensures that the new service account is placed in the appropriate Windows group, and is thus
granted exactly the correct privileges to run the service. In addition, using SQL Server
Configuration Manager also re-encrypts the service master key by using the new account. For
more information on the service master key, see Encryption later in this paper. Because
SQL Server service accounts also abide by Windows password expiration policies, it is necessary
to change the service account passwords at regular intervals. In SQL Server 2005, it is easier to
abide by password expiration policies because changing the password of the service account
does not require restarting SQL Server.
SQL Server 2005 requires that the service account have less privilege than in previous versions.
Specifically, the privilege Act As Part of the Operating System (SE_TCB_NAME) is not required
for the service account unless SQL Server 2005 is running on the Microsoft Windows 2000
SP4 operating system. After doing an upgrade in place, use the Group Policy
Administration tool to remove this privilege.
The SQL Server Agent service account requires sysadmin privilege in the SQL Server instance
that it is associated with. In SQL Server 2005, SQL Server Agent job steps can be configured to
use proxies that encapsulate alternate credentials. A CREDENTIAL is simply a database object
that is a symbolic name for a Windows user and password. A single CREDENTIAL can be used
with multiple SQL Server Agent proxies. To accommodate the principle of least privilege, do not
give excessive privileges to the SQL Server Agent service account. Instead, use a proxy that
corresponds to a CREDENTIAL that has just enough privilege to perform the required task. A
CREDENTIAL can also be used to reduce the privilege for a specific task if the SQL Server Agent
service account has been configured with more privileges than needed for the task. Proxies can
be used for:
• ActiveX scripting
• Operating system (CmdExec)
• Replication agents
• Analysis Services commands and queries
• SSIS package execution (including maintenance plans)
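The credential-and-proxy mechanism above can be sketched in T-SQL roughly as follows; the credential, proxy, login, and domain account names are hypothetical placeholders:

```sql
-- A CREDENTIAL is a symbolic name for a Windows user and password.
CREATE CREDENTIAL CmdExecCred
    WITH IDENTITY = 'MyDomain\AgentTaskUser',   -- hypothetical low-privilege account
         SECRET   = 'StrongP@ssw0rd!';
GO
USE msdb;
GO
-- Wrap the credential in an Agent proxy and allow it for CmdExec job steps.
EXEC dbo.sp_add_proxy
    @proxy_name      = 'CmdExecProxy',
    @credential_name = 'CmdExecCred';
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name     = 'CmdExecProxy',
    @subsystem_name = 'CmdExec';
-- Let a specific (hypothetical) login run job steps under this proxy.
EXEC dbo.sp_grant_login_to_proxy
    @login_name = 'AppOperator',
    @proxy_name = 'CmdExecProxy';
```

Job steps that reference the proxy then run under the credential's Windows account rather than under the SQL Server Agent service account.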
Best practices for SQL Server service accounts
• Use a specific user account or domain account rather than a shared account for SQL Server
services.
• Use a separate account for each service.
• Do not give any special privileges to the SQL Server service account; they will be assigned
by group membership.
• Manage privileges through the SQL Server supplied group account rather than through
individual service user accounts.
• Always use SQL Server Configuration Manager to change service accounts.

• Change the service account password at regular intervals.
• Use CREDENTIALs to execute job steps that require specific privileges rather than adjusting
the privilege to the SQL Server Agent service account.
• If an agent user needs to execute a job that requires different Windows credentials, assign
them a proxy account that has just enough permissions to get the task done.

Authentication Mode
SQL Server has two authentication modes: Windows Authentication and Mixed Mode
Authentication. In Windows Authentication mode, specific Windows user and group accounts are
trusted to log in to SQL Server. Windows credentials are used in the process; that is, either NTLM
or Kerberos credentials. Windows accounts use a series of encrypted messages to authenticate
to SQL Server; no passwords are passed across the network during the authentication process.
In Mixed Mode Authentication, both Windows accounts and SQL Server-specific accounts (known
as SQL logins) are permitted. When SQL logins are used, SQL login passwords are passed
across the network for authentication. This makes SQL logins less secure than Windows logins.
It is a best practice to use only Windows logins whenever possible. Using Windows logins with
SQL Server achieves single sign-on and simplifies login administration. Password management
uses the ordinary Windows password policies and password change APIs. Users, groups, and
passwords are managed by system administrators; SQL Server database administrators are only
concerned with which users and groups are allowed access to SQL Server and with authorization
management.
SQL logins should be confined to legacy applications, mostly in cases where the application is
purchased from a third-party vendor and the authentication cannot be changed. Another use for
SQL logins is with cross-platform client-server applications in which the non-Windows clients do
not possess Windows logins. Although using SQL logins is discouraged, there are security
improvements for SQL logins in SQL Server 2005. These improvements include the ability to
have SQL logins use the password policy of the underlying operating system and better
encryption when SQL passwords are passed over the network. We'll discuss each of these later
in the paper.
SQL Server 2005 uses standard DDL statements to create both Windows logins and SQL logins.
Using the CREATE LOGIN statement is preferred; the sp_addlogin and sp_grantlogin system
stored procedures are supported for backward compatibility only. SQL Server 2005 also provides
the ability to disable a login or change a login name by using the ALTER LOGIN DDL statement.
For example, if you install SQL Server 2005 in Windows Authentication mode rather than Mixed
Mode, the sa login is disabled. Use ALTER LOGIN rather than the procedures sp_denylogin or
sp_revokelogin, which are supported for backward compatibility only.
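As a brief illustration of the preferred login DDL (the login names here are hypothetical):

```sql
-- Windows group login: preferred over individual SQL logins.
CREATE LOGIN [MyDomain\AppUsers] FROM WINDOWS;

-- SQL login that abides by the Windows password policy.
CREATE LOGIN AppSqlLogin WITH PASSWORD = 'Str0ng!Passw0rd', CHECK_POLICY = ON;

-- Disable or rename with ALTER LOGIN instead of the legacy procedures.
ALTER LOGIN AppSqlLogin DISABLE;
```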
If you install SQL Server in Windows Authentication mode, the sa login account is disabled and a
random password is generated for it. If you later need to change to Mixed Mode Authentication
and re-enable the sa login account, you will not know the password. Change the sa password to
a known value after installation if you think you might ever need to use it.
Best practices for authentication mode
• Always use Windows Authentication mode if possible.
• Use Mixed Mode Authentication only for legacy applications and non-Windows users.
• Use the standard login DDL statements instead of the compatibility system procedures.
• Change the sa account password to a known value if you might ever need to use it. Always
use a strong password for the sa account and change the sa account password periodically.
• Do not manage SQL Server by using the sa login account; assign sysadmin privilege to a
known user or group.

• Rename the sa account to a different account name to prevent attacks on the sa account by
name.
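A hedged sketch of the sa recommendations above (the replacement name is an arbitrary example):

```sql
-- Set the sa password to a known strong value...
ALTER LOGIN sa WITH PASSWORD = 'N3w$trong-Passw0rd';
-- ...rename it to blunt attacks that target the well-known name...
ALTER LOGIN sa WITH NAME = [srv_admin_disabled];
-- ...and keep it disabled unless Mixed Mode is ever required.
ALTER LOGIN [srv_admin_disabled] DISABLE;
```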

Network Connectivity
A standard network protocol is required to connect to the SQL Server database. There are no
internal connections that bypass the network. SQL Server 2005 introduces an abstraction for
managing any connectivity channel—entry points into a SQL Server instance are all represented
as endpoints. Endpoints exist for the following network client connectivity protocols:
• Shared Memory
• Named Pipes
• TCP/IP
• VIA
• Dedicated administrator connection
In addition, endpoints may be defined to permit access to the SQL Server instance for:
• Service Broker
• HTTP Web Services
• Database mirroring
Following is an example of creating an endpoint for Service Broker.

CREATE ENDPOINT BrokerEndpoint_SQLDEV01
    AS TCP ( LISTENER_PORT = 4022 )
    FOR SERVICE_BROKER ( AUTHENTICATION = WINDOWS );

SQL Server 2005 discontinues support for some network protocols that were available with earlier
versions of SQL Server, including IPX/SPX, AppleTalk, and Banyan VINES.
In keeping with the general policy of "off by default, enable only when needed," no Service
Broker, HTTP, or database mirroring endpoints are created when SQL Server 2005 is installed,
and the VIA endpoint is disabled by default. In addition, in SQL Server 2005 Express Edition,
SQL Server 2005 Developer Edition, and SQL Server 2005 Evaluation Edition, the Named Pipes
and TCP/IP protocols are disabled by default. Only Shared Memory is available by default in
those editions. The dedicated administrator connection (DAC), new with SQL Server 2005, is
available only locally by default, although it can be made available remotely. Note that the DAC is
not available in SQL Server Express Edition by default and requires that the server be run with a
special trace flag to enable it. Access to database endpoints requires the login principal to have
CONNECT permission. By default, no login account has CONNECT permission to Service Broker
or HTTP Web Services endpoints. This restricts access paths and blocks some known attack
vectors. It is a best practice to enable only those protocols that are needed. For example, if
TCP/IP is sufficient, there is no need to enable the Named Pipes protocol.
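Where remote DAC access is genuinely needed, it can be enabled with an instance option; this is a sketch, and the option should remain off otherwise:

```sql
-- Make the dedicated administrator connection reachable remotely.
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```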
Although endpoint administration can be accomplished via DDL, the administration process is
made easier and policy can be made more uniform by using the SQL Server Surface Area
Configuration tool and SQL Server Configuration Manager. SQL Server Surface Area
Configuration provides a simplified user interface for enabling or disabling client protocols for a
SQL Server instance, as shown in Figure 1 and Figure 2. Configuration is described in
Knowledge Base article KB914277, How to configure SQL Server 2005 to allow remote

connections, as well as in SQL Server 2005 Books Online. A screenshot showing the remote
connections configuration dialog box is shown in Figure 1.

Figure 1 Configuring remote connections

In the Surface Area Configuration for Services and Connections dialog box, you can see if any
HTTP or Service Broker endpoints are defined for the instance. New endpoints must be defined
by using DDL statements; SQL Server Surface Area Configuration cannot be used to define
these. You can use the Surface Area Configuration for Features tool to enable remote access to
the dedicated administrator connection.
SQL Server Configuration Manager provides more granular configuration of server protocols.
With Configuration Manager, you can:
• Choose a certificate for SSL encryption.
• Allow only encrypted connections from clients.
• Hide an instance of SQL Server from the server enumeration APIs.
• Enable and disable TCP/IP, Shared Memory, Named Pipes, and VIA protocols.
• Configure the name of the pipe each instance of SQL Server will use.
• Configure a TCP/IP port number that each instance listens on for TCP/IP connections.
• Choose whether to use TCP/IP dynamic port assignment for named instances.
The dialog for configuring TCP/IP address properties such as port numbers and dynamic port
assignment is shown in Figure 2.


Figure 2 TCP/IP Addresses configuration page in SQL Server Configuration Manager

SQL Server 2005 can use an encrypted channel for two reasons: to encrypt credentials for SQL
logins, and to provide end-to-end encryption of entire sessions. Using encrypted sessions
requires a client API that supports them. The OLE DB, ODBC, and ADO.NET clients all
support encrypted sessions; currently the Microsoft JDBC client does not. SSL is also used to
encrypt credentials during the login process for SQL logins when a password is
passed across the network. If an SSL certificate is installed in a SQL Server instance, that
certificate is used for credential encryption. If an SSL certificate is not installed, SQL Server 2005
can generate a self-signed certificate and use this certificate instead. Using the self-signed
certificate prevents passive man-in-the-middle attacks, in which the man-in-the-middle intercepts
network traffic, but does not provide mutual authentication. Using an SSL certificate with a trusted
root certificate authority prevents active man-in-the-middle attacks and provides mutual
authentication.
In SQL Server 2005, you can GRANT, REVOKE, or DENY permission to CONNECT to a specific
endpoint on a per-login basis. By default, all logins are GRANTed permission on the Shared
Memory, Named Pipes, TCP/IP, and VIA endpoints. You must specifically GRANT users
CONNECT permission to other endpoints; no users are GRANTed this privilege by default. An
example of granting this permission is:

GRANT CONNECT ON MyHTTPEndpoint TO MyDomain\Accounting

Best practices for network connectivity
• Limit the network protocols supported.
• Do not enable network protocols unless they are needed.

• Do not expose a server that is running SQL Server to the public Internet.
• Configure named instances of SQL Server to use specific port assignments for TCP/IP rather
than dynamic ports.
• If you must support SQL logins, install an SSL certificate from a trusted certificate authority
rather than using SQL Server 2005 self-signed certificates.
• Use "allow only encrypted connections" only if needed for end-to-end encryption of sensitive
sessions.
• Grant CONNECT permission only on endpoints to logins that need to use them. Explicitly
deny CONNECT permission to endpoints that are not needed by users or groups.

Lockdown of System Stored Procedures
SQL Server uses system stored procedures to accomplish some administrative tasks. These
procedures almost always begin with the prefix xp_ or sp_. Even with the introduction of standard
DDL for some tasks (for example, creating logins and users), system procedures remain the only
way to accomplish tasks such as sending mail or invoking COM components. System extended
stored procedures in particular are used to access resources outside the SQL Server instance.
Most system stored procedures contain the relevant security checks as part of the procedure and
also perform impersonation so that they run as the Windows login that invoked the procedure. An
example of this is sp_reserve_http_namespace, which impersonates the current login and then
attempts to reserve part of the HTTP namespace (HTTP.SYS) by using a low-level operating
system function.
Because some system procedures interact with the operating system or execute code outside of
the normal SQL Server permissions, they can constitute a security risk. System stored
procedures such as xp_cmdshell or sp_send_dbmail are off by default and should remain
disabled unless there is a reason to use them. In SQL Server 2005, you no longer need to use
stored procedures that access the underlying operating system or network outside of the
SQL Server permission space. SQLCLR procedures executing in EXTERNAL_ACCESS mode
are subject to SQL Server permissions, and SQLCLR procedures executing in UNSAFE mode
are subject to some, but not all, security checks. For example, to catalog a SQLCLR assembly
categorized as EXTERNAL_ACCESS or UNSAFE, either the database must be marked as
TRUSTWORTHY (see Database Ownership and Trust) or the assembly must be signed with a
certificate or asymmetric key that is cataloged to the master database. SQLCLR procedures
should replace user-written extended stored procedures in the future.
Some categories of system stored procedures can be managed by using SQL Server Surface
Area Configuration. These include:
• xp_cmdshell - executes a command in the underlying operating system
• Database Mail procedures
• SQL Mail procedures
• COM component procedures (e.g. sp_OACreate)
Enable these procedures only if necessary.
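Assuming these features are unused, the corresponding sp_configure options can be turned off directly; the options shown are the groups listed above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Disable the optional procedure groups unless they are needed.
EXEC sp_configure 'xp_cmdshell', 0;
EXEC sp_configure 'Database Mail XPs', 0;
EXEC sp_configure 'SQL Mail XPs', 0;
EXEC sp_configure 'Ole Automation Procedures', 0;
RECONFIGURE;
```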
Some system stored procedures, such as procedures that use SQLDMO and SQLSMO libraries,
cannot be configured by using SQL Server Surface Area Configuration. They must be configured
by using sp_configure or SSMS directly. SSMS or sp_configure can also be used to set most of
the configuration feature settings that are set by using SQL Server Surface Area Configuration.
The system stored procedures should not be dropped from the database; dropping these can
cause problems when applying service packs. Removing the system stored procedures results in
an unsupported configuration. It is usually unnecessary to completely DENY all users access to

the system stored procedures, as these stored procedures have the appropriate permission
checks internal to the procedure as well as external.
Best practices for system stored procedures
• Disable xp_cmdshell unless it is absolutely needed.
• Disable COM components once all COM components have been converted to SQLCLR.
• Disable both mail procedures (Database Mail and SQL Mail) unless you need to send mail
from SQL Server. Prefer Database Mail as soon as you can convert to it.
• Use SQL Server Surface Area Configuration to enforce a standard policy for extended
procedure usage.
• Document each exception to the standard policy.
• Do not remove the system stored procedures by dropping them.
• Do not DENY all users/administrators access to the extended procedures.

Password Policy
Windows logins abide by the login policies of the underlying operating system. These policies can
be set using the Domain Security Policy or Local Security Policy administrator Control Panel
applets. Login policies fall into two categories: Password policies and Account Lockout policies.
Password policies include:
• Enforce Password History
• Minimum and Maximum Password Age
• Minimum Password Length
• Password Must Meet Complexity Requirements
• Passwords are Stored Using Reversible Encryption (Note: this setting does not apply to SQL
Server)
Account Lockout policies include:
• Account Lockout Threshold (Number of invalid logins before lockout)
• Account Lockout Duration (Amount of time locked out)
• Reset Lockout Counter After n Minutes
In SQL Server 2005, SQL logins can also abide by the login policies of the underlying operating
system if the operating system supports it. The operating system must support the system call
NetValidatePasswordPolicy; currently, only Windows Server 2003 and later versions support this.
If you use SQL logins, run SQL Server 2005 on a Windows Server 2003 or later operating
system. CREATE LOGIN parameters determine whether the login abides by the operating system
policies. These parameters are:
• CHECK_POLICY
• CHECK_EXPIRATION
• MUST_CHANGE
CHECK_POLICY specifies that the SQL login must abide by the Windows login policies and
Account Lockout policies, with the exception of password expiration. This is because, if SQL
logins must go by the Windows password expiration policy, underlying applications must be
outfitted with a mechanism for password changing. Most applications currently do not provide a
way to change SQL login passwords. In SQL Server 2005, both SSMS and SQLCMD provide a
way to change SQL Server passwords for SQL logins. Consider outfitting your applications with a
password-changing mechanism as soon as possible. Having built-in password changing also
allows logins to be created with the MUST_CHANGE parameter; using this parameter requires
the user to change the password at the time of the first login. Administrators should be aware of

the fact that password length and complexity policies, but not expiration policies, apply to
passwords used with encryption keys as well as to passwords used with SQL logins. For a
description of encryption keys, see Encryption.
When SQL logins are used on pre-Windows 2003 operating systems, there is a series of hard-
coded password policies in lieu of the domain or operating system policies if CHECK_POLICY =
ON. These policies are enumerated in SQL Server Books Online.
Best practices for password policy
• Mandate a strong password policy, including expiration and a complexity policy for your
organization.
• If you must use SQL logins, ensure that SQL Server 2005 runs on the Windows Server 2003
operating system and use password policies.
• Outfit your applications with a mechanism to change SQL login passwords.
• Set MUST_CHANGE for new logins.
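Combining these parameters, a new SQL login can be created so that it follows the Windows policy and must change its password at first use (the login name and temporary password are placeholders):

```sql
CREATE LOGIN ReportUser
    WITH PASSWORD = 'Temp0rary!Pw' MUST_CHANGE,
         CHECK_POLICY     = ON,
         CHECK_EXPIRATION = ON;  -- MUST_CHANGE requires both checks ON
```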

Administrator Privileges
SQL Server 2005 makes all permissions grantable and also makes grantable permissions more
granular than in previous versions. Privileges with elevated permissions now include:
• Members of the sysadmin server role.
• The sa built-in login, if it is enabled.
• Any login with CONTROL SERVER permission.
CONTROL SERVER permission is new in SQL Server 2005. Change your auditing procedures to
include any login with CONTROL SERVER permission.
SQL Server automatically grants the server's Administrators group (BUILTIN\administrators) the
sysadmin server role. When running SQL Server 2005 under Microsoft Windows Vista™, the
operating system does not recognize membership in the BUILTIN\Administrators group unless
the user has elevated themselves to a full administrator. In SP2, you can use SQL Server Surface
Area Configuration to enable a principal to act as administrator by selecting Add New
Administrator from the main window as shown in Figure 3.


Figure 3 Adding a new administrator in SP2 SQL Server Surface Area Configuration

Clicking on this link opens the SQL Server 2005 User Provisioning Tool for Vista as shown in
Figure 4. This tool can also be automatically invoked as the last step of an SQL Server 2005 SP2
installation.

Figure 4 The SQL Server 2005 User Provisioning Tool for Vista

When running SQL Server Express SP2 under the Vista operating system, Setup incorporates
the specification of a specific principal to act as administrator. SQL Server Express SP2 Setup
also allows command-line options to turn user instances on or off (ENABLERANU) and to add the
current Setup user to the SQL Server Administrator role (ADDUSERASADMIN). For more
detailed information, see Configuration Options (SQL Server Express) in SQL Server 2005 SP2
Books Online. For additional security-related considerations when running SQL Server 2005 with
the Windows Vista operating system, see the SQL Server 2005 SP2 Readme file. In particular,
see section 5.5.2 "Issues Caused by User Account Control in Windows Vista."
For accountability in the database, avoid relying on the Administrators group and add only
specific database administrators to the sysadmin role. Another option is to have a specific
DatabaseAdministrators group at the operating system level. Minimizing the number of
administrators who have sysadmin or CONTROL SERVER privilege also makes it easier to
resolve problems; fewer logins with administrator privilege means fewer people to check with if
things go wrong. The permission VIEW SERVER STATE is useful for allowing administrators and
troubleshooters to view server information (dynamic management views) without granting full
sysadmin or CONTROL SERVER permission.
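For example, a troubleshooting group can be given read access to the dynamic management views without any elevated rights (the group name is hypothetical):

```sql
GRANT VIEW SERVER STATE TO [MyDomain\DbTroubleshooters];
```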
Best practices for administrator privileges
• Use administrator privileges only when needed.
• Minimize the number of administrators.
• Provision admin principals explicitly.
• Have multiple distinct administrators if more than one is needed.
• Avoid dependency on the builtin\administrators Windows group.

Database Ownership and Trust
A SQL Server instance can contain multiple user databases. Each user database has a specific
owner; the owner defaults to the database creator. By definition, members of the sysadmin
server role (including system administrators if they have access to SQL Server through their
default group account) are database owners (DBOs) in every user database. In addition, there is
a database role, db_owner, in every user database. Members of the db_owner role have
approximately the same privileges as the dbo user.
SQL Server can be thought of as running in two distinct modes, which can be referred to as IT
department mode and ISV (independent software vendor) mode. These are not database settings
but simply different ways to manage SQL Server. In an IT department, the sysadmin of the
instance manages all user databases. In an ISV environment (say, a Web-hosting service), each
customer is permitted to manage their own database and is restricted from accessing system
databases or other user databases. For example, the databases of two competing companies
could be hosted by the same ISV and exist in the same SQL Server
instance. Dangerous code could be added to a user database when attached to its original
instance, and the code would be enabled on the ISV instance when deployed. This situation
makes controlling cross-database access crucial.
If each database is owned and managed by the same general entity, it is still not a good practice
to establish a "trust relationship" with a database unless an application-specific feature, such as
cross-database Service Broker communication, is required. A trust relationship between
databases can be established by allowing cross-database ownership chaining or by marking a
database as trusted by the instance by using the TRUSTWORTHY property. An example of
setting the TRUSTWORTHY property follows:

ALTER DATABASE pubs SET TRUSTWORTHY ON

Best practices for database ownership and trust
• Have distinct owners for databases; not all databases should be owned by sa.
• Minimize the number of owners for each database.
• Confer trust selectively.
• Leave the Cross-Database Ownership Chaining setting off unless multiple databases are
deployed as a single unit.
• Migrate usage to selective trust instead of using the TRUSTWORTHY property.
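As a sketch of these settings (the database name is a placeholder):

```sql
-- Keep cross-database ownership chaining off instance-wide.
EXEC sp_configure 'cross db ownership chaining', 0;
RECONFIGURE;
-- Per-database chaining should also stay off unless a set of
-- databases is deployed as a single unit.
ALTER DATABASE AppDb SET DB_CHAINING OFF;
```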

Schemas
SQL Server 2005 introduces schemas to the database. A schema is simply a named container for
database objects. Each schema is a scope that fits into the hierarchy between database level and
object level, and each schema has a specific owner. The owner of a schema can be a user, a
database role, or an application role. The schema name takes the place of the owner name in the
SQL Server multi-part object naming scheme. In SQL Server 2000 and previous versions, a table
named Employee that was part of a database named Payroll and was owned by a user name
Bob would be payroll.bob.employee. In SQL Server 2005, the table would have to be part of a
schema. If payroll_app is the name of the SQL Server 2005 schema, the table name in
SQL Server 2005 is payroll.payroll_app.employee.
Schemas solve an administration problem that occurs when each database object is named after
the user who creates it. In SQL Server versions prior to 2005, if a user named Bob (who is not
dbo) creates a series of tables, the tables would be named after Bob. If Bob leaves the company
or changes job assignments, these tables would have to be manually transferred to another user.
If this transfer were not performed, a security problem could ensue. Because of this, prior to
SQL Server 2005, DBAs were unlikely to allow individual users to create database objects such
as tables. Each table would be created by someone acting as the special dbo user and would
have a user name of dbo. Because, in SQL Server 2005, schemas can be owned by roles,
special roles can be created to own schemas if needed—every database object need not be
owned by dbo. Not having every object owned by dbo makes for more granular object
management and makes it possible for users (or applications) that need to dynamically create
tables to do so without dbo permission.
Having schemas that are role-based does not mean that it’s a good practice to have every user
be a schema owner. Only users who need to create database objects should be permitted to do
so. The ability to create objects does not imply schema ownership; GRANTing Bob ALTER
SCHEMA permission in the payroll_app schema can be accomplished without making Bob a
schema owner. In addition, granting CREATE TABLE to a user does not allow that user to create
tables; the user must also have ALTER SCHEMA permission on some schema in order to have a
schema in which to create the table. Objects created in a schema are owned by the schema
owner by default, not by the creator of the object. This makes it possible for a user to create
tables in a known schema without the administrative problems that ensue when that user leaves
the company or switches job assignments.
Each user has a default schema. If an object is created or referenced in a SQL statement by
using a one-part name, SQL Server first looks in the user's default schema. If the object isn't
found there, SQL Server looks in the dbo schema. The user's default schema is assigned by
using the CREATE USER or ALTER USER DDL statements. If no default schema is specified,
the default is dbo. Using named schemas for like groups of database objects and assigning each
user's default schema to dbo is a way to mandate using two-part object names in SQL
statements. This is because objects that are not in the dbo schema will not be found when a one-
part object name is specified. Migrating groups of user objects out of the dbo schema is also a
good way to allow users to create and manage objects if needed (for example, to install an
application package) without making the installing user dbo.
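
The default schema is assigned with DDL such as the following (user and login names are illustrative):

```sql
-- Assign a default schema at creation time, or change it later
CREATE USER Alice FOR LOGIN Alice WITH DEFAULT_SCHEMA = dbo;
ALTER USER Alice WITH DEFAULT_SCHEMA = payroll_app;
```
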
Best practices for using schemas
• Group like objects together into the same schema.
• Manage database object security by using ownership and permissions at the schema level.
• Have distinct owners for schemas.
• Not all schemas should be owned by dbo.
• Minimize the number of owners for each schema.

Authorization
Authorization is the process of granting permissions on securables to users. At an operating
system level, securables might be files, directories, registry keys, or shared printers. In
SQL Server, securables are database objects. SQL Server principals include both instance-level
principals, such as Windows logins, Windows group logins, SQL Server logins, and server roles
and database-level principals, such as users, database roles, and application roles. Except for a
few objects that are instance-scoped, most database objects, such as tables, views, and
procedures are schema-scoped. This means that authorization is usually granted to database-
level principals.
In SQL Server, authorization is accomplished via Data Access Language (DAL) rather than DDL
or DML. In addition to the two DAL verbs, GRANT and REVOKE, mandated by the ISO-ANSI
standard, SQL Server also contains a DENY DAL verb. DENY differs from REVOKE when a user
is a member of more than one database principal. If a user Fred is a member of three database
roles A, B, and C and roles A and B are GRANTed permission to a securable, if the permission is
REVOKEd from role C, Fred still can access the securable. If the securable is DENYed to role C,
Fred cannot access the securable. This makes managing SQL Server similar to managing other
parts of the Windows family of operating systems.
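
The Fred example can be sketched as follows (the table name is illustrative):

```sql
-- Fred is a member of database roles A, B, and C
GRANT SELECT ON dbo.payroll_data TO A;
GRANT SELECT ON dbo.payroll_data TO B;
REVOKE SELECT ON dbo.payroll_data FROM C;  -- Fred can still read via A and B
DENY SELECT ON dbo.payroll_data TO C;      -- DENY wins: Fred is now blocked
```
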
SQL Server 2005 makes each securable available by using DAL statements and makes
permissions more granular than in previous versions. For example, in SQL Server 2000 and
earlier versions, certain functions were available only if a login was part of the sysadmin role.
Now sysadmin role permissions are defined in terms of GRANTs. Equivalent access to
securables can be achieved by GRANTing a login the CONTROL SERVER permission.
An example of better granularity is the ability to use SQL Server Profiler to trace events in a
particular database. In SQL Server 2000, this ability was limited to the special dbo user. The new
granular permissions are also arranged in a hierarchy; some permissions imply other
permissions. For example, CONTROL permission on a database object type implies ALTER
permission on that object as well as all other object-level permissions. SQL Server 2005 also
introduces the concept of granting permissions on all of the objects in a schema. ALTER
permission on a SCHEMA includes the ability to CREATE, ALTER, or DROP objects in that
SCHEMA. The DAL statement that grants access to all securables in the payroll schema is:

GRANT SELECT ON schema::payroll TO fred

The advantage of granting permissions at the schema level is that the user automatically has
permissions on all new objects created in the schema; explicit grant after object creation is not
needed. For more information on the permission hierarchy, see the Permission Hierarchy section
of SQL Server Books Online.
A best practice for authorization is to encapsulate access through modules such as stored
procedures and user-defined functions. Hiding access behind procedural code means that users
can only access objects in the way the developer and database administrator (DBA) intend;
ad hoc changes to objects are disallowed. An example of this technique would be permitting
access to the employee pay rate table only through a stored procedure "UpdatePayRate." Users
that need to update pay rates would be granted EXECUTE access to the procedure, rather than
UPDATE access to the table itself. In SQL Server 2000 and earlier versions, encapsulating
access was dependent on a SQL Server feature known as ownership chains. In an ownership
chain, if the owner of stored procedure A and the owner of table B that the stored procedure
accesses are the same, no permission check is done. Although this works well most of the time,
even with multiple levels of stored procedures, ownership chains do not work when:
• The database objects are in two different databases (unless cross-database ownership
chaining is enabled).
• The procedure uses dynamic SQL.
• The procedure is a SQLCLR procedure.
SQL Server 2005 contains features to address these shortcomings, including signing of
procedural code, alternate execution context, and a TRUSTWORTHY database property if
ownership chaining is desirable because a single application encompasses multiple databases.
All of these features are discussed in this white paper.
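
The UpdatePayRate example above might be sketched like this. The procedure name comes from the text; the table, column, and role names are assumptions.

```sql
CREATE PROCEDURE dbo.UpdatePayRate
    @EmployeeID int,
    @NewRate money
AS
BEGIN
    UPDATE dbo.employee_pay_rate
    SET pay_rate = @NewRate
    WHERE employee_id = @EmployeeID;
END
GO
-- Users get EXECUTE on the procedure, never UPDATE on the table itself
GRANT EXECUTE ON dbo.UpdatePayRate TO payroll_clerks;
```
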
A login can be granted authorization to objects in a database only if a database user has
been mapped to the login. A special user, guest, exists to permit access to a database for logins
that are not mapped to a specific database user. Because any login can use the database
through the guest user, it is suggested that the guest user not be enabled.
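
Access through the guest user can be removed in a user database with:

```sql
-- Revoke the guest user's right to connect
-- (guest cannot be disabled in the master and tempdb system databases)
REVOKE CONNECT FROM guest;
```
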
SQL Server 2005 contains a new type of user, a user that is not mapped to a login. Users that are
not mapped to logins provide an alternative to using application roles. You can invoke selective
impersonation by using the EXECUTE AS statement (see Execution Context later in this paper)
and allow that user only the privileges needed to perform a specific task. Using users without
logins makes it easier to move the application to a new instance and limits the connectivity
requirements for the function. You create a user without a login using DDL:

CREATE USER mynewuser WITHOUT LOGIN

Best practices for database object authorization


• Encapsulate access within modules.
• Manage permissions via database roles or Windows groups.
• Use permission granularity to implement the principle of least privilege.
• Do not enable guest access.
• Use users without logins instead of application roles

Catalog Security
Information about databases, tables, and other database objects is kept in the system catalog.
The system metadata exists in tables in the master database and in user databases. These
metadata tables are exposed through metadata views. In SQL Server 2000, the system catalog
was publicly readable and the instance could be configured to make the system tables writeable
as well. In SQL Server 2005, the system metadata tables are read-only and their structure has
changed considerably. The only way that the system metadata tables are readable at all is in
single-user mode. Also in SQL Server 2005, the system metadata views were refactored and
made part of a special schema, the sys schema. So as not to break existing applications, a set of
compatibility metadata views are exposed. The compatibility views may be removed in a future
release of SQL Server.
SQL Server 2005 makes all metadata views secured by default. This includes:
• The new metadata views (for example, sys.tables, sys.procedures).
• The compatibility metadata views (for example, sysindexes, sysobjects).
• The INFORMATION_SCHEMA views (provided for SQL-92 compliance).
The information in the system metadata views is secured on a per-row basis. In order to be able
to see system metadata for an object, a user must have some permission on the object. For
example, to see metadata about the dbo.authors table, SELECT permission on the table is
sufficient. This prohibits browsing the system catalog by users who do not have appropriate
object access. Discovery is often the first level of prevention. There are two exceptions to this
rule: sys.databases and sys.schemas are public-readable. These metadata views may be
secured with the DENY verb if required.
Some applications present lists of database objects to the user through a graphic user interface.
It may be necessary to keep the user interface the same by permitting users to view information
about database objects while giving them no other explicit permission on the object. A special
permission, VIEW DEFINITION, exists for this purpose.
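
For example (the principal name is illustrative; dbo.authors and the payroll schema come from earlier examples):

```sql
-- Let report_users see metadata for one table without any data access
GRANT VIEW DEFINITION ON OBJECT::dbo.authors TO report_users;
-- Or grant metadata visibility for a whole schema
GRANT VIEW DEFINITION ON SCHEMA::payroll TO report_users;
```
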
Best practices for catalog security
• The catalog views are secure by default. No additional action is required to secure them.
• Grant VIEW DEFINITION selectively at the object, schema, database, or server level to grant
permission to view system metadata without conferring additional permissions.
• Review legacy applications that may depend on access to system metadata when migrating
the applications to SQL Server 2005.

Remote Data Source Execution


There are two ways that procedural code can be executed on a remote instance of SQL Server:
configuring a linked server definition with the remote SQL Server and configuring a remote server
definition for it. Remote servers are supported only for backward compatibility with earlier
versions of SQL Server and should be phased out in preference to linked servers. Linked servers
allow more granular security than remote servers. Ad hoc queries through linked servers
(OPENROWSET and OPENDATASOURCE) are disabled by default in a newly installed instance
of SQL Server 2005.
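
If ad hoc queries through OPENROWSET and OPENDATASOURCE are genuinely required, they can be enabled as an advanced instance option (leave them disabled otherwise):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```
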
When you use Windows to authenticate to SQL Server, you are using a Windows network
credential. Network credentials that use both NTLM and Kerberos security systems are valid for
one network "hop" by default. If you use network credentials to log on to SQL Server and attempt
to use the same credentials to connect via a linked server to a SQL Server instance on a different
computer, the credentials will not be valid. This is known as the "double hop problem" and also
occurs in environments that use Windows authentication to connect to a Web server and attempt
to use impersonation to connect to SQL Server. If you use Kerberos for authentication, you can
enable constrained delegation, that is, delegation of credentials constrained to a specific
application, to overcome the "double hop problem." Only Kerberos authentication supports
delegation of Windows credentials. For more information, see Constrained Delegation in
SQL Server Books Online.
Best practices for remote data source execution
• Phase out any remote server definitions.
• Replace remote servers with linked servers.
• Leave ad hoc queries through linked servers disabled unless they are absolutely needed.
• Use constrained delegation if pass-through authentication to a linked server is necessary.
Execution Context
SQL Server always executes SQL statements and procedural code as the currently logged on
user. This behavior is a SQL Server-specific behavior and is made possible, in the case of
procedural code, by the concept of ownership chains. That is, although a stored procedure
executes as the caller of the stored procedure rather than as the owner, if ownership chaining is
in place, permissions are not checked for object access and stored procedures can be used to
encapsulate tables, as mentioned previously in this paper. In SQL Server 2005, the creator of a
procedure can declaratively set the execution context of the procedure by using the EXECUTE
AS keyword in the CREATE PROCEDURE, FUNCTION, and TRIGGER statements. The
execution context choices are:
• EXECUTE AS CALLER - the caller of the procedure (no impersonation). This is the only pre-
SQL Server 2005 behavior.
• EXECUTE AS OWNER - the owner of the procedure.
• EXECUTE AS SELF - the creator of the procedure.
• EXECUTE AS 'username' - a specific user.
To maintain backward compatibility, EXECUTE AS CALLER is the default. The distinction
between AS OWNER and AS SELF is needed because the creator of the procedure may not be
the owner of the schema in which the procedure resides. In this case, AS SELF refers to the
procedure owner, AS OWNER refers to the object owner (the schema owner). In order to use
EXECUTE AS 'username', the procedure creator must have IMPERSONATE permission on the
user named in the execution context.
One reason to use an alternate execution context would be when a procedure executes without a
particular execution context. An example of this is a service broker queue activation procedure. In
addition, EXECUTE AS OWNER can be used to circumvent problems that are caused when
ownership chains are broken. For example, ownership chains in a procedure are always broken
when dynamic SQL statements (such as sp_executeSQL) are used.
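
A declarative execution context is set in the module's DDL; for example (procedure and table names are illustrative):

```sql
-- Runs with the permissions of the procedure's owner,
-- even across the dynamic SQL that would break an ownership chain
CREATE PROCEDURE dbo.report_totals
WITH EXECUTE AS OWNER
AS
BEGIN
    DECLARE @sql nvarchar(200);
    SET @sql = N'SELECT COUNT(*) FROM dbo.payroll_data';
    EXEC sp_executesql @sql;
END
```
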
Often what is needed is to grant the appropriate permissions to the procedural code itself, rather
than either changing the execution context or relying on the caller's permissions.
SQL Server 2005 offers a much more granular way of associating privileges with procedural code
—code signing. By using the ADD SIGNATURE DDL statement, you can sign the procedure with
a certificate or asymmetric key. A user can then be created for the certificate or asymmetric key
itself and permissions assigned to that user. When the procedure is executed, the code executes
with a combination of the caller's permissions and the key/certificate's permissions. An example
of this would be:

CREATE CERTIFICATE HRCertificate
WITH ENCRYPTION BY PASSWORD = 'HacdeNj162kqT'
CREATE USER HRCertificateUser
FOR CERTIFICATE HRCertificate
GRANT UPDATE ON pension_criteria TO HRCertificateUser
-- this gives the procedure update_pension_criteria
-- the additional privileges of HRCertificateUser
ADD SIGNATURE TO update_pension_criteria BY CERTIFICATE HRCertificate
-- back up the private key and remove it from the certificate,
-- so that the procedure cannot be re-signed without permission
BACKUP CERTIFICATE HRCertificate
TO FILE = 'c:\certs_backup\HRCertificate.cer'
WITH PRIVATE KEY (FILE = 'c:\certs_backup\HRCertificate.pvk',
ENCRYPTION BY PASSWORD = 'jBjebfP43j1!',
DECRYPTION BY PASSWORD = 'eWyveyYqW96A@!q')
ALTER CERTIFICATE HRCertificate REMOVE PRIVATE KEY

EXECUTE AS can also be used to set the execution context within an SQL batch. In this form,
the SQL batch contains an EXECUTE AS USER='someuser' or EXECUTE AS
LOGIN='somelogin' statement. This alternate execution context lasts until the REVERT statement
is encountered. EXECUTE AS and REVERT blocks can also be nested; REVERT reverts one
level of execution context. As with EXECUTE AS and procedural code, the user changing the
execution context must have IMPERSONATE permission on the user or login being
impersonated. EXECUTE AS in SQL batches should be used as a replacement for the SETUSER
statement, which is much less flexible.
If the execution context is set but should not be reverted without permission, you can use
EXECUTE AS ... WITH COOKIE or EXECUTE AS ... WITH NO REVERT. When WITH COOKIE
is specified, a binary cookie is returned to the caller of EXECUTE AS and the cookie must be
supplied in order to REVERT back to the original context.
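
In a batch, impersonation and reversion look like this (the user name is illustrative):

```sql
DECLARE @cookie varbinary(8000);
EXECUTE AS USER = 'limited_user' WITH COOKIE INTO @cookie;
SELECT USER_NAME();             -- returns limited_user
REVERT WITH COOKIE = @cookie;   -- cannot revert without supplying the cookie
SELECT USER_NAME();             -- back to the original user
```
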
When a procedure or batch uses an alternate execution context, the system functions normally
used for auditing, such as SUSER_NAME(), return the name of the impersonated user rather
than the name of the original user or original login. A new system function, ORIGINAL_LOGIN(),
can be used to obtain the original login, regardless of the number of levels of impersonation used.
Best practices for execution context
• Set execution context on modules explicitly rather than letting it default.
• Use EXECUTE AS instead of SETUSER.
• Use WITH NO REVERT/COOKIE instead of Application Roles.
• Consider using code signing of procedural code if a single granular additional privilege is
required for the procedure.

Encryption
SQL Server 2005 has built-in data encryption. The data encryption exists at a cell level and is
accomplished by means of built-in system procedures. Encrypting data requires secure
encryption keys and key management. A key management hierarchy is built into
SQL Server 2005. Each instance of SQL Server has a built-in service master key that is
generated at installation; specifically, the first time that SQL Server is started after installation.
The service master key is encrypted by using both the SQL Server Service account key and also
the machine key. Both encryptions use the DPAPI (Data Protection API). A database
administrator can define a database master key by using the following DDL.

CREATE MASTER KEY
WITH ENCRYPTION BY PASSWORD = '87(HyfdlkRM?_764#GRtj*(NS£”_+^$('

This key is actually encrypted and stored twice by default. Encryption that uses a password and
storage in the database is required. Encryption that uses the service master key and storage in
the master database is optional; it is useful to be able to automatically open the database master
key without specifying the password. The service master key and database master keys can be
backed up and restored separately from the rest of the database.
SQL Server 2005 can use DDL to define certificates, asymmetric keys, and symmetric keys on a
per-database basis. Certificates and asymmetric keys consist of a private key/public key pair. The
public key can be used to encrypt data that can be decrypted only by using the private key. Or,
for the sake of performance, the public key can be used to encrypt a hash that can be decrypted
only by using the private key. Encrypted checksum generation to ensure non-repudiation is
known as signing.
Alternatively, the private key can be used to encrypt data that can be decrypted by the receiver by
using the public key. A symmetric key consists of a single key that is used for encryption and
decryption. Symmetric keys are generally used for data encryption because they are orders of
magnitude faster than asymmetric keys for encryption and decryption. However, distributing
symmetric keys can be difficult because both parties must have the same copy of the key. In
addition, it is not possible with symmetric key encryption to determine which user encrypted the
data. Asymmetric keys can be used to encrypt and decrypt data but ordinarily they are used to
encrypt and decrypt symmetric keys; the symmetric keys are used for the data encryption. This is
the preferred way to encrypt data for the best security and performance. Symmetric keys can also
be protected by individual passwords.
SQL Server 2005 makes use of and also can generate X.509 certificates. A certificate is simply
an asymmetric key pair with additional metadata, including a subject (the person the key is
intended for), root certificate authority (who vouches for the certificate's authenticity), and
expiration date. SQL Server generates self-signed certificates (SQL Server itself is the root
certificate authority) with a default expiration date of one year. The expiration date and subject
can be specified in the DDL statement. SQL Server does not use certificate "negative lists" or the
expiration date with data encryption. A certificate can be backed up and restored separately from
the database; certificates, asymmetric keys, and symmetric keys are backed up with the
database. A variety of block cipher encryption algorithms are supported, including DES, Triple
DES, and AES (Rijndael) algorithms for symmetric keys and RSA for asymmetric keys. A variety
of key strengths are supported for each algorithm. Stream cipher algorithms, such as RC4, are
also supported but should NOT be used for data encryption. Some algorithms (such as AES) are
not supported by all operating systems that can host SQL Server. User-defined algorithms are not
supported. The key algorithm and key length choice should be predicated on the sensitivity of the
data.
SQL Server encrypts data on a cell level—data is specifically encrypted before it is stored into a
column value and each row can use a different encryption key for a specific column. To use data
encryption, a column must use the VARBINARY data type. The length of the column depends on
the encryption algorithm used and the length of the data to be encrypted (see Choosing an
Encryption Algorithm in SQL Server Books Online). The KEY_GUID of the key that is used for
encryption is stored with the column value. When the data is decrypted, this KEY_GUID is
checked against all open keys in the session. The data uses initialization vectors (also known as
salted hashes). Because of this, it is not possible to determine if two values are identical by
looking at the encrypted value. This means, for example, that I cannot determine all of the
patients who have a diagnosis of Flu if I know that Patient A has a diagnosis of Flu. Although this
means that data is more secure, it also means that you cannot use a column that is encrypted by
using the data encryption routines in indexes, because data values are not comparable.
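
A minimal sketch of cell-level encryption, following the preferred certificate-protects-symmetric-key pattern described above. All object names are illustrative, and the patients table is assumed to have a VARBINARY column diagnosis_enc.

```sql
-- Symmetric key for the data, protected by a certificate
CREATE CERTIFICATE DiagCert WITH SUBJECT = 'Diagnosis column key protection';
CREATE SYMMETRIC KEY DiagKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE DiagCert;

OPEN SYMMETRIC KEY DiagKey DECRYPTION BY CERTIFICATE DiagCert;
-- Encrypt into the VARBINARY column; the KEY_GUID is stored with the value
UPDATE dbo.patients
SET diagnosis_enc = EncryptByKey(Key_GUID('DiagKey'), diagnosis);
-- Decrypt; DecryptByKey matches the stored KEY_GUID against open keys
SELECT CONVERT(varchar(100), DecryptByKey(diagnosis_enc)) AS diagnosis
FROM dbo.patients;
CLOSE SYMMETRIC KEY DiagKey;
```
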
Data encryption is becoming more commonplace with some vendors and industries (for example,
the payment card industry). Use data encryption only when it is required or for very high-value
sensitive data. In some cases, encrypting the network channel or using SQL Server permissions
is a better choice because of the complexity involved in managing keys and invoking
encryption/decryption routines.
Because unencrypted data must be stored in memory buffers before being transmitted to clients,
it is impossible to keep data away from an administrator who has the ability to debug the process
or to patch the server. Memory dumps can also be a source of unintended data leakage. If
symmetric keys are protected by asymmetric keys and the asymmetric keys are encrypted by
using the database master key, a database administrator could impersonate a user of encrypted
data and access the data through the keys. If protection from the database administrator is
preferred, encryption keys must be secured by passwords, rather than by the database master
key. To guard against data loss, encryption keys that are secured by passwords must have an
associated disaster recovery policy (offsite storage, for example) in case of key loss. You can
also require users to specify the database master key by dropping encryption of the database
master key by the instance master key. Remember to back up the database in order to back up
the symmetric keys, because there are no specific DDL statements to back up symmetric and
asymmetric keys, whereas there are specific DDL statements to back up certificates, the database
master key, and the service master key.
Best practices for data encryption
• Encrypt high-value and sensitive data.
• Use symmetric keys to encrypt data, and asymmetric keys or certificates to protect the
symmetric keys.
• Password-protect keys and remove master key encryption for the most secure configuration.
• Always back up the service master key, database master keys, and certificates by using the
key-specific DDL statements.
• Always back up your database to back up your symmetric and asymmetric keys.

Auditing
SQL Server 2005 supports login auditing, trigger-based auditing, and event auditing by using a
built-in trace facility. Password policy compliance is automatically enforceable through policy in
SQL Server 2005 for both Windows logins and SQL logins. Login auditing is available by using an
instance-level configuration parameter. Auditing failed logins is the default, but you can specify to
audit all logins. Although auditing all logins increases overhead, you may be able to deduce
patterns of multiple failed logins followed by a successful login, and use this information to detect
a possible login security breach. Auditing is provided on a wide variety of events including Add
Database User, Add Login, DBCC events, Change Password, GDR events (Grant/Deny/Revoke
events), and Server Principal Impersonation events. SQL Server 2005 SP2 also supports login
triggers.
SQL Server 2005 introduces auditing based on DDL triggers and event notifications. You can use
DDL triggers not only to record the occurrence of DDL, but also to roll back DDL statements as
part of the trigger processing. Because a DDL trigger executes synchronously (the DDL does not
complete until the trigger is finished), DDL triggers can potentially slow down DDL, depending on
the content and volume of the code. Event notifications can be used to record DDL usage
information asynchronously. An event notification is a database object that uses Service Broker to
send messages to the destination (Service Broker-based) service of your choosing. DDL cannot
be rolled back by using event notifications.
Because the surface area of SQL Server 2005 is larger than previous versions, more auditing
events are available in SQL Server 2005 than in previous versions. To audit security events, use
event-based auditing, specifically the events in the security audit event category (listed in SQL
Server Books Online). Event-based auditing can be trace-based, or event notifications-based.
Trace-based event auditing is easier to configure, but may result in large event logs if many
events are traced. On the other hand, event notifications send queued messages to Service
Broker queues that are in-database objects. Trace-based event auditing cannot trace all events;
some events, such as SQL:StmtComplete events, are not available when using event
notifications.
There is a WMI provider for events that can be used in conjunction with SQL Server Agent alerts.
This mechanism provides immediate notification through the Alert system that a specific event
has occurred. To use the WMI provider, select a WMI-based alert and provide a WQL query that
produces the event that you want to cause the alert. WQL queries use the same syntax for
naming as do event notifications. An example of a WQL query that looks for database principal
impersonation changes would be:

SELECT * FROM AUDIT_DATABASE_PRINCIPAL_IMPERSONATION_EVENT

SQL Server can be configured to support auditing that is compliant with C2 certification under the
Trusted Database Interpretation (TDI) of the Trusted Computer System Evaluation Criteria
(TCSEC) of the United States National Security Agency. This is known as C2 auditing.
C2 auditing is configured on an instance level by using the C2 audit mode configuration option in
sp_configure.
When C2 auditing is enabled, data is saved in a log file in the Data subdirectory in the directory in
which SQL Server is installed. The initial log file size for C2 auditing is 200 megabytes. When this
file is full, another 200 megabytes is allocated. If the volume on which the log file is stored runs
out of space, SQL Server shuts down until sufficient space is available or until the system is
manually started without auditing. Ensure that there is sufficient space available before enabling
C2 auditing and put a procedure in place for archiving the log files.
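
C2 auditing is enabled as an advanced instance option; confirm sufficient disk space first:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;
```
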
SQL Server 2005 SP2 allows configuring an option that provides three elements required for
Common Criteria compliance. The Common Criteria represents the outcome of efforts to develop
criteria for evaluation of IT security that are widely useful within the international community. It
stems from a number of source criteria: the existing European, US, and Canadian criteria (ITSEC,
TCSEC, and CTCPEC respectively). The Common Criteria resolves the conceptual and technical
differences between the source criteria. The three Common Criteria elements that can be
configured by using an instance configuration option are:
• Residual Information Protection, which overwrites memory with a known bit pattern before it
is reallocated to a new resource.
• The ability to view login statistics.
• A column-level GRANT does not override table-level DENY.
You can configure an instance to provide these three elements for Common Criteria compliance
by setting the configuration option common criteria compliance enabled as shown in the
following code.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'common criteria compliance enabled', 1;
GO
RECONFIGURE;
GO
In addition to enabling the Common Criteria options in a SQL Server instance, you can use login
triggers in SQL Server 2005 SP2 to limit logins based upon time of day or based on an excessive
number of existing connections. The ability to limit logins based on these criteria is required for
Common Criteria compliance.
Best practices for auditing
• Auditing is scenario-specific. Balance the need for auditing with the overhead of generating
additional data.
• Audit successful logins in addition to unsuccessful logins if you store highly sensitive data.
• Audit DDL and specific server events by using trace events or event notifications.
• DML must be audited by using trace events.
• Use WMI to be alerted of emergency events.
• Enable C2 auditing or Common Criteria compliance only if required.

RECOVERY MODELS, BACKUPS and RESTORE


Overview
One of your last lines of defense for just about any system is to have a backup in place in case
there is a need to recover some or all of your data. This is also true for SQL Server.
In this tutorial we will discuss

• selecting the correct recovery models
• what backup options are available
• how to create backups using T-SQL commands and SQL Server Management Studio

If you are new to SQL Server you should review each of these topics, so you are aware of the
available options and what steps you will need to take in order to recover your data if ever there is
the need.
SQL Server Recovery Models
(SET RECOVERY)
Overview
One of the first things you need to do in order to create the correct backups is to set the
proper recovery model for each database. The recovery model basically tells SQL Server what
data to keep in the transaction log file and for how long. Based on the recovery model that is
selected, this will also determine what types of backups you can perform and also what types of
database restores can be performed.
Explanation
The three types of recovery models that you can choose from are:

• Full
• Simple
• Bulk-Logged

Each database can have only one recovery model, but each of your databases can use a
different recovery model, so depending on the processing and the backup needs you can select
the appropriate recovery model per database. The only exception to this is the TempDB
database which has to use the "Simple" recovery model.
Also, the database recovery model can be changed at any time, but this will impact your backup
chain, so it is a good practice to issue a full backup after you change your recovery model.
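
You can verify the current recovery model of each database with a query like:

```sql
-- sys.databases exposes the recovery model per database (SQL Server 2005 and later)
SELECT name, recovery_model_desc
FROM sys.databases;
```
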
The recovery model can be changed by either using T-SQL or SQL Server Management Studio.
Following are examples of how to do this.
Using T-SQL to change to the "Full" recovery for the AdventureWorks database.

ALTER DATABASE AdventureWorks SET RECOVERY FULL
GO
Using the SSMS to change the recovery model for the AdventureWorks database.

SQL Server Full Recovery Model


(SET RECOVERY FULL)
Overview
The "Full" recovery model tells SQL Server to keep all transaction data in the transaction log until
either a transaction log backup occurs or the transaction log is truncated. The way this works is
that all transactions issued against SQL Server are first entered into the transaction log and then
the data is written to the appropriate data file. This allows SQL Server to roll back each step of
the process in case there was an error or the transaction was cancelled for some reason.
Because all transactions are saved when the database is set to the "Full" recovery model, you
have the ability to do point in time recovery, which means you can recover to a point right before
a transaction occurred, such as an accidental deletion of all data from a table.
Explanation
The full recovery model is the most complete recovery model and allows you to recover all of your
data to any point in time, as long as all backup files are usable. With this model all operations are
fully logged, which means that you can recover your database to any point. In addition, if the
database is set to the full recovery model you also need to issue transaction log backups,
otherwise your database transaction log will continue to grow forever.
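To see this growth happening, DBCC SQLPERF(LOGSPACE) reports the size and percent used of each database's transaction log; under the full model the percentage keeps climbing until a log backup runs:

```sql
-- Report transaction log size and space used for all databases
DBCC SQLPERF(LOGSPACE);
```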
Here are some reasons why you may choose this recovery model:

• Data is critical and cannot be lost


• You always need the ability to do a point-in-time recovery.
• You are using database mirroring
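As a sketch of what point-in-time recovery looks like, the restore below replays the log only up to a chosen moment using STOPAT. The file names and timestamp here are hypothetical:

```sql
-- Restore the full backup, leaving the database able to accept more log
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorks.BAK'
WITH NORECOVERY;
GO
-- Replay the log, stopping just before the accidental delete happened
RESTORE LOG AdventureWorks
FROM DISK = 'C:\AdventureWorks.TRN'
WITH STOPAT = '2009-03-19 12:00:00', RECOVERY;
GO
```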

Types of backups you can run when the database is in the "Full" recovery model:

• Complete backups

• Differential backups
• File and/or Filegroup backups
• Partial backups
• Copy-Only backups
• Transaction log backups

How to set the full recovery model using T-SQL.

ALTER DATABASE dbName SET RECOVERY recoveryOption


GO
Example: change AdventureWorks database to "Full" recovery model

ALTER DATABASE AdventureWorks SET RECOVERY FULL


GO
How to set using SQL Server Management Studio

• Right click on database name and select Properties


• Go to the Options page
• Under Recovery model select "Full"
• Click "OK" to save

SQL Server Simple Recovery Model


(SET RECOVERY SIMPLE)
Overview
The "Simple" recovery model does what its name implies: it gives you a simple backup that can be
used to replace your entire database in the event of a failure or if you need to restore your
database to another server. With this recovery model you have the ability to do complete
backups (an entire copy) or differential backups (any changes since the last complete backup).
With this recovery model you are exposed to losing any work done since the last backup completed.
Explanation
The "Simple" recovery model is the most basic recovery model for SQL Server. Every
transaction is still written to the transaction log, but once the transaction is complete and the data
has been written to the data file, the space that was used in the transaction log file becomes
reusable by new transactions. Since this space is reused, it is not possible to do a point in
time recovery; the most recent restore point will be either the complete backup or the
latest differential backup that was completed. Also, since the space in the transaction log can be
reused, the transaction log will not grow forever as was mentioned in the "Full" recovery model.
Here are some reasons why you may choose this recovery model:

• Your data is not critical and can easily be recreated
• The database is only used for test or development
• Data is static and does not change
• Losing any or all transactions since the last backup is not a problem
• Data is derived and can easily be recreated

Types of backups you can run when the database is in the "Simple" recovery model:

• Complete backups
• Differential backups
• File and/or Filegroup backups
• Partial backups
• Copy-Only backups

How to set the simple recovery model using T-SQL.

ALTER DATABASE dbName SET RECOVERY recoveryOption


GO
Example: change AdventureWorks database to "Simple" recovery model

ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE


GO
How to set using SQL Server Management Studio

• Right click on database name and select Properties


• Go to the Options page
• Under Recovery model select "Simple"
• Click "OK" to save

SQL Server Bulk-Logged Recovery Model


(SET RECOVERY BULK_LOGGED)
Overview
The "Bulk-logged" recovery model largely does what its name implies. With this model certain
bulk operations, such as BULK INSERT, CREATE INDEX, and SELECT INTO, are not fully
logged in the transaction log and therefore do not take as much space there.
Explanation
The advantage of using the "Bulk-logged" recovery model is that your transaction logs will not get
that large if you are doing bulk operations and it still allows you to do point in time recovery as

long as your last transaction log backup does not include a bulk operation as mentioned above. If
no bulk operations are run this recovery model works the same as the Full recovery model. One
thing to note is that if you use this recovery model you also need to issue transaction log backups
otherwise your database transaction log will continue to grow.
Here are some reasons why you may choose this recovery model:

• Data is critical, but you do not want to log large bulk operations
• Bulk operations are done at different times versus normal processing.
• You still want to be able to recover to a point in time
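A common pattern, sketched below, is to switch to bulk-logged only for the duration of the bulk operation and take a log backup on either side so the window without point-in-time recovery stays small (file names are hypothetical):

```sql
BACKUP LOG AdventureWorks TO DISK = 'C:\AW_before_bulk.TRN'  -- close out the log chain first
GO
ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED
GO
-- ...run the bulk operation here, e.g. BULK INSERT or CREATE INDEX...
ALTER DATABASE AdventureWorks SET RECOVERY FULL
GO
BACKUP LOG AdventureWorks TO DISK = 'C:\AW_after_bulk.TRN'   -- this backup contains the bulk-logged extents
GO
```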

Types of backups you can run when the database is in the "Bulk-logged" recovery model:

• Complete backups
• Differential backups
• File and/or Filegroup backups
• Partial backups
• Copy-Only backups
• Transaction log backups

How to set the bulk-logged recovery model using T-SQL.

ALTER DATABASE dbName SET RECOVERY recoveryOption


GO
Example: change AdventureWorks database to "Bulk-logged" recovery model

ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED


GO
How to set using SQL Server Management Studio

• Right click on database name and select Properties


• Go to the Options page
• Under Recovery model select "Bulk-logged"
• Click "OK" to save

Types of SQL Server Backups


Overview
SQL Server offers many options for creating backups. In a previous topic, Recovery Models, we
discussed what types of backups can be performed based on the recovery model of the

database. In this section we will talk about each of these backup options and how to perform
these backups using SSMS and T-SQL.
Explanation
The different types of backups that you can create are as follows:

• Full backups
• Differential backups
• File backups
• File group backups
• Partial backups
• Copy-Only backups
• Mirror backups
• Transaction log backups

SQL Server Full Backups


Overview
The most common types of SQL Server backups are complete or full backups, also known as
database backups. These backups create a complete backup of your database as well as part of
the transaction log, so the database can be recovered. This allows for the simplest form of
database restoration, since all of the contents are contained in one backup.
Explanation
A full backup can be completed either using T-SQL or by using SSMS. The following examples
show you how to create a full backup.

Create a full backup of the AdventureWorks database to one disk file


T-SQL

BACKUP DATABASE AdventureWorks TO DISK = 'C:\AdventureWorks.BAK'


GO
SQL Server Management Studio

• Right click on the database name


• Select Tasks > Backup
• Select "Full" as the backup type
• Select "Disk" as the destination
• Click on "Add..." to add a backup file and type "C:\AdventureWorks.BAK" and click "OK"
• Click "OK" again to create the backup
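After creating a full backup it is worth confirming the file is readable. One way, sketched below, is to include checksums in the backup and then run RESTORE VERIFYONLY against the file:

```sql
-- Create the full backup with page checksums included
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH CHECKSUM
GO
-- Check that the backup set is complete and readable, without restoring it
RESTORE VERIFYONLY
FROM DISK = 'C:\AdventureWorks.BAK'
GO
```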


SQL Server Transaction Log Backups


Overview
If your database is set to the "Full" or "Bulk-logged" recovery model then you will be able to issue
"Transaction Log" backups. By having transaction log backups along with full backups you have
the ability to do a point in time restore, so if someone accidentally deletes all data in a database
you can recover the database to the point in time right before the delete occurred. The only
caveat to this is if your database is set to the "Bulk-logged" recovery model and a bulk operation
was issued, you will need to restore the entire transaction log.
Explanation
A transaction log backup allows you to backup the active part of the transaction log. So after you
issue a "Full" or "Differential" backup the transaction log backup will have any transactions that
were created after those other backups completed. After the transaction log backup is issued,
the space within the transaction log can be reused for other processes. If a transaction log
backup is not taken, the transaction log will continue to grow.

A transaction log backup can be completed either using T-SQL or by using SSMS. The following
examples show you how to create a transaction log backup.

Create a transaction log backup of the AdventureWorks database to one disk file
T-SQL

BACKUP LOG AdventureWorks TO DISK = 'C:\AdventureWorks.TRN'


GO
SQL Server Management Studio

• Right click on the database name


• Select Tasks > Backup
• Select "Transaction Log" as the backup type
• Select "Disk" as the destination
• Click on "Add..." to add a backup file and type "C:\AdventureWorks.TRN" and click "OK"
• Click "OK" again to create the backup
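A related technique worth knowing: if the database becomes damaged but the log file is still intact, you can usually capture the transactions since the last log backup with a tail-log backup before restoring. A sketch, using the NO_TRUNCATE option that allows the backup even when the data files are unavailable:

```sql
-- Back up the "tail" of the log so no committed work is lost
BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks_tail.TRN'
WITH NO_TRUNCATE
GO
```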


SQL Server Differential Backups


Overview
Another option to assist with your recovery is to create "Differential" backups. A "Differential"
backup is a backup of any extent that has changed since the last "Full" backup was created.
Explanation
The way differential backups work is that they will backup all extents that have changed since the
last full backup. An extent is made up of eight 8KB pages, so an extent is 64KB of data. Each
time any data has been changed a flag is turned on to let SQL Server know that if a "Differential"
backup is created it should include the data from this extent. When a "Full" backup is taken these
flags are turned off.
So if you do a full backup and then do a differential backup, the differential backup will contain
only the extents that have changed. If you wait some time and do another differential backup,
this new differential backup will contain all extents that have changed since the last full backup.
Each time you create a new differential backup it will contain every extent changed since the last

full backup. When you go to restore your database, to get to the most current time you only need
to restore the full backup and the most recent differential backup. All of the other differential
backups can be ignored.
If your database is in the Simple recovery model, you can still use full and differential backups.
This does not allow you to do point in time recovery, but it will allow you to restore your data to a
more current point in time than if you only had a full backup.
If your database is in the Full or Bulk-Logged recovery model you can also use differential
backups to eliminate the number of transaction logs that will need to be restored. Since the
differential will backup all extents since the last full backup, at restore time you can restore your
full backup, your most recent differential backup and then any transaction log backups that were
created after the most recent differential backup. This cuts down on the number of files that need
to be restored.
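Putting that together, a typical restore sequence with a differential looks like the sketch below: the full backup first, the most recent differential next, then any later transaction log backups (file names are hypothetical):

```sql
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorks.BAK' WITH NORECOVERY
GO
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorks.DIF' WITH NORECOVERY
GO
-- Apply any log backups taken after the differential, in order; RECOVERY on the last one
RESTORE LOG AdventureWorks
FROM DISK = 'C:\AdventureWorks.TRN' WITH RECOVERY
GO
```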

Create a differential backup of the AdventureWorks database to one disk file


T-SQL

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.DIF'
WITH DIFFERENTIAL
GO
SQL Server Management Studio

• Right click on the database name


• Select Tasks > Backup
• Select "Differential" as the backup type
• Select "Disk" as the destination
• Click on "Add..." to add a backup file and type "C:\AdventureWorks.DIF" and click "OK"
• Click "OK" again to create the backup


SQL Server File Backups


Overview
Another option for backing up your databases is to use "File" backups. This allows you to backup
each file independently instead of having to backup the entire database. This is only relevant
when you have created multiple data files for your database. One reason for this type of backup
is if you have very large files and need to back them up individually. If your database has only
one data file, this option is not relevant.
Explanation
As mentioned above, you can back up each data file individually. If you have a very large
database with large data files this option may be relevant.
A file backup can be completed either using T-SQL or by using SSMS. The following examples
show you how to create a file backup.

Create a file backup of the TestBackup database

For this example I created a new database called TestBackup that has two data files and one log
file. The two data files are called 'TestBackup' and 'TestBackup2'. The code below shows how to
backup each file separately.
T-SQL

BACKUP DATABASE TestBackup FILE = 'TestBackup'
TO DISK = 'C:\TestBackup_TestBackup.FIL'
GO
BACKUP DATABASE TestBackup FILE = 'TestBackup2'
TO DISK = 'C:\TestBackup_TestBackup2.FIL'
GO
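To find the logical file names to use with the FILE option, you can query sys.database_files from within the database (sp_helpfile gives similar output):

```sql
USE TestBackup
GO
-- "name" is the logical file name that BACKUP ... FILE = '...' expects
SELECT name, physical_name, type_desc
FROM sys.database_files
GO
```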
SQL Server Management Studio

• Right click on the database name


• Select Tasks > Backup
• Select either "Full" or "Differential" as the backup type
• Select "Files and filegroups"
• Select the appropriate file and click "OK"

• Select "Disk" as the destination


• Click on "Add..." to add a backup file and type "C:\TestBackup_TestBackup.FIL" and click
"OK"
• Click "OK" again to create the backup and repeat for other files


SQL Server Filegroup Backups


Overview
In addition to doing "File" backups you can also do "Filegroup" backups which allows you to
backup all files that are in a particular filegroup. By default each database has a PRIMARY
filegroup which is tied to the one data file that is created. You have an option of creating
additional filegroups and then placing new data files in any of the filegroups. In most cases you
will probably only have the PRIMARY filegroup, so this topic is not relevant.
Explanation
As mentioned above you can back up each filegroup individually. The one advantage of using
filegroup backups over file backups is that you can create a Read-Only filegroup which means the
data will not change. So instead of backing up the entire database all of the time you can just
backup the Read-Write filegroups.
A filegroup backup can be completed either using T-SQL or by using SSMS.

Create a filegroup backup of the TestBackup database
For this example I created a new database called TestBackup that has three data files and one
log file. Two data files are the PRIMARY filegroup and one file is in the ReadOnly filegroup. The
code below shows how to do a filegroup backup.
T-SQL

BACKUP DATABASE TestBackup FILEGROUP = 'ReadOnly'
TO DISK = 'C:\TestBackup_ReadOnly.FLG'
GO
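For reference, a database like the TestBackup example could be set up along these lines; the file paths are hypothetical, and the filegroup is marked read-only after its static data is loaded:

```sql
CREATE DATABASE TestBackup
ON PRIMARY
    (NAME = 'TestBackup',  FILENAME = 'C:\TestBackup.mdf'),
    (NAME = 'TestBackup2', FILENAME = 'C:\TestBackup2.ndf'),
FILEGROUP ReadOnly
    (NAME = 'TestBackup3', FILENAME = 'C:\TestBackup3.ndf')
LOG ON
    (NAME = 'TestBackup_log', FILENAME = 'C:\TestBackup_log.ldf')
GO
-- Make the filegroup read-only once the static data is in place
ALTER DATABASE TestBackup MODIFY FILEGROUP ReadOnly READ_ONLY
GO
```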
SQL Server Management Studio

• Right click on the database name


• Select Tasks > Backup
• Select either "Full" or "Differential" as the backup type
• Select "Files and filegroups"
• Select the appropriate filegroup and click "OK"

• Select "Disk" as the destination


• Click on "Add..." to add a backup file and type "C:\TestBackup_ReadOnly.FLG" and click
"OK"
• Click "OK" again to create the backup and repeat for other filegroups


SQL Server Partial Backups


Overview
A new option introduced with SQL Server 2005 is the "Partial" backup. This allows you
to backup the PRIMARY filegroup, all Read-Write filegroups and any optionally specified files.
This is a good option if you have Read-Only filegroups in the database and do not want to backup
the entire database all of the time.
Explanation
A Partial backup can be issued for either a Full or Differential backup. It cannot be used for
Transaction Log backups. If a filegroup is changed from Read-Only to Read-Write it will be
included in the next Partial backup, but if you change a filegroup from Read-Write to Read-Only
you should create a filegroup backup, since this filegroup will not be included in the next Partial
backup.
A partial backup can be completed only by using T-SQL. The following examples show you how
to create a partial backup.

Create a partial backup of the TestBackup database
For this example I created a new database called TestBackup that has three data files and one
log file. Two data files are the PRIMARY filegroup and one file is in the ReadOnly filegroup. The
code below shows how to do a partial backup.
T-SQL
Create a full partial backup

BACKUP DATABASE TestBackup READ_WRITE_FILEGROUPS
TO DISK = 'C:\TestBackup_Partial.BAK'
GO
Create a differential partial backup

BACKUP DATABASE TestBackup READ_WRITE_FILEGROUPS
TO DISK = 'C:\TestBackup_Partial.DIF'
WITH DIFFERENTIAL
GO

SQL Server Backup Commands


Overview:
There are primarily two commands used to create SQL Server backups:

• BACKUP DATABASE
• BACKUP LOG

These commands also have various options that you can use to create full, differential, file, and
transaction log backups, as well as other options to specify how the backup command
should function and any other data to store with the backups.

SQL Server BACKUP DATABASE command


(BACKUP DATABASE)
Overview
There are only two backup commands; the primary one is BACKUP DATABASE. It allows you
to do a complete backup of your database as well as differential, file, and other backups
depending on the options that you use.
Explanation
The BACKUP DATABASE command gives you many options for creating backups. Following are
different examples.
Create a full backup to disk
The command is BACKUP DATABASE databaseName. The "TO DISK" option specifies that the
backup should be written to disk and the location and filename to create the backup is specified.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
GO
Create a differential backup
This command adds the "WITH DIFFERENTIAL" option.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH DIFFERENTIAL
GO
Create a file level backup
This command uses the "FILE" option to specify a file backup. You need to specify the
logical filename within the database, which can be obtained by using the command sp_helpdb
'databaseName', specifying the name of your database.

BACKUP DATABASE TestBackup FILE = 'TestBackup'
TO DISK = 'C:\TestBackup_TestBackup.FIL'
GO
Create a filegroup backup
This command uses the "FILEGROUP" option to specify a filegroup backup. You need to
specify the filegroup name from the database, which can be obtained by using the command
sp_helpdb 'databaseName', specifying the name of your database.

BACKUP DATABASE TestBackup FILEGROUP = 'ReadOnly'
TO DISK = 'C:\TestBackup_ReadOnly.FLG'
GO
Create a full backup to multiple disk files
This command uses the "DISK" option multiple times to write the backup to three equally sized
smaller files instead of one large file.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks_1.BAK',
DISK = 'D:\AdventureWorks_2.BAK',
DISK = 'E:\AdventureWorks_3.BAK'
GO
Create a full backup with a password
This command creates a backup with a password that will need to be supplied when restoring the
database.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH PASSWORD = 'Q!W@E#R$'
GO
Create a full backup with progress stats
This command creates a full backup and also displays the progress of the backup. The default is
to show progress after every 10%.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH STATS
GO
Here is another option showing stats after every 1%.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH STATS = 1
GO
Create a backup and give it a description
This command uses the description option to give the backup a name. This can later be used
with some of the restore commands to see what is contained with the backup. The maximum
size is 255 characters.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH DESCRIPTION = 'Full backup for AdventureWorks'
GO
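One option pair worth noting here: by default (NOINIT) each backup is appended to the backup file, which is how one file can end up holding several backups, while WITH INIT overwrites the file. For example:

```sql
-- First backup overwrites the existing contents of the file
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH INIT
GO
-- Second backup is appended, so the file now holds two backup sets
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
WITH NOINIT
GO
```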
Create a mirrored backup
This option allows you to create multiple copies of the backups, preferably to different locations.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
MIRROR TO DISK = 'D:\AdventureWorks_mirror.BAK'
WITH FORMAT
GO
Specifying multiple options
This next example shows how you can use multiple options at the same time.

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
MIRROR TO DISK = 'D:\AdventureWorks_mirror.BAK'
WITH FORMAT, STATS, PASSWORD = 'Q!W@E#R$'
GO

SQL Server BACKUP LOG command


(BACKUP LOG)
Overview
There are only two backup commands: BACKUP DATABASE, which backs up the entire
database, and BACKUP LOG, which backs up the transaction log. The following shows
different options for doing transaction log backups.
Explanation
The BACKUP LOG command gives you many options for creating transaction log backups.
Following are different examples.
Create a simple transaction log backup to disk
The command is BACKUP LOG databaseName. The "TO DISK" option specifies that the backup
should be written to disk and the location and filename to create the backup is specified. The file
extension is "TRN". This helps me know it is a transaction log backup, but it could be any
extension you like. Also, the database has to be in the FULL or Bulk-Logged recovery model and
at least one Full backup has to have occurred.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
GO
Create a log backup with a password
This command creates a log backup with a password that will need to be supplied when restoring
the database.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
WITH PASSWORD = 'Q!W@E#R$'
GO
Create a log backup with progress stats
This command creates a log backup and also displays the progress of the backup. The default is
to show progress after every 10%.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
WITH STATS
GO
Here is another option showing stats after every 1%.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
WITH STATS = 1
GO
Create a backup and give it a description
This command uses the description option to give the backup a name. This can later be used
with some of the restore commands to see what is contained with the backup. The maximum
size is 255 characters.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
WITH DESCRIPTION = 'Log backup for AdventureWorks'
GO
Create a mirrored backup
This option allows you to create multiple copies of the backups, preferably to different locations.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
MIRROR TO DISK = 'D:\AdventureWorks_mirror.TRN'
WITH FORMAT
GO
Specifying multiple options
This example shows how you can use multiple options at the same time.

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
MIRROR TO DISK = 'D:\AdventureWorks_mirror.TRN'
WITH FORMAT, STATS, PASSWORD = 'Q!W@E#R$'
GO

How to create a SQL Server backup


Overview
Creating backups for SQL Server is very easy. There are a few things you need to consider:
How will you create the backups:

• T-SQL commands
• Using SQL Server Management Studio
• Creating maintenance plans
• Using third party backup tools

What options will you use

• Backup to disk or to tape


• Types of backups; full, differential, log, etc...

The next two topics cover the basics on how to create backups using either T-SQL or SQL Server
Management Studio.
Creating a backup using SQL Server Command Line (T-SQL)
Overview
Creating command line backups is very straightforward. There are basically two commands that
allow you to create backups, BACKUP DATABASE and BACKUP LOG.
Explanation
Here are some simple examples on how to create database and log backups using T-SQL. This
is the most basic syntax that is needed to create backups to disk.
Create a full backup

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
GO
Create a transaction log backup

BACKUP LOG AdventureWorks
TO DISK = 'C:\AdventureWorks.TRN'
GO
This is basically all you need to do to create the backups. There are other options that can be
used, but to create a valid and usable backup file this is all that needs to be done.
To read more about these options take a look at these topics:

• BACKUP DATABASE
• BACKUP LOG

Creating a backup using SQL Server Management Studio


Overview
Creating backups using SQL Server Management Studio is pretty simple as well, although
compared to the short T-SQL commands there is a lot of clicking that needs to occur in SSMS to
create a backup.
Explanation
The following screen shots show you how to create a full backup and a transaction log backup.

• Expand the "Databases" tree


• Right click on the database name you want to backup
• Select "Tasks" then "Back Up..." as shown below


• Specify the "Backup type"; Full, Differential or Transaction Log


• Click on "Add..." to add the location and the name of the backup file
• Click "OK" to close this screen


• And click "OK" again to create the backup


SQL Server Restore Options and Commands


(Introduction)
Overview
What good is a backup if you do not know how to restore it? In this tutorial we will look at
what restore options are available and which options are only accessible using T-SQL
commands.
As you will see there are many options that can be used, but just like the BACKUP commands
there are just a few parts of the RESTORE command that are needed to do a successful restore.

SQL Server Restore Commands


Overview
There are several RESTORE commands and options that can be used to restore and view the
contents of your backup files.
In this next section we will look at the following commands that can be used:

• RESTORE HEADERONLY - gives you a list of all the backups in a file
• RESTORE LABELONLY - gives you the backup media information
• RESTORE FILELISTONLY - gives you a list of all of the files that were backed up for a given backup
• RESTORE DATABASE - allows you to restore a full, differential, file or filegroup backup
• RESTORE LOG - allows you to restore a transaction log backup
• RESTORE VERIFYONLY - verifies that the backup is readable by the RESTORE process
Take the time to get to understand what options are available and what can be done using SQL
Server Management Studio and what options are only available via T-SQL commands.
How to get the contents of a SQL Server backup file
(RESTORE HEADERONLY)
Overview
The RESTORE HEADERONLY option allows you to see the backup header information for all
backups on a particular backup device. In most cases a physical backup file contains only one
backup, so you will probably only see one header record, but if the file holds multiple
backups you will see the information for each backup.
Explanation
The RESTORE HEADERONLY option can be simply issued as follows for a backup that exists on
disk.

Get headeronly information from a full backup


T-SQL

RESTORE HEADERONLY FROM DISK = 'C:\AdventureWorks.BAK'


GO
The result set would look like the following. As you can see there is a lot of useful information
returned when using HEADERONLY.

ColumnName Value
BackupName NULL
BackupDescription NULL
BackupType 1
ExpirationDate NULL
Compressed 0
Position 1
DeviceType 2
UserName TESTServer1\DBA
ServerName TESTServer1
DatabaseName AdventureWorks
DatabaseVersion 611
DatabaseCreationDate 10/22/08 13:48
BackupSize 177324544
FirstLSN 414000000754800000
LastLSN 414000000758300000
CheckpointLSN 414000000754800000
DatabaseBackupLSN 0
BackupStartDate 3/19/09 12:02
BackupFinishDate 3/19/09 12:02

SortOrder 0
CodePage 0
UnicodeLocaleId 1033
UnicodeComparisonStyle 196608
CompatibilityLevel 90
SoftwareVendorId 4608
SoftwareVersionMajor 9
SoftwareVersionMinor 0
SoftwareVersionBuild 3077
MachineName TESTServer1
Flags 512
BindingID 459DDE25-B461-4CFD-B72E-0D4388F50331
RecoveryForkID E1BF182D-E21A-485A-9E2F-09E9C7DEC9D4
Collation Latin1_General_CS_AS
FamilyGUID E1BF182D-E21A-485A-9E2F-09E9C7DEC9D4
HasBulkLoggedData 0
IsSnapshot 0
IsReadOnly 0
IsSingleUser 0
HasBackupChecksums 0
IsDamaged 0
BeginsLogChain 0
HasIncompleteMetaData 0
IsForceOffline 0
IsCopyOnly 0
FirstRecoveryForkID E1BF182D-E21A-485A-9E2F-09E9C7DEC9D4
ForkPointLSN NULL
RecoveryModel FULL
DifferentialBaseLSN NULL
DifferentialBaseGUID NULL
BackupTypeDescription Database
BackupSetGUID 0C6D57F2-2EDB-4DEB-9C10-53C68578B046
If this backup file contained multiple backups you would get information for each backup in
the file.
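The "Position" column shown above is the value to pass as FILE = n when restoring one backup out of a file that holds several, for example:

```sql
-- Restore the second backup set stored in the file
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorks.BAK'
WITH FILE = 2
GO
```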

SQL Server Management Studio

• Right click on the Databases


• Select "Restore Database..."
• Select "From Device:" and click on the "..."
• Click on "Add" and select the back file, for this example it is "C:\AdventureWorks.BAK"
and click "OK"
• Click "OK" again to see the contents of the backup file, below you can see that there are
two backups in this one file


SQL Server RESTORE LABELONLY


(RESTORE LABELONLY)
Overview
The RESTORE LABELONLY option allows you to see the backup media information for the
backup device. So if a backup device, such as a backup file, has multiple backups you will only
get one record back that gives you information about the media set, such as the software that
was used to create the backup, the date the media was created, etc...
Explanation
This information can only be returned using T-SQL; there is no way to get this information from
SQL Server Management Studio.
The RESTORE LABELONLY option can be simply issued as follows for a backup that exists on
disk.

Get labelonly information from a backup file

T-SQL

RESTORE LABELONLY FROM DISK = 'C:\AdventureWorks.BAK'


GO
The result set would look like the following. As you can see there is a lot of useful information
returned when using LABELONLY.

ColumnName Value
MediaName NULL
MediaSetId 8825ADE0-2C83-45BD-994C-7469A5DFF124
FamilyCount 1
FamilySequenceNumber 1
MediaFamilyId 8A6648F8-0000-0000-0000-000000000000
MediaSequenceNumber 1
MediaLabelPresent 0
MediaDescription NULL
SoftwareName Microsoft SQL Server
SoftwareVendorId 4608
MediaDate 02:37.0
MirrorCount 1
SQL Server RESTORE FILELISTONLY
(RESTORE FILELISTONLY)
Overview
The RESTORE FILELISTONLY option allows you to see a list of the files that were backed up.
So for example if you have a full backup you will see all of the data files (mdf) and the log file (ldf).
Explanation
This information can only be returned using T-SQL; there is no way to get this information from
SQL Server Management Studio, although if you do a restore and select options, you will see
some of this information in SSMS.
The RESTORE FILELISTONLY option can be simply issued as follows for a backup that exists
on disk. If there are multiple backups in one file and you do not specify "WITH FILE = X" you will
only get information for the first backup in the file. To get the FILE number use RESTORE
HEADERONLY and use the "Position" column.
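For reference, RESTORE HEADERONLY is issued the same way as the other restore information commands; this is a quick sketch using the same backup file name as the examples in this section:

RESTORE HEADERONLY FROM DISK = 'C:\AdventureWorks.BAK'
GO

Each backup stored in the file comes back as one row, and the value in the Position column is what you pass to WITH FILE.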

Get filelistonly information from a backup file


T-SQL

RESTORE FILELISTONLY FROM DISK = 'C:\AdventureWorks.BAK' WITH FILE = 1


GO
The result set would look like the following. The things that are helpful here include the LogicalName
and PhysicalName.

ColumnName Value - Row 1 Value - Row2


LogicalName AdventureWorks_Data AdventureWorks_Log
PhysicalName C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Da...
Type D L
FileGroupName PRIMARY NULL

Size 202113024 153092096
MaxSize 35184372080640 2199023255552
FileId 1 2
CreateLSN 0 0
DropLSN 0 0
UniqueId 50A534B0-156C-42B7-82FE-A57D21A53EEA 4F544777-6DBB-4BBC-818A
ReadOnlyLSN 0 0
ReadWriteLSN 0 0
BackupSizeInBytes 177012736 0
SourceBlockSize 512 512
FileGroupId 1 0
LogGroupGUID NULL NULL
DifferentialBaseLSN 0 0
DifferentialBaseGUID 00000000-0000-0000-0000-000000000000 00000000-0000-0000-0000-0
IsReadOnly 0 0
IsPresent 1 1
How to restore a SQL Server backup
(RESTORE DATABASE)
Overview
The RESTORE DATABASE option allows you to restore either a full, differential, file or filegroup
backup.
Explanation
When restoring a database you will need exclusive access to the database, which means no other
user connections can be using the database.
The RESTORE DATABASE option can be done using either T-SQL or using SQL Server
Management Studio.

T-SQL
Restore a full backup
This will restore the database using the specified file. If the database already exists it will
overwrite the files. If the database does not exist it will create the database and restore the files to
the same location specified in the backup. The original location can be checked by using RESTORE
FILELISTONLY.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


GO
Restore a full backup allowing additional restores such as a differential or transaction log
backup (NORECOVERY)
The NORECOVERY option leaves the database in a restoring state after the restore has
completed. This allows you to restore additional files to get the database more current. By default
this option is turned off.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH


NORECOVERY
GO
Restore a differential backup
To restore a differential backup, the options are exactly the same. The first thing that has to

happen is to do a full restore using the NORECOVERY option. Then the differential can be
restored.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH


NORECOVERY
GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.DIF'
GO
Restore using a backup file that has multiple backups
Let's say we use the same backup file, AdventureWorks.BAK, to write our full backup and our
differential backup. We can use RESTORE HEADERONLY to see the backups and the positions
in the backup file. Let's say that the restore headeronly tells us that in position 1 we have a full
backup and in position 2 we have a differential backup. The restore commands would be:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH


NORECOVERY, FILE = 1
GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH FILE
=2
GO
How to restore a SQL Server transaction log backup
(RESTORE LOG)
Overview
The RESTORE LOG command allows you to restore a transaction log backup. The options
include restoring the entire transaction log, restoring to a certain point in time, or restoring to a
marked transaction.
Explanation
When restoring a transaction log you will need exclusive access to the database, which means no
other user connections can be using the database. If the database is in a restoring state this is
not an issue, because no one can be using the database.
The RESTORE LOG option can be done using either T-SQL or using SQL Server Management
Studio.

T-SQL
Restore a transaction log backup
To restore a transaction log backup the database needs to be in a restoring state. This means that
you would have to restore a full backup first and possibly a differential backup as well.

RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'


GO
Restore multiple transaction log files (NORECOVERY)
The NORECOVERY option leaves the database in a restoring state after the restore has
completed. This allows you to restore additional files to get the database more current. By default
this option is turned off. As was mentioned above the database needs to be in a restoring state,
so this would have already been done for at least one backup file that was restored.
This shows restoring two transaction log backups; the first uses NORECOVERY and the second
does not, which means the database will be accessible after the restore completes.

RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks_1.TRN' WITH


NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks_2.TRN'

GO
Restore a differential backup
To restore a differential backup, the first thing that has to happen is to do a full restore using the
NORECOVERY option. Then the differential can be restored; note that a differential backup is
restored with RESTORE DATABASE, not RESTORE LOG.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH


NORECOVERY
GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.DIF'
GO
Restore multiple transaction log backups from the same backup file
Let's say we use the same backup file, AdventureWorks.TRN, to write all of our transaction log
backups. This is not a best practice, because if the file becomes corrupt you could lose every
backup stored in it. We can use RESTORE HEADERONLY to see the backups and their
positions in the backup file. Let's say RESTORE HEADERONLY tells us that we have 3
transaction log backups in this file and we want to restore all three. The restore commands
would be:

RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH


NORECOVERY, FILE = 1
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH
NORECOVERY, FILE = 2
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH FILE = 3
GO
Checking to make sure a SQL Server backup is useable
(RESTORE VERIFYONLY)
Overview
The RESTORE VERIFYONLY command checks the backup to ensure it is complete and the
entire backup is readable. It does not do an actual restore; it reads through the file to ensure
that SQL Server can read it in the event that a restore using this backup needs to occur.
Explanation
The RESTORE VERIFYONLY option is a good choice to check each backup after the backup
has completed. This takes additional processing time, but it is a good practice to put in place.
Following are ways you can do this with T-SQL and SSMS.

T-SQL
Check a backup file on disk
The following command will check the backup file and return a message indicating whether the file is
valid or not. If it is not valid, the file is not going to be usable for a restore and a new
backup should be taken. One thing to note: if there are multiple backups in a file, this only
checks the first backup in the file.

RESTORE VERIFYONLY FROM DISK = 'C:\AdventureWorks.BAK'


GO
Check a backup file on disk for a particular backup
This command will check the second backup in this backup file. To check the contents in a
backup you can use RESTORE HEADERONLY and use the Position column to specify the FILE
number.

RESTORE VERIFYONLY FROM DISK = 'C:\AdventureWorks.BAK' WITH FILE = 2

GO
SQL Server Management Studio
When creating backups either using a maintenance plan or through SSMS you have the option
to turn on the RESTORE VERIFYONLY option as shown below. This can be done for all backup
types.
Maintenance Plan

Backup using SSMS


SQL Server Restore Options

Overview
In addition to the commands that we already discussed there are also many other options that
can be used along with these commands.
In this section we will look at these various options that can be included using the WITH option for
these various commands.
 RECOVERY
 NORECOVERY
 STATS
 REPLACE
 MOVE
 STOPAT

There are also additional options that will be covered in the near future.

Recovering a database that is in the restoring state


(RESTORE ... WITH RECOVERY)
Overview
The RESTORE ... WITH RECOVERY option puts the database into a useable state, so users can
access a restored database.
Explanation
When you issue a RESTORE DATABASE or RESTORE LOG command the WITH RECOVERY
option is used by default; it does not need to be specified for this action to take place.
If you restore a "Full" backup the default setting is RESTORE WITH RECOVERY, so after the
database has been restored it can then be used by your end users.
If you are restoring a database using multiple backup files, you would use the WITH
NORECOVERY option for each restore except the last.
If your database is still in the restoring state and you want to recover it without restoring additional
backups you can issue a RESTORE DATABASE .. WITH RECOVERY to bring the database
online for users to use.

T-SQL
Restore full backup WITH RECOVERY
As mentioned above this option is the default, but you can specify it as follows.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH RECOVERY
GO
Recover a database that is in the "restoring" state
The following command will take a database that is in the "restoring" state and make it available
for end users.

RESTORE DATABASE AdventureWorks WITH RECOVERY


GO
Restore multiple backups using WITH RECOVERY for last backup
The first restore uses the NORECOVERY option so additional restores can be done. The second
command restores the transaction log and then brings the database online for end user use.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH RECOVERY
GO
SQL Server Management Studio
When restoring using SSMS the WITH RECOVERY option is used by default, so there is nothing
that needs to be set but this can be set or changed on the options page when restoring.


Restoring multiple backups to the same database


(RESTORE ... WITH NORECOVERY)
Overview
The RESTORE ... WITH NORECOVERY option puts the database into a "restoring" state, so
additional backups can be restored. When the database is in a "restoring" state no users can
access the database or the database contents.
Explanation
When you issue a RESTORE DATABASE or RESTORE LOG command the WITH
NORECOVERY option allows you to restore additional backup files before recovering the
database. This therefore allows you to get the database as current as possible before letting your
end users access the data.
This option is not on by default, so if you need to recover a database by restoring multiple backup
files and forget to use this option you have to start the restore process all over again.
The most common example of this would be to restore a "Full" backup and one or more
"Transaction Log" backups.

T-SQL
Restore full backup and one transaction log backup
The first command does the restore and leaves the database in a restoring state, and the second
command restores the transaction log backup and then makes the database useable.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH RECOVERY
GO
Restore full backup and two transaction log backups
This restores the first two backups using NORECOVERY and then RECOVERY for the last
restore.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks2.TRN'
WITH RECOVERY
GO
Restore full backup, latest differential and two transaction log backups
This restores the first three backups using NORECOVERY and then RECOVERY for the last
restore.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.DIF'
WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks2.TRN'
WITH RECOVERY
GO
SQL Server Management Studio
To restore a database backup using the WITH NORECOVERY option go to the options page and
select the item highlighted below.


Get percentage complete when restoring a database


(RESTORE ... WITH STATS)
Overview
The RESTORE WITH STATS option allows you to see how far along the restore process is; it
can be used with RESTORE DATABASE, RESTORE LOG, and RESTORE VERIFYONLY.
Explanation
The RESTORE WITH STATS option will give you an idea of where the restore currently
is in the overall process. This information is presented as a percentage of completion. The default
is to display after every 10%, or a percentage value can be specified. This information is displayed
on the Messages tab in your query window.

T-SQL
Restore a full database with default stats setting
The following will show the percentage complete after each 10% segment.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'

GO
Restore a full database with stats showing for each 1 percent complete
This will show progress after each 1% of completion.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH


STATS = 1
GO
SQL Server Management Studio
When restoring a database using SSMS, this information is displayed as shown in the highlighted
section below. The default is 10%, which cannot be changed in the GUI.

Restore SQL Server database and overwrite existing database


(RESTORE ... WITH REPLACE)

Overview
The RESTORE ... WITH REPLACE option allows you to overwrite an existing database when
doing a restore. In some cases when you try to do a restore you may get an error that says "The
tail of the log for the database .. has not been backed up".
Explanation
The RESTORE ... WITH REPLACE allows you to write over an existing database when doing a
restore without first backing up the tail of the transaction log. The WITH REPLACE basically tells
SQL Server to just throw out any active contents in the transaction log and move forward with the
restore.
If you try to restore using T-SQL commands you will get this error message:

Msg 3159, Level 16, State 1, Line 1


The tail of the log for the database "AdventureWorks" has not been backed up. Use BACKUP LOG
WITH NORECOVERY to backup the log if it contains work you do not want to lose. Use the WITH
REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of
the log.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
If you try to restore using SQL Server Management Studio you will see this error message:

T-SQL
Restore full backup using WITH REPLACE
The command below will restore the database and disregard any active data in the current
transaction log.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH REPLACE
GO
SQL Server Management Studio
To restore using SSMS do the following, on the options page for the restore select "Overwrite the
existing database".


Restore SQL Server database to different filenames and locations


(RESTORE ... WITH MOVE)
Overview
The RESTORE ... WITH MOVE option allows you to restore your database, but also specify the
new location for the database files (mdf and ldf). If you are restoring an existing database from a
backup of that database then this is not required, but if you are restoring a database from a
different instance with different file locations then you may need to use this option.
Explanation
The RESTORE ... WITH MOVE option will let you determine what to name the database files and
also what location these files will be created in. Before using this option you need to know the
logical names for these files as well as know where SQL Server will restore the files if you do not
use the WITH MOVE option.
If another database already exists that uses the same file names you are trying to restore and the
database is online the restore will fail. But if the database is not online for some reason and the
files are not open, the restore will overwrite these files if you do not use the WITH MOVE option,
so be careful you do not accidentally overwrite good database files.

Also, when using the WITH MOVE option you need to make sure the account used for the SQL
Server engine has permissions to create these files in the folder you specify.

T-SQL
Determine contents of backup
So the first thing you need to do is determine the logical names and the physical location of the
files. This can be done by using the RESTORE FILELISTONLY command. This will give you the
logical and physical names.
Here is an example:

RESTORE FILELISTONLY FROM DISK = 'C:\AdventureWorks.BAK'


GO
This gives us these results:

ColumnName Value - Row 1 Value - Row2

LogicalName AdventureWorks_Data AdventureWorks_Log

PhysicalName C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Adventure...

Type D L
Restore full backup WITH MOVE
So let's say we want to restore this database, but we want to put the data file in the "G:\SQLData"
folder and the transaction log file in the "H:\SQLLog" folder. The command would be like the
following:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH MOVE 'AdventureWorks_Data' TO 'G:\SQLData\AdventureWorks_Data.mdf',
MOVE 'AdventureWorks_Log' TO 'H:\SQLLog\AdventureWorks_Log.ldf'
GO
Restore full and transaction log backup WITH MOVE
The WITH MOVE only needs to be specified for the first restore, because after this the database
will be in a "restoring" state. The second restore will just write the contents to this new location
that is being used.

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH MOVE 'AdventureWorks_Data' TO 'G:\SQLData\AdventureWorks_Data.mdf',
MOVE 'AdventureWorks_Log' TO 'H:\SQLLog\AdventureWorks_Log.ldf',
NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
GO
SQL Server Management Studio
To restore using SSMS do the following, on the options page for the restore, change the "Restore
As:" values for each file as shown below.


SQL Server point in time restore


(RESTORE LOG ... WITH STOPAT)

Overview

The RESTORE ... WITH STOPAT option allows you to restore your database to a point in time.
This gives you the ability to restore a database prior to an event that occurred that was
detrimental to your database. In order for this option to work, the database needs to be either in
the FULL or Bulk-Logged recovery model and you need to be doing transaction log backups.
Explanation

When data is written to your database it is first written to the transaction log and then to the data
file after the transaction is complete. When you restore your transaction log, SQL Server will
replay all transactions that are in the transaction log and roll forward or roll back transactions that
it needs to prior to putting the database in a useable state.

Each of these transactions has an LSN (log sequence number) along with a timestamp, so
when restoring the transaction log you have the ability to tell SQL Server where to stop reading
transactions that need to be restored.
One thing to note is that if your database is using the Bulk-Logged recovery model and there is a
minimally logged operation (such as a bulk insert) in a transaction log, you cannot do a point in
time recovery using that transaction log. But if you have a later transaction log backup that
does not contain a minimally logged operation, you can still use it for a point in time recovery;
the point in time you are referencing just has to fall within this second transaction log backup.

T-SQL
Restore database with STOPAT
This will restore the AdventureWorks database to a point in time of "March 23, 2009 at
5:31 PM".

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH RECOVERY,
STOPAT = 'Mar 23, 2009 05:31:00 PM'
GO
Restore database with STOPAT where recovery model is Bulk-Logged and there is a
minimally logged operation
In this example we have a full backup and the transaction log has a minimally logged operation.
We can try to do a point in time recovery using the commands below:

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'


WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH RECOVERY,
STOPAT = 'Mar 23, 2009 05:31:00 PM'
GO
But if there are bulk operations we will get this error.

Msg 4341, Level 16, State 1, Line 1


This log backup contains bulk-logged changes. It cannot be used to stop at an arbitrary point in
time.
Msg 4338, Level 16, State 1, Line 1
The STOPAT clause specifies a point too early to allow this backup set to be restored. Choose a
different stop point or use RESTORE DATABASE WITH RECOVERY to recover at the current
point.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
The restore operation will complete, but it will restore the entire transaction log backup and leave
the database in a "restoring" state. You could then either restore additional transaction logs or
use the RESTORE .. WITH RECOVERY option to bring the database back online.

SQL Server Management Studio


To restore to a point in time using SSMS do the following, select the backup and the transaction
logs you want to restore and then use the "To a point in time." option as shown below to select
the point in time you want to recover the database to.


Restoring to a point in time with a bulk-logged operation in the transaction log


If you try to restore using SSMS you will get the following error message, similar to what we got
with the T-SQL code.

Getting exclusive access to a SQL Server database for restore

Overview
When restoring a database, one of the things you need to do is ensure that you have exclusive
access to the database. If any other users are in the database the restore will fail.
Explanation
When trying to do a restore, if any other user is in the database you will see these types of error
messages:
T-SQL

Msg 3101, Level 16, State 1, Line 1


Exclusive access could not be obtained because the database is in use.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
SSMS

Getting Exclusive Access


To get exclusive access, all other connections need to be dropped or moved to a different
database so they are not using the database you are trying to restore. You can
use sp_who2 or SSMS to see which connections are using the database you are trying to restore.
Using KILL
One option to get exclusive access is to use the KILL command to kill each connection that is
using the database, but be aware of which connections you are killing and the rollbacks that
may need to occur.
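A minimal sketch of this approach follows; the session ID 53 is purely an illustrative value, so take the actual SPID values from the sp_who2 output:

-- list the sessions and note the SPIDs using the database to be restored
EXEC sp_who2
GO
-- kill one of those sessions by its SPID (53 is an example value)
KILL 53
GO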
Using ALTER DATABASE
Another option is to put the database in single user mode and then do the restore. This also
rolls back open transactions depending on the option you use, but handles all connections at
once.

ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE


GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'
GO

Tail log Backup:


Consider a situation where your database got damaged for some reason, you are yet to
begin the emergency restore, and it is still possible for you to take a T-Log backup of that
damaged database (if the database is offline and corrupt, perform the T-Log backup using WITH
NO_TRUNCATE or WITH CONTINUE_AFTER_ERROR); this T-Log backup is called the tail log backup.

Taking a tail log backup allows you to recover as much data as possible; you could call it
up-to-the-minute recovery. If a tail log backup is not possible, you can only recover your

data up to your last T-Log backup.

You can take t-log using


backup log databasename
to disk='somepath\file'
with no_truncate;
go

OR

backup log databasename


to disk='somepath\file'
with continue_after_error;
go

If the backup fails with an error such as the following, a tail log backup is not possible:

Msg 942, Level 14, State 3, Line 1


Database 'database name' cannot be opened because it is offline.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

That means you are not allowed to take a tail log backup, and hence you can only recover your
database up to your last T-Log backup.

A tail log backup is the log backup taken after data corruption (a disaster). Even though there is
file corruption, we can try to take a log backup (the tail log backup). This will be used during
point in time recovery.

Consider a scenario where we have a full backup taken at 12:00 noon and one transaction log
backup at 1:00 PM. The log backup is scheduled to run every hour.

If disaster happens at 1:30 PM, then we can try to take a tail log backup at 1:30 (after the
disaster). If we can take the tail log backup, then in recovery we first restore the 12:00 noon
full backup, then the 1:00 PM log backup, and finally the 1:30 tail log backup (taken after the
disaster).
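As a sketch, the restore sequence for this scenario could look like the following; the backup file names are illustrative only:

-- 12:00 noon full backup
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Full_1200.BAK' WITH NORECOVERY
GO
-- 1:00 PM transaction log backup
RESTORE LOG AdventureWorks FROM DISK = 'C:\Log_1300.TRN' WITH NORECOVERY
GO
-- 1:30 PM tail log backup taken after the disaster
RESTORE LOG AdventureWorks FROM DISK = 'C:\TailLog_1330.TRN' WITH RECOVERY
GO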

SQLSERVER AGENT:

IMPORT AND EXPORT:

LOGSHIPPING:

For a distributed database application environment it is always required to synchronize different
database servers, back up and copy transaction logs, and so on. If we implement this in the
application itself, we have to put in a lot of effort to build it. SQL Server 2005 provides an
advanced feature called Log Shipping: an automated process for backing up, copying, and
restoring transaction logs that keeps databases synchronized in a distributed database server
application, which can improve application performance and the availability of the database. In
my recent project I did some experimenting with it, which I will explain in this article.

What is Log Shipping?


Log Shipping is used to synchronize distributed database servers by copying transaction logs,
backing up, and restoring data. SQL Server uses SQL Server Agent jobs to make these
processes automatic. Log shipping does not involve automatic failover: it keeps the
databases synchronized, but if the primary server fails it will not redirect your application to
the secondary server. That has to be done manually.
Log shipping is the process of automating the backup of database and transaction log files on a
production SQL Server, and then restoring them onto a standby server. But this is not all. The key
feature of log shipping is that it automatically backs up transaction logs throughout the day
(at whatever interval you specify) and automatically restores them on the standby server. This in
effect keeps the two SQL Servers in sync. Should the production server fail, all you have to do
is point the users to the new server, and you are all set. Well, it's not really that easy, but it comes
close if you put enough effort into your log shipping setup.

The Need for Standby Servers:

In a perfect world we wouldn't need standby servers for our SQL Servers. Our hardware would
never fail, NT Server 4.0 or Windows 2000 would never blue screen, SQL Server would never
stop running, and our applications would never balk.

In a partially perfect world, we could afford very expensive clustered SQL Servers that
automatically failover our wounded and dead production SQL Servers, reducing our stress and
keeping our users very happy.

But for most of us, the closest thing we can afford to implement when it comes to SQL Server
failover is a standby server that we have to fail over manually. And even some of us can't afford
this. But for this article, I am going to assume that you can afford a standby server.

The concept of standby servers is not a new one. It has been around a long time and been used
by many DBAs. Traditionally, using a standby server for failover has involved manually making
database and log backups on the production server and then restoring them to the standby server
on a regular basis. This way, should the production server fail, then users could access the
standby server instead, and downtime and data loss would be minimized.
This article is about log shipping, a refined variation of the traditional manual standby failover
server process. Its two major benefits over the traditional methods are that it automates most of
the manual work and helps to reduce potential data loss even more.

Benefits of Log Shipping:

While I have already talked about some of the benefits of log shipping, let's take a more
comprehensive look:

Log shipping doesn't require expensive hardware or software. While it is great if your standby
server is similar in capacity to your production server, it is not a requirement. In addition, you can
use the standby server for other tasks, helping to justify the cost of the standby server. Just keep
in mind that if you do need to fail over, this server will have to handle not one but two loads. I
like to make my standby server a development server. This way, I keep my developers off the
production server, but don't put too much work load on the standby server.

Once log shipping has been implemented, it is relatively easy to maintain.

Assuming you have implemented log shipping correctly, it is very reliable.


The manual failover process is generally very short, typically 15 minutes or less.

Depending on how you have designed your log shipping process, very little, if any, data is lost
should you have to failover. The amount of data loss, if any, is also dependent on why your
production server failed.

Implementing log shipping is not technically difficult. Almost any DBA with several months or
more of SQL Server 7 experience can successfully implement it.

Problems with Log Shipping:

Let's face it, log shipping is a compromise. It is not the ideal solution, but it is often a practical
solution given real-world budget constraints. Some of the problems with log shipping include:
Log shipping failover is not automatic. The DBA must still manually fail over the server, which
means the DBA must be present when the failover occurs.

The users will experience some downtime. How long depends on how well you implemented log
shipping, the nature of the production server failure, your network, the standby server, and the
application or applications to be failed over.

Some data can be lost, although not always. How much data is lost depends on how often you
schedule log shipping and whether or not the transaction log on the failed production server is
recoverable.

The database or databases that are being failed over to the standby server cannot be used for
anything else. But databases on the standby server not being used for failover can still be used
normally.

When it comes time for the actual failover, you must do one of two things to make your
applications work: either rename the standby server the same name as the failed production
server (and the IP address), or re-point your user's applications to the new standby server. In
some cases, neither of these options is practical.

Log Shipping Overview:

Before we get into the details of how to implement log shipping, let's take a look at the big picture.
Essentially, here's what you need to do in order to implement log shipping:
Ensure you have the necessary hardware and software properly prepared to implement log
shipping.
Synchronize the SQL Server login IDs between the production and standby servers.
Create two backup devices. One will be used for your database backups and the other will be
used for your transaction log backups.

On the production server, create a linked server to your standby server.

On the standby servers, create two stored procedures. One stored procedure will be used to
restore the database. The other stored procedure will be used to restore transaction logs.
On the production server, create two SQL Server jobs that will be used to perform the database
and transaction log backups. Each job will include multiple steps with scripts that will perform the
backups, copy the files from the production server to the standby server, and fire the remote
stored procedures used to restore the database and log files.

Start and test the log shipping process.

Devise and test the failover process.
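The backup/restore core of these steps can be sketched in T-SQL. The database name and file paths below are hypothetical placeholders, and the file-copy step between the servers is omitted:

```sql
-- On the production server: back up the transaction log
BACKUP LOG ProductionDB
TO DISK = 'D:\LogShip\ProductionDB_log.trn'
WITH INIT;

-- On the standby server (after the file has been copied): restore the log,
-- leaving the database read-only but able to accept further log restores
RESTORE LOG ProductionDB
FROM DISK = 'E:\LogShip\ProductionDB_log.trn'
WITH STANDBY = 'E:\LogShip\ProductionDB_undo.dat';
```

In a real job, these two statements run as scheduled steps on their respective servers, with the copy between them handled by a script or scheduled task.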


Monitor the log shipping process.

Obviously I have left out a lot of details, but at least now you know where we are headed.
To make my explanations easier to understand in this article, all my examples assume you will be
failing over only one database from the production server to the standby server. In the real world
you will probably want to failover more than just one. Once you have implemented log shipping
for one database, it should be obvious how to implement others. Generally, I just add additional
databases to my already existing scripts and jobs. But if you prefer, you can create separate
scripts and jobs for each database you want to failover using log shipping.
As you read the details of how I implement log shipping below, you may think of other ways to
accomplish the same steps.

Hardware and Software Requirements:

The hardware and software requirements for log shipping are not difficult. The hardware for the
production and the standby server should be as similar as you can afford. If your production
server only handles a couple of dozen simultaneous users, then you probably don't need to
spend a small fortune on making the standby server just like the production server.
On the other hand, if your production server handles 500 simultaneous users, or has a
multi-gigabyte database, then you may want to make your standby server as similar to the
production server as you can afford.

As far as software is concerned, I just try to ensure that I have Windows and SQL Server at the
same service pack level on both machines. In addition, the two servers must have SQL Server
configured similarly. For example, the code page/character set, sort order, Unicode collation, and
the locale all must be the same on both servers.

In order to help reduce any potential data loss during server failover from the production server to
the standby server, your production server should have its transaction logs stored on a separate
physical drive array than the database files. While this will boost your server's performance, the
main reason for this is to help reduce data loss.

For example, if the drive array with your database files on it goes down, then hopefully the drive
array with the log files will be OK. If this is the case, then you should be able to recover the
transaction log and move it to the standby server, significantly reducing any data loss. But if the
transaction logs are on the same drive array as the database files, and the drive array fails, then
you have lost any data entered into the system since the last log file was shipped to the standby
server.

The main functions of Log Shipping are as follows:

• Backing up the transaction log of the primary database


• Copying the transaction log backup to each secondary server
• Restoring the transaction log backup on the secondary database

Components of Log Shipping


For implementing Log Shipping, we need the following components - Primary Database Server,
Secondary Database Server, and Monitor Server.

• Primary Database Server: The primary server is the main database server or SQL Server
Database Engine instance, which is being accessed by the application. It hosts the
primary (source) database.

• Secondary Database Server: The secondary server is a SQL Server Database Engine
instance on a different server that holds the restored copy of the primary database. We
can have multiple secondary servers based on business requirements.
• Monitor Server: The monitor server is a SQL Server Database Engine instance that tracks
the log shipping process.

Figure 1: Log Shipping Database Server Configuration

Log Shipping Prerequisites

• Must have at least two database servers or two SQL Server 2005 Database Engine instances.
• The configuring user should have admin privileges on both servers.
• The SQL Server Agent service must be configured properly.
• The recovery model of the primary database must be Full or Bulk-Logged.
• A shared folder for copying the transaction log backups.
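The recovery-model prerequisite can be verified by querying sys.databases; the database name used here is a hypothetical example:

```sql
-- Returns FULL or BULK_LOGGED for databases eligible for log shipping
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'ProductionDB';
```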

SQL Server 2005 Versions that Support Log Shipping

SQL Server 2005 Version              Available
SQL Server 2005 Enterprise Edition   Yes
SQL Server 2005 Standard Edition     Yes
SQL Server 2005 Workgroup Edition    Yes
SQL Server 2005 Developer Edition    Yes
SQL Server 2005 Express Edition      No

Background Tables in Log shipping:

TABLES CREATED IN MSDB:

1. log_shipping_monitor_alert
2. log_shipping_monitor_error_detail
3. log_shipping_monitor_history_detail
4. log_shipping_monitor_primary
5. log_shipping_monitor_secondary
6. log_shipping_primaries
7. log_shipping_primary_databases
8. log_shipping_primary_secondaries
9. log_shipping_secondaries
10. log_shipping_secondary
11. log_shipping_secondary_databases
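These msdb tables can be queried directly to check the health of a log shipping configuration. A sketch, assuming the SQL Server 2005 monitor-table column names:

```sql
-- Last transaction log backup taken on the primary
SELECT primary_server, primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

-- Last file copied to and restored on the secondary
SELECT secondary_server, secondary_database, last_copied_date, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```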

MIRRORING:

SQL Server 2005 provides a set of high availability methods that the users can use to achieve
fault tolerance and to prevent server outages and data loss. The selection of the high availability
method depends on various factors. Some DBAs need the servers to be available 24/7, while
others can afford an outage of a couple of hours. Cost also plays a role in the selection. For
example, Clustering is an expensive high availability method when compared to Database
Mirroring, but it allows the user to failover immediately.

The following high availability features are available with the Enterprise edition:

• Failover Clustering
• Multiple Instances(up to 50)
• Log shipping
• Database Snapshots
• Database Mirroring

The following high availability features are available with Standard Edition:

• Failover Clustering(maximum two nodes)


• Multiple instances(up to 16)
• Log shipping
• Database Mirroring

In this article, we will discuss the Database Mirroring high availability method.

Overview of Database Mirroring:

Database Mirroring is primarily a software solution for increasing database availability. Mirroring
is implemented on a per-database basis and works only with the full recovery model. Database
mirroring is available in the Enterprise edition and in the Standard edition. Only user databases
can be mirrored.

Mirroring allows the user to create an exact copy of a database on a different server. The
mirrored database must reside on different instance of SQL Server Database engine. Microsoft
fully supports database mirroring from SQL Server 2005 SP1 onwards. For the RTM release (prior
to SP1), Microsoft support services will not support databases or applications that use database
mirroring, and the feature should not be used in production environments. Prior to SP1, database
mirroring is disabled by default, but can be enabled for evaluation purposes by using trace flag
1400. The following T-SQL statement can be used to achieve this:

DBCC TRACEON (1400)

Database Mirroring is only available in the Standard, Developer and Enterprise editions of SQL
Server 2005. These are the required versions for both the principal and mirror instances of SQL
Server. The witness server can run on any version of SQL Server. In addition, there are some
other features only available in the Developer and Enterprise editions of SQL Server, but the
base functionality exists in the Standard edition.

Benefits of Database Mirroring:

1. Implementing database mirroring is relatively easy. It does not require any additional hardware
such as clustering support, so it is a cheaper alternative to clustering a database.

2. Database mirroring provides complete or nearly complete redundancy of the data, depending
on the operating modes.

3. It increases the availability of the database.

Understanding Database Mirroring Concepts:

Principal: The principal server hosts the primary database, which acts as the starting point in a
database mirroring session. Every transaction applied to the principal database is transferred to
the mirror database.

Mirror: The mirror is the database that receives the copies from the principal server. There should
be a consistent connection between the mirror and the principal server.

Standby Server: In the process of database mirroring, a standby server is maintained. It is not
accessible to users. If the principal server fails, users can easily switch over.

Modes of Database Mirroring: Database Mirroring can work in two ways: synchronous or
asynchronous.

A) Synchronous mode: This is also called high-safety mode. In this mode, every transaction
applied to the principal will also be committed on the mirror server. The transaction on the
principal will be released only when it is also committed on the mirror. Once it receives an

acknowledgement from the mirror server, the principal will notify the client that the statement has
been completed. The high safety mode protects the data by requiring the data to be synchronized
between the principal and the mirror server.

1. High safety mode without automatic failover:

Transaction Safety set to full

When the partners are connected (Principal and Mirror) and the database is already
synchronized, manual failover is supported. If the mirror server instance goes down, the principal
server instance is unaffected and runs exposed (that is without mirroring the data). If the principal
server is lost, the mirror is suspended, but service can be manually forced to the mirror server
(with possible data loss).

2. High Safety mode with automatic failover:

Transaction Safety set to full

Automatic failover provides high availability by ensuring that the database is still served after the
loss of one server. Automatic failover requires that the session possess a third server instance,
the witness, which ideally resides on a third computer. The above figure shows the configuration
of a high-safety mode session that supports automatic failover.

B) Asynchronous mode: This is also known as high-performance mode. Here performance
is achieved at the cost of availability. In this mode, the principal server sends log information to
the mirror server without waiting for an acknowledgement from the mirror server.
Transactions on the principal server commit without waiting for the mirror server to commit to the
log file. The following figure shows the configuration of a session using high-performance mode.

Transaction Safety set to off

This mode allows the principal server to run with minimum transactional latency and does not
allow the user to use automatic failover. Forced service is one of the possible responses to the
failure of the principal server. It uses the mirror server as a warm standby server. Because data
loss is possible, one should consider other alternatives before forcing service to the mirror.
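The two modes correspond to the SAFETY setting of a mirroring session, which can be switched with ALTER DATABASE; AdventureWorks is used here as a placeholder database name:

```sql
-- Run on the principal: switch to high-performance (asynchronous) mode
ALTER DATABASE AdventureWorks SET PARTNER SAFETY OFF;

-- Switch back to high-safety (synchronous) mode
ALTER DATABASE AdventureWorks SET PARTNER SAFETY FULL;
```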

Types of Mirroring:

To provide flexibility when dealing with different requirements, SQL Server 2005 offers three
operating modes, determined by the presence of the witness and the transaction safety level,
configurable on a per-session basis. The safety level can be turned either on or off. With safety
set to ON, committed transactions are guaranteed to be synchronized between the mirrored
partners; with safety turned OFF, synchronization is performed on a continuous basis, but without
assurance of full consistency between the transaction logs of both databases.

High availability operating mode: synchronous with a witness (with transaction safety set to
ON) - In this case, transactions written to the transaction log of the database on the principal are
automatically transferred to the transaction log of its mirrored copy. The principal waits for the
confirmation of each successful write from its mirror before committing the corresponding
transaction locally, which guarantees consistency between the two (following the initial
synchronization). This type of synchronous operation is the primary prerequisite for the automatic
failover - the other is the presence and proper functioning of the witness server (which means that
only the synchronous mode with a witness offers such capability). Additionally, availability of the
witness also impacts operations in cases when the mirror server fails. In such a scenario, if the
principal can still communicate with the witness, it will continue running (once the witness detects
that the mirror is back online, it will automatically trigger its resynchronization), otherwise (if both
mirror and witness are not reachable from the principal), the mirrored database is placed in the
OFFLINE mode.

High protection operating mode: synchronous without a witness (with transaction safety set to
ON) - uses the same synchronization mechanism as the first mode, however, the lack of the
witness precludes automatic failover capability. The owner of the database can perform manual
failover, as long as the principal is present, by running the ALTER DATABASE statement with the
SET PARTNER FAILOVER option from the principal. Alternately, the owner can force service to
the mirror database by running the ALTER DATABASE statement with the SET PARTNER
FORCE_SERVICE_ALLOW_DATA_LOSS option from the mirror, with potential data loss (if
databases are not in synchronized state). Unavailability of the mirror (due to server or network
link failure) causes the primary to place the mirrored database in OFFLINE mode (in order to
prevent the possibility of having two mirroring partners operating simultaneously as principals).

High performance operating mode: asynchronous without a witness (with transaction safety set
to OFF) - In this case, a transaction is committed on the principal before it is sent to its partner,
which means that it is not uncommon for the source database and its mirror to be out of synch.
However, since the process of transferring transaction log entries to the mirror is continuous, the
difference is minor. In the case of principal failure, the database owner can force service to the
mirror database, resulting in the former mirror taking on the role of the principal. Forcing the
service can result in data loss (encompassing all transaction log entries that constituted the
difference between the mirror and the principal at the time of its failure), so it should be used only
if such impact can be tolerated. Another choice when dealing with the principal failure in this
mode (which reduces possibility of data loss) is terminating the mirroring session and recovering
the database on the principal. Unlike in the synchronous mode with a witness, unavailability of the
mirror leaves the principal operational.
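The failover options described for these modes map onto two forms of ALTER DATABASE; the database name is a placeholder:

```sql
-- Manual failover: run on the principal (safety FULL, databases synchronized)
ALTER DATABASE AdventureWorks SET PARTNER FAILOVER;

-- Forced service with possible data loss: run on the mirror
-- when the principal is no longer available
ALTER DATABASE AdventureWorks SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```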

Note:

 Database mirroring is limited to only two servers.

 Mirroring with a Witness Server allows for High Availability and automatic fail over.

 You can configure your DSN string to have both mirrored servers in it so that when they
switch you notice nothing.

 While mirrored, your mirrored database cannot be accessed; it is in Synchronizing/Restoring
mode.

 Mirroring with SQL Server 2005 standard edition is not good for load balancing

Steps in Mirroring:

SQL Server 2005 - Mirror Server:

In this tutorial you will learn about Mirror Server in SQL Server 2005 - Preparing the Principal and
Mirror Server, Establishing a Mirroring Session, Establishing a Witness Server, Executing
Transactions, Simulating Principal Server Failure, Restarting the Failed Server, Terminating the
Mirror Session and Configuring Database Mirroring.

Preparing the Principal and Mirror Server:

Database mirroring is easy to set up and can be made self monitoring for automatic failover in the
event of the principal server being unavailable. The first step is to configure the relationship
between the principal server and the mirror server. This can be a synchronous mirroring with a
witness server, which provides the highest availability of the database. A drawback of this type of
configuration is that transactions must be logged on the mirror before they are committed on the
principal server, which may retard performance. Asynchronous mirroring with a witness
server provides high availability and good performance. Transactions are committed to the
principal server immediately. This configuration is useful when there is latency or distance
between the principal server and the mirror. The third type of mirroring configuration is the
Synchronous mirroring without the witness server. This guarantees that data on both servers is
always concurrent and data integrity is of a very high order. However, automatic failover cannot
occur as there are not enough servers to form a quorum decision on which server is to take the
role of the principal server and which should be the mirror server.

Establishing a Mirroring Session:

Database mirroring is done within a mirror session. A mirror session maintains information about
the state of the databases, the mirroring partners and the witness server. The mirror server
identifies the most recent transaction log record that has been applied to the mirror database and
requests for subsequent transaction log records from the principal server. This phase is called the
synchronizing phase.

Once synchronization is complete the principal server will transmit the transaction logs to the
mirror server even as changes are made. The mirror database is continually rolled forward to
match the principal database. The operating mode of the mirror database (synchronous or
asynchronous) will determine whether the transaction log records are applied to the mirror
database immediately or after the transactions have been recorded in the principal server.

The mirror session maintains information about the state of any witness servers. It ensures that
the witness server is visible both to the principal and the mirror servers.

A mirroring session can be terminated by a number of causes. There may be a communication or
server failure. The principal server may fail and the mirror may become the principal server. This
can happen automatically or manually depending on the operating mode. The session may also
be terminated by the manual intervention of the Database Administrator using the Transact-SQL
ALTER DATABASE command. Mirroring may be terminated or suspended in the process.

Establishing a Witness Server:

A witness server is a must where the DBA wants to implement automatic failover and the
configuration must be in the synchronous operating mode. The witness server is usually on a
different computer from the principal and the mirror servers. However, one server can act as a
witness for multiple mirror partnerships.

The ALTER DATABASE command with the SET WITNESS clause is used on the principal server to
designate a witness server. The witness server's network address and endpoint port are specified
in the server_network_address parameter.

A witness server can be disabled. However, the mirroring session will continue even when the
witness server is disabled. Automatic failover will no longer be possible.

Information about the witness server can be viewed in the sys.database_mirroring_witnesses
catalog view.
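Designating a witness can be sketched as follows; the server address and port are hypothetical:

```sql
-- Run on the principal once the partnership is established
ALTER DATABASE AdventureWorks
SET WITNESS = 'TCP://witnessserver.mydomain.com:7022';
```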

Executing Transactions:

The ALTER DATABASE command has to be run on the mirror server specifying the principal
server's endpoint address, and then the same has to be done on the principal server so that
synchronization can commence. The operating mode then has to be selected. By default the
operating mode is synchronous. This can be changed by running the ALTER DATABASE
command with the SET PARTNER SAFETY clause on either partner server. The safety_mode
parameter can be either OFF or FULL. The mirror partnership information can be viewed by
running a query on the sys.databases catalog view.

If the transaction safety is set to full, the principal and mirror servers operate on synchronous
transfer mode. The transaction logs are hardened in the principal server and transmitted to the
mirror and then the principal waits for the mirror to harden its logs and send its response. When
the safety is OFF the principal does not wait for the acknowledgement of the mirror. In this
instance the principal and the mirror may not be synchronized at all times.


Synchronous transfer guarantees that the mirror is a faithful image of the principal database
transaction log.

Simulating Principal Server Failure:

A principal server failure can be simulated in test scenarios to ensure that failover is smooth.
Failover implies that the mirror server takes over as the principal server and the mirror database
will have to act as the principal database. The failover can be manual, automatic or forced.

Automatic failover occurs when the operating mode is high availability (synchronous, safety
FULL) and a witness is part of the session. Manual failover occurs in the high availability and high
protection operating modes; safety has to be FULL and the partner databases synchronized.
Forced service is used primarily in the high performance mode with safety OFF.

Simulating Principal Server failure can be done by manual intervention of the DBA in an orderly
way. The safety will have to be first set to FULL and the principal and the mirror databases
synchronized. Manual failover can be invoked by invoking the ALTER DATABASE command on
the principal server or by clicking the failover button in the Database Properties/Mirroring dialog in
the Management Studio. A manual failover causes current users to be disconnected and all
unfinished transactions to roll back. These transactions will then be recovered from the redo
queue. The mirror assumes the role of the principal server and the two servers will negotiate a
new starting point for mirroring based on their mirroring failover LSN.

If the principal server is no longer operating, and safety is OFF, forced service can be resorted to.
This service causes some data loss.

Restarting the Failed Server:

A failed server can be restarted and synchronized with the principal server or the mirror server,
as the case may be. While the session is suspended, the transaction log on the principal server
grows as transactions continue to be logged and stored. Once the mirror session is resumed, the
principal transaction log is synchronized and written on to the mirror database log.

Terminating the Mirror Session:

A mirror session can be manually terminated and the relationship between the servers ended.
When a session is ended, all information about the session is removed from all servers, leaving
both the principal server and the mirror server with an independent copy of the database. The
mirror server's database will remain in the restoring state until it is manually recovered or
deleted.

Configuring Database Mirroring:

Configuring a mirror server includes configuring the mirror server and the database.

The server designated as the mirror must be accessible and trusted by the principal database
server. Ideally both servers should belong to the same domain. The mirror server should also
have sufficient memory and processing power to act as the principal server in the event of
failover. It should be able to support users and applications without noticeable difference in the
quality of service.

The mirror database must be created manually. The file structure must match the principal
database file structure. Both databases must implement full recovery model. Once the mirror
database is created, the latest full database backup of the principal database must be applied to
the mirror using the RESTORE DATABASE command with the NORECOVERY clause.

The next step is to enable the communication mechanism through which the mirroring will take
place. This implies creation of endpoints on both servers. The endpoint controls the Transmission
Control Protocol (TCP) port on which the server listens for database mirroring messages. The
endpoint also defines the role that it must perform. A server needs to have only one configured
endpoint regardless of the number of mirroring sessions it participates in. However, each instance
requires a unique port on which to listen.
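The ports in use by existing endpoints can be checked in the sys.tcp_endpoints catalog view, for example:

```sql
-- Lists TCP endpoints, their state, and the port each listens on
SELECT name, state_desc, port
FROM sys.tcp_endpoints;
```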

The next step is to establish a mirroring session. The process of establishing a mirroring session
has been discussed above. It involves creating a mirroring session using the ALTER DATABASE
command on the mirror server first and then on the principal server. The server_network_address
parameter will have to be specified. Then a partnership will have to be created on the mirror
server, the operating mode will have to be changed and so on.

Preparing for mirroring:

To prepare for database mirroring, the user has to perform three configuration steps:

1. Configuring security and communication between instances: To establish a database
mirroring connection, SQL Server uses endpoints to specify the connection between servers. SQL
Server performs authentication over the endpoints. This can be achieved by using Windows
authentication or certificate-based authentication. If a witness server is also in the picture, then
we need to specify the communication and authentication between the principal and the witness
and between the mirror and the witness. Since we will be creating the endpoint for database
mirroring, only TCP can be used as the transport protocol. Each database mirroring endpoint listens
on a unique TCP port number.
The endpoints can be created with the CREATE ENDPOINT TSQL statement.

Syntax:

CREATE ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
AS { HTTP | TCP } (
   <protocol_specific_arguments>
)
FOR { SOAP | TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
   <language_specific_arguments>
)

<AS TCP_protocol_specific_arguments> ::=
AS TCP (
   LISTENER_PORT = listenerPort
   [ [ , ] LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" ) ]
)

<FOR DATABASE_MIRRORING_language_specific_arguments> ::=
FOR DATABASE_MIRRORING (
   [ AUTHENTICATION = {
        WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
      | CERTIFICATE certificate_name
      | WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
      | CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
   } ]
   [ [ , ] ENCRYPTION = { DISABLED | { { SUPPORTED | REQUIRED }
        [ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] } }
   ]
   [ , ] ROLE = { WITNESS | PARTNER | ALL }
)

Authentication= <authentication_options>

WINDOWS [{NTLM | KERBEROS | NEGOTIATE}]


Specifies the TCP/IP authentication requirements for connections to this endpoint. The default is
WINDOWS. Along with the authentication, the user has to mention the authorization method
(NTLM or Kerberos). By default, the NEGOTIATE option is set, which will cause the endpoint to
negotiate between NTLM and Kerberos.

CERTIFICATE certificate_name
The user can also specify that the endpoint has to authenticate using a certificate. This can be
done by specifying the CERTIFICATE keyword and the name of the certificate. For certificate-
based authentication, the endpoint must have the certificate with the matching public key.

WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name


Specifies that endpoint has to first try to connect by using Windows Authentication and, if that
attempt fails, to then try using the specified certificate.

CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]


Specifies that endpoint has to first try to connect by using the specified certificate and, if that
attempt fails, to then try using Windows Authentication.

Encryption

Next, we will take a look at the encryption option. By default, database mirroring uses RC4
encryption.


ENCRYPTION = { DISABLED | SUPPORTED | REQUIRED } [ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ]
Specifies whether encryption is used in the process. The default is REQUIRED.

Encryption options:

Option      Description
DISABLED    Data sent over a connection is not encrypted.
SUPPORTED   Data is encrypted only if the opposite endpoint specifies either SUPPORTED
            or REQUIRED.
REQUIRED    Connections to this endpoint must use encryption. Therefore, to connect to
            this endpoint, the other endpoint must have ENCRYPTION set to either
            SUPPORTED or REQUIRED.

Encryption algorithms:

Option      Description
RC4         The endpoint must use the RC4 algorithm. This is the default.
AES         The endpoint must use the AES algorithm.
AES RC4     The two endpoints negotiate an encryption algorithm, with this endpoint
            giving preference to the AES algorithm.
RC4 AES     The two endpoints negotiate an encryption algorithm, with this endpoint
            giving preference to the RC4 algorithm.

RC4 is a relatively weak algorithm, and AES is a relatively strong algorithm. But AES is
considerably slower than RC4. If security is a higher priority than speed, then AES is
recommended.

Role:

We have to specify the endpoint’s role in the Database mirroring option. Role can be Partner,
Witness or All. Using the ALL keyword as the role specifies that the mirroring endpoint can be
used for witness as well as for a partner in the database mirroring scenario.

We can inspect the database mirroring endpoints on a server by querying the
sys.database_mirroring_endpoints catalog view:

SELECT *
FROM sys.database_mirroring_endpoints;

2. Creating the Mirror Database:

To create a mirror database, we have to restore a full backup of the principal, followed by all other
backups (such as transaction log backups) taken on the principal before the session is established.
The NORECOVERY option has to be used when restoring from backup so that the mirror database
remains in the restoring (unusable) state. The mirror database must have the same name as the
principal database.
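As a sketch (the backup file names and paths below are assumptions), the restore sequence on the mirror server would look like this:

```sql
-- Restore the principal's full backup on the mirror server,
-- leaving the database in the restoring (mirror-ready) state
RESTORE DATABASE AdventureWorks
    FROM DISK = 'D:\Backups\AdventureWorks_full.bak'
    WITH NORECOVERY;

-- Apply any log backups taken since the full backup, also WITH NORECOVERY
RESTORE LOG AdventureWorks
    FROM DISK = 'D:\Backups\AdventureWorks_log.trn'
    WITH NORECOVERY;
```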

3. Establishing a mirror session:


The next step in setting up database mirroring is to set up the mirror session on the database by
identifying the mirroring partners. We have to identify the partners involved in the mirroring
process on the principal database and on the mirror database.

Let us consider an example.

We will take AdventureWorks as the sample database. This database uses the simple recovery model
by default. To use database mirroring with this database, we must alter it to use the full recovery
model.

USE master;
GO
ALTER DATABASE AdventureWorks
SET RECOVERY FULL;
GO

We have two server instances that act as partners (principal and mirror) and one server
instance that acts as witness. These three instances are located on different computers. The
three server instances run in the same Windows domain, but the user account is different for the
example's witness server instance.

1. Create an endpoint on the principal server instance

CREATE ENDPOINT Endpoint_Mirroring


STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=PARTNER)
GO
--Partners under same domain user; login already exists in master.
--Create a login for the witness server instance,
--which is running as XYZ\witnessuser:
USE master ;
GO
CREATE LOGIN [XYZ\witnessuser] FROM WINDOWS ;
GO
-- Grant connect permissions on endpoint to login account of witness.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [XYZ\witnessuser];
GO

2.Create an endpoint on the mirror server instance

CREATE ENDPOINT Endpoint_Mirroring


STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=ALL)
GO
--Partners under same domain user; login already exists in master.
--Create a login for the witness server instance,
--which is running as XYZ\witnessuser:

USE master ;
GO
CREATE LOGIN [XYZ\witnessuser] FROM WINDOWS ;
GO
--Grant connect permissions on endpoint to login account of witness.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [XYZ\witnessuser];
GO

3. Create an endpoint on the witness server instance

CREATE ENDPOINT Endpoint_Mirroring


STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=WITNESS)
GO
--Create a login for the partner server instances,
--which are both running as Mydomain\dbousername:
USE master ;
GO
CREATE LOGIN [Mydomain\dbousername] FROM WINDOWS ;
GO
--Grant connect permissions on endpoint to login account of partners.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [Mydomain\dbousername];
GO

4. Create the mirror database. Refer to step 2 in the “Preparing for Mirroring” block.

5. Configure the principal as the partner on the mirror.

ALTER DATABASE <Database_Name>
SET PARTNER = '<server_network_address>'
GO

The syntax for a server network address is of the form:

TCP://<system-address>:<port>

Where,
<system-address> is a string that unambiguously identifies the destination computer system.
Typically, the server address is a system name (if the systems are in the same domain), a fully
qualified domain name, or an IP address.

<port> is the port number used by the mirroring endpoint of the partner server instance.

A database mirroring endpoint can use any available port on the computer system. Each port
number on a computer system must be associated with only one endpoint, and each endpoint is
associated with a single server instance; thus, different server instances on the same server
listen on different endpoints with different ports. In the server network address of a server
instance, only the number of the port associated with its mirroring endpoint distinguishes that
instance from any other instances on the computer.

Example:

ALTER DATABASE AdventureWorks
SET PARTNER =
'TCP://PARTNERHOST1.COM:7022'
GO

6. Configure the mirror as the partner on the principal.

ALTER DATABASE AdventureWorks
SET PARTNER = 'TCP://PARTNERHOST5.COM:7022'
GO

7. On the principal server, set the witness

ALTER DATABASE AdventureWorks
SET WITNESS =
'TCP://WITNESSHOST4.COM:7022'
GO

Switching Roles:

When the principal server fails, we have to switch roles over to the mirror so that the mirror
becomes the principal database. This concept is called role switching. The three options for role
switching are:

1. Automatic failover: - When a witness server is present in the database mirroring session,
automatic failover occurs when the principal database becomes unavailable and
the witness server confirms this. During the automatic failover, the mirror is
automatically promoted to principal, and whenever the former principal comes back online, it
automatically takes the role of mirror.

2. Manual Failover: - The user can perform manual failover only if both the principal and mirror
are alive and in synchronized status. DBAs use this operation most frequently to perform
maintenance tasks on the principal. The failover is initiated from the principal and later the roles
are reverted after the database maintenance job is done.

The statement used to switch database roles (manual failover) is shown below:

ALTER DATABASE AdventureWorks SET PARTNER FAILOVER

3. Forced Service: - When a witness server is not used and the principal database goes down
unexpectedly, the user has to initiate a manual failover to the mirror. In the asynchronous mode of
operation, the user has no way of knowing whether the transactions that were committed on the
principal have made it to the mirror. In this scenario, when the user wants to switch roles,
there is a possibility of losing data.

To achieve this, we need to invoke an ALTER DATABASE statement as shown below:

ALTER DATABASE AdventureWorks SET PARTNER
FORCE_SERVICE_ALLOW_DATA_LOSS


REPLICATION

IT IS THE ACT OF COPYING OR REPLICATING DATA FROM ONE TABLE OR DATABASE TO
ANOTHER TABLE OR DATABASE.

USING THIS TECHNOLOGY, WE CAN DISTRIBUTE COPIES OF AN ENTIRE DATABASE TO


MULTIPLE SYSTEMS THROUGHOUT OUR COMPANY, OR WE CAN DISTRIBUTE
SELECTED PIECES OF THE DATABASE.

WHEN SQL SERVER REPLICATION TECHNOLOGY IS USED THE TASK OF COPYING AND
DISTRIBUTING DATA IS AUTOMATED.

NO USER INTERVENTION IS NEEDED TO REPLICATE DATA ONCE REPLICATION HAS


BEEN SET UP AND CONFIGURED.

IF A FAILURE OCCURS DURING REPLICATION, THE OPERATION RESUMES AT THE POINT OF
FAILURE.

REPLICATION COMPONENTS

1. PUBLISHERS:
A PUBLISHER CONSISTS OF A MICROSOFT WINDOWS SERVER HOSTING A SQL SERVER
DATABASE. THIS DATABASE PROVIDES DATA TO BE REPLICATED TO OTHER SYSTEMS.
THE PUBLISHER ALSO MAINTAINS INFORMATION ABOUT WHICH DATA IS CONFIGURED FOR
REPLICATION. A REPLICATED ENVIRONMENT CAN HAVE MULTIPLE DESTINATIONS
WHERE REPLICATION IS DONE, BUT ANY GIVEN SET OF DATA CONFIGURED FOR
REPLICATION CAN HAVE ONLY ONE PUBLISHER.

HAVING ONLY ONE PUBLISHER FOR A PARTICULAR SET OF DATA DOES NOT MEAN
THAT THE PUBLISHER IS THE ONLY COMPONENT THAT CAN MODIFY THE DATA; THE
DESTINATION LOCATIONS CAN ALSO MODIFY AND EVEN REPUBLISH THE DATA.

2. DISTRIBUTORS:
IN ADDITION TO CONTAINING THE DISTRIBUTION DATABASE, SERVERS ACTING AS
DISTRIBUTORS STORE METADATA, HISTORY DATA, AND OTHER INFORMATION. IN MANY
CASES THE DISTRIBUTOR IS ALSO RESPONSIBLE FOR DISTRIBUTING THE REPLICATION
DATA TO DESTINATIONS. THE PUBLISHER AND DISTRIBUTOR ARE NOT REQUIRED TO BE
ON THE SAME SERVER.

3. SUBSCRIBERS:
THESE ARE THE DATABASE SERVERS THAT STORE THE REPLICATED DATA AND
RECEIVE UPDATES. SUBSCRIBERS CAN ALSO MAKE UPDATES AND SERVE AS
PUBLISHERS TO OTHER SYSTEMS.

META DATA:

1. IS DATA ABOUT DATA.
2. IS USED IN REPLICATION TO KEEP TRACK OF THE STATE OF REPLICATION.
3. IS ALSO THE DATA THAT IS PROPAGATED BY THE DISTRIBUTOR TO OTHER
MEMBERS OF THE REPLICATION SET AND INCLUDES INFORMATION ABOUT THE
STRUCTURE OF DATA AND THE PROPERTIES OF DATA, SUCH AS THE TYPE OF DATA IN
A COLUMN.

TYPES OF REPLICATION

1. SNAPSHOT REPLICATION
2. TRANSACTIONAL REPLICATION
3. MERGE REPLICATION

REPLICATION DATA

1. ARTICLES
2. PUBLICATIONS

REPLICATION AGENTS

1. SNAPSHOT AGENT
2. LOG READER AGENT
3. DISTRIBUTION AGENT
4. MERGE AGENT
5. QUEUE READER AGENT


TYPES OF REPLICATION

SNAPSHOT REPLICATION

IS THE SIMPLEST REPLICATION TYPE.

WITH SNAPSHOT REPLICATION, A PICTURE, OR SNAPSHOT, OF THE DATABASE IS
TAKEN PERIODICALLY AND PROPAGATED TO SUBSCRIBERS.
THE MAIN ADVANTAGE OF THIS REPLICATION IS THAT IT DOES NOT INVOLVE
CONTINUOUS OVERHEAD ON THE PUBLISHER AND SUBSCRIBER.

TRANSACTIONAL REPLICATION

THIS CAN BE USED TO REPLICATE CHANGES TO THE DATABASE.


WITH THIS ANY CHANGES MADE TO THE REPLICATION DATA ARE IMMEDIATELY
CAPTURED FROM THE TRANSACTION LOG AND PROPAGATED TO THE DISTRIBUTORS.

USING THIS WE CAN KEEP BOTH PUBLISHER AND SUBSCRIBER IN THE SAME STATE.

TRANSACTIONAL REPLICATION SHOULD BE USED WHEN IT IS IMPORTANT TO KEEP ALL


OF THE REPLICATED SYSTEMS CURRENT.

THIS USES MORE SYSTEM OVERHEAD THAN SNAPSHOT REPLICATION BECAUSE IT
INDIVIDUALLY APPLIES EACH TRANSACTION THAT CHANGES DATA IN THE SYSTEM TO THE
REPLICATED DATA.

HOWEVER, THIS KEEPS THE SYSTEMS UP-TO-DATE.

MERGE REPLICATION

IS SIMILAR TO TRANSACTIONAL REPLICATION IN THAT IT KEEPS TRACK OF THE
CHANGES MADE TO THE REPLICATION DATA.

HOWEVER, INSTEAD OF PROPAGATING EACH TRANSACTION THAT MAKES CHANGES
INDIVIDUALLY, THIS PERIODICALLY TRANSMITS A BATCH OF CHANGES.

PUBLICATION

IS A SET OF ARTICLES GROUPED TOGETHER AS A UNIT.

THIS PROVIDES THE MEANS TO REPLICATE A LOGICAL GROUPING OF ARTICLES AS ONE
REPLICATION OBJECT.

A PUBLICATION CAN CONSIST OF A SINGLE ARTICLE, BUT IT ALMOST ALWAYS


CONTAINS MORE THAN ONE.

PUSH SUBSCRIPTIONS

IN THIS TYPE OF REPLICATION, THE DISTRIBUTOR IS RESPONSIBLE FOR PROVIDING
UPDATES TO THE SUBSCRIBERS.

UPDATES ARE INITIATED WITHOUT ANY REQUEST FROM THE SUBSCRIBER.

THIS IS USEFUL WHEN CENTRALIZED ADMINISTRATION IS DESIRED BECAUSE THE


DISTRIBUTOR, RATHER THAN MULTIPLE SUBSCRIBERS, CONTROLS AND ADMINISTERS
REPLICATION

PULL SUBSCRIPTIONS

THIS ALLOWS SUBSCRIBERS TO INITIATE REPLICATION.

REPLICATION CAN BE INITIATED EITHER VIA A SCHEDULED TASK OR MANUALLY.

PULL SUBSCRIPTIONS ARE USEFUL IF WE HAVE A LARGE NUMBER OF SUBSCRIBERS
AND IF THE SUBSCRIBERS ARE NOT ALWAYS ATTACHED TO THE NETWORK.

SUBSCRIBERS THAT ARE NOT ALWAYS CONNECTED TO THE NETWORK CAN PERIODICALLY
CONNECT AND REQUEST REPLICATION DATA. THIS CAN BE USEFUL IN REDUCING THE
NUMBER OF CONNECTION ERRORS REPORTED, BECAUSE IF THE DISTRIBUTOR TRIES TO
INITIATE REPLICATION TO A SUBSCRIBER THAT DOES NOT RESPOND, AN ERROR WILL BE
REPORTED. THUS, IF REPLICATION IS INITIATED ON THE SUBSCRIBER ONLY WHEN IT IS
ATTACHED, NO ERRORS WILL BE REPORTED.

REPLICATION AGENTS

SNAPSHOT AGENT

IS USED FOR CREATING AND PROPAGATING THE SNAPSHOTS FROM THE PUBLISHER
TO THE DISTRIBUTOR.
THIS CREATES THE REPLICATION DATA (SNAPSHOT DATA) AND CREATES
INFORMATION THAT IS USED BY THE DISTRIBUTION AGENT TO PROPAGATE THAT DATA
[META DATA]

1. THE SNAPSHOT AGENT ESTABLISHES A CONNECTION FROM THE DISTRIBUTOR TO
THE PUBLISHER.
2. IT GENERATES A COPY OF THE SCHEMA FOR EACH ARTICLE AND STORES THAT
INFO IN THE DISTRIBUTION DATABASE.
3. IT TAKES A SNAPSHOT OF THE ACTUAL DATA ON THE PUBLISHER AND WRITES
IT TO A FILE AT THE SNAPSHOT LOCATION.
4. AFTER THE DATA HAS BEEN COPIED, THE AGENT UPDATES INFO IN THE
DISTRIBUTION DATABASE.
5. THEN IT RELEASES THE LOCKS.


LOG READER AGENT

THIS IS USED IN TRANSACTIONAL REPLICATION TO EXTRACT CHANGED INFO
FROM THE TRANSACTION LOG ON THE PUBLISHER IN ORDER TO REPLICATE THESE
COMMANDS TO THE DISTRIBUTION DATABASE.

EACH DATABASE THAT USES TRANSACTIONAL REPLICATION HAS ITS OWN LOG
READER AGENT ON THE PUBLISHER.

QUEUE READER AGENT

THIS IS USED TO PROPAGATE CHANGES MADE TO SUBSCRIBERS OF SNAPSHOT OR
TRANSACTIONAL REPLICATION THAT HAVE BEEN CONFIGURED WITH THE QUEUED
UPDATING OPTION.

THIS OPTION ALLOWS CHANGES TO BE MADE ON THE SUBSCRIBER WITHOUT THE


NEED TO USE A DISTRIBUTED TRANSACTION.

DISTRIBUTION AGENT

THIS PROPAGATES SNAPSHOTS AND TRANSACTIONS FROM THE DISTRIBUTION
DATABASE TO SUBSCRIBERS. EACH PUBLICATION HAS ITS OWN DISTRIBUTION AGENT.
IF YOU ARE USING A PUSH SUBSCRIPTION, THE DISTRIBUTION AGENT RUNS ON THE
DISTRIBUTOR. IF YOU ARE USING A PULL SUBSCRIPTION, IT RUNS ON THE SUBSCRIBER.

CONFIGURING THE REPLICATION

1. CONFIGURING SNAPSHOT REPLICATION
2. ENABLING THE SUBSCRIBERS
3. CONFIGURING PULL SUBSCRIPTIONS
4. CONFIGURING PUSH SUBSCRIPTIONS
5. REMOVING SNAPSHOT REPLICATION
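These wizard-based steps can also be scripted with the standard replication stored procedures. The sketch below is illustrative only: the database, publication, and table names are assumptions, and a distributor must already be configured.

```sql
-- Enable the database for publishing (illustrative names throughout)
USE master;
EXEC sp_replicationdboption
    @dbname  = 'AdventureWorks',
    @optname = 'publish',
    @value   = 'true';
GO

-- Create a snapshot publication and add one table as an article
USE AdventureWorks;
EXEC sp_addpublication
    @publication = 'SnapshotPub',
    @repl_freq   = 'snapshot';
EXEC sp_addarticle
    @publication   = 'SnapshotPub',
    @article       = 'Customer',
    @source_object = 'Customer';
GO
```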

TUNING FOR SNAPSHOT REPLICATION

I/O PERFORMANCE ON PUBLISHER

BECAUSE THE ENTIRE DATABASE IS COPIED FROM THE PUBLISHER, THE PERFORMANCE
OF THE I/O SUBSYSTEM ON THE PUBLISHER CAN BE A LIMITING FACTOR.

I/O PERFORMANCE ON DISTRIBUTOR
THIS RECEIVES LARGE AMOUNTS OF DATA AT ONE TIME, AND AT SOME LATER TIME,
IT DISTRIBUTES THAT DATA. A SLOW I/O SUBSYSTEM ON THE DISTRIBUTOR WILL
BOG DOWN THE SNAPSHOT CREATION PROCESS.

I/O PERFORMANCE ON SUBSCRIBER
THE DISTRIBUTOR ATTEMPTS TO DISTRIBUTE ALL OR A SUBSET OF A DATABASE TO THE
SUBSCRIBER ALL AT ONCE.

BANDWIDTH OF NETWORK
WHEN LARGE AMOUNTS OF DATA ARE BEING TRANSFERRED, A BOTTLENECK CAN EASILY
OCCUR ON THE NETWORK.

CONFIGURING TRANSACTIONAL REPLICATION

1. CONFIGURING PULL SUBSCRIPTIONS
2. CONFIGURING PUSH SUBSCRIPTIONS

CONFIGURING MERGE REPLICATION SYSTEM

1. CONFIGURING PULL AND PUSH SUBSCRIPTIONS

CLUSTERING

UPGRADATION

PERFORMANCE TUNING

DATABASE STATES:
A database is always in one specific state. For example, these states include ONLINE, OFFLINE,
or SUSPECT. To verify the current state of a database, select the state_desc column in the
sys.databases catalog view or the Status property in the DATABASEPROPERTYEX function.

Database State Definitions


The following table defines the database states.

State Definition

ONLINE Database is available for access. The primary file group is online, although
the undo phase of recovery may not have been completed.
OFFLINE Database is unavailable. A database becomes offline by explicit user action
and remains offline until additional user action is taken. For example, the
database may be taken offline in order to move a file to a new disk. The
database is then brought back online after the move has been completed.
RESTORING One or more files of the primary file group are being restored, or one or more
secondary files are being restored offline. The database is unavailable.
RECOVERING Database is being recovered. The recovering process is a transient state; the
database will automatically become online if the recovery succeeds. If the
recovery fails, the database will become suspect. The database is
unavailable.
RECOVERY PENDING SQL Server has encountered a resource-related error during recovery.
The database is not damaged, but files may be missing or system resource
limitations may be preventing it from starting. The database is unavailable.
Additional action by the user is required to resolve the error and let the
recovery process be completed.
SUSPECT At least the primary file group is suspect and may be damaged. The database
cannot be recovered during startup of SQL Server. The database is
unavailable. Additional action by the user is required to resolve the problem.
EMERGENCY User has changed the database and set the status to EMERGENCY. The
database is in single-user mode and may be repaired or restored. The
database is marked READ_ONLY, logging is disabled, and access is limited
to members of the sysadmin fixed server role. EMERGENCY is primarily
used for troubleshooting purposes. For example, a database marked as
suspect can be set to the EMERGENCY state. This could permit the system
administrator read-only access to the database. Only members of the
sysadmin fixed server role can set a database to the EMERGENCY state.
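The two methods mentioned above for verifying a database's state can be used as follows:

```sql
-- List the current state of every database on the instance
SELECT name, state_desc
FROM sys.databases;

-- Check the status of a single database
SELECT DATABASEPROPERTYEX('AdventureWorks', 'Status');
```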

MAIL CONFIGURATION

SNAPSHOTS

To create a database snapshot:

1. Based on the current size of the source database, ensure that you have sufficient disk
space to hold the database snapshot. The maximum size of a database snapshot is the size
of the source database at snapshot creation.

2. Issue a CREATE DATABASE statement on the files using the AS SNAPSHOT OF


clause. Creating a snapshot requires specifying the logical name of every database file of
the source database. For a formal description of the syntax for creating a database
snapshot, see CREATE DATABASE (Transact-SQL).

Note:
When you create a database snapshot, log files, offline files, restoring files, and defunct files are
not allowed in the CREATE DATABASE statement.

Example
This section contains examples of creating a database snapshot.

A. Creating a snapshot on the AdventureWorks database

This example creates a database snapshot on the AdventureWorks database. The snapshot
name, AdventureWorks_dbss1800, and the file name of its sparse file,
AdventureWorks_data_1800.ss, indicate the creation time, 6 P.M. (1800 hours).

CREATE DATABASE AdventureWorks_dbss1800 ON
( NAME = AdventureWorks_Data, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_data_1800.ss' )
AS SNAPSHOT OF AdventureWorks;
GO
Note:
The .ss extension used in the examples is arbitrary.

B. Creating a snapshot on the Sales database
This example creates a database snapshot, sales_snapshot1200, on the Sales database. This
database was created in the example, "Creating a database that has file groups," in CREATE
DATABASE (Transact-SQL).

--Creating sales_snapshot1200 as snapshot of the


--Sales database:
CREATE DATABASE sales_snapshot1200 ON
( NAME = SPri1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SPri1dat_1200.ss'),
( NAME = SPri2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SPri2dt_1200.ss'),
( NAME = SGrp1Fi1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SG1Fi1dt_1200.ss'),
( NAME = SGrp1Fi2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SG1Fi2dt_1200.ss'),
( NAME = SGrp2Fi1_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SG2Fi1dt_1200.ss'),
( NAME = SGrp2Fi2_dat, FILENAME =
'C:\Program Files\Microsoft SQL Server\mssql.1\mssql\data\SG2Fi2dt_1200.ss')
AS SNAPSHOT OF Sales
GO

Database snapshots:
SQL Server 2005 introduced the concept of a snapshot, or a read-only, static view of a database.
Snapshots are primarily created in order to supply a read-only version of a database for reporting
purposes. However, they do function in a similar way to backups. The one primary difference is
that all uncommitted transactions are rolled back. There is no option for rolling forward, capturing
logs, etc., that backups provide, nor are very many SQL Server resources used at all. Rather,
disk technology is used to create a copy of the data. Because of this they are much faster than
backups both to create and restore.
NOTE:
For more details on SQL 2005 Snapshot, please refer to
http://www.simple-talk.com/sql/database-administration/sql-server-2005-snapshots/.
A good use of snapshots, in addition to reporting, might be to create one prior to maintenance
after you've already removed all the active users (and their transactions) from the system. While
snapshots don't support the volatility of live backups, their speed and ease of recovery make
them a great tool for quick recovery from a botched rollout. Snapshots are stored on the server, so you
must make sure you've got adequate storage.
The syntax is different because you're not backing up a database; you're creating a new one:
CREATE DATABASE Adventureworks_ss1430
ON (NAME = AdventureWorks_Data,
FILENAME = 'C:\Backups\AdventureWorks_data_1430.ss')
AS SNAPSHOT OF AdventureWorks;
Now it will be accessible for read-only access. Since we're primarily concerned with using this as
a backup mechanism, let's include the method for reverting a database to a database snapshot.
First, identify the snapshot you wish to use. If there is more than one on any database that you're
going to revert, you'll need to delete all except the one you are using:
DROP DATABASE Adventureworks_ss1440;

Then you can revert the database by running a RESTORE statement (mixed metaphors, not
good):
RESTORE DATABASE Adventureworks
FROM DATABASE_SNAPSHOT = Adventureworks_ss1430;
That's it. On my system, creating the database snapshot of Adventureworks took 136 ms. The
full backup took 5,670 ms. The restore of the snapshot took 905 ms and the database restore took
13,382 ms. Incorporating this into a production rollout process could result in significant benefits.
Again, it's worth noting that there are some caveats to using the snapshot. You have to have
enough disk space for a second copy of the database. You need to be careful dealing with
snapshots since most of the syntax is similar to that used by databases themselves. Last, while
there are snapshots attached to a database you cannot run a restore from a database backup of
that database.

Best practices
The manner in which you perform database backups should not be a technical decision. It should
be dictated by the business. Small systems with low transaction rates and/or reporting systems
that are loaded regularly will only ever need a full database backup. Medium sized systems and
large systems become dependent on the type of data managed to determine what types of
backup are required.
For a medium sized system, a daily backup with log backups during the day would probably
answer most data requirements in a timely manner.
For a large database the best approach is to mix and match the backups to ensure maximum
recoverability in minimum time. For example, run a weekly full backup. Twice a day during the
week, run a differential backup. Every 10 minutes during the day, run a log backup. This gives
you a large number of recovery mechanisms.
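As a sketch, the weekly/daily/10-minute scheme above corresponds to three plain BACKUP statements (the disk paths here are assumptions; in practice each would be scheduled as a SQL Server Agent job):

```sql
-- Weekly full backup
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_full.bak';

-- Twice-daily differential backup (changes since the last full backup)
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_diff.bak'
WITH DIFFERENTIAL;

-- Log backup every 10 minutes (requires the full recovery model)
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_log.trn';
```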
For very large databases, you'll need to get into running filegroup and file backups because doing
a full backup or even a differential backup of the full database may not be possible. A number of
additional functions are available to help out in this area, but I won't be going into them here.
You should take the time to develop some scripts for running your backups and restores. A
naming convention so you know what database, from which server, from which date, in what
specific backup and format will be very conducive to your sanity. A common location for backups,
log, full or incremental, should be defined. Everyone responsible should be trained in both backup
and recovery and troubleshooting the same. There are many ways of doing this, but you can find
a few suggestions in Pop backs up and Pop Restores.
The real test is to run your backup mechanisms and then run a restore. Then try a different type
of restore, and another, and another. Be sure that, not only have you done due diligence in
defining how to backup the system, but that you've done the extra step of ensuring that you can
recover those backups. If you haven't practiced this and documented the practice and then tested
the document, in effect, you're not ready for a disaster.

Summary
Backups within your enterprise should be like voting in Chicago, early and often. Setting up basic
backups is quite simple. Adding on log backups and differentials is easy as well. Explore the
options to see how to add in file and file group backups and restores to increase the speed of
your backups and restores both of which will increase system availability and up time. Keep a
common naming standard. Be careful when using snapshots, but certainly employ them. Store
your files in a standard location between servers. Practice your recoveries. Finally, to really make
your backups sing, pick up a copy of Red Gate's SQL Backup, which speeds up backups and
compresses them, using less disk space and time.


DBA ACTIVITIES (DAILY\WEEKLY\MONTHLY)

DBA Responsibilities

• Installation, configuration and upgrading of Microsoft SQL Server/MySQL/Oracle server
software and related products.
• Evaluate MSSQL/MySQL/Oracle features and MSSQL/MySQL/Oracle related products.
• Establish and maintain sound backup and recovery policies and procedures.
• Take care of the Database design and implementation.
• Implement and maintain database security (create and maintain users and roles, assign
privileges).
• Database tuning and performance monitoring.
• Application tuning and performance monitoring.
• Setup and maintain documentation and standards.
• Plan growth and changes (capacity planning).
• Work as part of a team and provide 24×7 support when required.
• Do general technical trouble shooting and give consultation to development teams.
• Interface with MSSQL/MySQL/Oracle for technical support.
• ITIL Skill set requirement (Problem Management/Incident Management/Chain
Management etc)

Types of DBA

1. Administrative DBA – Works on maintaining the server and keeping it running.
Concerned with backups, security, patches, replication, etc. Things that concern the
actual server software.
2. Development DBA - works on building queries, stored procedures, etc. that meet
business needs. This is the equivalent of the programmer. You primarily write T-SQL.
3. Architect – Design schemas. Build tables, FKs, PKs, etc. Work to build a structure that
meets the business needs in general. The design is then used by developers and
development DBAs to implement the actual application.
4. Data Warehouse DBA - Newer role, but responsible for merging data from multiple
sources into a data warehouse. May have to design warehouse, but cleans,
standardizes, and scrubs data before loading. In SQL Server, this DBA would use DTS
heavily.
5. OLAP DBA – Builds multi-dimensional cubes for decision support or OLAP systems. Here
the primary language in SQL Server is MDX, not SQL.

6. Application DBA – Application DBAs straddle the fence between the DBMS and the application
software and are responsible for ensuring that the application is fully optimized for the database
and vice versa. They usually manage all the application components that interact with the
database and carry out activities such as application installation and patching, application
upgrades, database cloning, building and running data cleanup routines, data load process
management, etc.
Daily activities:
1. Check OS Event Logs, SQL Server Logs, and Security Logs for unusual events.
2. Verify that all scheduled jobs have run successfully.
3. Confirm that backups have been made and successfully saved to a secure location.
4. Monitor disk space to ensure your SQL Servers won’t run out of disk space.
5. Throughout the day, periodically monitor performance using both System Monitor and
Profiler.

6. Use Enterprise Manager/Management Studio to monitor and identify blocking issues.
7. Keep a log of any changes you make to servers, including documentation of any
performance issues you identify and correct.
8. Create SQL Server alerts to notify you of potential problems, and have them emailed
to you. Take actions as needed.
9. Run the SQL Server Best Practices Analyzer on each of your server’s instances on a
periodic basis.
10. Take some time to learn something new as a DBA to further your professional
development.
11. Verify the Backups and Backup file size
12. Verifying Backups with the RESTORE VERIFYONLY Statement
13. In OFF-Peak Hours run the database consistency checker commands if possible
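Items 11–13 above can be scripted; for example (the backup file path is an assumption):

```sql
-- Verify that a backup file is complete and readable without restoring it
RESTORE VERIFYONLY
FROM DISK = 'D:\Backups\AdventureWorks_full.bak';

-- Off-peak consistency check of a database
DBCC CHECKDB ('AdventureWorks');
```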

Creating a Database Maintenance Plan:

To avoid all the labor of creating multiple jobs for multiple databases, use the Maintenance Plan
Wizard. You can use this handy tool to create jobs for all the standard maintenance tasks
required to keep your database system running smoothly.
Follow these steps to create a database maintenance plan:
1. In SQL Server Management Studio, expand Management, right-click Maintenance Plans, and
select Maintenance Plan Wizard.
2. On the welcome screen, click the Next button.

3. On the Select a Target Server screen, enter Maintenance Plan 1 in the Name box, enter a
description if you’d like, select your default instance of SQL Server, and click Next.


4. On the Select Maintenance Tasks screen, check the boxes for all the available tasks except
Execute SQL Server Agent Job, and click Next.

5. On the next screen, you can set the order in which these tasks are performed. Leave the
default, and click Next.


6. The next screen allows you to select the databases on which you want to perform integrity
checks. When you click the drop-down list, you’ll see several choices:
_ All Databases: This encompasses all databases on the server in the same plan.
_ All System Databases: This choice affects only the master, model, and MSDB databases.
_ All User Databases: This affects all databases (including AdventureWorks) except the system
databases.
_ These Databases: This choice allows you to be selective about which databases to include in
your plan.
For this task, select All Databases, click OK, and then click Next

7. On the Define Shrink Database Task screen, select All Databases, click OK, and then click
Next.


8. On the Define Reorganize Index Task screen, select All Databases from the Databases drop-
down list, click OK, and then click Next.

9. The Define Rebuild Index Task screen gives you a number of options for rebuilding your
indexes:
_ Reorganize Pages with the Default Amount of Free Space: This regenerates pages with their
original fill factor.
_ Change Free Space per Page Percentage To: This creates a new fill factor. If you set this to 10,
for example, your pages will contain 10 percent free space


10. Next comes the Define Update Statistics Task screen. Again, select All Databases, click OK,
and then click Next.

11. Next is the Define History Cleanup Task screen. All the tasks performed by the maintenance
plan are logged in the MSDB database. This list is referred to as the history, and it can become
quite large if you don’t prune it occasionally. On this screen, you can set when and how the
history is cleared from the database so you can keep it in check. Again, accept the defaults, and
click Next.

12. The next screen allows you to control how full backups are performed. Select All Databases
from the drop-down list, accept the defaults, click OK, and then click Next.

13. The next screen allows you to control how differential backups are performed. Select All
Databases from the drop-down list, accept the defaults, click OK, and then click Next.


14. The next screen allows you to control how transaction log backups are performed. Select All
Databases from the drop-down list, accept the defaults, click OK, and then click Next.

15. On the Select Plan Properties screen, click the Change button to create a schedule for the
job.
16. Enter Maintenance Plan 1 Schedule for the schedule name, accept the rest of the defaults,
and click OK to create the schedule.


17. Click Next to continue.

18. On the Select Report Options screen, you can write a report to a text file every time the job
runs, and you can e-mail the report to an operator. In this case, write a report to C:\, and click
Next.

19. On the next screen, you can view a summary of the tasks to perform. Click Finish to create
the maintenance plan.

20. Once SQL Server is finished creating the maintenance plan, you can click Close.
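Behind the scenes, each wizard task maps to ordinary T-SQL maintenance commands. The sketch below shows hedged equivalents of the main tasks; the table name, backup path, and retention period are examples only, not values from the wizard walkthrough above:

```sql
-- Integrity check (Check Database Integrity task)
DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS;

-- Reorganize or rebuild indexes (table name dbo.SomeTable is hypothetical)
ALTER INDEX ALL ON dbo.SomeTable REORGANIZE;
ALTER INDEX ALL ON dbo.SomeTable REBUILD WITH (FILLFACTOR = 90);

-- Update Statistics task
EXEC sp_updatestats;

-- Full backup (path is an example)
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_Full.bak'
WITH INIT;

-- History Cleanup task: prune MSDB backup history older than 30 days
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(day, -30, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```

Knowing these commands is useful when you need to run a single task manually instead of waiting for the scheduled plan.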

Capacity Planning:

This article discusses some of the points to consider when planning and managing the capacity
of SQL Server systems. It is not a definitive guide, but it should provide a starting point for the
SQL Server DBA.

Principles of Capacity Planning


Microsoft’s SQL Server Operations guide identifies the following key points of capacity planning:

• Be familiar with your system's use of hardware resources


• Maintain a tangible record of your system's performance over time
• Use this information to plan for future hardware needs and software projects

Although the operations guide was initially written for SQL Server 2000, much of it is still valid for
the 2005 version of the product.
The complexity of capacity planning depends on the complexity of the system you are
implementing. Capacity planning becomes especially important when expanding the system
involves high costs. That said, it does not need to be a complicated process, but it does require
numeric precision and should be fully documented for future reference.
The capacity planning process should tell you how much hardware you need to support a specific
load on your system. This is an iterative process: as the measured figures change, your hardware
configuration has the potential to change too.

CPU
This section looks at planning for your CPU(s) and is fairly straightforward. Using a tool such as
Performance Monitor (which ships with Windows), or a third-party product such as Idera if you
are prepared to pay a little more, monitor your current CPU utilisation
(\Processor(_Total)\% Processor Time). If the average value of this counter is over 50 percent, or
if you have frequent usage spikes where CPU utilisation climbs above 90 percent, you should
consider adding additional or faster processors.
In general, the processors you choose should be able to deliver the speed implied by your other
hardware items. If your system is highly specialized and filled with processor-intensive activities,
you will become aware of that as you observe the system over time.
SSIS (Integration Services) and advanced calculations are examples of such activities. SQL
Server itself is a CPU-intensive application, so CPUs with a large, high-speed cache will help
performance. A rule of thumb for the processor: always get the fastest and newest you can,
because a slow processor can be a bottleneck for the whole system.
If you have a dedicated SQL Server computer, let SQL Server use all of the computer's
processors. If your computer runs other applications in addition to SQL Server, consider
restricting SQL Server from using one or more processors; SQL Server can be resource intensive,
and the other applications could suffer as a result.
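Restricting SQL Server to particular processors can be done with the `affinity mask` server option. A minimal sketch, assuming a shared server where you want SQL Server pinned to CPUs 0 and 1 (the bitmask value 3 is an example for that assumption):

```sql
-- 'affinity mask' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Bitmask 3 = binary 11 = CPUs 0 and 1 only
EXEC sp_configure 'affinity mask', 3;
RECONFIGURE;
```

Leaving the option at its default of 0 lets SQL Server use all available processors, which is the right choice on a dedicated server.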

Memory
Memory is used mainly to optimise data access. SQL Server uses memory to store execution
plans, data pages, and so on. Without enough memory, you will incur more disk I/O when reading
data. If your system does many reads, you might reduce disk I/O significantly by increasing your
memory, because the data will then remain in cache. Insufficient memory, or over-allocation of
memory, can result in paging. Memory plays an important role in SQL Server, and it is a resource
you should carefully monitor.
For systems where reads (OLAP workloads) are the most frequent activity and the highest
priority, the more memory your system has, the greater the performance. Memory can
compensate for disk I/O in these systems, and large amounts of memory can significantly
decrease the number of disks (spindles) you need to achieve high performance.
For systems where writes (OLTP workloads) are the highest priority, memory is still an important
part of the system, but you may benefit more from additional disk spindles and more or faster
controller channels than from extra memory. It is important to monitor your system carefully to
decide which resources are in highest demand.
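Two practical memory controls worth knowing: capping SQL Server's memory on a shared server, and checking how well the cache is serving reads. A sketch, assuming a server where you want to reserve memory for other applications (the 4096 MB cap is an example value):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool at roughly 4 GB, leaving the rest for the OS and
-- other applications
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;

-- Inspect the buffer cache counters; a hit ratio near 100 percent
-- suggests most reads are served from memory rather than disk
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
```

Monitoring these counters over time gives you the tangible performance record that the capacity planning principles above call for.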

I/O
Microsoft SQL Server uses Microsoft Windows operating system input/output (I/O) calls to
perform read and write operations on disk. SQL Server manages when and how disk I/O is
performed, but the Windows operating system performs the underlying I/O operations. The I/O
subsystem includes the system bus, disk controller cards, disks, tape drives, CD-ROM drives, and
many other I/O devices. Disk I/O is frequently the cause of bottlenecks in a system. When
planning your hardware and disk configuration, remember that the number of disks is far more
important than the total storage size. Generally, the more disks you have, the better the
performance. For example, 20GB spread over several disks will perform far better than 20GB on
a single disk.
Database file placement can have a significant impact on I/O on your server. If you have a set of
tables that is used frequently, consider putting them in a separate filegroup on separate physical
drives; on large, heavily used systems this can make a significant difference.
If you identify I/O as a problem but you are unable to add further spindles to a set of disks,
consider putting your non-clustered indexes in a separate filegroup on a separate disk.
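Separating objects onto their own drives is done with filegroups. A minimal sketch, assuming a second physical drive mounted as E: and example object names (FG_Indexes, dbo.Orders, and the index name are all hypothetical):

```sql
-- Add a filegroup backed by a file on a separate physical drive
ALTER DATABASE AdventureWorks ADD FILEGROUP FG_Indexes;

ALTER DATABASE AdventureWorks
ADD FILE
( NAME = AW_Indexes,
  FILENAME = 'E:\SQLData\AW_Indexes.ndf',   -- separate spindle
  SIZE = 500MB,
  FILEGROWTH = 100MB )
TO FILEGROUP FG_Indexes;

-- Place a non-clustered index on the new filegroup so index reads
-- and table reads hit different disks
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON FG_Indexes;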

Growth Considerations
Depending on how quickly you expect your database to grow, you may want to consider the
following rules when purchasing hardware. These are recommended as best practice by
Microsoft, and they are basically common sense:

• If you expect the database to grow suddenly, buy hardware that can be expanded, allowing
you to add capacity as needed.
• If you expect your growth to be minimal, buy only what you need.

It is important to size your database appropriately at the outset. This can avoid significant
performance overhead when the database needs to grow. Ideally your database will be sized
appropriately for the next 6 to 12 months. I’m not an advocate of the auto-grow function of SQL
Server, I feel that the database size is best managed manually, thus minimizing any overhead
caused by this process.
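Sizing up front and managing growth manually can be expressed in T-SQL. A sketch under example assumptions (database name, drive letters, and sizes are illustrative, not prescriptive):

```sql
-- Size the database for the next 6 to 12 months of expected growth,
-- with auto-grow disabled so file size is managed manually
CREATE DATABASE Sales
ON PRIMARY
( NAME = Sales_Data,
  FILENAME = 'D:\SQLData\Sales_Data.mdf',
  SIZE = 20GB,
  FILEGROWTH = 0 )
LOG ON
( NAME = Sales_Log,
  FILENAME = 'E:\SQLLogs\Sales_Log.ldf',
  SIZE = 4GB,
  FILEGROWTH = 0 );

-- When the file nears capacity, grow it deliberately during a
-- maintenance window rather than letting auto-grow interrupt users
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, SIZE = 30GB);
```

Placing data and log files on separate drives, as shown, also follows the I/O guidance earlier in this section.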

THIRD-PARTY TOOLS

TROUBLESHOOTING
